Active learning is a special case of machine learning in which a learning algorithm can interactively query a human user (or some other information source) to label new data points with the desired outputs. The human user must possess knowledge or expertise in the problem domain, including the ability to consult or research authoritative sources when necessary.[1][2][3] In the statistics literature, it is sometimes also called optimal experimental design.[4] The information source is also called a teacher or oracle.
There are situations in which unlabeled data is abundant but manual labeling is expensive. In such a scenario, learning algorithms can actively query the user/teacher for labels. This type of iterative supervised learning is called active learning. Since the learner chooses the examples, the number of examples needed to learn a concept can often be much lower than the number required in normal supervised learning. With this approach, there is a risk that the algorithm is overwhelmed by uninformative examples. Recent developments are dedicated to multi-label active learning,[5] hybrid active learning[6] and active learning in a single-pass (on-line) context,[7] combining concepts from the field of machine learning (e.g. conflict and ignorance) with adaptive, incremental learning policies from the field of online machine learning. Using active learning allows for faster development of a machine learning algorithm in cases where comparative updates would otherwise require a quantum computer or supercomputer.[8]
Large-scale active learning projects may benefit from crowdsourcing frameworks such as Amazon Mechanical Turk that include many humans in the active learning loop.
Let T be the total set of all data under consideration. For example, in a protein engineering problem, T would include all proteins that are known to have a certain interesting activity and all additional proteins that one might want to test for that activity.
During each iteration, i, T is broken up into three subsets:
- TK,i: data points where the label is known.
- TU,i: data points where the label is unknown.
- TC,i: a subset of TU,i that is chosen to be labeled.
Most of the current research in active learning involves the best method to choose the data points for TC,i.
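As a rough illustration of this loop, the sketch below uses pool-based uncertainty sampling, one common way to pick TC,i; the model choice, batch size, and the `ask_oracle` labelling function are assumptions for the example, not part of any specific published method.

```python
# A minimal pool-based active learning loop (illustrative sketch).
# `ask_oracle` stands in for the human teacher/oracle described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_known, y_known, X_unknown, ask_oracle,
                         n_iterations=10, batch_size=5):
    model = LogisticRegression(max_iter=1000)
    for i in range(n_iterations):
        model.fit(X_known, y_known)                    # train on T_K,i
        proba = model.predict_proba(X_unknown)         # score T_U,i
        uncertainty = 1.0 - proba.max(axis=1)          # least-confident scores
        query_idx = np.argsort(uncertainty)[-batch_size:]  # choose T_C,i
        new_labels = np.array([ask_oracle(x) for x in X_unknown[query_idx]])
        # Move the newly labelled points from T_U,i into T_K,i+1
        X_known = np.vstack([X_known, X_unknown[query_idx]])
        y_known = np.concatenate([y_known, new_labels])
        X_unknown = np.delete(X_unknown, query_idx, axis=0)
    return model
```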
Algorithms for determining which data points should be labeled can be organized into a number of different categories, based upon their purpose:[1]
A wide variety of algorithms have been studied that fall into these categories.[1][4] While traditional AL strategies can achieve remarkable performance, it is often challenging to predict in advance which strategy is the most suitable for a particular situation. In recent years, meta-learning algorithms have been gaining in popularity. Some of them have been proposed to tackle the problem of learning AL strategies instead of relying on manually designed strategies. A benchmark comparing meta-learning approaches to active learning against traditional heuristic-based active learning may give an indication of whether 'learning active learning' is at a crossroads.[17]
Some active learning algorithms are built upon support-vector machines (SVMs) and exploit the structure of the SVM to determine which data points to label. Such methods usually calculate the margin, W, of each unlabeled datum in TU,i and treat W as an n-dimensional distance from that datum to the separating hyperplane.
Minimum Marginal Hyperplane methods assume that the data with the smallest W are those that the SVM is most uncertain about and therefore should be placed in TC,i to be labeled. Other similar methods, such as Maximum Marginal Hyperplane, choose data with the largest W. Tradeoff methods choose a mix of the smallest and largest Ws.
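A minimal sketch of these selection rules, assuming a trained scikit-learn SVC whose `decision_function` output serves as the margin W (the function and parameter names are illustrative):

```python
# Selecting query points by SVM margin (illustrative sketch).
# decision_function returns a signed score proportional to the distance
# from the separating hyperplane; its absolute value plays the role of W.
import numpy as np
from sklearn.svm import SVC

def select_by_margin(svm: SVC, X_unlabeled, k=5, strategy="min"):
    W = np.abs(svm.decision_function(X_unlabeled))  # margin of each datum in T_U,i
    order = np.argsort(W)
    if strategy == "min":        # Minimum Marginal Hyperplane: most uncertain
        return order[:k]
    elif strategy == "max":      # Maximum Marginal Hyperplane
        return order[-k:]
    else:                        # tradeoff: mix of smallest and largest W
        half = k // 2
        return np.concatenate([order[:half], order[-(k - half):]])
```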
Source: https://en.wikipedia.org/wiki/Active_learning_(machine_learning)
Revealed preference theory, pioneered by economist Paul Anthony Samuelson in 1938,[1][2] is a method of analyzing choices made by individuals, mostly used for comparing the influence of policies on consumer behavior. Revealed preference models assume that the preferences of consumers can be revealed by their purchasing habits.
Revealed preference theory arose because existing theories of consumer demand were based on a diminishing marginal rate of substitution (MRS). This diminishing MRS relied on the assumption that consumers make consumption decisions to maximise their utility. While utility maximisation was not a controversial assumption, the underlying utility functions could not be measured with great certainty. Revealed preference theory was a means to reconcile demand theory by defining utility functions from observed behaviour.
Revealed preference is therefore a way to infer the preferences of individuals from their observed choices. It contrasts with attempts to measure preferences or utility directly, for example through stated preferences. Since economics is an empirical subject, the underlying problem is that preferences cannot be observed directly.
Let there be two bundles of goods, a and b, available in a budget set $B$. If it is observed that a is chosen over b, then a is considered (directly) revealed preferred to b.
If the budget set $B$ is defined for two goods, $X, Y$, and determined by prices $p, q$ and income $m$, then let bundle a be $(x_1,y_1)\in B$ and bundle b be $(x_2,y_2)\in B$. This situation would typically be represented arithmetically by the inequality $pX+qY\leq m$ and graphically by a budget line in the positive real numbers. Assuming strongly monotonic preferences, only bundles that are graphically located on the budget line, i.e. bundles where $px_1+qy_1=m$ and $px_2+qy_2=m$ are satisfied, need to be considered. If, in this situation, it is observed that $(x_1,y_1)$ is chosen over $(x_2,y_2)$, it is concluded that $(x_1,y_1)$ is (directly) revealed preferred to $(x_2,y_2)$, which can be summarized as the binary relation $(x_1,y_1)\succeq(x_2,y_2)$ or equivalently as $\mathbf{a}\succeq\mathbf{b}$.[3]
The Weak Axiom of Revealed Preference (WARP) is one of the criteria which needs to be satisfied in order to make sure that the consumer is consistent with their preferences. If a bundle of goods a is chosen over another bundle b when both are affordable, then the consumer reveals that they prefer a over b. WARP says that when preferences remain the same, there are no circumstances (budget sets) in which the consumer prefers b over a. By choosing a over b when both bundles are affordable, the consumer reveals that their preferences are such that they will never choose b over a when both are affordable, even as prices vary. Formally:

$$\big(\mathbf{a},\mathbf{b}\in B \,\wedge\, \mathbf{a}\in C(B,\succeq) \,\wedge\, \mathbf{b}\notin C(B,\succeq)\big) \implies \forall B'\,\big(\mathbf{b}\in C(B',\succeq) \implies \mathbf{a}\notin B'\big)$$
where $\mathbf{a}$ and $\mathbf{b}$ are arbitrary bundles and $C(B,\succeq)\subset B$ is the set of bundles chosen in budget set $B$, given preference relation $\succeq$.
In other words, if a is chosen over b in budget set $B$ where both a and b are feasible bundles, but b is chosen when the consumer faces some other budget set $B'$, then a is not a feasible bundle in budget set $B'$.
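As a sketch of how this can be tested on observed price and quantity data (the function below is illustrative, not from the cited sources; it treats bundle $x_i$ as directly revealed preferred to $x_j$ whenever $x_j$ was affordable at the prices where $x_i$ was chosen):

```python
# Checking WARP on observed choice data (illustrative sketch).
# prices[i] and bundles[i] are the price vector and chosen bundle at
# observation i; a violation occurs when each bundle was affordable
# when the other was chosen.
import numpy as np

def violates_warp(prices, bundles):
    prices, bundles = np.asarray(prices, float), np.asarray(bundles, float)
    n = len(bundles)
    for i in range(n):
        for j in range(n):
            if i == j or np.allclose(bundles[i], bundles[j]):
                continue
            b_affordable_at_i = prices[i] @ bundles[j] <= prices[i] @ bundles[i]
            a_affordable_at_j = prices[j] @ bundles[i] <= prices[j] @ bundles[j]
            if b_affordable_at_i and a_affordable_at_j:
                return True   # each bundle revealed preferred to the other
    return False
```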
The strong axiom of revealed preference (SARP) is equivalent to WARP, except that the choices A and B are not allowed to be either directly or indirectly revealed preferable to each other at the same time. Here A is considered indirectly revealed preferred to B if C exists such that A is directly revealed preferred to C, and C is directly revealed preferred to B. In mathematical terminology, this says that transitivity is preserved. Transitivity is useful as it can reveal additional information by comparing two separate bundles from budget constraints.
It is often desirable in economic models to prevent such "loops" from happening, for example in order to model choices with utility functions (which have real-valued outputs and are thus transitive). One way to do so is to impose completeness on the revealed preference relation with regards to the choices at large, i.e. without any price considerations or affordability constraints. This is useful because when evaluating {A,B,C} as standalone options, it is directly obvious which is preferred or indifferent to which other. Using the weak axiom then prevents two choices from being preferred over each other at the same time; thus it would be impossible for "loops" to form.
Another way to solve this is to impose SARP, which ensures transitivity. This is characterised by taking the transitive closure of direct revealed preferences and requiring that it is antisymmetric, i.e. if A is revealed preferred to B (directly or indirectly), then B is not revealed preferred to A (directly or indirectly).
These are two different approaches to solving the issue: completeness is concerned with the input (domain) of the choice functions, while the strong axiom imposes conditions on the output.
The Generalised Axiom of Revealed Preference (GARP) is a generalisation of SARP. It is the final criterion required so that constancy is satisfied, ensuring that consumers' preferences do not change.
This axiom accounts for conditions in which two or more consumption bundles satisfy equal levels of utility, given that the price level remains constant. It covers circumstances in which utility maximisation is achieved by more than one consumption bundle.[4]
A set of data satisfies GARP if $x^i R x^j$ implies not $x^j P^0 x^i$.[5] This establishes that if consumption bundle $x^i$ is revealed preferred to $x^j$, then the expenditure necessary to acquire bundle $x^j$, given that prices remain constant, cannot be more than the expenditure necessary to acquire bundle $x^i$.[6]
To satisfy GARP, a dataset must also not establish a preference cycle. Therefore, when considering the bundles {A,B,C}, the revealed preference relation must be acyclic: if $A\succeq B$ and $B\succeq C$, then $B\nsucceq A$ and $A\succeq C$, thus ruling out "preference cycles" while still holding transitivity.[4]
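The acyclicity condition can be checked mechanically by taking the transitive closure of the direct revealed preference relation; the sketch below is an illustrative implementation of that idea, not code from the cited sources:

```python
# Checking GARP (illustrative sketch). R0[i][j] means bundle i is directly
# revealed preferred to bundle j (j was affordable when i was chosen);
# R is its transitive closure. GARP fails if x_i R x_j while x_j is
# strictly directly revealed preferred to x_i.
import numpy as np

def satisfies_garp(prices, bundles):
    p, x = np.asarray(prices, float), np.asarray(bundles, float)
    n = len(x)
    exp = p @ x.T                      # exp[i, j] = cost of bundle j at prices i
    R = exp.diagonal()[:, None] >= exp # direct revealed preference R0
    for k in range(n):                 # Warshall's algorithm: transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    strict = exp.diagonal()[:, None] > exp   # strict direct preference P0
    return not np.any(R & strict.T)    # no i with x_i R x_j and x_j P0 x_i
```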
As GARP is closely related to SARP, it is easy to demonstrate that SARP implies GARP; however, GARP does not imply SARP. This is because GARP is compatible with multivalued demand functions, whereas SARP is compatible only with single-valued demand functions. As such, GARP permits flat sections within indifference curves, as stated by Hal R. Varian (1982).[5]
Afriat's Theorem, introduced by economist Sydney Afriat in 1967, extends GARP by proving that a finite dataset of observed choices can be explained by a utility function.[7] Specifically, it states that a set of price vectors $p_i$ and quantity vectors $x_i$ (for $i = 1, 2, \ldots, n$) satisfies GARP if and only if there exists a continuous, increasing, and concave utility function $u(x)$ such that each $x_i$ maximizes $u(x)$ under the budget constraint $p_i \cdot x \leq p_i \cdot x_i$.[8]
The theorem provides a practical test: if GARP holds, there exist utility levels $u_i$ and positive weights $\lambda_i$ satisfying the inequalities $u_i - u_j \leq \lambda_j\, p_j \cdot (x_i - x_j)$ for all $i, j$.[7] These Afriat inequalities allow construction of the utility function directly from the data, unlike earlier axioms like SARP, which only prove existence for infinite datasets.[9] For instance, if two bundles both maximize utility at the same budget, Afriat's Theorem ensures a utility function exists even where SARP fails.[8] This result is widely used in econometrics to test rationality and build preferences from empirical data.[10]
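As a sketch of how the utility function is built from these numbers (the standard Afriat–Varian piecewise-linear construction, stated here from general knowledge rather than from the cited sources): given $u_i$ and $\lambda_i > 0$ satisfying the inequalities above, one rationalizing utility function is

$$u(x) = \min_{i}\left\{\, u_i + \lambda_i\, p_i \cdot (x - x_i) \,\right\}.$$

As a minimum of increasing affine functions, this $u$ is continuous, increasing, and concave, and the Afriat inequalities ensure that $u(x_i) = u_i$ at each observed bundle.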
Revealed preference theory has been used in numerous applications, including college rankings in the U.S.[11][12]
Several economists have criticised the theory of revealed preference for different reasons.
Source: https://en.wikipedia.org/wiki/Revealed_preference
In marketing, market segmentation or customer segmentation is the process of dividing a consumer or business market into meaningful sub-groups of current or potential customers (or consumers) known as segments.[1] Its purpose is to identify profitable and growing segments that a company can target with distinct marketing strategies.
In dividing or segmenting markets, researchers typically look for common characteristics such as shared needs, common interests, similar lifestyles, or even similar demographic profiles. The overall aim of segmentation is to identify high-yield segments – that is, those segments that are likely to be the most profitable or that have growth potential – so that these can be selected for special attention (i.e. become target markets). Many different ways to segment a market have been identified. Business-to-business (B2B) sellers might segment the market into different types of businesses or countries, while business-to-consumer (B2C) sellers might segment the market into demographic segments, such as lifestyle, behavior, or socioeconomic status.
Market segmentation assumes that different market segments require different marketing programs – that is, different offers, prices, promotions, distribution, or some combination of marketing variables. Market segmentation is not only designed to identify the most profitable segments but also to develop profiles of key segments to better understand their needs and purchase motivations. Insights from segmentation analysis are subsequently used to support marketing strategy development and planning.
In practice, marketers implement market segmentation using the S-T-P framework,[2] which stands for Segmentation → Targeting → Positioning. That is, partitioning a market into one or more consumer categories, of which some are further selected for targeting, and products or services are positioned in a way that resonates with the selected target market or markets.
Market segmentation is the process of dividing mass markets into groups with similar needs and wants.[3] The rationale for market segmentation is that in order to achieve competitive advantage and superior performance, firms should: "(1) identify segments of industry demand, (2) target specific segments of demand, and (3) develop specific 'marketing mixes' for each targeted market segment."[4] From an economic perspective, segmentation is built on the assumption that heterogeneity in demand allows for demand to be disaggregated into segments with distinct demand functions.[5]
The business historian Richard S. Tedlow identifies four stages in the evolution of market segmentation:[6]
The practice of market segmentation emerged well before marketers thought about it at a theoretical level.[7] Archaeological evidence suggests that Bronze Age traders segmented trade routes according to geographical circuits.[8] Other evidence suggests that the practice of modern market segmentation was developed incrementally from the 16th century onwards. Retailers, operating outside the major metropolitan cities, could not afford to serve one type of clientele exclusively, yet retailers needed to find ways to separate the wealthier clientele from the "riff-raff". One simple technique was to have a window opening out onto the street from which customers could be served. This allowed the sale of goods to the common people, without encouraging them to come inside. Another solution, which came into vogue from the late sixteenth century, was to invite favored customers into a back room of the store, where goods were permanently on display. Yet another technique that emerged around the same time was to hold a showcase of goods in the shopkeeper's private home for the benefit of wealthier clients. Samuel Pepys, for example, writing in 1660, describes being invited to the home of a retailer to view a wooden jack.[9] The eighteenth-century English entrepreneurs Josiah Wedgwood and Matthew Boulton both staged expansive showcases of their wares in their private residences or in rented halls to which only the upper classes were invited, while Wedgwood used a team of itinerant salesmen to sell wares to the masses.[10]
Evidence of early marketing segmentation has also been noted elsewhere in Europe. A study of the German book trade found examples of both product differentiation and market segmentation in the 1820s.[11] From the 1880s, German toy manufacturers were producing models of tin toys for specific geographic markets: London omnibuses and ambulances destined for the British market; French postal delivery vans for Continental Europe; and American locomotives intended for sale in America.[12] Such activities suggest that basic forms of market segmentation have been practiced since the 17th century and possibly earlier.
Contemporary market segmentation emerged in the first decades of the twentieth century as marketers responded to two pressing issues. Demographic and purchasing data were available for groups but rarely for individuals; and secondly, advertising and distribution channels were available for groups, but rarely for single consumers. Between 1902 and 1910, George B. Waldron, working at Mahin's Advertising Agency in the United States, used tax registers, city directories, and census data to show advertisers the proportion of educated vs illiterate consumers and the earning capacity of different occupations, etc., in a very early example of simple market segmentation.[13][14] In 1924 Paul Cherington developed the 'ABCD' household typology, the first socio-demographic segmentation tool.[13][15] By the 1930s, market researchers such as Ernest Dichter recognized that demographics alone were insufficient to explain different marketing behaviors and began exploring the use of lifestyles, attitudes, values, beliefs and culture to segment markets.[16] With access to group-level data only, brand marketers approached the task from a tactical viewpoint. Thus, segmentation was essentially a brand-driven process.
Wendell R. Smith is generally credited with being the first to introduce the concept of market segmentation into the marketing literature, in 1956, with the publication of his article "Product Differentiation and Market Segmentation as Alternative Marketing Strategies".[17] Smith's article makes it clear that he had observed "many examples of segmentation" emerging and to a certain extent saw this as a "natural force" in the market that would "not be denied."[18] As Schwarzkopf points out, Smith was codifying implicit knowledge that had been used in advertising and brand management since at least the 1920s.[19]
Until relatively recently, most segmentation approaches have retained a tactical perspective in that they address immediate short-term decisions, such as describing the current "market served", and are concerned with informing marketing mix decisions. However, with the advent of digital communications and mass data storage, it has become possible for marketers to conceive of segmenting at the level of the individual consumer. Extensive data is now available to support segmentation of very narrow groups or even a single customer, allowing marketers to devise a customized offer with an individual price that can be disseminated via real-time communications.[20] Some scholars have argued that the fragmentation of markets has rendered traditional approaches to market segmentation less useful.[21]
The limitations of conventional segmentation have been well documented in the literature.[22]
Market segmentation has many critics. Despite its limitations, market segmentation remains one of the enduring concepts in marketing and continues to be widely used in practice. One American study, for example, suggested that almost 60 percent of senior executives had used market segmentation in the past two years.[31]
A key consideration for marketers is whether they should segment. Depending on company philosophy, resources, product type, or market characteristics, a business may develop an undifferentiated approach or a differentiated approach. In an undifferentiated approach, the marketer ignores segmentation and develops a product that meets the needs of the largest number of buyers.[32] In a differentiated approach, the firm targets one or more market segments and develops separate offers for each segment.[32]
In consumer marketing, it is difficult to find examples of undifferentiated approaches. Even goods such as salt and sugar, which were once treated as commodities, are now highly differentiated. Consumers can purchase a variety of salt products: cooking salt, table salt, sea salt, rock salt, kosher salt, mineral salt, herbal or vegetable salts, iodized salt, salt substitutes, and many more. Sugar also comes in many different types – cane sugar, beet sugar, raw sugar, white refined sugar, brown sugar, caster sugar, sugar lumps, icing sugar (also known as milled sugar), sugar syrup, invert sugar, and a plethora of sugar substitutes including smart sugar, which is essentially a blend of pure sugar and a sugar substitute. Each of these product types is designed to meet the needs of specific market segments. Invert sugar and sugar syrups, for example, are marketed to food manufacturers where they are used in the production of conserves, chocolate, and baked goods. Sugars marketed to consumers appeal to different usage segments – refined sugar is primarily for use on the table, while caster sugar and icing sugar are primarily designed for use in home-baked goods.
Many factors are likely to affect a company's segmentation strategy:[34]
The process of segmenting the market is deceptively simple. Marketers tend to use the so-called S-T-P process, that is Segmentation → Targeting → Positioning, as a broad framework for simplifying the process.[1] Segmentation comprises identifying the market to be segmented; identification, selection, and application of bases to be used in that segmentation; and development of profiles. Targeting comprises an evaluation of each segment's attractiveness and selection of the segments to be targeted. Positioning comprises the identification of optimal positions and the development of the marketing program.
Perhaps the most important marketing decision a firm makes is the selection of one or more market segments on which to focus. A market segment is a portion of a larger market whose needs differ somewhat from the larger market. Since a market segment has unique needs, a firm that develops a total product focused solely on the needs of that segment will be able to meet the segment's desires better than a firm whose product or service attempts to meet the needs of multiple segments.[36] Current research shows that, in practice, firms apply three variations of the S-T-P framework: ad-hoc segmentation, syndicated segmentation, and feral segmentation.[30]
The market for any given product or service is known as the market potential or the total addressable market (TAM). Given that this is the market to be segmented, the market analyst should begin by identifying the size of the potential market. For existing products and services, estimating the size and value of the market potential is relatively straightforward. However, estimating the market potential can be very challenging when a product or service is new to the market and no historical data on which to base forecasts exists.
A basic approach is to first assess the size of the broad population, then estimate the percentage likely to use the product or service, and finally estimate the revenue potential.
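A worked sketch of that chain-ratio arithmetic, with purely hypothetical numbers:

```python
# Chain-ratio estimate of market potential (hypothetical numbers).
population = 10_000_000        # size of the broad population
pct_likely_users = 0.05        # estimated share likely to use the product
avg_annual_spend = 120.0       # estimated spend per user per year

potential_users = population * pct_likely_users          # 500,000 users
revenue_potential = potential_users * avg_annual_spend   # $60,000,000 per year
print(f"{potential_users:,.0f} users, ${revenue_potential:,.0f} revenue potential")
```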
Another approach is to use a historical analogy.[37]For example, the manufacturer of HDTV might assume that the number of consumers willing to adopt high-definition TV will be similar to the adoption rate for color TV. To support this type of analysis, data for household penetration of TV, Radio, PCs, and other communications technologies are readily available from government statistics departments. Finding useful analogies can be challenging because every market is unique. However, analogous product adoption and growth rates can provide the analyst with benchmark estimates and can be used to cross-validate other methods that might be used to forecast sales or market size.
A more robust technique for estimating the market potential is known as the Bass diffusion model, the equation for which follows:[38]

$$N(t) - N(t-1) = \left[p + q\,\frac{N(t-1)}{m}\right] \times \left[m - N(t-1)\right]$$

Where:
- N(t) is the number of adopters in the current time period, t
- N(t−1) is the number of adopters in the previous time period
- p is the coefficient of innovation
- q is the coefficient of imitation (the social contagion influence)
- m is an estimate of the number of eventual adopters (the market potential)
The major challenge with the Bass model is estimating the parameters p and q. However, the Bass model has been so widely used in empirical studies that the values of p and q for more than 50 consumer and industrial categories have been determined and are widely published in tables.[39] The average value for p is 0.037 and for q is 0.327.
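The following short simulation sketches the discrete Bass model above using those published average coefficients; the market potential m is a hypothetical figure:

```python
# Discrete-time Bass diffusion forecast (illustrative sketch).
def bass_adopters(m, p=0.037, q=0.327, periods=15):
    """Return new adopters per period; m is the market potential."""
    N = 0.0                     # cumulative adopters so far
    new_per_period = []
    for t in range(periods):
        n_t = (p + q * N / m) * (m - N)   # N(t) - N(t-1)
        new_per_period.append(n_t)
        N += n_t
    return new_per_period

# Hypothetical market potential of 1 million eventual adopters
for t, n in enumerate(bass_adopters(1_000_000), start=1):
    print(f"period {t:2d}: {n:10,.0f} new adopters")
```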
A major step in the segmentation process is the selection of a suitable base. In this step, marketers are looking for a means of achieving internal homogeneity (similarity within the segments), and external heterogeneity (differences between segments).[40]In other words, they are searching for a process that minimizes differences between members of a segment and maximizes differences between each segment. In addition, the segmentation approach must yield segments that are meaningful for the specific marketing problem or situation. For example, a person's hair color may be a relevant base for a shampoo manufacturer, but it would not be relevant for a seller of financial services. Selecting the right base requires a good deal of thought and a basic understanding of the market to be segmented.
In reality, marketers can segment the market using any base or variable provided that it is identifiable, substantial, responsive, actionable, and stable.[41]
For example, although dress size is not a standard base for segmenting a market, some fashion houses have successfully segmented the market using women's dress size as a variable.[43] However, the most common bases for segmenting consumer markets include: geographics, demographics, psychographics, and behavior. Marketers normally select a single base for the segmentation analysis, although some bases can be combined into a single segmentation with care. Combining bases is the foundation of an emerging form of segmentation known as 'Hybrid Segmentation' (see § Hybrid segmentation). This approach seeks to deliver a single segmentation that is equally useful across multiple marketing functions such as brand positioning, product and service innovation, as well as eCRM.
The following sections provide a description of the most common forms of consumer market segmentation.
Segmentation according to demography is based on consumer demographic variables such as age, income, family size, socio-economic status, etc.[44] Demographic segmentation assumes that consumers with similar demographic profiles will exhibit similar purchasing patterns, motivations, interests, and lifestyles, and that these characteristics will translate into similar product/brand preferences.[45] In practice, demographic segmentation can potentially employ any variable that is used by the nation's census collectors. Examples of demographic variables and their descriptors include:
In practice, most demographic segmentation utilizes a combination of demographic variables.
The use of multiple segmentation variables normally requires the analysis of databases using sophisticated statistical techniques such as cluster analysis or principal components analysis. These types of analysis require very large sample sizes. However, data collection is expensive for individual firms. For this reason, many companies purchase data from commercial market research firms, many of whom develop proprietary software to interrogate the data.
The labels applied to some of the more popular demographic segments began to enter the popular lexicon in the 1980s.[51][52][53] These include the following:[54][55]
Geographic segmentation divides markets according to geographic criteria. In practice, markets can be segmented as broadly as continents and as narrowly as neighborhoods or postal codes.[56] Typical geographic variables include:
The geo-cluster approach (also called geodemographic segmentation) combines demographic data with geographic data to create richer, more detailed profiles.[57] Geo-cluster approaches are a consumer classification system designed for market segmentation and consumer profiling purposes. They classify residential regions or postcodes based on census and lifestyle characteristics obtained from a wide range of sources. This allows the segmentation of a population into smaller groups defined by individual characteristics such as demographic, socio-economic, or other shared socio-demographic characteristics.
Geographic segmentation may be considered the first step in international marketing, where marketers must decide whether to adapt their existing products and marketing programs to the unique needs of distinct geographic markets.[58] Tourism Marketing Boards often segment international visitors based on their country of origin.
Several proprietary geo-demographic packages are available for commercial use. Geographic segmentation is widely used in direct marketing campaigns to identify areas that are potential candidates for personal selling, letter-box distribution, or direct mail. Geo-cluster segmentation is widely used by Governments and public sector departments such as urban planning, health authorities, police, criminal justice departments, telecommunications, and public utility organizations such as water boards.[59]
Geo-demographic segmentation, or geo-clustering, is a combination of geographic and demographic variables.
Psychographic segmentation, which is sometimes called psychometric or lifestyle segmentation, is measured by studying the activities, interests, and opinions (AIOs) of customers. It considers how people spend their leisure,[60] and which external influences they are most responsive to and influenced by. Psychographics is a very widely used basis for segmentation because it enables marketers to identify tightly defined market segments and better understand consumer motivations for product or brand choice.
While many of these proprietary psychographic segmentation analyses are well-known, the majority of studies based on psychographics are custom-designed. That is, the segments are developed for individual products at a specific time. One common thread among psychographic segmentation studies is that they use quirky names to describe the segments.[61]
Behavioural segmentation divides consumers into groups according to their observed behaviours. Many marketers believe that behavioural variables are superior to demographics and geographics for building market segments,[62] and some analysts have suggested that behavioural segmentation is killing off demographics.[63] Typical behavioural variables and their descriptors include:[64]
Note that these descriptors are merely commonly used examples. Marketers customize the variables and descriptors for both local conditions and for specific applications. For example, in the health industry, planners often segment broad markets according to 'health consciousness' and identify low, moderate, and highly health-conscious segments. This is an applied example of behavioural segmentation, using attitude to a product or service as a key descriptor or variable which has been customized for the specific application.
Purchase or usage occasion segmentation focuses on analyzing occasions when consumers might purchase or consume a product. This approach combines customer-level and occasion-level segmentation models and provides an understanding of individual customers' needs, behaviour, and value under different occasions of usage and time. Unlike traditional segmentation models, this approach assigns more than one segment to each unique customer, depending on the circumstances they are currently under.
Benefit segmentation (sometimes called needs-based segmentation) was developed by Grey Advertising in the late 1960s.[66] The benefits sought by purchasers enable the market to be divided into segments with distinct needs, perceived value, benefits sought, or advantage that accrues from the purchase of a product or service. Marketers using benefit segmentation might develop products with different quality levels, performance, customer service, special features, or any other meaningful benefit, and pitch different products at each of the segments identified. Benefit segmentation is one of the more commonly used approaches to segmentation and is widely used in many consumer markets including motor vehicles, fashion and clothing, furniture, consumer electronics, and holiday travel.[67]
Loker and Purdue, for example, used benefit segmentation to segment the pleasure holiday travel market. The segments identified in this study were the naturalists, pure excitement seekers, and escapists.[68]
Attitudinal segmentation provides insight into the mindset of customers, especially the attitudes and beliefs that drive consumer decision-making and behaviour. An example of attitudinal segmentation comes from the UK's Department of Environment which segmented the British population into six segments, based on attitudes that drive behaviour relating to environmental protection:[69]
One of the difficulties organisations face when implementing segmentation into their business processes is that segmentations developed using a single variable base, e.g. attitudes, are useful only for specific business functions. As an example, segmentations driven by functional needs (e.g. “I want home appliances that are very quiet”) can provide clear direction for product development, but tell little about how to position brands, or who to target on the customer database and with what tonality of messaging.
Hybrid segmentation is a family of approaches that specifically addresses this issue by combining two or more variable bases into a single segmentation. This emergence has been driven by three factors. First, the development of more powerful AI and machine learning algorithms to help attribute segmentations to customer databases; second, the rapid increase in the breadth and depth of data that is available to commercial organisations; third, the increasing prevalence of customer databases amongst companies (which generates the commercial demand for segmentation to be used for different purposes).
A successful example of hybrid segmentation came from the travel company TUI, which in 2018 developed a hybrid segmentation using a combination of geo-demographics, high-level category attitudes, and more specific holiday-related needs.[70]Before the onset of Covid-19 travel restrictions, they credited this segmentation with having generated an incremental £50 million of revenue in the UK market alone in just over two years.[71]
In addition to geographics, demographics, psychographics, and behavioural bases, marketers occasionally turn to other means of segmenting the market or developing segment profiles.
A generation is defined as "a cohort of people born within a similar period (15 years at the upper end) who share a comparable age and life stage and who were shaped by a particular period (events, trends, and developments)."[72]Generational segmentation refers to the process of dividing and analyzing a population into cohorts based on their birth date. Generational segmentation assumes that people's values and attitudes are shaped by the key events that occurred during their lives and that these attitudes translate into product and brand preferences.
Demographers, studying population change, disagree about precise dates for each generation.[73] Dating is normally achieved by identifying population peaks or troughs, which can occur at different times in each country. For example, in Australia the post-war population boom peaked in 1960,[74] while the peak occurred somewhat later in the US and Europe,[75] with most estimates converging on 1964. Accordingly, Australian Boomers are normally defined as those born between 1945 and 1960, while American and European Boomers are normally defined as those born between 1946 and 1964. Thus, the generational segments and their dates discussed here must be taken as approximations only.
The primary generational segments identified by marketers are:[76]
Cultural segmentation is used to classify markets according to their cultural origin. Culture is a major dimension of consumer behaviour and can be used to enhance customer insight and as a component of predictive models. Cultural segmentation enables appropriate communications to be crafted for particular cultural communities. Cultural segmentation can be applied to existing customer data to measure market penetration in key cultural segments by product, brand, and channel, as well as traditional measures of recency, frequency, and monetary value. These benchmarks form an important evidence base to guide strategic direction and tactical campaign activity, allowing engagement trends to be monitored over time.[78]
Cultural segmentation can be combined with other bases, especially geographics so that segments are mapped according to state, region, suburb, and neighborhood. This provides a geographical market view of population proportions and may be of benefit in selecting appropriately located premises, determining territory boundaries, and local marketing activities.
Census data is a valuable source of cultural data but cannot meaningfully be applied to individuals. Name analysis (onomastics) is the most reliable and efficient means of describing the cultural origin of individuals. The accuracy of using name analysis as a surrogate for cultural background in Australia is between 80 and 85%, after allowing for female name changes due to marriage, social or political reasons, or colonial influence. The extent of name data coverage means a user will code a minimum of 99% of individuals with their most likely ancestral origin.
Online market segmentation is similar to the traditional approaches in that the segments should be identifiable, substantial, accessible, stable, differentiable, and actionable.[79] Customer data stored in online data management systems such as a CRM or DMP enables the analysis and segmentation of consumers across a diverse set of attributes.[80] Forsyth et al., in an article in 'Internet research', grouped currently active online consumers into six groups: Simplifiers, Surfers, Bargainers, Connectors, Routiners, and Sportsters. The segments differ in regard to four customer behaviours, namely:[81]
For example, Simplifiers make up over 50% of all online transactions. Their main characteristic is that they need easy (one-click) access to information and products, as well as easy and quickly available service regarding products. Amazon is an example of a company that created an online environment for Simplifiers. They also 'dislike unsolicited e-mail, uninviting chat rooms, pop-up windows intended to encourage impulse buys, and other features that complicate their on- and off-line experience'. Surfers like to spend a lot of time online, so companies must have a variety of products to offer and provide constant updates; Bargainers are looking for the best price; Connectors like to relate to others; Routiners want content; and Sportsters like sport and entertainment sites.
Another major decision in developing the segmentation strategy is the selection of market segments that will become the focus of special attention (known as target markets). The marketer faces important decisions:
When a marketer enters more than one market, the segments are often labeled the primary target market and the secondary target market. The primary market is the target market selected as the main focus of marketing activities. The secondary target market is likely to be a segment that is not as large as the primary market, but has growth potential. Alternatively, the secondary target group might consist of a small number of purchasers that account for a relatively high proportion of sales volume, perhaps due to purchase value or purchase frequency.
In terms of evaluating markets, three core considerations are essential:[82]
There are no formulas for evaluating the attractiveness of market segments and a good deal of judgment must be exercised.[83]There are approaches to assist in evaluating market segments for overall attractiveness. The following lists a series of questions to evaluate target segments.
When the segments have been determined and separate offers developed for each of the core segments, the marketer's next task is to design a marketing program (also known as the marketing mix) that will resonate with the target market or markets. Developing the marketing program requires a deep knowledge of key market segments' purchasing habits, their preferred retail outlet, their media habits, and their price sensitivity. The marketing program for each brand or product should be based on the understanding of the target market (or target markets) revealed in the market profile.
Positioning is the final step in the S-T-P planning approach; Segmentation → Targeting → Positioning. It is a core framework for developing marketing plans and setting objectives. Positioning refers to decisions about how to present the offer in a way that resonates with the target market. During the research and analysis that forms the central part of segmentation and targeting, the marketer will gain insights into what motivates consumers to purchase a product or brand. These insights will form part of the positioning strategy.
According to advertising guru David Ogilvy, "Positioning is the act of designing the company’s offering and image to occupy a distinctive place in the minds of the target market. The goal is to locate the brand in the minds of consumers to maximize the potential benefit to the firm. A good brand positioning helps guide marketing strategy by clarifying the brand’s essence, what goals it helps the consumer achieve, and how it does so in a unique way."[84]
The technique known as perceptual mapping is often used to understand consumers' mental representations of brands within a given category. Traditionally two variables (often, but not necessarily, price and quality) are used to construct the map. A sample of people in the target market are asked to explain where they would place various brands in terms of the selected variables. Results are averaged across all respondents, and results are plotted on a graph, as illustrated in the figure. The final map indicates how the average member of the population views the brand that makes up a category and how each of the brands relates to other brands within the same category. While perceptual maps with two dimensions are common, multi-dimensional maps are also used.
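A minimal sketch of how such a map is assembled: average each brand's ratings on the two chosen variables across respondents and plot the means. The brands and ratings below are hypothetical:

```python
# Perceptual map from respondent ratings (hypothetical data).
import numpy as np
import matplotlib.pyplot as plt

# ratings[brand] = list of (price, quality) ratings on a 1-9 scale
ratings = {
    "Brand A": [(7, 8), (6, 7), (8, 8)],
    "Brand B": [(3, 4), (2, 3), (4, 4)],
    "Brand C": [(5, 7), (6, 6), (5, 8)],
}

for brand, scores in ratings.items():
    mean_price, mean_quality = np.mean(scores, axis=0)  # average across respondents
    plt.scatter(mean_price, mean_quality)
    plt.annotate(brand, (mean_price, mean_quality))
plt.xlabel("Perceived price")
plt.ylabel("Perceived quality")
plt.title("Perceptual map (hypothetical)")
plt.show()
```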
There are different approaches to positioning:[85]
Segmenting business markets is more straightforward than segmenting consumer markets. Businesses may be segmented according to industry, business size, business location, turnover, number of employees, company technology, purchasing approach, or any other relevant variables.[86] The most widely used segmentation bases in business-to-business markets are geographics and firmographics.[87]
The most widely used bases for segmenting business markets are:
The basic approach to retention-based segmentation is that a company tags each of its active customers on four axes:
This analysis of customer lifecycles is usually included in the growth plan of a business to determine which tactics to implement to retain or let go of customers.[91] Tactics commonly used range from providing special customer discounts to sending customers communications that reinforce the value proposition of the given service.
The choice of an appropriate statistical method for the segmentation depends on numerous factors, which may include the broad approach (a-priori or post-hoc), the availability of data, time constraints, the marketer's skill level, and resources.[92]
A priori research occurs when "a theoretical framework is developed before the research is conducted".[93] In other words, the marketer has an idea about whether to segment the market geographically, demographically, psychographically or behaviourally before undertaking any research. For example, a marketer might want to learn more about the motivations and demographics of light and moderate users to understand what tactics could be used to increase usage rates. In this case, the target variable is known – the marketer has already segmented using a behavioural variable – user status. The next step would be to collect and analyze attitudinal data for light and moderate users. The typical analysis includes simple cross-tabulations, frequency distributions, and occasionally logistic regression or one of several proprietary methods.[94]
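A sketch of the kind of cross-tabulation described, using a hypothetical survey table:

```python
# Cross-tabulating usage status against a demographic variable
# (hypothetical survey data; a typical first step in a-priori analysis).
import pandas as pd

survey = pd.DataFrame({
    "user_status": ["light", "moderate", "light", "moderate", "light", "moderate"],
    "age_group":   ["18-34", "35-54",    "35-54", "18-34",    "18-34", "35-54"],
})
crosstab = pd.crosstab(survey["age_group"], survey["user_status"], normalize="index")
print(crosstab)   # row percentages: usage mix within each age group
```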
The main disadvantage of a-priori segmentation is that it does not explore other opportunities to identify market segments that could be more meaningful.
In contrast, post-hoc segmentation makes no assumptions about the optimal theoretical framework. Instead, the analyst's role is to determine the segments that are the most meaningful for a given marketing problem or situation. In this approach, the empirical data drives the segmentation selection. Analysts typically employ some type of clustering analysis or structural equation modeling to identify segments within the data. Post-hoc segmentation relies on access to rich datasets, usually with a very large number of cases, and uses sophisticated algorithms to identify segments.[95]
The figure alongside illustrates how segments might be formed using clustering; however, note that this diagram only uses two variables, while in practice clustering employs a large number of variables.[96]
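A minimal post-hoc sketch along those lines, using k-means on hypothetical attitudinal ratings (real studies would involve far more cases and variables, and often other algorithms):

```python
# Post-hoc segmentation via k-means clustering (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical respondents x attitudinal ratings (200 cases, 6 items)
X = rng.normal(size=(200, 6))

X_std = StandardScaler().fit_transform(X)      # put items on a common scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_std)
segments = kmeans.labels_                      # segment membership per respondent
print(np.bincount(segments))                   # segment sizes
```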
Marketers often engage commercial research firms or consultancies to carry out segmentation analysis, especially if they lack the statistical skills to undertake the analysis. Some segmentation, especially post-hoc analysis, relies on sophisticated statistical analysis.
Common statistical approaches and techniques used in segmentation analysis include:
Marketers use a variety of data sources for segmentation studies and market profiling. Typical sources of information include:[108][109]
Source: https://en.wikipedia.org/wiki/Market_segmentation
Psychology is the scientific study of mind and behavior.[1][2] Its subject matter includes the behavior of humans and nonhumans, both conscious and unconscious phenomena, and mental processes such as thoughts, feelings, and motives. Psychology is an academic discipline of immense scope, crossing the boundaries between the natural and social sciences. Biological psychologists seek an understanding of the emergent properties of brains, linking the discipline to neuroscience. As social scientists, psychologists aim to understand the behavior of individuals and groups.[3][4]
A professional practitioner or researcher involved in the discipline is called a psychologist. Some psychologists can also be classified as behavioral or cognitive scientists. Some psychologists attempt to understand the role of mental functions in individual and social behavior. Others explore the physiological and neurobiological processes that underlie cognitive functions and behaviors.
Psychologists are involved in research on perception, cognition, attention, emotion, intelligence, subjective experiences, motivation, brain functioning, and personality. Psychologists' interests extend to interpersonal relationships, psychological resilience, family resilience, and other areas within social psychology. They also consider the unconscious mind.[5] Research psychologists employ empirical methods to infer causal and correlational relationships between psychosocial variables. Some, but not all, clinical and counseling psychologists rely on symbolic interpretation.
While psychological knowledge is often applied to the assessment and treatment of mental health problems, it is also directed towards understanding and solving problems in several spheres of human activity. By many accounts, psychology ultimately aims to benefit society.[6][7][8] Many psychologists are involved in some kind of therapeutic role, practicing psychotherapy in clinical, counseling, or school settings. Other psychologists conduct scientific research on a wide range of topics related to mental processes and behavior. Typically the latter group of psychologists work in academic settings (e.g., universities, medical schools, or hospitals). Another group of psychologists is employed in industrial and organizational settings.[9] Yet others are involved in work on human development, aging, sports, health, forensic science, education, and the media.
The word psychology derives from the Greek word psyche, for spirit or soul. The latter part of the word psychology derives from -λογία (-logia), which means "study" or "research".[10] The word psychology was first used in the Renaissance.[11] In its Latin form psychiologia, it was first employed by the Croatian humanist and Latinist Marko Marulić in his book Psichiologia de ratione animae humanae (Psychology, on the Nature of the Human Soul) in the decade 1510–1520.[11][12] The earliest known reference to the word psychology in English was by Steven Blankaart in 1694 in The Physical Dictionary. The dictionary refers to "Anatomy, which treats the Body, and Psychology, which treats of the Soul."[13]
Ψ (psi), the first letter of the Greek word psyche from which the term psychology is derived, is commonly associated with the field of psychology.
In 1890, William James defined psychology as "the science of mental life, both of its phenomena and their conditions."[14] This definition enjoyed widespread currency for decades. However, this meaning was contested, notably by John B. Watson, who in 1913 asserted the methodological behaviorist view of psychology as a purely objective experimental branch of natural science, the theoretical goal of which "is the prediction and control of behavior."[15] Since James defined "psychology", the term has more strongly implicated scientific experimentation.[16][15] Folk psychology is the understanding of the mental states and behaviors of people held by ordinary people, as contrasted with psychology professionals' understanding.[17]
The ancient civilizations of Egypt, Greece, China, India, and Persia all engaged in the philosophical study of psychology. In Ancient Egypt the Ebers Papyrus mentioned depression and thought disorders.[18] Historians note that Greek philosophers, including Thales, Plato, and Aristotle (especially in his De Anima treatise),[19] addressed the workings of the mind.[20] As early as the 4th century BC, the Greek physician Hippocrates theorized that mental disorders had physical rather than supernatural causes.[21] In 387 BCE, Plato suggested that the brain is where mental processes take place, and in 335 BCE Aristotle suggested that it was the heart.[22]
In China, the foundations of psychological thought emerged from the philosophical works of ancient thinkers like Laozi and Confucius, as well as the teachings of Buddhism.[23] This body of knowledge drew insights from introspection, observation, and techniques for focused thinking and behavior. It viewed the universe as comprising physical and mental realms, along with the interplay between the two.[24] Chinese philosophy also emphasized purifying the mind in order to increase virtue and power. An ancient text known as The Yellow Emperor's Classic of Internal Medicine identifies the brain as the nexus of wisdom and sensation, includes theories of personality based on yin–yang balance, and analyzes mental disorder in terms of physiological and social disequilibria. Chinese scholarship that focused on the brain advanced during the Qing dynasty with the work of Western-educated Fang Yizhi (1611–1671), Liu Zhi (1660–1730), and Wang Qingren (1768–1831). Wang Qingren emphasized the importance of the brain as the center of the nervous system, linked mental disorder with brain diseases, investigated the causes of dreams and insomnia, and advanced a theory of hemispheric lateralization in brain function.[25]
Influenced by Hinduism, Indian philosophy explored distinctions in types of awareness. A central idea of the Upanishads and other Vedic texts that formed the foundations of Hinduism was the distinction between a person's transient mundane self and their eternal, unchanging soul. Divergent Hindu doctrines and Buddhism have challenged this hierarchy of selves, but have all emphasized the importance of reaching higher awareness. Yoga encompasses a range of techniques used in pursuit of this goal. Theosophy, a religion established by Russian-American philosopher Helena Blavatsky, drew inspiration from these doctrines during her time in British India.[26][27]
Psychology was of interest to Enlightenment thinkers in Europe. In Germany, Gottfried Wilhelm Leibniz (1646–1716) applied his principles of calculus to the mind, arguing that mental activity took place on an indivisible continuum. He suggested that the difference between conscious and unconscious awareness is only a matter of degree. Christian Wolff identified psychology as its own science, writing Psychologia Empirica in 1732 and Psychologia Rationalis in 1734. Immanuel Kant advanced the idea of anthropology as a discipline, with psychology an important subdivision. Kant, however, explicitly rejected the idea of an experimental psychology, writing that "the empirical doctrine of the soul can also never approach chemistry even as a systematic art of analysis or experimental doctrine, for in it the manifold of inner observation can be separated only by mere division in thought, and cannot then be held separate and recombined at will (but still less does another thinking subject suffer himself to be experimented upon to suit our purpose), and even observation by itself already changes and displaces the state of the observed object."
In 1783, Ferdinand Ueberwasser (1752–1812) designated himself Professor of Empirical Psychology and Logic and gave lectures on scientific psychology, though these developments were soon overshadowed by the Napoleonic Wars.[28] At the end of the Napoleonic era, Prussian authorities discontinued the Old University of Münster.[28] Having consulted the philosophers Hegel and Herbart, however, in 1825 the Prussian state established psychology as a mandatory discipline in its rapidly expanding and highly influential educational system. However, this discipline did not yet embrace experimentation.[29] In England, early psychology involved phrenology and the response to social problems including alcoholism, violence, and the country's crowded "lunatic" asylums.[30]
Philosopher John Stuart Mill believed that the human mind was open to scientific investigation, even if the science is in some ways inexact.[31] Mill proposed a "mental chemistry" in which elementary thoughts could combine into ideas of greater complexity.[31] Gustav Fechner began conducting psychophysics research in Leipzig in the 1830s. He articulated the principle that human perception of a stimulus varies logarithmically according to its intensity.[32]: 61 The principle became known as the Weber–Fechner law. Fechner's 1860 Elements of Psychophysics challenged Kant's negative view with regard to conducting quantitative research on the mind.[33][29] Fechner's achievement was to show that "mental processes could not only be given numerical magnitudes, but also that these could be measured by experimental methods."[29] In Heidelberg, Hermann von Helmholtz conducted parallel research on sensory perception, and trained physiologist Wilhelm Wundt. Wundt, in turn, came to Leipzig University, where he established the psychological laboratory that brought experimental psychology to the world. Wundt focused on breaking down mental processes into their most basic components, motivated in part by an analogy to recent advances in chemistry and its successful investigation of the elements and structure of materials.[34] Paul Flechsig and Emil Kraepelin soon created another influential laboratory at Leipzig, a psychology-related lab that focused more on experimental psychiatry.[29]
James McKeen Cattell, a professor of psychology at the University of Pennsylvania and Columbia University and the co-founder of Psychological Review, was the first professor of psychology in the United States.[35]
The German psychologist Hermann Ebbinghaus, a researcher at the University of Berlin, was a 19th-century contributor to the field. He pioneered the experimental study of memory and developed quantitative models of learning and forgetting.[36] In the early 20th century, Wolfgang Köhler, Max Wertheimer, and Kurt Koffka co-founded the school of Gestalt psychology (not to be confused with the Gestalt therapy of Fritz Perls). The approach of Gestalt psychology is based upon the idea that individuals experience things as unified wholes. Rather than reducing thoughts and behavior into smaller component elements, as in structuralism, the Gestaltists maintained that the whole of experience is important, "and is something else than the sum of its parts, because summing is a meaningless procedure, whereas the whole-part relationship is meaningful."[37]
Psychologists in Germany, Denmark, Austria, England, and the United States soon followed Wundt in setting up laboratories.[38]G. Stanley Hall, an American who studied with Wundt, founded a psychology lab that became internationally influential. The lab was located atJohns Hopkins University. Hall, in turn, trainedYujiro Motora, who brought experimental psychology, emphasizing psychophysics, to theImperial University of Tokyo.[39]Wundt's assistant,Hugo Münsterberg, taught psychology at Harvard to students such asNarendra Nath Sen Gupta—who, in 1905, founded a psychology department and laboratory at theUniversity of Calcutta.[26]Wundt's studentsWalter Dill Scott,Lightner Witmer, andJames McKeen Cattellworked on developing tests of mental ability. Cattell, who also studied witheugenicistFrancis Galton, went on to found thePsychological Corporation. Witmer focused on the mental testing of children; Scott, on employee selection.[32]: 60
Another student of Wundt, the Englishman Edward Titchener, created the psychology program at Cornell University and advanced "structuralist" psychology. The idea behind structuralism was to analyze and classify different aspects of the mind, primarily through the method of introspection.[40] William James, John Dewey, and Harvey Carr advanced the idea of functionalism, an expansive approach to psychology that underlined the Darwinian idea of a behavior's usefulness to the individual. In 1890, James wrote an influential book, The Principles of Psychology, which expanded on structuralism and memorably described the human "stream of consciousness." James's ideas interested many American students in the emerging discipline.[40][14][32]: 178–82 Dewey integrated psychology with societal concerns, most notably by promoting progressive education, inculcating moral values in children, and assimilating immigrants.[32]: 196–200
A different strain of experimentalism, with a greater connection to physiology, emerged in South America, under the leadership of Horacio G. Piñero at theUniversity of Buenos Aires.[41]In Russia, too, researchers placed greater emphasis on the biological basis for psychology, beginning withIvan Sechenov's 1873 essay, "Who Is to Develop Psychology and How?" Sechenov advanced the idea of brainreflexesand aggressively promoted adeterministicview of human behavior.[42]The Russian-SovietphysiologistIvan Pavlovdiscovered in dogs a learning process that was later termed "classical conditioning" and applied the process to human beings.[43]
One of the earliest psychology societies was La Société de Psychologie Physiologique in France, which lasted from 1885 to 1893. The first meeting of the International Congress of Psychology, sponsored by the International Union of Psychological Science, took place in Paris in August 1889, amidst the World's Fair celebrating the centennial of the French Revolution. William James was one of three Americans among the 400 attendees. The American Psychological Association (APA) was founded soon after, in 1892. The International Congress continued to be held at different locations in Europe, with wide international participation. The Sixth Congress, held in Geneva in 1909, included presentations in Russian, Chinese, and Japanese, as well as Esperanto. After a hiatus for World War I, the Seventh Congress met in Oxford, with substantially greater participation from the war-victorious Anglo-Americans. In 1929, the Congress took place at Yale University in New Haven, Connecticut, attended by hundreds of members of the APA.[38] Tokyo Imperial University led the way in bringing new psychology to the East, and new ideas about psychology diffused from Japan into China.[25][39]
American psychology gained status upon the U.S.'s entry into World War I. A standing committee headed byRobert Yerkesadministered mental tests ("Army Alpha" and "Army Beta") to almost 1.8 million soldiers.[44]Subsequently, theRockefeller family, via theSocial Science Research Council, began to provide funding for behavioral research.[45][46]Rockefeller charities funded the National Committee on Mental Hygiene, which disseminated the concept of mental illness and lobbied for applying ideas from psychology to child rearing.[44][47]Through the Bureau of Social Hygiene and later funding ofAlfred Kinsey, Rockefeller foundations helped establish research on sexuality in the U.S.[48]Under the influence of the Carnegie-fundedEugenics Record Office, the Draper-fundedPioneer Fund, and other institutions, theeugenics movementalso influenced American psychology. In the 1910s and 1920s, eugenics became a standard topic in psychology classes.[49]In contrast to the US, in the UK psychology was met with antagonism by the scientific and medical establishments, and up until 1939, there were only six psychology chairs in universities in England.[50]
During World War II and the Cold War, the U.S. military and intelligence agencies established themselves as leading funders of psychology by way of the armed forces and the new Office of Strategic Services intelligence agency. University of Michigan psychologist Dorwin Cartwright reported that university researchers began large-scale propaganda research in 1939–1941. He observed that "the last few months of the war saw a social psychologist become chiefly responsible for determining the week-by-week propaganda policy for the United States Government." Cartwright also wrote that psychologists had significant roles in managing the domestic economy.[51] The Army rolled out its new General Classification Test to assess the ability of millions of soldiers. The Army also engaged in large-scale psychological research of troop morale and mental health.[52] In the 1950s, the Rockefeller Foundation and Ford Foundation collaborated with the Central Intelligence Agency (CIA) to fund research on psychological warfare.[53] In 1965, public controversy called attention to the Army's Project Camelot, the "Manhattan Project" of social science, an effort which enlisted psychologists and anthropologists to analyze the plans and policies of foreign countries for strategic purposes.[54][55]
In Germany after World War I, psychology held institutional power through the military, a role that was subsequently expanded along with the rest of the military under Nazi Germany.[29] Under the direction of Hermann Göring's cousin Matthias Göring, the Berlin Psychoanalytic Institute was renamed the Göring Institute. Freudian psychoanalysts were expelled and persecuted under the anti-Jewish policies of the Nazi Party, and all psychologists had to distance themselves from Freud and Adler, founders of psychoanalysis who were also Jewish.[56] The Göring Institute was well-financed throughout the war with a mandate to create a "New German Psychotherapy." This psychotherapy aimed to align suitable Germans with the overall goals of the Reich. As described by one physician, "Despite the importance of analysis, spiritual guidance and the active cooperation of the patient represent the best way to overcome individual mental problems and to subordinate them to the requirements of the Volk and the Gemeinschaft." Psychologists were to provide Seelenführung [lit., soul guidance], the leadership of the mind, to integrate people into the new vision of a German community.[57] Harald Schultz-Hencke melded psychology with the Nazi theory of biology and racial origins, criticizing psychoanalysis as a study of the weak and deformed.[58] Johannes Heinrich Schultz, a German psychologist recognized for developing the technique of autogenic training, prominently advocated sterilization and euthanasia of men considered genetically undesirable, and devised techniques for facilitating this process.[59]
After the war, new institutions were created although some psychologists, because of their Nazi affiliation, were discredited.Alexander Mitscherlichfounded a prominent applied psychoanalysis journal calledPsyche. With funding from the Rockefeller Foundation, Mitscherlich established the first clinical psychosomatic medicine division at Heidelberg University. In 1970, psychology was integrated into the required studies of medical students.[60]
After theRussian Revolution, theBolshevikspromoted psychology as a way to engineer the "New Man" of socialism. Consequently, university psychology departments trained large numbers of students in psychology. At the completion of training, positions were made available for those students at schools, workplaces, cultural institutions, and in the military. The Russian state emphasizedpedologyand the study of child development.Lev Vygotskybecame prominent in the field of child development.[42]The Bolsheviks also promotedfree loveand embraced the doctrine of psychoanalysis as an antidote to sexual repression.[61]:84–6[62]Although pedology and intelligence testing fell out of favor in 1936, psychology maintained its privileged position as an instrument of the Soviet Union.[42]Stalinist purgestook a heavy toll and instilled a climate of fear in the profession, as elsewhere in Soviet society.[61]:22Following World War II, Jewish psychologists past and present, includingLev Vygotsky,A.R. Luria, and Aron Zalkind, were denounced; Ivan Pavlov (posthumously) and Stalin himself were celebrated as heroes of Soviet psychology.[61]: 25–6, 48–9Soviet academics experienced a degree of liberalization during theKhrushchev Thaw. The topics of cybernetics, linguistics, and genetics became acceptable again. The new field ofengineering psychologyemerged. The field involved the study of the mental aspects of complex jobs (such as pilot and cosmonaut). Interdisciplinary studies became popular and scholars such asGeorgy Shchedrovitskydeveloped systems theory approaches to human behavior.[61]:27–33
Twentieth-century Chinese psychology originally modeled itself on U.S. psychology, with translations from American authors like William James, the establishment of university psychology departments and journals, and the establishment of groups including the Chinese Association of Psychological Testing (1930) and the Chinese Psychological Society (1937). Chinese psychologists were encouraged to focus on education and language learning, drawn to the idea that education would enable modernization. John Dewey, who lectured to Chinese audiences between 1919 and 1921, had a significant influence on psychology in China. Chancellor T'sai Yuan-p'ei introduced him at Peking University as a greater thinker than Confucius. Kuo Zing-yang, who received a PhD at the University of California, Berkeley, became President of Zhejiang University and popularized behaviorism.[63]: 5–9 After the Chinese Communist Party gained control of the country, the Stalinist Soviet Union became the major influence, with Marxism–Leninism the leading social doctrine and Pavlovian conditioning the approved means of behavior change. Chinese psychologists elaborated on Lenin's model of a "reflective" consciousness, envisioning an "active consciousness" (pinyin: tzu-chueh neng-tung-li) able to transcend material conditions through hard work and ideological struggle. They developed a concept of "recognition" (pinyin: jen-shih) which referred to the interface between individual perceptions and the socially accepted worldview; failure to correspond with party doctrine was "incorrect recognition."[63]: 9–17 Psychology education was centralized under the Chinese Academy of Sciences, supervised by the State Council. In 1951, the academy created a Psychology Research Office, which in 1956 became the Institute of Psychology. Because most leading psychologists were educated in the United States, the first concern of the academy was the re-education of these psychologists in the Soviet doctrines. Child psychology and pedagogy for the purpose of a nationally cohesive education remained a central goal of the discipline.[63]: 18–24
Women in the early 1900s started to make key findings within the world of psychology. In 1923, Anna Freud,[64] the daughter of Sigmund Freud, built on her father's work, using different defense mechanisms (denial, repression, and suppression) to psychoanalyze children. She believed that once a child reached the latency period, child analysis could be used as a mode of therapy. She stated that it is important to focus on the child's environment, support their development, and prevent neurosis. She believed a child should be recognized as a person in their own right, with each session catered to the child's specific needs, and she encouraged children to draw, move freely, and express themselves in any way. This helped build a strong therapeutic alliance with child patients, allowing psychologists to observe their normal behavior. She continued her research on the effects on children of family separation, children from socio-economically disadvantaged backgrounds, and all stages of child development from infancy to adolescence.[65]
Functional periodicity, the belief that women are mentally and physically impaired during menstruation, impacted women's rights because employers were less likely to hire women, believing they would be incapable of working for one week a month. Leta Stetter Hollingworth set out to disprove this hypothesis, along with Edward L. Thorndike's theory that women have lesser psychological and physical traits than men and are simply mediocre. Hollingworth worked to show that observed differences stemmed not from male genetic superiority but from culture. She also examined the claim of women's impairment during menstruation: for three months she recorded the performance of both women and men on cognitive, perceptual, and motor tasks, and found no evidence of decreased performance tied to a woman's menstrual cycle.[66] She further challenged the beliefs that intelligence is inherited and that women are intellectually inferior to men, arguing that women do not reach positions of power because of the societal norms and roles they are assigned. As she states in her article "Variability as related to sex differences in achievement: A Critique,"[67] the largest problem women face is a social order built on the assumption that women have fewer interests and abilities than men. To further prove her point, she completed another study with infants, who had not yet been influenced by social norms such as adult men receiving more opportunities than women, and found no difference between male and female infants besides size. With this research, Hollingworth showed that there is no difference between the physiological and psychological traits of men and women, and that women are not impaired during menstruation.[68]
The first half of the 1900s was filled with new theories and marked a turning point for women's recognition within the field of psychology. In addition to the contributions made by Leta Stetter Hollingworth and Anna Freud, Mary Whiton Calkins invented the paired-associates technique of studying memory and developed self-psychology.[69] Karen Horney developed the concept of "womb envy" and neurotic needs.[70] Psychoanalyst Melanie Klein influenced developmental psychology with her research on play therapy.[71] These discoveries and contributions were made amid sexism and discrimination, with little recognition for the women's work.
Women in the second half of the 20th century continued to do research that had large-scale impacts on the field of psychology. Mary Ainsworth's work centered on attachment theory. Building on the work of fellow psychologist John Bowlby, Ainsworth spent years doing fieldwork to understand the development of mother-infant relationships. From this field research she developed the Strange Situation Procedure, a laboratory procedure meant to study attachment style by separating and reuniting a child and mother several times under different circumstances. These studies are also where she worked out the order of attachment styles, a landmark for developmental psychology.[72][73] Because of her work, Ainsworth became one of the most cited psychologists of all time.[74] Mamie Phipps Clark was another woman in psychology who changed the field with her research. She was one of the first African-Americans to receive a doctoral degree in psychology from Columbia University, along with her husband, Kenneth Clark. Her master's thesis, "The Development of Consciousness in Negro Pre-School Children," argued that black children's self-esteem was negatively impacted by racial discrimination. She and her husband conducted research building off her thesis throughout the 1940s. These tests, called the doll tests, asked young children to choose between identical dolls whose only difference was race; the majority of the children preferred the white dolls and attributed positive traits to them. Repeated over and over again, these tests helped to establish the negative effects of racial discrimination and segregation on black children's self-image and development. In 1954, this research helped decide the landmark Brown v. Board of Education decision, leading to the end of legal segregation across the nation. Clark went on to be an influential figure in psychology, her work continuing to focus on minority youth.[75]
As the field of psychology developed throughout the latter half of the 20th century, women in the field advocated for their voices to be heard and their perspectives to be valued. Second-wave feminism reached psychology as well. An outspoken feminist in psychology was Naomi Weisstein, an accomplished researcher in psychology and neuroscience who is perhaps best known for her paper "Kirche, Küche, Kinder as Scientific Law: Psychology Constructs the Female," which criticized the field of psychology for centering men and relying too heavily on biology to explain gender differences without taking into account social factors.[76] Her work set the stage for further research in social psychology, especially in gender construction.[77] Other women in the field also continued advocating for women in psychology, creating the Association for Women in Psychology to criticize how the field treated women. E. Kitsch Child, Phyllis Chesler, and Dorothy Riddle were some of the founding members of the organization in 1969.[78][79]
The latter half of the 20th century further diversified the field of psychology, with women of color reaching new milestones. In 1962,Martha Bernalbecame the first Latina woman to get a Ph.D. in psychology. In 1969,Marigold Linton, the first Native American woman to get a Ph.D. in psychology, founded theNational Indian Education Association. She was also a founding member of theSociety for Advancement of Chicanos and Native Americans in Science. In 1971, The Network of Indian Psychologists was established byCarolyn Attneave. Harriet McAdoo was appointed to the White House Conference on Families in 1979.[80]
In the 21st century, women have gained greater prominence in psychology, contributing significantly to a wide range of subfields. Many have taken on leadership roles, directed influential research labs, and guided the next generation of psychologists. However, gender disparities remain, especially when it comes to equal pay and representation in senior academic positions.[81]The number of women pursuing education and training in psychological science has reached a record high. In the United States, estimates suggest that women make up about 78% of undergraduate students and 71% of graduate students in psychology.[81]
In 1920,Édouard ClaparèdeandPierre Bovetcreated a new applied psychology organization called the International Congress of Psychotechnics Applied to Vocational Guidance, later called the International Congress of Psychotechnics and then theInternational Association of Applied Psychology.[38]The IAAP is considered the oldest international psychology association.[82]Today, at least 65 international groups deal with specialized aspects of psychology.[82]In response to male predominance in the field, female psychologists in the U.S. formed the National Council of Women Psychologists in 1941. This organization became the International Council of Women Psychologists after World War II and the International Council of Psychologists in 1959. Several associations including theAssociation of Black Psychologistsand the Asian American Psychological Association have arisen to promote the inclusion of non-European racial groups in the profession.[82]
The International Union of Psychological Science (IUPsyS) is the world federation of national psychological societies. The IUPsyS was founded in 1951 under the auspices of the United Nations Educational, Scientific and Cultural Organization (UNESCO).[38][83] Psychology departments have since proliferated around the world, based primarily on the Euro-American model.[26][83] Since 1966, the Union has published the International Journal of Psychology.[38] IAAP and IUPsyS agreed in 1976 each to hold a congress every four years, on a staggered basis.[82]
IUPsyS recognizes 66 national psychology associations, and at least 15 others exist.[82] The American Psychological Association is the oldest and largest.[82] Its membership has increased from 5,000 in 1945 to 100,000 in the present day.[40] The APA includes 54 divisions, which since 1960 have steadily proliferated to include more specialties. Some of these divisions, such as the Society for the Psychological Study of Social Issues and the American Psychology–Law Society, began as autonomous groups.[82]
The Interamerican Psychological Society, founded in 1951, aspires to promote psychology across the Western Hemisphere. It holds the Interamerican Congress of Psychology and had 1,000 members in 2000. The European Federation of Professional Psychology Associations, founded in 1981, represents 30 national associations with a total of 100,000 individual members. At least 30 other international organizations represent psychologists in different regions.[82]
In some places, governments legally regulate who can provide psychological services or represent themselves as a "psychologist."[84]The APA defines a psychologist as someone with a doctoral degree in psychology.[85]
Early practitioners of experimental psychology distinguished themselves fromparapsychology, which in the late nineteenth century enjoyed popularity (including the interest of scholars such as William James). Some people considered parapsychology to be part of "psychology". Parapsychology,hypnotism, andpsychismwere major topics at the early International Congresses. But students of these fields were eventually ostracized, and more or less banished from the Congress in 1900–1905.[38]Parapsychology persisted for a time at Imperial University in Japan, with publications such asClairvoyance and Thoughtographyby Tomokichi Fukurai, but it was mostly shunned by 1913.[39]
As a discipline, psychology has long sought to fend off accusations that it is a"soft" science. Philosopher of scienceThomas Kuhn's 1962 critique implied psychology overall was in a pre-paradigm state, lacking agreement on the type of overarching theory found in mature hard sciences such as chemistry and physics.[86]Because some areas of psychology rely on research methods such asself-reportsin surveys and questionnaires, critics asserted that psychology is not anobjectivescience. Skeptics have suggested that personality, thinking, and emotion cannot be directly measured and are often inferred from subjective self-reports, which may be problematic. Experimental psychologists have devised a variety of ways to indirectly measure these elusive phenomenological entities.[87][88][89]
Divisions still exist within the field, with some psychologists more oriented towards the unique experiences of individual humans, which cannot be understood only as data points within a larger population. Critics inside and outside the field have argued that mainstream psychology has become increasingly dominated by a "cult of empiricism", which limits the scope of research because investigators restrict themselves to methods derived from the physical sciences.[90]:36–7Feminist critiques have argued that claims to scientific objectivity obscure the values and agenda of (historically) mostly male researchers.[44]Jean Grimshaw, for example, argues that mainstream psychological research has advanced apatriarchalagenda through its efforts to control behavior.[90]:120
Psychologists generally consider biology the substrate of thought and feeling, and therefore an important area of study. Behavioral neuroscience, also known as biological psychology, involves the application of biological principles to the study of physiological and genetic mechanisms underlying behavior in humans and other animals. The allied field of comparative psychology is the scientific study of the behavior and mental processes of non-human animals.[92] A leading question in behavioral neuroscience has been whether and how mental functions are localized in the brain. From Phineas Gage to H.M. and Clive Wearing, individual people with mental deficits traceable to physical brain damage have inspired new discoveries in this area.[93] Modern behavioral neuroscience could be said to originate in the 1870s, when in France Paul Broca traced production of speech to the left frontal gyrus, thereby also demonstrating hemispheric lateralization of brain function. Soon after, Carl Wernicke identified a related area necessary for the understanding of speech.[94]: 20–2
The contemporary field of behavioral neuroscience focuses on the physical basis of behavior. Behavioral neuroscientists use animal models, often relying on rats, to study the neural, genetic, and cellular mechanisms that underlie behaviors involved in learning, memory, and fear responses.[95] Cognitive neuroscientists, using neural imaging tools, investigate the neural correlates of psychological processes in humans. Neuropsychologists conduct psychological assessments to determine how an individual's behavior and cognition are related to the brain. The biopsychosocial model is a cross-disciplinary, holistic model that concerns the ways in which interrelationships of biological, psychological, and socio-environmental factors affect health and behavior.[96]
Evolutionary psychologyapproaches thought and behavior from a modernevolutionaryperspective. This perspective suggests that psychological adaptations evolved to solve recurrent problems in human ancestral environments. Evolutionary psychologists attempt to find out how human psychological traits are evolved adaptations, the results ofnatural selectionorsexual selectionover the course of human evolution.[97]
The history of the biological foundations of psychology includes evidence of racism. The idea of white supremacy, and indeed the modern concept of race itself, arose during the process of world conquest by Europeans.[98] Carl Linnaeus's four-fold classification of humans characterized Europeans as intelligent and severe, Americans as contented and free, Asians as ritualistic, and Africans as lazy and capricious. Race was also used to justify the construction of socially specific mental disorders such as drapetomania and dysaesthesia aethiopica—labels applied to the behavior of uncooperative African slaves.[99] After the creation of experimental psychology, "ethnical psychology" emerged as a subdiscipline, based on the assumption that studying primitive races would provide an important link between animal behavior and the psychology of more evolved humans.[100]
A tenet of behavioral research is that a large part of both human and lower-animal behavior is learned. A principle associated with behavioral research is that the mechanisms involved in learning apply to humans and non-human animals. Behavioral researchers have developed a treatment known asbehavior modification, which is used to help individuals replace undesirable behaviors with desirable ones.
Early behavioral researchers studied stimulus–response pairings, now known asclassical conditioning. They demonstrated that when a biologically potent stimulus (e.g., food that elicits salivation) is paired with a previously neutral stimulus (e.g., a bell) over several learning trials, the neutral stimulus by itself can come to elicit the response the biologically potent stimulus elicits.Ivan Pavlov—known best for inducing dogs to salivate in the presence of a stimulus previously linked with food—became a leading figure in the Soviet Union and inspired followers to use his methods on humans.[42]In the United States,Edward Lee Thorndikeinitiated "connectionist" studies by trapping animals in "puzzle boxes" and rewarding them for escaping. Thorndike wrote in 1911, "There can be no moral warrant for studying man's nature unless the study will enable us to control his acts."[32]: 212–5From 1910 to 1913 the American Psychological Association went through a sea change of opinion, away frommentalismand towards "behavioralism." In 1913, John B. Watson coined the term behaviorism for this school of thought.[32]: 218–27Watson's famousLittle Albert experimentin 1920 was at first thought to demonstrate that repeated use of upsetting loud noises could instillphobias(aversions to other stimuli) in an infant human,[15][101]although such a conclusion was likely an exaggeration.[102]Karl Lashley, a close collaborator with Watson, examined biological manifestations of learning in the brain.[93]
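The trial-by-trial growth of such a learned association is often illustrated today with the Rescorla–Wagner model, a later formalization of classical conditioning rather than anything Pavlov or Watson themselves wrote down. A minimal sketch, with arbitrary illustrative parameter values:

# Rescorla-Wagner update: associative strength V grows with each pairing
# of the neutral stimulus (bell) and the potent stimulus (food),
# in proportion to the prediction error (lam - v).
def condition(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Return associative strength after each conditioning trial."""
    v, history = 0.0, []
    for _ in range(trials):
        v += alpha * beta * (lam - v)  # learning is proportional to surprise
        history.append(round(v, 3))
    return history

# V approaches the asymptote lam: the bell alone comes to elicit the response.
print(condition(10))

Early pairings produce large gains in associative strength and later pairings progressively smaller ones, matching the negatively accelerated learning curves observed in conditioning experiments.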
Clark L. Hull, Edwin Guthrie, and others did much to help behaviorism become a widely used paradigm.[40] A new method of "instrumental" or "operant" conditioning added the concepts of reinforcement and punishment to the model of behavior change. Radical behaviorists avoided discussing the inner workings of the mind, especially the unconscious mind, which they considered impossible to assess scientifically.[103] Operant conditioning was first described by Miller and Konorski and popularized in the U.S. by B.F. Skinner, who emerged as a leading intellectual of the behaviorist movement.[104][105]
Noam Chomskypublished an influential critique of radical behaviorism on the grounds that behaviorist principles could not adequately explain the complex mental process oflanguage acquisitionand language use.[106][107]The review, which was scathing, did much to reduce the status of behaviorism within psychology.[32]: 282–5Martin Seligmanand his colleagues discovered that they could condition in dogs a state of "learned helplessness", which was not predicted by the behaviorist approach to psychology.[108][109]Edward C. Tolmanadvanced a hybrid "cognitive behavioral" model, most notably with his 1948 publication discussing thecognitive mapsused by rats to guess at the location of food at the end of a maze.[110]Skinner's behaviorism did not die, in part because it generated successful practical applications.[107]
TheAssociation for Behavior Analysis Internationalwas founded in 1974 and by 2003 had members from 42 countries. The field has gained a foothold in Latin America and Japan.[111]Applied behavior analysisis the term used for the application of the principles of operant conditioning to change socially significant behavior (it supersedes the term, "behavior modification").[112]
[Figure: two rows of color words — "Green Red Blue Purple Blue Purple" and "Blue Purple Red Green Purple Green" — the first printed in ink colors matching each word, the second in mismatched ink colors.] The Stroop effect is the fact that naming the color of the first set of words is easier and quicker than the second.
Cognitive psychology involves the study ofmental processes, includingperception,attention, language comprehension and production,memory, and problem solving.[113]Researchers in the field of cognitive psychology are sometimes calledcognitivists. They rely on aninformation processingmodel of mental functioning. Cognitivist research is informed byfunctionalismand experimental psychology.
Starting in the 1950s, the experimental techniques developed by Wundt, James, Ebbinghaus, and others re-emerged as experimental psychology became increasingly cognitivist and, eventually, constituted a part of the wider, interdisciplinarycognitive science.[114][115]Some called this development thecognitive revolutionbecause it rejected the anti-mentalist dogma of behaviorism as well as the strictures of psychoanalysis.[115]
Albert Bandura helped along the transition in psychology from behaviorism to cognitive psychology. Bandura and other social learning theorists advanced the idea of vicarious learning: the view that a child can learn by observing the immediate social environment, and not necessarily from having been reinforced for enacting a behavior, although they did not rule out the influence of reinforcement on learning a behavior.[116] In one well-known observational study, Bandura exposed children to an adult exhibiting aggressive behavior toward a toy; children who had seen the adult act aggressively were, in turn, aggressive toward their own toy when put in a situation that frustrated them, unlike children who had not been exposed to the aggressive model.[188]
Technological advances also renewed interest in mental states and mental representations. English neuroscientistCharles Sherringtonand Canadian psychologistDonald O. Hebbused experimental methods to link psychological phenomena to the structure and function of the brain. The rise of computer science,cybernetics, andartificial intelligenceunderlined the value of comparing information processing in humans and machines.
A popular and representative topic in this area iscognitive bias, or irrational thought. Psychologists (and economists) have classified and described asizeable catalog of biaseswhich recur frequently in human thought. Theavailability heuristic, for example, is the tendency to overestimate the importance of something which happens to come readily to mind.[117]
Elements of behaviorism and cognitive psychology were synthesized to formcognitive behavioral therapy, a form of psychotherapy modified from techniques developed by American psychologistAlbert Ellisand American psychiatristAaron T. Beck.
On a broader level, cognitive science is an interdisciplinary enterprise involving cognitive psychologists, cognitive neuroscientists, linguists, and researchers in artificial intelligence, human–computer interaction, andcomputational neuroscience. The discipline of cognitive science covers cognitive psychology as well as philosophy of mind, computer science, and neuroscience.[118]Computer simulations are sometimes used to model phenomena of interest.
Social psychology is concerned with howbehaviors,thoughts,feelings, and the social environment influence human interactions.[119]Social psychologists study such topics as the influence of others on an individual's behavior (e.g.conformity,persuasion) and the formation of beliefs,attitudes, andstereotypesabout other people.Social cognitionfuses elements of social and cognitive psychology for the purpose of understanding how people process, remember, or distort social information. The study ofgroup dynamicsinvolves research on the nature of leadership, organizational communication, and related phenomena. In recent years, social psychologists have become interested inimplicitmeasures,mediationalmodels, and the interaction of person and social factors in accounting for behavior. Some concepts thatsociologistshave applied to the study of psychiatric disorders, concepts such as the social role, sick role, social class, life events, culture, migration, andtotal institution, have influenced social psychologists.[120]
Psychoanalysis is a collection of theories and therapeutic techniques intended to analyze the unconscious mind and its impact on everyday life. These theories and techniques inform treatments for mental disorders.[121][122][123]Psychoanalysis originated in the 1890s, most prominently with the work ofSigmund Freud. Freud's psychoanalytic theory was largely based on interpretive methods,introspection, and clinical observation. It became very well known, largely because it tackled subjects such assexuality,repression, and the unconscious.[61]: 84–6Freud pioneered the methods offree associationanddream interpretation.[124][125]
Psychoanalytic theory is not monolithic. Other well-known psychoanalytic thinkers who diverged from Freud includeAlfred Adler,Carl Jung,Erik Erikson,Melanie Klein,D.W. Winnicott,Karen Horney,Erich Fromm,John Bowlby, Freud's daughterAnna Freud, andHarry Stack Sullivan. These individuals ensured that psychoanalysis would evolve into diverse schools of thought. Among these schools areego psychology,object relations, andinterpersonal,Lacanian, andrelational psychoanalysis.
Psychologists such asHans Eysenckand philosophers includingKarl Poppersharply criticized psychoanalysis. Popper argued that psychoanalysis was notfalsifiable(no claim it made could be proven wrong) and therefore inherently not a scientific discipline,[126]whereas Eysenck advanced the view that psychoanalytic tenets had been contradicted by experimental data. By the end of the 20th century, psychology departments inAmerican universitiesmostly had marginalized Freudian theory, dismissing it as a "desiccated and dead" historical artifact.[127]Researchers such asAntónio Damásio,Oliver Sacks, andJoseph LeDoux; and individuals in the emerging field ofneuro-psychoanalysishave defended some of Freud's ideas on scientific grounds.[128]
Humanistic psychology, which has been influenced by existentialism and phenomenology,[130]stressesfree willandself-actualization.[131]It emerged in the 1950s as a movement within academic psychology, in reaction to both behaviorism and psychoanalysis.[132]The humanistic approach seeks to view the whole person, not just fragmented parts of the personality or isolated cognitions.[133]Humanistic psychology also focuses on personal growth,self-identity, death, aloneness, and freedom. It emphasizes subjective meaning, the rejection of determinism, and concern for positive growth rather than pathology. Some founders of the humanistic school of thought were American psychologistsAbraham Maslow, who formulated ahierarchy of human needs, andCarl Rogers, who created and developedclient-centered therapy.[134]
Later, positive psychology opened up humanistic themes to scientific study. Positive psychology is the study of factors which contribute to human happiness and well-being, focusing more on people who are currently healthy. In 2010, Clinical Psychology Review published a special issue devoted to positive psychological interventions, such as gratitude journaling and the physical expression of gratitude. It is, however, far from clear that positive psychology is effective in making people happier.[135][136] Positive psychological interventions have been limited in scope, but their effects are thought to be somewhat better than placebo effects.
TheAmerican Association for Humanistic Psychology, formed in 1963, declared:
Humanistic psychology is primarily an orientation toward the whole of psychology rather than a distinct area or school. It stands for respect for the worth of persons, respect for differences of approach, open-mindedness as to acceptable methods, and interest in exploration of new aspects of human behavior. As a "third force" in contemporary psychology, it is concerned with topics having little place in existing theories and systems: e.g., love, creativity, self, growth, organism, basic need-gratification, self-actualization, higher values, being, becoming, spontaneity, play, humor, affection, naturalness, warmth, ego-transcendence, objectivity, autonomy, responsibility, meaning, fair-play, transcendental experience, peak experience, courage, and related concepts.[137]
Existential psychology emphasizes the need to understand a client's total orientation towards the world. Existential psychology is opposed to reductionism, behaviorism, and other methods that objectify the individual.[131]In the 1950s and 1960s, influenced by philosophersSøren KierkegaardandMartin Heidegger, psychoanalytically trained American psychologistRollo Mayhelped to develop existential psychology.Existential psychotherapy, which follows from existential psychology, is a therapeutic approach that is based on the idea that a person's inner conflict arises from that individual's confrontation with the givens of existence. Swiss psychoanalystLudwig Binswangerand American psychologistGeorge Kellymay also be said to belong to the existential school.[138]Existential psychologists tend to differ from more "humanistic" psychologists in the former's relatively neutral view of human nature and relatively positive assessment of anxiety.[139]Existential psychologists emphasized the humanistic themes of death, free will, and meaning, suggesting that meaning can be shaped by myths and narratives; meaning can be deepened by the acceptance of free will, which is requisite to living anauthenticlife, albeit often with anxiety with regard to death.[140]
Austrian existential psychiatrist and Holocaust survivor Viktor Frankl drew evidence of meaning's therapeutic power from reflections upon his own internment.[141] He created a variation of existential psychotherapy called logotherapy, a type of existentialist analysis that focuses on a will to meaning (in one's life), as opposed to Adler's Nietzschean doctrine of will to power or Freud's will to pleasure.[142]
Personality psychology is concerned with enduring patterns of behavior, thought, and emotion. Theories of personality vary across different psychological schools of thought. Each theory carries different assumptions about such features as the role of the unconscious and the importance of childhood experience. According to Freud, personality is based on the dynamic interactions of the id, ego, and super-ego.[143] By contrast, trait theorists have developed taxonomies of personality constructs, describing personality in terms of key traits. Trait theorists have often employed statistical data-reduction methods, such as factor analysis. Although the number of proposed traits has varied widely, Hans Eysenck's early biologically based model suggests at least three major trait constructs are necessary to describe human personality: extraversion–introversion, neuroticism–stability, and psychoticism–normality. Raymond Cattell empirically derived a theory of 16 personality factors at the primary-factor level and up to eight broader second-stratum factors.[144][145][146][147] Since the 1980s, the Big Five (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism) have emerged as an important trait theory of personality.[148] Dimensional models of personality disorders are receiving increasing support, and a version of dimensional assessment, namely the Alternative DSM-5 Model for Personality Disorders, has been included in the DSM-5. However, despite a plethora of research into the various versions of the "Big Five" personality dimensions, it appears necessary to move on from static conceptualizations of personality structure to a more dynamic orientation, acknowledging that personality constructs are subject to learning and change over the lifespan.[149][150]
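The data-reduction step mentioned above can be sketched as follows — an illustrative run of an off-the-shelf factor-analysis routine on synthetic questionnaire data, not a reconstruction of any historical analysis:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic data: 200 respondents answering 12 items that are driven
# by 3 underlying latent traits plus noise.
latent = rng.normal(size=(200, 3))
loadings = rng.normal(size=(3, 12))
responses = latent @ loadings + rng.normal(scale=0.5, size=(200, 12))

fa = FactorAnalysis(n_components=3)  # ask for 3 factors
fa.fit(responses)
print(fa.components_.shape)          # (3, 12): each item's loading on each factor

Trait theorists inspect the resulting loadings to see which items cluster together; the clusters are then interpreted and named as traits.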
An early example of personality assessment was the Woodworth Personal Data Sheet, constructed during World War I. The popular, although psychometrically inadequate, Myers–Briggs Type Indicator[151] was developed to assess individuals' "personality types" according to the personality theories of Carl Jung. The Minnesota Multiphasic Personality Inventory (MMPI), despite its name, is more a dimensional measure of psychopathology than a personality measure.[152] The California Psychological Inventory contains 20 personality scales (e.g., independence, tolerance).[153] The International Personality Item Pool, which is in the public domain, has become a source of scales that can be used for personality assessment.[154]
Study of the unconscious mind, a part of the psyche outside the individual's awareness but that is believed to influence conscious thought and behavior, was a hallmark of early psychology. In one of the first psychology experiments conducted in the United States, C.S. Peirce and Joseph Jastrow found in 1884 that research subjects could choose the minutely heavier of two weights even if consciously uncertain of the difference.[155] Freud popularized the concept of the unconscious mind, particularly when he referred to an uncensored intrusion of unconscious thought into one's speech (a Freudian slip) or to his efforts to interpret dreams.[156] His 1901 book The Psychopathology of Everyday Life catalogs hundreds of everyday events that Freud explains in terms of unconscious influence. Pierre Janet advanced the idea of a subconscious mind, which could contain autonomous mental elements unavailable to the direct scrutiny of the subject.[157]
The concept of unconscious processes has remained important in psychology. Cognitive psychologists have used a "filter" model of attention. According to the model, much information processing takes place below the threshold of consciousness, and only certain stimuli, limited by their nature and number, make their way through the filter. Much research has shown that subconsciousprimingof certain ideas can covertly influence thoughts and behavior.[157]Because of the unreliability of self-reporting, a major hurdle in this type of research involves demonstrating that a subject's conscious mind has not perceived a target stimulus. For this reason, some psychologists prefer to distinguish betweenimplicitandexplicitmemory. In another approach, one can also describe asubliminal stimulusas meeting anobjectivebut not asubjectivethreshold.[158]
Theautomaticitymodel ofJohn Barghand others involves the ideas of automaticity and unconscious processing in our understanding ofsocial behavior,[159][160]although there has been dispute with regard to replication.[161][162]Some experimental data suggest that thebrain begins to consider taking actionsbefore the mind becomes aware of them.[163]The influence of unconscious forces on people's choices bears on the philosophical question of free will. John Bargh,Daniel Wegner, andEllen Langerdescribe free will as an illusion.[159][160][164]
Some psychologists study motivation, or the question of why people or lower animals initiate a behavior at a particular time and why they continue or terminate it. Psychologists such as William James initially used the term motivation to refer to intention, in a sense similar to the concept of will in European philosophy. With the steady rise of Darwinian and Freudian thinking, instinct also came to be seen as a primary source of motivation.[165] According to drive theory, the forces of instinct combine into a single source of energy which exerts a constant influence. Psychoanalysis, like biology, regarded these forces as demands originating in the nervous system. Psychoanalysts believed that these forces, especially the sexual instincts, could become entangled and transmuted within the psyche. Classical psychoanalysis conceives of a struggle between the pleasure principle and the reality principle, roughly corresponding to id and ego. Later, in Beyond the Pleasure Principle, Freud introduced the concept of the death drive, a compulsion towards aggression, destruction, and psychic repetition of traumatic events.[166] Meanwhile, behaviorist researchers used simple dichotomous models (pleasure/pain, reward/punishment) and well-established principles such as the idea that a thirsty creature will take pleasure in drinking.[165][167] Clark Hull formalized the latter idea with his drive reduction model.[168]
Hunger, thirst, fear, sexual desire, and thermoregulation constitute fundamental motivations in animals.[167]Humans seem to exhibit a more complex set of motivations—though theoretically these could be explained as resulting from desires for belonging, positive self-image, self-consistency, truth, love, and control.[169][170]
Motivation can be modulated or manipulated in many different ways. Researchers have found thateating, for example, depends not only on the organism's fundamental need forhomeostasis—an important factor causing the experience of hunger—but also on circadian rhythms, food availability, food palatability, and cost.[167]Abstract motivations are also malleable, as evidenced by such phenomena asgoal contagion: the adoption of goals, sometimes unconsciously, based on inferences about the goals of others.[171]Vohs andBaumeistersuggest that contrary to the need-desire-fulfillment cycle of animal instincts, human motivations sometimes obey a "getting begets wanting" rule: the more you get a reward such as self-esteem, love, drugs, or money, the more you want it. They suggest that this principle can even apply to food, drink, sex, and sleep.[172]
Developmental psychology is the scientific study of how and why the thought processes, emotions, and behaviors of humans change over the course of their lives.[173] Some credit Charles Darwin with conducting the first systematic study within the rubric of developmental psychology, having published in 1877 a short paper detailing the development of innate forms of communication based on his observations of his infant son.[174] The main origins of the discipline, however, are found in the work of Jean Piaget. Like Piaget, developmental psychologists originally focused primarily on the development of cognition from infancy to adolescence. Later, developmental psychology extended itself to the study of cognition over the life span. In addition to studying cognition, developmental psychologists have also come to focus on affective, behavioral, moral, social, and neural development.
Developmental psychologists who study children use a number of research methods. For example, they make observations of children in natural settings such as preschools[175]and engage them in experimental tasks.[176]Such tasks often resemble specially designed games and activities that are both enjoyable for the child and scientifically useful. Developmental researchers have even devised clever methods to study the mental processes of infants.[177]In addition to studying children, developmental psychologists also study aging and processes throughout the life span, including old age.[178]These psychologists draw on the full range of psychological theories to inform their research.[173]
All researched psychological traits are influenced by bothgenesandenvironment, to varying degrees.[179][180]These two sources of influence are often confounded in observational research of individuals and families. An example of this confounding can be shown in the transmission ofdepressionfrom a depressed mother to her offspring. A theory based on environmental transmission would hold that an offspring, by virtue of their having a problematic rearing environment managed by a depressed mother, is at risk for developing depression. On the other hand, a hereditarian theory would hold that depression risk in an offspring is influenced to some extent by genes passed to the child from the mother. Genes and environment in these simple transmission models are completely confounded. A depressed mother may both carry genes that contribute to depression in her offspring and also create a rearing environment that increases the risk of depression in her child.[181]
Behavioral genetics researchers have employed methodologies that help to disentangle this confound and understand the nature and origins of individual differences in behavior.[97]Traditionally the research has involvedtwin studiesandadoption studies, two designs where genetic and environmental influences can be partially un-confounded. More recently, gene-focused research has contributed to understanding genetic contributions to the development of psychological traits.
The availability ofmicroarraymolecular geneticorgenome sequencingtechnologies allows researchers to measure participant DNA variation directly, and test whether individual genetic variants within genes are associated with psychological traits andpsychopathologythrough methods includinggenome-wide association studies. One goal of such research is similar to that inpositional cloningand its success inHuntington's: once a causal gene is discovered biological research can be conducted to understand how that gene influences the phenotype. One major result of genetic association studies is the general finding that psychological traits and psychopathology, as well as complex medical diseases, are highlypolygenic,[182][183][184][185][186]where a large number (on the order of hundreds to thousands) of genetic variants, each of small effect, contribute to individual differences in the behavioral trait or propensity to the disorder. Active research continues to work toward understanding the genetic and environmental bases of behavior and their interaction.
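To make the idea of polygenicity concrete, a polygenic score is simply a weighted sum of an individual's allele counts across many variants. The following is a hedged sketch with made-up numbers, not a real analysis pipeline:

import numpy as np

rng = np.random.default_rng(0)
n_people, n_variants = 5, 1000  # hypothetical cohort size and variant count
# Genotypes coded as 0, 1, or 2 copies of the effect allele per variant.
genotypes = rng.integers(0, 3, size=(n_people, n_variants))
# Many tiny effect sizes, as GWAS results for polygenic traits suggest.
effects = rng.normal(0.0, 0.01, size=n_variants)

# Polygenic score: each variant contributes a small amount; none dominates.
scores = genotypes @ effects
print(np.round(scores, 3))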
Psychology encompasses many subfields and includes different approaches to the study of mental processes and behavior.
Psychological testing has ancient origins, dating as far back as 2200 BC, in theexaminations for the Chinese civil service. Written exams began during theHan dynasty(202 BC – AD 220). By 1370, the Chinese system required a stratified series of tests, involving essay writing and knowledge of diverse topics. The system was ended in 1906.[187]: 41–2In Europe, mental assessment took a different approach, with theories ofphysiognomy—judgment of character based on the face—described by Aristotle in 4th century BC Greece. Physiognomy remained current through the Enlightenment, and added the doctrine of phrenology: a study of mind and intelligence based on simple assessment of neuroanatomy.[187]: 42–3
When experimental psychology came to Britain, Francis Galton was a leading practitioner. By virtue of his procedures for measuring reaction time and sensation, he is considered an inventor of modern mental testing (also known as psychometrics).[187]: 44–5 James McKeen Cattell, a student of Wundt and Galton, brought the idea of psychological testing to the United States, and in fact coined the term "mental test".[187]: 45–6 In 1901, Cattell's student Clark Wissler published discouraging results, suggesting that mental testing of Columbia and Barnard students failed to predict academic performance.[187]: 45–6 In response to 1904 orders from the Minister of Public Instruction, psychologists Alfred Binet and Théodore Simon developed and elaborated a new test of intelligence in 1905–1911. They used a range of questions diverse in their nature and difficulty. Binet and Simon introduced the concept of mental age and referred to the lowest scorers on their test as idiots. Henry H. Goddard put the Binet-Simon scale to work and introduced classifications of mental level such as imbecile and feebleminded. In 1916 (after Binet's death), Stanford professor Lewis M. Terman modified the Binet-Simon scale (renamed the Stanford–Binet scale) and introduced the intelligence quotient as a score report.[187]: 50–56 Based on his test findings, and reflecting the racism common to that era, Terman concluded that intellectual disability "represents the level of intelligence which is very, very common among Spanish-Indians and Mexican families of the Southwest and also among negroes. Their dullness seems to be racial."[189]
The Army Alpha and Army Beta tests, developed under psychologist Robert Yerkes in 1917, were used in World War I by industrial and organizational psychologists for large-scale employee testing and selection of military personnel.[190] Mental testing also became popular in the U.S., where it was applied to schoolchildren. The federally created National Intelligence Test was administered to 7 million children in the 1920s, and in 1926 the College Entrance Examination Board created the Scholastic Aptitude Test to standardize college admissions.[187]: 61 The results of intelligence tests were used to argue for segregated schools and economic functions, including the preferential training of Black Americans for manual labor. These practices were criticized by Black intellectuals such as Horace Mann Bond and Allison Davis.[189] Eugenicists used mental testing to justify and organize compulsory sterilization of individuals classified as mentally retarded (now referred to as intellectual disability).[49] In the United States, tens of thousands of men and women were sterilized. Setting a precedent that has never been overturned, the U.S. Supreme Court affirmed the constitutionality of this practice in the 1927 case Buck v. Bell.[191]
Today mental testing is a routine phenomenon for people of all ages in Western societies.[187]:2Modern testing aspires to criteria including standardization of procedure,consistency of results, output of an interpretable score, statistical norms describing population outcomes, and, ideally,effective predictionof behavior and life outcomes outside of testing situations.[187]: 4–6Psychological testing is regularly used in forensic contexts to aid legal judgments and decisions.[192]Developments in psychometrics include work on test and scalereliabilityandvalidity.[193]Developments initem-response theory,[194]structural equation modeling,[195]and bifactor analysis[196]have helped in strengthening test and scale construction.
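One common index of the scale reliability mentioned here is Cronbach's alpha, which gauges the internal consistency of a multi-item test. A minimal sketch on made-up item scores (real test construction involves much more, including validity evidence and norming):

import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-item questionnaire answered by 4 respondents.
data = np.array([[4, 5, 4, 4, 5],
                 [2, 2, 3, 2, 2],
                 [5, 4, 5, 5, 4],
                 [3, 3, 2, 3, 3]])
print(round(cronbach_alpha(data), 3))  # values near 1 indicate high consistency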
The provision of psychological health services is generally called clinical psychology in the U.S. Sometimes, however, members of the school psychology and counseling psychology professions engage in practices that resemble those of clinical psychologists. Clinical psychologists typically include people who have graduated from doctoral programs in clinical psychology. In Canada, some of the members of the abovementioned groups usually fall within the larger category of professional psychology. In Canada and the U.S., practitioners get bachelor's degrees and doctorates; doctoral students in clinical psychology usually spend one year in a predoctoral internship and one year in a postdoctoral internship. In Mexico and most other Latin American and European countries, psychologists do not get bachelor's and doctoral degrees; instead, they take a three-year professional course following high school.[85] Clinical psychology is at present the largest specialization within psychology.[197] It includes the study and application of psychology for the purpose of understanding, preventing, and relieving psychological distress, dysfunction, and/or mental illness. Clinical psychologists also try to promote subjective well-being and personal growth. Central to the practice of clinical psychology are psychological assessment and psychotherapy, although clinical psychologists may also engage in research, teaching, consultation, forensic testimony, and program development and administration.[198]
Credit for the first psychology clinic in the United States typically goes to Lightner Witmer, who established his practice in Philadelphia in 1896. Another modern psychotherapist was Morton Prince, an early advocate for the establishment of psychology as a clinical and academic discipline.[197] In the first part of the twentieth century, most mental health care in the United States was performed by psychiatrists, who are medical doctors. Psychology entered the field with its refinements of mental testing, which promised to improve the diagnosis of mental problems. For their part, some psychiatrists became interested in using psychoanalysis and other forms of psychodynamic psychotherapy to understand and treat the mentally ill.[44][199]
Psychotherapy as conducted by psychiatrists blurred the distinction between psychiatry and psychology, and this trend continued with the rise of community mental health facilities. Some in the clinical psychology community adopted behavioral therapy, a thoroughly non-psychodynamic model that used behaviorist learning theory to change the actions of patients. A key aspect of behavior therapy is empirical evaluation of the treatment's effectiveness. In the 1970s, cognitive-behavior therapy emerged with the work of Albert Ellis and Aaron Beck. Although there are similarities between behavior therapy and cognitive-behavior therapy, cognitive-behavior therapy additionally required the application of cognitive constructs. Since the 1970s, the popularity of cognitive-behavior therapy among clinical psychologists has increased. A key practice in behavioral and cognitive-behavioral therapy is exposing patients to things they fear, based on the premise that their responses (fear, panic, anxiety) can be deconditioned.[200]
Mental health care today involves psychologists and social workers in increasing numbers. In 1977, National Institute of Mental Health director Bertram Brown described this shift as a source of "intense competition and role confusion."[44] Graduate programs issuing doctorates in clinical psychology emerged in the 1950s and grew rapidly through the 1980s. The PhD degree is intended to train practitioners who can also conduct scientific research; the PsyD degree is more exclusively designed to train practitioners.[85]
Some clinical psychologists focus on the clinical management of patients with brain injury. This subspecialty is known as clinical neuropsychology. In many countries, clinical psychology is a regulated mental health profession. The emerging field of disaster psychology (see crisis intervention) involves professionals who respond to large-scale traumatic events.[201]
The work performed by clinical psychologists tends to be influenced by various therapeutic approaches, all of which involve a formal relationship between professional and client (usually an individual, couple, family, or small group). Typically, these approaches encourage new ways of thinking, feeling, or behaving. Four major theoretical perspectives are psychodynamic, cognitive behavioral, existential–humanistic, and systems or family therapy. There has been a growing movement to integrate the various therapeutic approaches, especially with an increased understanding of issues regarding culture, gender, spirituality, and sexual orientation. With the advent of more robust research findings regarding psychotherapy, there is evidence that most of the major therapies have equal effectiveness, with the key common element being a strong therapeutic alliance.[202][203] Because of this, more training programs and psychologists are now adopting an eclectic therapeutic orientation.[204][205][206][207][208]
Diagnosis in clinical psychology usually follows the Diagnostic and Statistical Manual of Mental Disorders (DSM).[209] The study of mental illnesses is called abnormal psychology.
Educational psychology is the study of how humans learn in educational settings, the effectiveness of educational interventions, the psychology of teaching, and the social psychology of schools as organizations. Educational psychologists can be found in preschools, schools of all levels including post-secondary institutions, community organizations and learning centers, government or private research firms, and working as independent or private consultants.[210] The work of developmental psychologists such as Lev Vygotsky, Jean Piaget, and Jerome Bruner has been influential in creating teaching methods and educational practices. Educational psychology is often included in teacher education programs in places such as North America, Australia, and New Zealand.
School psychology combines principles from educational psychology and clinical psychology to understand and treat students with learning disabilities; to foster the intellectual growth of gifted students; to facilitate prosocial behaviors in adolescents; and otherwise to promote safe, supportive, and effective learning environments. School psychologists are trained in educational and behavioral assessment, intervention, prevention, and consultation, and many have extensive training in research.[211]
Industrial and organizational (I/O) psychology involves research and practices that apply psychological theories and principles to organizations and individuals' work-lives.[212] In the field's beginnings, industrialists brought the nascent field of psychology to bear on the study of scientific management techniques for improving workplace efficiency. The field was at first called economic psychology or business psychology; later, industrial psychology, employment psychology, or psychotechnology.[213] An influential early study examined workers at Western Electric's Hawthorne plant in Cicero, Illinois from 1924 to 1932. Western Electric experimented on factory workers to assess their responses to changes in illumination, breaks, food, and wages. The researchers came to focus on workers' responses to observation itself, and the term Hawthorne effect is now used to describe the fact that people's behavior can change when they think they are being observed.[214] Although the Hawthorne research can be found in psychology textbooks, the research and its findings were weak at best.[215][216]
The name industrial and organizational psychology emerged in the 1960s. In 1973, it became enshrined in the name of the Society for Industrial and Organizational Psychology, Division 14 of the American Psychological Association.[213] One goal of the discipline is to optimize human potential in the workplace. Personnel psychology is a subfield of I/O psychology. Personnel psychologists apply the methods and principles of psychology in selecting and evaluating workers. Another subfield, organizational psychology, examines the effects of work environments and management styles on worker motivation, job satisfaction, and productivity.[217] Most I/O psychologists work outside of academia, for private and public organizations and as consultants.[213] A psychology consultant working in business today might expect to provide executives with information and ideas about their industry, their target markets, and the organization of their company.[218][219]
Organizational behavior (OB) is an allied field involved in the study of human behavior within organizations.[220] One way to differentiate I/O psychology from OB is that I/O psychologists train in university psychology departments, while OB specialists train in business schools.
One role for psychologists in the military has been to evaluate and counsel soldiers and other personnel. In the U.S., this function began during World War I, when Robert Yerkes established the School of Military Psychology at Fort Oglethorpe in Georgia. The school provided psychological training for military staff.[44][221] Today, U.S. Army psychologists perform psychological screening, clinical psychotherapy, suicide prevention, and treatment for post-traumatic stress, as well as provide prevention-related services, for example, smoking cessation.[222] The United States Army's Mental Health Advisory Teams implement psychological interventions to help combat troops experiencing mental problems.[223][224]
Psychologists may also work on a diverse set of campaigns known broadly as psychological warfare. Psychological warfare chiefly involves the use of propaganda to influence enemy soldiers and civilians. Propaganda designed to seem as if it originates from a source other than the Army is known as black propaganda.[225] The CIA's MKULTRA program involved more individualized efforts at mind control, involving techniques such as hypnosis, torture, and covert involuntary administration of LSD.[226] The U.S. military used the name Psychological Operations (PSYOP) until 2010, when these activities were reclassified as Military Information Support Operations (MISO), part of Information Operations (IO).[227] Psychologists have sometimes been involved in assisting the interrogation and torture of suspects, staining the records of the psychologists involved.[228]
An example of the contribution of psychologists to social change involves the research of Kenneth and Mamie Phipps Clark. These two African American psychologists studied segregation's adverse psychological impact on Black children. Their research findings played a role in the desegregation case Brown v. Board of Education (1954).[229]
The impact of psychology on social change includes the discipline's broad influence on teaching and learning. Research has shown that compared to the "whole word" or "whole language" approach, the phonics approach to reading instruction is more efficacious.[230]
Medical facilities increasingly employ psychologists to perform various roles. One aspect of health psychology is the psychoeducation of patients: instructing them in how to follow a medical regimen. Health psychologists can also educate doctors and conduct research on patient compliance.[231][232] Psychologists in the field of public health use a wide variety of interventions to influence human behavior. These range from public relations campaigns and outreach to governmental laws and policies. Psychologists study the composite influence of all these different tools in an effort to influence whole populations of people.[233]
Psychologists work with organizations to apply findings from psychological research to improve the health and well-being of employees. Some work as external consultants hired by organizations to solve specific problems, whereas others are full-time employees of the organization. Applications include conducting surveys to identify issues and designing interventions to make work healthier. Specific health areas addressed include workplace accidents, violence, and stress.
Interventions that improve organizational climates are one way to address accidents and violence. Interventions that reduce stress at work, or that provide employees with tools to manage it better, can help in areas where stress is an important component.
Industrial psychology became interested in worker fatigue during World War I, when government ministers in Britain were concerned about the impact of fatigue on workers in munitions factories, though not in other types of factories.[241][242] In the U.K., some interest in worker well-being emerged with the efforts of Charles Samuel Myers and his National Institute of Industrial Psychology (NIIP) during the inter-war years.[243] In the U.S. during the mid-twentieth century, industrial psychologist Arthur Kornhauser pioneered the study of occupational mental health, linking industrial working conditions to mental health as well as the spillover of an unsatisfying job into a worker's personal life.[244][245] Zickar accumulated evidence to show that "no other industrial psychologist of his era was as devoted to advocating management and labor practices that would improve the lives of working people."[244]
As interest in worker health expanded toward the end of the twentieth century, the field of occupational health psychology (OHP) emerged. OHP is an interdisciplinary branch of psychology concerned with the health and safety of workers.[52][246] OHP addresses topic areas such as the impact of occupational stressors on physical and mental health, mistreatment of workers (e.g., bullying and violence), work-family balance, the impact of involuntary unemployment on physical and mental health, the influence of psychosocial factors on safety and accidents, and interventions designed to improve and protect worker health.[52][247] OHP grew out of health psychology, industrial and organizational psychology, and occupational medicine.[248] OHP has also been informed by disciplines outside psychology, including industrial engineering, sociology, and economics.[249][250]
Quantitative psychological research lends itself to the statistical testing of hypotheses. Although the field makes abundant use of randomized and controlled experiments in laboratory settings, such research can only assess a limited range of short-term phenomena. Some psychologists rely on less rigorously controlled, but more ecologically valid, field experiments as well. Other research psychologists rely on statistical methods to glean knowledge from population data.[251] The statistical methods research psychologists employ include the Pearson product–moment correlation coefficient, the analysis of variance, multiple linear regression, logistic regression, structural equation modeling, and hierarchical linear modeling. The measurement and operationalization of important constructs is an essential part of these research designs.
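To make the first of these methods concrete, the following is a minimal sketch in Python of computing a Pearson product–moment correlation coefficient; the scores are invented for illustration, not taken from any study.

```python
# Minimal sketch: Pearson product-moment correlation r for two
# hypothetical score lists (illustrative values only).
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours_studied = [2, 4, 5, 7, 9]       # hypothetical predictor
test_scores = [55, 60, 70, 80, 88]    # hypothetical outcome
print(round(pearson_r(hours_studied, test_scores), 3))  # about 0.99
```

An r near +1 or −1 indicates a strong linear association; a value near 0 indicates little or no linear association.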
Although this type of psychological research is much less abundant than quantitative research, some psychologists conduct qualitative research. This type of research can involve interviews, questionnaires, and first-hand observation.[252] While hypothesis testing is rare, and indeed virtually impossible, in qualitative research, qualitative studies can be helpful in theory and hypothesis generation, in interpreting seemingly contradictory quantitative findings, and in understanding why some interventions fail and others succeed.[253]
A true experiment with random assignment of research participants (sometimes called subjects) to rival conditions allows researchers to make strong inferences about causal relationships. When there are large numbers of research participants, the random assignment (also called random allocation) of those participants to rival conditions ensures that the individuals in those conditions will, on average, be similar on most characteristics, including characteristics that went unmeasured. In an experiment, the researcher alters one or more variables of influence, called independent variables, and measures resulting changes in the factors of interest, called dependent variables. Prototypical experimental research is conducted in a laboratory with a carefully controlled environment.
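The logic of random assignment can be illustrated with a small simulation. The sketch below, in Python, shows that randomly splitting a large participant pool yields two groups with nearly identical averages on a trait the researcher never measured; all values are simulated, arbitrary choices for demonstration.

```python
# Sketch of why random assignment balances groups: simulate an
# unmeasured trait for 1,000 hypothetical participants, split them
# at random, and compare group means (all values simulated).
import random

random.seed(42)
trait = [random.gauss(100, 15) for _ in range(1000)]  # unmeasured characteristic
random.shuffle(trait)
treatment, control = trait[:500], trait[500:]

print(round(sum(treatment) / len(treatment), 1))  # the two means
print(round(sum(control) / len(control), 1))      # differ only slightly
```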
A quasi-experiment is a situation in which different conditions are being studied, but random assignment to the different conditions is not possible. Investigators must work with preexisting groups of people. Researchers can use common sense to consider how much the nonrandom assignment threatens the study's validity.[256] For example, in research on the best way to affect reading achievement in the first three grades of school, school administrators may not permit educational psychologists to randomly assign children to phonics and whole language classrooms, in which case the psychologists must work with preexisting classroom assignments. Psychologists will compare the achievement of children attending phonics and whole language classes and, perhaps, statistically adjust for any initial differences in reading level.
Experimental researchers typically use a statistical hypothesis testing model which involves making predictions before conducting the experiment, then assessing how well the data collected are consistent with the predictions. These predictions are likely to originate from one or more abstract scientific hypotheses about how the phenomenon under study actually works.[257]
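As a hedged illustration of this model, the sketch below uses Python and the SciPy library (assumed to be installed) to compare two invented groups of scores with an independent-samples t-test; the prediction, data, and group labels are all hypothetical.

```python
# Sketch of the hypothesis-testing model: state a prediction, then
# check how consistent the collected data are with it.
# Scores are invented; scipy is assumed to be available.
from scipy import stats

# Prediction made in advance: the treatment group scores higher.
treatment = [78, 82, 85, 90, 88, 84, 91, 79]
control = [70, 75, 72, 80, 68, 74, 77, 71]

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value means the observed difference would be unlikely
# if the null hypothesis (no true difference) were correct.
```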
Surveys are used in psychology for the purpose of measuring attitudes and traits, monitoring changes in mood, and checking the validity of experimental manipulations (checking research participants' perception of the condition they were assigned to). Psychologists have commonly used paper-and-pencil surveys. However, surveys are also conducted over the phone or through e-mail. Web-based surveys are increasingly used to conveniently reach many subjects.
Observational studies are commonly conducted in psychology. In cross-sectional observational studies, psychologists collect data at a single point in time. The goal of many cross-sectional studies is to assess the extent to which factors are correlated with each other. By contrast, in longitudinal studies psychologists collect data on the same sample at two or more points in time. Sometimes the purpose of longitudinal research is to study trends across time, such as the stability of traits or age-related changes in behavior. Because some studies involve endpoints that psychologists cannot ethically study from an experimental standpoint, such as identifying the causes of depression, they conduct longitudinal studies of a large group of depression-free people, periodically assessing what is happening in the individuals' lives. In this way psychologists have an opportunity to test causal hypotheses regarding conditions that commonly arise in people's lives that put them at risk for depression. Problems that affect longitudinal studies include selective attrition, the type of problem in which bias is introduced when a certain kind of research participant disproportionately leaves a study.
One example of an observational study was run by Albert Bandura. This observational study focused on children who were exposed to an adult exhibiting aggressive behaviors and their reaction to toys, versus other children who were not exposed to these stimuli. The results showed that children who had seen the adult acting aggressively towards a toy were, in turn, aggressive towards their own toy when put in a situation that frustrated them.[188]
Exploratory data analysis includes a variety of practices that researchers use to reduce a great many variables to a small number of overarching factors. In Peirce's three modes of inference, exploratory data analysis corresponds to abduction.[258] Meta-analysis is the technique research psychologists use to integrate results from many studies of the same variables and arrive at a grand average of the findings.[259]
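A fixed-effect meta-analysis gives one simple reading of that "grand average": each study's effect size is weighted by the inverse of its sampling variance, and the weighted mean is reported. The sketch below is a minimal Python illustration with invented effect sizes and variances.

```python
# Sketch of a fixed-effect meta-analysis: inverse-variance-weighted
# average of per-study effect sizes (all numbers invented).
effects = [0.30, 0.45, 0.20, 0.55]     # per-study effect sizes (e.g., Cohen's d)
variances = [0.04, 0.09, 0.02, 0.12]   # per-study sampling variances

weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print(round(pooled, 3))  # the "grand average" effect, about 0.287
```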
A classic and popular tool used to relate mental and neural activity is the electroencephalogram (EEG), a technique in which amplified signals from electrodes on a person's scalp measure voltage changes in different parts of the brain. Hans Berger, the first researcher to use EEG on an unopened skull, quickly found that brains exhibit signature "brain waves": electric oscillations which correspond to different states of consciousness. Researchers subsequently refined statistical methods for synthesizing the electrode data, and identified unique brain wave patterns such as the delta wave observed during non-REM sleep.[260]
Newer functional neuroimaging techniques include functional magnetic resonance imaging and positron emission tomography, both of which track the flow of blood through the brain. These technologies provide more localized information about activity in the brain and create representations of the brain with widespread appeal. They also provide insight which avoids the classic problems of subjective self-reporting. It remains challenging to draw hard conclusions about where in the brain specific thoughts originate, or even how usefully such localization corresponds with reality. However, neuroimaging has delivered unmistakable results showing the existence of correlations between mind and brain. Some of these draw on a systemic neural network model rather than a localized function model.[261][262][263]
Interventions such as transcranial magnetic stimulation and drugs also provide information about brain–mind interactions. Psychopharmacology is the study of drug-induced mental effects.
Computational modeling is a tool used in mathematical psychology and cognitive psychology to simulate behavior.[264] This method has several advantages. Since modern computers process information quickly, simulations can be run in a short time, allowing for high statistical power. Modeling also allows psychologists to visualize hypotheses about the functional organization of mental events that could not be directly observed in a human. Computational neuroscience uses mathematical models to simulate the brain. Another method is symbolic modeling, which represents many mental objects using variables and rules. Other types of modeling include dynamic systems and stochastic modeling.
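As one hedged example of stochastic modeling in this spirit, the sketch below simulates a simple random-walk model of two-choice decisions, in which noisy evidence accumulates until it reaches a boundary; every parameter value is an arbitrary assumption chosen for demonstration, not an estimate from data.

```python
# Sketch of a stochastic evidence-accumulation model: evidence drifts
# toward one of two boundaries; the boundary reached gives the choice,
# and the number of steps stands in for response time.
import random

def simulate_decision(drift=0.1, boundary=10.0, noise=1.0):
    evidence, steps = 0.0, 0
    while abs(evidence) < boundary:
        evidence += drift + random.gauss(0, noise)
        steps += 1
    return ("A" if evidence > 0 else "B"), steps

random.seed(1)
trials = [simulate_decision() for _ in range(1000)]
choices_a = sum(1 for choice, _ in trials if choice == "A")
mean_rt = sum(steps for _, steps in trials) / len(trials)
print(choices_a / len(trials))  # proportion of "A" choices
print(mean_rt)                  # mean simulated response time in steps
```

Running many simulated trials like this lets a modeler compare the distributions of predicted choices and response times against human data.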
Animal experiments aid in investigating many aspects of human psychology, including perception, emotion, learning, memory, and thought, to name a few. In the 1890s, Russian physiologist Ivan Pavlov famously used dogs to demonstrate classical conditioning. Non-human primates, cats, dogs, pigeons, rats, and other rodents are often used in psychological experiments. Ideally, controlled experiments introduce only one independent variable at a time, in order to ascertain its unique effects upon dependent variables. These conditions are approximated best in laboratory settings. In contrast, human environments and genetic backgrounds vary so widely, and depend upon so many factors, that it is difficult to control important variables for human subjects. There are pitfalls, however, in generalizing findings from animal studies to humans through animal models.[265]
Comparative psychology is the scientific study of the behavior and mental processes of non-human animals, especially as these relate to the phylogenetic history, adaptive significance, and development of behavior. Research in this area explores the behavior of many species, from insects to primates. It is closely related to other disciplines that study animal behavior, such as ethology.[266] Research in comparative psychology sometimes appears to shed light on human behavior, but some attempts to connect the two have been quite controversial, for example the Sociobiology of E.O. Wilson.[267] Animal models are often used to study neural processes related to human behavior, e.g. in cognitive neuroscience.
Qualitative research is often designed to answer questions about the thoughts, feelings, and behaviors of individuals. Qualitative research involving first-hand observation can help describe events as they occur, with the goal of capturing the richness of everyday behavior and with the hope of discovering and understanding phenomena that might have been missed if only more cursory examinations were made.
Qualitative psychological research methods include interviews, first-hand observation, and participant observation. Creswell (2003) identified five main possibilities for qualitative research, including narrative, phenomenology, ethnography, case study, and grounded theory. Qualitative researchers[269] sometimes aim to enrich our understanding of symbols, subjective experiences, or social structures. Sometimes hermeneutic and critical aims can give rise to quantitative research, as in Erich Fromm's application of psychological and sociological theories, in his book Escape from Freedom, to understanding why many ordinary Germans supported Hitler.[270]
Just as Jane Goodall studied chimpanzee social and family life by careful observation of chimpanzee behavior in the field, psychologists conduct naturalistic observation of ongoing human social, professional, and family life. Sometimes the participants are aware they are being observed, and other times the participants do not know they are being observed. Strict ethical guidelines must be followed when covert observation is being carried out.
Program evaluation involves the systematic collection, analysis, and application of information to answer questions about projects, policies, and programs, particularly about their effectiveness.[271][272] In both the public and private sectors, stakeholders often want to know the extent to which the programs they are funding, implementing, voting for, receiving, or objecting to are producing the intended effects. While program evaluation focuses first on effectiveness, important considerations often include how much the program costs per participant, how the program could be improved, whether the program is worthwhile, whether there are better alternatives, whether there are unintended outcomes, and whether the program goals are appropriate and useful.[273]
Metascience involves the application of scientific methodology to study science itself. The field of metascience has revealed problems in psychological research. Some psychological research has suffered from bias,[274] problematic reproducibility,[275] and misuse of statistics.[276] These findings have led to calls for reform from within and from outside the scientific community.[277]
In 1959, statistician Theodore Sterling examined the results of psychological studies and discovered that 97% of them supported their initial hypotheses, implying possible publication bias.[278][279][280] Similarly, Fanelli (2010)[281] found that 91.5% of psychiatry/psychology studies confirmed the effects they were looking for, and concluded that the odds of this happening (a positive result) were around five times higher than in fields such as space science or geosciences. Fanelli argued that this is because researchers in "softer" sciences have fewer constraints on their conscious and unconscious biases.
A replication crisis in psychology has emerged. Many notable findings in the field have not been replicated. Some researchers have even been accused of publishing fraudulent results.[282][283][284] Systematic efforts, including efforts by the Reproducibility Project of the Center for Open Science, to assess the extent of the problem found that as many as two-thirds of highly publicized findings in psychology failed to be replicated.[285] Reproducibility has generally been stronger in cognitive psychology (in studies and journals) than in social psychology[285] and subfields of differential psychology.[286][287] Other subfields of psychology have also been implicated in the replication crisis, including clinical psychology,[288][289][290] developmental psychology,[291][292][293] and a field closely related to psychology, educational research.[294][295][296][297][298]
Focus on the replication crisis has led to other renewed efforts in the discipline to re-test important findings.[299][300] In response to concerns about publication bias and data dredging (conducting a large number of statistical tests on a great many variables but restricting reporting to the results that were statistically significant), 295 psychology and medical journals have adopted result-blind peer review, in which studies are accepted not on the basis of the findings reported after a study is completed, but before the study is conducted, on the basis of the methodological rigor of its experimental design and the theoretical justification for its proposed statistical analyses.[301][302] In addition, large-scale collaborations among researchers working in multiple labs in different countries have taken place. The collaborators regularly make their data openly available for different researchers to assess.[303] Allen and Mehler[304] estimated that 61 per cent of result-blind studies have yielded null results, in contrast to an estimated 5 to 20 per cent in traditional research.
Some critics view statistical hypothesis testing as misplaced. Psychologist and statistician Jacob Cohen wrote in 1994 that psychologists routinely confuse statistical significance with practical importance, enthusiastically reporting great certainty in unimportant facts.[305] Some psychologists have responded with an increased use of effect size statistics, rather than sole reliance on p-values.[306]
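Cohen's point can be illustrated with the effect-size statistic named after him. The sketch below computes Cohen's d, the group difference in pooled-standard-deviation units, for invented summary statistics; it shows how a difference that is trivially small in effect-size terms can still reach statistical significance in a very large sample.

```python
# Sketch: Cohen's d expresses a mean difference in standard-deviation
# units, independent of sample size (summary statistics invented).
from math import sqrt

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# With n = 50,000 per group, even this tiny difference can reach p < .05,
# yet d = 0.04 is practically negligible.
print(round(cohens_d(100.6, 100.0, 15.0, 15.0, 50_000, 50_000), 3))
```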
In 2008, Arnett pointed out that most articles in American Psychological Association journals were about U.S. populations, even though U.S. citizens are only 5% of the world's population. He complained that psychologists had no basis for assuming psychological processes to be universal and generalizing research findings to the rest of the global population.[307] In 2010, Henrich, Heine, and Norenzayan reported a bias in conducting psychology studies with participants from "WEIRD" ("Western, Educated, Industrialized, Rich, and Democratic") societies.[308][309] Henrich et al. found that "96% of psychological samples come from countries with only 12% of the world's population" (p. 63). The article gave examples of results that differ significantly between people from WEIRD and tribal cultures, including the Müller-Lyer illusion. Arnett (2008), Altmaier and Hall (2008), and Morgan-Consoli et al. (2018) view the Western bias in research and theory as a serious problem considering that psychologists are increasingly applying psychological principles developed in WEIRD regions in their research, clinical work, and consultation with populations around the world.[307][310][311] In 2018, Rad, Martingano, and Ginges showed that, nearly a decade after Henrich et al.'s paper, over 80% of the samples used in studies published in the journal Psychological Science were WEIRD samples. Moreover, their analysis showed that several studies did not fully disclose the origin of their samples; the authors offered a set of recommendations to editors and reviewers to reduce WEIRD bias.[312]
Similar to the WEIRD bias, starting in 2020, researchers of non-human behavior have started to emphasize the need to document the possibility of STRANGE (Social background, Trappability and self-selection, Rearing history, Acclimation and habituation, Natural changes in responsiveness, Genetic makeup, and Experience) bias in study conclusions.[313]
Some observers perceive a gap between scientific theory and its application, in particular the application of unsupported or unsound clinical practices.[314] Critics say there has been an increase in the number of mental health training programs that do not instill scientific competence.[315] Practices such as "facilitated communication for infantile autism"; memory-recovery techniques including body work; and other therapies, such as rebirthing and reparenting, may be dubious or even dangerous, despite their popularity.[316] These practices, however, are outside the mainstream practices taught in clinical psychology doctoral programs.
Ethical standards in the discipline have changed over time. Some famous past studies are today considered unethical and in violation of established codes (e.g., the Canadian Code of Conduct for Research Involving Humans, and the Belmont Report). The American Psychological Association has advanced a set of ethical principles and a code of conduct for the profession.[317]
The most important contemporary standards include informed and voluntary consent. After World War II, the Nuremberg Code was established because of Nazi abuses of experimental subjects. Later, most countries (and scientific journals) adopted the Declaration of Helsinki. In the U.S., the National Institutes of Health established the Institutional Review Board in 1966, and in 1974 the National Research Act (HR 7724) was adopted. All of these measures encouraged researchers to obtain informed consent from human participants in experimental studies. A number of influential but ethically dubious studies led to the establishment of this rule; such studies included the MIT-Harvard Fernald School radioisotope studies, the Thalidomide tragedy, the Willowbrook hepatitis study, Stanley Milgram's studies of obedience to authority, and the Stanford Prison Experiment.
The ethics code of the American Psychological Association originated in 1951 as "Ethical Standards of Psychologists." This code has guided the formation of licensing laws in most American states. It has changed multiple times over the decades since its adoption, and contains both aspirational principles and binding ethical standards.
The APA's Ethical Principles of Psychologists and Code of Conduct consists of five General Principles, which are meant to guide psychologists to higher ethical practice where a particular standard does not apply. Those principles are:
A. Beneficence and Nonmaleficence - meaning that psychologists must work to benefit those they work with and "do no harm." This includes awareness of indirect benefits and harms their work might have on others due to personal, social, political, or other factors.
B. Fidelity and Responsibility - an awareness of public trust in the profession and adherence to ethical standards and clarification of roles to preserve that trust. This includes managing conflicts of interest, as well as committing some portion of a psychologist's professional time to low-cost or pro bono work.
C. Integrity - upholding honesty and accuracy in all psychological practices, including avoiding misrepresentations and fraud. In situations where psychologists would use deception (i.e., certain research), psychologists must consider the necessity, benefits, and harms, and mitigate any harms where possible.
D. Justice - an understanding that psychology must be for everyone's benefit, and that psychologists take special care to avoid unjust practices as a result of biases or limitations of expertise.
E. Respect for People's Rights and Dignity - the preservation of people's rights when working with psychologists, including confidentiality, privacy, and autonomy. Psychologists should consider a multitude of factors, including a need for special safeguards for protected populations (e.g., minors, incarcerated individuals) and awareness of differences based on numerous factors, including culture, race, age, gender, and socioeconomic status.
In 1989, the APA revised its policies on advertising and referral fees to negotiate the end of an investigation by the Federal Trade Commission. The 1992 incarnation was the first to distinguish between "aspirational" ethical standards and "enforceable" ones. The APA code was further revised in 2010 to prevent the use of the code to justify violating human rights, which was in response to the participation of APA members in interrogations under the administration of United States President George W. Bush.[318]Members of the public have a five-year window to file ethics complaints about APA members with the APA ethics committee; members of the APA have a three-year window.[319]
The Canadian Psychological Association used the APA code until 1986, when it developed its own code drawing from four similar principles: 1) Respect for the Dignity of Persons and Peoples, 2) Responsible Caring, 3) Integrity in Relationships, and 4) Responsibility to Society.[320][321] The European Federation of Psychologists' Associations has adopted a model code using the principles of the Canadian code, while also drawing from the APA code.[322][323]
Universities have ethics committees dedicated to protecting the rights (e.g., voluntary nature of participation in the research, privacy) and well-being (e.g., minimizing distress) of research participants. University ethics committees evaluate proposed research to ensure that researchers protect the rights and well-being of participants; an investigator's research project cannot be conducted unless approved by such an ethics committee.[324]
The field of psychology also identifies certain categories of people that require additional or special protection due to particular vulnerabilities, unequal power dynamics, or diminished capacity for informed consent. This list often includes, but is not limited to, children, incarcerated individuals, pregnant women, human fetuses and neonates, institutionalized persons, those with physical or mental disabilities, and the educationally or economically disadvantaged.[325]
Some of the ethical issues considered most important are the requirement to practice only within one's area of competence, to maintain confidentiality with patients, and to avoid sexual relations with them. Another important principle is informed consent, the idea that a patient or research subject must understand and freely choose a procedure they are undergoing.[319] Some of the most common complaints against clinical psychologists include sexual misconduct[319] and breaches of confidentiality or privacy.[326]
Psychology ethics apply to all types of human contact in a psychologist's professional capacity, including therapy, assessment, teaching, training, work with research subjects, testimony in courts and before government bodies, consulting, and statements to the public or media pertaining to matters of psychology.[317]
Research on other animals is governed by university ethics committees. Research on nonhuman animals cannot proceed without the permission of the ethics committee of the researcher's home institution. Ethical guidelines state that using non-human animals for scientific purposes is only acceptable when the harm (physical or psychological) done to animals is outweighed by the benefits of the research.[327] Psychologists can use certain research techniques on animals that could not be used on humans.
Comparative psychologist Harry Harlow drew moral condemnation for isolation experiments on rhesus macaque monkeys at the University of Wisconsin–Madison in the 1970s.[328] The aim of the research was to produce an animal model of clinical depression. Harlow also devised what he called a "rape rack", to which the female isolates were tied in normal monkey mating posture.[329] In 1974, American literary critic Wayne C. Booth wrote that, "Harry Harlow and his colleagues go on torturing their nonhuman primates decade after decade, invariably proving what we all knew in advance—that social creatures can be destroyed by destroying their social ties." He writes that Harlow made no mention of the criticism of the morality of his work.[330]
Animal research is influential in psychology, though its use is still debated among academics. The testing of animals in research has led to medical breakthroughs in human medicine. Many psychologists argue that animal experimentation is essential for human advancement but must be regulated by the government to ensure ethicality.
In China, judgment defaulter (Chinese: 失信被执行人)[1] or court defaulter, commonly known as laolai (Chinese: 老赖) or untrustworthy person (Chinese: 失信人), is defined as a person who is able to fulfill legal obligations determined by the court, but has refused to do so, or illegally tries to evade enforcement, such as by hiding their assets.[2]
According to the relevant regulations, persons who receive default judgment from the People's Courts are subject to restrictions on "high spending" or "high consumption" unrelated to basic living or business activities. These can include bans on traveling on high-speed trains or on enrolling one's children in private schools.[3][4][5] Jeremy Daum, a senior research fellow at Yale Law School's Paul Tsai China Center, explains that since the majority of court awards are monetary, judgment defaulters should not continue spending large sums while the award remains unpaid; their money should instead be spent to "fix that problem".[6]
According to statistics from the Supreme People's Court, among cases concluded by People's Courts at all levels from 2008 to 2012 in which the defendant had property, more than 70 percent of defendants evaded, avoided, or even violently resisted enforcement, and less than 30 percent voluntarily fulfilled their obligations. It has also been reported that the chronic problems caused by laolai have seriously affected the harmony and stability of society.[7] To this end, at the end of August 2013, the Supreme People's Court issued the Several Provisions on the Publication of Information on the List of Judgment Defaulters.[7][8]
According to the Several Provisions of the Supreme People's Court on the Publication of Information on the List of Judgment Defaulters, adopted at the 1582nd meeting of the Judicial Committee of the Supreme People's Court on July 1, 2013, and amended at the committee's 1707th meeting on January 16, 2017, the people's courts at all levels shall list judgment defaulters and impose credit discipline on them in accordance with the law:[9]
According to the Regulations, the period of inclusion in the list of judgment defaulters is two years. When the judgment defaulter has used violence or threats to obstruct or resist enforcement, when the circumstances are particularly serious, or when the judgment defaulter has committed multiple breaches of trust, the period can be extended by one to three years. In addition, the people's courts at all levels shall not include a judgment defaulter in the list under one of the following circumstances, in accordance with the provisions of Article 1, paragraph 1:[9]
In addition, if the judgment defaulter is a minor, the people's courts at all levels shall not include him/her in the list of judgment defaulters.[9]
According to the Several Provisions of the Supreme People's Court on the Publication of Information on the List of Judgment Defaulters, the recorded and published information on the list of judgment defaulters shall include the following:[9]
On October 24, 2013, the information publication and query platform for the list of judgment defaulters of the national courts (now China Defaulter Information Public Notification) was opened to the public. The public can input the name of a judgment defaulter (whether a person or an entity) to inquire about that defaulter's information, and the above information is announced to the public.[10] In addition, local courts can also publish information on the list of judgment defaulters through bulletin boards, newspapers, radio, television, the Internet, and press conferences.[7] In recent years there has also been court publicity of judgment defaulters through cinema screenings,[11] and information on the list is published through Douyin and other social media.[12] In July 2014, the Executive Bureau of the Supreme People's Court and People's Daily Online jointly launched the Ranking of Judgment Defaulters.[13]
According to the Several Provisions of the Supreme People's Court on the Publication of Information on the List of Judgment Defaulters, judgment defaulters will be subject to credit discipline in government procurement, bidding and tendering, administrative approval, government support, financing and credit, market access, qualification recognition, and other areas.[9] The Several Provisions of the Supreme People's Court on Restricting the High Consumption of Judgment Defaulters, adopted at the 1487th session of the Judicial Committee of the Supreme People's Court on May 17, 2010, and amended at the committee's 1657th meeting on July 6, 2015, stipulate that persons (natural persons) included in the list of judgment defaulters shall not engage in the following acts of high consumption, or consumption not essential to life and work:[14]
In addition to the above measures, judgment defaulters included in the list see their housing, bank accounts, pension, and mobile payment accounts (such as Alipay, WeChat Pay, etc.) frozen and seized. A judgment defaulter is not allowed to serve as the legal representative, director, supervisor, or senior manager of any company nationwide, nor to enroll his or her children in private schools; trading in stocks, leaving the country, and taking out loans or applying for credit cards at financial institutions are also restricted. At the same time, vehicles under the defaulter's name are not allowed to drive on the Expressways of the People's Republic of China; once such a vehicle enters or leaves an expressway toll booth, it will be stopped and transferred to the court by the highway enforcement brigade. According to Amendment (IX) to the Criminal Law of the People's Republic of China, which was implemented on November 1, 2015, those who have the ability to perform on people's court judgments and rulings but refuse to do so will be punished for refusal to enforce a judgment. For serious circumstances, the penalty is imprisonment for up to three years, detention, or a fine; if the circumstances are particularly serious, the penalty is imprisonment for more than three years and up to seven years, plus a fine.[16][17]
Since July 2015, Zhima Credit (Sesame Credit), a subsidiary of Ant Group, and the Supreme People's Court have maintained a system connection to update the data of judgment defaulters in real time. Once an Alipay user is included in the list of judgment defaulters, their Sesame Credit score is reduced, and their consumption and shopping at Sesame Credit's merchant partners are also restricted.[18] In addition, some localities cooperate with communication operators to set up a dedicated ringback tone for judgment defaulters, which cannot be canceled without the consent of the court. If a member of the public calls a phone number under the name of a judgment defaulter, an alert reports that the owner is listed as a judgment defaulter.[19] In Beijing, people who are included in the list of judgment defaulters are not allowed to participate in the license-plate lottery for small passenger vehicles.[20]
According to a press conference held by the Supreme People's Court on July 10, 2018, as of July 2018 there were 7.89 million published cases of judgment default in mainland China, involving 4.4 million judgment defaulters. In terms of punishment, 12.22 million people had been restricted from purchasing air tickets, 4.58 million people had been restricted from purchasing tickets for motor trains and high-speed trains, and 280,000 people had been restricted from serving as legal representatives and executives of enterprises. Nationwide, 2.8 million judgment defaulters fulfilled their obligations under the pressure of credit discipline.[21]
Prominent persons have been included in the list of judgment defaulters.
Government by algorithm[1] (also known as algorithmic regulation,[2] regulation by algorithms, algorithmic governance,[3][4] algocratic governance, algorithmic legal order, or algocracy[5]) is an alternative form of government or social ordering in which computer algorithms are applied to regulations, law enforcement, and generally any aspect of everyday life, such as transportation or land registration.[6][7][8][9][10] The term "government by algorithm" appeared in academic literature as an alternative for "algorithmic governance" in 2013.[11] A related term, algorithmic regulation, is defined as setting the standard, then monitoring and modifying behaviour by means of computational algorithms; automation of the judiciary is within its scope.[12] In the context of blockchain, it is also known as blockchain governance.[13]
Government by algorithm raises new challenges that are not captured in the e-government literature and the practice of public administration.[14] Some sources equate cyberocracy, which is a hypothetical form of government that rules by the effective use of information,[15][16][17] with algorithmic governance, although algorithms are not the only means of processing information.[18][19] Nello Cristianini and Teresa Scantamburlo argued that the combination of a human society and certain regulation algorithms (such as reputation-based scoring) forms a social machine.[20]
In 1962, the director of the Institute for Information Transmission Problems of the Russian Academy of Sciences in Moscow (later Kharkevich Institute),[21] Alexander Kharkevich, published an article in the journal Communist about a computer network for processing information and control of the economy.[22][23] In fact, he proposed to make a network like the modern Internet for the needs of algorithmic governance (Project OGAS). This created a serious concern among CIA analysts.[24] In particular, Arthur M. Schlesinger Jr. warned that "by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employing self-teaching computers".[24]
Between 1971 and 1973, the Chilean government carried out Project Cybersyn during the presidency of Salvador Allende. This project was aimed at constructing a distributed decision support system to improve the management of the national economy.[25][2] Elements of the project were used in 1972 to successfully overcome the traffic collapse caused by a CIA-sponsored strike of forty thousand truck drivers.[26]
Also in the 1960s and 1970s, Herbert A. Simon championed expert systems as tools for rationalization and evaluation of administrative behavior.[27] The automation of rule-based processes was an ambition of tax agencies over many decades, resulting in varying success.[28] Early work from this period includes Thorne McCarty's influential TAXMAN project[29] in the US and Ronald Stamper's LEGOL project[30] in the UK. In 1993, the computer scientist Paul Cockshott from the University of Glasgow and the economist Allin Cottrell from Wake Forest University published the book Towards a New Socialism, where they claim to demonstrate the possibility of a democratically planned economy built on modern computer technology.[31] The Honourable Justice Michael Kirby published a paper in 1998, in which he expressed optimism that the then-available computer technologies, such as legal expert systems, could evolve into computer systems that would strongly affect the practice of courts.[32] In 2006, attorney Lawrence Lessig, known for the slogan "Code is law", wrote:
[T]he invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible[33]
Since the 2000s, algorithms have been designed and used to automatically analyze surveillance videos.[34]
In his 2006 book Virtual Migration, A. Aneesh developed the concept of algocracy, in which information technologies constrain human participation in public decision making.[35][36] Aneesh differentiated algocratic systems from bureaucratic systems (legal-rational regulation) as well as market-based systems (price-based regulation).[37]
In 2013, the term algorithmic regulation was coined by Tim O'Reilly, founder and CEO of O'Reilly Media Inc.:
Sometimes the "rules" aren't really even rules. Gordon Bruce, the former CIO of the city of Honolulu, explained to me that when he entered government from the private sector and tried to make changes, he was told, "That's against the law." His reply was "OK. Show me the law." "Well, it isn't really a law. It's a regulation." "OK. Show me the regulation." "Well, it isn't really a regulation. It's a policy that was put in place by Mr. Somebody twenty years ago." "Great. We can change that!" [...] Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws. [...] It's time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.[38]
In 2017, Ukraine's Ministry of Justice ran experimental government auctions using blockchain technology to ensure transparency and hinder corruption in governmental transactions.[39] "Government by Algorithm?" was the central theme introduced at the Data for Policy 2017 conference held on 6–7 September 2017 in London.[40]
A smart city is an urban area where collected surveillance data is used to improve various operations. Increases in computational power allow more automated decision making and the replacement of public agencies by algorithmic governance.[41] In particular, the combined use of artificial intelligence and blockchains for IoT may lead to the creation of sustainable smart city ecosystems.[42] Intelligent street lighting in Glasgow is an example of a successful government application of AI algorithms.[43] A study of smart city initiatives in the US shows that they require the public sector as the main organizer and coordinator, the private sector as the technology and infrastructure provider, and universities as expertise contributors.[44]
The cryptocurrency millionaire Jeffrey Berns proposed the operation of local governments in Nevada by tech firms in 2021.[45] Berns bought 67,000 acres (271 km2) in Nevada's rural Storey County (population 4,104) for $170,000,000 (£121,000,000) in 2018 in order to develop a smart city with more than 36,000 residents that could generate an annual output of $4,600,000,000.[45] Cryptocurrency would be allowed for payments.[45] The Blockchains, Inc. "Innovation Zone" was canceled in September 2021 after it failed to secure enough water[46] for the planned 36,000 residents through water imports from a site located 100 miles away in neighboring Washoe County.[47] A similar water pipeline proposed in 2007 was estimated to cost $100 million and would have taken about 10 years to develop.[47] With additional water rights purchased from the Tahoe Reno Industrial General Improvement District, the "Innovation Zone" would have acquired enough water for about 15,400 homes, meaning that it would barely have covered its planned 15,000 dwelling units, leaving nothing for the rest of the projected city and its 22 million square feet of industrial development.[47]
In Saudi Arabia, the planners of The Line assert that it will be monitored by AI to improve life by using data and predictive modeling.[48]
Tim O'Reilly suggested that data sources and reputation systems combined in algorithmic regulation can outperform traditional regulations.[38] For instance, once taxi drivers are rated by passengers, the quality of their services will improve automatically and "drivers who provide poor service are eliminated".[38] O'Reilly's suggestion is based on the control-theoretic concept of the feedback loop: improvements and deteriorations of reputation enforce desired behavior.[20] The usage of feedback loops for the management of social systems had already been suggested in management cybernetics by Stafford Beer.[50]
These connections are explored by Nello Cristianini and Teresa Scantamburlo, where the reputation-credit scoring system is modeled as an incentive given to the citizens and computed by a social machine, so that rational agents would be motivated to increase their score by adapting their behaviour. Several ethical aspects of that technology are still being discussed.[20]
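A minimal sketch of the feedback-loop idea, not modeled on any real platform, is given below in Python: ratings move a running reputation score, and a score below an assumed threshold triggers removal, which is the pressure that is supposed to shape behavior. The update rule, threshold, and ratings are all invented for illustration.

```python
# Sketch of reputation-based regulation as a feedback loop: each new
# rating nudges the score, and a low score removes the agent
# (all parameters and ratings are hypothetical).
def update_score(score, rating, learning_rate=0.2):
    """Move the running reputation score toward the latest rating."""
    return score + learning_rate * (rating - score)

THRESHOLD = 3.0  # assumed cutoff below which a driver is "eliminated"

score = 4.0                      # hypothetical starting reputation
for rating in [5, 2, 1, 2, 1]:   # successive passenger ratings (invented)
    score = update_score(score, rating)
    if score < THRESHOLD:
        print(f"score {score:.2f}: below threshold, removed from platform")
        break
    print(f"score {score:.2f}: still active")
```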
China's Social Credit System was said to be a mass surveillance effort with a centralized numerical score for each citizen given for their actions, though newer reports say that this is a widespread misconception.[51][52][53]
Smart contracts, cryptocurrencies, and decentralized autonomous organizations are mentioned as means to replace traditional ways of governance.[54][55][10] Cryptocurrencies are currencies enabled by algorithms without a governmental central bank.[56] Central bank digital currency often employs similar technology, but is differentiated by the fact that it is issued by a central bank; it is soon to be employed by major unions and governments such as the European Union and China. Smart contracts are self-executing contracts whose objective is to reduce the need for trusted governmental intermediaries, arbitration, and enforcement costs.[57][58] A decentralized autonomous organization is an organization represented by smart contracts that is transparent, controlled by shareholders, and not influenced by a central government.[59][60][61] Smart contracts have been discussed for use in applications such as (temporary) employment contracts[62][63] and the automatic transfer of funds and property (i.e. inheritance, upon registration of a death certificate).[64][65][66][67] Some countries, such as Georgia and Sweden, have already launched blockchain programs focusing on property (land titles and real estate ownership).[39][68][69][70] Ukraine is also looking at other areas, such as state registers.[39]
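Production smart contracts are deployed on a blockchain (for example, written in Solidity on Ethereum), but the core idea of a self-executing agreement can be sketched in plain Python. Everything below, including the escrow terms, party names, and release condition, is a hypothetical illustration of the concept rather than an implementation of any real contract platform.

```python
# Conceptual sketch of a self-executing escrow "contract": funds are
# released automatically once a condition is met, with no human arbiter.
# Real smart contracts run on a blockchain, where code and state are
# replicated and tamper-resistant; this only illustrates the idea.

class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.delivered = False
        self.settled = False

    def deposit(self):
        self.funded = True
        self._execute()

    def confirm_delivery(self):
        # In a real deployment this signal might come from an oracle
        # (e.g., a shipping registry) rather than a trusted official.
        self.delivered = True
        self._execute()

    def _execute(self):
        # Self-execution: payout happens as soon as the conditions hold.
        if self.funded and self.delivered and not self.settled:
            self.settled = True
            print(f"Released {self.amount} to {self.seller}")

contract = EscrowContract("alice", "bob", 100)
contract.deposit()
contract.confirm_delivery()  # -> Released 100 to bob
```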
According to a Stanford University study, 45% of the studied US federal agencies had experimented with AI and related machine learning (ML) tools by 2020.[1] The agencies reported the number of artificial intelligence applications in use,[1] 53% of which were produced by in-house experts.[1] Commercial providers of the remaining applications include Palantir Technologies.[71]
In 2012, NOPD started a collaboration with Palantir Technologies in the field of predictive policing.[72] Besides Palantir's Gotham software, other numerical analysis software used by police agencies (such as the NCRIC) includes SAS.[73]
In the fight against money laundering, FinCEN has employed the FinCEN Artificial Intelligence System (FAIS) since 1995.[74][75]
National health administration entities and organisations such as AHIMA (American Health Information Management Association) hold medical records. Medical records serve as the central repository for planning patient care and documenting communication between the patient and the health care providers and professionals contributing to the patient's care. In the EU, work is ongoing on a European Health Data Space which supports the use of health data.[76]
The US Department of Homeland Security has employed the software ATLAS, which runs on Amazon Cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review by USCIS officers regarding denaturalization.[77][78] They were flagged due to potential fraud, public safety, and national security issues. Some of the scanned data came from the Terrorist Screening Database and the National Crime Information Center.
NarxCare is a US software platform[79] that combines data from the prescription registries of various U.S. states[80][81] and uses machine learning to generate various three-digit "risk scores" for prescriptions of medications and an overall "Overdose Risk Score", collectively referred to as Narx Scores,[82] in a process that potentially includes EMS and criminal justice data[79] as well as court records.[83]
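The published details of how Narx Scores are computed are limited and the model is proprietary; the following is only a generic sketch of how prescription-registry features might be combined into a three-digit score. The features, weights, and scaling are entirely invented for illustration and do not reflect the actual methodology.

```python
# Hypothetical sketch: combining prescription-registry features into a
# three-digit risk score in the range 000-999. The features, weights,
# and saturation points below are invented for illustration; they do
# NOT reflect the actual (proprietary) Narx Score methodology.

def risk_score(num_prescribers, num_pharmacies, overlapping_days):
    weights = (0.5, 0.3, 0.2)          # assumed feature weights
    caps = (10, 10, 90)                # assumed saturation points
    features = (num_prescribers, num_pharmacies, overlapping_days)
    normalized = [min(f, c) / c for f, c in zip(features, caps)]
    raw = sum(w * x for w, x in zip(weights, normalized))
    return round(raw * 999)            # scale to a three-digit score

print(f"{risk_score(4, 3, 30):03d}")   # -> 356
```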
In Estonia, artificial intelligence is used in its e-government to make it more automated and seamless. A virtual assistant will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, and unemployment). One example is the automated registering of babies when they are born.[84] Estonia's X-Road system will also be rebuilt to include even more privacy control and accountability into the way the government uses citizens' data.[85]
In Costa Rica, the possible digitalization of public procurement activities (i.e. tenders for public works) has been investigated. The paper discussing this possibility mentions that the use of ICT in procurement has several benefits such as increasing transparency, facilitating digital access to public tenders, reducing direct interaction between procurement officials and companies at moments of high integrity risk, increasing outreach and competition, and easier detection of irregularities.[86]
Besides using e-tenders for regular public works (construction of buildings and roads), e-tenders can also be used for reforestation projects and other carbon sink restoration projects.[87] Carbon sink restoration projects may be part of the nationally determined contributions plans to reach the national Paris Agreement goals.
Government procurement audit software can also be used.[88][89] In some countries, audits are performed after subsidies have been received.
Some government agencies provide track and trace systems for services they offer. An example is track and trace for applications done by citizens (i.e. driving license procurement).[90]
Some government services use issue tracking systems to keep track of ongoing issues.[91][92][93][94]
Judges' decisions in Australia are supported by the "Split Up" software in cases of determining the percentage of a split after a divorce.[95] COMPAS software is used in the USA to assess the risk of recidivism in courts.[96][97] According to a statement by the Beijing Internet Court, China is the first country to create an internet court or cyber court.[98][99][100] The Chinese AI judge is a virtual recreation of an actual female judge. She "will help the court's judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work".[98] Also, Estonia plans to employ artificial intelligence to decide small-claim cases of less than €7,000.[101]
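COMPAS itself is proprietary, but risk-assessment tools of this kind are commonly built as statistical classifiers. Below is a generic logistic-regression sketch; the features and coefficients are invented for illustration and have no relation to COMPAS's actual model.

```python
import math

# Generic sketch of a recidivism risk classifier. Coefficients are
# hypothetical; real tools are trained on historical case data, and
# their validity and fairness are heavily debated.

COEFFS = {"prior_offenses": 0.35, "age": -0.04, "months_since_release": -0.02}
INTERCEPT = -0.5

def recidivism_probability(features):
    z = INTERCEPT + sum(COEFFS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))      # logistic link

p = recidivism_probability({"prior_offenses": 3, "age": 25,
                            "months_since_release": 6})
print(f"estimated risk: {p:.2f}")      # -> estimated risk: 0.36
```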
Lawbots can perform tasks that are typically done by paralegals or young associates at law firms. One such technology used by US law firms to assist in legal research is from ROSS Intelligence,[102] and others vary in sophistication and dependence on scripted algorithms.[103] Another legal technology chatbot application is DoNotPay.
Due to the COVID-19 pandemic in 2020, in-person final exams were impossible for thousands of students.[104] The public high school Westminster High employed algorithms to assign grades. The UK's Department for Education also employed a statistical algorithm to assign final grades in A-levels, due to the pandemic.[105]
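Press accounts broadly described the UK approach as mapping each school's teacher-provided rank order of students onto that school's historical grade distribution. The simplified sketch below illustrates that idea only; the distributions and names are hypothetical, and the real 2020 process involved many more rules.

```python
# Simplified sketch of statistical grade standardization: a school's
# teacher-provided ranking of students is mapped onto the grade
# distribution that school achieved in previous years. Values here are
# hypothetical; the real 2020 process had many additional adjustments.

def assign_grades(ranked_students, historical_distribution):
    """historical_distribution: list of (grade, fraction) summing to 1.0,
    best grades first; ranked_students: best-ranked first."""
    n = len(ranked_students)
    grades, i = {}, 0
    for grade, fraction in historical_distribution:
        quota = round(fraction * n)
        for student in ranked_students[i:i + quota]:
            grades[student] = grade
        i += quota
    for student in ranked_students[i:]:   # remainder from rounding
        grades[student] = historical_distribution[-1][0]
    return grades

hist = [("A", 0.25), ("B", 0.5), ("C", 0.25)]
print(assign_grades(["s1", "s2", "s3", "s4"], hist))
# {'s1': 'A', 's2': 'B', 's3': 'B', 's4': 'C'}
```

A notable property of this design is that an individual's grade depends on their school's past cohorts, not only on their own work, which was central to the later protests described below.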
Besides their use in grading, AI software systems were also used to help students prepare for college entrance exams.[106]
AI teaching assistants are being developed and used in education (e.g. Georgia Tech's Jill Watson),[107][108] and there is an ongoing debate on the possibility of teachers being entirely replaced by AI systems (e.g. in homeschooling).[109]
In 2018, an activist named Michihito Matsuda ran for mayor in the Tama city area of Tokyo as a human proxy for an artificial intelligence program.[110] While election posters and campaign material used the term robot and displayed stock images of a feminine android, the "AI mayor" was in fact a machine learning algorithm trained using Tama city datasets.[111] The project was backed by high-profile executives Tetsuzo Matsumoto of Softbank and Norio Murakami of Google.[112] Michihito Matsuda came third in the election, being defeated by Hiroyuki Abe.[113] Organisers claimed that the 'AI mayor' was programmed to analyze citizen petitions put forward to the city council in a more 'fair and balanced' way than human politicians.[114]
In 2018, Cesar Hidalgo presented the idea of augmented democracy.[115] In an augmented democracy, legislation is carried out by digital twins of every single person.
In 2019, the AI-powered messenger chatbot SAM participated in the discussions on social media connected to an electoral race in New Zealand.[116] The creator of SAM, Nick Gerritsen, believed SAM would be advanced enough to run as a candidate by late 2020, when New Zealand had its next general election.[117]
In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the 2022 Danish parliamentary election,[118] and was built by the artist collective Computer Lars.[119] Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate.[120] This chatbot engaged in critical discussions on politics with users from around the world.[121]
In 2023, in the Japanese town of Manazuru, a mayoral candidate called "AI Mayer" hoped to become the first AI-powered officeholder in Japan in November 2023. The candidacy was said to be supported by a group led by Michihito Matsuda.[122]
In the 2024 United Kingdom general election, a businessman named Steve Endacott ran for the constituency of Brighton Pavilion as an AI avatar named "AI Steve",[123] saying that constituents could interact with AI Steve to shape policy. Endacott stated that he would only attend Parliament to vote based on policies which had garnered at least 50% support.[124] AI Steve placed last with 179 votes.[125]
In February 2020, China launched a mobile app to deal with the coronavirus outbreak,[127] called "close-contact-detector".[128] Users are asked to enter their name and ID number. The app is able to detect "close contact" using surveillance data (i.e. public transport records, including trains and flights)[128] and therefore a potential risk of infection. Every user can also check the status of three other users. To make this inquiry, users scan a Quick Response (QR) code on their smartphones using apps like Alipay or WeChat.[129] The close contact detector can be accessed via popular mobile apps including Alipay. If a potential risk is detected, the app not only recommends self-quarantine, it also alerts local health officials.[130]
Alipay also has the Alipay Health Code, which is used to keep citizens safe. This system generates a QR code in one of three colors (green, yellow, or red) after users fill in a form on Alipay with personal details. A green code enables the holder to move around unrestricted. A yellow code requires the user to stay at home for seven days, and red means a two-week quarantine. In some cities, such as Hangzhou, it has become nearly impossible to get around without showing one's Alipay code.[131]
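The exact rules behind the Alipay Health Code have not been published; the sketch below only illustrates the general shape of such a rule-based color assignment, with criteria that are invented for illustration.

```python
# Hypothetical rule-based color assignment in the style of the Alipay
# Health Code. The actual criteria are unpublished; these rules are
# invented for illustration only.

def health_code_color(visited_high_risk_area, close_contact, symptoms):
    if symptoms or close_contact:
        return "red"      # two-week quarantine
    if visited_high_risk_area:
        return "yellow"   # stay home for seven days
    return "green"        # unrestricted movement

print(health_code_color(visited_high_risk_area=True,
                        close_contact=False, symptoms=False))  # -> yellow
```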
In Cannes, France, monitoring software has been used on footage shot by CCTV cameras, allowing authorities to monitor compliance with local social distancing and mask wearing rules during the COVID-19 pandemic. The system does not store identifying data, but rather alerts city authorities and police where breaches of the mask-wearing and distancing rules are spotted (allowing fines to be issued where needed). The algorithms used by the monitoring software can be incorporated into existing surveillance systems in public spaces (hospitals, stations, airports, shopping centres, etc.).[132]
Cellphone data is used to locate infected patients in South Korea, Taiwan, Singapore, and other countries.[133][134] In March 2020, the Israeli government enabled security agencies to track the mobile phone data of people supposed to have coronavirus. The measure was taken to enforce quarantine and protect those who might come into contact with infected citizens.[135] Also in March 2020, Deutsche Telekom shared private cellphone data with the federal government agency, the Robert Koch Institute, in order to research and prevent the spread of the virus.[136] Russia deployed facial recognition technology to detect quarantine breakers.[137] The Italian regional health commissioner Giulio Gallera said that "40% of people are continuing to move around anyway", as he had been informed by mobile phone operators.[138] In the USA, Europe, and the UK, Palantir Technologies has been engaged to provide COVID-19 tracking services.[139]
Tsunamis can be detected by tsunami warning systems, which can make use of AI.[140][141] Flooding can also be detected using AI systems.[142] Wildfires can be predicted using AI systems.[143][144] Wildfire detection is possible with AI systems (e.g. through satellite data, aerial imagery, and the GPS positions of personnel), which can help in the evacuation of people during wildfires,[145] investigate how householders responded in wildfires,[146] and spot wildfires in real time using computer vision.[147][148] Earthquake detection systems are improving alongside the development of AI technology, measuring seismic data and implementing complex algorithms to improve detection and prediction rates.[149][150][151] Earthquake monitoring, phase picking, and seismic signal detection have developed through deep-learning AI algorithms, analysis, and computational models.[152] Locust breeding areas can be approximated using machine learning, which could help to stop locust swarms in an early phase.[153]
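A classical baseline for the seismic signal detection mentioned above, against which deep-learning pickers are often compared, is the short-term-average/long-term-average (STA/LTA) trigger: an event is declared when short-term signal energy rises sharply relative to the long-term background. A minimal version follows; the window sizes and threshold are illustrative, as production systems tune them per station.

```python
# Minimal STA/LTA trigger for seismic signal detection: an event is
# declared when the short-term average amplitude rises sharply relative
# to the long-term background. Window lengths and the threshold below
# are illustrative only.

def sta_lta_trigger(samples, sta_len=5, lta_len=50, threshold=4.0):
    triggers = []
    for i in range(lta_len, len(samples)):
        sta = sum(abs(s) for s in samples[i - sta_len:i]) / sta_len
        lta = sum(abs(s) for s in samples[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

quiet = [0.1] * 60
event = [0.1] * 55 + [2.0] * 5       # sudden burst of energy
print(sta_lta_trigger(quiet))         # -> []
print(sta_lta_trigger(event))         # -> [57, 58, 59]
```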
Algorithmic regulation is supposed to be a system of governance where more exact data, collected from citizens via their smart devices and computers, is used to more efficiently organize human life as a collective.[154][155] As Deloitte estimated in 2017, automation of US government work could save 96.7 million federal hours annually, with a potential savings of $3.3 billion; at the high end, this rises to 1.2 billion hours and potential annual savings of $41.1 billion.[156]
There are potential risks associated with the use of algorithms in government.
According to the 2016 book Weapons of Math Destruction, algorithms and big data are suspected to increase inequality due to opacity, scale, and damage.[159]
There is also a serious concern that gaming by the regulated parties might occur: once more transparency is brought into decision making by algorithmic governance, regulated parties might try to manipulate outcomes in their own favor and even use adversarial machine learning.[1][20] According to Harari, the conflict between democracy and dictatorship is seen as a conflict of two different data-processing systems; AI and algorithms may swing the advantage toward the latter by processing enormous amounts of information centrally.[160]
In 2018, the Netherlands employed an algorithmic system, SyRI (Systeem Risico Indicatie), to detect citizens perceived as being at high risk of committing welfare fraud; it quietly flagged thousands of people to investigators.[161] This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).[162]
The contributors of the 2019 documentary iHuman expressed apprehension about "infinitely stable dictatorships" created by government AI.[163]
Due to public criticism, the Australian government announced the suspension of the Robodebt scheme's key functions in 2019, and a review of all debts raised using the programme.[164]
In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm."[105] The protest was successful and the grades were withdrawn.[165]
In 2020, the US government software ATLAS, which runs on Amazon Cloud, sparked uproar from activists and Amazon's own employees.[166]
In 2021, the Eticas Foundation launched a database of governmental algorithms called the Observatory of Algorithms with Social Impact (OASI).[167]
An initial approach towards transparency included the open-sourcing of algorithms.[168] Software code can be looked into and improvements can be proposed through source-code-hosting facilities.
A 2019 poll conducted by IE University's Center for the Governance of Change in Spain found that 25% of citizens from selected European countries were somewhat or totally in favor of letting an artificial intelligence make important decisions about how their country is run.[169]
Researchers have found some evidence that when citizens perceive their political leaders or security providers to be untrustworthy, disappointing, or immoral, they prefer to replace them with artificial agents, which they consider more reliable.[170] The evidence comes from survey experiments with university students of all genders.
A 2021 poll by IE University indicates that 51% of Europeans are in favor of reducing the number of national parliamentarians and reallocating these seats to an algorithm. This proposal has garnered substantial support in Spain (66%), Italy (59%), and Estonia (56%). Conversely, the citizens of Germany, the Netherlands, the United Kingdom, and Sweden largely oppose the idea.[171] The survey results exhibit significant generational differences. Over 60% of Europeans aged 25–34 and 56% of those aged 35–44 support the measure, while a majority of respondents over the age of 55 are against it. International perspectives also vary: 75% of Chinese respondents support the proposal, whereas 60% of Americans are opposed.[171]
The 1970 David Bowie song "Saviour Machine" depicts an algocratic society run by the titular mechanism, which ended famine and war through "logic" but now threatens to cause an apocalypse due to its fear that its subjects have become excessively complacent.[172]
The novels Daemon (2006) and Freedom™ (2010) by Daniel Suarez describe a fictional scenario of global algorithmic regulation.[173] Matthew De Abaitua's If Then imagines an algorithm supposedly based on "fairness" recreating a premodern rural economy.[174]
|
https://en.wikipedia.org/wiki/Government_by_algorithm
|
An honor system, trust system or honesty system is a way of running a variety of endeavors based on trust, honor, and honesty.
The honor system is also a system granting freedom from customary surveillance (as to students or prisoners) with the understanding that those who are so freed will be bound by their honor to observe regulations (e.g. prison farms may be operated under the honor system),[1] and will therefore not abuse the trust placed in them.
The first honor system in America was created at the College of William & Mary in 1779.[2] In some colleges, the honor system is used to administer tests unsupervised. Students are generally asked to sign an honor code statement that says they will not cheat or use unauthorized resources when taking tests. For example, at Vanderbilt University, students taking examinations are required to sign and include the following pledge: "I pledge on my honor that I have neither given nor received unauthorized aid on this examination." Any student caught in violation of the Honor Code is referred to the Honor Council, which investigates and determines the appropriate action, which can range from failing the course to expulsion from the university.[3]
At the University of Virginia, a student taking an examination is also required to sign a pledge not to give or receive aid, and there is a single penalty for transgression of the honor code: dismissal from the university.[4] Texas A&M also has an honor system, which states that Aggies do not lie, cheat, or steal, or tolerate those who do.[5] Any student who does not follow the code is remanded to the Honor Council, which determines the severity of the case, how the student should be punished, and whether expulsion is necessary.[6] The students at the University of North Carolina at Chapel Hill also maintain a student-run honor system. Students maintain the integrity of the university by pledging not to cheat, steal, or lie. Unlike the University of Virginia, the honor system at Chapel Hill allows for different sanctions, ranging from probation to expulsion. A single-sanction honor code exists at the Virginia Military Institute, where a "drum out" ceremony is still carried out upon a cadet's dismissal.[7]
Some private universities are run by or associated with religious organizations, and their honor codes reflect that association. At Brigham Young University, students commit to the Church Educational System Honor Code, which, unlike other honor codes, places restrictions on how students should engage in sexual and romantic relationships and requires students and employees to attend religious services.[8]
Some supermarket chains allow customers to scan their own groceries with handheld barcode readers while placing them in their own carts (see self-checkout). Customers can be randomly audited. While the system gives customers the ability to place groceries in their bags without paying, participating supermarkets have reported that this experimental system has not increased the amount of shoplifting.[9]
In some countries, farmers leave bags of produce beside the road outside their houses with prices affixed. Passers-by pay by leaving cash in a container. In Ireland, New Zealand, Australia and the United Kingdom this is called the honesty box system. In other countries, small unmanned stores are run, where customers are able to enter, obtain what they need, and pay the bill in a secure container.[10]
During the COVID-19 pandemic, as many people received their vaccines, the Centers for Disease Control and Prevention issued guidance that fully vaccinated people no longer had to wear face masks. Many places relied on an honor system, trusting that people who were not vaccinated would continue to wear face masks.[11]
Various public transport systems are ungated and operate on an enforced honour system. Random inspections are made, but there is no systematic means of ensuring that everyone has paid. If a revenue protection inspector finds that a passenger lacks the proper ticket, the passenger receives a penalty fare.[12]
|
https://en.wikipedia.org/wiki/Honor_system
|
Karma (/ˈkɑːrmə/, from Sanskrit: कर्म, IPA: [ˈkɐɾmɐ]; Pali: kamma) is an ancient Indian concept that refers to an action, work, or deed, and its effect or consequences.[1] In Indian religions, the term more specifically refers to a principle of cause and effect, often descriptively called the principle of karma, wherein individuals' intent and actions (cause) influence their future (effect):[2] Good intent and good deeds contribute to good karma and happier rebirths, while bad intent and bad deeds contribute to bad karma and worse rebirths. In some scriptures, however, there is no link between rebirth and karma.[3][4]
In Hinduism, karma is traditionally classified into four types: Sanchita karma (accumulated karma from past actions across lifetimes), Prārabdha karma (the portion of Sanchita karma that is currently bearing fruit and determines the circumstances of the present life), Āgāmi karma (future karma generated by present actions), and Kriyamāṇa karma (immediate karma created by current actions, which may yield results in the present or future).[5]
Karma is often misunderstood as fate, destiny, or predetermination.[6] Fate, destiny or predetermination has specific terminology in Sanskrit and is called Prarabdha.
The concept of karma is closely associated with the idea of rebirth in many schools of Indian religions (particularly in Hinduism, Buddhism, Jainism, and Sikhism),[7] as well as Taoism.[8] In these schools, karma in the present affects one's future in the current life, as well as the nature and quality of future lives—one's saṃsāra.[9][10]
Many New Agers believe in karma, treating it as a law of cause and effect that assures cosmic balance, although in some cases they stress that it is not a system that enforces punishment for past actions.[11]
The term karma (Sanskrit: कर्म; Pali: kamma) refers to both the executed 'deed, work, action, act' and the 'object, intent'.[3]
Wilhelm Halbfass (2000) explains karma (karman) by contrasting it with the Sanskrit word kriya:[3] whereas kriya is the activity along with the steps and effort in action, karma is (1) the executed action as a consequence of that activity, as well as (2) the intention of the actor behind an executed action or a planned action (described by some scholars[12] as metaphysical residue left in the actor). A good action creates good karma, as does good intent. A bad action creates bad karma, as does bad intent.[3]
Difficulty in arriving at a definition of karma arises because of the diversity of views among the schools of Hinduism; some, for example, consider karma and rebirth linked and simultaneously essential, some consider karma but not rebirth to be essential, and a few discuss and conclude karma and rebirth to be flawed fiction.[13] Buddhism and Jainism have their own karma precepts. Thus, karma has not one, but multiple definitions and different meanings.[14] It is a concept whose meaning, importance, and scope varies between the various traditions that originated in India, and various schools in each of these traditions. According to Manu Doshi, all Aryan philosophies accept karma, but Jainism has gone deeper into this subject.[15] Wendy O'Flaherty claims that, furthermore, there is an ongoing debate regarding whether karma is a theory, a model, a paradigm, a metaphor, or a metaphysical stance.[16]
Karma also refers to a conceptual principle that originated in India, often descriptively called the principle of karma, and sometimes the karma theory or the law of karma.[17]
In the context of theory, karma is complex and difficult to define.[16] Different schools of Indology derive different definitions for the concept from ancient Indian texts; their definition is some combination of (1) causality that may be ethical or non-ethical; (2) ethicization, i.e., good or bad actions have consequences; and (3) rebirth.[16][18] Other Indologists include in the definition that which explains the present circumstances of an individual with reference to his or her actions in the past. These actions may be those in a person's current life, or, in some schools of Indian traditions, possibly actions from their past lives; furthermore, the consequences may result in the current life, or a person's future lives.[16][19] The law of karma operates independent of any deity or any process of divine judgment.[20]
A common theme to theories of karma is its principle of causality.[17] This relationship between karma and causality is a central motif in all schools of Hindu, Buddhist, and Jain thought.[21] One of the earliest associations of karma to causality occurs in the Brihadaranyaka Upanishad, verses 4.4.5–6:
Now as a man is like this or like that,
according as he acts and according as he behaves, so will he be;
a man of good acts will become good, a man of bad acts, bad;
he becomes pure by pure deeds, bad by bad deeds;
And here they say that a person consists of desires,
and as is his desire, so is his will;
and as is his will, so is his deed;
and whatever deed he does, that he will reap.
The theory of karma as causation holds that: (1) the executed actions of an individual affect the individual and the life he or she lives, and (2) the intentions of an individual affect the individual and the life he or she lives. Disinterested or unintentional actions do not have the same positive or negative karmic effect as interested and intentional actions. In Buddhism, for example, actions that are performed, or arise, or originate without any bad intent, such as covetousness, are considered non-existent in karmic impact or neutral in influence to the individual.[24]
Another causality characteristic, shared by karmic theories, is that like deeds lead to like effects. Thus, good karma produces good effect on the actor, while bad karma produces bad effect. This effect may be material, moral, or emotional – that is, one's karma affects both one's happiness and unhappiness.[21] The effect of karma need not be immediate; the effect of karma can be later in one's current life, and in some schools it extends to future lives.[25]
The consequence or effects of one's karma can be described in two forms: phala and samskara. A phala (lit. 'fruit' or 'result') is the visible or invisible effect that is typically immediate or within the current life. In contrast, a samskara (Sanskrit: संस्कार) is an invisible effect, produced inside the actor because of the karma, transforming the agent and affecting his or her ability to be happy or unhappy in their current and future lives. The theory of karma is often presented in the context of samskaras.[21][26]
Karl Potter and Harold Coward suggest that the karmic principle can also be understood as a principle of psychology and habit.[17][27][note 2] Karma seeds habits (vāsanā), and habits create the nature of man. Karma also seeds self-perception, and perception influences how one experiences life-events. Both habits and self-perception affect the course of one's life. Breaking bad habits is not easy: it requires conscious karmic effort.[17][29] Thus, psyche and habit, according to Potter and Coward, link karma to causality in ancient Indian literature.[17][27] The idea of karma may be compared to the notion of a person's 'character', as both are an assessment of the person and determined by that person's habitual thinking and acting.[10]
The second theme common to karma theories is ethicization. This begins with the premise that every action has a consequence,[9] which will come to fruition in either this life or a future life; thus, morally good acts will have positive consequences, whereas bad acts will produce negative results. An individual's present situation is thereby explained by reference to actions in his present or in previous lifetimes. Karma is not itself 'reward and punishment', but the law that produces consequence.[30] Wilhelm Halbfass notes that good karma is considered as dharma and leads to punya ('merit'), while bad karma is considered adharma and leads to pāp ('demerit, sin').[31]
Reichenbach (1988) suggests that the theories of karma constitute an ethical theory.[21] This is so because the ancient scholars of India linked intent and actual action to merit, reward, demerit, and punishment. A theory without an ethical premise would be a pure causal relation; the merit or reward or demerit or punishment would be the same regardless of the actor's intention. In ethics, one's intentions, attitudes, and desires matter in the evaluation of one's action. Where the outcome is unintended, the moral responsibility for it is less on the actor, even though causal responsibility may be the same regardless.[21] A karma theory considers not only the action, but also the actor's intentions, attitude, and desires before and during the action. The karma concept thus encourages each person to seek and live a moral life, as well as avoid an immoral life. The meaning and significance of karma is thus as a building-block of an ethical theory.[32]
The third common theme of karma theories is the concept of reincarnation or the cycle of rebirths (saṃsāra).[9][33][34] Rebirth is a fundamental concept of Hinduism, Buddhism, Jainism, and Sikhism.[10] Rebirth, or saṃsāra, is the concept that all life forms go through a cycle of reincarnation, that is, a series of births and rebirths. The rebirths and consequent life may be in a different realm, condition, or form. The karma theories suggest that the realm, condition, and form depend on the quality and quantity of karma.[35] In schools that believe in rebirth, every living being's soul transmigrates (recycles) after death, carrying the seeds of karmic impulses from the life just completed into another life and lifetime of karmas.[9][14] This cycle continues indefinitely, except for those who consciously break this cycle by reaching moksha. Those who break the cycle reach the realm of gods; those who do not continue in the cycle.
The concept has been intensely debated in the ancient literature of India, with different schools of Indian religions considering the relevance of rebirth as either essential, secondary, or unnecessary fiction.[13] Hiriyanna (1949) suggests rebirth to be a necessary corollary of karma;[36] Yamunacharya (1966) asserts that karma is a fact, while reincarnation is a hypothesis;[37] and Creel (1986) suggests that karma is a basic concept and rebirth is a derivative concept.[38]
The theory of 'karma and rebirth' raises numerous questions – such as how, when, and why did the cycle start in the first place, what is the relative karmic merit of one karma versus another and why, and what evidence is there that rebirth actually happens, among others. Various schools of Hinduism realized these difficulties, debated their own formulations – some reaching what they considered internally consistent theories – while other schools modified and de-emphasized it; a few schools in Hinduism, such as Charvakas (or Lokayata), abandoned the theory of 'karma and rebirth' altogether.[3][31][39][40] Schools of Buddhism consider the karma-rebirth cycle as integral to their theories of soteriology.[41][42]
The Vedic Sanskrit word kárman- (nominative kárma) means 'work' or 'deed',[44] often used in the context of Srauta rituals.[45] In the Rigveda, the word occurs some 40 times.[44] In Satapatha Brahmana 1.7.1.5, sacrifice is declared as the "greatest" of works; Satapatha Brahmana 10.1.4.1 associates the potential of becoming immortal (amara) with the karma of the agnicayana sacrifice.[44]
In the early Vedic literature, the concept of karma is also present beyond the realm of rituals or sacrifices. The Vedic language includes terms for sins and vices such as āgas, agha, enas, pāpa/pāpman, duṣkṛta, as well as for virtues and merit like sukṛta and puṇya, along with the neutral term karman.
Whatever good deed man does that is inside the Vedi; and whatever evil he does that is outside the Vedi.
The verse refers to the evaluation of virtuous and sinful actions in the afterlife. Regardless of their application in rituals (whether within or outside the Vedi), the concepts of good and evil here broadly represent merits and sins.
What evil is done here by man, that it (i.e. speech =Brahman) makes manifest. Although he thinks that he does it secretly, as it were, still it makes it manifest. Verily, therefore one should not commit evil.
This is the eternal greatness of the Brahmin. He does not increase by kárman, nor does he become less. His ātman knows the path. Knowing him (the ātman) one is not polluted by evil karman.
The Vedic words for "action" and "merit" in pre-Upaniṣadic texts carry moral significance and are not solely linked to ritual practices. The word karman simply means "action," which can be either positive or negative, and is not always associated with religious ceremonies; its predominant association with ritual in the Brāhmaṇa texts is likely a reflection of their ritualistic nature. In the same vein, sukṛta (and subsequently, puṇya) denotes any form of "merit," whether it be ethical or ritualistic. In contrast, terms such as pāpa and duṣkṛta consistently represent morally wrong actions.[46]
The earliest clear discussion of the karma doctrine is in the Upanishads.[9][44] The doctrine occurs here in the context of a discussion of the fate of the individual after death.[47] For example, causality and ethicization are stated in Bṛhadāraṇyaka Upaniṣad 3.2.13:[48][49]
Truly, one becomes good through good deeds, and evil through evil deeds.
Some authors state that the samsara (transmigration) and karma doctrine may be non-Vedic, and the ideas may have developed in the "shramana" traditions that preceded Buddhism and Jainism.[50] Others state that some of the complex ideas of the ancient emerging theory of karma flowed from Vedic thinkers to Buddhist and Jain thinkers.[16][51] The mutual influences between the traditions are unclear, and the ideas likely co-developed.[52]
Many philosophical debates surrounding the concept are shared by the Hindu, Jain, and Buddhist traditions, and the early developments in each tradition incorporated different novel ideas.[53]For example, Buddhists allowed karma transfer from one person to another and sraddha rites, but had difficulty defending the rationale.[53][54]In contrast, Hindu schools and Jainism would not allow the possibility of karma transfer.[55][56]
The concept of karma in Hinduism developed and evolved over centuries. The earliest Upanishads began with questions about how and why man is born, and what happens after death. As answers to the latter, the early theories in these ancient Sanskrit documents include pancagni vidya (the five fire doctrine), pitryana (the cyclic path of fathers), and devayana (the cycle-transcending path of the gods).[57] Those who perform superficial rituals and seek material gain, claimed these ancient scholars, travel the way of their fathers and recycle back into another life; those who renounce these and go into the forest to pursue spiritual knowledge were claimed to climb into the higher path of the gods. It is these who break the cycle and are not reborn.[58] With the composition of the Epics – the common man's introduction to dharma in Hinduism – the ideas of causality and essential elements of the theory of karma were being recited in folk stories. For example:
As a man himself sows, so he himself reaps; no man inherits the good or evil act of another man. The fruit is of the same quality as the action.
The 6th chapter of the Anushasana Parva (the Teaching Book), the 13th book of the Mahabharata, opens with Yudhishthira asking Bhishma: "Is the course of a person's life already destined, or can human effort shape one's life?"[60] The future, replies Bhishma, is both a function of current human effort derived from free will and past human actions that set the circumstances.[61] Over and over again, the chapters of the Mahabharata recite the key postulates of karma theory. That is: intent and action (karma) have consequences; karma lingers and does not disappear; and all positive or negative experiences in life require effort and intent.[62] For example:
Happiness comes due to good actions, suffering results from evil actions,
by actions, all things are obtained, by inaction, nothing whatsoever is enjoyed.
If one's action bore no fruit, then everything would be of no avail,
if the world worked from fate alone, it would be neutralized.
Over time, various schools of Hinduism developed many different definitions of karma, some making karma appear quite deterministic, while others make room for free will and moral agency.[14] Among the six most studied schools of Hinduism, the theory of karma evolved in different ways, as their respective scholars reasoned and attempted to address the internal inconsistencies, implications and issues of the karma doctrine, as discussed by Professor Wilhelm Halbfass.[3]
The above schools illustrate the diversity of views, but are not exhaustive. Each school has sub-schools in Hinduism, such as that of non-dualism and dualism under Vedanta. Furthermore, there are other schools of Indian philosophy, such as Charvaka (or Lokayata; the materialists), that denied the theory of karma-rebirth, as well as the existence of God; to this non-Vedic school, the properties of things come from the nature of things. Causality emerges from the interaction, actions, and nature of things and people, making determinative principles such as karma or God unnecessary.[70][71]
Karma and karmaphala are fundamental concepts in Buddhism,[72][73] which explain how our intentional actions keep us tied to rebirth in samsara, whereas the Buddhist path, as exemplified in the Noble Eightfold Path, shows us the way out of samsara.[74][75]
The cycle of rebirth is determined by karma, literally 'action'.[76][note 4] Karmaphala (wherein phala means 'fruit, result')[82][83][84] refers to the 'effect' or 'result' of karma.[85][72] The similar term karmavipaka (wherein vipāka means 'ripening') refers to the 'maturation, ripening' of karma.[83][86][87]
In the Buddhist tradition, karma refers to actions driven by intention (cetanā),[88][89][84][note 5] a deed done deliberately through body, speech or mind, which leads to future consequences.[92] The Nibbedhika Sutta, Anguttara Nikaya 6.63, states:
Intention (cetana) I tell you, is kamma. Intending, one does kamma by way of body, speech, & intellect.[93][note 6]
How these intentional actions lead to rebirth, and how the idea of rebirth is to be reconciled with the doctrines of impermanence and no-self,[95][note 7] is a matter of philosophical inquiry in the Buddhist traditions, for which several solutions have been proposed.[76] In early Buddhism, no explicit theory of rebirth and karma is worked out,[79] and "the karma doctrine may have been incidental to early Buddhist soteriology."[80][81] In early Buddhism, rebirth is ascribed to craving or ignorance.[77][78] Unlike that of the Jains, the Buddha's teaching of karma is not strictly deterministic, but incorporated circumstantial factors such as other Niyamas.[96][97][note 8] It is not a rigid and mechanical process, but a flexible, fluid and dynamic process.[98] There is no set linear relationship between a particular action and its results.[97] The karmic effect of a deed is not determined solely by the deed itself, but also by the nature of the person who commits the deed, and by the circumstances in which it is committed.[97][99] Karmaphala is not a "judgement" enforced by a God, Deity or other supernatural being that controls the affairs of the Cosmos. Rather, karmaphala is the outcome of a natural process of cause and effect.[note 9] Within Buddhism, the real importance of the doctrine of karma and its fruits lies in the recognition of the urgency to put a stop to the whole process.[101][102] The Acintita Sutta warns that "the results of karma" is one of the four incomprehensible subjects (or acinteyya),[103][104] subjects that are beyond all conceptualization[103] and cannot be understood with logical thought or reason.[note 10]
Nichiren Buddhism teaches that transformation and change through faith and practice changes adverse karma—negative causes made in the past that result in negative results in the present and future—to positive causes for benefits in the future.[108]
In Jainism, karma conveys a totally different meaning from that commonly understood in Hindu philosophy and western civilization.[109] Jain philosophy is one of the oldest Indian philosophies that completely separates body (matter) from the soul (pure consciousness).[110] In Jainism, karma is referred to as karmic dirt, as it consists of very subtle particles of matter that pervade the entire universe.[111] Karmas are attracted to the karmic field of a soul due to vibrations created by activities of mind, speech, and body, as well as various mental dispositions. Hence the karmas are the subtle matter surrounding the consciousness of a soul. When these two components (consciousness and karma) interact, we experience the life we know at present. Jain texts expound that seven tattvas (truths or fundamentals) constitute reality.[112]
According to Padmanabh Jaini,
This emphasis on reaping the fruits only of one's own karma was not restricted to the Jainas; both Hindus and Buddhist writers have produced doctrinal materials stressing the same point. Each of the latter traditions, however, developed practices in basic contradiction to such belief. In addition toshrardha(the ritual Hindu offerings by the son of deceased), we find among Hindus widespread adherence to the notion of divine intervention in ones fate, while Buddhists eventually came to propound such theories like boon-granting bodhisattvas, transfer of merit and like. Only the Jainas have been absolutely unwilling to allow such ideas to penetrate their community, despite the fact that there must have been tremendous amount of social pressure on them to do so.[113]
The relationship between the soul and karma, states Padmanabh Jaini, can be explained with the analogy of gold. Just as gold is always found mixed with impurities in its original state, Jainism holds that the soul is not pure at its origin, but is always impure and defiled like natural gold. One can exert effort and purify gold; similarly, Jainism states that the defiled soul can be purified by the proper refining methodology.[114] Karma either defiles the soul further, or refines it to a cleaner state, and this affects future rebirths.[115] Karma is thus an efficient cause (nimitta) in Jain philosophy, but not the material cause (upadana). The soul is believed to be the material cause.[116]
The theory of karma in Jainism rests on several key points.
There are eight types of karma which attach a soul to samsara, the cycle of birth and death.[119][120]
In Sikhism, all living beings are described as being under the influence of the three qualities of maya. Always present together in varying mix and degrees, these three qualities of maya bind the soul to the body and to the earth plane. Above these three qualities is the eternal time. Due to the influence of the three modes of maya's nature, jivas (individual beings) perform activities under the control and purview of the eternal time. These activities are called karma, wherein the underlying principle is that karma is the law that brings back the results of actions to the person performing them.
This life is likened to a field in which our karma is the seed. We harvest exactly what we sow; no less, no more. This infallible law of karma holds everyone responsible for what the person is or is going to be. Based on the total sum of past karma, some feel close to the Pure Being in this life and others feel separated. This is the law of karma in Gurbani (Sri Guru Granth Sahib). Like other Indian and oriental schools of thought, the Gurbani also accepts the doctrines of karma and reincarnation as the facts of nature.[121]
David Ownby, a scholar of Chinese history at the University of Montreal,[122] asserts that Falun Gong differs from Buddhism in its definition of the term "karma" in that it is taken not as a process of award and punishment, but as an exclusively negative term. The Chinese term de, or 'virtue', is reserved for what might otherwise be termed 'good karma' in Buddhism. Karma is understood as the source of all suffering – what Buddhism might refer to as 'bad karma'. According to Li Hongzhi, the founder of Falun Gong: "A person has done bad things over his many lifetimes, and for people this results in misfortune, or for cultivators, its karmic obstacles, so there's birth, aging, sickness, and death. This is ordinary karma."[123]
Falun Gong teaches that the spirit is locked in the cycle of rebirth, also known as samsara,[124] due to the accumulation of karma.[125] This is a negative, black substance that accumulates in other dimensions lifetime after lifetime, by doing bad deeds and thinking bad thoughts. Falun Gong states that karma is the reason for suffering, and what ultimately blocks people from the truth of the universe and attaining enlightenment. At the same time, karma is also the cause of one's continued rebirth and suffering.[125] Li says that due to the accumulation of karma, the human spirit upon death will reincarnate over and over again, until the karma is paid off or eliminated through cultivation, or the person is destroyed due to the bad deeds he has done.[125]
Ownby regards the concept of karma as a cornerstone of individual moral behaviour in Falun Gong, and also readily traceable to the Christian doctrine that "one reaps what one sows". Others say Matthew 5:44 means no unbeliever will fully reap what they sow until they are judged by God after death in Hell. Ownby says Falun Gong is differentiated by a "system of transmigration" "in which each organism is the reincarnation of a previous life form, its current form having been determined by karmic calculation of the moral qualities of the previous lives lived." Ownby says the seeming unfairness of manifest inequities can then be explained, at the same time allowing a space for moral behaviour in spite of them.[126] In the same vein as Li's monism, matter and spirit are one, and karma is identified as a black substance which must be purged in the process of cultivation.[123]
According to Li,
Human beings all fell here from the many dimensions of the universe. They no longer met the requirements of the Fa at their given levels in the universe, and thus had to drop down. Just as we have said before, the heavier one's mortal attachments, the further down one drops, with the descent continuing until one arrives at the state of ordinary human beings.[127]
He says that, in the eyes of higher beings, the purpose of human life is not merely to be human, but to awaken quickly on Earth, a "setting of delusion," and return. "That is what they really have in mind; they are opening a door for you. Those who fail to return will have no choice but to reincarnate, with this continuing until they amass a huge amount of karma and are destroyed."[127]
Ownby regards this as the basis for Falun Gong's apparent "opposition to practitioners' taking medicine when ill; they are missing an opportunity to work off karma by allowing an illness to run its course (suffering depletes karma) or to fight the illness through cultivation." Benjamin Penny shares this interpretation. Since Li believes that "karma is the primary factor that causes sickness in people," Penny asks: "if disease comes from karma and karma can be eradicated through cultivation of xinxing, then what good will medicine do?"[128] Li himself states that he is not forbidding practitioners from taking medicine, maintaining that "What I'm doing is telling people the relationship between practicing cultivation and medicine-taking." Li also states that "An everyday person needs to take medicine when he gets sick."[129] Danny Schechter (2001) quotes a Falun Gong student who says "It is always an individual choice whether one should take medicine or not."[130]
Karma is an important concept in Taoism. Every deed is tracked by deities and spirits. Appropriate rewards or retribution follow karma, just like a shadow follows a person.[8]
The karma doctrine of Taoism developed in three stages.[131] In the first stage, causality between actions and consequences was adopted, with supernatural beings keeping track of everyone's karma and assigning fate (ming). In the second phase, the transferability of karma ideas from Chinese Buddhism was expanded, and a transfer or inheritance of karmic fate from ancestors to one's current life was introduced. In the third stage of karma doctrine development, ideas of rebirth based on karma were added. One could be reborn either as another human being or another animal, according to this belief. In the third stage, additional ideas were introduced; for example, rituals, repentance and offerings at Taoist temples were encouraged, as they could alleviate karmic burden.[131][132]
Interpreted as musubi (産霊), a view of karma is recognized in Shinto as a means of enriching, empowering, and affirming life.[133] Musubi has fundamental significance in Shinto, because creative development forms the basis of the Shinto worldview.[134]
Many deities are connected to musubi and have it in their names.
One of the significant controversies with the karma doctrine is whether it always implies destiny, and its implications for free will. This controversy is also referred to as the moral agency problem;[135] the controversy is not unique to the karma doctrine, but is also found in some form in monotheistic religions.[136]
The free will controversy can be outlined in three parts.[135]
The explanations and replies to the above free will problem vary by the specific school of Hinduism, Buddhism and Jainism. The schools of Hinduism, such as Yoga and Advaita Vedanta, that have emphasized current life over the dynamics of karma residue moving across past lives, allow free will.[14] Their argument, as well as that of other schools, is threefold.
Other schools of Hinduism, as well as Buddhism and Jainism, that do consider the cycle of rebirths central to their beliefs, and hold that karma from past lives affects one's present, believe that both free will (cetanā) and karma can co-exist; however, their answers have not persuaded all scholars.[135][139]
Another issue with the theory of karma is that it is psychologically indeterminate, suggests Obeyesekere (1968).[140] That is, if no one can know what their karma was in previous lives, and if the karma from past lives can determine one's future, then the individual is psychologically unclear what, if anything, he or she can do now to shape the future, be more happy, or reduce suffering. If something goes wrong, such as sickness or failure at work, the individual is unclear whether karma from past lives was the cause, or whether the sickness was caused by a curable infection and the failure by something correctable.[140]
This psychological indeterminacy problem is also not unique to the theory of karma; it is found in every religion adopting the premise that God has a plan, or in some way influences human events. As with the karma-and-free-will problem above, schools that insist on primacy of rebirths face the most controversy. Their answers to the psychological indeterminacy issue are the same as those for addressing the free will problem.[139]
Some schools of Indian religions, particularly within Buddhism, allow transfer of karma merit and demerit from one person to another. This transfer is an exchange of non-physical quality just like an exchange of physical goods between two human beings. The practice of karma transfer, or even its possibility, is controversial.[39][141] Karma transfer raises questions similar to those with substitutionary atonement and vicarious punishment. It undermines the ethical foundations, and dissociates the causality and ethicization in the theory of karma from the moral agent. Proponents of some Buddhist schools suggest that the concept of karma merit transfer encourages religious giving, and that such transfers are not a mechanism to transfer bad karma (i.e., demerit) from one person to another.
In Hinduism, Sraddha rites during funerals have been labelled as karma merit transfer ceremonies by a few scholars, a claim disputed by others.[142] Other schools in Hinduism, such as the Yoga and Advaita Vedantic philosophies, and Jainism hold that karma cannot be transferred.[16][18]
There has been an ongoing debate about karma theory and how it answers the problem of evil and the related problem of theodicy. The problem of evil is a significant question debated in monotheistic religions holding two beliefs: that God is omnipotent, omniscient, and omnibenevolent, and that evil and suffering exist in the world.[143]
The problem of evil is then stated in formulations such as, "why does the omnibenevolent, omniscient and omnipotent God allow any evil and suffering to exist in the world?" Sociologist Max Weber extended the problem of evil to Eastern traditions.[144]
The problem of evil, in the context of karma, has been long discussed in Eastern traditions, both in theistic and non-theistic schools; for example, in Uttara Mīmāṃsā Sutras Book 2 Chapter 1;[145][146] the 8th century arguments by Adi Sankara in Brahma Sutra bhasya, where he posits that God cannot reasonably be the cause of the world because there exists moral evil, inequality, cruelty and suffering in the world;[147][148] and the 11th century theodicy discussion by Ramanuja in Sri Bhasya.[149] Epics such as the Mahabharata, for example, suggest three prevailing theories in ancient India as to why good and evil exist – one being that everything is ordained by God, another being karma, and a third citing chance events (yadrccha, यदृच्छा).[150][151] The Mahabharata, which includes the Hindu deity Vishnu in the avatar of Krishna as one of the central characters, debates the nature and existence of suffering from these three perspectives, and includes a theory of suffering as arising from an interplay of chance events (such as floods and other events of nature), circumstances created by past human actions, and the current desires, volitions, dharma, adharma and current actions (purusakara) of people.[150][152][153] However, while karma theory in the Mahabharata presents alternative perspectives on the problem of evil and suffering, it offers no conclusive answer.[150][154]
Other scholars[155] suggest that nontheistic Indian religious traditions do not assume an omnibenevolent creator, and some[156] theistic schools do not define or characterize their God(s) as monotheistic Western religions do, and the deities have colorful, complex personalities; the Indian deities are personal and cosmic facilitators, and in some schools conceptualized like Plato's Demiurge.[149] Therefore, the problem of theodicy in many schools of major Indian religions is not significant, or at least is of a different nature than in Western religions.[157] Many Indian religions place greater emphasis on developing the karma principle for first cause and innate justice with Man as focus, rather than developing religious principles with the nature and powers of God and divine judgment as focus.[158] Some scholars, particularly of the Nyaya school of Hinduism and Sankara in Brahma Sutra bhasya, have posited that the karma doctrine implies the existence of a god who administers and affects the person's environment given that person's karma, but then acknowledge that it makes karma violable, contingent and unable to address the problem of evil.[159] Arthur Herman states that karma-transmigration theory solves all three historical formulations to the problem of evil while acknowledging the theodicy insights of Sankara and Ramanuja.[160]
Some theistic Indian religions, such as Sikhism, suggest evil and suffering are a human phenomenon and arises from the karma of individuals.[161]In other theistic schools such as those in Hinduism, particularly its Nyaya school, karma is combined withdharmaand evil is explained as arising from human actions and intent that is in conflict with dharma.[149]In nontheistic religions such as Buddhism, Jainism and the Mimamsa school of Hinduism, karma theory is used to explain the cause of evil as well as to offer distinct ways to avoid or be unaffected by evil in the world.[147]
Those schools of Hinduism, Buddhism, and Jainism that rely on karma-rebirth theory have been critiqued for their theological explanation of suffering in children by birth, as the result of his or her sins in a past life.[162]Others disagree, and consider the critique as flawed and a misunderstanding of the karma theory.[163]
Western culture, influenced by Christianity,[7]holds a notion similar to karma, as demonstrated in the phrase "what goes around comes around".
Mary Jo Meadow suggests karma is akin to "Christian notions of sin and its effects."[164] She states that the Christian teaching on a Last Judgment according to one's charity is a teaching on karma.[164] Christianity also teaches morals such as one reaps what one sows (Galatians 6:7) and live by the sword, die by the sword (Matthew 26:52).[165] Most scholars, however, consider the concept of Last Judgment as different from karma, with karma as an ongoing process that occurs every day in one's life, while Last Judgment, by contrast, is a one-time review at the end of life.[166]

There is a concept in Judaism called in Hebrew midah k'neged midah, which is often translated as "measure for measure".[167] The concept is used not so much in matters of law, but rather in matters of divine retribution for a person's actions. David Wolpe compared midah k'neged midah to karma.[168]

Carl Jung once opined on unresolved emotions and the synchronicity of karma:
When an inner situation is not made conscious, it appears outside as fate.[169]
Popular methods for negating cognitive dissonance include meditation, metacognition, counselling, psychoanalysis, etc., whose aim is to enhance emotional self-awareness and thus avoid negative karma. This results in better emotional hygiene and reduced karmic impacts.[170] Long-term meditation and metacognition techniques have been associated in scientific studies with lasting neuronal changes within the amygdala and left prefrontal cortex of the human brain.[171] This process of emotional maturation aspires to a goal of individuation or self-actualisation. Such peak experiences are hypothetically devoid of any karma (nirvana or moksha).
The idea of karma was popularized in the Western world through the work of the Theosophical Society. In this conception, karma was a precursor to the Neopagan law of return or Threefold Law, the idea that the beneficial or harmful effects one has on the world will return to oneself. Colloquially this may be summed up as 'what goes around comes around.'

Theosophist I. K. Taimni wrote, "Karma is nothing but the Law of Cause and Effect operating in the realm of human life and bringing about adjustments between an individual and other individuals whom he has affected by his thoughts, emotions and actions."[172] Theosophy also teaches that when humans reincarnate they come back as humans only, not as animals or other organisms.[173]
|
https://en.wikipedia.org/wiki/Karma
|
Online presence management is the process of creating and promoting traffic to a personal or professional brand online. This process combines web design, development, blogging, search engine optimization, pay-per-click marketing, reputation management, directory listings, social media, link sharing, and other avenues to create a long-term positive presence for a person, organization, or product in search engines and on the web in general.

Online presence management is distinct from web presence management in that the former is generally a marketing and messaging discipline while the latter is a governance, risk management, and compliance (GRC) operational and security discipline.

The theory of online presence management considers a website to be insufficient to promote most brands. To maintain a web presence and brand recognition, individuals and companies need to use a variety of digital platforms such as Google Maps, Facebook, Twitter, Instagram,[1] Flickr, YouTube, and Pinterest, as well as cultivating a brand presence on mobile apps and other online databases.
The online presence management process starts by determining goals that will define an online strategy. Once this strategy is put in place, an ongoing and constant process of evaluating and fine-tuning is necessary to drive online presence towards the identified goals.[citation needed]
An online presence management strategy consists of several components. Generally, these will include search engine placement (making sure the brand appears high in search engine results when the end-user has a relevant query), monitoring online discussion around the brand, and analyzing the brand's overall web presence.
An online profile or reputation is a product of multiple activities and platforms. It includes the following:
The online portfolio helps to build the visibility of a brand or individual. It works as a centralized hub for all activities related to the brand and includes the brand's contact information, what the brand is about (history, vision, etc.) and a product showcase. The portfolio comes in different forms, the most common being a website. A website, usually built on the same domain as the brand's name, represents the business or person throughout the web.

Brands and companies prefer to use websites to establish themselves and gain higher brand awareness because it is very important for a company to maintain relevance over time, and promoting a product online makes it much easier to keep up with the times.[2] Businesses need to keep their online visibility high, as well as their performance relative to competitors.[2] A brand's online reputation should also be tracked to see how consumers react and feel towards it.[2]

A blog provides a brand with a way to express itself. It allows the brand to get its voice and opinions heard on any topic it chooses. Blogging can promote a brand through consistent, interesting content generation associated with the brand or the market the brand caters to. Blogs can be created on the brand's own website or on third-party platforms such as LinkedIn, Facebook, Instagram, Quora, WordPress, Blogger.com and Medium. Apart from conventional blogging, social media has enabled microblogging (through services such as Twitter and Tumblr), which is particularly effective in establishing a brand name and building recognition through interaction with the masses. It is also a quick way to respond to brand-related complaints and queries.
Corporate blogging is a powerful tool when attempting to communicate an idea important to the firm's identity. However, there are a few rules to keep in mind when utilizing corporate blogs:
Search engine optimization (SEO) is one of the most popular techniques to build traction and turn a web page into a revenue-generation machine. Search engine optimization allows companies or individuals to:

Search engines use a spider or a crawler to gather listings by automatically "crawling" the web. The spider follows links to web pages, makes copies of the pages, and stores them in the search engine's index. Based on this data, the search engines then index the pages and rank the websites accordingly. Major search engines that index pages using spiders include Google, Yahoo, Bing, AOL, and Lycos.
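To make the crawl-and-index loop concrete, here is a minimal Python sketch. It is illustrative only and relies on the third-party requests and BeautifulSoup libraries; production crawlers additionally honor robots.txt, throttle their requests, and de-duplicate content:

    from collections import deque
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def crawl(seed_url, max_pages=10):
        index = {}                      # url -> stored copy of the page text
        frontier = deque([seed_url])    # links waiting to be followed
        seen = {seed_url}
        while frontier and len(index) < max_pages:
            url = frontier.popleft()
            try:
                html = requests.get(url, timeout=5).text
            except requests.RequestException:
                continue                # skip unreachable pages
            soup = BeautifulSoup(html, "html.parser")
            index[url] = soup.get_text()             # store a copy for the index
            for anchor in soup.find_all("a", href=True):
                link = urljoin(url, anchor["href"])  # follow links to new pages
                if link not in seen:
                    seen.add(link)
                    frontier.append(link)
        return index

Ranking the stored pages, which the search engines perform after indexing, is a separate step not shown here.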
Some methods that help optimize a web page for the search engine include:
Internet advertising is a form of broadcasting and promotion of products, ideas, or services using the internet to attract customers. The idea is similar to that of social media marketing. Internet advertising has overtaken traditional advertising media such as newspapers, magazines, and radio. It targets users interested in relevant keywords and displays a text or image ad next to search results or within social media. When users search for these specific keywords, firms can target their advertisements to specific audiences. The advertisements will most likely appear on social media, but can also show up on websites that customers visit. This increases the firm's online presence and makes its products or services more visible to potential consumers.[2]

Internet advertising has become an integral part of marketing strategies for businesses of all sizes. With the rise of e-commerce, social media, and search engines, companies can reach a much wider audience than before, and the ability to target specific demographics and interests has made online advertising a cost-effective alternative to traditional advertising channels. As a result, more and more businesses are turning to internet advertising to promote their products and services.

Reputation management is a critical part of online presence management. A company's reputation is based on consumer trust and often depends on information found on social networks and the internet. A good reputation has a significant impact on a company's survival, but having no reputation can be as damaging as having a bad one: a company with no reputation does not exist in the eyes of consumers.[4]

The first steps in developing and managing a company's reputation require the business to define its image, identify a target market and develop a core message. Next, the company should plan a detailed media strategy emphasizing issues management, message development and media preparation.[5] Once these aspects are defined, the company can build effective communication strategies on how and where to get these messages across. These strategies should include how to address and involve the customer in all the firm's activities. This will affect reputation because engaged customers can contribute to the long-term relationship of a firm by creating and disseminating information.[6]
Management of a company's online reputation should include maintaining a social platform. Four main objectives for creating a positive reputation on a social web platform are building trust, promoting quality, facilitating member matching and sustaining loyalty.[7]Ideally, this platform should emphasize a company's culture, industry news and valuable company information. Companies should consider what the consumer finds valuable when developing this platform. Finding the right combination of information and how to present it will require research into the target markets.
Managing a firm's reputation requires constant oversight to stay on top of current topics or news, and to update online sites and manage content as needed. It is difficult to control how someone might view a company, product or service, and negative opinions are bound to happen. With the internet and multiple social media outlets, information is everywhere and hard to control. How managers react to and oversee these opinions or reviews is what matters. Digital media offers the possibility to monitor customer opinions almost in real time. It is imperative that businesses constantly track what their customers think of them and work proactively to ensure the conversation remains positive.[4]

Reputation management is the process of tracking actions and opinions, looking for positive and negative reviews that reflect users' opinions about a particular service or product, removing negative opinions (if any) and converting them into positive ones. It is important, however, not to attack or try to obscure negative opinions through devious means, as this is likely to have an overall negative effect on the brand. A better strategy is to respond to complaints with information and an apologetic attitude, and to cultivate positive reviews afterwards. Managers can take some control by planning ahead and developing strategies to communicate valuable information and address any negative reviews. Competence, cooperation and compassion should be the guiding principles when responding to media and other constituents in a crisis.[5] Corporate managers must work together to understand and govern communications provided through social media. Several things managers can do to safeguard and enhance reputation with the use of social media include:

In today's business environment, consumers have virtually unlimited access to information about a company, product or service on social network sites. A company's reputation, how it is perceived, can make or break the business. For these reasons, businesses cannot afford not to monitor, communicate and maintain their online reputation.
Social media marketing uses social media platforms to create and foster communities and relationships among people and businesses. Social media is one of the most prevalent forms of communication and marketing today, and businesses reach massive numbers of potential customers through it. Social media marketing is focused on creating content that attracts attention and encourages readers to share it with their social networks. Social messages are often effective because they usually come from a trusted, third-party source rather than the brand itself. However, a brand having its own specific voice through social media can also be very effective for keeping loyal customers close and attracting new ones.
Understanding what tools are available and how to use them effectively is key to success in social media marketing. Some of these tools include:
It is also important to understand the different effects that social media marketing has on past, current, and future customers. A clean social media platform can increase brand loyalty among customers who already purchase a specific product or brand, and it can also affect a customer's future purchase intentions. Both of these effects have been studied and go hand in hand. Although there is always more research to be done, specific studies have supported that when a business runs a positive social media campaign, it influences potential customers and their prospective purchases. All of these ideas tie back to a company having a strong sense of its online presence and managing it well.[9]
Many of the tools listed above are often found in a social media management system. This is a collection of procedures used to manage workflow in a disparate social media environment. These procedures can be manual or computer-based and enable the manager (or managing team) to listen, aggregate, publish, and manage multiple social media channels from one tool.[10]
Social media management systems effectively promote businesses to prospective clients. A firm can hire a social media manager to oversee this area for its team. Because social networks have changed how customers are reached and how they view products, it is important to manage a firm's social media responsibly. Social media managers can research effective ways to market the brand through platforms like Facebook, Twitter, LinkedIn, Instagram, and TikTok, along with other social media platforms. Companies pay on average $68,000 a year for someone to perform these services, and that number is expected to rise; with the ongoing improvement of and need for social media, it may eventually surpass six figures. All in all, managing a firm's social media can boost its online presence by targeting its marketing to a specific group, improving on existing marketing tactics, and keeping the brand name organized around its prospective goals.[11]
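As a concrete illustration of the "listen, aggregate, publish" workflow that such systems centralize, here is a hypothetical Python sketch. The Channel and SocialMediaManager names are invented for illustration; in a real system each channel would wrap a platform's official API rather than print to the console:

    class Channel:
        # A stand-in for one social media platform connection.
        def __init__(self, name):
            self.name = name

        def publish(self, message):
            print(f"[{self.name}] {message}")   # a real channel would call the platform API

        def listen(self):
            return []   # a real channel would poll mentions and comments

    class SocialMediaManager:
        # One tool that manages several disparate channels.
        def __init__(self):
            self.channels = []

        def add_channel(self, channel):
            self.channels.append(channel)

        def publish_everywhere(self, message):
            for channel in self.channels:       # one action fans out to every channel
                channel.publish(message)

        def aggregate_mentions(self):
            return [m for channel in self.channels for m in channel.listen()]

    manager = SocialMediaManager()
    manager.add_channel(Channel("Facebook"))
    manager.add_channel(Channel("Twitter"))
    manager.publish_everywhere("New product launch!")

The point of the design is that listening and publishing fan out from a single place, which is what allows one manager or team to oversee several networks at once.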
This kind of stage model is widely used by smaller businesses, which follow a sequence of defined steps and stages. These models help a business better represent who it is and what it is trying to sell. A stage model, for example, may consist of stages such as promotional, provisional and processing, each of which plays an important role in the marketing and selling of a product. The promotional stage is where networking and advertising are most prominent; these are essential to grow brand awareness and create sales. The provisional stage consists of adding features to the product's webpage so that consumers can better understand things such as the product's price. The processing stage is the financial part of the model: this is when the product is sold to customers and the order goes through with their payment. Data visualization is used in the Web Presence Pyramid Model because it gives a simplified view of the company's data on product sales rates and web page views.[12]

Using a step-by-step stage model in this way is an effective means for small businesses to develop a strong online presence and showcase their products to potential customers.[citation needed] The promotional, provisional and processing stages each play a crucial role in marketing and selling a product, and together they can help small businesses market their products to a wider audience.[citation needed]
|
https://en.wikipedia.org/wiki/Online_presence_management
|
Reputation capital is the quantitative measure of some entity's reputational value in some context – a community or marketplace.[citation needed] In the world of Web 2.0, what is increasingly valuable is trying to measure the effects of collaboration and contribution to community.[citation needed] Reputation capital is often seen as a form of non-cash remuneration for one's efforts, and generally generates respect within the community or marketplace where the capital is generated.

For a business, reputation capital is the sum of the value of all corporate intangible assets, which include: business processes, patents, trademarks; reputations for ethics and integrity; quality, safety, sustainability, security, and resilience.[3]

Meeting the public's functional and social expectations on the one hand, and managing to build a unique identity on the other, creates trust, and this trust builds the informal framework of a company. This framework provides "return in cooperation" and produces reputation capital. A positive reputation will secure a company or organisation long-term competitive advantages. The higher the reputation capital, the lower the costs of supervising and exercising control.[4]

Reputation capital is a corporate asset that can be managed, accumulated and traded in for trust, legitimisation of a position of power and social recognition, a premium price for goods and services offered, a stronger willingness among shareholders to hold on to shares in times of crisis, or a stronger readiness to invest in the company's stock.[4]
|
https://en.wikipedia.org/wiki/Reputation_capital
|
Zhima Credit (Chinese: 芝麻信用; pinyin: Zhīma Xìnyòng; also known as Sesame Credit) is a private, company-run credit scoring and loyalty program system developed by Ant Group, an affiliate of Alibaba Group. It uses data from Alibaba's services to compile its score. Customers receive a score based on a variety of factors drawn from social media interactions and purchases carried out on Alibaba Group websites or paid for using its affiliate Ant Financial's Alipay mobile wallet. The rewards of having a high score include easier access to loans from Ant Financial and a more trustworthy profile on e-commerce sites within the Alibaba Group.[1][2] It has frequently been confused with the Social Credit System.[3]
China has a much lower rate of credit use than developed markets.[4]: 67 As a result, it lacks the associated credit reports.[4]: 67

Zhima Credit was introduced on 28 January 2015. It was the first credit agency in China to use a score system for individual users, using both online and offline information.[5] It was developed when the People's Bank of China lifted restrictions and let non-bank institutions conduct personal credit information operations.[6]

A higher Zhima Credit score increases the availability of microloans from Alibaba (for example, for the Taobao platform).[4]: 68 It can also have some benefits outside of Alibaba platforms, like potentially waiving deposits for hotel bookings or bicycle rentals from businesses that partner with Alipay.[4]: 68
All Taobao buyers and sellers with a sufficiently high Zhima credit score can vote and express their opinions on proposed changes to Taobao rules.[4]: 68
Baihe.com, a Chinese matchmaking company, uses Zhima Credit data as part of its service.[7]
In 2015, Zhima Credit published information on the methodology behind its currently running beta version.
Zhima Credit's scoring system is roughly modeled after FICO scoring in the United States and Schufa in Germany.[8][5]

The corporate network of Zhima Credit, led by the Alibaba Group, spans insurance, loan, historical payment, dating, shopping and mobility data.[9] It collects data from all of these sources, utilizing its regulatory freedom to bridge objects and social networks, public and private institutions, and the offline and online worlds. The system is powered by "data from more than 300 million real-name registered users and 37 million small businesses that buy and sell on Alibaba Group marketplaces". Due to Zhima Credit's close collaboration with the government, it also has access to public documents, such as official identity and financial records.[5]

Zhima Credit emphasizes strict privacy and data protection, ensured through encryption and segregation.[5] The firm also states that data is only gathered with the knowledge and consent of the user.[5] According to Ant Financial, users' scores can currently only be shared with their authorization or by the users themselves.[5]
Big data and behavioral analytics are building blocks for the system. Data fragments are classified into five categories: credit history, fulfillment capacity, personal characteristics, behavior and preferences, and interpersonal relationships.[5][9]
The specifications of the algorithm that determines the classification, as well as the analytical parameters and indicators, remain confidential.[5] It is unclear whether data is structured to build in tolerances for errors, for example the likelihood of a unit of data being false or coming from an unreliable source.[5][9]
The five categories into which Zhima Credit classifies its data have different weightings attached to them.[5] Based on these weightings, an algorithm determines a citizen's final score, which is ranked among others'.[5] Scores range from 350 (lowest trustworthiness) to 950 (highest trustworthiness).[5] From 600 up, users gain privileges, while lower scores revoke them.[9] According to current plans, the final score and ranking will be publicly available.[9]
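The mechanics described above can be illustrated with a short, hypothetical Python sketch. Only the 350-950 range, the five category names, and the 600-point privilege threshold come from the text; the weights and the mapping below are invented for illustration, since the actual weightings and algorithm are confidential:

    MIN_SCORE, MAX_SCORE = 350, 950     # published score range
    PRIVILEGE_THRESHOLD = 600           # from 600 up, privileges are gained

    # Hypothetical weights over the five data categories (sum to 1).
    WEIGHTS = {
        "credit_history": 0.35,
        "fulfillment_capacity": 0.25,
        "personal_characteristics": 0.15,
        "behavior_and_preferences": 0.15,
        "interpersonal_relationships": 0.10,
    }

    def zhima_style_score(category_ratings):
        # Map per-category ratings in [0, 1] onto the 350-950 scale.
        weighted = sum(WEIGHTS[c] * category_ratings[c] for c in WEIGHTS)
        return round(MIN_SCORE + weighted * (MAX_SCORE - MIN_SCORE))

    ratings = {category: 0.7 for category in WEIGHTS}   # an example user
    score = zhima_style_score(ratings)
    print(score, score >= PRIVILEGE_THRESHOLD)          # 770 True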
Zhima Credit has frequently been mistaken for the Social Credit System.[10][11]: 55

In 2015, the PBOC designated eight private companies to pilot personal credit reporting (zhengxin) mechanisms.[11]: 55 Because the pilot programs were zhengxin mechanisms, they had little connection to the idea of social credit more broadly.[11]: 55 Zhima Credit was one of the pilot zhengxin mechanisms.[11]: 55 It was an opt-in scoring initiative proposed to assess users' creditworthiness even if those users lacked formal credit history.[11]: 55 It did not include standard industry metrics like income or debts; instead, it assessed factors like user spending ability and whether users showed up for travel bookings.[11]: 55
Following the release of Zhima Credit, there was significant media speculation that it might turn into a national social credit system by 2020.[11]: 55This did not occur.[11]: 55Zhima Credit and the other pilot initiatives were never linked to the broader financial system.[11]: 55Zhima Credit did not prove to be an effective credit evaluation mechanism because the data showed no statistically significant link between its metrics and a user's ability to repay loans.[11]: 55
In one interview, Alibaba's technology director suggested that people who played too many video games might be considered less trustworthy.[11]: 161Various news outlets around the world incorrectly suggested that people could lose social credit for playing too many video games.[11]: 161No video game playing metric was ever implemented.[11]: 161
Ultimately, Zhima Credit became a loyalty program that rewarded users for using Alibaba services and shopping platforms.[11]: 55 The PBOC decided not to extend the credit licenses of the eight private pilot programs from 2015.[11]: 55
|
https://en.wikipedia.org/wiki/Sesame_Credit
|
The sharing economy is a socio-economic system whereby consumers share in the creation, production, distribution, trade and consumption of goods and services. These systems take a variety of forms, often leveraging information technology and the Internet, particularly digital platforms, to facilitate the distribution, sharing and reuse of excess capacity in goods and services.[1][2][3][4]

It can be facilitated by nonprofit organizations, usually based on the concept of book-lending libraries, in which goods and services are provided for free (or sometimes for a modest subscription), or by commercial entities, in which a company provides a service to customers for profit.

It relies on users' willingness to share and on overcoming stranger danger.[5]

It can also provide environmental benefits; for example, it can lower the GHG emissions of products by 77%-85%.[6]
Dariusz Jemielniak and Aleksandra Przegalinska credit Marcus Felson and Joe L. Spaeth's academic article "Community Structure and Collaborative Consumption", published in 1978,[7] with coining the term economy of sharing.[8]: 6

The term "sharing economy" began to appear around the time of the Great Recession, alongside enabling social technologies and an increasing sense of urgency around global population growth and resource depletion. Lawrence Lessig was possibly the first to use the term in 2008, though others claim the origin of the term is unknown.[9][10]

There is conceptual and semantic confusion caused by the many facets of Internet-based sharing, leading to discussions regarding the boundaries and scope of the sharing economy[11] and regarding its definition.[12][8]: 7, 27 Arun Sundararajan noted in 2016 that he is "unaware of any consensus on a definition of the sharing economy".[13]: 27–28 As of 2015, according to a Pew Research Center survey, only 27% of Americans had heard of the term "sharing economy".[14]
The term "sharing economy" is often used ambiguously and can imply different characteristics.[15]Survey respondents who had heard of the term had divergent views on what it meant, with many thinking it concerned "sharing" in the traditional sense of the term.[14]To this end, the terms “sharing economy” and “collaborative consumption” have often been used interchangeably. Collaborative consumption refers to the activities and behaviors that drive the sharing economy, making the two concepts closely interrelated. A definition published in the Journal of Consumer Behavior in 2015 emphasizes these synergies: “Collaborative consumption takes place in organized systems or networks, in which participants conduct sharing activities in the form of renting, lending, trading, bartering, and swapping of goods, services, transportation solutions, space, or money.”[16]
The sharing economy is sometimes understood exclusively as a peer-to-peer phenomenon,[17] while at other times it has been framed as a business-to-customer phenomenon.[18] Additionally, the sharing economy can be understood to encompass transactions with a permanent transfer of ownership of a resource, such as a sale,[19] while at other times transactions with a transfer of ownership are considered beyond the boundaries of the sharing economy.[20] One definition of the sharing economy, developed to integrate existing understandings and definitions based on a systematic review, is:
"the sharing economy is an IT-facilitated peer-to-peer model for commercial or non-commercial sharing of underutilized goods and service capacity through an intermediary without transfer of ownership"[15]
The phenomenon has been defined from a legal perspective as "a for-profit, triangular legal structure where two parties (Providers and Users) enter into binding contracts for the provision of goods (partial transfer of the property bundle of rights) or services (ad hoc or casual services) in exchange for monetary payment through an online platform operated by a third party (Platform Operator) with an active role in the definition and development of the legal conditions upon which the goods and services are provided."[21]Under this definition, the "Sharing Economy" is a triangular legal structure with three different legal actors: "1) a Platform Operator which using technology provides aggregation and interactivity to create a legal environment by setting the terms and conditions for all the actors; (2) a User who consumes the good or service on the terms and conditions set by the Platform Operator; and (3) a Provider who provides a good or service also abiding by the Platform Operator's terms and conditions."[21]
While the term "sharing economy" is the term most often used, the sharing economy is also referred to as the access economy, crowd-based capitalism, collaborative economy,community-based economy,gig economy, peer economy, peer-to-peer (P2P) economy,platform economy, renting economy and on-demand economy, though at times some of those terms have been defined as separate if related topics.[13]: 27–28[22][23]
The notion of "sharing economy" has often been considered anoxymoron, and amisnomerfor actual commercial exchanges.[24]Arnould and Rose proposed to replace the misleading term "sharing" with "mutuality".[25]In an article inHarvard Business Review, authors Giana M. Eckhardt and Fleura Bardhi argue that "sharing economy" is a misnomer, and that the correct term for this activity is access economy. The authors say, "When 'sharing' is market-mediated—when a company is an intermediary between consumers who don't know each other—it is no longer sharing at all. Rather, consumers are paying to access someone else's goods or services."[26]The article states that companies (such asUber) that understand this, and whose marketing highlights the financial benefits to participants, are successful, while companies (such asLyft) whose marketing highlights the social benefits of the service are less successful.[26]According toGeorge Ritzer, this trend towards increased consumer input in commercial exchanges refers to the notion ofprosumption, which, as such, is not new.[27]Jemielniak and Przegalinska note that the term sharing economy is often used to discuss aspects of the society that do not predominantly relate to the economy, and propose a broader termcollaborative societyfor such phenomena.[8]: 11
The term "platform capitalism" has been proposed by some scholars as more correct than "sharing economy" in discussion of activities of for-profit companies like Uber and Airbnb in the economy sector.[8]: 30Companies that try to focus on fairness and sharing, instead of justprofit motive, are much less common, and have been contrastingly described asplatform cooperatives(or cooperativist platforms vs capitalist platforms). In turn, projects likeWikipedia, which rely on unpaid labor of volunteers, can be classified ascommons-based peer-productioninitiatives. A related dimension is concerned with whether users are focused on non-profit sharing ormaximizing their own profit.[8]: 31, 36Sharing is a model that is adapting to the abundance of resource, whereas for-profit platform capitalism is a model that persists in areas where there is still ascarcityof resources.[8]: 38
Yochai Benkler, one of the earliest proponents of open source software, studied the tragedy of the commons, the idea that when people all act solely in their own self-interest, they deplete the shared resources they need for their own quality of life. He posited that network technology could mitigate this issue through what he called "commons-based peer production", a concept first articulated in 2002.[28] Benkler then extended that analysis to "shareable goods" in Sharing Nicely: On Shareable Goods and the Emergence of Sharing as a Modality of Economic Production, written in 2004.[29]

A wide range of actors participate in the sharing economy, including individual users, for-profit enterprises, social enterprises or cooperatives, digital platform companies, local communities, non-profit enterprises and the public sector or government.[30] Individual users are the actors engaged in sharing goods and resources through "peer-to-peer (P2P) or business-to-peer (B2P) transactions".[30] For-profit enterprises are profit-seeking actors that buy, sell, lend, rent or trade, using digital platforms as a means to collaborate with other actors.[30] Social enterprises, sometimes referred to as cooperatives, are mainly "motivated by social or ecological reasons" and seek to empower actors through genuine sharing.[30] Digital platforms are technology firms that facilitate the relationship between transacting parties and make profits by charging commissions.[31] Local communities are players at the local level with varied structures and sharing models, where most activities are non-monetized and often carried out to further develop the community. Non-profit enterprises have the purpose of "advancing a mission or purpose" for a greater cause, and their primary motivation is the genuine sharing of resources. In addition, the public sector or the government can participate in the sharing economy by "using public infrastructures to support or forge partnerships with other actors and to promote innovative forms of sharing".[30]
Geographer Lizzie Richardson describes the sharing economy as a paradox, since it is framed as both capitalist and an alternative to capitalism.[32] A distinction can be made between free sharing, such as genuine sharing, and for-profit sharing, often associated with companies such as Uber, Airbnb, and TaskRabbit.[33][34][8]: 22–24 Commercial co-options of the 'sharing economy' encompass a wide range of structures, including mostly for-profit and, to a lesser extent, co-operative structures.[35]

The usage of the term sharing by for-profit companies has been described as "abuse" and "misuse" of the term, or more precisely, its commodification.[8]: 21, 24 In commercial applications, the sharing economy can be considered a marketing strategy more than an actual 'sharing economy' ethos;[8]: 8, 24 for example, Airbnb has sometimes been described as a platform for individuals to 'share' extra space in their homes, but in some cases the space is rented, not shared, and Airbnb listings are often owned by property management corporations.[36][34] This has led to a number of legal challenges, with some jurisdictions ruling, for example, that ride sharing through for-profit services like Uber de facto makes the drivers indistinguishable from regular employees of ride-sharing companies.[8]: 9
According to a report by the United States Department of Commerce in June 2016, quantitative research on the size and growth of the sharing economy remains sparse. Growth estimates can be challenging to evaluate due to different and sometimes unspecified definitions of what sort of activity counts as a sharing economy transaction. The report noted a 2014 study by PricewaterhouseCoopers, which looked at five components of the sharing economy: travel, car sharing, finance, staffing and streaming. It found that global spending in these sectors totaled about $15 billion in 2014, which was only about 5% of the total spending in those areas. The report also forecast a possible increase of "sharing economy" spending in these areas to $335 billion by 2025, which would be about 50% of the total spending in these five areas. A 2015 PricewaterhouseCoopers study found that nearly one-fifth of American consumers partake in some type of sharing economy activity.[37] A 2017 report by Diana Farrell and Fiona Greig suggested that, at least in the US, sharing economy growth may have peaked.[38]
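As a back-of-the-envelope check of what those figures imply, assuming the 5% and 50% shares refer to total spending across the same five sectors:

    sharing_2014 = 15e9                          # sharing-economy spend, 2014
    implied_total_2014 = sharing_2014 / 0.05     # ~ $300 billion five-sector total

    sharing_2025 = 335e9                         # forecast sharing-economy spend, 2025
    implied_total_2025 = sharing_2025 / 0.50     # ~ $670 billion five-sector total

    print(implied_total_2014, implied_total_2025)

In other words, the forecast implies that total spending in those five sectors itself roughly doubles, from about $300 billion in 2014 to about $670 billion in 2025.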
A February 2018 study ordered by the European Commission and the Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs indicated the level of collaborative economy development across the EU-28 countries in the transport, accommodation, finance and online skills sectors. The size of the collaborative economy relative to the total EU economy was estimated to be €26.5 billion in 2016.[39] Some experts predict that the shared economy could add between €160 and €572 billion to the EU economy in the upcoming years.[40]

According to "The Sharing Economy in Europe"[41] from 2022, the sharing economy is spreading rapidly and widely in today's European societies; however, it requires more regulation at the European level because of increasing problems related to its functioning. The authors also suggest that local initiatives, especially in specific niches, sometimes do even better than global corporations.

In China, the sharing economy doubled in 2016, reaching 3.45 trillion yuan ($500 billion) in transaction volume, and was expected to grow by 40% per year on average over the next few years, according to the country's State Information Center.[42] In 2017, an estimated 700 million people used sharing economy platforms.[43] According to a report from the State Information Center of China, in 2022 the sharing economy was still growing and reached about 3.83 trillion yuan (US$555 billion). The report also includes an overview of seven main sectors of China's sharing economy: domestic services, production capacity, knowledge and skills, shared transportation, shared healthcare, co-working space, and shared accommodation.[44]

On most sharing-economy platforms in China, user profiles are connected to WeChat or Alipay, which require real-name identification; this ensures that service abuse is minimised. This fact contributes to increased interest in shared healthcare services.[44][45]
According to TIARCENTER and the Russian Association of Electronic Communications, eight key verticals of Russia's sharing economy (C2C sales, odd jobs, car sharing, carpooling, accommodation rentals, shared offices, crowdfunding, and goods sharing) grew 30% to 511 billion rubles ($7.8 billion) in 2018.[46]
According to the Sharing Economy Association of Japan, the market size of the sharing economy in Japan in 2021 was 2.4 trillion yen. It is expected to expand to as much as 14.2799 trillion yen in FY2030.[47][48]

Overall, the Japanese environment is not well suited to the development of a sharing economy. Industries do not seek revolutionary new solutions and some services are banned.[49] For example, in ride-hailing, Uber is not very popular in Japan: public transport is highly adequate, regulations ban the operation of private car-sharing services, and taxi apps are much more popular.[50] According to The Japan Times (2024), it is possible that car-sharing services will become available in the future, however only in certain areas where taxis are deemed in short supply.[51]

The impacts of the access economy in terms of costs, wages and employment are not easily measured, but appear to be growing.[52] Various estimates indicate that 30-40% of the U.S. workforce is self-employed, part-time, temporary or freelance. However, the exact percentage of those performing short-term tasks or projects found via technology platforms was not effectively measured as of 2015 by government sources.[53] In the U.S., one private industry survey placed the number of "full-time independent workers" at 17.8 million in 2015, roughly the same as in 2014. Another survey estimated the number of workers who do at least some freelance work at 53.7 million in 2015, roughly 34% of the workforce and up slightly from 2014.[54]

Economists Lawrence F. Katz and Alan B. Krueger wrote in March 2016 that there is a trend towards more workers in alternative (part-time or contract) work arrangements rather than full-time; the percentage of workers in such arrangements rose from 10.1% in 2005 to 15.8% in late 2015.[55] Katz and Krueger defined alternative work arrangements as "temporary help agency workers, on-call workers, contract company workers, and independent contractors or free-lancers".[56] They also estimated that approximately 0.5% of all workers identify customers through an online intermediary; this was consistent with two other studies that estimated the amount at 0.4% and 0.6%.[56]
At the individual transaction level, the removal of a higher overhead business intermediary (say a taxi company) with a lower cost technology platform helps reduce the cost of the transaction for the customer while also providing an opportunity for additional suppliers to compete for the business, further reducing costs.[53]Consumers can then spend more on other goods and services, stimulating demand and production in other parts of the economy. Classical economics argues that innovation that lowers the cost of goods and services represents a net economic benefit overall. However, like many new technologies and business innovations, this trend is disruptive to existing business models and presents challenges for governments and regulators.[57]
For example, should the companies providing the technology platform be liable for the actions of the suppliers in their network? Should persons in their network be treated as employees, receiving benefits such as healthcare and retirement plans? If consumers tend to be higher income persons while the suppliers are lower-income persons, will the lower cost of the services (and therefore lower compensation of the suppliers) worsen income inequality? These are among the many questions the on-demand economy presents.[53][58]
Using a personal car to transport passengers or deliveries requires payment, or sufferance, of costs for fees deducted by the dispatching company, fuel, wear and tear, depreciation, interest, taxes, as well as adequate insurance. The driver is typically not paid for driving to an area where fares might be found in the volume necessary for high earnings, or for driving to the location of a pickup or returning from a drop-off point.[59] Mobile apps have been introduced that help a driver be aware of and manage such costs.[60]

Ridesharing companies have affected traffic congestion, and Airbnb has affected housing availability. According to transportation analyst Charles Komanoff, "Uber-caused congestion has reduced traffic speeds in downtown Manhattan by around 8 percent".[61]

Depending on the structure of a country's legal system, companies involved in the sharing economy may shift the legal realm in which cases involving sharers are disputed. Technology (such as algorithmic controls) which connects sharers also allows for the development of policies and standards of service. Companies can act as 'guardians' of their customer base by monitoring their employees' behavior. For example, Uber and Lyft can monitor their employees' driving behavior and location, and provide emergency assistance.[62] Several studies have shown that in the United States, the sharing economy restructures how legal disputes are resolved and who is considered the victim of potential crimes.

In United States civil law, the dispute is between two individuals, determining which individual (if any) is the victim of the other party. U.S. criminal law considers the actions of a criminal who "victimizes" the state or federal law(s) by breaking said law(s). In criminal law cases, a government court punishes the offender to make the legal victim (the government) whole, but any civilian victim does not necessarily receive restitution from the state. In civil law cases, it is the direct victim party, not the state, who receives the compensatory restitution, fees, or fines. While it is possible for both kinds of law to apply to a case, the additional contracts created in sharing economy agreements create the opportunity for more cases to be classified as civil law disputes. When the sharing economy is directly involved, the victim is the individual rather than the state. This means the civilian victim of a crime is more likely to receive compensation under a civil law case in the sharing economy than under the criminal law precedent.[63] The introduction of civil law cases has the potential to increase victims' ability to be made whole, since the legal change shifts consumers' incentives towards action.[64]
Suggested benefits of the sharing economy include:
Freelance work entails better opportunities for employment, as well as more flexibility for workers, since people have the ability to pick and choose the time and place of their work. As freelance workers, people can plan around their existing schedules and maintain multiple jobs if needed. Evidence of the appeal of this type of work can be seen in a 2015 survey conducted by the Freelancers Union, which showed that around 34% of the U.S. population was involved in freelance work.[65]
Freelance work can also be beneficial for small businesses. During their early developmental stages, many small companies can't afford or aren't in need of full-time departments, but rather require specialized work for a certain project or for a short period of time. With freelance workers offering their services in the sharing economy, firms are able to save money on long-term labor costs and increase marginal revenue from their operations.[66]
The sharing economy allows workers to set their own hours of work. An Uber driver explains, "the flexibility extends far beyond the hours you choose to work on any given week. Since you don’t have to make any sort of commitment, you can easily take time off for the big moments in your life as well, such as vacations, a wedding, the birth of a child, and more."[67] Workers are able to accept or reject additional work based on their needs while using the commodities they already possess to make money. This provides increased flexibility of work hours and wages for independent contractors in the sharing economy.[68]

Depending on their schedules and resources, workers can provide services in more than one area with different companies. This allows workers to relocate and continue earning income. Also, by working for such companies, the transaction costs associated with occupational licenses are significantly lowered. For example, in New York City, taxi drivers must have a special driver's license and undergo training and background checks,[69] while Uber contractors can offer "their services for little more than a background check".[70]
The percentage of seniors in the work force increased from 20.7% in 2009 to 23.1% in 2015, an increase in part attributed to additional employment as gig workers.[71]
A common premise is that when information about goods is shared (typically via an online marketplace), the value of those goods may increase for the business, for individuals, for the community and for society in general.[72]

Many state, local and federal governments are engaged in open data initiatives and projects such as data.gov.[73] The theory is that open or "transparent" access to information enables greater innovation, makes for more efficient use of products and services, and thus supports resilient communities.[74]

Unused value refers to the time over which products, services, and talents lie idle. This idle time is wasted value that business models and organizations based on sharing can potentially utilize. The classic example is that the average car is unused 95% of the time.[75] This wasted value can be a significant resource, and hence an opportunity, for sharing economy car solutions. There is also significant unused value in "wasted time", as articulated by Clay Shirky in his analysis of the power of crowds connected by information technology.[citation needed] Many people have unused capacity in the course of their day. With social media and information technology, such people can donate small slivers of time to take care of simple tasks that others need doing. Examples of these crowdsourcing solutions include the for-profit Amazon Mechanical Turk[76] and the non-profit Ushahidi.

Christopher Koopman, an author of a 2015 study by George Mason University economists, said the sharing economy "allows people to take idle capital and turn them into revenue sources". He has stated, "People are taking spare bedroom[s], cars, tools they are not using and becoming their own entrepreneurs."[77]
Arun Sundararajan, a New York University economist who studies the sharing economy, told a congressional hearing that "this transition will have a positive impact on economic growth and welfare, by stimulating new consumption, by raising productivity, and by catalyzing individual innovation and entrepreneurship".[77]
An independent data study conducted by Busbud in 2016 compared the average price of hotel rooms with the average price of Airbnb listings in thirteen major cities in the United States. The research concluded that in nine of the thirteen cities, Airbnb rates were lower than hotel rates, by an average of $34.56.[78] A further study conducted by Busbud compared the average hotel rate with the average Airbnb rate in eight major European cities. It concluded that Airbnb rates were lower than hotel rates in six of the eight cities, by an average of $72.[78] Data from a separate study shows that after Airbnb's entry into the market in Austin, Texas, hotels were required to lower prices by 6 percent to keep up with Airbnb's lower prices.[79]
The sharing economy lowers consumer costs via borrowing and recycling items.[80]
The sharing economy reduces negative environmental impacts by decreasing the number of goods that need to be produced and cutting down on industry pollution, such as reducing the carbon footprint and overall consumption of resources.[81][80][82]
The sharing economy allows the reuse and repurposing of existing commodities. Under this business model, private owners share the assets they already possess when not in use.[83]

The sharing economy accelerates sustainable consumption and production patterns.[84]

In 2019, a comprehensive study examined the effect on greenhouse gas emissions of one sharing platform that facilitates the sharing of around 7,000 products and services. It found that emissions were reduced by 77%-85%.[6]

The sharing economy provides access to goods for people who cannot afford to buy them or have no interest in doing so.[85]

The sharing economy facilitates increased quality of service through rating systems provided by companies involved in the sharing economy.[86] It also facilitates increased quality of service from incumbent firms that work to keep up with sharing firms like Uber and Lyft.[87]

A study in Intereconomics / The Review of European Economic Policy noted that the sharing economy has the potential to bring many benefits for the economy, while noting that this presupposes that the success of sharing economy services reflects their business models rather than 'regulatory arbitrage' from avoiding the regulation that affects traditional businesses.[88]
Additional benefits include:
Oxford Internet Institute economic geographer Mark Graham argued that key parts of the sharing economy impose a new balance of power onto workers.[90] By bringing together workers in low- and high-income countries, gig economy platforms that are not geographically confined can bring about a 'race to the bottom' for workers.

New York Magazine wrote that the sharing economy has succeeded in large part because the real economy has been struggling. Specifically, in the magazine's view, the sharing economy succeeds because of a depressed labor market, in which "lots of people are trying to fill holes in their income by monetizing their stuff and their labor in creative ways", and in many cases, people join the sharing economy because they've recently lost a full-time job, including a few cases where the pricing structure of the sharing economy may have made their old jobs less profitable (e.g. full-time taxi drivers who may have switched to Lyft or Uber). The magazine writes that "In almost every case, what compels people to open up their homes and cars to complete strangers is money, not trust.... Tools that help people trust in the kindness of strangers might be pushing hesitant sharing-economy participants over the threshold to adoption. But what's getting them to the threshold in the first place is a damaged economy and harmful public policy that has forced millions of people to look to odd jobs for sustenance."[91][92][93]
Uber's "audacious plan to replace human drivers" may increase job loss as even freelance driving will be replaced by automation.[94]
However, in a report published in January 2017, Carl Benedikt Frey found that while the introduction of Uber had not led to job losses, it had caused a reduction of almost 10% in the incomes of incumbent taxi drivers. Frey concluded that the "sharing economy", and Uber in particular, has had substantial negative impacts on workers' wages.[95]
Some people believe the Great Recession led to the expansion of the sharing economy because job losses enhanced the desire for temporary work, which is prevalent in the sharing economy. However, there are disadvantages for the worker; when companies use contract-based employment, the "advantage for a business of using such non-regular workers is obvious: It can lower labor costs dramatically, often by 30 percent, since it is not responsible for health benefits, social security, unemployment or injured workers' compensation, paid sick or vacation leave and more. Contract workers, who are barred from forming unions and have no grievance procedure, can be dismissed without notice".[61]

There is debate over the status of workers within the sharing economy: whether they should be treated as independent contractors or employees of the companies. This issue seems to be most relevant for sharing economy companies such as Uber. The reason this has become such a major issue is that the two types of workers are treated very differently. Contract workers are not guaranteed any benefits and pay can be below average. However, if they are employees, they are granted access to benefits and pay is generally higher. This has been described as "shifting liabilities and responsibilities" to the workers, while denying them traditional job security.[8]: 25 It has been argued that this trend is de facto "obliterating the achievements of unions thus far in their struggle to secure basic mutual obligations in worker-employer relations".[8]: 28

In Uberland: How the Algorithms are Rewriting the Rules of Work, technology ethnographer Alex Rosenblat argues that Uber's reluctance to classify its drivers as "employees" strips them of their agency as the company's revenue-generating workforce, resulting in lower compensation and, in some cases, risking their safety.[96]: 138–147 In particular, Rosenblat critiques Uber's ratings system, which she argues elevates passengers to the role of "middle managers" without offering drivers the chance to contest poor ratings.[96]: 149 Rosenblat notes that poor ratings, or any other number of unspecified breaches of conduct, can result in an Uber driver's "deactivation", an outcome Rosenblat likens to being fired without notice or stated cause.[96]: 152 Prosecutors have used Uber's opaque firing policy as evidence of illegal worker misclassification; Shannon Liss-Riordan, an attorney leading a class action lawsuit against the company, claims that "the ability to fire at will is an important factor in showing a company's workers are employees, not independent contractors."[97]
TheCalifornia Public Utilities Commissionfiled a case, later settled out of court, that "addresses the same underlying issue seen in the contract worker controversy—whether the new ways of operating in the sharing economy model should be subject to the same regulations governing traditional businesses".[98]Like Uber, Instacart faced similar lawsuits. In 2015, a lawsuit was filed against Instacart alleging the company misclassified a person who buys and delivers groceries as an independent contractor.[99]Instacart had to eventually make all such people as part-time employees and had to accord benefits such as health insurance to those qualifying. This led to Instacart having thousands of employees overnight from zero.[99]
A 2015 article by economists at George Mason University argued that many of the regulations circumvented by sharing economy businesses are exclusive privileges lobbied for by interest groups.[100] Workers and entrepreneurs not connected to the interest groups engaging in this rent-seeking behavior are thus restricted from entry into the market. For example, taxi unions lobbying a city government to restrict the number of cabs allowed on the road prevents larger numbers of drivers from entering the marketplace.
The same research finds that while access economy workers do lack the protections that exist in the traditional economy,[101]many of them cannot actually find work in the traditional economy.[100]In this sense, they are taking advantage of opportunities that the traditional regulatory framework has not been able to provide for them. As the sharing economy grows, governments at all levels are reevaluating how to adjust their regulatory schemes to accommodate these workers.
However, a 2021 study of Uber's downfall in Turkey, carried out with user-generated content from TripAdvisor comments and YouTube videos related to Uber use in Istanbul, found that the main reasons people use Uber are that its independent drivers tend to treat customers more kindly than regular taxi drivers do, and that Uber is much cheaper.[102] Turkish taxi drivers, however, claim that Uber's operations in Turkey are illegal because the independent drivers do not pay the operating license fee that taxi drivers are required to pay to the government. Their efforts led to the banning of Uber in Turkey by the Turkish government in October 2019. After being unavailable for more than a year, Uber became available again in Turkey in January 2021.[103]
Andrew Leonard[104][105][106] and Evgeny Morozov[107] criticized the for-profit sector of the sharing economy, writing that sharing economy businesses "extract" profits from their given sector by "successfully [making] an end run around the existing costs of doing business" – taxes, regulations, and insurance. Similarly, in the context of online freelancing marketplaces, there have been worries that the sharing economy could result in a 'race to the bottom' in terms of wages and benefits as millions of new workers from low-income countries come online.[90][108]
Susie Cagle wrote that the benefits big sharing economy players might be making for themselves are "not exactly" trickling down, and that the sharing economy "doesn't build trust" because where it builds new connections, it often "replicates old patterns of privileged access for some, and denial for others".[109] William Alden wrote that "The so-called sharing economy is supposed to offer a new kind of capitalism, one where regular folks, enabled by efficient online platforms, can turn their fallow assets into cash machines ... But the reality is that these markets also tend to attract a class of well-heeled professional operators, who outperform the amateurs—just like the rest of the economy".[110]
The local economic benefit of the sharing economy is offset by its current form, in which huge tech companies reap a great deal of the profit. For example, Uber, which was estimated to be worth $50B as of mid-2015,[111] takes up to 30% commission from the gross revenue of its drivers,[112] leaving many drivers making less than minimum wage.[113] This is reminiscent of a peak rentier state, "which derives all or a substantial portion of its national revenues from the rent of indigenous resources to external clients".
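As a rough illustration of that arithmetic, the sketch below uses hypothetical numbers; only the up-to-30% commission rate comes from the sources above, while the gross revenue and expense figures are invented:

```python
# Hypothetical figures: only the 30% commission rate is taken from the
# cited sources; gross fares and expenses are invented for illustration.
GROSS_PER_HOUR = 20.00    # assumed gross fares per hour
COMMISSION_RATE = 0.30    # "up to 30%" platform commission
EXPENSES_PER_HOUR = 4.50  # assumed fuel, insurance, and vehicle wear

net_per_hour = GROSS_PER_HOUR * (1 - COMMISSION_RATE) - EXPENSES_PER_HOUR
print(f"net hourly pay: ${net_per_hour:.2f}")  # $9.50 under these assumptions
```

Whether the result falls below a given minimum wage depends entirely on the assumed fares and costs; the point is only that both the commission and the vehicle expenses come out of the driver's gross revenue.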
In order to reap the real benefits of a sharing economy and address some of the issues that revolve around it, there is a great need for governments and policy-makers to create the "right enabling framework based on a set of guiding principles" proposed by the World Economic Forum. These seven principles for regulation in the sharing economy are derived from the analysis of global policymaking and consultation with experts.[30]
|
https://en.wikipedia.org/wiki/Sharing_economy
|
The Social Credit System (Chinese: 社会信用体系; pinyin: shèhuì xìnyòng tǐxì) is a national credit rating and blacklist implemented by the government of the People's Republic of China.[1][2] The social credit system is a record system so that businesses, individuals, and government institutions can be tracked and evaluated for trustworthiness.[1][2] The national regulatory method is based on varying degrees of whitelisting (termed redlisting in China) and blacklisting.[1][2][3]
There has been a widespread misconception that China operates a nationwide and unitary social credit "score" based on individuals' behavior, leading to punishments if the score is too low. Media reports in the West have sometimes exaggerated or inaccurately described this concept.[4][5][6] In 2019, the central government voiced dissatisfaction with pilot cities experimenting with social credit scores. It issued guidelines clarifying that citizens could not be punished for having low scores and that punishments should only be limited to legally defined crimes and civil infractions. As a result, pilot cities either discontinued their point-based systems or restricted them to voluntary participation with no major consequences for having low scores.[4][7] According to a February 2022 report by the Mercator Institute for China Studies (MERICS), a social credit "score" is a myth as there is "no score that dictates citizen's place in society".[4]
The origin of the concept can be traced back to the 1980s when the Chinese government attempted to develop a personal banking and financial credit rating system, especially for rural individuals and small businesses who lacked documented records.[8] The program first emerged in the early 2000s, inspired by the credit scoring systems in other countries.[2] The program initiated regional trials in 2009, before launching a national pilot with eight credit scoring firms in 2014.[9][10]
The Social Credit System is an extension to the existing legal and financial credit rating system in China.[11] Managed by the National Development and Reform Commission (NDRC), the People's Bank of China (PBOC) and the Supreme People's Court (SPC),[12] the system was intended to standardize the credit rating function and perform financial and social assessment for businesses, government institutions, individuals and non-government organizations.[13][14][7] The Chinese government's stated aim is to enhance trust in society with the system and regulate businesses in areas such as food safety, intellectual property, and financial fraud.[11][8][15] By 2023, most private social credit initiatives had been shut down by the PBOC.[16]: 12
The origin of the Social Credit System can be traced back to the early 1990s as part of attempts to develop personal banking and financial credit rating systems in China, and was inspired by Western commercial credit systems like FICO, Equifax, and TransUnion.[17] The credit system aims to facilitate financial assessment[17] in rural areas, where individuals and small business entities often lacked financial documents.
In 1999, businesswoman Huang Wenyun wrote a report following her negative experiences with domestic business trustworthiness and her research into credit management in the United States business environment.[16]: 17–18 At the time, credit management and rating were largely unfamiliar concepts within the Chinese economy.[16]: 17 Huang sent her report to Premier Zhu Rongji, who approved it and in August 1999 ordered the People's Bank of China to take immediate action.[16]: 18 In September 1999, the Institute of Economics of the Chinese Academy of Social Sciences began a research project on establishing a national credit management system.[16]: 18 Huang contributed more than RMB 300,000 to fund the research initiative and sponsored fieldwork in the United States and Europe.[16]: 18 In the United States, the research group studied and prepared translations of 17 American credit reporting laws, including the Fair Credit Reporting Act.[16]: 18
In January 2000, the research group from the Chinese Academy of Social Sciences compiled their research into a text titled National Credit Management System.[16]: 18 Among these academics was Lin Junyue, who became an important intellectual figure in the development of social credit.[16]: 18 Premier Zhu approved the text and instructed government figures from ten ministries and commissions to begin studying the creation of a social credit management system.[16]: 18 In late January 2000, the State Council released an essay by Zhu in which Zhu stated that China must "vigorously rectify social credit."[16]: 18 In March 2000, Zhu delivered the government's work report to the National People's Congress, in which Zhu talked about the need to rectify social credit in the context of supervision of financial institutions, fraud, tax evasion, and debt repayments.[16]: 18
In 2002, the construction of a social credit system was formally announced during the 16th National Congress of the Chinese Communist Party.[16]: 71 The central government had not developed a specific vision for what a finished system might look like.[16]: 71 Local governments were to develop pilot initiatives which could then guide the larger policy approach.[16]: 71
In 2003, the State Council stated that the basic framework and operational mechanisms for a social credit system should be established within five years.[16]: 72 Most of the goals in this period were missed, although the financial aspects of social credit developed much further than the non-financial aspects.[16]: 72–75
Among the financial aspects of social credit which developed quickly was credit reporting.[16]: 74 In March 2006, the People's Bank of China established the Credit Reference Center, which holds information regarding financial creditworthiness and had established basic financial records for 990 million Chinese citizens as of 2019.[16]: 47 Its records relate only to finance, and it does not have any blacklist mechanism.[16]: 47
In 2007, the Inter-Ministerial Joint Conference on the Establishment of the SCS was established, replacing the leading small group which had previously been the top policy organ for social credit issues.[16]: 76 The initial blueprints of the Social Credit System were drafted in 2007 by government bodies.[8] The social credit system also attempts to address the moral vacuum, insufficient market supervision and income inequality generated by the rapid economic and social changes since the Chinese economic reform in 1978.[8] As a result of these problems, trust issues emerged in Chinese society, such as food safety scandals, labor law violations, intellectual property theft and corruption.[8] Among the purposes of social credit is promotion and moral education regarding personal integrity and honesty.[18]: 104 The policy of the social credit system traces its origin to both policing and work management practices.[8]
The government of modern China has maintained systems of paper records on individuals and households, such as the dàng'àn (档案) and hùkǒu (户口), which officials might refer to, but these systems do not provide the same degree and rapidity of feedback and consequences for Chinese citizens as the integrated electronic system, because of the much greater difficulty of aggregating paper records for rapid, robust analysis.[19]
The Social Credit System also originated from grid-style social management, a policing strategy first implemented in 2001 and 2002 (during the administration of Chinese Communist Party General Secretary Jiang Zemin) in select locations across mainland China. In 2002, the Jiang administration proposed a social credit system as part of the promotion of a "unified, open, competitive, and orderly modern market system."[17] In its first phase, grid-style policing was a system for more effective communication between public security bureaus. Within a few years, the grid system was adapted for use in distributing social services. Grid management provided the authorities not only with greater situational awareness on the group level, but also enhanced the tracking and monitoring of individuals.[8][20] In 2018, sociologist Zhang Lifan explained that Chinese society today is still deficient in trust. People often expect to be cheated or to get in trouble even if they are innocent. He believes that this is due to the Cultural Revolution, in which friends and family members were deliberately pitted against each other and millions of Chinese were killed. The stated purpose of the social credit system is to help Chinese people trust each other again.[20]
One focus of social credit is to build judicial credibility through more effective enforcement of court orders.[16]: 53 In 2013, the Supreme People's Court (SPC) of China started a blacklist of debtors with roughly 32,000 names. The list has since been described as a first step towards a national Social Credit System by state-owned media.[21][22] The SPC's blacklist is composed of Chinese citizens and companies that refuse to comply with court orders (typically court orders to pay a fine or to repay a loan) despite having the ability to do so.[16]: 53 It is hosted online at the Supreme People's Court judgment defaulter blacklist portal, and the information is shared with Credit China and the National Enterprise Credit Information Publicity System.[16]: 60 The SPC also began working with private companies. For example, Sesame Credit began deducting credit points from people who defaulted on court fines.[21]
Although there was institutional enthusiasm for a social credit system during the 2004 to 2014 period, implementation was adversely impacted by planning difficulties stemming from the relationship between credit reporting initiatives (which were defined narrowly) and regulatory objectives (which were more vaguely defined).[16]: 10 A lack of central coordination resulted in institutional bottlenecks.[16]: 10
The State Council sought to accelerate the development of social credit and, in 2014, issued the Planning Outline for the Construction of a Social Credit System (2014-2020).[16]: 78 The Planning Outline was a major step in China's approach to developing a social credit system; before the 2014 Planning Outline, there had been only one high-level policy document (issued in 2007).[16]: 79 Since the Planning Outline, the State Council has issued new guidance annually.[16]: 79
The Planning Outline focused primarily on economic activity in commerce, government affairs, social integrity, and judicial credibility,[16]: 79 and set broad goals intended to be reached by 2020.
In 2015, the People's Bank of China licensed eight companies to begin a trial of social credit systems.[10] Among these eight firms are Sesame Credit (owned by Alibaba Group and operated by Ant Financial), Tencent, and China's biggest ride-sharing and online-dating services, Didi Chuxing and Baihe.com, respectively.[15][10] In general, multiple firms collaborated with the government to develop the software and algorithms used to calculate credit.[15][23] The commercial pilot programs were developed by private Chinese conglomerates with authorization from the state to test out social credit experiments. The pilots are more widespread than their local government counterparts but function on a voluntary basis: citizens can decide to opt out of these systems at any time on request. Users with good scores are offered advantages such as easier access to credit loans, discounts for car and bike sharing services, fast-tracked visa applications, free health check-ups and preferential treatment at hospitals.[24]
In 2016, the State Council encouraged market entities to provide preferential treatment to those with outstanding financial credit records and differentiated services to those with seriously untrustworthy records.[16]: 54
The Chinese central government originally considered having the Social Credit System be run by a private firm, but by 2017, it acknowledged the need for third-party administration. However, no licenses were granted to private companies.[10] By mid-2017, the Chinese government had decided that none of the pilot programs would receive authorization to be official credit reporting systems. The reasons include conflict of interest, the remaining control of the government, as well as the lack of cooperation in data sharing among the firms that participated in the development.[19] However, the Social Credit System's operation by a seemingly external association, such as a formal collaboration between private firms, has not been ruled out.[10] In November 2017, Sesame Credit denied that its data was shared with the Chinese government.[25][better source needed] In 2017, the People's Bank of China issued a jointly owned license to Baihang Credit valid for three years.[26] Baihang Credit is co-owned by the National Internet Finance Association (36%) and the eight other companies (8% each), allowing the state to maintain control and oversee the creation of new commercial pilot programs.[27] As of mid-2018, only pilot schemes had been tested, without any official implementation.[28][29][30]
Private companies have also signed contracts with provincial governments to set up the basic infrastructure for the Social Credit System at the provincial level.[31] As of March 2017, 137 commercial credit reporting companies were active on the Chinese market.[14] As part of the development of the Social Credit System, the Chinese government has been monitoring the progress of third-party Chinese credit rating systems.[32] Ultimately, the Chinese government dropped its support for privately developed credit rating systems, and these pilot projects remained as corporate loyalty programs.[7]
In December 2017, the National Development and Reform Commission and People's Bank of China selected "model cities" that demonstrated the steps needed to make a functional and efficient implementation of the Social Credit System. Among them are: Hangzhou, Nanjing, Xiamen, Chengdu, Suzhou, Suqian, Huizhou, Wenzhou, Weihai, Weifang, Yiwu and Rongcheng.[33][34][non-primary source needed] These pilots were deemed successful in their handling of "blacklists and 'redlists'", their creation of "credit sharing platforms" and their "data sharing efforts with the other cities".[citation needed]
By 2018, some restrictions had been placed on citizens, which state-owned media described as the first step toward creating a nationwide social credit system.[29][30][28]
According to Antonia Hmaidi of the Mercator Institute for China Studies (MERICS), the local government Social Credit System experiments are focused more on the construction of transparent rule-based systems, in contrast with the rating systems used in the commercial pilots. Citizens often begin with an initial score, to which points are added or deducted depending on their actions. The specific number of points for each action is often listed in publicly available catalogs. Cities also experimented with a multi-level system, in which districts decide on scorekeepers who are responsible for reporting scores to higher-ups. Some experiments also allowed citizens to appeal the scores they were attributed.[35]
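A minimal sketch of such a catalog-based points scheme might look like the following; the initial score, action names, and point values are invented for illustration, since real pilot catalogs varied city by city:

```python
# Hypothetical catalog-based scoring, as in the pilot-city accounts above.
# All point values and action names are invented for illustration.
INITIAL_SCORE = 1000

CATALOG = {
    "volunteer_service": +5,          # example positive entry
    "charitable_donation": +10,
    "traffic_violation": -5,          # example negative entry
    "defaulting_on_court_order": -50,
}

def score(recorded_actions):
    """Start from the initial score and apply each catalogued action."""
    return INITIAL_SCORE + sum(CATALOG.get(a, 0) for a in recorded_actions)

print(score(["charitable_donation", "traffic_violation"]))  # 1000 + 10 - 5 = 1005
```

The transparency claim in the MERICS account corresponds to the catalog being publicly listed; the appeal mechanisms some cities trialled are not modeled here.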
In 2019, the central government expressed "unhappiness" at the pilot cities that were experimenting with social credit scores and issued guidelines stating that no citizens can be punished for having low scores and that punishment can only be imposed for legally defined crimes and civil infractions; consequently, pilot cities either changed their programs to be encouragement-only or the programs did not materialize at all.[4]
In July 2019, an NDRC spokesperson stated at a press conference that "personal credit scores can be combined with incentives for trustworthiness, but cannot be used for punishments".[16]: 171 The Hong Kong Government stated in July 2019 that claims that the social credit system would be rolled out in Hong Kong were "totally unfounded" and stated that the system would not be implemented there.[36]
In 2019, high-level NDRC officials stated that over 10% of people blacklisted for their commission of tax fraud had repaid their taxes, that the bad credit rate had decreased by 22.7%, and that the proportion of companies blacklisted had decreased.[16]: 124 In the view of these officials, these were "remarkable results."[16]: 124
In 2020, the Supreme People's Court announced that a nationwide total of 7.51 million blacklisted judgment defaulters had fulfilled their legal obligations and been removed from the judgment defaulter blacklist, accounting for half of the blacklisted judgment defaulters as of that date.[16]: 124
As a result of the COVID-19 pandemic, various aspects of social credit were modified.[16]: 134–137 On February 1, 2020, the People's Bank of China announced it would temporarily suspend the inclusion of mortgage and credit card payments in the credit record of people impacted by the pandemic.[16]: 134 Private financial credit scoring companies, including Sesame Credit, suspended financial credit ratings.[16]: 134 Various cities established mechanisms to incentivize companies to provide pandemic relief, with measures including redlisting for those donating funds and supplies, with benefits like simplified administrative procedures, increased policy support, or increased financial support.[16]: 135 On the enforcement side of social credit, provinces and cities promulgated regulations emphasizing heavy penalties for price hikes, violence against doctors, counterfeit medical supplies, refusal to comply with pandemic prevention measures, and wildlife trade violations.[16]: 134
In 2020, the rights protection metrics in the NDRC's City Credit Status Monitoring and Early Warning Indicators emphasized that cities must establish transparent credit repair procedures handled within an appropriate timeframe.[16]: 138 It also emphasized that cities should prevent the overgeneralization of the concept of credit, stating that individual behavior such as petitioning the government, unpaid property fees, or running red lights (among other listed examples) must not be included in a person's credit record.[16]: 138
The State Council issued its Guiding Opinions on Further Improving Systems for Restraining the Untrustworthy and Building Mechanisms for Building Credit Worthiness that have Long-term Effect in November 2020.[16]: 139 The central message of the Guiding Opinions was that new blacklists should not be created on an ad hoc basis and that social credit should not be applied in policy areas without sufficient consensus.[16]: 139 It stated that credit repair processes must be improved, that blacklists must only be used in instances of severe harm, and that information security and privacy should be prioritized.[16]: 139
In November 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted a Recommendation on the Ethics of AI.[16]: 176 Among its recommendations is that "AI systems should not be used for social scoring or mass surveillance purposes."[16]: 176 China is a signatory of the document.[16]: 176
Following their submission for public comment, China in December 2021 issued the National List of Basic Penalty Measures for Untrustworthiness and the National Directory of Public Credit Information.[16]: 140 The National Directory establishes limitations on what types of credit information can be collected or used as a basis for social credit penalties or rewards.[16]: 140 It describes three categories of data:
Appropriate information for consideration includes information on the execution of judicial judgments and administrative violations, among other material, as well as positive recognition for trustworthy behavior.[16]: 140 Information appropriate only when the circumstances of the violation are severe includes small payment arrears or public transportation fare evasion.[16]: 141 The National Directory bans the consideration of private information like religious preferences or government petitioning activity.[16]: 141
The December 2021 National List's purpose is to further standardize penalty measures.[16]: 143 It specifies that administrative bodies cannot extend penalties beyond those provided in national-level law and regulation.[16]: 143 In a 2022 directive, the State Council stated that it will "actively explore innovative ways to use the credit concept and methods to solve difficulties, bottlenecks, and painful points that restrict the country's economic and social activities."[17] On 14 November 2022, the NDRC issued a draft Law on the Establishment of the Social Credit System.[16]: 188 According to academic Vincent Brussee, the draft "was deeply unsatisfactory to SCS observers worldwide. It did not stipulate anything not already regulated in one of the many recent documents on the system. The draft just copy-pasted bits from those."[16]: 188 Academic Haiqing Yu writes that "the draft law is a patchwork of existing policies and regulations that prioritise unification rather than clarification."[37]
As of 2022, over 62 different Social Credit System pilot programs were implemented by local governments.[17]The pilot programs began following the release of the 2014 "Planning Outline for the Construction of a Social Credit System" by Chinese authorities. The government oversees the creation and development of these governmental pilots by requesting they each publish a regular "interdepartmental agreement on joint enforcement of rewards and punishments for 'trustworthy' and 'untrustworthy' conduct."[38]
Though some reports stated social credit would be powered by artificial intelligence (AI), as of 2023 penalty decisions were made by humans, not AI, and digitization remained limited.[16]: 14 Credit systems for local government remained undeveloped and resemble incentivized loyalty programs like those run by airlines.[16]: 14 Participation is fully voluntary, and there are no consequences for opting out beyond losing access to minor rewards. For fear of overreach and pushback, the Chinese central government banned punishments for low scores and minor offences.[7] During the city trials, pilot programs saw only limited participation.[17] Many people living in pilot program cities are unaware of the programs.[17] In Xiamen, 210,059 users activated their social credit account, roughly 5% of the population of Xiamen; 60,000 people, or 1.5% of the population, in Wuhu participated in the system; Hangzhou has 1,872,316 participants (15%), though fewer regularly use the system. Scores are not shared between cities, as the scoring criteria and mechanisms differ.[7]
By 2023, most private social credit initiatives had been shut down by the People's Bank of China, and regulations had cracked down on most local scoring pilot programs.[16]: 12
Social credit in China is a broad policy category seeking to enforce legal obligations including laws, regulations, and contracts.[16]: 3 Social credit does not itself bring new restrictions; it focuses on increasing implementation of existing restrictions.[18]: 105 There are multiple social credit systems in China, some of which are designed and operated by the state, while others are operated by private companies.[17] China's governmental approaches to social credit are described by various sets of documents issued by different institutions.[18]: 103 There is no integrated system,[16]: 3 nor a comprehensive document setting out a unified approach.[18]: 103 Generally, the different approaches to social credit are united by the theme of increasing digitization, data collection, and data centralization.[18]: 103
There is no unified, numerical credit score for businesses or individuals; rather, national and local platforms use different evaluation or rating systems.[7][39] Due to the differences between the various pilot programs and a fragmented system structure, information regarding the scoring mechanism is often conflicting.[40][7] Inspired by FICO,[41] a numerical social credit score calculated from individual behavior and activities was given to citizens in certain pilot programs developed by financial firms or localized initiatives.[7][42] However, these practices were not widely applied, and eventually the numerical score mechanism was limited to private credit rating and loyalty programs.[43][44] Private involvement was ultimately abandoned by the government.[45]
The system includes sanctions for offenders; unlike in the past, where offenders were punished by one supervising agency or court, they now face sanctions from multiple agencies, greatly increasing their effect. Though the sanctions are severe, they affect a small share of companies and individuals. By publicizing these punishments and blacklists through state media and other agencies, the system aims to create a deterrent effect.[7]
Social credit is an example of China's "top-level design" (顶层设计) approach. It is coordinated by the Central Comprehensively Deepening Reforms Commission.[14] Social credit, as referred to by the Chinese government, generally covers two different concepts. The first is "traditional financial creditworthiness", which documents the financial history of individuals and companies and scores them on how well they are able to pay off future loans. The second concept is "social creditworthiness", where the government states that there needs to be higher "trust in society". To build such trust, the government proposed to combat corruption, scams, tax evasion, counterfeiting of goods, false advertising, pollution and other problematic issues, and to create mechanisms to keep individuals and companies accountable for such transgressions.[46]
Scholars have conceptualized four different types of systems. These four systems are not interconnected, but relatively independent of each other, with their own jurisdictions, rules and logic.[19][43]
As of 2023, the government has only created a system that is primarily focused on assessing businesses rather than individuals, and consists of a database that collects data on corporate regulatory compliance from a number of government agencies. Kendra Schaefer, head of tech policy research at the Beijing-based consultancy firm Trivium China, described the system, in a report for the US government's US-China Economic and Security Review Commission, as being “roughly equivalent to the IRS, FBI, EPA, USDA, FDA, HHS, HUD, Department of Energy, Department of Education, and every courthouse, police station, and major utility company in the US sharing regulatory records across a single platform”.[5] The database can be openly accessed by any Chinese citizen on the newly created website called "Credit China". The database also includes miscellaneous information such as a list of approved robot-building companies, hospitals that have committed insurance fraud, universities that are deemed legitimate and a list of individuals who have defaulted on a court judgement.[46]
Social credit does not itself bring new restrictions; it focuses on increasing implementation of existing restrictions.[18]: 105 Although the Chinese government announced in 2014 that it would implement a nationwide social credit system by 2020, as of 2023 no full-fledged system exists.[18]: 123–124
Implementation of social credit is primarily focused on marketplace behavior.[16]: 14 As of 2023, about 1% of companies and 0.3% of individuals receive social credit-related penalties per year.[16]: 14
National financial credit reporting for businesses and individuals is provided by the People's Bank of China, which does not assign any numerical scoring.[44]
Red-listing practices seek to incentivize exemplary personal behavior or business compliance.[16]: 118 Red-list practices vary significantly, and there are no top-level regulations or guidance addressing red lists in detail.[16]: 118–119 The most common benefits to red-listed companies include reduced administrative burdens and simplified procedures.[16]: 118–119 Part of the government's logic for red-listing companies is that it facilitates regulators' ability to focus on companies with a worse compliance record.[16]: 119 Red-listed individuals may receive benefits like parking and public transit discounts or discounted tourist site tickets.[16]: 119
Blacklisting is based on specific instances of misconduct, not any numerical score.[18]: 103 The Central Government operates a number of national and regional blacklists based on various types of violations. The court system is available for businesses, organizations and individuals to appeal their violations. As of 2019, it typically took 2–5 years to be removed from a blacklist, but early removal is also possible if the blacklisted person "fulfills legal obligations or remedies".[49][50] By the end of 2021, over five million citizens had been affected by the blacklisting scheme in some form.[3][dubious–discuss]
Three main types of blacklists exist: the judgment defaulter blacklist, sectoral blacklists, and no-fly/no-ride lists.[16]: 107
Before being added to a blacklist, a person or company must be informed of the decision and the legal basis for it.[16]: 115 Blacklists may be publicized, although as of at least 2023 there is no uniform method for doing so.[16]: 115 Some blacklist portals can be searched online while others are uploaded as PDFs or image files.[16]: 118 Blacklisted parties are sometimes displayed in public settings, including on the Internet, in newspapers, or on television.[16]: 118
Before 2013, the process of obtaining court-ordered enforcement against judgment debtors was fragmented.[16]: 108 In 2013, the Supreme People's Court issued the Several Provisions on Announcement of the Judgment Defaulter Blacklist, which became the foundational regulation for the judgment defaulter blacklist.[16]: 108 It stated that to be included on the list, a defaulter must be capable of complying with the court orders, but actively avoids doing so.[16]: 108 Based on the idea that judgment defaulters should repay their debts before purchasing luxuries, once added to the list, judgment defaulters are restricted from luxury purchases such as air travel and high-speed rail tickets.
In 2019, a Hebei court released an app showing a "map of deadbeat debtors" within 500 meters and encouraged users to report individuals who they believed could repay their debts.[51] According to China Daily, a spokesman for the court stated that "it's a part of our measures to enforce our rulings and create a socially credible environment."[52]
The Supreme People's Court's blacklist is one of its most important enforcement tools, and its use has resulted in the recovery of tens of trillions of RMB for fines and delinquent repayments as of 2023.[16]: 53 Chinese founders are increasingly placed on the national debtor blacklist by venture capitalists seeking a return of invested funds.[53]
Many sectoral blacklists exist and are managed by a variety of regulatory and administrative bodies.[16]: 110 Primarily, the penalties for being included on these blacklists are discretionary restrictions in administrative processes and interactions with the government.[16]: 111 For example, regulators may exclude a company on a sectoral blacklist from participating in public procurement, revoke government funding or subsidies, cancel permits or revoke qualifications or certifications, or restrict the issuance of corporate bonds.[16]: 111 Penalties cannot be developed ad hoc and must instead be based in national-level law and regulation.[16]: 111 Penalties from inclusion on sectoral blacklists may be imposed both on the violating company as well as on legal representatives, senior company management, and the staff directly responsible for the violation that placed the company on the blacklist.[16]: 112 Multiple government bodies may impose restrictions as a result of a person or company's inclusion on a sectoral blacklist.[16]: 110–111 The public availability of sectoral blacklists also means that potential business partners may act accordingly and decline to deal with a blacklisted company.[16]: 111
Inclusion on the no-ride list or no-fly list results from specific instances of misconduct on trains or planes.[16]: 113 Misconduct resulting in inclusion on the no-ride or no-fly lists can include violation of safety regulations, harassing other passengers or transportation workers, smoking, scalping tickets, or using counterfeit tickets.[16]: 113 Inclusion on the list prohibits a person from buying new tickets for a designated time period, usually six to twelve months.[16]: 113 This is the only penalty under the no-ride or no-fly list, and inclusion on these blacklists has no impact on other areas of life or business.[16]: 113
By May 2018, several million flight and high-speed train trips had been denied to people who had been blacklisted either through misbehavior on planes or trains, or for failing to follow a court-ordered judgement.[29] As of June 2019, according to the National Development and Reform Commission of China, 26.82 million air tickets as well as 5.96 million high-speed rail tickets had been denied to people who were deemed "untrustworthy" (失信) (on a blacklist), and 4.37 million blacklisted people had chosen to fulfill their duties required by the law, such as repaying court-ordered judgements, before being allowed to travel on high-speed rail and planes.[54][55] In July 2019, an additional 2.56 million flight tickets as well as 90 thousand high-speed train tickets were denied to those on the blacklist.[56]
The no-fly list is administered by the Civil Aviation Administration of China.[16]: 113 The no-ride list is administered by the National Railway Administration.[16]: 113
After a blacklist decision becomes effective, the blacklisted party can file for credit repair.[16]: 115 Through the credit repair process, a violator corrects the impact of the underlying violation and commits to abide by laws and regulations in the future.[16]: 115 Companies undergoing credit repair typically must supply evidence that they have corrected their violations.[16]: 123 Companies may also have to agree to a credit pledge in which they commit to upholding laws and regulations, commit to abiding by contracts, and agree to be subject to more severe penalties for any future violations.[16]: 123 If authorities approve the request for credit repair, the violator is removed from the blacklist and penalties are ended.[16]: 115
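Read as a procedure, the repair sequence above can be sketched as a simple state machine; the state names and the two conditions below are invented for illustration and are not official terms from any regulation:

```python
# Illustrative state machine for the credit repair sequence described
# above. State names and conditions are invented, not official terms.
from enum import Enum, auto

class RepairStatus(Enum):
    BLACKLISTED = auto()
    EVIDENCE_SUBMITTED = auto()
    REMOVED = auto()

def credit_repair(evidence_of_correction: bool, pledge_agreed: bool) -> RepairStatus:
    """Removal requires both corrected violations and (where demanded) a pledge."""
    if not evidence_of_correction:
        return RepairStatus.BLACKLISTED         # penalties continue
    if not pledge_agreed:
        return RepairStatus.EVIDENCE_SUBMITTED  # repair filed but not yet approved
    return RepairStatus.REMOVED                 # blacklist entry and penalties end

print(credit_repair(evidence_of_correction=True, pledge_agreed=True))
# RepairStatus.REMOVED
```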
The Social Credit System is meant to provide an answer to the problem of lack of trust in the Chinese market. As of 2020, the corporate regulation function of the system appears to be more advanced than other parts of the system, and the "Corporate Social Credit System" has been the primary focus of government attention.[11] As of 2020, over 73.3% of enforcement actions since 2014 have targeted companies, the largest share of all enforcement, while around 1-2% of all companies were sanctioned by the system annually.[7]
For businesses, the Social Credit System is meant to serve as a market regulation mechanism. The goal is to establish a self-enforcing regulatory regime fueled by big data in which businesses exercise "self-restraint" (企业自我约束). The basic idea is that with a functional credit system in place, companies will comply with government policies and regulations to avoid having their scores lowered by disgruntled employees, customers or clients.[14] For example, the central government can use social credit data to offer risk-assessed grants and loans to small and medium enterprises (SMEs), encouraging banks to offer greater loan access for SMEs.[11]
As currently envisioned, companies with good credit scores will enjoy benefits such as good credit conditions, lower tax rates, fewer customs checks,[11] and more investment opportunities. Companies with bad credit scores will potentially face unfavorable conditions for new loans, higher tax rates, investment restrictions and lower chances to participate in publicly funded projects.[14] Government plans also envision real-time monitoring of a business's activities. In that case, infractions on the part of a business could result in a lower score almost instantly. However, whether this will actually happen depends on the future implementation of the system as well as on the availability of the technology needed for this kind of monitoring.[14]
To improve their credit scores, companies need to conform to government rules, such as following the COVID-19 containment guidelines.[11]
Government institutions receive the second highest number of enforcement actions, accounting for 13.3% of penalties as of 2020, while less than 0.1% of all government entities were sanctioned by the system annually.[7] The social credit system targets government agencies, assesses local governments' performance, and focuses on financial problems such as local governments' debts and contract defaults.[7] The Central Government hopes the system can improve "government self-discipline."[11] Local governments are also encouraged and rewarded by the social credit system if they successfully implement and follow the orders from the central government.[51]
As of 2020, individuals receive 10.3% of all enforcement actions, affecting around 0.15% to 0.3% of the national population annually.[7] The system's treatment of individuals focuses on their financial trustworthiness, primarily debt repayment, though major violations of the law have also been sanctioned.[7] One major focus is the debt-dodger (laolai), a phrase which refers to those who can pay their debts but choose not to.[17] A laolai blacklist is maintained by the Supreme People's Court.[17]
In addition to dishonest and fraudulent financial behavior, there have been proposals in some cities to officially list several behaviors as negative factors in credit ratings, including playing loud music or eating on rapid transit,[57] violating traffic rules such as jaywalking and red-light violations,[58][59] making reservations at restaurants or hotels but not showing up,[60] failing to correctly sort personal waste,[61][62][63] and fraudulently using other people's public transportation ID cards;[64] and, on the other hand, to list behaviors as positive factors in credit ratings, such as donating blood, donating to charity, volunteering for community services, praising government efforts on social media and so on.[65][66][67] However, because the system mainly relies on digitized administrative documents, early efforts to integrate behavioral data into the system were mainly discarded.[7]
There are various punishments for debtors. Delinquent debtors are placed on blacklists maintained by Chinese courts and shared with the Ministry of Public Security, which controls the country's entry-exit checkpoints. Individuals with outstanding debts can be subject to exit bans and prevented from leaving the country as a way of encouraging or forcing the collection of debt. According to the Financial Times, as of 2017, some 6.7 million debtors had already been placed on blacklists and prevented from exiting the country as a result of the new policy.[22] Future rewards of having a high score might include easier access to loans and jobs and priority during bureaucratic paperwork. A person with poor social credit may be denied employment in places such as banks, state-owned enterprises, or as a business executive. The Chinese government encourages checking whether candidates' names appear on the blacklist when hiring.[68][needs update]
In certain test programs, public humiliation is used as a mechanism to deter sanctioned individuals.[50][69][70][71] Mugshots of blacklisted individuals are sometimes displayed on large LED screens on buildings or shown before the movie in movie theaters.[72] Certain personal information of blacklisted people is deliberately made accessible to the public and is displayed online as well as at various public venues such as movie theaters and buses, while some cities have also banned children of "untrustworthy" residents from attending private schools and even universities.[73][74][75][76][needs update] People with high credit ratings may receive rewards such as less waiting time at hospitals and government agencies, discounts at hotels, greater likelihood of receiving employment offers, and so on.[64][65][66][77][needs update]
According to Sarah Cook of Freedom House in 2019, city-level pilot projects for the social credit system have included rewarding individuals for aiding authorities in enforcing restrictions on religious practices, including coercing practitioners of Falun Gong to renounce their beliefs and reporting on Uighurs who publicly pray, fast during Ramadan or perform other Islamic practices.[51][78] In an October 2022 study, professors from Princeton University, Freie Universität Berlin and Pennsylvania State University also found that "repressing protesters, petitioners, journalists, and political activists via the SCS is common among Chinese localities."[79]
As of 2020, non-government organizations receive 3.3% of all enforcement actions. Although these enforcement actions remain numerically small, their inclusion has an important implication, as it affects foreign NGOs operating within China.[7]
Most initiatives under the social credit system do not involve actual numerical scores; instead, documentation of specific offenses is recorded in one's credit profile, the exception being the trial programs launched by some cities and communities. The actual policy varies greatly from city to city, and participation is voluntary. Local credit profiles are not shared between cities.
Since the early 2010s, several cities in China have launched pilot programs to test and develop a potential social credit system. Some of these programs assigned scores to individuals, but many of the scoring programs faced criticism. The main criticism of these pilot programs came from Chinese state media, which denounced these practices as having unfairly restricted legal rights or tracked personal behaviors that were completely unrelated to the concept of "credit." In 2019, the Chinese government reinforced this criticism by issuing clear guidelines to prevent misuse, explicitly stating that "scores" cannot be used to punish citizens.[4] As a result, many pilot programs were discontinued, while some pilot cities revised their programs. Examples include Wenzhou, which abandoned its initial program and, in 2019, revised it to be an "encouragement-only scheme", and Rongcheng, which changed its pilot program in 2021 so that it is strictly voluntary and can only issue rewards. According to a 2022 article from the Mercator Institute for China Studies (MERICS), the only social credit system programs that continue to have "personal scores" for individuals are strictly for issuing positive incentives.[4] Under some policies, higher scores can earn a participant cheaper public transportation, shorter security lines in subways, or tax reductions.[80]: 204
Writing in 2023, academic Vincent Brussee observes that European misconceptions of social credit in China have become a source of amusement among Chinese Internet users.[16]: 3
A series of studies have concluded that social credit is well-received domestically.[104]: 125 In a 2018 study, 80% of respondents either strongly approved or approved of China's Social Credit System, while one percent disapproved.[17] The study was conducted by Professor Genia Kostka of the Free University of Berlin and was based on a cross-regional Internet survey of 2,209 Chinese citizens of various backgrounds.[24][105] The study found "a surprisingly high degree of approval of SCSs across respondent groups" and that "more socially advantaged citizens (wealthier, better-educated and urban residents) show the strongest approval of SCSs, along with older people".[24] Kostka explained in the paper that "while one might expect such knowledgeable citizens to be most concerned about the privacy implications of SCS, they instead appear to embrace SCSs because they interpret it through frames of benefit-generation and promoting honest dealings in society and the economy instead of privacy-violation."[24]
In August 2019, assistant researcher Zhengjie Fan of the China Institute of International Studies published an article claiming that the current punishment policies, such as the blacklist, do not overstep the limits of law. He argued that since 2014, China's Social Credit System and the credit system of the market had grown to complement each other, forming a mutually beneficial interaction.[106] According to Doing Business 2019 by the World Bank Group, which ranked "190 countries on the ease of doing business within their borders", China rose from 78th place in the previous year to 46th place, and Fan claimed that the Social Credit System played an important role.[106][107] In 2020, it further improved to 31st place in the now-defunct Ease of Doing Business index.[108]: 115
In an October 2022 study, professors from Princeton University, Freie Universität Berlin (Genia Kostka), and Pennsylvania State University discovered through a field survey of college students in China that "revealing the repressive potential of the SCS significantly reduces support for the system, whereas emphasizing its function in maintaining social order does not increase support."[79]Additionally, the professors found that a nationwide survey of Chinese netizens showed higher support for the SCS among Chinese citizens who learned about it through state media.[79]
Chinese academics have produced a substantial body of work analyzing social credit in China.[16]: 7 As of 2023, the large majority of Chinese scholarship accepts the legitimacy of social credit as a whole, although there are also criticisms of different approaches or implementation efforts.[16]: 7–8 In several instances, academics' criticisms of social credit have been adopted and re-issued by state media outlets, including Xinhua and People's Daily.[16]: 8
In October 2019, Professor Kui Shen of the Law School of Peking University published a paper in China Legal Science, suggesting that some of the then-current credit policies violated the "rule of law" or "Rechtsstaat": that they infringed the legal rights of residents and organizations, possibly violated the principle of respecting and protecting human rights, especially the right to reputation, the right to privacy as well as personal dignity, and overstepped the boundary of reasonable punishment.[109] In May 2020, Chinese investigative media group Caixin reported that business social credit systems in China were insufficient in deterring problematic business activities and that the social credit system was easy to game in favour of businesses.[110]
China's Social Credit System has been implicated in a number of controversies. Western critics view social credit as an intrusive mechanism that infringes on privacy.[111] In October 2018, U.S. Vice President Mike Pence criticized the social credit system, describing it as "an Orwellian system premised on controlling virtually every facet of human life."[112] In January 2019, George Soros criticized the social credit system, saying it would give CCP leader Xi Jinping "total control over the people of China".[113][114]
From 2017 to 2018, researchers argued that the credit system would be part of the government's plan to automate their authoritarian rule over the Chinese population.[8][115][116] In June 2019, Samantha Hoffman of the Australian Strategic Policy Institute argued that "there are no genuine protections for the people and entities subject to the system... In China there is no such thing as the rule of law. Regulations that can be largely apolitical on the surface can be political when the Chinese Communist Party (CCP) decides to use them for political purposes."[117] In August 2018, Professor Genia Kostka of the Free University of Berlin stated in her published paper that "if successful in [their] effort, the Communist Party will possess a powerful means of quelling dissent, one that is comparatively low-cost and which does not require the overt (and unpopular) use of coercion by the state."[24] In December 2017, Human Rights Watch described the proposed social credit system as "chilling" and filled with arbitrary abuses.[118]
There has been a degree of misreporting and misconceptions in English-language mass media due to translation errors, sensationalism, conflicting information and lack of comprehensive analysis.[6][43][111][44][119] Examples of such popular misconceptions include a widespread misassumption that Chinese citizens are rewarded and punished based on a numerical score (social credit score) assigned by the system, that its decisions are taken by AI and that it constantly monitors Chinese citizens.[7][111][39][120][4][5]
Alibaba's Zhima Credit, also rendered in English as Sesame Credit, is a private market credit initiative which ultimately became a loyalty program.[16]: 54–55 It has frequently been mistaken for social credit.[7][16]: 55
In 2015, the PBOC designated eight private companies to pilot personal credit reporting (zhengxin) mechanisms.[16]: 55 Because the pilot programs were zhengxin mechanisms, they had little connection to the idea of social credit more broadly.[16]: 55 Zhima Credit was one of the pilot zhengxin mechanisms.[16]: 55 It was an opt-in scoring initiative proposed to assess users' creditworthiness even if those users lacked formal credit history.[16]: 55 It did not include standard industry metrics like income or debts; instead, it assessed factors like user spending ability and whether users showed up for travel bookings.[16]: 55
Following the release of Zhima Credit, there was significant media speculation that it might turn into a national social credit system by 2020.[16]: 55This did not occur.[16]: 55Zhima Credit and the other pilot initiatives were never linked to the broader financial system.[16]: 55Zhima Credit did not prove to be an effective credit evaluation mechanism because the data showed no statistically significant link between its metrics and a user's ability to repay loans.[16]: 55
In one interview, Alibaba's technology director suggested that people who played too many video games might be considered less trustworthy.[16]: 161Various news outlets around the world incorrectly suggested that people could lose social credit for playing too many video games.[16]: 161No video game playing metric was ever implemented.[16]: 161
Ultimately Zhima Credit became a loyalty program that rewarded users for using Alibaba services and shopping platforms.[16]: 55 The PBOC decided not to extend the credit licenses of the eight private pilot programs from 2015.[16]: 55
In 2021, the social credit system was popularized as an Internet meme on various social media platforms. VICE reported that the memes' popularity reflects the "widespread discontent toward the Chinese government over its restrictions of people's freedoms"; however, the article noted the trend continued the existing misapprehension and misinformation regarding the SCS mechanism, such as the idea that people in China are rewarded or punished based on a numerical "social credit score".[120] The joke is often posed as a positive or negative action towards the Chinese government which affects the poster's "social credit score" positively or negatively.[120]
According to a 2022 article in The Spectator, the Western narrative of the "social credit score" at the time received widespread mockery and satirical comments from the Chinese Internet community, due to the Western perception being drastically different from the reality in China.[124]
Under the Russian government's comprehensive plans to digitize the economy, around 80% of Russians will reportedly receive, in less than a decade, a digital profile that documents personal successes and failures. Observers have compared this to China's social credit system,[125] although Deputy Prime Minister Maxim Akimov has denied the comparison, calling a Chinese-style social credit system a "threat".[126][127]
In Spain, people who cannot repay their home mortgages may declare bankruptcy.[108]: 219Bankruptcy and foreclosure discharges the obligation to pay mortgage interest, but not mortgage principal.[108]: 219If mortgage principal is not paid, the debtor is placed on a list of untrustworthy people.[108]: 219
In 2018, theNew Economics Foundationcompared the Chinese citizen score to other rating systems in the United Kingdom. These included using data from a citizen's credit score, phone usage, rent payment, and so on, to filter job applications, determine access to social services, determine advertisements served, etc.[128][129]
Some media outlets have compared the social credit system tocredit scoring systems in the United States.[130][131][132]According toMike ElganofFast Company, "an increasing number of societal "privileges" related to transportation, accommodations, communications and the rates US citizens pay for services (like insurance) are either controlled by technology companies or affected by how we use technology services. AndSilicon Valley's rules for being allowed to use their services are getting stricter."[130]
In 2017, Venezuela started developing a smart-card ID known as the "carnet de la patria" or "fatherland card", with the help of the Chinese telecom company ZTE.[133] The system included a database which stores details like birthdays, family information, employment and income, property owned, medical history, state benefits received, presence on social media, membership in a political party and whether a person voted.[133] Many in Venezuela have expressed concern that the card is an attempt to tighten social control through monitoring all aspects of daily life.[134][135]
|
https://en.wikipedia.org/wiki/Social_Credit_System
|
Social currency refers to the actual and potential resources arising from presence in social networks and communities, both digital and offline. In essence, it is an action taken by a company, or a stance it embodies, that gives consumers a sense of value when they associate with the brand; this humanization of the brand generates loyalty and "word of mouth" virality for the organization. The concept derives from Pierre Bourdieu's social capital theory and relates to increasing one's sense of community, granting access to information and knowledge, helping to form one's identity, and providing status and recognition.
In their study on social currency, the consulting company Vivaldi Partners defined social currency as the extent to which people share the brand, or information about the brand, as part of their everyday social lives at work or at home. This sharing helps companies create unique brand identities and earn permission to interact with consumers or customers. Building social currency is thus an important investment companies can make to create value for themselves. Social currency moves social initiatives and campaigns beyond marketing and communications efforts to impacting and changing entire industries and categories. Consumers and customers benefit as well, as they increasingly participate in social platforms and use social technologies.[1]
Social currency can be divided into six dimensions or levers: affiliation, conversation, information, utility, identity, and advocacy.
These levers work by creating a sense of community, and thereby a strong affiliation, among the customers, consumers and users of a brand. Having social currency increases a brand's engagement with consumers and its interaction with customers; by adding to the customer conversation around the brand, it grants access to information and knowledge, which is shared within the customer base. Belonging to a group also helps users of a brand grow personally, by accessing new utility and by developing their own identity in the respective peer group. A strong attachment to a brand will also be a core driver of active advocacy: recommending or even defending the brand.
The Social Currency Wheel is an alternative to the traditional brand funnel or customer decision journey. The Social Currency Wheel evaluates the impact of social behaviors of customers on social currency and three outcomes: consideration, purchase, and loyalty.[2] The goal of the Social Currency Wheel is to explain how customers' social processes and behaviors drive each of the conversions. Marketers can engage with customers during these social processes and behaviors, and influence the outcomes.
Social currency is information shared which encourages further social encounters. It can be a factor in establishing fans of sports or television programmes.[3] As well as talking about sports, attendance at sports events themselves is a form of social currency. Young men in particular feel the need to learn about sporting current events in order to facilitate social interaction. However, these types of fan can easily move to a new sport, team, or programme in the future if the new one offers more social currency.[4] Women may use jewellery and clothes as part of their social currency, providing a way into communication.[5]
Proper social currency tactics include responding to comments, sharing posts to groups and forums, and strategic timing and placement of posts.[6]
|
https://en.wikipedia.org/wiki/Social_currency
|
Social profiling is the process of constructing a social media user's profile using his or her social data. In general, profiling refers to the data science process of generating a person's profile with computerized algorithms and technology.[1] There are various platforms for sharing this information with the proliferation of growing popular social networks, including but not limited to LinkedIn, Google+, Facebook and Twitter.[2]
A person's social data refers to the personal data that they generate either online or offline[3] (for more information, see social data revolution). A large amount of these data, including one's language, location and interests, is shared through social media and social networks. Users join multiple social media platforms, and their profiles across these platforms can be linked using different methods[4] to obtain their interests, locations, content, and friend lists. Altogether, this information can be used to construct a person's social profile.
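As a rough illustration of one such linking method, the sketch below matches user handles across two platforms by string similarity. The handle lists, threshold, and use of difflib are illustrative assumptions; real linkage methods combine many more signals (display names, avatars, posting times, friend lists).

```python
# Toy cross-platform profile linkage via handle similarity.
# Handles, threshold, and method are illustrative, not a real system.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_profiles(handles_a, handles_b, threshold=0.8):
    """Pair each handle on platform A with its best match on platform B."""
    links = []
    for a in handles_a:
        best = max(handles_b, key=lambda b: similarity(a, b))
        if similarity(a, best) >= threshold:
            links.append((a, best))
    return links

print(link_profiles(["jane_doe88", "mark.smith"], ["JaneDoe88", "msmith2020"]))
# [('jane_doe88', 'JaneDoe88')]
```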
Meeting users' satisfaction in information collection is becoming more challenging, because the explosive increase in online data generates too much "noise", which interferes with the collection process. Social profiling is an emerging approach to overcome this challenge by introducing the concept of personalized search while taking into account user profiles generated from social network data. A study reviews and classifies research inferring users' social profile attributes from social media data as individual and group profiling, highlighting the existing techniques, the data sources they utilize, and their limitations and challenges.
The prominent approaches adopted include machine learning, ontology, and fuzzy logic. Social media data from Twitter and Facebook have been used by most of the studies to infer the social attributes of users. The literature shows that user social attributes, including age, gender, home location, wellness, emotion, opinion, relation and influence, still need to be explored.[5]
The ever-increasing online content has resulted in a lack of proficiency in centralized search engines' results,[6][7] which can no longer satisfy users' demand for information. A possible solution that would increase the coverage of search results is meta-search engines,[6] an approach that collects information from numerous centralized search engines. A new problem thus emerges: too much data and too much noise are generated in the collection process.
Therefore, a new technique called personalized meta-search engines was developed. It makes use of a user's profile (largely the social profile) to filter search results. A user's profile can be a combination of a number of things, including but not limited to "a user's manually selected interests, user's search history",[6] and personal social network data.
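A minimal sketch of this filtering idea, under the assumption that the profile is reduced to a set of interest terms and each merged result carries a rank score from its source engine (both structures are invented for illustration):

```python
# Toy profile-based re-ranking for a personalized meta-search engine.
# Result dicts and the 0.5 interest weight are illustrative assumptions.
def rerank(results: list, interests: set) -> list:
    def score(result: dict) -> float:
        terms = set(result["title"].lower().split())
        overlap = len(terms & interests)          # how well the result matches the profile
        return result["engine_rank_score"] + 0.5 * overlap
    return sorted(results, key=score, reverse=True)

merged = [
    {"title": "Jaguar speed in the wild", "engine_rank_score": 0.8},
    {"title": "Jaguar car review 2015", "engine_rank_score": 0.9},
]
# A user interested in animals sees the wildlife result promoted past the
# higher-ranked car review.
print(rerank(merged, interests={"wild", "animals", "speed"}))
```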
According to Samuel D. Warren II and Louis Brandeis (1890), disclosure of private information and the misuse of it can hurt people's feelings and cause considerable damage in people's lives.[8] Social networks provide people access to intimate online interactions; therefore, information access control, information transactions, privacy issues, connections and relationships on social media have become important research fields and are subjects of concern to the public.
Ricard Fogues and other co-authors state that "any privacy mechanism has at its base an access control" that dictates "how permissions are given, what elements can be private, how access rules are defined, and so on".[9] Current access controls for social media accounts tend to be very simplistic: there is very limited diversity in the categories of relationships for social network accounts. On most platforms, users' relationships to others are categorized only as "friend" or "non-friend", and people may leak important information to "friends" inside their social circle who are not necessarily the users they consciously want to share the information with.[9] The section below is concerned with social media profiling and what profiling information on social media accounts can achieve.
A lot of information is voluntarily shared on online social networks, such as photos and updates on life activities (new job, hobbies, etc.). People rest assured that different social network accounts on different platforms will not be linked as long as they do not grant permission to these links. However, according to Diane Gan, information gathered online enables "target subjects to be identified on other social networking sites such as Foursquare, Instagram, LinkedIn, Facebook and Google+, where more personal information was leaked".[10]
The majority of social networking platforms use the "opt out approach" for their features. If users wish to protect their privacy, it is their own responsibility to check and change the privacy settings, as a number of them are set to a default option.[10] Major social network platforms have developed geo-tag functions that are in popular usage. This is concerning because 39% of users have experienced profile hacking; 78% of burglars have used major social media networks and Google Street View to select their victims; and an astonishing 54% of burglars attempted to break into empty houses when people posted their status updates and geo-locations.[11]
Formation and maintenance of social media accounts and their relationships with other accounts are associated with various social outcomes.[12] In 2015, for many firms, customer relationship management was essential and was partially done through Facebook.[13] Before the emergence and prevalence of social media, customer identification was primarily based upon information that a firm could directly acquire:[14] for example, through a customer's purchasing process or a voluntary act of completing a survey/loyalty program. The rise of social media, however, has greatly changed the approach of building a customer's profile/model based on available data. Marketers now increasingly seek customer information through Facebook;[13] this may include a variety of information users disclose to all users or some users on Facebook: name, gender, date of birth, e-mail address, sexual orientation, marital status, interests, hobbies, favorite sports team(s), favorite athlete(s), or favorite music, and, more importantly, Facebook connections.[13]
However, due to the design of privacy policies, acquiring true information on Facebook is no trivial task. Facebook users often either refuse to disclose true information (sometimes using pseudonyms) or set information to be visible only to friends; Facebook users who "LIKE" a page are also hard to identify. To profile and cluster users online, marketers and companies can and will access the following kinds of data: gender, the IP address and city of each user through the Facebook Insights page, who "LIKED" a certain user, a list of all the pages that a person "LIKED" (transaction data), other people that a user follows (even beyond the first 500, which ordinary viewers usually cannot see) and all publicly shared data.[13]
First launched on the Internet in March 2006, Twitter is a platform on which users can connect and communicate with any other user in just 280 characters.[10] Like Facebook, Twitter is also a crucial channel through which users leak important information, often unconsciously, that can be accessed and collected by others.
According to Rachel Nuwer, in a sample of 10.8 million tweets by more than 5,000 users, the posted and publicly shared information was enough to reveal a user's income range.[15] Daniel Preoţiuc-Pietro, a postdoctoral researcher from the University of Pennsylvania, and his colleagues were able to categorize 90% of users into corresponding income groups. Their collected data, after being fed into a machine-learning model, generated reliable predictions on the characteristics of each income group.[15]
The mobile app called Streamd.in displays live tweets on Google Maps by using geo-location details attached to the tweet, and traces the user's movement in the real world.[10]
The advent and universality of social media networks have boosted the role of images and visual information dissemination.[16] Many types of visual information on social media transmit messages from the author, location information, and other personal information. For example, a user may post a photo of themselves in which landmarks are visible, which can enable other users to determine where they are. In a study by Cristina Segalin, Dong Seon Cheng and Marco Cristani, it was found that profiling the photos in users' posts can reveal personal traits such as personality and mood.[16] In the study, convolutional neural networks (CNNs) are introduced. The approach builds on the main characteristics of computational aesthetics (CA), defined by Hoenig (Hoenig, 2005) as emphasizing "computational methods", "human aesthetic point of view", and "the need to focus on objective approaches".[16] This tool can extract and identify content in photos.
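A minimal sketch of the general technique (not the authors' actual pipeline): a pretrained CNN serves as a generic feature extractor for posted photos, and a simple classifier is fitted on a hypothetical labeled trait. The file paths and labels are placeholders, and the weights API assumes a recent torchvision.

```python
# Sketch: CNN embeddings of photos + a linear classifier for a hypothetical
# binary trait (e.g. self-reported mood). Paths/labels are placeholders.
import torch
import torchvision.models as models
from PIL import Image
from sklearn.linear_model import LogisticRegression

weights = models.ResNet18_Weights.DEFAULT       # pretrained ImageNet weights
cnn = models.resnet18(weights=weights)
cnn.fc = torch.nn.Identity()                    # drop the head; keep 512-dim embeddings
cnn.eval()
preprocess = weights.transforms()               # resize/crop/normalize expected by the model

def embed(path: str) -> torch.Tensor:
    """Return a 512-dim embedding for one photo."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return cnn(img).squeeze(0)

photo_paths, labels = ["a.jpg", "b.jpg"], [0, 1]  # hypothetical labeled dataset
X = torch.stack([embed(p) for p in photo_paths]).numpy()
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```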
In a study called "A Rule-Based Flickr Tag Recommendation System", the author suggests personalized tag recommendations,[17]largely based on user profiles and other web resources. It has proven to be useful in many aspects: "web content indexing", "multimedia data retrieval", and enterprise Web searches.[17]
In 2011, marketers and retailers were increasing their market presence by creating their own pages on social media, on which they posted information, asked people to like and share to enter contests, and much more. Studies in 2011 showed that, on average, a person spent about 23 minutes per day on social networking sites.[18] Therefore, companies from small to large were investing in gathering information on user behavior, ratings, reviews, and more.[19]
Until 2006, online communication was not content-led in terms of the amount of time people spent online. Since then, however, content sharing and creation has become the primary online activity of general social media users, and it has forever changed online marketing.[20] In the book Advanced Social Media Marketing,[21] the author gives an example of how a New York wedding planner might identify his audience when marketing on Facebook. Some of these categories may include: (1) those who live in the United States; (2) those who live within 50 miles of New York; (3) those aged 21 and older; (4) engaged females.[21] Whether one chooses to pay cost per click or cost per impression/view, "the cost of Facebook Marketplace ads and Sponsored Stories is set by your maximum bid and the competition for the same audiences".[21] The cost of clicks is usually $0.50–1.50 each.
Klout is a popular online tool that focuses on assessing a user's social influence through social profiling. It takes several social media platforms (such as Facebook, Twitter, etc.) and numerous aspects into account and generates a user's score from 1 to 100. Regardless of one's number of likes for a post or connections on LinkedIn, social media contains plentiful personal information. Klout generates a single score that indicates a person's influence.[22]
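Klout's actual algorithm is proprietary; the sketch below only illustrates the general idea of collapsing several normalized engagement signals into a single 1-100 score. The metric names, caps, and weights are invented.

```python
# Toy combined influence score in the 1-100 range. All parameters are invented;
# this is not Klout's algorithm, just an illustration of score aggregation.
def influence_score(metrics: dict, caps: dict, weights: dict) -> int:
    score = 0.0
    for name, weight in weights.items():
        # normalize each raw metric to [0, 1] against a saturation cap
        normalized = min(metrics.get(name, 0.0) / caps[name], 1.0)
        score += weight * normalized
    return max(1, round(100 * score / sum(weights.values())))

caps = {"retweets": 500, "likes": 5000, "connections": 1000}
weights = {"retweets": 3.0, "likes": 1.0, "connections": 2.0}
print(influence_score({"retweets": 120, "likes": 800, "connections": 450}, caps, weights))
```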
In a study called "How Much Klout do You Have...A Test of System Generated Cues on Source Credibility" done by Chad Edwards, Klout scores can influence people's perceived credibility.[23]As Klout Score becomes a popular combined-into-one-score method of accessing people's influence, it can be a convenient tool and a biased one at the same time. A study of how social media followers influence people's judgments done by David Westerman illustrates that possible bias that Klout may contain.[24]In one study, participants were asked to view six identical mock Twitter pages with only one major independent variable: page followers. Result shows that pages with too many or too fewer followers would both decrease its credibility, despite its similar content. Klout score may be subject to the same bias as well.[24]
While this is sometimes used during the recruitment process, it remains controversial.
Kred not only assigns each user an influence score, but also allows each user to claim a Kred profile and Kred account. Through this platform, users can view how top influencers engage with their online communities and how each of their own online actions affects their influence scores.
Kred gives its audience several suggestions for increasing influence: (1) be generous with your audience, feel comfortable sharing content from your friends, and tweet at others; (2) join an online community; (3) create and share meaningful content; (4) track your progress online.
Follower Wonk is specifically targeted at Twitter analytics; it helps users understand follower demographics and optimize their activities to find which activity attracts the most positive feedback from followers.
Keyhole is a hashtag tracking and analytics tool that tracks Instagram, Twitter and Facebook hashtag data. It is a service that lets users track which top influencers are using a certain hashtag and what other demographic information is associated with the hashtag. When a hashtag is entered on its website, it automatically samples users who currently use the tag, allowing the user to analyze each hashtag of interest.
The prevalence of the Internet and social media has provided online activists with both a new platform for activism and their most popular tool. While online activism might stir up great controversy and trends, few people actually participate in or sacrifice for relevant events. Analysing the profiles of online activists thus becomes an interesting topic. In a study by Harp and co-authors of online activists in China, Latin America and the United States, the majority of online activists in Latin America and China were male, with a median income of $10,000 or less, while the majority of online activists in the United States were female, with a median income of $30,000–$69,999; the education level of online activists in the United States tended to be postgraduate, while activists in other countries had lower education levels.[25]
A closer examination of their shared online content shows that the most commonly shared information online falls into five types:
The Chinese government hopes to establish a "social-credit system" that aims to score the "financial creditworthiness of citizens", social behavior and even political behaviour.[26] This system will combine big data and social profiling technologies. According to Celia Hatton from BBC News, everyone in China will be expected to enroll in a national database that includes and automatically calculates fiscal information, political behavior, social behavior and daily life including minor traffic violations – a single score that evaluates a citizen's trustworthiness.[27]
Credibility scores, social influence scores and other comprehensive evaluations of people are not rare in other countries. However, China's "social-credit system" remains controversial, as this single score may reflect every aspect of a person's life.[27] Indeed, "much about the social-credit system remains unclear".[26]
Although the implementation of the social credit score remains controversial in China, the Chinese government aims to fully implement this system by 2018.[28] According to Jake Laband (the deputy director of the Beijing office of the US-China Business Council), low credit scores will "limit eligibility for financing, employment, and Party membership, as well [as] restrict real estate transactions and travel." The social credit score will be affected not only by legal criteria but also by social criteria, such as contract breaking. However, the huge amount of data that will be analyzed by the system has raised great privacy concerns for big companies.
|
https://en.wikipedia.org/wiki/Social_profiling
|
In psychology and sociology, a trust metric is a measurement or metric of the degree to which one social actor (an individual or a group) trusts another social actor. Trust metrics may be abstracted in a manner that can be implemented on computers, making them of interest for the study and engineering of virtual communities, such as Friendster and LiveJournal.
Trust escapes simple measurement because its meaning is too subjective for universally reliable metrics, and because it is a mental process, unavailable to instruments. There is a strong argument[1] against the use of simplistic metrics to measure trust, due to the complexity of the process and the 'embeddedness' of trust, which makes it impossible to isolate trust from related factors.
There is no generally agreed set of properties that make a particular trust metric better than others, as each metric is designed to serve different purposes; e.g.,[2] provides a classification scheme for trust metrics. Two groups of trust metrics can be identified: empirical metrics and formal metrics.
Trust metrics enable trust modelling[3] and reasoning about trust. They are closely related to reputation systems. Simple forms of binary trust metrics can be found e.g. in PGP.[4] The first commercial forms of trust metrics in computer software were in applications like eBay's Feedback Rating. Slashdot introduced its notion of karma, earned for activities perceived to promote group effectiveness, an approach that has been very influential in later virtual communities.[citation needed]
Empirical metrics capture the value of trust by exploring the behavior or introspection of people, to determine the perceived or expressed level of trust. Those methods combine theoretical background (determining what it is that they measure) with defined set of questions and statistical processing of results.
The willingness to cooperate, as well as actual cooperation, is commonly used to both demonstrate and measure trust. The actual value (level of trust and/or trustworthiness) is assessed from the difference between observed and hypothetical behaviors, i.e. those that would have been anticipated in the absence of cooperation.
Surveys capture the level of trust by means of observation or introspection, but without engaging in any experiments. Respondents usually provide answers to a set of questions or statements, and responses are structured, e.g., according to a Likert scale. Differentiating factors are the underlying theoretical background and contextual relevance.
Among the earliest surveys are McCroskey's scales,[5] which have been used to determine the authoritativeness (competence) and character (trustworthiness) of speakers. Rempel's trust scale[6] and Rotter's scale[7] are quite popular for determining the level of interpersonal trust in different settings. The Organizational Trust Inventory (OTI)[8] is an example of an exhaustive, theory-driven survey that can be used to determine the level of trust within an organisation.
For a particular research area, a more specific survey can be developed. For example, the interdisciplinary model of trust[9] has been verified using a survey, while[10] uses a survey to establish the relationship between design elements of a web site and its perceived trustworthiness.
Another empirical method to measure trust is to engage participants in experiments, treating the outcomes of such experiments as estimates of trust. Several games and game-like scenarios have been tried, some of which estimate trust or confidence in monetary terms (see[11] for an interesting overview).
Games of trust are designed so that their Nash equilibrium differs from the Pareto optimum: no player alone can maximize their own utility by altering their selfish strategy without cooperation, while cooperating partners can benefit. Trust can therefore be estimated on the basis of the monetary gain attributable to cooperation.
The original 'game of trust' was described in[12] as an abstracted investment game between an investor and their broker. The game can be played once or several times, between randomly chosen players or in pairs that know each other, yielding different results.
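A minimal simulation of this investment game, assuming the common parameterization in which the transferred amount is tripled. The share of the endowment sent serves as the behavioral estimate of trust, and the uniformly random strategies here stand in for human players.

```python
# Toy investment ("trust") game: investor sends money, it is tripled, the
# trustee returns some fraction. Parameter choices are illustrative.
import random

def play_round(endowment=10.0, multiplier=3.0):
    sent = random.uniform(0, endowment)               # investor's choice: a proxy for trust
    returned = random.uniform(0, multiplier * sent)   # trustee's choice: trustworthiness
    investor_payoff = endowment - sent + returned
    trustee_payoff = multiplier * sent - returned
    return sent / endowment, investor_payoff, trustee_payoff

rounds = [play_round() for _ in range(1000)]
avg_trust = sum(r[0] for r in rounds) / len(rounds)
print(f"estimated trust (mean share of endowment sent): {avg_trust:.2f}")
```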
Several variants of the game exist, focusing on different aspects of trust as the observable behaviour. For example, the rules of the game can be reversed into what can be called a game of distrust,[13] a declaratory phase can be introduced,[14] or the rules can be presented in a variety of ways, altering the perception of participants.
Other interesting games include binary-choice trust games,[15] the gift-exchange game,[16] cooperative trust games,[citation needed] and various other forms of social games. The Prisoner's Dilemma[17] in particular is popularly used to link trust with economic utility and demonstrate the rationality behind reciprocity. For multi-player games, different forms of close market simulations exist.[18]
Formal metrics focus on facilitating trust modelling, specifically for large-scale models that represent trust as an abstract system (e.g. a social network or web of trust). Consequently, they may provide weaker insight into the psychology of trust, or into the particulars of empirical data collection. Formal metrics tend to have strong foundations in algebra, probability or logic.
There is no widely recognised way to attribute value to the level of trust, with each representation of a 'trust value' claiming certain advantages and disadvantages. There are systems that assume only binary values,[19] that use a fixed scale,[20] where confidence ranges from −100 to +100 (while excluding zero),[21] from 0 to 1,[22][23] or over [−1, +1);[24] where confidence is discrete or continuous, one-dimensional or multi-dimensional.[25] Some metrics use an ordered set of values without attempting to convert them to any particular numerical range (e.g.[26]); see[27] for a detailed overview.
There is also disagreement about the semantics of some values. The disagreement regarding the attribution of values to levels of trust is especially visible when it comes to the meaning of zero and of negative values. For example, zero may indicate either the lack of trust (but not distrust), a lack of information, or a deep distrust. Negative values, if allowed, usually indicate distrust, but there is doubt[28] whether distrust is simply trust with a negative sign or a phenomenon of its own.
Subjective probability[29] focuses on a trustor's self-assessment of their trust in the trustee. Such an assessment can be framed as an anticipation regarding the future behaviour of the trustee, and expressed in terms of probability. The probability is subjective, as it is specific to the given trustor, their assessment of the situation, the information available to them, etc. In the same situation, other trustors may arrive at a different subjective probability.
Subjective probability creates a valuable link between formalisation and empirical experimentation. Formally, subjective probability can benefit from the available tools of probability and statistics. Empirically, subjective probability can be measured through one-sided bets. Assuming that the potential gain is fixed, the amount that a person is willing to bet can be used to estimate their subjective probability of a transaction; for example, a person willing to stake at most $60 on a bet that pays $100 if the trustee cooperates reveals a subjective probability of about 0.6.
The logic for uncertain probabilities (subjective logic) has been introduced by Josang,[30][31] where uncertain probabilities are called subjective opinions. This concept combines a probability distribution with uncertainty, so that each opinion about trust can be viewed as a distribution of probability distributions, where each distribution is qualified by an associated uncertainty. The foundation of the trust representation is that an opinion (an evidence or a confidence) about trust can be represented as a four-tuple (trust, distrust, uncertainty, base rate), where trust, distrust and uncertainty must add up to one and hence are dependent through additivity.
Subjective logic is an example of computational trust where uncertainty is inherently embedded in the calculation process and is visible at the output. It is not the only one: it is possible, e.g., to use a similar quadruplet (trust, distrust, unknown, ignorance) to express the value of confidence,[32] as long as the appropriate operations are defined. Despite the sophistication of the subjective opinion representation, the particular value of a four-tuple related to trust can be easily derived from a series of binary opinions about a particular actor or event, thus providing a strong link between this formal metric and empirically observable behaviour.
Finally, there are CertainTrust[33] and CertainLogic.[34] Both share a common representation, which is equivalent to subjective opinions but based on three independent parameters named 'average rating', 'certainty', and 'initial expectation'. Hence, there is a bijective mapping between the CertainTrust triplet and the four-tuple of subjective opinions.
Fuzzy systems,[35] used as trust metrics, can link natural language expressions with a meaningful numerical analysis.
The application of fuzzy logic to trust has been studied in the context of peer-to-peer networks[36] to improve peer rating. Also, for grid computing,[37] it has been demonstrated that fuzzy logic allows security issues to be solved in a reliable and efficient manner.
The set of properties that should be satisfied by a trust metric varies, depending on the application area. The following is a list of typical properties.
Transitivity is a highly desired property of a trust metric.[38] In situations where A trusts B and B trusts C, transitivity concerns the extent to which A trusts C. Without transitivity, trust metrics are unlikely to be used to reason about trust in more complex relationships.
The intuition behind transitivity follows the everyday experience of 'friends of a friend' (FOAF), the foundation of social networks. However, the attempt to attribute exact formal semantics to transitivity reveals problems related to the notion of a trust scope or context. For example,[39] defines conditions for the limited transitivity of trust, distinguishing between direct trust and referral trust. Similarly,[40] shows that simple trust transitivity does not always hold, based on analysis of the Advogato model, and consequently proposes new trust metrics.
The simple, holistic approach to transitivity is characteristic of social networks (FOAF, Advogato). It follows everyday intuition and assumes that trust and trustworthiness apply to the whole person, regardless of the particular trust scope or context. If one can be trusted as a friend, one can also be trusted to recommend or endorse another friend. Therefore, transitivity is semantically valid without any constraints, and is a natural consequence of this approach.
The more thorough approach distinguishes between different scopes/contexts of trust, and does not allow for transitivity between contexts that are semantically incompatible or inappropriate. A contextual approach may, for instance, distinguish between trust in a particular competence, trust in honesty, trust in the ability to formulate a valid opinion, or trust in the ability to provide reliable advice about other sources of information. A contextual approach is often used in trust-based service composition.[41] The understanding that trust is contextual (has a scope) is a foundation of collaborative filtering.
For a formal trust metric to be useful, it should define a set of operations over values of trust in such a way that the result of those operations produces values of trust. Usually at least two elementary operators are considered: one that propagates trust along a chain of relationships (often called discounting or transfer), and one that combines trust values obtained from parallel sources (fusion or consensus).
The exact semantics of both operators are specific to the metric. Even within one representation, there is still a possibility for a variety of semantic interpretations. For example, for the representation as the logic for uncertain probabilities, trust fusion operations can be interpreted by applying different rules (cumulative fusion, averaging fusion, constraint fusion (Dempster's rule), Yager's modified Dempster's rule, Inagaki's unified combination rule, Zhang's centre combination rule, Dubois and Prade's disjunctive consensus rule, etc.). Each interpretation leads to different results, depending on the assumptions for trust fusion in the particular situation to be modelled. See[42][43] for detailed discussions.
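A sketch of two such operators over the four-tuple representation introduced earlier, assuming equal base rates for simplicity: a Jøsang-style discounting operator for chains (transitivity) and the cumulative fusion rule, which is only one of the interpretations listed above.

```python
# Sketch of subjective-logic style operators over (belief, disbelief,
# uncertainty, base_rate) opinions. Equal base rates assumed; cumulative
# fusion is one possible interpretation, not the only one.
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float        # trust (belief)
    d: float        # distrust (disbelief)
    u: float        # uncertainty; b + d + u must equal 1
    a: float = 0.5  # base rate

    def expectation(self) -> float:
        return self.b + self.a * self.u

def discount(ab: Opinion, bc: Opinion) -> Opinion:
    """A's derived opinion of C, given A's opinion of B and B's opinion of C."""
    return Opinion(ab.b * bc.b, ab.b * bc.d, ab.d + ab.u + ab.b * bc.u, bc.a)

def fuse(x: Opinion, y: Opinion) -> Opinion:
    """Cumulative fusion of two independent opinions about the same target."""
    k = x.u + y.u - x.u * y.u
    return Opinion((x.b * y.u + y.b * x.u) / k,
                   (x.d * y.u + y.d * x.u) / k,
                   (x.u * y.u) / k, x.a)

ab, bc = Opinion(0.7, 0.1, 0.2), Opinion(0.5, 0.3, 0.2)
print(discount(ab, bc), fuse(ab, bc).expectation())
```

Both operators preserve the additivity constraint: in each result, belief, disbelief, and uncertainty again sum to one.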
The growing size of networks of trust makes scalability another desired property, meaning that it is computationally feasible to calculate the metric for large networks. Scalability usually places two requirements on the metric:
Attack resistance is an important non-functional property of trust metrics which reflects their ability not to be overly influenced by agents who try to manipulate the trust metric and who participate in bad faith (i.e. who aim to abuse the presumption of trust).
The free software developer resource Advogato is based on a novel approach to attack-resistant trust metrics by Raph Levien. Levien observed that Google's PageRank algorithm can be understood as an attack-resistant trust metric rather similar to the one behind Advogato.
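A minimal sketch in the spirit of Levien's observation, not Advogato's actual network-flow metric: trust is propagated from a set of trusted seed accounts through the certification graph, computed as a personalized PageRank. Accounts not certified, directly or indirectly, from the seed region (such as a self-certifying attacker clique) accumulate no trust.

```python
# Personalized-PageRank-style trust propagation from trusted seeds.
# Graph, seeds, and parameters are illustrative; dangling nodes simply
# absorb trust in this simplified sketch.
def trust_rank(certs: dict, seeds: set, damping=0.85, iterations=50) -> dict:
    nodes = set(certs) | {m for out in certs.values() for m in out}
    rank = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    for _ in range(iterations):
        # teleportation mass goes only to the seeds, never to arbitrary nodes
        new = {n: ((1 - damping) / len(seeds) if n in seeds else 0.0) for n in nodes}
        for n, out in certs.items():
            for m in out:
                new[m] += damping * rank[n] / len(out)  # trust splits over certifications
        rank = new
    return rank

certs = {"alice": ["bob", "carol"], "bob": ["carol"], "mallory": ["mallory2"]}
print(trust_rank(certs, seeds={"alice"}))
# mallory and mallory2 stay at zero: certifying each other earns them nothing.
```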
|
https://en.wikipedia.org/wiki/Trust_metric
|
Down and Out in the Magic Kingdom is a 2003 science fiction book, the first novel by Canadian author and digital-rights activist Cory Doctorow. It depicts people competing over how new technology is being used at Walt Disney World, in a post-scarcity world with an economy based on reputation. Concurrent with its publication by Tor Books, Doctorow released the entire text of the novel under a Creative Commons noncommercial license on his website, allowing the whole text of the book to be freely read and distributed without needing any further permission from him or his publisher.
The novel was nominated for the Nebula Award for Best Novel in 2004.
This future history book takes place in the 22nd century, mostly in Walt Disney World. Disney World is run by rival adhocracies, each dedicated to providing the best experience to the park's visitors and competing for the Whuffie the guests offer. In the post-scarcity world of the novel, Whuffie is a currency-like system that primarily measures the esteem of others, or in the case of extremely low Whuffie, their disdain.
The story is told in first person by Julius, whose old college buddy Dan used to be one of the most popular people in the country (as measured by Whuffie). Julius and girlfriend Lil are working with the committee (called an ad hoc) that oversees the Magic Kingdom's Liberty Square. Dan, who has hit rock bottom and lost all his Whuffie, doesn't believe in rejuvenation and wishes to die, but not while he's at rock bottom. He moves in with Julius and Lil in order to rebuild his life. At the park, Julius is murdered and soon refreshed. By the time he wakes up, Debra's ad hoc group has taken control of the Hall of Presidents, and is planning to replace its old-fashioned animatronic robots with the synthetic memory imprinting of the experience of being the president for a moment. Julius believes that this rival committee had him killed as a distraction so that they could seize the Hall in the interim.
Fearing that they will next try to revamp his favorite ride, the Haunted Mansion, he resolves to take a stand against the virtualization of the park, endangering his relationship with both Lil and Dan; eventually Lil leaves Julius for Dan. Julius finally "cracks" when he sees his dreams turned to dust and he bashes up the attractions in the Hall of Presidents, in the process also damaging his own cranial interface to the point that he can no longer back himself up. This pushes his Whuffie to ground level when he is caught, and gives Debra and her colleagues enough "sympathy Whuffie" to take over the Haunted Mansion, by invitation of the same fans that Julius had recruited to work in the Mansion.
Dan leaves Lil, Julius is kicked out of the ad hoc, and his Whuffie hits rock bottom — low enough that others take his possessions with impunity and elevators don't stop for him. Then comes the revelation: a few days before Dan's planned suicide by lethal injection, Dan reveals that it was in fact he who had arranged to kill Julius, in collusion with Debra, in exchange for the Whuffie that her team could give him. Dan had asked one of his converts from his missionary days, a young girl, to do the dirty work. Debra then had herself restored from a backup made before this plan, so that she would honestly believe that she wasn't involved. He makes this public; Debra is thrown out, Julius gets sympathy Whuffie and develops a friendly affection for his sweet young murderer. He never restores himself, because doing so would erase his memories of that entire year, his last with Dan, but lives with his damaged interface. The book is his attempt to manually document the happenings of the previous year so that, when this incarnation is eventually killed by age or accident, his restored backup will have a partial record of the transpiring events. Dan decides not to take a lethal injection, but to deadhead (putting oneself into a voluntary coma) till the heat death of the Universe.
On February 12, 2004, Doctorow re-licensed his book under a Creative Commons Attribution-Noncommercial-Share Alike (by-nc-sa) license. Under the new license, one can now make derivative works from the book without permission, provided the license and attribution are retained with each new work and the derivatives are not used commercially. Already, fans of the book have begun Russian and Spanish translations, an audio book version, and several amusing re-arrangements of the text. Doctorow has noted that he is pleased that people are building on his work, and that he hopes that further innovations will follow.
Despite these measures, in 2007 invalid DMCA takedown notices were sent by the Science Fiction Writers of America (SFWA) with regard to this novel.[1][2][3] Cory Doctorow said "Down and Out in the Magic Kingdom was the first novel released under a Creative Commons license, and I've spent the past four years exhorting fans to copy my work and share it. Now I've started to hear from readers who've seen this notice and concluded that I am a hypocrite who uses SFWA to send out legal threats to people who heeded my exhortation."[1]
|
https://en.wikipedia.org/wiki/Whuffie
|
The alt-right pipeline (also called the alt-right rabbit hole) is a proposed conceptual model regarding internet radicalization toward the alt-right movement. It describes a phenomenon in which consuming provocative right-wing political content, such as antifeminist or anti-SJW ideas, gradually increases exposure to the alt-right or similar far-right politics. It posits that this interaction takes place due to the interconnected nature of political commentators and online communities, allowing members of one audience or community to discover more extreme groups.[1][2] This process is most commonly associated with, and has been documented on, the video platform YouTube, and is largely driven by the way algorithms on various social media platforms recommend content similar to what users already engage with, which can quickly lead users down rabbit holes.[2][3][4] The effect of YouTube's algorithmic bias in radicalizing users has been replicated by one study,[2][5][6][7] although two other studies found little or no evidence of a radicalization process.[3][8][9]
Many political movements have been associated with the pipeline concept. The intellectual dark web,[2] libertarianism,[10] the men's rights movement,[11] and the alt-lite movement[2] have all been identified as possibly introducing audiences to alt-right ideas. Audiences that seek out and are willing to accept extreme content in this fashion typically consist of young men, commonly those who experience significant loneliness and seek belonging or meaning.[12]
The alt-right pipeline may be a contributing factor to domestic terrorism.[13][14] Many social media platforms have acknowledged this path of radicalization and have taken measures to prevent it, including the removal of extremist figures and rules against hate speech and misinformation.[3][12] Left-wing movements, such as BreadTube, also oppose the alt-right pipeline and "seek to create a 'leftist pipeline' as a counterforce to the alt-right pipeline."[15]
Use of the internet allows individuals with heterodox beliefs to alter their environment, which in turn has transformative effects on the user. Influence from external sources such as the internet can be gradual, so that the individual is not immediately aware of their changing understanding or surroundings. Members of the alt-right refer to this radicalization process as "taking the red pill", in reference to the method of immediately achieving greater awareness in The Matrix. This is in contrast to the gradual nature of radicalization described by the alt-right pipeline.[14][16] Many on the far-right recognize the potential of this radicalization method and actively share right-wing content with the intention of gradually radicalizing those around them. The use of racist imagery or humor may be employed by these individuals under the guise of irony or insincerity to make alt-right ideas palatable and acceptable to newer audiences. The nature of internet memes means they can easily be recreated and spread to many different internet communities.[16][17]
YouTube has been identified as a major element in the alt-right pipeline. This is facilitated through an "Alternative Influence Network", in which various right-wing scholars, pundits, and internet personalities interact with one another to boost performance of their content. These figures may vary in their ideologies between conservatism, libertarianism, or white nationalism, but they share a common opposition to feminism, progressivism, and social justice that allows viewers of one figure to quickly acclimate to another.[1] They often prioritize right-wing social issues over right-wing economic issues, with little discussion of fiscal conservatism. Some individuals in this network may not interact with one another, but a collection of interviews, internet debates, and other interactions create pathways for users to be introduced to new content.[2]
YouTube's algorithmic system for recommending videos allows users to quickly access content similar to what they have previously viewed, allowing them to more deeply explore an idea once they have expressed interest. This exposes newer audiences to extreme content when videos that promote misinformation and conspiracy theories gain traction.[14][12] When a user is exposed to content featuring certain political or culture war issues, this recommendation system may lead them to further ideas or issues, including Islamophobia, opposition to immigration, antifeminism, or replacement theory.[14][18] Recommended content is often somewhat related, which creates an effect of gradual radicalization across multiple issues, referred to as a pipeline.[14]
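The gradual drift this paragraph describes can be illustrated with a toy random-walk model (emphatically not YouTube's actual system): a recommender always offers items near the last one watched, and an assumed engagement bias makes the slightly more provocative candidate win each click, so many small steps accumulate into a large shift.

```python
# Toy drift model: all parameters and the engagement-bias assumption are
# invented for illustration; this is not any platform's real algorithm.
import random

def watch_session(start=0.1, steps=50, window=0.08):
    position = start  # content "extremity" on an arbitrary 0..1 axis
    for _ in range(steps):
        # five recommended items, all similar to the last one watched
        candidates = [max(0.0, min(1.0, position + random.uniform(-window, window)))
                      for _ in range(5)]
        position = max(candidates)  # assumed bias: the edgiest candidate gets the click
    return position

final = [watch_session() for _ in range(1000)]
print(f"mean end position after 50 steps: {sum(final) / len(final):.2f}")
```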
At times, the platform will also recommend these videos to users that had not indicated interest in these viewpoints.[4][18] Radicalization also takes place in interactions with other radicalized users online, on varied platforms such as Gab, Reddit, 4chan, or Discord.[14] Major personalities in this chain often have a presence on Facebook and Twitter, though YouTube is typically their primary platform for messaging and earning income.[12]
The alt-right pipeline mainly targets angry white men, including those who identify as incels, reflecting the misogyny of the alt-right. Harvard Political Review has described this process as the "exploitation of latent misogyny and sexual frustration through 'male bonding' gone horribly awry". The pipeline also targets people with self-doubt.[19]
The alt-right pipeline has been found to begin with the intellectual dark web community, which is made up of internet personalities unified by an opposition to identity politics and political correctness, such as Joe Rogan, Ben Shapiro, Dave Rubin, and Jordan Peterson.[2] The intellectual dark web community overlaps and interacts with the alt-lite community, such as Steven Crowder, Paul Joseph Watson, Mark Dice, and Sargon of Akkad.[2] This community in turn overlaps and interacts with the alt-right community, such as James Allsup, Black Pigeon Speaks, Varg Vikernes, and Red Ice.[2] The most extreme endpoint often involves fascism or belief in an international Jewish conspiracy,[16] though the severity of extremism can vary between individuals.[12]
Alt-right content on the internet spreads ideology that is similar to earlier white supremacist and fascist movements. The internet packages the ideology differently, often in a way that is more palatable and thus is more successful in delivering it to a larger number of people.[20]Due to the conservative nature of the alt-right, much of the ideology is associated with the preservation of traditional values and ways of living. This creates a susceptibility toward conspiracy theories about secret forces that seek to destroy traditional ways of life.[21]
The antifeminist Manosphere has been identified as another early point in the alt-right pipeline.[11] The men's rights movement often discusses men's issues more visibly than other groups, attracting young men with interest in such issues when no alternative is made available. Many right-wing internet personalities have developed a method to expand their audiences by commenting on popular media; videos that criticize movies or video games for supporting left-wing ideas are more likely to attract fans of the respective franchises.[12]
The format presented by YouTube has allowed various ideologies to access new audiences through this means.[12] The same process has also been used to facilitate access to anti-capitalist politics through the internet community BreadTube. This community was developed through the use of this pipeline process to introduce users to left-wing content and mitigate exposure to right-wing content,[12][15] though the pipeline process has been found to be less effective for left-wing politics due to the larger variety of opposing left-wing groups, which limits interaction and overlap.[15] This dichotomy can also cause a "whiplash polarization" in which individuals are converted between far-right and far-left politics.[12]
The psychological factors of radicalization through the alt-right pipeline are similar to other forms of radicalization, including normalization, acclimation, and dehumanization. Normalization involves the trivialization of racist and antisemitic rhetoric. Individuals early in the alt-right pipeline will not willingly embrace such rhetoric, but will adopt it under the guise of dark humor, causing it to become less shocking over time.[14] This may sometimes be engineered intentionally by members of the alt-right to make their beliefs more palatable and provide plausible deniability for extreme beliefs.[16][17] Acclimation is the process of being conditioned to seeing bigoted content. By acclimating to controversial content, individuals become more open to slightly more extreme content. Over time, conservative figures appear too moderate and users seek out more extreme voices. Dehumanization is the final step of the alt-right pipeline, where minorities are seen as lesser or undeserving of life and dehumanizing language is used to refer to people that disagree with far-right beliefs.[14]
The process is associated with young men who experience loneliness, meaninglessness, or a lack of belonging.[12] An openness to unpopular views is necessary for individuals to accept beliefs associated with the alt-right pipeline. It has been associated with contrarianism, in which an individual uses the working assumption that the worldviews of most people are entirely wrong. From this assumption, individuals are more inclined to adopt beliefs that are unpopular or fringe. This makes several entry points of the alt-right pipeline, such as libertarianism, effective: such ideologies attract individuals with traits that make them susceptible to radicalization when exposed to other fringe ideas.[10] Motivation for pursuing these communities varies, with some people finding them by chance while others seek them out. Interest in video games is associated with the early stages of the alt-right pipeline.[12]
Along with algorithms, online communities can also play a large part in radicalization. People with fringe and radical ideologies can meet other people who share, validate, and reinforce those ideologies. Because people can control who and what they engage with online, they can avoid hearing any opinion or idea that conflicts with their prior beliefs. This creates an echo chamber that upholds and reinforces radical beliefs. The strong sense of community and belonging that comes with it is a large contributing factor for people joining the alt-right and adopting it as an identity.[22]
Internet radicalization correlates with an increase in lone wolf attacks and domestic terrorism.[13][23] The alt-right pipeline has been associated with the Christchurch mosque shootings, in which a far-right extremist killed 51 Muslim worshipers in Christchurch and directly credited the Internet for the formation of his beliefs in his manifesto.[14][24] The informal nature of radicalization through the alt-right pipeline allows radicalization to occur at an individual level, and radicalized individuals are able to live otherwise normal lives offline. This has complicated efforts by experts to track extremism and predict acts of domestic terrorism, as there is no reliable way of determining who has been radicalized or whether they are planning to carry out political violence.[14][25] Harassment campaigns against perceived opponents of the alt-right movement are another common effect of radicalization.[14]
Many social media platforms have recognized the potential of radicalization and have implemented measures to limit its prevalence. High-profile extremist commentators such as Alex Jones have been banned from several platforms, and platforms often have rules against hate speech and misinformation.[12] In 2019, YouTube announced a change to its recommendation algorithm to reduce conspiracy theory related content.[12][18] Some extreme content, such as explicit depictions of violence, is typically removed on most social media platforms. On YouTube, content that expresses support of extremism may have monetization features removed, may be flagged for review, or may have public user comments disabled.[3]
A September 2018 study published by the Data & Society Research Institute found that 65 right-wing political influencers use YouTube's recommendation engine, in concert with conventional brand-building techniques such as cross-marketing between similar influencers, to attract followers and radicalize their viewers into a particular right-wing ideology.[26] An August 2019 study conducted by the Universidade Federal de Minas Gerais and École polytechnique fédérale de Lausanne, and presented at the ACM Conference on Fairness, Accountability, and Transparency 2020, used information from the earlier Data & Society research and the Anti-Defamation League (ADL) to categorize the levels of extremism of 360 YouTube channels. The study also tracked users over an 11-year period by analysing 72 million comments, 2 million video recommendations, and 10,000 channel recommendations. The study found that users who engaged with less radical right-wing content tended over time to engage with more extremist content, which the researchers argued provides evidence for a "radicalization pipeline".[2][5][6][7]
A 2020 study published in The International Journal of Press/Politics argued that the "emerging journalistic consensus" that YouTube's algorithm radicalizes users to the far-right "is premature." Instead, the study proposes a "'Supply and Demand' framework for analyzing politics on YouTube."[27]
A 2021 study published in the Proceedings of the National Academy of Sciences found "no evidence that engagement with far-right content is caused by YouTube recommendations systematically, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right." Instead, the study found that "consumption of political content on YouTube appears to reflect individual preferences that extend across the web as a whole."[8] A 2022 study published by the City University of New York found that "little systematic evidence exists to support" the claim that YouTube's algorithm radicalizes users, adding that exposure to extremist views "on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment", and that "non-subscribers are rarely recommended videos from alternative and extremist channels and seldom follow such recommendations when offered."[9]
|
https://en.wikipedia.org/wiki/Alt-right_pipeline
|
Complex contagion is the phenomenon in social networks in which multiple sources of exposure to an innovation are required before an individual adopts the change of behavior.[1] It differs from simple contagion in that, unlike a disease, it may not be possible for the innovation to spread after only one incident of contact with an infected neighbor. The spread of complex contagion across a network of people may depend on many social and economic factors; for instance, how many of one's friends adopt the new idea, how much influence those friends can exert on the individual, and the individual's own disposition toward embracing change.
Complex Contagion and the Weakness of Long Ties by Damon Centola of the University of Pennsylvania and Michael Macy of Cornell University found that information and disease spread as "simple contagions", requiring only one contact for transmission, while behaviors typically spread as "complex contagions", requiring multiple sources of reinforcement to induce adoption. Centola's work builds on Granovetter's work on the strength of weak ties and threshold models of collective behavior, as well as Duncan Watts and Steve Strogatz's work on small world networks.[2] Centola and Macy show that weak ties and small world networks are both very good for spreading simple contagions. However, for complex contagions, weak ties and small worlds can slow diffusion.
Centola and Macy suggest four mechanisms of complex contagion: strategic complementarity, credibility, legitimacy, and emotional contagion. These properties explain the need for multiple exposures in the spread of a contagion.
Consider a graph of any reasonable size. Node v's neighbors can be split into two sets: set A contains v's neighbors who have adopted a new behavior, and set B contains those behaving conservatively. Node v will adopt behavior A only if at least a fraction q of its neighbors follow behavior A.[3]
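A sketch of this fractional-threshold rule, simulated on a small-world network of the kind discussed by Centola and Macy; the networkx graph and parameter values are illustrative.

```python
# Fractional-threshold ("complex contagion") cascade: a node adopts once at
# least a fraction q of its neighbors have adopted. Parameters are illustrative.
import networkx as nx

def complex_contagion(graph: nx.Graph, seeds: set, q: float) -> set:
    adopted = set(seeds)
    changed = True
    while changed:                      # iterate until no node flips
        changed = False
        for v in graph.nodes:
            if v in adopted:
                continue
            neighbors = list(graph.neighbors(v))
            if neighbors and sum(n in adopted for n in neighbors) / len(neighbors) >= q:
                adopted.add(v)
                changed = True
    return adopted

g = nx.watts_strogatz_graph(n=200, k=6, p=0.05)   # small-world network
print(len(complex_contagion(g, seeds={0, 1, 2}, q=0.3)))
```

Raising q or rewiring more edges (larger p) tends to stall the cascade, mirroring the finding that long ties which speed simple contagions can slow complex ones.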
Many interactions happen at a local, rather than a global, level – we often don't care as much about the full population's decisions as about the decisions made by friends and colleagues. For example, in a work setting we may choose technology to be compatible with the people we directly collaborate with, rather than the universally most popular technology. Similarly, we may adopt political views that are aligned with those of our friends, even if they belong to minorities.[3]
|
https://en.wikipedia.org/wiki/Complex_contagion
|
Computational propaganda is the use of computational tools (algorithms and automation) to distribute misleading information using social media networks. Advances in digital technologies and social media have enhanced the methods of propaganda.[1] It is characterized by automation, scalability, and anonymity.[2]
Autonomous agents (internet bots) can analyze big data collected from social media and the Internet of things in order to manipulate public opinion in a targeted way and, moreover, to mimic real people on social media.[3] Coordination is an important component that bots help achieve, giving campaigns an amplified reach.[4] Digital technology enhances well-established traditional methods of manipulating public opinion: appeals to people's emotions and biases circumvent rational thinking and promote specific ideas.[5]
A pioneering work[6] in identifying and analyzing the concept has been done by the team of Philip N. Howard at the Oxford Internet Institute, who since 2012 have been investigating computational propaganda,[7] following Howard's earlier research on the effects of social media on the general public, published, e.g., in his 2005 book New Media Campaigns and the Managed Citizen and earlier articles. In 2017, they published a series of articles detailing computational propaganda's presence in several countries.[8]
Regulatory efforts have proposed tackling computational propaganda tactics using multiple approaches.[9] Detection techniques are another front considered towards mitigation;[10][4] these can involve machine learning models, with early techniques suffering from issues such as a lack of datasets or failing against the gradual improvement of accounts.[10] Newer techniques address these aspects using other machine learning approaches or specialized algorithms, yet challenges remain, such as increasingly believable text and its automation.[10]
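A minimal sketch of the machine-learning detection idea mentioned above: a classifier trained on simple account-level features. The features, data, and labels are invented placeholders; real detectors use far richer signals (content, timing, network structure) and much larger datasets.

```python
# Toy bot-vs-human classifier over invented account features.
# Illustrative only; not any deployed detection system.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# columns: account_age_days, posts_per_day, follower/following ratio
X = [[1200, 2.1, 1.30], [15, 310.0, 0.02], [800, 5.5, 0.90], [3, 450.0, 0.01]]
y = [0, 1, 0, 1]  # 0 = human, 1 = bot (hypothetical labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict(X_test))
```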
Computational propaganda is the strategic posting of misleading information on social media by partially automated fake accounts in order to manipulate readers.[11]
In social media, bots are accounts pretending to be human.[12][13][11] They are managed to a degree via programs[11][12] and are used to spread information that creates mistaken impressions.[12][14] In social media they may be referred to as "social bots", and they may be helped by popular users who amplify them and make them seem reliable by sharing their content.[13] Bots allow propagandists to keep their identities secret.[11] One study from Oxford's Computational Propaganda Research Project found that bots achieved effective placement on Twitter during a political event.[15]
Bots can be coordinated,[14][16] which may be leveraged to exploit algorithms.[14] Propagandists mix real and fake users;[16] their efforts draw on a variety of actors, including botnets, online paid users, astroturfers, seminar users, and troll armies.[4][10] Bots can create a false sense of prevalence.[17][12] They can also engage in spam and harassment.[13][14] Bots are becoming progressively more sophisticated, one reason being the improvement of AI.[16] This development complicates detection for humans and automated methods alike.[10]
The problematic content tactics propagandists employ include disinformation, misinformation, and information shared regardless of its veracity.[14] The spread of fake and misleading information seeks to influence public opinion.[18] Deepfakes and generative language models are also employed to create convincing content.[17] The proportion of misleading information is expected to grow, complicating detection.[16]
Algorithms are another important element of computational propaganda.[18] Algorithmic curation may influence beliefs through repetition.[14] Algorithms boost and hide content, which propagandists use to their advantage.[14] Social media algorithms prioritize user engagement, and to that end their filtering favors controversy and sensationalism.[17] The algorithmic selection of what is presented can create echo chambers and exert influence.[19]
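As a rough illustration of engagement-first filtering, here is a minimal sketch (in Python; the posts and scoring weights are invented for illustration): a feed ranked purely by predicted engagement surfaces sensational items first, regardless of their accuracy.

# Toy sketch of engagement-first feed ranking: items are ordered purely by an
# engagement score, so sensational posts rise to the top. Posts and weights
# are invented for illustration.
posts = [
    {"text": "SHOCKING: they lied to you!", "likes": 900, "shares": 400, "accurate": False},
    {"text": "City budget report published", "likes": 40, "shares": 5, "accurate": True},
    {"text": "You won't BELIEVE this secret", "likes": 700, "shares": 350, "accurate": False},
]

def engagement_score(post):
    # Shares weighted more heavily than likes; accuracy plays no role at all.
    return post["likes"] + 3 * post["shares"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["text"])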
One study posits that TikTok's automated (e.g., the sound page) and interactive (e.g., stitching, duetting, and the content-imitation trend) features can also boost misleading information.[2] Furthermore, anonymity is preserved by deleting the audio's origin.[2]
A multidisciplinary approach has been proposed for combating misinformation, including the use of psychology to understand its effectiveness.[17] Some studies have looked at misleading information through the lens of cognitive processes, seeking insight into how humans come to accept it.[11][14]
Media theories can help in understanding the complexity of the relationships among computational propaganda and its surrounding actors and effects, and can guide regulation efforts.[19] Agenda-setting theory and framing theory have also been applied to the analysis of computational propaganda, and both effects have been found present; algorithmic amplification is an instance of the former,[18] which holds that the media's selection and occlusion of topics influences the public's attention.[19] The theory also holds that repetition focuses that attention.[15]
Repetition is a key characteristic of computational propaganda;[16] in social media it can modify beliefs.[14] One study posits that repetition keeps topics fresh in the mind, with a similar effect on their perceived significance.[11] The illusory truth effect, by which people come to believe what is repeated to them over time, has also been suggested as evidence that computational propaganda may be doing the same.[20]
Other phenomena have been proposed to be at play in computational propaganda tools. One study posits the presence of the megaphone effect, the bandwagon effect, and cascades.[15] Other studies point to the use of content that evokes emotions.[11][9] Another tactic is suggesting a connection between topics by placing them in the same sentence.[11] Trust bias, validation by intuition rather than evidence, truth bias, confirmation bias, and cognitive dissonance are present as well.[14] Another study points to the occurrence of negativity bias and novelty bias.[9]
Bots are used by both private and public parties[16] and have been observed in politics and crises.[18] Their presence has been studied across many countries,[12] with incidence in more than 80 countries.[18][4] Some studies have found bots to be effective,[2][16][15] though another found limited impact.[13] Similarly, algorithmic manipulation has been found to have an effect.[19]
Some studies propose a strategy that combines multiple approaches to regulating the tools used in computational propaganda.[17][9] Controlling misinformation and its use in politics through legislation and guidelines, having platforms combat fake accounts and misleading information, and devising psychology-based intervention tactics are some of the possible measures.[9] Information literacy has also been proposed as a defense against these tools.[9][17][14]
However, it has also been reported that some of these approaches have their faults. In Germany, for example, legislative efforts have encountered problems and opposition.[13] In the case of social media, self-regulation is difficult to request.[9] These platforms' measures also may not suffice, and they place the power of decision in the platforms' hands.[13] Information literacy has its limits as well.[14]
Computational propaganda detection can focus either on content or on accounts.[4]
Two ways to detect propaganda content are analyzing the text through various means, called "Text Analysis", and detecting the coordination of users, called "Social Network Analysis".[4][10] Early techniques for detecting coordination relied mostly on supervised models such as decision trees, random forests, SVMs, and neural networks.[10] These analyze accounts one by one without modeling coordination.[10] Advanced bots and the difficulty of finding or creating datasets have hindered these detection methods.[10] Modern detection strategies include making the model study a large group of accounts with coordination in mind, creating specialized algorithms for the task, and building unsupervised and semi-supervised models.[10]
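As a rough sketch of the early, account-by-account supervised approach described above (which does not model coordination), the following trains a random forest on a handful of hypothetical per-account features; the feature names, values, and labels are invented for illustration, and real systems rely on far richer features and labeled datasets.

# Sketch of account-by-account supervised bot detection, in the spirit of the
# early techniques described above. Features and labels are invented.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-account features:
# [posts_per_day, followers_to_following_ratio, account_age_days, url_share_rate]
X = [
    [120, 0.01, 30, 0.90],   # high-volume, young account, mostly links
    [3, 1.20, 2000, 0.10],   # typical human pattern
    [80, 0.05, 15, 0.80],
    [5, 0.90, 1500, 0.20],
    [200, 0.02, 10, 0.95],
    [1, 2.00, 3000, 0.00],
]
y = [1, 0, 1, 0, 1, 0]       # 1 = bot, 0 = human (toy labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict(X_test))   # each account is classified in isolation

Because each account is scored independently, a coordinated campaign of individually plausible accounts can evade this kind of model, which is what motivates the group-level, specialized, and semi-supervised strategies mentioned above.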
Detecting accounts admits a variety of approaches: methods may seek to identify the author of a piece, use statistical methods, analyze a mix of text and data beyond it (such as account characteristics), or scan tendencies in user activity.[4] This second focus of detection also has a Social Network Analysis approach, with a technique that examines the timing of campaigns alongside features of detected groups.[4]
Detection techniques are not without their issues. One is that actors evolve their coordination techniques and can operate within the time it takes for detection methods to be created,[10][4] requiring real-time approaches.[4] Other challenges are that techniques have yet to adapt to different media formats, should integrate explainability, could better inform the how and why of a propagandistic document or user, and may face increasingly hard-to-detect and increasingly automated content.[10] Detection also suffers from a lack of datasets, and creating them can involve sensitive user data that requires extensive work to protect.[10]
|
https://en.wikipedia.org/wiki/Computational_propaganda
|
Disinformation attacks are strategic deception campaigns[1] involving media manipulation and internet manipulation[2] to disseminate misleading information,[3] aiming to confuse, paralyze, and polarize an audience.[4] Disinformation can be considered an attack when it involves orchestrated and coordinated efforts[5] to build an adversarial narrative campaign that weaponizes multiple rhetorical strategies and forms of knowing—including not only falsehoods but also truths, half-truths, and value-laden judgements—to exploit and amplify identity-driven controversies.[6] Disinformation attacks use media manipulation to target broadcast media like state-sponsored TV channels and radios.[7][8] Due to the increasing use of internet manipulation on social media,[2] they can be considered a cyber threat.[9][10] Digital tools such as bots, algorithms, and AI technology, along with human agents including influencers, spread and amplify disinformation to micro-target populations on online platforms like Instagram, Twitter, Google, Facebook, and YouTube.[11][6]
According to a 2018 report by the European Commission,[12] disinformation attacks can pose threats to democratic governance by diminishing the perceived legitimacy and integrity of electoral processes. Disinformation attacks are used by and against governments, corporations, scientists, journalists, activists, and other private individuals.[13][14][15][16] These attacks are commonly employed to reshape attitudes and beliefs, drive a particular agenda, or elicit certain actions from a target audience. Tactics include circulating incorrect or misleading information, creating uncertainty, and undermining the legitimacy of official information sources.[17][18][19]
An emerging area of disinformation research focuses on countermeasures to disinformation attacks.[20][21][19] Technologically, defensive measures include machine learning applications and blockchain technologies that can flag disinformation on digital platforms.[22][18] Socially, educational programs are being developed to teach people how to better discern between facts and disinformation online.[23] Journalists publish recommendations for assessing sources.[24] Commercially, revisions to algorithms, advertising, and influencer practices on digital platforms are proposed.[2] Individual interventions include actions that individuals can take to improve their own skills in dealing with information (e.g., media literacy) and individual actions to challenge disinformation.
Disinformation attacks involve the intentional spreading of false information, with the end goals of misleading, confusing, and encouraging violence,[25] or of gaining money, power, or reputation.[26] They may involve political, economic, and individual actors, and may attempt to influence attitudes and beliefs, drive a specific agenda, get people to act in specific ways, or destroy the credibility of individuals or institutions. The presentation of incorrect information may be the most obvious part of a disinformation attack, but it is not the only purpose. The creation of uncertainty and the undermining of both correct information and the credibility of information sources are often intended as well.[17][18][19]
If individuals can be convinced of something that is factually incorrect, they may make decisions that will run counter to the best interests of themselves and those around them. If the majority of people in a society can be convinced of something that is factually incorrect, the misinformation may lead to political and social decisions that are not in the best interest of that society. This can have serious impacts at both individual and societal levels.[27]
In the 1990s, a British doctor who held a patent on a single-shot measles vaccine promoted distrust of the combined MMR vaccine. His fraudulent claims were meant to promote sales of his own vaccine. The subsequent media frenzy increased fear, and many parents chose not to immunize their children.[28] This was followed by a significant increase in measles cases, hospitalizations, and deaths that would have been preventable by the MMR vaccine.[29][30] It also led to the expenditure of substantial money on follow-up research that tested the assertions made in the disinformation,[31] and on public information campaigns attempting to correct it. The fraudulent claim continues to be referenced and to increase vaccine hesitancy.[32]
In the case of the 2020 United States presidential election, disinformation was used in an attempt to convince people to believe something that was not true and change the outcome of the election.[33][34] Repeated disinformation messages about the possibility of election fraud were introduced years before the actual election occurred, as early as 2016.[35][36] Researchers found that much of the fake news originated in domestic right-wing groups. The nonpartisan Election Integrity Partnership reported prior to the election that "What we're seeing right now are essentially seeds being planted, dozens of seeds each day, of false stories... They're all being planted such that they could be cited and reactivated ... after the election."[37] Groundwork was laid through multiple and repeated disinformation attacks for claims that voting was unfair and to delegitimize the results of the election once it occurred.[37] Although the 2020 United States presidential election results were upheld, some people still believe the "big lie".[34]
People who get information from a variety of news sources, not just sources reflecting a particular viewpoint, are more likely to detect disinformation.[38] Tips for detecting disinformation include reading reputable news sources at a local or national level rather than relying on social media; being wary of sensational headlines intended to attract attention and arouse emotion; fact-checking information broadly, not just on one usual platform or among friends; checking the original source of the information; asking what was really said, who said it, and when; and considering possible agendas or conflicts of interest on the part of the speaker or those passing along the information.[39][40][41][42][43]
Sometimes undermining belief in correct information is a more important goal of disinformation than convincing people to hold a new belief. In the case of combined MMR vaccines, disinformation was originally intended to convince people of a specific fraudulent claim and by doing so promote sales of a competing product.[28] However, the impact of the disinformation became much broader. The fear that one type of vaccine might pose a danger fueled general fears that vaccines might pose a risk. Rather than convincing people to choose one product over another, belief in a whole area of medical research was eroded.[32]
There is widespread agreement that disinformation is spreading confusion.[44] This is not just a side effect; confusing and overwhelming people is an intentional objective.[45][46] Whether disinformation attacks are used against political opponents or "commercially inconvenient science", they sow doubt and uncertainty as a way of undermining support for an opposing position and preventing effective action.[47]
A 2016 paper describes social media-driven political disinformation tactics as a "firehose of falsehood" that "entertains, confuses and overwhelms the audience."[48] Four characteristics were illustrated with respect to Russian propaganda: the disinformation is 1) high-volume and multichannel, 2) continuous and repetitive, 3) indifferent to objective reality, and 4) indifferent to consistency. It becomes effective by creating confusion and by obscuring, disrupting, and diminishing the truth. When one falsehood is exposed, "the propagandists will discard it and move on to a new (though not necessarily more plausible) explanation."[48] The purpose is not to convince people of a specific narrative, but to "Deny, deflect, distract".[49]
Countering this is difficult, in part because "It takes less time to make up facts than it does to verify them."[48] There is evidence that false information "cascades" travel farther, faster, and more broadly than truthful information, perhaps due to novelty and emotional loading.[50] Trying to fight a many-headed hydra of disinformation may be less effective than raising awareness of how disinformation works and how to identify it, before an attack occurs.[48] For example, Ukraine was able to warn citizens and journalists about the potential use of state-sponsored deepfakes in advance of an actual attack, which likely slowed its spread.[51]
Another way to counter disinformation is to focus on identifying and countering its real objective.[48] For example, if disinformation is trying to discourage voters, find ways to empower voters and elevate authoritative information about when, where, and how to vote.[52] If claims of voter fraud are being put forward, provide clear messaging about how the voting process occurs, and refer people back to reputable sources that can address their concerns.[53]
Disinformation involves more than just a competition between inaccurate and accurate information. Disinformation, rumors, and conspiracy theories call into question underlying trust at multiple levels. The undermining of trust can be directed at scientists, governments, and media, and can have very real consequences. Public trust in science is essential to the work of policymakers and to good governance, particularly for issues in medicine, public health, and the environmental sciences. It is essential that individuals, organizations, and governments have access to accurate information when making decisions.[14][15]
An example is disinformation around COVID-19 vaccines. Disinformation has targeted the products themselves, the researchers and organizations who develop them, the healthcare professionals and organizations who administer them, and the policy-makers that have supported their development and advised their use.[14][54][55] Countries where citizens had higher levels of trust in society and government appear to have mobilized more effectively against the virus, as measured by slower virus spread and lower mortality rates.[56]
Studies of people's beliefs about the amount of disinformation and misinformation in the news media suggest that distrust of traditional news media tends to be associated with reliance on alternate information sources such as social media. Structural support for press freedoms, a stronger independent press, and evidence of the credibility and honesty of the press can help to restore trust in traditional media as a provider of independent, honest, and transparent information.[57][46]
A major tactic of disinformation is to attack and attempt to undermine the credibility of people and organizations who are in a position to oppose the disinformation narrative due to their research or position of authority.[58] This can include politicians, government officials, scientists, journalists, activists, human rights defenders, and others.[16]
For example, a New Yorker report in 2023 revealed details about a campaign run by the UAE, under which the Emirati President Mohamed bin Zayed paid millions of euros to a Swiss businessman, Mario Brero, for "dark PR" against their targets. Brero and his company Alp Services used the UAE money to create damning Wikipedia entries and publish propaganda articles against Qatar and those with ties to the Muslim Brotherhood. Targets included the company Lord Energy, which eventually declared bankruptcy following unproven allegations of links to terrorism.[59] Alp was also paid by the UAE to publish 100 propaganda articles a year against Qatar.[60]
Disinformation attacks on scientists and science, including attacks funded by the tobacco and fossil fuel industries, have been painstakingly documented in books such as Merchants of Doubt,[58][61][62] Doubt Is Their Product,[63][64] and The Triumph of Doubt: Dark Money and the Science of Deception (2020).[65][66] While scientists, doctors, and teachers are considered the most trustworthy professionals globally,[15] scientists are concerned about whether confidence in science has decreased.[15][55] Sudip Parikh, CEO of the American Association for the Advancement of Science (AAAS), was quoted in 2022 as saying, "We now have a significant minority of the population that's hostile to the scientific enterprise... We're going to have to work hard to regain trust."[55] That said, at the same time that disinformation poses a threat, the widespread use of social media by scientists offers an unprecedented opportunity for scientific communication and engagement between scientists and the public, with the potential to increase public knowledge.[15][67]
The American Council on Science and Health has advice for scientists facing a disinformation campaign, and notes that disinformation campaigns often incorporate some elements of truth to make them more convincing. The five recommendations include identifying and acknowledging any parts of the story that are actually true; explaining why other parts are untrue, out of context, or manipulated; calling out motivations that may be behind the disinformation, such as financial interests or power; preparing an "accusation audit" in anticipation of further attacks; and maintaining calm and self-control.[68] Others recommend educating oneself about the platforms one uses and the privacy tools that platforms offer to protect personal information and to mute, block, and report online participants. Disinformers and online trolls are unlikely to engage in reasoned discussion or interact in good faith, and responding to them is rarely useful.[69]
Studies clearly document the harassment of scientists, personally and in terms of scientific credibility. In 2021, a Nature survey reported that nearly 60% of scientists who had made public statements about COVID-19 had their credibility attacked. Attacks disproportionately affected those in nondominant identity groups such as women, transgender people, and people of color.[69] A highly visible example is Anthony S. Fauci. He is deeply respected nationally and internationally as an expert on infectious diseases. He has also been subjected to intimidation, harassment, and death threats fueled by disinformation attacks and conspiracy theories.[70][71][72] Despite those experiences, Fauci encourages early-career scientists "not to be deterred, because the satisfaction and the degree of contribution you can make to society by getting into public service and public health is immeasurable."[73]
Individual decisions, like whether or not to smoke, are major targets for disinformation. So are policymaking processes such as the formation of public health policy, the recommendation and adoption of policy measures, and the acceptance or regulation of processes and products. Public opinion and policy interact: public opinion and the popularity of public health measures can strongly influence government policy and the creation and enforcement of industry standards. Disinformation attempts to undermine public opinion and prevent the organization of collective actions, including policy debates, government action, regulation, and litigation.[47]
An important type of collective activity is the act of voting. In the 2017 Kenyan general election, 87% of Kenyans surveyed reported encountering disinformation before the August election, and 35% reported being unable to make an informed voting decision as a result.[8] Disinformation campaigns often target specific groups, such as black or Latino voters, to discourage voting and civic engagement. Fake accounts and bots are used to amplify uncertainty about whether voting really matters, whether voters are "appreciated", and whose interests politicians care about.[74][75] Microtargeting can present messages precisely designed for a chosen population, while geofencing can pinpoint people based on where they go, like churchgoers. In some cases, voter suppression attacks have circulated incorrect information about where and when to vote.[76] During the 2020 U.S. Democratic primaries, disinformation narratives arose around the use of masks and the use of mail-in ballots, relating to whether and how people would vote.[77]
Disinformation strikes at the foundation of democratic government: "the idea that the truth is knowable and that citizens can discern and use it to govern themselves."[78] Disinformation campaigns are designed by both foreign and domestic actors to gain political and economic advantage. The undermining of functional government weakens the rule of law and can enable both foreign and domestic actors to profit politically and economically. At home and abroad, the goal is to weaken opponents. Elections are an especially critical target, but the day-to-day ability to govern is also undermined.[78][79]
The Oxford Internet Institute at Oxford University reports that in 2020, organized social media manipulation campaigns were active in 81 countries, an increase from 70 countries in 2019. 76 of those countries used disinformation attacks. The report describes disinformation as being produced globally "on an industrial scale".[80]
A Russian operation known as the Internet Research Agency (IRA) spent thousands on social media ads to influence the 2016 United States presidential election, confuse the public on key political issues, and sow discord. These political ads leveraged user data to micro-target certain populations and spread misleading information, with the end goal of exacerbating polarization and eroding public trust in political institutions.[10][81][20] The Computational Propaganda Project at the Oxford Internet Institute found that the IRA's ads specifically sought to sow mistrust towards the U.S. government among Mexican Americans and discourage voter turnout among African Americans.[82]
An examination of Twitter activity prior to the 2017 French presidential election indicates that 73% of the disinformation flagged by Le Monde was traceable to two political communities: one associated with François Fillon (right-wing, with 50.75% of the fake link shares) and another with Marine Le Pen (extreme right-wing, 22.21%). 6% of accounts in the Fillon community and 5% of the Le Pen community were early spreaders of disinformation. Debunking of the disinformation came from other communities, and was most often related to Emmanuel Macron (39.18% of debunks) and Jean-Luc Mélenchon (14% of debunks).[83]
Another analysis, of the 2017 #MacronLeaks disinformation campaign, illustrates frequent patterns of election-related disinformation campaigns. Such campaigns often peak 1–2 days before an election. The scale of a campaign like #MacronLeaks can be comparable to the volume of regular discussion in that time period, suggesting that it can obtain considerable collective attention. About 18 percent of the users involved in #MacronLeaks were identifiable as bots. Spikes in bot content tended to occur slightly ahead of spikes in human-created content, suggesting bots were able to trigger cascades of disinformation. Some bot accounts showed a pattern of previous use: creation shortly before the 2016 U.S. presidential election, brief usage then, and no further activity until early May 2017, prior to the French election. Alt-right media personalities, including Britain's Paul Joseph Watson and American Jack Posobiec, prominently shared MacronLeaks content prior to the French election.[84] Experts worry that disinformation attacks will increasingly be used to influence national elections and democratic processes.[10]
In A Lot of People Are Saying: The New Conspiracism and the Assault on Democracy (2020), Nancy L. Rosenblum and Russell Muirhead examine the history and psychology of conspiracy theories and the ways in which they are used to de-legitimize the political system. They distinguish between classical conspiracy theory, in which actual issues and events (such as the assassination of John F. Kennedy) are examined and combined to create a theory, and a new form of "conspiracism without theory" that relies on repeating false statements and hearsay without factual grounding.[85][86]
Such disinformation exploits human bias towards accepting new information. Humans constantly share information and rely on others to provide information they cannot verify for themselves. Much of that information will be true, whether they ask if it is cold outside or cold in Antarctica. As a result, they tend to believe what they hear. Studies show an "illusory truth effect": the more often people hear a claim, the more likely they are to consider it true. This is the case even when people identify a statement as false the first time they see it; they are likely to rank the probability that it is true higher after multiple exposures.[86][87] Social media is particularly dangerous as a source of disinformation because robots and multiple fake accounts are used to repeat and magnify the impact of false statements. Algorithms track what users click on and recommend content similar to what users have chosen, creating confirmation bias and filter bubbles. In more tightly focused communities, an echo chamber effect is enhanced.[88][89][86][90][91][20]
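A toy sketch (in Python; the topics, click history, and catalog are invented for illustration) shows how click-driven recommendation can close a filter bubble: the more a user clicks on one topic, the more of that topic the feed returns.

# Toy sketch of click-driven recommendation reinforcing a filter bubble.
# Topics, history, and catalog are invented for illustration.
from collections import Counter

def recommend(history, catalog, k=3):
    # Rank catalog items by how often their topic appears in the click history.
    topic_counts = Counter(item["topic"] for item in history)
    return sorted(catalog, key=lambda item: -topic_counts[item["topic"]])[:k]

history = [{"topic": "conspiracy"}] * 4 + [{"topic": "sports"}]
catalog = [{"id": 1, "topic": "conspiracy"}, {"id": 2, "topic": "news"},
           {"id": 3, "topic": "conspiracy"}, {"id": 4, "topic": "sports"}]
print(recommend(history, catalog))  # conspiracy items dominate the top of the feed

Each recommended item that gets clicked feeds back into the history, so the loop narrows with every iteration; this is the self-reinforcing dynamic behind the confirmation bias and echo chamber effects described above.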
Autocrats have employed domestic voter disinformation attacks to cover up electoral corruption. Voter disinformation can include public statements that assert local electoral processes are legitimate and statements that discredit electoral monitors. Public-relations firms may be hired to execute specialized disinformation campaigns, including media advertisements and behind-the-scenes lobbying, to push the narrative of an honest and democratic election.[92] Independent monitoring of the electoral process is essential to combatting electoral disinformation. Monitoring can include both citizen election monitors and international observers, as long as they are credible. Norms for accurate characterization of elections are based on ethical principles, effective methodologies, and impartial analysis. Democratic norms emphasize the importance of open electoral data, the free exercise of political rights, and protection for human rights.[92]
Disinformation attacks can increase political polarization and alter public discourse.[91] Foreign manipulation campaigns may attempt to amplify extreme positions and weaken a target society, while domestic actors may try to demonize political opponents.[78] States with highly polarized political landscapes and low public trust in local media and government are particularly vulnerable to disinformation attacks.[93][94]
There is concern that Russia will employ disinformation, propaganda, and intimidation to destabilize NATO members, such as the Baltic states, and coerce them into accepting Russian narratives and agendas.[82][93] During the Russo-Ukrainian War of 2014, Russia combined traditional combat warfare with disinformation attacks in a form of hybrid warfare in its offensive strategy, to sow doubt and confusion among enemy populations, intimidate adversaries, erode public trust in Ukrainian institutions, and boost Russia's reputation and legitimacy.[95] Since escalating the Russo-Ukrainian War with the 2022 Russian invasion of Ukraine, Russia's pattern of disinformation has been described by CBC News as "Deny, deflect, distract".[49]
Thousands of stories have been debunked, including doctored photographs and deepfakes. At least 20 main "themes" are being promoted by Russian propaganda, targeting audiences far beyond Ukraine and Russia. Many of these try to reinforce ideas that Ukraine is somehow Nazi-controlled, that its military forces are weak, and that damage and atrocities are due to Ukrainian, not Russian, actions.[49] Many of the images examined are shared on Telegram. Government organizations and independent journalistic groups such as Bellingcat work to confirm or deny such reports, often using open-source data and sophisticated tools to identify where and when information originated and whether claims are legitimate. Bellingcat works to provide an accurate account of events as they happen and to create a permanent, verified, longer-term record.[96]
Fear-mongering and conspiracy theories are used to encourage polarization, to promote exclusionary narratives, and to legitimize hate speech and aggression.[54][89] As has been painstakingly documented, the period leading up to the Holocaust was marked by repeated disinformation and increasing persecution by the Nazi government,[97][98] culminating in the mass murder[99] of 165,200 German Jews[100] by a "genocidal state".[99] Populations in Africa, Asia, Europe, and South America today are considered to be at serious risk for human rights abuses.[8] Changing conditions in the United States have also been identified as increasing risk factors for violence.[94]
Elections are particularly tense political transition points, emotionally charged at any time, and increasingly targeted by disinformation. These conditions increase the risk of individual violence, civil unrest, and mass atrocities. Countries such as Kenya, whose history has involved ethnic or election-related violence, foreign or domestic interference, and a high reliance on social media for political discourse, are considered to be at higher risk.
The United Nations Framework of Analysis for Atrocity Crimes identifies elections as an atrocity risk indicator: disinformation can act as a threat multiplier for atrocity crime. Recognition of the seriousness of this problem is essential to mobilizing governments, civic society, and social media platforms to take steps to prevent both online and offline harm.[8]
Disinformation attacks target the credibility of science, particularly in areas of public health[26] and environmental science.[101][15] Examples include denying the dangers of leaded gasoline,[102][103] smoking,[104][105][106] and climate change.[61][107][21][108]
A pattern for disinformation attacks involving scientific sources developed in the 1920s, and it illustrates tactics that continue to be used.[109] As early as 1910, industrial toxicologist Alice Hamilton documented the dangers associated with exposure to lead.[110][111] In the 1920s, Charles Kettering, Thomas Midgley Jr., and Robert A. Kehoe of the Ethyl Gasoline Corporation introduced lead into gasoline. Following the sensational madness and deaths of workers at their plants, a Public Health Service conference was held in 1925 to review the use of tetraethyllead (TEL). Hamilton and others warned of leaded gasoline's potential danger to people and the environment. They questioned the research methodology used by Kehoe, who claimed that lead was a "natural" part of the environment and that high lead levels in workers were "normal".[112][113][102] Kettering, Midgley, and Kehoe emphasized that a gas additive was needed, and argued that until "it is shown ... that an actual danger to the public is had as a result",[110] the company should be allowed to produce its product. Rather than requiring industry to show that its product was safe before it could be sold, the burden of proof was placed on public health advocates to show incontestable proof that harm had occurred.[110][114] Critics of TEL were described as "hysterical".[115] With industry support, Kehoe went on to become a prominent industry expert and advocate for the position that leaded gasoline was safe, holding "an almost complete monopoly" on research in the area.[116] It would be decades before his work was finally discredited.[102] In 1988, the EPA estimated that over the previous 60 years, 68 million children had suffered high toxic exposure to lead from leaded fuels.[117] A 2022 review reported that the use of lead in gasoline was linked to neurodevelopmental disabilities in children and to neurobehavioral deficits, cardiovascular and kidney disease, and premature deaths in adults.[118]
By the 1950s, the production and use of biased "scientific" research was part of a consistent "disinformation playbook", used by companies in the tobacco,[119] pesticide,[120] and fossil fuel industries.[61][107][121] In many cases, the same researchers, research groups, and public relations firms were hired by multiple industries. They repeatedly argued that products were safe while knowing that they were unsafe. When assertions of safety were challenged, it was argued that the products were necessary.[106] Through coordinated and widespread campaigns, they worked to influence public opinion and to manipulate government officials and regulatory agencies, to prevent regulatory or legal action that might interfere with profits.[47]
Similar tactics continue to be used by scientific disinformation campaigns. When proof of harm is presented, it is argued that the proof is not sufficient. The argument that more proof is needed is used to put off action to some future time. Delays are used to block attempts to limit or regulate industry, and to avoid litigation, while continuing to profit. Industry-funded experts carry out research that all too often can be challenged on methodological grounds as well as over conflicts of interest. Disinformers use bad research as a basis for claiming that scientists are not in agreement, and to generate specific claims as part of a disinformation narrative. Opponents are often attacked on a personal level as well as in terms of their scientific work.[122][47][123]
A tobacco industry memo summarized this approach by saying "Doubt is our product".[122] Scientists generally consider a question in terms of the likelihood that a conclusion is supported, given the weight of the best available scientific evidence. Evidence tends to involve measurement, and measurement introduces a potential for error. A scientist may say that available evidence is sufficient to support a conclusion about a problem, but will rarely claim that a problem is fully understood or that a conclusion is 100% certain. Disinformation rhetoric tries to undermine science and sway public opinion by using a "doubt strategy". Reframing the normal scientific process, disinformation often suggests that anything less than 100% certainty implies doubt, and that doubt means there is no consensus about an issue. Disinformation attempts to undermine both certainty about a particular issue and about science itself.[122][47] Decades of disinformation attacks have considerably eroded public belief in science.[47]
Scientific information can become distorted as it is transferred among primary scientific sources, the popular press, and social media. This can occur both intentionally and unintentionally.
Some features of current academic publishing like the use of preprint servers make it easier for inaccurate information to become public, particularly if the information reported is novel or sensational.[39]
Steps to protect science from disinformation and interference include both individual actions on the part of scientists, peer reviewers, and editors, and collective actions via research, granting, and professional organizations, and regulatory agencies.[47][124][125]
Traditional media channels can be used to spread disinformation. For example, Russia Today is a state-funded news channel that is broadcast internationally. It aims to boost Russia's reputation abroad and to depict Western nations, such as the U.S., in a negative light. It has served as a platform to disseminate propaganda and conspiracy theories intended to mislead and misinform its audience.[7]
Within the United States, the sharing of disinformation and propaganda has been associated with the development of increasingly "partisan" media, most strongly in right-wing sources such as Breitbart, The Daily Caller, and Fox News.[126] As local news outlets have declined, there has been an increase in partisan media outlets that "masquerade" as local news sources.[127][128] The impact of partisanship and its amplification through the media is documented. For example, attitudes to climate legislation were bipartisan in the 1990s but became intensely polarized by 2010. While media messaging on climate from Democrats increased between 1990 and 2015 and tended to support the scientific consensus on climate change, Republican messaging around climate decreased and became more mixed.[21]
A "gateway belief" that affects people's acceptance of scientific positions and policies is their understanding of the extent of scientific agreement on a topic. Undermining scientific consensus is therefore a frequent disinformation tactic. Indicating that there is a scientific consensus (and explaining the science involved) can help to counter misinformation.[21]Indicating the broad consensus of experts can help to align people's perceptions and understandings with the empirical evidence.[129]Presenting messages in a way that aligns with someone's cultural frame of reference makes them more likely to be accepted.[21]
It is important to avoid false balance, in which opposing claims are presented in a way that is out of proportion to the actual evidence for each side. One way to counter false balance is to present a weight-of-evidence statement that explicitly indicates the balance of evidence for different positions.[129][130]
Perpetrators primarily use social media channels as a medium to spread disinformation, using a variety of tools.[131] Researchers have compiled multiple actions through which disinformation attacks occur on social media.[2][132][133]
An app called "Dawn of Glad Tidings," developed byIslamic Statemembers, assists in the organization's efforts to rapidly disseminate disinformation in social media channels. When a user downloads the app, they are prompted to link it to their Twitter account and grant the app access to tweeting from their personal account. This app allows for automated Tweets to be sent out from real user accounts and helps create trends across Twitter that amplify disinformation produced by the Islamic State on an international scope.[82]
In many cases, individuals and companies in different countries are paid to create false content and push disinformation, sometimes earning both payments and advertising revenue by doing so.[131][2] "Disinfo-for-hire actors" often promote multiple issues, or even multiple sides in the same issue, solely for material gain.[146] Others are motivated politically or psychologically.[147][2]
More broadly, the monetization practices of social media and online advertising can be exploited to amplify disinformation.[148] Social media's business model can be used to spread disinformation: media outlets (1) provide content to the public at little or no cost, (2) capture and refocus public attention, and (3) collect, use, and resell user data. Advertising companies, publishers, influencers, brands, and clients may benefit from disinformation in a variety of ways.[2]
In 2022, the Journal of Communication published a study of the political economy underlying disinformation around vaccines. Researchers identified 59 English-language "actors" that provided "almost exclusively anti-vaccination publications". Their websites monetized disinformation through appeals for donations, sales of content-based media and other merchandise, third-party advertising, and membership fees. Some maintained a group of linked websites, attracting visitors with one site and appealing for money and selling merchandise on others. In how they gained attention and obtained funding, their activities displayed a "hybrid monetization strategy". They attracted attention by combining eye-catching aspects of "junk news" and online celebrity promotion. At the same time, they developed campaign-specific communities to publicize and legitimize their position, similar to radical social movements.[147]
Emotion is used and manipulated to spread disinformation and false beliefs.[20] Arousing emotions can be persuasive. When people feel strongly about something, they are more likely to see it as true.[87] Emotion can also cause people to think less clearly about what they are reading and the credibility of its source. Content that appeals to emotion is more likely to spread quickly on the internet. Fear, confusion, and distraction can all interfere with people's ability to think critically and make good decisions.[149]
Human psychology is leveraged to make disinformation attacks more potent and viral.[20] Psychological phenomena, such as stereotyping, confirmation bias, selective attention, and echo chambers, contribute to the virality and success of disinformation on digital platforms.[140][150][6] Disinformation attacks are often considered a type of psychological warfare because of their use of psychological techniques to manipulate populations.[151][27]
Perceptions of identity and a sense of belonging are manipulated so as to influence people.[20] Feelings of social belonging are reinforced to encourage affiliation with a group and discourage dissent. This can make people more susceptible to an influencer or leader who may encourage his "engaged followership" to attack others. This type of behavior has been compared to the collective behavior of mobs and is similar to dynamics within cults.[69][152][153]
As has been noted by the Knight First Amendment Institute at Columbia University, "The misinformation problem is social and not just technological or legal."[154] It raises serious ethical issues about how we engage with each other.[155] The 2023 Summit on "Truth, Trust, and Hope", held by the Nobel Committee and the US National Academy of Sciences, identified disinformation as more dangerous than any other crisis, because of the way in which it hampers the addressing and resolution of all other problems.[156]
Defensive measures against disinformation can occur at a wide variety of levels, in diverse societies, under different laws and conditions. Responses to disinformation can involve institutions, individuals, and technologies, including government regulation, self-regulation, monitoring by third parties, the actions of private actors, the influence of crowds, and technological changes to platform architecture and algorithmic behaviors.[157][158] Advanced systems that involve blockchain technologies, crowd wisdom, and artificial intelligence have been developed to fight online disinformation.[22] It is also important to develop and share best practices for countering disinformation and building resilience against it.[78]
Existing social, legal, and regulatory guidelines may not apply easily to actions in an international virtual world, where private corporations compete for profitability, often on the basis of user engagement.[157][2] Ethical concerns apply to some of the possible responses to disinformation, as people debate issues of content moderation, free speech, the right to personal privacy, human identity, human dignity, suppression of human rights and religious freedom, and the use of data.[155] The scope of the problem means that "Building resilience to and countering manipulative information campaigns is a whole-of-society endeavor."[78]
While authoritarian regimes have chosen to use disinformation attacks as a policy tool, their use poses specific dangers for democratic governments: using equivalent tactics will further deepen public distrust of political processes and undermine the basis of democratic and legitimate government. "Democracies should not seek to covertly influence public debate either by deliberately spreading information that is false or misleading or by engaging in deceptive practices, such as the use of fictitious online personas."[159] Further, democracies are encouraged to play to their strengths, including the rule of law, respect for human rights, cooperation with partners and allies, soft power, and the technical capability to address cyber threats.[159]
The constitutional norms that govern a society are needed both to make governance effective and to avert tyranny.[154] Providing accurate information and countering disinformation are legitimate activities of government. The OECD suggests that public communication of policy responses should follow open government principles of integrity, transparency, accountability, and citizen participation.[160] A discussion of the US government's ability to legally respond to disinformation argues that responses should be based on principles of transparency and generality. Responses should avoid ad hominem attacks, racial appeals, or selectivity in the person responded to. Criticism should focus first on providing correct information and secondarily on explaining why the false information is wrong, rather than focusing on the speaker or repeating the false narrative.[154][149][161]
In the case of the COVID-19 pandemic, multiple factors created "space for misinformation to proliferate". Government responses to this public health issue indicate several areas of weakness, including gaps in basic public health knowledge, lack of coordination in government communication, and confusion about how to address a situation involving significant uncertainty. Lessons from the pandemic include the need to admit uncertainty when it exists, and to distinguish clearly between what is known and what is not yet known. Science is a process, and it is important to recognize and communicate that scientific understanding and related advice will change over time on the basis of new evidence.[160]
Regulation of disinformation raises ethical issues. The right to freedom of expression is recognized as a human right in the Universal Declaration of Human Rights and in international human rights law by the United Nations. Many countries have constitutional law that protects free speech. A country's laws may identify specific categories of speech that are or are not protected, and specific parties whose actions are restricted.[157]
The First Amendment to the United States Constitution protects both freedom of speech and freedom of the press from interference by the United States Congress. As a result, the regulation of disinformation in the United States tends to be left to private rather than government action.[157]
The First Amendment does not protect speech used to incite violence or break the law,[162] or "obscenity, child pornography, defamatory speech, false advertising, true threats, and fighting words".[163] With these exceptions, debating matters of "public or general interest" in a way that is "uninhibited, robust and wide-open" is expected to benefit a democratic society.[164]
The First Amendment tends to rely on counterspeech as a workable corrective measure, preferring refutation of falsehood to regulation.[157][154] There is an underlying assumption that identifiable parties will have the opportunity to share their views on a relatively level playing field, where a public figure being drawn into a debate will have increased access to the media and a chance of rebuttal.[164] This may no longer hold true when rapid, massive disinformation attacks are deployed against an individual or group through anonymous or multiple third parties, where "A half-day's delay is a lifetime for an online lie."[154]
Other civil and criminal laws are intended to protect individuals and organizations in cases where speech involves defamation of character (libel or slander) or fraud. In such cases, being incorrect is not sufficient to justify legal or governmental action. Incorrect information must demonstrably cause harm to others or enable the liar to gain an unjustified benefit. Someone who has knowingly spread disinformation and used that disinformation to gain money may be chargeable with fraud.[165] The extent to which these existing laws can be effectively applied against disinformation attacks is unclear.[157][154][166] Under this approach, a subset of disinformation, which is not only untrue but "communicated for the purpose of gaining profit or advantage by deceit and causes harm as a result", could be considered "fraud on the public"[33] and no longer a type of protected speech. Much of the speech that constitutes disinformation would not meet this test.[33]
The Digital Services Act (DSA) is a Regulation in EU law that establishes a legal framework within the European Union for the management of content on intermediaries, including illegal content, transparent advertising, and disinformation.[167][168] The European Parliament approved the DSA along with the Digital Markets Act on 5 July 2022.[169] The European Council gave its final approval to the Regulation on a Digital Services Act on 4 October 2022.[170] It was published in the Official Journal of the European Union on 19 October 2022, and affected service providers have until 1 January 2024 to comply with its provisions.[169] The DSA aims to harmonise differing laws at the national level in the European Union,[167] including those of Germany (NetzDG), Austria ("Kommunikationsplattformen-Gesetz"), and France ("Loi Avia").[171] Platforms with more than 45 million users in the European Union, including Facebook, YouTube, Twitter, and TikTok, are subject to the new obligations. Companies failing to meet those obligations risk fines of up to 10% of their annual turnover.[172]
As of April 25, 2023, Wikipedia was one of 17 platforms to be designated a Very Large Online Platform (VLOP) by the EU Commission, with regulations taking effect as of August 25, 2023.[173] In addition to any steps taken by the Wikimedia Foundation, Wikipedia's compliance with the Digital Services Act will be independently audited, on a yearly basis, beginning in 2024.[174]
It has been suggested that China and Russia are jointly portraying the United States and the European Union in an adversarial way in terms of the use of information and technology. This narrative is then used by China and Russia to justify the restriction of freedom of expression, access to independent media, and internet freedoms. They have jointly called for the "internationalization of internet governance", meaning the distribution of control of the internet to individual sovereign states. In contrast, calls for global internet governance emphasize the existence of a free and open internet, whose governance involves citizens and civil society.[175][78] Democratic governments need to be aware of the potential impact of measures used to restrict disinformation, both at home and abroad. This is not an argument that should block legislation, but it should be taken into consideration when forming legislation.[78]
In the United States, the First Amendment limits the actions of Congress, not those of private individuals, companies, and employers.[162] Private entities can establish their own rules (subject to local and international laws) for dealing with information.[163] Social media platforms like Facebook, Twitter, and Telegram could legally establish guidelines for the moderation of information and disinformation on their platforms. Ideally, platforms should attempt to balance free expression by their users against the moderation or removal of harmful and illegal speech.[40][176]
Sharing of information through broadcast media and newspapers has been largely self-regulating. It has relied on voluntary self-governance and standard-setting by professional organizations such as the US Society of Professional Journalists (SPJ). The SPJ has a code of ethics for professional accountability, which includes seeking and reporting truth, minimizing harm, accountability, and transparency.[177] The code states that "whoever enjoys a special measure of freedom, like a professional journalist, has an obligation to society to use their freedoms and powers responsibly."[178] Anyone can write a letter to the editor of the New York Times, but the Times will not publish that letter unless it chooses to do so.[179]
Arguably, social media platforms are treated more like the post office—which passes along information without reviewing it—than they are like journalists and print publishers who make editorial decisions and are expected to take responsibility for what they publish. The kinds of ethical, social and legal frameworks that journalism and print publishing have developed have not been applied to social media platforms.[180]
It has been pointed out that social media platforms like Facebook and Twitter lack incentives to control disinformation or to self-regulate.[177][157][181] To the extent that platforms rely on advertising for revenue, it is to their financial benefit to maximize user engagement, and the attention of users is demonstrably captured by sensational content.[20][182] Algorithms that push content based on user search histories, frequent clicks, and paid advertising lead to unbalanced, poorly sourced, and actively misleading information. This approach is also highly profitable.[177][181][183] When countering disinformation, the use of algorithms for monitoring content is cheaper than employing people to review and fact-check content. People are more effective at detecting disinformation, but they may also bring their own biases (or their employer's biases) to the task of moderation.[180]
Privately owned social media platforms such as Facebook and Twitter can legally develop regulations, procedures, and tools to identify and combat disinformation on their platforms.[184] For example, Twitter can use machine learning applications to flag content that does not comply with its terms of service and identify extremist posts encouraging terrorism. Facebook and Google have developed a content hierarchy system where fact-checkers can identify and de-rank possible disinformation and adjust algorithms accordingly.[10] Companies are considering using procedural legal systems to regulate content on their platforms as well. Specifically, they are considering using appellate systems: posts may be taken down for violating terms of service and posing a disinformation threat, but users can contest this action through a hierarchy of appellate bodies.[136]
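A minimal sketch of the kind of machine-learning flagger described above might look as follows (in Python with scikit-learn; the training texts and labels are invented for illustration, and production systems are vastly larger and route flagged posts to human fact-checkers rather than acting automatically).

# Sketch of an ML content flagger that surfaces posts for human review.
# Training texts and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "miracle cure they don't want you to know about",
    "local council approves new library budget",
    "shocking secret plot revealed, share before it's deleted",
    "university publishes peer-reviewed study on vaccines",
]
labels = [1, 0, 1, 0]  # 1 = flag for fact-checker review, 0 = leave alone

flagger = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
print(flagger.predict(["secret miracle plot, share now"]))  # likely flagged

In an appellate arrangement like the one described above, a score from such a model would only open a case for review, with human decisions and user appeals layered on top.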
Blockchain technology has been suggested as a potential defense mechanism against internet manipulation.[22][185] While blockchain was originally developed to create a ledger of transactions for the digital currency bitcoin, it is now widely used in applications where a permanent record or history of assets, transactions, and activities is desired, and it offers a potential for transparency and accountability.[186] Blockchain technology could be applied to make data transport more secure in online spaces and Internet of Things networks, making it difficult for actors to alter or censor content and carry out disinformation attacks.[187] Applying techniques such as blockchain and keyed watermarking on social media and messaging platforms could also help to detect and curb disinformation attacks. The density and rate of forwarding of a message could be observed to detect patterns of activity that suggest the use of bots and fake accounts in disinformation attacks. Blockchain could support both backtracking and forward tracking of events that involve the spreading of disinformation. If the content is deemed dangerous or inappropriate, its spread could be curbed immediately.[185]
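The provenance idea reduces to an append-only, hash-chained log of forwarding events. The toy sketch below, a simplification rather than a production ledger, shows how such a chain makes a message's spread traceable and makes any after-the-fact tampering with the record detectable.

```python
# Toy sketch of the blockchain-style provenance idea: each forwarding event
# is chained by hash, so a message's spread can be traced back and any
# tampering with the record is detectable. Simplified for illustration.

import hashlib
import json

def event_hash(event: dict, prev_hash: str) -> str:
    payload = json.dumps(event, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_event(chain: list, event: dict) -> None:
    prev = chain[-1]["hash"] if chain else "GENESIS"
    chain.append({"event": event, "prev": prev, "hash": event_hash(event, prev)})

def verify(chain: list) -> bool:
    prev = "GENESIS"
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != event_hash(entry["event"], prev):
            return False
        prev = entry["hash"]
    return True

chain: list = []
append_event(chain, {"msg": "m1", "from": "origin", "to": "userA", "t": 0})
append_event(chain, {"msg": "m1", "from": "userA", "to": "userB", "t": 1})

print(verify(chain))             # True: the forwarding history is intact
chain[0]["event"]["from"] = "x"  # attempt to rewrite who originated the message
print(verify(chain))             # False: tampering is detected
```

The same log of timestamped forwarding events is what would feed the density-and-rate analysis mentioned above for spotting bot-driven amplification.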
Understandably, methods for countering disinformation that involve algorithmic governance raise ethical concerns. The use of technologies that track and manipulate information raises questions about "who is accountable for their operation, whether they can create injustices and erode civic norms, and how we should resolve their (un)intended consequences".[177][188][189]
A study from the Pew Research Center reports that public support for restriction of disinformation by both technology companies and government increased among Americans from 2018 to 2021. However, views on whether government and technology companies should take such steps became increasingly partisan and polarized during the same time period.[190]
Cyber security experts claim that collaboration between public and private sectors is necessary to successfully combat disinformation attacks.[20]Recommended cooperative defense strategies include:
However, in the United States, the Republican Party is actively opposing both disinformation research and government involvement in fighting disinformation. Republicans gained a majority in the House in January 2023. Since then, the House Judiciary Committee has sent letters, subpoenas, and threats of legal action to researchers, demanding notes, emails and other records from researchers and even student interns, dating back to 2015. Institutions affected include the Stanford Internet Observatory at Stanford University, the University of Washington, the Atlantic Council's Digital Forensic Research Lab and the social media analytics firm Graphika. Projects include the Election Integrity Partnership, formed to identify attempts "to suppress voting, reduce participation, confuse voters or delegitimize election results without evidence",[192] and the Virality Project, which has examined the spread of false claims about vaccines. Researchers argue that they have academic freedom to study social media and disinformation as well as freedom of speech to report their results.[192][193][194] Despite conservative claims that the government acted to censor speech online, "no evidence has emerged that government officials coerced the companies to take action against accounts".[192]
At the state level, state governments that were politically aligned with anti-vaccine activists successfully sought a preliminary injunction to prevent the Biden Administration from urging social media companies to fight misinformation about public health. The order, issued by the United States Court of Appeals for the Fifth Circuit in 2023, "severely limits the ability of the White House, the surgeon general, [and] the Centers for Disease Control and Prevention... to communicate with social media companies about content related to COVID-19... that the government views as misinformation".[195]
Reports on disinformation in Armenia[54] and Asia[78] identify key issues and make recommendations. These can be applied to many other countries, particularly those experiencing "both profound disruption and an opportunity for change".[54] The Armenian report emphasizes the importance of strengthening civil society by protecting the integrity of elections and rebuilding trust in public institutions. Steps to support the integrity of elections include: ensuring a free and fair process, allowing independent observation and monitoring, allowing independent journalistic access, and investigating electoral infractions. Other suggestions include rethinking state communication strategies to enable all levels of government to communicate more effectively and to address disinformation attacks.[54]
National dialogue bringing together diverse public, community, political, state and nonstate actors as stakeholders is recommended for effective long-term strategic planning, as is a unified legislative strategy for dealing with information spaces. Balancing concerns about freedom of expression with protections for individuals and democratic institutions is critical.[54][196][197]
Another priority is the development of a healthy information environment that supports fact-based journalism, truthful discourse, and independent reporting while rejecting information manipulation and disinformation. Key issues for the support of resilient independent media include transparency of ownership, financial viability, editorial independence, media ethics and professional standards, and mechanisms for self-regulation.[54][196][197][198][78]
During the 2018 Mexican general election, the collaborative journalism project Verificado 2018 was established to address misinformation. It involved at least eighty organizations, including local and national media outlets, universities and civil society and advocacy groups. The group researched online claims and political statements and published joint verifications. During the course of the election, they produced over 400 notes and 50 videos documenting false claims and suspect sites, and tracked instances where fake news went viral.[199] Verificado.mx received 5.4 million visits during the election, with its partner organizations registering millions more.[200]: 25 To deal with the sharing of encrypted messages via WhatsApp, Verificado set up a hotline where WhatsApp users could submit messages for verification and debunking. Over 10,000 users subscribed to Verificado's hotline.[199]
Organizations promoting civil society and democracy, independent journalists, human rights defenders, and other activists are increasingly targets of disinformation campaigns and violence. Their protection is essential. Journalists, activists and organizations can be key allies in combating false narratives, promoting inclusion, and encouraging civic engagement. Oversight and ethics bodies are also critical.[54][201] Organizations that have developed resources and trainings to better support journalists against online and offline violence and violence against women include the Coalition Against Online Violence,[202][203] the Knight Center for Journalism in the Americas,[204] the International Women's Media Foundation,[205] UNESCO,[201][204] PEN America,[206] First Draft,[207] and others.[208]
Media literacy education and information on how to identify and combat disinformation is recommended for public schools and universities.[54] In 2022, countries in the European Union were ranked on a Media Literacy Index to measure resilience against disinformation. Finland, the highest-ranking country, has developed an extensive curriculum that teaches critical thinking and resistance to information warfare, and integrated it into its public education system. Finns also rank high in trust in government authorities and the media.[209][210][211] Organizations such as Faktabaari and Mediametka develop tools and resources around information, media and voter literacy.[211] Following a 2007 cyberattack that included disinformation tactics, Estonia focused on improving its cyberdefenses and made media literacy education a major focus from kindergarten through high school.[212][213]
In 2018, the Executive Vice President of the European Commission for A Europe Fit for the Digital Age gathered a group of experts to produce a report with recommendations for teaching digital literacy. Proposed digital literacy curricula familiarize students with fact-checking websites such as Snopes and FactCheck.org. These curricula aim to equip students with critical thinking skills to discern between factual content and disinformation online.[23] Suggested areas of focus include skills in critical thinking,[214] information literacy,[215][216] science literacy[217] and health literacy.[26]
Another approach is to build interactive games such as the Cranky Uncle game, which teaches critical thinking and inoculates players against techniques of disinformation and science denial. The Cranky Uncle game is freely available and has been translated into at least 9 languages.[218][219] Videos for teaching critical thinking and addressing disinformation can also be found online.[220][221]
Training and best practices for identifying and countering disinformation are being developed and shared by groups of journalists, scientists, and others (e.g. Climate Action Against Disinformation,[24] PEN America,[222][223][224] UNESCO,[46] the Union of Concerned Scientists,[225][226] and the Young African Leaders Initiative[227]).
Research suggests that a number of tactics have proven useful against scientific disinformation around climate change. These include: (1) providing clear explanations of why climate change is occurring; (2) indicating that there is scientific consensus about the existence of climate change and about its basis in human actions; (3) presenting information in ways that are culturally aligned with the listener; and (4) "inoculating" people by clearly identifying misinformation (ideally before a myth is encountered, but also later through debunking).[21][19]
A "Toolbox of Interventions Against Online Misinformation and Manipulation" reviews research into individually-focused interventions to combat misinformation and their possible effectiveness. Tactics include:[228][229]
|
https://en.wikipedia.org/wiki/Disinformation_attack
|
In news media and social media, an echo chamber is an environment or ecosystem in which participants encounter beliefs that amplify or reinforce their preexisting beliefs by communication and repetition inside a closed system, insulated from rebuttal.[2][3][4] An echo chamber circulates existing views without encountering opposing views, potentially resulting in confirmation bias. Echo chambers may increase social and political polarization and extremism.[5] On social media, it is thought that echo chambers limit exposure to diverse perspectives and favor and reinforce presupposed narratives and ideologies.[4][6]
The term is a metaphor based on an acoustic echo chamber, in which sounds reverberate in a hollow enclosure. Another emerging term for this echoing and homogenizing effect within social-media communities on the Internet is neotribalism.
Many scholars note the effects that echo chambers can have on citizens' stances and viewpoints, and specifically the implications they have for politics.[7] However, some studies have suggested that the effects of echo chambers are weaker than often assumed.[8]
The Internet has expanded the variety and amount of accessible political information. On the positive side, this may create a more pluralistic form of public debate; on the negative side, greater access to information may lead to selective exposure to ideologically supportive channels.[5] In an extreme "echo chamber", one purveyor of information will make a claim, which many like-minded people then repeat, overhear, and repeat again (often in an exaggerated or otherwise distorted form)[9] until most people assume that some extreme variation of the story is true.[10]
The echo chamber effect occurs online when a harmonious group of people amalgamate and develop tunnel vision. Participants in online discussions may find their opinions constantly echoed back to them, which reinforces their individual belief systems due to declining exposure to others' opinions.[11] Those belief systems culminate in a confirmation bias regarding a variety of subjects. When individuals want something to be true, they often gather only the information that supports their existing beliefs and disregard statements they find contradictory or critical of those beliefs.[12] Individuals who participate in echo chambers often do so because they feel more confident that their opinions will be more readily accepted by others in the echo chamber.[13] This happens because the Internet has provided access to a wide range of readily available information. People are receiving their news online more rapidly through less traditional sources, such as Facebook, Google, and Twitter. These and many other social platforms and online media outlets have established personalized algorithms intended to cater specific information to individuals' online feeds. This method of curating content has replaced the function of the traditional news editor.[14] The mediated spread of information through online networks carries a risk of an algorithmic filter bubble, leading to concern about how the effects of echo chambers on the internet promote the division of online interaction.[15]
Members of an echo chamber are not fully responsible for their convictions. Once part of an echo chamber, an individual might adhere to seemingly acceptable epistemic practices and still be further misled. Many individuals may be stuck in echo chambers due to factors existing outside of their control, such as being raised in one.[3]
Furthermore, the function of an echo chamber does not entail eroding a member's interest in truth; it focuses upon manipulating their credibility levels so that fundamentally different establishments and institutions will be considered proper sources of authority.[16]
However, empirical findings that clearly support these concerns are still needed,[17] and the field is very fragmented when it comes to empirical results. Some studies do measure echo chamber effects, such as that of Bakshy et al. (2015).[18][19] In this study the researchers found that people tend to share news articles they align with. Similarly, they discovered homophily in online friendships, meaning people are more likely to be connected on social media if they have the same political ideology. In combination, this can lead to echo chamber effects. Bakshy et al. found that a person's potential exposure to cross-cutting content (content opposite to their own political beliefs) through their own network is only 24% for liberals and 35% for conservatives. Other studies argue that expressing cross-cutting content is an important measure of echo chambers: Bossetta et al. (2023) find that 29% of Facebook comments during Brexit were cross-cutting expressions.[20] Echo chambers might therefore be present in a person's media diet but not in how they interact with others on social media.
Another set of studies suggests that echo chambers exist but are not a widespread phenomenon: based on survey data, Dubois and Blank (2018) show that most people consume news from various sources, while around 8% consume media with low diversity.[21] Similarly, Rusche (2022) shows that most Twitter users do not exhibit behavior resembling an echo chamber. However, through high levels of online activity, the small group of users that do make up a substantial share of populist politicians' followers, thus creating homogeneous online spaces.[22]
Finally, there are other studies which contradict the existence of echo chambers. Some found that people also share news reports that do not align with their political beliefs.[23] Others found that people using social media are exposed to more diverse sources than people not using social media.[24] In sum, clear and distinct findings that either confirm or falsify the concerns about echo chamber effects remain absent.
Research on the social dynamics of echo chambers shows that the fragmented nature of online culture, the importance of collective identity construction, and the argumentative nature of online controversies can generate echo chambers where participants encounter self-reinforcing beliefs.[2] Researchers show that echo chambers are prime vehicles to disseminate disinformation, as participants exploit contradictions against perceived opponents amidst identity-driven controversies.[2] As echo chambers build upon identity politics and emotion, they can contribute to political polarization and neotribalism.[25]
Echo chamber studies fail to achieve consistent and comparable results due to unclear definitions, inconsistent measurement methods, and unrepresentative data.[26] Social media platforms continually change their algorithms, and most studies are conducted in the US, limiting the applicability of their findings to political systems with more than two parties.
In recent years, closed epistemic networks have increasingly been held responsible for the era of post-truth and fake news.[27] However, the media frequently conflates two distinct concepts of social epistemology: echo chambers and epistemic bubbles.[16]
An epistemic bubble is an informational network in which important sources have been excluded by omission, perhaps unintentionally. It is an impaired epistemic framework which lacks strong connectivity.[28]Members within epistemic bubbles are unaware of significant information and reasoning.
On the other hand, an echo chamber is an epistemic construct in which voices are actively excluded and discredited. It does not suffer from a lack of connectivity; rather, it depends on a manipulation of trust by methodically discrediting all outside sources.[29] According to research conducted by the University of Pennsylvania, members of echo chambers become dependent on the sources within the chamber and highly resistant to any external sources.[30]
An important distinction exists in the strength of the respective epistemic structures. Epistemic bubbles are not particularly robust. Relevant information has merely been left out, not discredited.[31]One can ‘pop’ an epistemic bubble by exposing a member to the information and sources that they have been missing.[3]
Echo chambers, however, are incredibly strong. By creating pre-emptive distrust between members and non-members, insiders will be insulated from the validity of counter-evidence and will continue to reinforce the chamber in the form of a closed loop.[29]Outside voices are heard, but dismissed.
As such, the two concepts are fundamentally distinct and cannot be utilized interchangeably. However, one must note that this distinction is conceptual in nature, and an epistemic community can exercise multiple methods of exclusion to varying extents.
A filter bubble – a term coined by internet activist Eli Pariser – is a state of intellectual isolation that allegedly can result from personalized searches, when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click behavior and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. The choices made by these algorithms are not transparent.
Homophily is the tendency of individuals to associate and bond with similar others, as in the proverb "birds of a feather flock together". The presence of homophily has been detected in a vast array of network studies. For example, a study conducted by Bakshy et al. explored the data of 10.1 million Facebook users. These users identified as either politically liberal, moderate, or conservative, and the vast majority of their friends were found to have a political orientation similar to their own. Facebook algorithms recognize this and select information biased towards that political orientation to showcase in users' news feeds.[32]
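As a rough illustration, network homophily can be quantified as the share of ties that connect nodes with the same label. The sketch below uses an invented toy graph; studies such as Bakshy et al.'s apply the same idea to friendship graphs with millions of users.

```python
# Small sketch of measuring homophily in a friendship network: the share of
# edges connecting nodes with the same (self-reported) political label.
# The toy graph and labels below are invented for illustration.

labels = {
    "ana": "liberal", "ben": "liberal", "cal": "conservative",
    "dee": "conservative", "eve": "moderate",
}
friendships = [("ana", "ben"), ("ana", "eve"), ("cal", "dee"), ("ben", "eve"), ("cal", "ben")]

same_label = sum(1 for u, v in friendships if labels[u] == labels[v])
homophily = same_label / len(friendships)
print(f"Homophily: {homophily:.2f}")  # 2 of 5 ties are same-label -> 0.40
```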
Recommender systems are information filtering systems put in place on different platforms that provide recommendations depending on information gathered from the user. In general, recommendations are provided in three different ways: based on content that was previously selected by the user, based on content that has properties or characteristics similar to what the user previously selected, or a combination of both.[32]
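A minimal content-based variant of such a system is sketched below. The feature names and values are illustrative assumptions, but the structure shows the feedback loop: the user's past selections become their profile, so the most similar content keeps being recommended.

```python
# Minimal content-based recommender sketch: recommend items whose feature
# vectors are most similar to what the user previously selected.
# Feature names and values are illustrative assumptions.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Item features: [politics, sports, entertainment]
items = {
    "election-analysis": [0.9, 0.0, 0.1],
    "match-report":      [0.0, 1.0, 0.0],
    "celebrity-profile": [0.1, 0.0, 0.9],
    "policy-explainer":  [0.8, 0.1, 0.1],
}

# The average of the user's past selections becomes their profile vector,
# which is exactly how a feedback loop toward similar content can form.
history = [items["election-analysis"]]
profile = [sum(col) / len(history) for col in zip(*history)]

candidates = {k: v for k, v in items.items() if v not in history}
for name, vec in sorted(candidates.items(), key=lambda kv: cosine(profile, kv[1]), reverse=True):
    print(f"{name}: similarity={cosine(profile, vec):.2f}")
```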
Both echo chambers and filter bubbles relate to the ways individuals are exposed to content devoid of clashing opinions, and colloquially might be used interchangeably. However, echo chamber refers to the overall phenomenon by which individuals are exposed only to information from like-minded individuals, while filter bubbles are a result of algorithms that choose content based on previous online behavior, as with search histories or online shopping activity.[18]Indeed, specific combinations of homophily and recommender systems have been identified as significant drivers for determining the emergence of echo chambers.[33]
Culture wars are cultural conflicts between social groups that have conflicting values and beliefs. The term refers to "hot button" topics on which societal polarization occurs.[34] A culture war is defined as "the phenomenon in which multiple groups of people, who hold entrenched values and ideologies, attempt to contentiously steer public policy."[2] Echo chambers on social media have been identified as playing a role in how multiple social groups, holding distinct values and ideologies, form groups and circulate conversations through conflict and controversy.
Online social communities become fragmented by echo chambers when like-minded people group together and members hear arguments in one specific direction with no counterargument addressed. On certain online platforms, such as Twitter, echo chambers are more likely to be found when the topic is more political in nature compared to topics seen as more neutral.[35] Social networking communities are considered to be some of the most powerful reinforcers of rumors[36] due to the trust in the evidence supplied by members' own social group and peers over the information circulating in the news.[37][38] In addition, users feel less fear when projecting their views on the internet than they do face-to-face, which allows for further engagement in agreement with their peers.[39]
This can create significant barriers to critical discourse within an online medium. Social discussion and sharing can potentially suffer when people have a narrow information base and do not reach outside their network. Essentially, the filter bubble can distort one's reality in ways which are not believed to be alterable by outside sources.[40]
Findings by Tokita et al. (2021) suggest that individuals' behavior within echo chambers may dampen their access to information even from desirable sources. In highly polarized information environments, individuals who are highly reactive to socially shared information are more likely than their less reactive counterparts to curate politically homogeneous information environments and to experience decreased information diffusion, in order to avoid overreacting to news they deem unimportant. This makes these individuals more likely to develop extreme opinions and to overestimate the degree to which they are informed.[41]
Research has also shown that misinformation can become more viral as a result of echo chambers, as the echo chambers provide an initial seed which can fuel broader viral diffusion.[42]
Many offline communities are also segregated by political beliefs and cultural views. The echo chamber effect may prevent individuals from noticing changes in language and culture involving groups other than their own. Online echo chambers can sometimes influence an individual's willingness to participate in similar discussions offline. A 2016 study found that "Twitter users who felt their audience on Twitter agreed with their opinion were more willing to speak out on that issue in the workplace".[13]
Group polarization can occur as a result of growing echo chambers. The lack of external viewpoints and the presence of a majority of individuals sharing a similar opinion or narrative can lead to a more extreme belief set. Group polarization can also aid the spread of fake news and misinformation through social media platforms.[43] This can extend to offline interactions, with data revealing that offline interactions can be as polarizing as online interactions (Twitter), arguably because social media-enabled debates are highly fragmented.[44]
Echo chambers have existed in many forms. Examples cited since the late 20th century include:
Since the creation of the internet, scholars have been curious to see the changes in political communication.[56] Due to the new changes in information technology and how it is managed, it is unclear how opposing perspectives can reach common ground in a democracy.[57] The echo chamber effect has largely been cited as occurring in politics, for example on Twitter[58] and Facebook during the 2016 presidential election in the United States.[19] Some believe that echo chambers played a big part in the success of Donald Trump in the 2016 presidential election.[59]
Some companies have also made efforts to combat the effects of echo chambers through algorithmic approaches. A high-profile example is the changes Facebook made to its "Trending" page, an on-site news source for its users. Facebook modified the "Trending" page by transitioning from displaying a single news source to displaying multiple news sources for a topic or event.[60] The intended purpose was to expand the breadth of news sources for any given headline, and therefore expose readers to a variety of viewpoints. There are startups building apps with the mission of encouraging users to open their echo chambers, such as UnFound.news.[61] Another example is a beta feature on BuzzFeed News called "Outside Your Bubble",[62] which adds a module to the bottom of BuzzFeed News articles to show reactions from various platforms like Twitter, Facebook, and Reddit. This concept aims to bring transparency and prevent biased conversations, diversifying the viewpoints readers are exposed to.[63]
|
https://en.wikipedia.org/wiki/Echo_chamber_(media)
|
Enshittification, also known as crapification and platform decay, is a pattern in which two-sided online products and services decline in quality over time. Initially, vendors create high-quality offerings to attract users, then they degrade those offerings to better serve business customers, and finally degrade their services to users and business customers alike to maximize profits for shareholders.
Writer Cory Doctorow coined the neologism enshittification in November 2022,[1] though he was not the first to describe and label the concept.[2][3] Doctorow's term has been widely adopted. The American Dialect Society selected it as its 2023 Word of the Year, with Australia's Macquarie Dictionary following suit for 2024. Merriam-Webster and Dictionary.com also list enshittification as a word.[4][5]
Doctorow advocates for two ways to reduce enshittification: upholding the end-to-end principle, which asserts that platforms should transmit data in response to user requests rather than algorithm-driven decisions; and guaranteeing the right of exit—that is, enabling a user to leave a platform without data loss, which requires interoperability. These moves aim to uphold the standards and trustworthiness of online platforms, emphasize user satisfaction, and encourage market competition.
Enshittification was first used by Cory Doctorow in a November 2022 blog post[6] that was republished three months later in Locus.[7] He expanded on the concept in another blog post[8] that was republished in the January 2023 edition of Wired:[9]
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a "two-sided market", where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them.
In a 2024 op-ed in the Financial Times, Doctorow argued that "'enshittification' is coming for absolutely everything", with "enshittificatory" platforms leaving humanity in an "enshittocene".[10]
Doctorow argues that new platforms offer useful products and services at a loss, as a way to gain new users. Once users are locked in, the platform then offers access to the user base to suppliers at a loss; once suppliers are locked in, the platform shifts surpluses to shareholders.[11] Once the platform is fundamentally focused on the shareholders, and the users and vendors are locked in, the platform no longer has any incentive to maintain quality. Enshittified platforms that act as intermediaries can act as both a monopoly on services and a monopsony on customers, as high switching costs prevent either from leaving even when alternatives technically exist.[9] Doctorow has described the process of enshittification as happening through "twiddling": the continual adjustment of the parameters of the system in search of marginal improvements of profit, without regard to any other goal.[12] Enshittification can be seen as a form of rent-seeking.[9]
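Read as an optimization problem, "twiddling" is a greedy loop: nudge a parameter, keep the change if next-quarter profit rises, repeat. The toy simulation below makes that caricature concrete; every functional form and constant in it is invented for illustration and carries no empirical weight.

```python
# Caricature of "twiddling": greedily ratchet a platform's take rate to
# maximize next-quarter profit while lock-in delays the user exodus.
# All functional forms and constants are invented for illustration.

def profit(take_rate: float, users: float) -> float:
    return take_rate * users  # value skimmed from users and vendors per period

take_rate, users = 0.05, 100.0
for quarter in range(8):
    # Try a marginal increase; adopt it if it raises this quarter's profit.
    candidate = take_rate + 0.05
    if profit(candidate, users) > profit(take_rate, users):
        take_rate = candidate
    # Degraded value allocation slowly drives users away (lock-in delays exit).
    users *= 1.0 - 0.5 * take_rate
    print(f"Q{quarter + 1}: take_rate={take_rate:.2f}, users={users:.1f}, "
          f"profit={profit(take_rate, users):.1f}")
```

Because each step looks only at the next quarter, the loop keeps raising the take rate even as the user base it depends on erodes, which is the dynamic the paragraph above describes.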
To solve the problem, Doctorow has called for two general principles to be followed:
Doctorow's concept has been cited by various scholars and journalists as a framework for understanding the decline in quality of online platforms. Discussions about enshittification have appeared in numerous media outlets, including analyses of how tech giants like Facebook, Google, and Amazon have shifted their business models to prioritize profits at the expense of user experience.[14] This phenomenon has sparked debates about the need for regulatory interventions and alternative models to ensure the integrity and quality of digital platforms.[15]
The American Dialect Society selected enshittification as its 2023 word of the year.[10][16]
The Macquarie Dictionary named enshittification as its 2024 word of the year, selected by both the committee's and people's choice votes for only the third time since the inaugural event in 2006.[17]
Originally meant to be a cheap alternative to hotels, Airbnb became a popular company in the platform economy. However, as with similar platform-economy enterprises that offer very cheap prices, once the venture capital runs out, so do the cheap prices. This is presumably what happened to Airbnb, where prices of many places hosted there have since increased beyond those of hotels, often with fewer amenities, deceptive advertising, additional rules and fees set by hosts, less quality control, and sometimes hidden cameras.[18][19]
In Doctorow's original post, he discussed the practices of Amazon. The online retailer began by wooing users with goods sold below cost and (with an Amazon Prime subscription) free shipping. Once its user base was solidified, more sellers began to sell their products through Amazon. Finally, Amazon began to add fees to increase profits. In 2023, over 45% of the sale price of items went to Amazon in the form of various fees.[20] Doctorow described advertisement within Amazon as a payola scheme in which sellers bid against one another for search-ranking preference, and said that the first five pages of a search for "cat beds" were half advertisements.[9]
Doctorow has also criticized Amazon's Audible service, which controls over 90% of the audiobook market and applies mandatory digital rights management (DRM) to all audiobooks. He pointed out that this means a user leaving the platform loses access to their audiobook library. Doctorow decided in 2014 to stop selling his audiobooks via Audible and to produce them himself, even though that meant earning far less than he would have by letting Amazon "slap DRM" on his books. He has since published more than half a dozen of his audiobooks independently, as Amazon's system would not distribute them without DRM.[21][22]
The market for dating apps has been cited as an example of enshittification due to the conflict between the apps' ostensible goal of matchmaking and their operators' desire to convert users to the paid version of the app and retain them as paying users indefinitely by keeping them single, creating a perverse incentive that leads performance to decline over time as efforts at monetization begin to dominate.[23] Mathematical modeling has suggested that it is in the financial interests of app operators to offer their user base a sub-optimal experience.[24]
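A back-of-the-envelope model shows the shape of the incentive. In the sketch below, which uses invented numbers rather than the cited modeling work, per-user revenue in a subscription app grows as the monthly match rate falls, because poorly matched users stay subscribed longer.

```python
# Toy model of the perverse incentive described above: a subscription app
# earns more when match quality is low enough to keep users paying longer.
# The revenue formula and all numbers are invented purely for illustration.

def expected_revenue(match_rate: float, monthly_fee: float = 20.0) -> float:
    # If a fraction `match_rate` of subscribers finds a partner each month,
    # the average subscription length is 1 / match_rate months.
    avg_months_subscribed = 1.0 / match_rate
    return monthly_fee * avg_months_subscribed

for match_rate in (0.50, 0.25, 0.10, 0.05):
    print(f"match rate {match_rate:.0%}: "
          f"expected revenue per user ${expected_revenue(match_rate):.0f}")

# Revenue rises as matchmaking gets worse -- up to the point where frustrated
# users quit, which is the constraint a real operator would balance against.
```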
According to Doctorow, Facebook offered a good service until it had reached a "critical mass" of users, and it became difficult for people to leave because they would need to convince their friends to go with them. Facebook then began to add posts from media companies into feeds until the media companies too were dependent on traffic from Facebook, and then adjusted the algorithm to prioritize paid "boosted" posts. Business Insider agreed with the view that Facebook was being enshittified, adding that it "constantly floods users' feeds with sponsored (or 'recommended') content, and seems to bury the things people want to see under what Facebook decides is relevant".[25] Doctorow pointed at the Facebook metrics controversy, in which video statistics were inflated on the site, which led to media companies over-investing in Facebook and collapsing. He described Facebook as "terminally enshittified".[9]
Doctorow cites Google Search as one example, which became dominant through relevant search results and minimal ads, then later degraded through increased advertising, search engine optimization, and outright fraud, benefiting its advertising customers. This was followed by Google rigging the ad market through Jedi Blue to recapture value for itself. Doctorow also cites Google's firing of 12,000 employees in January 2023, which coincided with a stock buyback scheme that "would have paid all their salaries for the next 27 years", as well as Google's rush to research an AI search chatbot, "a tool that won't show you what you ask for, but rather, what it thinks you should see".[9][13][26][27][28]
After years of competing fiercely in the "streaming wars", Netflix emerged as the main winner in the early 2020s.[29] Once it had achieved a quasi-monopolistic position, Netflix proceeded to raise prices, introduce an ad-supported tier, discontinue its cheapest ad-free plan in the UK and Canada in 2024,[30] and crack down on password sharing.[31]
In 2023, shortly after its initial filings for an initial public offering, Reddit announced that it would begin charging fees for API access, a move that would effectively shut down many third-party apps by making them cost-prohibitive to operate.[32] CEO Steve Huffman stated that it was in response to AI firms scraping data without paying Reddit for it, but coverage linked the move to the upcoming IPO; the move shut down large numbers of third-party apps, forcing users to use official Reddit apps that provided more profit to the company.[32][33][34] Moderators on the site conducted a blackout protest against the company's new policy, although the changes ultimately went ahead. Many third-party Reddit apps, such as the Apollo app, were shut down because of the new fees.[35][33][36]
In September 2024, Reddit announced that moderators would no longer have the ability to change subreddit accessibility from "public" to "private" without approval from Reddit staff. This was widely interpreted by moderators as a punitive change in response to the 2023 API protests.[37]
The term was applied to the changes to Twitter in the wake of its 2022 acquisition by Elon Musk.[38][26] This included the closure of the service's API to stop interoperable software from being used, suspending users for posting handles of the rival service Mastodon in their profiles, and placing restrictions on the ability to view the site without logging in. Other changes included temporary rate limits for the number of tweets that could be viewed per day, the introduction of paid subscriptions to the service in the form of Twitter Blue (later renamed X Premium),[38] and the reduction of moderation.[39] Musk had the algorithm modified to promote his own posts above others, which caused users' feeds to be flooded with his content in February 2023.[40] In April 2024, Musk announced that new users would have to pay a fee to be able to post.[41]
The changes led to a dramatic decline in revenue for the company. The increase in hate speech on the platform, particularly antisemitism and Islamophobia during the Gaza war, led some organizations to pull advertisements.[42] According to internal documents seen by The New York Times in late 2023, the losses from advertisers were projected to cost the company $75 million by the end of the year.[43] Musk delivered an interview on November 29, 2023, in which he told advertisers leaving the website to "go fuck yourself."[44][45] By August 2024, revenue had fallen 84% compared to before Musk's ownership.[46] As a result of Musk's acquisition, tens of millions of users migrated to a new platform, Bluesky.[47][48][49]
App-based ridesharing company Uber gained market share by ignoring local licensing systems such as taxi medallions while also keeping consumer costs artificially low by subsidizing rides via venture capital funding.[50] Once it achieved a duopoly with competitor Lyft, the company implemented surge pricing to increase the cost of travel to riders and dynamically adjust the payments made to drivers.[50] The suitability of Uber surge pricing as an example of the phenomenon of enshittification is questionable, however, as surge pricing has been found to increase the quantity of drivers during periods when the surge pricing is in effect and to reallocate rides to those who receive the most benefit from them.[51][52] This increase in quantity has been found to increase the availability of Ubers for riders, keeping waiting times low and ride completion rates high during periods of surge pricing.[51][52]
The proposed (and eventually abandoned) changes to the Unity game engine's licensing model in 2023 were described by GamesIndustry.biz as an example of enshittification, as the changes would have applied retroactively to projects which had already been in development for years, degrading quality for both developers and end users while increasing fees.[53] While the Unity engine itself is not a two-sided market, the move was related to Unity's position as a provider of mobile free-to-play services to developers, including in-app purchase systems.[54]
In response to these changes, many game developers announced their intention to abandon Unity for an alternative engine, despite the significant switching cost of doing so, with game designer Sam Barlow specifically using the word enshittification when describing the new fee policy as the motive.[55] Use of the Unity engine at game jams declined rapidly in 2024 as indie developers switched to other engines. Unity usage at the Global Game Jam declined to 36% that year, from 61% in 2023. The GMTK Game Jam also reported a major decline in Unity usage.[56][57]
|
https://en.wikipedia.org/wiki/Enshittification
|
Extremism is "the quality or state of being extreme" or "the advocacy of extreme measures or views".[1] The term is primarily used in a political or religious sense to refer to an ideology that is considered (by the speaker or by some implied shared social consensus) to be far outside the mainstream attitudes of society.[2] It can also be used in an economic context. The term may be used pejoratively by opposing groups, but is also used in academic and journalistic circles in a purely descriptive and non-condemning sense.
Extremists' views are typically contrasted with those of moderates. In Western countries, for example, in contemporary discourse on Islam or on Islamic political movements, the distinction between extremist and moderate Muslims is commonly stressed.[citation needed] Political agendas perceived as extremist often include those of far-left or far-right politics, as well as radicalism, reactionism, chauvinism, fundamentalism, and fanaticism.
Peter T. Coleman and Andrea Bartoli offer this observation on definitions:[3] Extremism is a complex phenomenon, although its complexity is often hard to see. Most simply, it can be defined as activities (beliefs, attitudes, feelings, actions, strategies) of a character far removed from the ordinary. In conflict settings it manifests as a severe form of conflict engagement. However, the labeling of activities, people, and groups as "extremist", and the defining of what is "ordinary" in any setting, is always a subjective and political matter. Thus, we suggest that any discussion of extremism be mindful of the following: Typically, the same extremist act will be viewed by some as just and moral (such as pro-social "freedom fighting"), and by others as unjust and immoral (antisocial "terrorism"), depending on the observer's values, politics, moral scope, and the nature of their relationship with the actor. In addition, one's sense of the moral or immoral nature of a given act of extremism (such as Nelson Mandela's use of guerrilla war tactics against the South African government) may change as conditions (leadership, world opinion, crises, historical accounts, etc.) change. Thus, the current and historical context of extremist acts shapes our view of them. Power differences also matter when defining extremism. When in conflict, the activities of members of low-power groups tend to be viewed as more extreme than similar activities committed by members of groups advocating the status quo.
In addition, extreme acts are more likely to be employed by marginalized people and groups who view more normative forms of conflict engagement as blocked for them or biased. However, dominant groups also commonly employ extreme activities (such as governmental sanctioning of violent paramilitary groups or the attack in Waco by the FBI in the U.S.).
Extremist acts often employ violent means, although extremist groups will differ in their preference for violent extremism vs. nonviolent extremism, in the level of violence they employ, and in the preferred targets of their violence (from infrastructure to military personnel to civilians to children). Again, low-power groups are more likely to employ direct, episodic forms of violence (such as suicide bombings), whereas dominant groups tend to be associated with more structural or institutionalized forms (like the covert use of torture or the informal sanctioning of police brutality).[3]
In Germany, extremism is explicitly used to differentiate between democratic and non-democratic intentions. The German Ministry of Home Affairs defines extremism as an intention that rejects the democratic constitutional state and fundamental values, its norms and its laws.[4]
Although extremist individuals and groups are often viewed as cohesive and consistently evil, it is important to recognize that they may be conflicted or ambivalent psychologically as individuals, or contain difference and conflict within their groups. For instance, individual members of Hamas may differ considerably in their willingness to negotiate their differences with the Palestinian Authority and, ultimately, with certain factions in Israel. Ultimately, the core problem that extremism presents in situations of protracted conflict is less the severity of the activities (although violence, trauma, and escalation are obvious concerns) but more so the closed, fixed, and intolerant nature of extremist attitudes, and their subsequent imperviousness to change.[3]
Astrid Bötticher notes several differences between radicalism and extremism, among them in goals (idealistic vs. restorative, emancipatory vs. anti-democratic), morals (universal vs. particular), approach towards diversity (acceptance vs. disdain), and use of violence (pragmatic and selective vs. legitimate and acceptable).[5]
Eric Hoffer and Arthur Schlesinger Jr. were two political writers during the mid-20th century who gave what they purported to be accounts of "political extremism". Hoffer wrote The True Believer and The Passionate State of Mind about the psychology and sociology of those who join "fanatical" mass movements. Schlesinger wrote The Vital Center, championing a supposed "center" of politics within which "mainstream" political discourse takes place, and underscoring the alleged need for societies to draw definite lines regarding what falls outside of this acceptability.
Seymour Martin Lipset argued that besides the extremism of the left and right there is also an extremism of the center, and that it actually formed the base of fascism.[6]
Laird Wilcox identifies 21 alleged traits of a "political extremist", ranging from "a tendency to character assassination" and hateful behavior like "name calling and labelling", to general character traits like "a tendency to view opponents and critics as essentially evil", "a tendency to substitute intimidation for argument" or "groupthink".[7]
"Extremism" is not a standalone characteristic. The attitude or behavior of an "extremist" may be represented as part of a spectrum, which ranges from mild interest through "obsession" to "fanaticism" and "extremism". The alleged similarity between the "extreme left" and "extreme right", or perhaps between opposing religious zealots, may mean only that all these are "unacceptable" from the standpoint of the mainstream or majority.
Economist Ronald Wintrobe[8] argues that many extremist movements, despite having completely different ideologies, share a common set of characteristics. As an example, he lists the following common characteristics of "Jewish fundamentalists" and "the extremists of Hamas":[9]
Among the explanations for extremism is one that views it as a plague. Arno Gruen said, "The lack of identity associated with extremists is the result of self-destructive self-hatred that leads to feelings of revenge toward life itself, and a compulsion to kill one's own humanness." In this context, extremism is seen not as a tactic, nor an ideology, but as a pathological illness which feeds on the destruction of life.[3] Dr. Kathleen Taylor believes religious fundamentalism is a mental illness and is "curable."[10] There are distinct psychological features of extremists that contribute to conflict among societal groups; Jan-Willem van Prooijen identified them as psychological distress, cognitive simplicity, overconfidence and intolerance.[11]
Another view is that extremism is an emotional outlet for severe feelings stemming from "persistent experiences of oppression, insecurity, humiliation, resentment, loss, and rage" which are presumed to "lead individuals and groups to adopt conflict engagement strategies which "fit" or feel consistent with these experiences".[3]
Extremism is seen by other researchers as a "rational strategy in a game over power",[3] as described in the works of Eli Berman.
In a 2018 study at University College London, scientists demonstrated that people with extreme political views (both extreme right and extreme left) had significantly worse metacognition: the ability to recognize that one is wrong and modify one's views when presented with contrary evidence. People at either political extreme were shown to have much greater (but misplaced) confidence in their beliefs and to resist change.[12]
A 2019 study found that political extremism on both the left and right tended to have four common psychological features: psychological distress stimulates the adoption of an extreme ideological outlook, extreme ideologies tend to have relatively simplistic black-white perceptions of the social world, said mental simplicity causes overconfidence in judgements, and political extremists are less tolerant of different groups and opinions than moderates.[13]
After being accused of extremism, Martin Luther King Jr. criticized the mainstream usage of the term in his Letter from Birmingham Jail:
"But though I was initially disappointed at being categorized as an extremist, as I continued to think about the matter I gradually gained a measure of satisfaction from the label. Was notJesusan extremist for love…Was notAmosan extremist for justice…Was notMartin Lutheran extremist…So the question is not whether we will be extremists, but what kind of extremists we will be. Will we be extremists for hate or for love? Will we be extremists for the preservation of injustice or for the extension of justice?"[14][15]
In his acceptance speech at the 1964 Republican National Convention, Barry Goldwater said, "I would remind you that extremism in the defense of liberty is no vice. And let me remind you also that moderation in the pursuit of justice is no virtue."[16]
Robert F. Kennedy said, "What is objectionable, what is dangerous about extremists is not that they are extreme but that they are intolerant. The evil is not what they say about their cause, but what they say about their opponents."[citation needed]
In Russia, the laws prohibiting extremist content are used to suppress freedom of speech through very broad and flexible interpretation.[17] Published material classified as "extremist", and thus prosecuted, has included protests against the court rulings in the Bolotnaya Square case ("calling for illegal action"), criticism of overspending by a local governor ("insult of the authorities"), publishing a poem in support of Ukraine ("inciting hatred"),[18][19] an open letter against the war in Chechnya by the writer Polina Zherebcova,[20] the Jehovah's Witnesses movement in Russia,[21] and articles by Raphael Lemkin, the initiator of the Genocide Convention of 1948.[22]
Tushar Gandhi, Mahatma Gandhi's great-grandson, says India's Hindu nationalism is a threat to Gandhi's legacy and that the ideology of hate, division and polarization that led to Gandhi's assassination by a religious zealot in 1948 has captured India.[23]
Since the 1990s, in United States politics, the term Sister Souljah moment has been used to describe a politician's public repudiation of an allegedly extremist person or group, statement, or position which might otherwise be associated with his own party.[citation needed]
The term "subversive" was often used interchangeably, in the United States at least, with "extremist" during theCold Warperiod, although the two words are not synonymous.[citation needed]
|
https://en.wikipedia.org/wiki/Extremism
|
In psychology, the false consensus effect, also known as consensus bias, is a pervasive cognitive bias that causes people to overestimate the extent to which other people share their beliefs and views;[1] it is the tendency to "see their own behavioral choices and judgments as relatively common and appropriate to existing circumstances".[2] In other words, people assume that their personal qualities, characteristics, beliefs, and actions are relatively widespread through the general population.
This false consensus is significant because it increases self-esteem (the overconfidence effect). The bias is especially prevalent in group settings where one thinks the collective opinion of one's own group matches that of the larger population. Since the members of a group reach a consensus and rarely encounter those who dispute it, they tend to believe that everybody thinks the same way. The false-consensus effect is not restricted to cases where people believe that their values are shared by the majority; even when they do not, they still overestimate the extent to which their beliefs are shared.[3] Additionally, when confronted with evidence that a consensus does not exist, people often assume that those who do not agree with them are defective in some way.[4]
The false consensus effect has been widely observed and supported by empirical evidence. One recent study has shown that consensus bias may improve decisions about other people's preferences.[5] Ross, Greene and House first defined the false consensus effect in 1977, with emphasis on the relative commonness that people perceive about their own responses; however, similar projection phenomena had already caught attention in psychology. Specifically, concerns about connections between individuals' personal predispositions and their estimates of peers had appeared in the literature for some time. For instance, Katz and Allport illustrated in 1931 that students' estimates of how frequently others cheated were positively correlated with their own behavior. Later, around 1970, the same phenomena were found for political beliefs and the prisoner's dilemma situation. In 2017, researchers identified a persistent egocentric bias when participants learned about other people's snack-food preferences.[5] Moreover, recent studies suggest that the false consensus effect can also affect professional decision makers; specifically, it has been shown that even experienced marketing managers project their personal product preferences onto consumers.[6][7]
There is no single cause for this cognitive bias; however, several underlying mechanisms have been suggested to contribute to its formation and maintenance. Previous research has suggested that cognitive and perceptual factors (motivated projection, accessibility of information, emotion, etc.) may contribute to the consensus bias, while recent studies have focused on its neural mechanisms. The bias may also result, at least in part, from non-social stimulus-reward associations.[5]
Cognitive mechanisms such as the availability heuristic, self-serving bias, and naïve realism have been suggested as at least partial underlying factors in the false consensus effect. The availability heuristic is a mental shortcut by which people may incorrectly judge the likelihood or commonness of something based on how cognitively available the concept is to them, or how quickly it comes to mind; this could contribute to the false consensus effect when individuals have a readily available concept, causing them to overestimate its commonality. Self-serving bias is an attribution error that describes the tendency to attribute successes and positive traits to one's own internal factors, and failures or negative traits to the external environment. It can contribute to the false consensus effect when people justify their actions with self-serving bias and then use the false consensus effect to reinforce that those actions were acceptable, by believing their views are widely shared. Naïve realism is the idealist belief that we perceive the world accurately and that individuals who disagree with our perceptions are incorrect or biased; this contributes to the false consensus effect by reinforcing the idea that people who disagree with our view are part of the minority, whereas the majority agrees with us.
The false consensus effect can be partially attributed to the innate desire to conform and be liked by others in a social environment, by sharing characteristics with members of a social group within parameters determined by the social environment; these parameters can be influenced by demographic factors such as age, gender, and socioeconomic status, as well as cultural differences. The innate motivation to be liked is known as normative social influence,[8] conceptualized by the social psychologist Solomon Asch in 1951. Normative social influence serves a social and evolutionary function: sharing characteristics with a group, forming a group identity, and benefiting from the protection and resources of group membership. It can cause the false consensus effect by creating a social illusion: the need to be liked causes people to agree with others outwardly even if they disagree internally, creating the appearance of collective agreement. Additionally, the false consensus effect is fundamentally a perceptual effect; normative social influence motivates individuals to agree with each other, potentially leading some to believe that everyone getting along socially means that everyone agrees. Normative social influence also leads people to feel validated in their beliefs when they are not challenged, reinforcing the illusion of correctness and group cohesion.
Another type of social pressure to conform is informational social influence,[9][10] also described by Asch, which may contribute to the false consensus effect. This describes individuals' tendency to conform to a majority consensus out of the need to be correct; additionally, Asch posited that informational social influence is partially caused by people learning how to act within socially determined guidelines by observing others' behavior, allowing them to fit into a cohesive group identity. Maintenance of the false consensus effect may be related to the tendency to make decisions with relatively little information.[11] When faced with uncertainty and a limited sample from which to make decisions, people often "project" themselves onto the situation. When this personal knowledge is used as input to make generalizations, it often results in the false sense of being part of the majority.[12]
The false-consensus effect can be traced back to two parallel theories of social perception, "the study of how we form impressions of and make inferences about other people".[13] The first is the idea of social comparison. The principal claim of Leon Festinger's (1954) social comparison theory was that individuals evaluate their thoughts and attitudes based on other people.[14] This may be motivated by a desire for confirmation and the need to feel good about oneself. Informational social influence can be viewed as an extension of this theory: people use others as sources of information to define social reality and guide behavior.[9][10] The problem, though, is that people are often unable to accurately perceive the social norm and the actual attitudes of others. In other words, research has shown that people are surprisingly poor "intuitive psychologists" and that our social judgments are often inaccurate.[14] This finding helped to lay the groundwork for an understanding of biased processing and inaccurate social perception. The false-consensus effect is just one example of such an inaccuracy.[10]
The second influential theory is projection, the idea that people project their own attitudes and beliefs onto others.[15] This idea of projection is not a new concept. It can be found in Sigmund Freud's work on the defense mechanism of projection, D.S. Holmes' work on "attributive projection" (1968), and Gustav Ichheiser's work on social perception (1970).[16] D.S. Holmes, for example, described social projection as the process by which people "attempt to validate their beliefs by projecting their own characteristics onto other individuals".[14] In the psychology of religion, Ludwig Feuerbach (1804–1872) posited the projection (or reflection) theory of religion,[17] which holds that human perceptions of the divine are projections of our own ideal qualities, through which we conceptualize our aspirations.
Here, a connection can be made between these two theories of social comparison and projection: as social comparison theory explains, individuals constantly look to peers as a reference group and are motivated to do so in order to seek confirmation for their own attitudes and beliefs.[14]
The false-consensus effect, as defined by Ross, Greene, and House in 1977, came to be the culmination of the many related theories that preceded it. In their well-known series of four studies, Ross and associates hypothesized and then demonstrated that people tend to overestimate the popularity of their own beliefs and preferences.[2] The studies were conducted both in hypothetical situations, through questionnaire surveys, and in authentic conflict situations. In the questionnaire studies, participants were presented with hypothetical events and were asked not only to indicate their own behavioral choices and characteristics under the given circumstances, but also to rate the responses and traits of peers, referred to as "actors". In the real-situation studies, participants were actually confronted with conflict situations in which they were asked to choose behavioral alternatives and to judge the traits and decisions of two supposedly real individuals who had taken part in the study.[2] In general, the raters made more "extreme predictions" about the personalities of actors who did not share the raters' own preference. In fact, the raters may have even thought that there was something wrong with the people expressing the alternative response.[4]
In the ten years after the influential Ross et al. study, close to 50 papers were published with data on the false-consensus effect.[18] Theoretical approaches were also expanded. The theoretical perspectives of this era can be divided into four categories: (a) selective exposure and cognitive availability, (b) salience and focus of attention, (c) logical information processing, and (d) motivational processes.[18] In general, the proponents of these theories hold that there is no single right answer; rather, they acknowledge that there is overlap among the theories and that the false-consensus effect is most likely due to a combination of these factors.[19]
The first of these perspectives, selective exposure and cognitive availability, is closely tied to the availability heuristic, which suggests that perceptions of similarity (or difference) are affected by how easily those characteristics can be recalled from memory.[18] As one might expect, similarities between oneself and others are more easily recalled than differences, in part because people usually associate with those who are similar to themselves. This selective exposure to similar people may bias or restrict the "sample of information about the true diversity of opinion in the larger social environment".[20] As a result of selective exposure and the availability heuristic, it is natural for similarities to prevail in one's thoughts.[19]
Botvin et al. (1992) conducted a well-known study of the false-consensus effect in a specific adolescent community, seeking to determine whether students show a higher level of false consensus among their direct peers than with society at large.[21] The participants were 203 college students ranging in age from 18 to 25 (with an average age of 18.5). They were given a questionnaire and asked to answer questions regarding a variety of social topics; for each topic, they were asked how they felt about it and to estimate the percentage of their peers who would agree with them. The results showed that the false-consensus effect was extremely prevalent when participants were describing the rest of their college community; out of twenty topics considered, sixteen prominently demonstrated the false-consensus effect. The high levels of false consensus seen in this study can be attributed to the group studied: because the participants were asked to compare themselves to a group of peers that they are constantly around (and view as very similar to themselves), the levels of the false-consensus effect increased.[21]
The second perspective, salience and focus of attention, suggests that when an individual focuses solely on their own preferred position, they are more likely to overestimate its popularity, thus falling victim to the false-consensus effect.[20] This is because that position is the only one in their immediate consciousness. Performing an action that promotes the position will make it more salient and may increase the false-consensus effect. If, however, more positions are presented to the individual, the degree of the false-consensus effect might decrease significantly.[20]
The third perspective, logical information processing, assumes that active and seemingly rational thinking underlies an individual's estimates of similarity among others.[20] In a study by Fox, Yinon, and Mayraz, researchers examined whether levels of the false-consensus effect differed across age groups. Participants were split into four age groups; two hundred participants were used, and gender was not considered a factor. As in the previous study mentioned, this study used a questionnaire as its main source of information. The results showed that the false-consensus effect was extremely prevalent in all groups, but most prevalent in the oldest age group (the participants labeled as "old-age home residents"), who showed the effect in all 12 areas about which they were questioned. The increase in false consensus seen in the oldest age group can be attributed to their high level of "logical" reasoning behind their decisions: having lived the longest, they feel they can project their beliefs onto all age groups on the basis of their (seemingly objective) past experiences and wisdom. The younger age groups cannot relate in the same way to those older than them, because they have not had that experience and do not claim to know such objective truths. These results demonstrate a tendency for older people to rely more heavily on situational attributions (life experience) as opposed to internal attributions.[22]
The fourth perspective, motivational processes, stresses the benefits of the false-consensus effect: namely, the perception of increased social validation, social support, and self-esteem. It may also be useful to exaggerate similarities in social situations in order to increase liking.[23]
The concept of the false consensus effect can also be extended to predictions about future others. Belief in a favorable future is the belief that future others will change their preferences and beliefs in alignment with one's own.[24]
Rogers, Moore, and Norton (2017)[24] find that belief in a favorable future is greater in magnitude than the false-consensus effect for two reasons:
In more recent years, researchers have explored potential differences in how the false consensus effect manifests across cultures. While there is still a notable gap in the cross-cultural literature, growing empirical evidence suggests that the strength and prevalence of the false consensus effect vary with cultural context.
Broadly, research has found differences in the false consensus effect on the bases of individualism and collectivism. Individualistic cultures encourage distinguishing the self from others and expressing unique characteristics, while collectivistic cultures value group harmony and cohesion.[25] One particularly well-studied difference between individualistic and collectivistic cultures is the way in which individuals construe and understand their sense of self, or self-concept: people in collectivistic cultures tend to have more interdependent self-concepts, in which the self is understood through relationships with close others, whereas people in individualistic cultures tend to have more independent self-concepts, in which the self is understood through personal characteristics that distinguish the self from others.[26] Differences in individualism and collectivism, and more specifically in self-concept, suggest differences in perceptions and social motivations[27] that researchers theorize affect the influence of the false consensus effect.
Choi & Cha (2019)[28] find differences in the strength of the false consensus effect based on domain. Studying Koreans and European Americans, they find that false consensus effects are stronger among Koreans for political beliefs, personal problems, and behavioural choices, but not for personal traits and values. They suggest that these findings result from differences in individualism and collectivism, as these influence attribution and motivation. Because collectivism places greater emphasis on situational factors, the researchers posit that individuals will assume situational factors dictate behaviour more so than people from individualistic cultures, who are likely to attribute behaviour to disposition.[29] Thus, it is suggested that Koreans perceive greater similarity in domains with increased potential for social influence, as individuals perceive others as being similarly influenced by the situation. By contrast, it is suggested that in these same domains European Americans perceive less similarity, as they view behaviours and opinions as resulting from an individual's personal characteristics. Additionally, the researchers suggest that differences in perceived similarity across domains may be influenced by differences in consistency. Prior cross-cultural research finds that independence is motivated by self-consistency across contexts, while interdependence is motivated by consistency within social roles.[30] The researchers thus posit that European Americans perceive similarity in personal traits and values because they view these domains as more consistent, and that Koreans perceive greater similarity in domains that implicate others because they understand consistency through social roles and relationships.
Similar research by Ott-Holland et al. (2014)[31] finds evidence of greater false consensus in collectivistic cultures. Specifically, they examine institutional collectivism, in which action for collective purpose and benefit is valued over individual action.[32] They find that people from countries high in institutional collectivism perceive more similarity between themselves and others than people from individualistic countries. The researchers posit that an emphasis on collective action motivates perceptions of similarity. However, this effect was small, and a limited number of countries were studied.
Overall, the existing empirical work provides evidence of notable cross-cultural differences in the false consensus effect. In certain contexts, false consensus generally appears to be stronger in collectivistic cultures. However, this facet of cross-cultural research is still developing, and the work thus far has been limited to specific collectivistic societies whose findings cannot be generalized to all contexts.
https://en.wikipedia.org/wiki/False_consensus_effect
Far-right political groups use mainstream social media platforms for communication, propaganda, and mobilization. These platforms include Facebook, Instagram, TikTok, X (formerly Twitter) and YouTube.[1] By leveraging viral trends, entertaining content, and direct interaction, far-right groups aim to spread their political messages, recruit followers, and foster a sense of community. Such activities are part of broader political processes and activities that involve the organization and spread of political values and ideologies.
The internet has facilitated new channels of communication that significantly impact the spread of news and the dynamics of political discourse. The interactive nature of social media allows far-right groups to reach wider and younger audiences, often using subtle messaging and popular social media tactics. Social media has become a crucial[according to whom?] medium for how news and political information are consumed and shared, influencing public perception and civic engagement.[2]
Far-right groups on platforms like TikTok engage with the youth through relatable and often non-political content to subtly promote their ideologies. This approach can affect political participation and election outcomes by shaping opinions and encouraging political involvement.[3] Additionally, social media usage in political campaigns has become increasingly significant due to its communal and interactive nature, as users engage in discussions, share endorsements, and participate in collective actions such as voting encouragement.[not verified in body]
Social media platforms are known for enabling anyone with an internet connection to create content and actively participate in political discourse.[4] They enhance access to political information.[5] However, many users primarily consume content passively, with content creation concentrated among a small group of active users. According to a Eurobarometer survey by the European Parliament, 79% of young Europeans aged 15 to 24 follow influencers or content creators on social media, highlighting the increasing use of these platforms for news consumption in this age group.[6]
Far-right influencers use strategies from influencer culture to spread reactionary messages and monetize their politics. They engage in viral stunts and create real-world commotion to gain online visibility, fostering a sense of shared intimacy with their followers. Additionally, they employ provocative tactics such as trolling and humor to build community and disguise hate speech, while also appearing authentic and relatable to maintain audience support.[7] Far-right groups exploit the technological affordances of social media platforms to maximize the reach and impact of their messages. They rely on replicability to share and alter content across different platforms, often decontextualizing messages to fit their narrative. Scalability is achieved through strategic use of algorithms and hashtags, allowing for broader audience engagement and visibility. Additionally, connectivity is enhanced by forming online communities that foster in-group solidarity and facilitate the spread of extremist ideologies, bypassing traditional media gatekeepers and leveraging direct communication with followers.[8]
Far-right groups have been exploiting Facebook's algorithmic tendencies to create ideological echo chambers, where conservatives and liberals largely consume different political news,[9] which may lead to increased political polarization, although research on this point is not yet conclusive.[9] Research has shown that changes to Facebook's algorithm significantly alter what users see and how they interact on the platform, with conservatives engaging more with political news and consuming more content flagged as untrustworthy or inaccurate. This asymmetry facilitates the spread of far-right misinformation, as politically aligned content is prioritized, encouraging conservative users to like, share, and comment more frequently on such posts.[10] In addition to algorithmic manipulation, far-right militias and extremist groups have established strong presences on Facebook, using the platform to organize, recruit, and spread their ideology.[11] They create private groups and pages that foster a sense of community and solidarity among members, often bypassing platform moderation policies. These groups frequently engage in activities designed to provoke conflict and gain visibility, such as trolling and viral stunts, and use Facebook's connectivity features to coordinate real-world actions and protests. Despite Meta's efforts to moderate content, far-right groups continue to leverage Facebook's features to maintain and grow their influence online.[11]
Far-right groups have adeptly utilized Instagram to recruit young followers and spread extremist ideologies. Instagram's visual nature and algorithmic design make it susceptible to these activities.[12] Far-right influencers often post aesthetically pleasing images interwoven with subtle far-right symbols and messages. For instance, women influencers play a key role by blending personal lifestyle content with right-wing hashtags and symbols like the Black Sun, which carry deeper ideological meanings for those aware of their significance.[13] Instagram's algorithmic recommendations gradually expose users to more extremist content, fostering a sense of insider knowledge and belonging within far-right communities. This method creates filter bubbles and echo chambers in which users repeatedly encounter content that reinforces their beliefs.[13][14] For example, right-wing groups exploit hashtags such as #heimatverliebt (love of homeland) to attract followers and gently introduce them to extremist ideologies. Instagram's inadequate moderation has allowed groups like "The British Hand" and the "National Partisan Movement" to recruit young followers with minimal interference. These groups blend mainstream appeal with extremist ideology, using Instagram's visual and social engagement tools to build a community and propagate their messages. The platform's inadequate content moderation makes it particularly vulnerable to far-right exploitation, with extremists using visually engaging content and weakly enforced policies to spread their ideology.[14]
Political entities, such as Germany's far-right party Alternative for Germany (AfD), have also used Instagram's ad features to promote divisive and hateful content. These ads often blame immigrants for societal issues, leveraging emotionally charged imagery, sometimes manipulated by AI, to incite fear and garner support. Despite Meta's policies against hate speech and divisive content, such ads have reached significant audiences, highlighting the challenges in moderating politically charged content on such a large platform.[15] By manipulating platform algorithms and exploiting visual appeal, far-right groups on Instagram have effectively created a recruitment pipeline that subtly guides young users from mainstream content to extremist ideologies, operating in plain sight and often evading content moderation efforts.[13][14][15]
Far-right groups have increasingly used TikTok to spread their ideologies, recruit members, and influence political processes, especially targeting young voters.[16] TikTok's user-friendly video tools and personalized content algorithms make it an effective platform for disseminating propaganda. These groups often disguise extremist messages as benign or humorous content, which lowers resistance among younger audiences.[16] Investigations reveal that parties such as Germany's Alternative for Germany (AfD) and Romania's Alliance for the Union of Romanians (AUR) manipulate engagement metrics by purchasing fake followers and likes, enhancing the perceived popularity of their content. This tactic has significantly impacted youth votes in recent European elections.[17] Additionally, the platform has been a conduit for spreading conspiracy theories and misinformation, aligning with pro-Russian narratives and extremist ideologies across various countries.[3] Despite TikTok's assertions of robust policies against harmful content, the platform remains a significant vector for far-right activities.[citation needed]
Under Elon Musk's leadership, X (formerly Twitter) has transformed significantly, particularly regarding its openness to far-right and extremist content. Musk, who purchased Twitter in 2022, has positioned himself as a champion of "free speech," subsequently scaling back the platform's moderation efforts. This shift has led to a noticeable increase in right-wing and extremist content, including antisemitism and misinformation.[18][19] A notable instance reflecting Musk's influence on the platform was the announcement of Ron DeSantis' 2024 presidential campaign via Twitter Spaces. This event underscored the platform's strategic pivot towards engaging conservative and far-right audiences.[18] Musk's tenure has been characterized by several controversial decisions, such as reinstating accounts previously banned for spreading misinformation and extremist rhetoric. This leniency has fueled the proliferation of far-right content.[19] Media Matters' investigations have repeatedly highlighted the presence and impact of extremist content on X. A report from Media Matters revealed that advertisements from major corporations were appearing alongside posts with pro-Nazi and white supremacist content. This led to several large advertisers pulling their ads from the platform, emphasizing the ongoing challenge of content moderation.[20] Following this report, Musk announced a lawsuit against Media Matters, arguing that the report exaggerated the prevalence of extremist content. Texas Attorney General Ken Paxton also launched an investigation into Media Matters, aligning with Musk's stance and further politicizing the issue.[20] Overall, the changes under Musk's leadership have made X a more hospitable environment for far-right groups, amplifying their reach and influence in political and social spheres.[19]
https://en.wikipedia.org/wiki/Far-right_usage_of_social_media
The misinformation effect occurs when a person's recall of episodic memories becomes less accurate because of post-event information.[1] The misinformation effect has been studied since the mid-1970s. Elizabeth Loftus is one of the most influential researchers in the field. One theory is that original information and the misleading information that was presented after the fact become blended together.[2] Another theory is that the misleading information overwrites the original information.[3] Scientists suggest that because the misleading information is the most recent, it is more easily retrieved.[4]
The misinformation effect is an example of retroactive interference, which occurs when information presented later interferes with the ability to retain previously encoded information. Individuals have also been shown to be susceptible to incorporating misleading information into their memory when it is presented within a question.[5] Essentially, the new information that a person receives works backward in time to distort memory of the original event.[6] One mechanism through which the misinformation effect occurs is source misattribution, in which the false information given after the event becomes incorporated into people's memory of the actual event.[7] The misinformation effect also appears to stem from memory impairment, meaning that post-event misinformation makes it harder for people to remember the event.[7] The misinformation effect reflects two of the cardinal sins of memory: suggestibility, the influence of others' expectations on our memory; and misattribution, information attributed to an incorrect source.
Research on the misinformation effect has uncovered concerns about the permanence and reliability of memory.[8] Understanding the misinformation effect is also important given its implications for the accuracy of eyewitness testimony, as there are many chances for misinformation to be incorporated into witnesses' memories through conversations with other witnesses, police questioning, and court appearances.[9][7]
Loftus and colleagues conducted early misinformation effect studies in 1974 and 1978.[10][11] Both studies involved automobile accidents. In the latter study, participants were shown a series of slides, one of which featured a car stopping in front of a stop sign. After viewing the slides, participants read a description of what they saw. Some of the participants were given descriptions that contained misinformation, stating that the car stopped at a yield sign. Following the slides and the reading of the description, participants were tested on what they saw. The results revealed that participants who were exposed to such misinformation were more likely to report seeing a yield sign than participants who were not misinformed.[12]
Similar methods continue to be used in misinformation effect studies. Standard methods involve showing subjects an event, usually in the form of a slideshow or video. The event is followed by a time delay and the introduction of post-event information. Finally, participants are retested on their memory of the original event.[13] The original study paved the way for multiple replications of the effect[specify] in order to test such things as the specific processes initially causing the effect to occur and how individual differences influence susceptibility to the effect.
Functional magnetic resonance imaging (fMRI) from 2010 pointed to certain brain areas which were especially active when false memories were retrieved. Participants studied photos during an fMRI scan. Later, they viewed sentences describing the photographs, some of which contained information conflicting with the photographs. One day later, participants returned for a surprise item memory recognition test on the content of the photographs. Results showed that some participants created false memories, reporting the verbal misinformation conflicting with the photographs.[14] During the original event phase, increased activity was found in the left fusiform gyrus and the right temporal/occipital cortex, which may have reflected attention to visual detail associated with later accurate memory for the critical item(s), resulting in resistance to the effects of later misinformation.[14] Retrieval of true memories was associated with greater reactivation of sensory-specific cortices, for example, the occipital cortex for vision.[14] Electroencephalography research on this issue also suggests that the retrieval of false memories is associated with reduced attention and recollection-related processing relative to true memories.[15]
Not everyone is equally susceptible to the misinformation effect. Individual traits and qualities can either increase or decrease one's susceptibility to recalling misinformation.[12] Such traits and qualities include age, working memory capacity, personality traits and imagery abilities.
Several studies have focused on the influence of the misinformation effect on various age groups.[16]Young children—especially pre-school-aged children—are more susceptible than older children and adults to the misinformation effect.[17][18][16]Young children are particularly susceptible to this effect as it relates to peripheral memories and information, as some evidence suggests that the misinformation effect is stronger on an ancillary, existent memory than on a new, purely fabricated memory. This effect is redoubled if its source is in the form of a narrative rather than a question.[19]However, children are also more likely to accept misinformation when it is presented in specific questions rather than in open-ended questions.[17]
Additionally, there are different perspectives regarding the vulnerability of elderly adults to the misinformation effect. Some evidence suggests that elderly adults are more susceptible to the misinformation effect than younger adults.[16][20][18] Contrary to this perspective, however, other studies hold that older adults may make fewer mistakes when it comes to the misinformation effect than younger ones, depending on the type of question being asked and the skillsets required in the recall.[21] This contrasting perspective holds that the defining factor when it comes to age, at least in adults, is largely cognitive capacity, with the cognitive deterioration that commonly accompanies age being the typical cause of the observed decline.[21] Additionally, there is some research to suggest that older adults and younger adults are equally susceptible to misinformation effects.[22]
Individuals with greater working memory capacity are better able to establish a more coherent image of an original event. In one study, participants performed a dual task: simultaneously remembering a word list and judging the accuracy of arithmetic statements. Participants who were more accurate on the dual task were less susceptible to the misinformation effect, which allowed them to reject the misinformation.[12][23]
The Myers–Briggs Type Indicator is one type of test used to assess participant personalities. Individuals were presented with the same misinformation procedure as that used in the original Loftus et al. study in 1978 (see above). The results were evaluated in regard to their personality type. Introvert-intuitive participants were more likely to accept both accurate and inaccurate post-event information than extrovert-sensate participants. Researchers suggested that this likely occurred because introverts are more likely to have lower confidence in their memory and are more likely to accept misinformation.[12][24] Individual personality characteristics, including empathy, absorption and self-monitoring, have also been linked to greater susceptibility.[16] Furthermore, research indicates that people are more susceptible to misinformation when they are more cooperative, dependent on rewards, and self-directed and have lower levels of fear of negative evaluation.[18]
The misinformation effect has been examined in individuals with varying imagery abilities. Participants viewed a filmed event followed by descriptive statements of the events in a traditional three-stage misinformation paradigm. Participants with higher imagery abilities were more susceptible to the misinformation effect than those with lower abilities. The psychologists argued that participants with higher imagery abilities were more likely to form vivid images of the misleading information at encoding or at retrieval, therefore increasing susceptibility.[12][25]
Some evidence suggests that participants, if paired together for discussion, tend to have a homogenizing effect on the memory of one another. In the laboratory, paired participants that discussed a topic containing misinformation tended to display some degree of memory blend, suggesting that the misinformation had diffused among them.[26]
Individuals may not be actively rehearsing the details of a given event after encoding, and psychologists have found that the likelihood of incorporating misinformation increases as the delay between the original event and the post-event information increases.[13] Furthermore, studying the original event for longer periods of time leads to lower susceptibility to the misinformation effect, due to increased rehearsal time.[13] Elizabeth Loftus' discrepancy detection principle argues that people's recollections are more likely to change if they do not immediately detect discrepancies between misinformation and the original event.[16][27] At times people recognize a discrepancy between their memory and what they are being told.[28] People might recollect, "I thought I saw a stop sign, but the new information mentions a yield sign; I guess I must be wrong, it was a yield sign."[28] Although the individual recognizes the information as conflicting with their own memories, they still adopt it as true.[16] If these discrepancies are not immediately detected, they are more likely to be incorporated into memory.[16]
The more reliable the source of the post-event information, the more likely it is that participants will adopt the information into their memory.[13]For example, Dodd and Bradshaw (1980) used slides of a car accident for their original event. They then had misinformation delivered to half of the participants by an unreliable source: a lawyer representing the driver. The remaining participants were presented with misinformation, but given no indication of the source. The misinformation was rejected by those who received information from the unreliable source and adopted by the other group of subjects.[13]
Psychologists have also evaluated whether discussion impacts the misinformation effect. One study examined the effects of discussion in groups on recognition. The experimenters used three different conditions: discussion in groups with a confederate providing misinformation, discussion in groups with no confederate, and a no-discussion condition. They found that participants in the confederate condition adopted the misinformation provided by the confederate. However, there was no difference between the no-confederate and no-discussion conditions, providing evidence that discussion (without misinformation) is neither harmful nor beneficial to memory accuracy.[29]Additionally, research has found that collaborative pairs showed a smaller misinformation effect than individuals, as collaborative recall allowed witnesses to dismiss misinformation generated by an inaccurate narrative.[30]Furthermore, there is some evidence suggesting that witnesses who talk with each other after watching two different videos of a burglary will claim to remember details shown in the video seen by the other witness.[31]
Various inhibited states of mind such as drunkenness and hypnosis can increase misinformation effects.[16] Assefi and Garry (2002) found that participants who believed they had consumed alcohol showed results of the misinformation effect on recall tasks.[32] The same was true of participants under the influence of hypnosis.[33]
Arousal induced after learning reduces source confusion, allowing participants to better retrieve accurate details and reject misinformation. In a study of how to reduce the misinformation effect, participants viewed four short film clips, each followed by a retention test, which for some participants included misinformation. Afterward, participants viewed another film clip that was either arousing or neutral. One week later, the arousal group recognized significantly more details and endorsed significantly fewer misinformation items than the neutral group.[34] Similarly, research also suggests that inducing social stress after presenting misinformation makes individuals less likely to accept misinformation.[35]
Educating participants about the misinformation effect can enable them to resist its influence. However, if warnings are given after the presentation of misinformation, they do not aid participants in discriminating between original and post-event information.[16]
Research published in 2008 showed that placebos enhanced memory performance. Participants were given a placebo "cognitive enhancing drug" called R273. When they participated in a misinformation effect experiment, people who took R273 were more resistant to the effects of misleading post-event information.[36] As a result of taking R273, people used stricter source monitoring and attributed their behavior to the placebo and not to themselves.[36]
Controversial perspectives exist regarding the effects of sleep on the misinformation effect. One school of thought supports the idea that sleep can increase individuals' vulnerability to the misinformation effect. In a study examining this, some evidence was found that misinformation susceptibility increases after a sleeping cycle. In this study, the participants that displayed the least degree of misinformation susceptibility were the ones who had not slept since exposure to the original information, indicating that a cycle of sleep increased susceptibility.[21]Researchers have also found that individuals display a stronger misinformation effect when they have a 12-hour sleep interval in between witnessing an event and learning misinformation than when they have a 12-hour wakefulness interval in between the event and the introduction of misinformation.[37]
In contrast, a different school of thought holds that sleep deprivation leads to greater vulnerability to the misinformation effect. This view holds that sleep deprivation increases individual suggestibility.[38]This theory posits that this increased susceptibility would result in a related increase in the development of false memories.[26][39]
Most obviously, leading questions and narrative accounts can change episodic memories and thereby affect witnesses' responses to questions about the original event. Additionally, witnesses are more likely to be swayed by misinformation when they are suffering from alcohol withdrawal[30][40] or sleep deprivation,[30][41] when interviewers are firm as opposed to friendly,[30][42] and when participants experience repeated questioning about the event.[30][43]
The misinformation effect can have dire consequences for decision making, with harmful personal and public outcomes in a variety of circumstances. For this reason, various researchers have pursued means of countering its effects, and many models have been proposed. As with source misattribution, attempts to uproot misinformation can have lingering unaddressed effects that do not show up in short-term examination. Although various perspectives have been proposed, all suffer from a similar lack of meta-analytic examination.
One of the problems with countering the misinformation effect, linked to the complexity of human memory, is the influence of information, whether legitimate or falsified, that appears to support the false information. The presence of these confirmatory messages can serve to validate the misinformation as presented, making it more difficult to uproot the problem. This is particularly the case in situations where the person has a desire for the information to be legitimate.[44]
A common method of uprooting false concepts is presenting a contrasting, "factual" message. While this would intuitively seem a good means of showing the information to be inaccurate, this type of direct opposition has been linked to an increase in misinformation belief. Some researchers hypothesize that the counter message must have at least as much support as the initial message, if not more, in order to present a fully developed counter-model for consideration. Otherwise, the recipient may not remember what was wrong about the information and fall back on their prior belief model due to lack of support for the new model.[45]
Some studies suggest that the misinformation effect can occur despite exposure to accurate information.[46] This effect has been demonstrated when participants have the ability to access an original, accurate video source at will, and even when the video is cued to the precise point in time where video evidence refuting the misinformation is present.[46] Written and photographic contradictory evidence has also been shown to be similarly ineffective. Ultimately, this demonstrates that exposure to the original source is still not guaranteed to overcome the misinformation effect.[46]
There are a few existing evidence-based models for addressing the misinformation effect. Each of these, however, has its own limitations that impact its effectiveness.
Some evidence suggests that those experiencing the misinformation effect can often tell that they are reporting inaccurate information, but are insufficiently confident in their own recollections to act on this impression.[47] As such, some research suggests that increased self-confidence, such as in the form of self-affirmative messages and positive feedback, can weaken the misinformation effect.[47] However, due to the difficulty of raising self-regard in the moment, these treatment methods are not considered particularly practical for real-time use.[47]
Another direction of study in preventing the misinformation effect is the idea of using a pretest. This theory posits that a test, applied prior to the introduction of misleading information, can help maintain the accuracy of the memories developed after that point.[48] This model, however, has two primary limitations: its effects only seem to hold for one item at a time, and data support the idea that it increases the impact of the misinformation on the subsequent item. Pretesting has also, paradoxically, been linked with a decrease in accurate attributions from the original sample.[48]
Another model with some support is that of the use of questions. This model holds that the use of questions rather than declaratory statements prevents the misinformation effect from developing, even when the same information is presented in both scenarios. In fact, the use of questions in presenting information after the fact was linked with increased correct recall, and further with an increase in perfect recall among participants. The advocates of this view hold that this occurs because the mind incorporates definitive statements into itself, whereas it does not integrate questions as easily.[49]
Correcting misinformation after it has been presented has been shown to be effective at significantly reducing the misinformation effect.[50]Similarly, researchers have also examined whether warning people that they might have been exposed to misinformation after the fact impacts the misinformation effect.[51][16]A meta-analysis of studies researching the effect of warnings after the introduction of misinformation found that warning participants about misinformation was an effective way to reduce—though not eliminate—the misinformation effect.[51]However, the efficacy of post-warnings appears to be significantly lower when using a recall test.[51]Warnings also appear to be less effective when people have been exposed to misinformation more frequently.[16]
Current research on the misinformation effect presents numerous implications for our understanding of human memory overall.
Some reject the notion that misinformation always causes impairment of original memories.[16] Modified tests can be used to examine the issue of long-term memory impairment.[16] In one example of such a test, from 1985, participants were shown a burglar with a hammer.[52] Standard post-event information claimed the weapon was a screwdriver, and participants were likely to choose the screwdriver rather than the hammer as correct. In the modified test condition, post-event information was not limited to one item; instead, participants had the option of the hammer and another tool (a wrench, for example). In this condition, participants generally chose the hammer, showing that there was no memory impairment.[52]
Rich false memories are entire memories of events that never happened, which researchers attempt to plant in participants' memories. Examples of such memories include fabricated stories about participants getting lost in the supermarket or shopping mall as children. Researchers often rely on suggestive interviews and the power of suggestion from family members, known as the "familial informant false narrative procedure".[16] Around 30% of subjects have gone on to produce either partial or complete false memories in these studies.[16] There is a concern that real memories and experiences may be surfacing as a result of prodding and interviews; to deal with this concern, many researchers switched to implausible memory scenarios.[16] Researchers have also been able to induce rich false memories of committing a crime in early adolescence using a false narrative paradigm.[53]
The misinformation effect can be observed in many situations. In particular, research on the misinformation effect has frequently been applied to eyewitness testimony and has been used to evaluate the trustworthiness of eyewitnesses' memory.[7][18][9] After witnessing a crime or accident there may be opportunities for witnesses to interact and share information.[7][9] Late-arriving bystanders or members of the media may ask witnesses to recall the event before law enforcement or legal representatives have the opportunity to interview them.[30] Collaborative recall may lead to a more accurate account of what happened, as opposed to individual responses that may contain more untruths after the fact.[30] However, there have also been instances where multiple eyewitnesses have all remembered information incorrectly.[18] Remembering even small details can be extremely important for eyewitnesses: a jury's perception of a defendant's guilt or innocence could depend on such a detail.[5] If a witness remembers a mustache or a weapon when there was none, an innocent person may be wrongly convicted.[6]
https://en.wikipedia.org/wiki/Misinformation_effect
Online youth radicalization is the process by which a young individual or group of people comes to adopt increasingly extreme political, social, or religious ideals and aspirations that reject or undermine the status quo, or undermine contemporary ideas and expressions of a state in which they may or may not reside.[1] Online youth radicalization can be violent or non-violent.
The phenomenon, often referred to as "incitement to radicalization towards violent extremism" (or "violent radicalization"), has grown in recent years, due to the Internet and social media in particular. In response to the increased attention on online "incitement to extremism and violence", attempts to prevent this phenomenon have created challenges for freedom of expression. These range from indiscriminate blocking, censorship over-reach (affecting both journalists and bloggers), and privacy intrusions, through to the suppression or instrumentalization of media at the expense of independent credibility.[2]
Online radicalization can also involve misogynistic and gender-based ideologies, particularly targeting young men through social media algorithms and influencers who promote harmful views under the guise of self-improvement.[3] After terrorist attacks, political pressure is often put on social media companies to do more to prevent the online radicalization of young people leading to violent extremism.[4] UNESCO calls for "a policy that is constructed on the basis of facts and evidence, and not founded on hunches—or driven by panic and fearmongering."[5]
Cyberspace is used here to denote the Internet as a network of networks, and social media as social networks that may combine various Internet platforms and applications to exchange and publish content online. Three aspects are relevant: the online production of radical (political, social, religious) resources or content, the presence of terrorist or radicalized groups within social networks, and the participation of young people in radical conversations.[2]
Radicalization refers to the processes by which individuals or groups come to adopt beliefs that challenge or reject established political, social, or religious norms.[6]In some cases, these beliefs may be used to justify participation in or support for acts of violence, often framed as necessary or morally justified actions in pursuit of ideological or political goals.[7][8]Definitions of radicalization vary across academic, governmental, and policy contexts; however, most characterize it as a gradual or staged progression.[8]
In the context of online radicalization, the term youth typically refers to individuals in adolescence or early adulthood.[9] The exact age range varies depending on the source. The United Nations refers to youth as individuals between 15 and 24 years old, primarily for statistical purposes.[10] Other sources, including academic research and governmental policies, may extend this age range up to 29 years old to account for ongoing social, cognitive and emotional developmental milestones.[11] Youth radicalization is often considered a distinct category because these developmental factors may increase young people's vulnerability to radical ideologies and recruitment strategies, especially in the online environment.[11]
Algorithmic radicalization is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, leading them to develop radicalized extremist political views. Algorithms record user interactions, from likes and dislikes to the amount of time spent on posts, to generate endless media aimed at keeping users engaged. Through echo chamber channels, the consumer is driven to become more polarized through preferences in media and self-confirmation.[12][13][14][15][16]
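The feedback loop described above can be illustrated with a deliberately simplified simulation. The following Python sketch is a toy model only, not any platform's actual recommender: the single "extremity" attribute per item, the engagement function (assumed to peak at content slightly more extreme than the user's current preference), and the preference-update rule are all invented for illustration. Under those assumptions, ranking purely by predicted engagement gradually drifts the simulated user's feed toward more extreme content.

```python
import random

# Hypothetical catalog: each item carries a single "extremity" score in [0, 1].
CATALOG = [{"id": i, "extremity": i / 99} for i in range(100)]

def predicted_engagement(user_pref, item):
    # Assumed engagement model: engagement peaks for items slightly
    # more extreme (by 0.05) than the user's current inferred preference.
    gap = item["extremity"] - user_pref
    return max(0.0, 1.0 - 4.0 * abs(gap - 0.05))

def recommend(user_pref, k=5):
    # Rank the whole catalog purely by predicted engagement; return the top k.
    ranked = sorted(CATALOG, key=lambda it: predicted_engagement(user_pref, it),
                    reverse=True)
    return ranked[:k]

def simulate(steps=20, learning_rate=0.5):
    user_pref = 0.1  # the simulated user starts near mainstream content
    for step in range(steps):
        feed = recommend(user_pref)
        watched = random.choice(feed)  # the user picks one item from the feed
        # The loop closes here: consumption updates the inferred preference,
        # which shifts the next round of recommendations further out.
        user_pref += learning_rate * (watched["extremity"] - user_pref)
        print(f"step {step:2d}: inferred preference = {user_pref:.2f}")

if __name__ == "__main__":
    simulate()
```

Even though no intent to radicalize is encoded anywhere in this sketch, optimizing each step for engagement alone produces a steady drift in the printed preference values; this is the dynamic researchers describe as recommender-driven polarization, though real systems are vastly more complex.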
The Internet has remained a medium for the spread of narratives. It has often been mistaken for a driver of violent extremism rather than the medium that it is. Unfortunately, social media has been used not only to bring people closer and to share thoughts and opinions, but also to spread false information. Additionally, the application of privacy rules has made it easier to close off niches and advance the targeting of vulnerable individuals. These privacy rules, though welcome, have made the process of analysis for prevention challenging.[19]
Chatrooms can be embedded within most Internet-based media. Reports that have looked into the use of chatrooms by violent extremist groups describe these as the space where at-risk youth without previous exposure would be likely to come across radicalizing religious narratives.[20][21] This falls in line with Sageman's emphasis on the role of chatrooms and forums, based on his distinction between websites as passive sources of news and chatrooms as active sources of interaction.[22] According to Sageman, "networking is facilitated by discussion forums because they develop communication among followers of the same ideas (experiences, ideas, values), reinforce interpersonal relationships and provide information about actions (tactics, objectives, tutorials)". Chatrooms can also include spaces where extremist people share information such as photos, videos, guides, and manuals.[23][2] Discussion forums such as Reddit, 4chan, and 8chan have become focal points of internet meme-based and other forms of radicalization.[24][25][26]
Many extremist groups are ideologically and strategically anti-Facebook, but a strong presence still exists on this platform, either directly or through supporters.[20] Facebook does not seem to be used for direct recruitment or planning, possibly because it has tracking mechanisms and can link users to real places and specific times. Facebook appears to have been used by extremists more often as a decentralized hub for the distribution of information and videos, or as a way to find like-minded supporters and show support, rather than for direct recruitment.[20][21] This may be because young sympathizers can share information and images and create Facebook groups in a decentralized way.[2]
The terrorist perpetrator of the Christchurch mosque shootings live-streamed, on Facebook, a video of the attacks, which resulted in the deaths of 51 people; this was then extensively shared on social media. In the wake of this tragedy, Facebook and Twitter became more active in banning extremists from their platforms. Facebook pages associated with Future Now Australia have been removed from the platform, including their main page, "Stop the Mosques and Save Australia."[27] On March 28, Facebook announced that it had banned white nationalist and white separatist content along with white supremacy.[28]
Micro-blogging sites like Twitter present more advantages for extremist groups because the traceability of the identity and source of tweets is harder to achieve, thus increasing the communication potential for recruiters.[20][29][30] Analyses of Twitter feeds generated by Islamist violent extremist groups show that they are mostly used for engaging with the opposition and the authorities, in what appear to be tweet clashes that mobilize the two sides, and also for provocation.[20] Through Twitter, extremists can easily comment publicly on international events or personalities in several languages, enabling activists to be vocal and timely when mounting campaigns.[20][2]
YouTube has the advantage, for extremists, that the identity of people posting content is difficult to trace, while offering the possibility for users to generate comments and share content.[20] Several researchers have conducted content analyses of YouTube and Facebook extremist discourses and video contents to identify the production features most used, including their modus operandi and intended effects.[31][32] Studies that have focused on the rhetorical strategy of extremist groups show the multifaceted use of online resources by extremist groups; that is, they produce "hypermedia seduction" via the use of visual motifs that are familiar to young people online,[33][34][35][36] and they provide content in several languages, mostly Arabic, English and French, using subtitles or audio dubbing, to increase their capacity to recruit youth across nations.[37] These videos provide rich media messaging that combines nonverbal cues and vivid images of events that can evoke psychological and emotional responses as well as violent reactions.[31] Terrorists capture their attacks on video and disseminate them through the Internet, communicating an image of effectiveness and success. Such videos in turn are used to mobilize and recruit members and sympathizers. Videos also serve as authentication and archive, as they preserve live footage of actual damage and validate terrorist performance acts.[38] In 2018, researchers from the Data & Society think tank identified the YouTube recommendation system as promoting a range of political positions, from mainstream libertarianism and conservatism to overt white nationalism.[39][40]
Video games can be placed in a similar category as social media because they increasingly have their own forums, chatrooms and microblogging tools. Video games, widely used by young people, are under-researched in relation to extremism and violent radicalization. There is mostly anecdotal evidence that ISIS supporters have proposed the modification of some games to spread propaganda (e.g. Grand Theft Auto V), mods that allow players to act as terrorists attacking Westerners (Arma 3), and the hijacking of images and titles to allude to a notion of jihad (e.g. Call of Duty).[2]
Selepack[41] used qualitative textual analysis of hate-based video games found on right-wing religious supremacist groups' websites to explore the extent to which they advocate violence. The results show that most hate groups were portrayed positively, and that the video games promoted extreme violence towards people represented as Black or Jewish. The games were often modified versions of classic video games in which the original enemies were replaced with religious, racial and/or ethnic minorities. Their main purpose is to indoctrinate players with white supremacist ideology and allow those who already hold racist ideologies to practice aggressive scripts toward minorities online, which may later be acted upon offline.[41] Some experimental social psychologists show that cumulative exposure to violent video games can increase hostile expectations and aggressive behavior.[42]
The Internet and social media have numerous advantages for extremist groups using religion as part of a radicalization strategy. The advantages stem from the very nature of Internet and social media channels and the way extremist groups use them. These include communication channels that are not bound to national jurisdictions and are informal, large, cheap, decentralized, and anonymous.[43][44] This allows terrorists to network across borders and to bypass constraints of time and space.[45] Specifically, these channels provide networks of recruiters, working horizontally in all the countries they target due to the transborder nature of the Internet.[2]
Weimann describes extremist groups' use of the Internet and social media in eight process strategies: "psychological warfare, publicity and propaganda, data mining, fundraising, recruitment and mobilization, networking, information sharing and planning and coordination".[46][47] Conway identifies five core terrorist uses of the Internet and social media: "information provision, financing, networking, recruitment and information gathering".[47] The ones most relevant to social media and the radicalization of young people are information provision, such as profiles of leaders, manifestos, publicity and propaganda, and recruitment.[48] Some studies show that social media enables people to isolate themselves in an ideological niche by seeking and consuming only information consistent with their views (confirmation bias),[49][50] while simultaneously self-identifying with geographically distant international groups, which creates a sense of community that transcends geographic borders. This ability to communicate can promote membership and identity quests faster and more efficiently than in the "real" social world.[2]
While recruitment is not an instantaneous process, it is seen in the literature as a phase of radicalization, taking the process to a new level of identification and possible action. Indoctrination is easier post-recruitment and often occurs in specific virtual spaces where the extremist rhetoric is characterized by a clear distinction between "them" (described negatively) and "us" (described positively), and where violent actions are legitimized according to the principle of "no other option available".[51][52] These advantages of the Internet and social media open up prospects for extremist groups by facilitating what was previously referred to as block recruitment[53] and by substituting individual decision-making for group decision-making.[54][2]
Bouzar, Caupenne and Sulayman (2014) present the results of interviews with 160 French families with radicalized (though not violent) children aged mainly between 15 and 21. The vast majority of the youth interviewed claimed to have been radicalized through the Internet. This held true regardless of their family characteristics and dynamics. The vast majority of the families (80%) did not follow any specific religious beliefs or practices and only 16% belonged to the working class.[55]
Wojcieszak[56] analysed cross-sectional and textual data obtained from respondents in neo-Nazi online discussion forums. The author found that "extremism increases with increased online participation, probably as a result of the informational and normative influences within the online groups". In addition, exposure to different parties/views offline that are dissimilar to the extremist group's values has in some instances reinforced radical beliefs online.[56]
Many authors hypothesize potential causation by associating online radicalization with external factors such as the search for identity and meaning, the growing inequalities in European and other societies, unemployment and fewer opportunities for development (especially for minority youth), and the exclusion, discrimination and inequality that feature heavily in extremist discourses.[57][58][59][2]
Youth can come into contact with online content and individuals who radicalize them to adopt extremist views regarding women, masculinity, and gender roles. Research shows that misogynistic content online mostly targets young men (ages 13-25) who report feelings of social isolation or rejection.[3] This content often appears as inspirational and aspirational self-improvement material containing both covert and overt misogynistic views.[3] Such content may promote harmful gender ideologies, including the stigmatization of female sexuality (commonly referred to as "slut shaming") and unrealistic masculine body ideals, such as the "gym bro" aesthetic, and it exploits the very vulnerability and social rejection that attract young men to this content in the first place.[3]
This content is partially pushed by algorithms used on social media platforms that present users with content based on what they have interacted with previously.[60] Research on algorithms and online misogynistic content has found that young users, regardless of whether or not they actively engage with such content, are often exposed to it after spending some time on the platform.[60][61]
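The dynamic described above can be illustrated with a minimal, purely hypothetical sketch of an engagement-driven feed ranker. Real platform recommenders are vastly more complex and proprietary; every name and data structure below is invented for illustration only. The sketch shows the feedback loop at its simplest: posts on topics the user has engaged with before are ranked higher, so past interaction biases future exposure.

# Hypothetical sketch of an engagement-driven feed ranker (Python).
# Not any platform's actual algorithm; illustrative only.
from collections import Counter

def rank_feed(candidate_posts, interaction_history):
    """Score candidate posts by how often the user previously engaged
    with each post's topic, so past clicks bias future exposure."""
    topic_weights = Counter(post["topic"] for post in interaction_history)
    # Posts on frequently-engaged topics float to the top of the feed;
    # this is the loop that can progressively narrow what a user sees.
    return sorted(candidate_posts,
                  key=lambda post: topic_weights[post["topic"]],
                  reverse=True)

history = [{"topic": "fitness"}, {"topic": "fitness"}, {"topic": "news"}]
feed = [{"id": 1, "topic": "news"}, {"id": 2, "topic": "fitness"}]
print(rank_feed(feed, history))  # the fitness post is ranked first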
Research has investigated how younger users encounter and engage with misogynistic content promoted by influencers associated with the manosphere, such as Andrew Tate.[62] Tate's content has been criticised for portraying violence and dominance as essential elements of masculinity, and for suggesting that men who do not conform to this model are weak.[60] In his commentary on women, Tate describes women as subordinate to men and has made comments that objectify women.[60] Much of Tate's content is framed within the context of self-improvement or aspirational content, which critics argue serves to obscure the underlying misogynistic ideology.[62]
The analysis of the profiles of researchers and publications on violent radicalization from the Arab world reveals the prominence of specialists on Islamist movements. They are, most often, humanities and social science researchers, and some are specialists in media and public opinion, international relations, or security. Another specificity of research on violent radicalization in the Arabic-speaking region is the involvement of religious researchers in this field. This involvement is part of a state strategy to counter the faith-based discourse advocated by violent radical groups. In this logic, the terms radicalization or jihadism are replaced by the term terrorist when referring to these groups. In other regions, experts use terms such as jihadist Salafism, jihadism or violent radicalization. There is a clear tendency among most Arabic-speaking researchers to avoid the use of the word Islam and its semantic field to denote violent radical groups. This is also why researchers from the region prefer to use the Arabic acronym Daesh or the State Organization instead of the 'Islamic State'. Most research published from the Arab world does not focus on the relation between violent radicalization and the Internet or social media, nor does it evaluate the effect of prevention or intervention cyberinitiatives.[2]
Arab youth are major consumers of social media networks, especially Facebook, which is one of the top ten sites most used by Arab Internet users, a tendency that quickly found its translation into the Arab political realm.[63] According to a study by the Mohammed Bin Rashid School of Government in the United Arab Emirates, the number of Facebook users in 22 Arab countries increased from 54.5 million in 2013 to 81.3 million in 2014, the majority being young people.[2] The literature in the region reveals the role played by social networks, especially Facebook and Twitter, as platforms for collective expression for Arab youth on current issues, conflicts and wars (e.g. the situation in Gaza).[64] In Iraq, for example, young Internet users and bloggers launched several campaigns on Facebook and Twitter at the beginning of military operations to free the major cities occupied by ISIS (Fallujah and Mosul). In Morocco, other initiatives with the same objective were launched, such as the one by Hamzah al-Zabadi on Facebook (#مغاربة_ضد_داعش; Moroccans against Daesh), which consisted of sharing all kinds of content (images, texts, etc.) to contradict and challenge ISIS's narratives. The involvement of civil society actors on the web in the fight against terrorism and violent radicalization in the Arab region remains modest for many reasons, including the lack of media policies dedicated to this struggle.[2]
Researchers in Asia have developed a complex understanding of radicalization as being deeply connected to psychosocial and economic grievances such as poverty and unemployment,[65][66] marginalization through illiteracy and lack of education,[66] admiration for charismatic leaders, pursuit of social acceptability, and psychological trauma. These factors are considered by authors to facilitate online radicalization-oriented recruitment, especially among young people, who are more vulnerable and spend more time online.[2]
A 2016 report by "We Are Social" revealed that East Asia, Southeast Asia, and North America were respectively the first, second, and third largest social media markets worldwide. According to the same report, Facebook and Facebook Messenger are the predominant social and communications tools, followed by Twitter, Line and Skype. China is the notable exception, as Facebook Messenger is outpaced by far by Chinese social media tools. China presents a very different profile from most countries in its mainstream social media and networks. American platforms such as Google, Yahoo!, Facebook, Twitter and YouTube have very little market penetration due to state restrictions and the strong monopoly of homegrown search engines and Internet platforms in the Chinese language.[2]
There is rising interest among Chinese researchers in examining the relationship between social media and violent radicalization.[67] Research into violent radicalization and terrorism in China is mainly focused on radicalization in Xinjiang. This could be linked to the fact that most of the recent terrorist attacks in China were perpetrated not by local residents but by outside violent extremist organizations that seek to separate the Xinjiang area from China.[68][69][70] Terrorist organizations spread their messages via TV, radio and the Internet.[71] Though there is no empirical evidence linking youth radicalization to online social media, the anonymity and transborder capacity of such media is seen as a "support for organized terrorist propaganda".[72][73][74] The Chinese government has been responding to terrorist attacks by taking down sites and by blocking and filtering content. In turn, the Chinese government also uses social media for messaging against terrorism.[75]
An estimated 76 million Indonesians connect regularly on Facebook, establishing the nation as the fourth largest user base in the world, after India, the United States and Brazil. Indonesia is also the fifth largest user of Twitter, after the United States, Brazil, Japan and the United Kingdom. The Institute for Policy Analysis of Conflict (IPAC) examines how Indonesian extremists use Facebook, Twitter and various mobile phone applications such as WhatsApp and Telegram. Social media use by extremists in Indonesia is increasing. They use social media, such as Facebook and Twitter, to communicate with young people, to train and to fundraise online. Recruitment is done through online games, propaganda videos on YouTube and calls to purchase weapons. The proliferation of ISIS propaganda via individual Twitter accounts has raised concerns about the possibility of "lone actor" attacks. That being said, the report points out that such attacks are extremely rare in Indonesia.[2]
There is little contemporary research on online radicalization in Sub-Saharan Africa. However, the region hosts a powerful extremist group: Boko Haram, formally known since 2002 as Jama'atu Ahlis Sunna Lidda'awati wal-Jihad ("Group of the People of Sunnah for Preaching and Jihad"), which has pledged allegiance to Daesh. The network is less resourced and financed than Daesh, but it seems to have entered a new era of communication through social media networks, especially since its allegiance to Daesh.[76] To spread its principles, this terrorist group uses the Internet and adapts Daesh communication strategies to the sub-Saharan African context, spreading its propaganda (also in French and English) with more sophisticated videos. By its presence on the most used digital networks (Twitter, Instagram),[77] Boko Haram breaks with traditional forms of communication in the region, such as propaganda videos sent to agencies on flash drives or CD-ROM.[78] Video content analyses have also shown a major shift from long monologues by the leader Abubakar Shekau, which had poor editing and translation, to messages and videos that have increased the group's attractiveness among sub-Saharan youth. Today, Boko Haram owns a real communications agency called "al-Urwa al-Wuthqa" (literally "the most trustworthy", "the most reliable way"). Moreover, the group multiplies its activities on Twitter, especially via smartphones, as well as through YouTube news channels. Most tweets and comments of the group's supporters denounce the Nigerian government and call for support for the Boko Haram movement. The tweets are written in Arabic at first and then translated and passed on into English and French, which reflects the group's desire to place itself in the context of what it sees as global jihad. In a study conducted in 2015, researchers showed that Boko Haram-related tweets also include rejections of the movement by non-members of the organisation.[79][2]
In Kenya, and by extension the Horn of Africa, online radicalization and recruitment processes depend on narrative formation and dissemination. However, other than one documented case of purely online radicalization and recruitment,[80] evidence shows that the process is cyclic, involving an online–offline–online progression that advances depending on the level of socialization and the resonance factors shared with vulnerable populations. A recent study from Scofield Associates shows that narrative formation depends on three major attributes: having a believable story, actionable plans for those who encounter it, and the need for a religious cover. The third characteristic supports the persuasion process and adds to the global whole. The persuasion process plays out especially well with an online platform or audience.[19]
The Pew Research Center reports that 96% of U.S. teenagers (ages 13-17) use the Internet daily, and 46% say that they are online almost constantly.[81] This high level of online engagement increases the likelihood of exposure to a wide range of online content, including ideological or extremist material.
The U.S. Department of Homeland Security has noted that extremist groups are increasingly using social media and networking platforms to disseminate ideological content and to recruit new members.[82] In the United States, far-right groups such as the Proud Boys have used mainstream platforms for both propaganda and recruitment purposes.[83] These groups often evade content moderation policies by using encoded language, euphemisms and symbolism to obscure their messaging. In recent years, some of these communities have migrated to platforms with less stringent content moderation policies, such as X (formerly Twitter) and Gab.[84]
Social media platforms act as a medium through which extremist groups may target and radicalize younger users, usually through the use of memes and short-form videos that are easily shareable and culturally relevant.[85] Some researchers have noted that extremist content is often packaged in ironic and humorous ways to appeal to younger audiences.[86] In several mass shootings, such as those in Poway, Christchurch, El Paso, and Buffalo, investigations revealed that the perpetrators had been exposed to or engaged with online extremist content prior to committing acts of violence.[86]
The U.S. Department of Homeland Security's Center for Prevention Programs and Partnerships (CP3) works with local organizations and agencies to prevent radicalization before it escalates into real-world violence.[87] CP3 facilitates the Targeted Violence and Terrorism Prevention (TVTP) Grant Program and provides funding to governments, nonprofit organizations, and educational institutions to establish or enhance initiatives to prevent targeted violence and terrorism.[88] CP3 also works with faith-based organizations to improve the safety of their facilities.[88] However, federal layoffs by the Trump Administration and the Department of Government Efficiency (DOGE) in March 2025 led to a 20% reduction in CP3 staff.[89]
Van Eerten, Doosje, Konijn, De Graaf, and De Goede suggest that counter- or alternative narratives could be a promising prevention strategy.[90] Some researchers argue that a strong alternative narrative to violent jihadist groups is to convey the message that they mostly harm Muslims.[91][92][93][94] During the last decade, the United States government has set up two online programs against radicalization designed to counter anti-American propaganda and misinformation from al-Qaeda or the Islamic State. These programs seek to win the "war of ideas" by countering self-styled jihadist rhetoric.[2]
Private sector counter-initiatives include the YouTube Creators for Change program, with young "ambassadors" mandated to "drive greater awareness and foster productive dialogue around social issues through content creation and speaking engagements",[95] and the "redirectmethod.org" pilot initiative, which uses search queries to direct vulnerable young people to online videos of citizen testimonies, on-the-ground reports, and religious debates that debunk narratives used for violent recruitment. The initiative avoids "government-produced content and newly or custom created material, using only existing and compelling YouTube content".[96]
Several governments are opting to invest in primary prevention through education of the public at large, and of the young public in particular, via various "inoculatory" tactics that can be grouped under the broad label of Media and Information Literacy (MIL). Based on knowledge about the use of MIL in other domains, this initiative can be seen, inter alia, as a long-term comprehensive preventive strategy for reducing the appeal of violent radicalization.[97][98][2]
MIL has a long tradition of dealing with harmful content and violent representations, including propaganda.[99] In its early history, MIL was mostly put in place to fight misinformation (particularly in advertising) by developing critical skills about the media. By the 1980s, MIL had also introduced cultural and creative skills for using the media in an empowering way, with active pedagogies.[100][101] Since the year 2000, MIL has enlarged the media definition to incorporate the Internet and social media, adding issues related to ethical uses of online media to the traditional debates over harmful content and harmful behavior, and aligning them more closely with perspectives that consider media users' gratifications.[2]
In October 2015, UNESCO's Executive Board adopted a decision on UNESCO's role in promoting education as a tool to prevent violent extremism.[103]
This article incorporates text from a free content work. Licensed under CC BY SA 3.0 IGO (license statement/permission). Text taken from Alava, Frau-Meigs & Hassan 2017.
|
https://en.wikipedia.org/wiki/Online_youth_radicalization
|
Radical trust is the confidence that any structured organization, such as a government, library, business, religion,[1] or museum, has in collaboration and empowerment within online communities. Specifically, it pertains to the use of blogs, wikis and online social networking platforms by organizations to cultivate relationships with an online community that can then provide feedback and direction for the organization's interests. The organization 'trusts' and uses that input in its management.
One of the first appearances of the notion of radical trust is in an infographic outlining the base principles of Web 2.0 in Tim O'Reilly's weblog post "What is Web 2.0". Radical trust is listed there as the guiding example of trusting the validity of consumer-generated media.[2]
This concept is considered to be an underlying assumption of Library 2.0. The adoption of radical trust by a library would require its management to let go of some of its control over the library and to build an organization without an end result in mind. The direction a library would take would be based on input provided by people through online communities. These changes in the organization may be merely anecdotal in nature, making this method of organization management dramatically distinct from data-based or evidence-based management.[3]
In marketing, Collin Douma further describes the notion of radical trust as a key mindset required for marketers and advertisers to enter the social media marketing space. Conventional marketing dictates and maintains control of messages to cause the greatest persuasion in consumer decisions, but Douma argued that in the social media space, brands would need to cede that control in order to build brand loyalty.[4][5]
|
https://en.wikipedia.org/wiki/Radical_trust
|
Selective exposure is a theory within the practice of psychology, often used in media and communication research, that historically refers to individuals' tendency to favor information which reinforces their pre-existing views while avoiding contradictory information. Selective exposure has also been known and defined as "congeniality bias" or "confirmation bias" in various texts throughout the years.[1]
According to the historical use of the term, people tend to select specific aspects of exposed information which they incorporate into their mindset. These selections are made based on their perspectives, beliefs, attitudes, and decisions.[2] People can mentally dissect the information they are exposed to and select favorable evidence while ignoring the unfavorable. The foundation of this theory is rooted in cognitive dissonance theory (Festinger 1957),[3] which asserts that when individuals are confronted with contrasting ideas, certain mental defense mechanisms are activated to produce harmony between new ideas and pre-existing beliefs, resulting in cognitive equilibrium. Cognitive equilibrium, defined as a state of balance between a person's mental representation of the world and his or her environment, is crucial to understanding selective exposure theory. According to Jean Piaget, when a mismatch occurs, people find it to be "inherently dissatisfying".[4]
Selective exposure relies on the assumption that one will continue to seek out information on an issue even after an individual has taken a stance on it. The position that a person has taken will be colored by various factors of that issue that are reinforced during the decision-making process. According to Stroud (2008), theoretically, selective exposure occurs when people's beliefs guide their media selections.[5]
Selective exposure has been displayed in various contexts, such as self-serving situations and situations in which people hold prejudices regarding outgroups, particular opinions, and personal and group-related issues.[6] Perceived usefulness of information, perceived norm of fairness, and curiosity of valuable information are three factors that can counteract selective exposure.
Selective exposure can often affect the decisions people make, as individuals or as groups, because they may be unwilling to change their views and beliefs either collectively or on their own, despite conflicting and reliable information. An example of the effects of selective exposure is the series of events leading up to the Bay of Pigs Invasion in 1961. President John F. Kennedy was given the go-ahead by his advisers to authorize the invasion of Cuba by poorly trained expatriates despite overwhelming evidence that it was a foolish and ill-conceived tactical maneuver. The advisers were so eager to please the President that they confirmed their cognitive bias for the invasion rather than challenging the faulty plan.[7] Changing beliefs about one's self, other people, and the world are three variables explaining why people fear new information.[8] A variety of studies have shown that selective exposure effects can occur in the context of both individual and group decision making.[9] Numerous situational variables have been identified that increase the tendency toward selective exposure.[10] Social psychology, specifically, includes research on a variety of situational factors and related psychological processes that eventually persuade a person to make a quality decision. Additionally, from a psychological perspective, the effects of selective exposure can stem from both motivational and cognitive accounts.
According to a research study by Fischer, Schulz-Hardt, et al. (2008), the quantity of decision-relevant information that participants were exposed to had a significant effect on their levels of selective exposure. A group given only two pieces of decision-relevant information experienced lower levels of selective exposure than a group that had ten pieces of information to evaluate. This research brought more attention to the cognitive processes of individuals when they are presented with a very small amount of decision-consistent and decision-inconsistent information. The study showed that in such situations, an individual becomes more doubtful of their initial decision due to the unavailability of resources: they begin to think that there is not enough data or evidence in the particular field in which they are asked to make a decision. Because of this, the subject becomes more critical of their initial thought process and focuses on both decision-consistent and decision-inconsistent sources, thus decreasing their level of selective exposure. For the group that had plentiful information, this abundance made them confident in their initial decision because they felt comfort from the fact that their decision topic was well supported by a large number of resources.[11] Therefore, the availability of decision-relevant and irrelevant information surrounding individuals can influence the level of selective exposure experienced during the process of decision-making.
Selective exposure is prevalent within individuals and groups of people, and can lead either to reject new ideas or information that is not commensurate with the original ideal. In Jonas et al. (2001), empirical studies were done on four different experiments investigating individuals' and groups' decision making. This article suggests that confirmation bias is prevalent in decision making. Those who find new information often draw their attention towards areas where they hold a personal attachment. Thus, people are driven toward pieces of information that are coherent with their own expectations or beliefs, as a result of selective exposure occurring in action. Throughout the four experiments, generalization was considered valid and confirmation bias was always present when subjects sought new information and made decisions.[9]
Fischer and Greitemeyer (2010) explored individuals' decision making in terms of selective exposure to confirmatory information.[12] Selective exposure posits that individuals make their decisions based on information that is consistent with their decision rather than information that is inconsistent. Research has suggested that "confirmatory information search" was responsible for the 2008 bankruptcy of the Lehman Brothers investment bank, which then triggered the 2008 financial crisis. In the zeal for profit and economic gain, politicians, investors, and financial advisors ignored the mathematical evidence that foretold the housing market crash in favor of flimsy justifications for upholding the status quo.[12] Researchers explain that subjects have the tendency to seek and select information using an integrative model. There are two primary motivations for selective exposure: accuracy motivation and defense motivation. Accuracy motivation explains that an individual is motivated to be accurate in their decision making, and defense motivation explains that one seeks confirmatory information to support their beliefs and justify their decisions. Accuracy motivation is not always beneficial within the context of selective exposure and can instead be counterintuitive, increasing the amount of selective exposure, while defense motivation can lead to reduced levels of selective exposure.[12]
Selective exposure avoids information inconsistent with one's beliefs and attitudes. For example, former Vice President Dick Cheney would only enter a hotel room after the television had been turned on and tuned to a conservative television channel.[1] When analyzing a person's decision-making skills, his or her unique process of gathering relevant information is not the only factor taken into account. Fischer et al. (2010) found it important to also consider the information source itself, that is, the person who provides the information.[10] Selective exposure research generally neglects the influence of indirect decision-related attributes, such as physical appearance. In Fischer et al. (2010), two studies hypothesized that physically attractive information sources led decision makers to be more selective in searching and reviewing decision-relevant information. Researchers explored the impact of social information and its level of physical attractiveness. The data were then analyzed and used to support the idea that selective exposure existed for those who needed to make a decision.[10] Therefore, the more attractive an information source was, the more positive and detailed the subject was in making the decision. Physical attractiveness affects an individual's decision because the perception of quality improves. Physically attractive information sources increased the quality of consistent information needed to make decisions and further increased selective exposure to decision-relevant information, supporting the researchers' hypothesis.[12] Both studies concluded that attractiveness drives a different selection and evaluation of decision-consistent information. Decision makers allow factors such as physical attractiveness to affect everyday decisions due to the workings of selective exposure.
In another study, selective exposure was defined by the amount of individual confidence. Individuals can control the amount of selective exposure depending on whether they have low or high self-esteem. Individuals who maintain higher confidence levels reduce the amount of selective exposure.[13] Albarracín and Mitchell (2004) hypothesized that those who displayed higher confidence levels were more willing to seek out information both consistent and inconsistent with their views. The phrase "decision-consistent information" describes the tendency to actively seek decision-relevant information. Selective exposure occurs when individuals search for information and show systematic preferences toward ideas that are consistent, rather than inconsistent, with their beliefs.[10] On the contrary, those who exhibited low levels of confidence were more inclined to examine information that did not agree with their views. The researchers found that in three out of five studies participants showed more confidence and scored higher on the Defensive Confidence Scale,[13] which serves as evidence that their hypothesis was correct.
Bozo et al. (2009) investigated the anxiety of fearing death and compared it across age groups in relation to health-promoting behaviors. Researchers analyzed the data using terror management theory and found that age had no direct effect on specific behaviors. The researchers expected that a fear of death would yield health-promoting behaviors in young adults. When individuals are reminded of their own death, it causes stress and anxiety, but eventually leads to positive changes in their health behaviors. The conclusions showed that older adults were consistently better at promoting and practicing good health behaviors, without thinking about death, compared to young adults.[14] Young adults were less motivated to change and practice health-promoting behaviors because they used selective exposure to confirm their prior beliefs. Selective exposure thus creates barriers between behaviors at different ages, but there is no specific age at which people change their behaviors.
Though physical appearance will impact one's personal decision regarding an idea presented, a study conducted by Van Dillen, Papies, and Hofmann (2013) suggests a way to decrease the influence of personal attributes and selective exposure on decision-making. The results from this study showed that people do pay more attention to physically attractive or tempting stimuli; however, this phenomenon can be decreased by increasing the "cognitive load". In this study, increasing cognitive activity led to a decreased impact of physical appearance and selective exposure on the individual's impression of the idea presented. This is explained by acknowledging that we are instinctively drawn to certain physical attributes, but if the resources required for this attraction are otherwise engaged at the time, then we might not notice these attributes to an equal extent. For example, if a person is simultaneously engaging in a mentally challenging activity during the time of exposure, then it is likely that less attention will be paid to appearance, which leads to a decreased impact of selective exposure on decision-making.[15]
Leon Festinger is widely considered the father of modern social psychology, as important a figure to that field as Freud was to clinical psychology and Piaget to developmental psychology.[16] He was considered to be one of the most significant social psychologists of the 20th century. His work demonstrated that it is possible to use the scientific method to investigate complex and significant social phenomena without reducing them to the mechanistic connections between stimulus and response that were the basis of behaviorism.[16] Festinger proposed the groundbreaking theory of cognitive dissonance that has become the foundation of selective exposure theory today, despite the fact that Festinger was considered an "avant-garde" psychologist when he first proposed it in 1957.[17] In an ironic twist, Festinger realized that he himself was a victim of the effects of selective exposure: he was a heavy smoker his entire life, and when he was diagnosed with terminal cancer in 1989, he was said to have joked, "Make sure that everyone knows that it wasn't lung cancer!"[16] Cognitive dissonance theory explains that when a person either consciously or unconsciously realizes conflicting attitudes, thoughts, or beliefs, they experience mental discomfort. Because of this, an individual will avoid such conflicting information in the future, since it produces this discomfort, and will gravitate towards messages sympathetic to their own previously held conceptions.[18] Decision makers are unable to evaluate information quality independently on their own (Fischer, Jonas, Dieter & Kastenmüller, 2008).[19] When there is a conflict between pre-existing views and information encountered, individuals experience an unpleasant and self-threatening state of aversive arousal which motivates them to reduce it through selective exposure. They begin to prefer information that supports their original decision and neglect conflicting information. Individuals then exhibit confirmatory information search to defend their positions and reach the goal of dissonance reduction.[20] Cognitive dissonance theory insists that dissonance is a psychological state of tension that people are motivated to reduce (Festinger 1957). Dissonance causes feelings of unhappiness, discomfort, or distress. Festinger (1957, p. 13) asserted the following: "These two elements are in a dissonant relation if, considering these two alone, the obverse of one element would follow from the other." To reduce dissonance, people add consonant cognitions or change evaluations for one or both conditions in order to make them more consistent mentally.[21] Such experience of psychological discomfort was found to drive individuals to avoid counterattitudinal information as a dissonance-reduction strategy.[3]
In Festinger's theory, there are two basic hypotheses:
1) The existence of dissonance, being psychologically uncomfortable, will motivate the person to try to reduce the dissonance and achieve consonance.
2) When dissonance is present, in addition to trying to reduce it, the person will actively avoid situations and information which would likely increase the dissonance (Festinger 1957, p. 3).
The theory of cognitive dissonance was developed in the mid-1950s to explain why people of strong convictions are so resistant to changing their beliefs even in the face of undeniable contradictory evidence. Dissonance occurs when people feel an attachment to and responsibility for a decision, position or behavior; it increases the motivation to justify their positions through selective exposure to confirmatory information (Fischer, 2011). Fischer suggested that people have an inner need to ensure that their beliefs and behaviors are consistent. In an experiment that employed commitment manipulations, commitment impacted perceived decision certainty. Participants were free to choose attitude-consistent and attitude-inconsistent information to write an essay; those who wrote an attitude-consistent essay showed higher levels of confirmatory information search (Fischer, 2011).[22] The level and magnitude of dissonance also play a role. Selective exposure to consistent information is likely under certain levels of dissonance. At high levels, a person is expected to seek out information that increases dissonance, because the best strategy to reduce dissonance would then be to alter one's attitude or decision (Smith et al., 2008).[23]
Subsequent research on selective exposure within dissonance theory produced weak empirical support, until the dissonance theory was revised and new methods, more conducive to measuring selective exposure, were implemented.[24] To date, scholars argue that empirical results supporting the selective exposure hypothesis remain mixed. This is possibly due to problems with the methods of the experimental studies conducted.[25] Another possible reason for the mixed results may be the failure to simulate an authentic media environment in the experiments.[26]
According to Festinger, the motivation to seek or avoid information depends on the magnitude of dissonance experienced (Smith et al., 2008).[23] It is observed that there is a tendency for people to seek new information or select information that supports their beliefs in order to reduce dissonance.
There exist three possibilities which will affect the extent of dissonance (Festinger 1957, pp. 127–131):
When little or no dissonance exists, there is little or no motivation to seek new information. For example, when there is an absence of dissonance, the lack of motivation to attend or avoid a lecture on 'The Advantages of Automobiles with Very High Horsepower Engines' will be independent of whether the car a new owner has recently purchased has a high or low horsepower engine. However, it is important to note the difference between a situation when there is no dissonance and when the information has no relevance to the present or future behavior. For the latter, accidental exposure, which the new car owner does not avoid, will not introduce any dissonance; while for the former individual, who also does not avoid information, dissonance may be accidentally introduced.
The existence of dissonance and consequent pressure to reduce it will lead to an active search of information, which will then lead people to avoid information that will increase dissonance. However, when faced with a potential source of information, there will be an ambiguous cognition to which a subject will react in terms of individual expectations about it. If the subject expects the cognition to increase dissonance, they will avoid it. In the event that one's expectations are proven wrong, the attempt at dissonance reduction may result in increasing it instead. It may in turn lead to a situation of active avoidance.
If two cognitive elements exist in a dissonant relationship, the magnitude of dissonance matches the resistance to change. If the dissonance becomes greater than the resistance to change, then the least resistant elements of cognition will be changed, reducing dissonance. When dissonance is close to the maximum limit, one may actively seek out and expose oneself to dissonance-increasing information. If an individual can increase dissonance to the point where it is greater than the resistance to change, he will change the cognitive elements involved, reducing or even eliminating dissonance. Once dissonance is increased sufficiently, an individual may bring himself to change, hence eliminating all dissonance (Festinger 1957, pp. 127–131).
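The "magnitude of dissonance" invoked above is often given a simple formalization in social psychology textbooks. The following is a hedged sketch of that common formalization, not Festinger's own notation and not drawn from the sources cited in this article: magnitude is expressed as the importance-weighted proportion of dissonant cognitions among all relevant cognitions.

% Common textbook formalization (an assumption of this sketch,
% not Festinger's exact notation):
\[
  \text{Magnitude of dissonance} \;=\;
  \frac{\sum_i w_i D_i}{\sum_i w_i D_i + \sum_j w_j C_j}
\]
% D_i: cognitions dissonant with a focal cognition,
% C_j: consonant cognitions, w: importance weights.

On this reading, adding consonant cognitions grows the denominator and lowers the ratio, while changing or removing dissonant cognitions shrinks the numerator, which matches the reduction strategies described in the surrounding paragraphs.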
The reduction in cognitive dissonance following a decision can be achieved by selectively looking for decision-consonant information and avoiding contradictory information. The objective is to reduce the discrepancy between the cognitions, but the specification of which strategy will be chosen is not explicitly addressed by dissonance theory; it will depend on the quantity and quality of the information available inside and outside the cognitive system.[24]
In the early 1960s, Columbia University researcher Joseph T. Klapper asserted in his book The Effects of Mass Communication that audiences were not passive targets of political and commercial propaganda from mass media, but that mass media reinforces previously held convictions. Throughout the book, he argued that the media has only a small amount of power to influence people and, most of the time, merely reinforces our preexisting attitudes and beliefs. He argued that media effects in relaying or spreading new public messages or ideas were minimal because there is a wide variety of ways in which individuals filter such content. Due to this tendency, Klapper argued that media content must be able to ignite some type of cognitive activity in an individual in order to communicate its message.[27] Prior to Klapper's research, the prevailing opinion was that mass media had a substantial power to sway individual opinion and that audiences were passive consumers of prevailing media propaganda. However, by the time of the release of The Effects of Mass Communication, many studies had led to the conclusion that many specifically targeted messages were completely ineffective. Klapper's research showed that individuals gravitated towards media messages that bolstered previously held convictions set by peer groups, societal influences, and family structures, and that the accession of these messages over time did not change when individuals were presented with more recent media influence. Klapper noted from his review of research in the social sciences that, given the abundance of content within the mass media, audiences were selective about the types of programming they consumed: adults would patronize media appropriate to their demographics, and children would eschew media that bored them. So individuals would either accept or reject a mass media message based upon internal filters innate to that person.[27]
The following are Klapper's five mediating factors and conditions to affect people:[28]
Three basic concepts underlie these factors: selective exposure, selective perception, and selective retention.
Groups and group norms work as mediators. For example, one can be strongly disinclined to switch to the Democratic Party if one's family has voted Republican for a long time. In this case, the person's predisposition toward the political party is already set, so they neither perceive information about the Democratic Party nor change their voting behavior because of mass communication. Klapper's third assumption is the inter-personal dissemination of mass communication. If someone has already been exposed through close friends, which creates a predisposition toward something, it will lead to an increase in exposure to mass communication and eventually reinforce the existing opinion. An opinion leader is also a crucial factor in forming one's predisposition and can lead someone to be exposed to mass communication. The nature of commercial mass media also leads people to select certain types of media content.
This new model combines the motivational and cognitive processes of selective exposure. In the past, selective exposure had been studied from a motivational standpoint: people felt motivated to decrease the level of dissonance they experienced when encountering inconsistent information, and to defend their decisions and positions, which they achieved by exposing themselves to consistent information only. The cognitive economy model not only takes these motivational aspects into account, but also focuses on the cognitive processes of each individual. For instance, this model proposes that people cannot evaluate the quality of inconsistent information objectively and fairly because they tend to store more of the consistent information and use it as their reference point; inconsistent information is thus observed with a more critical eye than consistent information. According to this model, the level of selective exposure experienced during decision-making also depends on how much cognitive energy people are willing to invest. Just as people tend to be careful with their finances, they are hesitant to expend cognitive energy, the time and effort spent evaluating all the evidence for a decision, and tend to conserve it. Thus, this model suggests that selective exposure does not happen in separate stages but is a combined process of the individual's motivations and their management of cognitive energy.[11]
Recent studies have shown relevant empirical evidence for the pervasive influence of selective exposure on the population at large through mass media. Researchers have found that individual media consumers will seek out programs to suit their individual emotional and cognitive needs. Individuals will seek out palliative forms of media during times of economic crisis to fulfill a "strong surveillance need", to decrease chronic dissatisfaction with life circumstances, and to fulfill needs for companionship.[29] Consumers tend to select media content that exposes and confirms their own ideas while avoiding information that argues against their opinion. A study conducted in 2012 showed that this type of selective exposure affects pornography consumption as well: individuals with low levels of life satisfaction are more likely to have casual sex after consuming pornography that is congruent with their attitudes, while disregarding content that challenges their inherently permissive 'no strings attached' attitudes.[30]
Music selection is also affected by selective exposure. A 2014 study conducted by Christa L. Taylor and Ronald S. Friedman at the SUNY University at Albany found that mood congruence was affected by self-regulation of music mood choices. Subjects in the study chose happy music when feeling angry or neutral but listened to sad music when they themselves were sad. The choice of sad music given a sad mood was due less to mood-mirroring than to subjects' aversion to listening to happy music that was cognitively dissonant with their mood.[31]
Politics are more likely to inspire selective exposure among consumers than single exposure decisions. For example, in their 2009 meta-analysis of selective exposure theory, Hart et al. reported that "A 2004 survey by The Pew Research Center for the People & the Press (2006) found that Republicans are about 1.5 times more likely to report watching Fox News regularly than are Democrats (34% for Republicans and 20% of Democrats). In contrast, Democrats are 1.5 times more likely to report watching CNN regularly than Republicans (28% of Democrats vs. 19% of Republicans). Even more striking, Republicans are approximately five times more likely than Democrats to report watching "The O'Reilly Factor" regularly and are seven times more likely to report listening to "Rush Limbaugh" regularly."[32] As a result, when the opinions of Republicans who only tune into conservative media outlets were compared to those of their fellow conservatives in a study by Stroud (2010), their beliefs were found to be more polarized. The same result was obtained for liberals.[33] Due to this greater tendency toward selective exposure, current political campaigns have been characterized as extremely partisan and polarized. As Bennett and Iyengar (2008) commented, "The new, more diversified information environment makes it not only more feasible for consumers to seek out news they might find agreeable but also provides a strong economic incentive for news organizations to cater to their viewers' political preferences."[33] Selective exposure thus plays a role in shaping and reinforcing individuals' political attitudes. In the context of these findings, Stroud (2008) comments, "The findings presented here should at least raise the eyebrows of those concerned with the noncommercial role of the press in our democratic system, with its role in providing the public with the tools to be good citizens." The role of public broadcasting, through its noncommercial mandate, is to counterbalance media outlets that deliberately devote their coverage to one political direction, thus driving selective exposure and political division in a democracy.
Many academic studies on selective exposure, however, are based on the electoral system and media system of the United States. Countries with strong public service broadcasting, like many European countries, on the other hand, have less selective exposure based on political ideology or political party.[34] In Sweden, for instance, there were no differences in selective exposure to public service news between the political left and right over a period of 30 years.[35]
In early research, selective exposure originally provided an explanation for limited media effects. The "limited effects" model of communication emerged in the 1940s with a shift in the media effects paradigm. This shift suggested that while the media has effects on consumers' behavior, such as their voting behavior, these effects are limited and influenced indirectly by interpersonal discussions and the influence of opinion leaders. Selective exposure was considered one necessary function in the early studies of media's limited power over citizens' attitudes and behaviors.[36] Political ads involve selective exposure as well, because people are more likely to favor a politician who agrees with their own beliefs. Another significant effect of selective exposure comes from Stroud (2010), who analyzed the relationship between partisan selective exposure and political polarization. Using data from the 2004 National Annenberg Election Survey, the analysts found that over time partisan selective exposure leads to polarization.[37][5] This process is plausible because people can easily create or access blogs, websites, chats, and online forums where those with similar views and political ideologies can congregate. Much of the research has also shown that political interaction online tends to be polarized. Further evidence for this polarization in the political blogosphere can be found in Lawrence et al. (2010)'s[38] study on blog readership, which found that people tend to read blogs that reinforce rather than challenge their political beliefs. According to Cass Sunstein's book Republic.com, the presence of selective exposure on the web creates an environment that breeds political polarization and extremism. Due to easy access to social media and other online resources, people are "likely to hold even stronger views than the ones they started with, and when these views are problematic, they are likely to manifest increasing hatred toward those espousing contrary beliefs."[39] This illustrates how selective exposure can influence an individual's political beliefs and subsequently their participation in the political system.
One of the major academic debates on the concept of selective exposure is whether selective exposure contributes to people's exposure to diverse viewpoints or to polarization. Scheufele and Nisbet (2012)[40] discuss the effects of encountering disagreement on democratic citizenship. Ideally, true civil deliberation among citizens would be the rational exchange of non-like-minded views (or disagreement). However, many people tend to avoid disagreement on a regular basis because they do not like to confront others who hold views strongly opposed to their own. In this sense, the authors question whether exposure to non-like-minded information brings positive or negative effects on democratic citizenship. While there are mixed findings on people's willingness to participate in the political process when they encounter disagreement, the authors argue that the issue of selectivity needs to be further examined in order to understand whether there is a truly deliberative discourse in the online media environment.
|
https://en.wikipedia.org/wiki/Selective_exposure_theory
|
A social bot, also described as a social AI or social algorithm, is a software agent that communicates autonomously on social media. The messages (e.g. tweets) it distributes can be simple, and bots can operate in groups and various configurations with partial human control (hybrid) via algorithms. Social bots can also use artificial intelligence and machine learning to express messages in more natural human dialogue.
Social bots are used for a large number of purposes on a variety of social media platforms, including Twitter, Instagram, Facebook, and YouTube. One common use of social bots is to inflate a social media user's apparent popularity, usually by artificially manipulating their engagement metrics with large volumes of fake likes, reposts, or replies. Social bots can similarly be used to artificially inflate a user's follower count with fake followers, creating a false perception of a larger and more influential online following than is the case.[1] The use of social bots to create the impression of a large social media influence allows individuals, brands, and organizations to attract a higher number of human followers and boost their online presence. Fake engagement can be bought and sold in the black market of social media engagement.[2]
Corporations typically use automated customer service agents on social media to affordably manage high levels of support requests.[3] Social bots are used to send automated responses to users' questions, sometimes prompting the user to private message the support account with additional information. The increased use of automated support bots and virtual assistants has led to some companies laying off customer-service staff.[4]
Social bots are also often used to influence public opinion. Autonomous bot accounts can flood social media with large numbers of posts expressing support for certain products, companies, or political campaigns, creating the impression of organic grassroots support.[5] This can create a false perception of the number of people who support a certain position, which may also have effects on the direction of stock prices or on elections.[6][7] Messages with similar content can also influence fads or trends.[8]
Many social bots are also used to amplify phishing attacks. These malicious bots are used to trick a social media user into giving up their passwords or other personal data. This is usually accomplished by posting links claiming to direct users to news articles that in actuality lead to malicious websites containing malware.[9] Scammers often use URL shortening services such as TinyURL and bit.ly to disguise a link’s domain address, increasing the likelihood of a user clicking the malicious link.[10] The presence of fake social media followers and high levels of engagement help convince the victim that the scammer is in fact a trusted user.
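On the defensive side, the disguised destination of a shortened link can be checked before anyone clicks it. The following is a minimal Python sketch using the requests library; the example short link is a hypothetical placeholder, not a real malicious URL.

```python
from urllib.parse import urlparse

import requests


def resolve_short_url(short_url: str, timeout: float = 5.0) -> str:
    """Follow redirects without downloading the page body and return the final domain."""
    response = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return urlparse(response.url).netloc


# Hypothetical shortened link: the printed domain can be compared with the
# domain the post claims to link to before clicking.
print(resolve_short_url("https://tinyurl.com/example-link"))
```

Comparing the resolved domain against the domain a post claims to link to is only a simple heuristic; platforms combine many such signals in practice.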
Social bots can be a tool for computational propaganda.[11] Bots can also be used for algorithmic curation, algorithmic radicalization, and/or influence-for-hire, a term that refers to the selling of an account on social media platforms.
Bots have coexisted with computer technology since the earliest days of computing. Social bots have their roots in the 1950s with Alan Turing, whose work focused on machine intelligence with the development of the Turing Test. The following decades saw further progress made towards the goal of creating programs capable of mimicking human behavior, notably with Joseph Weizenbaum’s creation of ELIZA.[12] Considered to be one of the first chatbots, ELIZA could simulate natural conversations with human users through pattern matching. Its most famous script was DOCTOR, a simulation of a Rogerian psychotherapist that was programmed to chat with patients and respond to questions.[13]
With the growth of social media platforms in the early 2000s, these bots could be used to interact with much larger user groups in an inconspicuous manner. Early instances of autonomous agents on social media could be found on sites like MySpace, with social bots being used by marketing firms to inflate activity on a user’s page in an effort to make them appear more popular.[14]
Social bots have been observed on a large variety of social media websites, with Twitter being one of the most widely observed examples. The creation of Twitter bots is generally against the site’s terms of service when used to post spam or to automatically like and follow other users, but some degree of automation using Twitter’s API may be permitted if used for “entertainment, informational, or novelty purposes.”[15] Other platforms such as Reddit and Discord also allow for the use of social bots as long as they are not used to violate policies regarding harmful content and abusive behavior. Social media platforms have developed their own automated tools to filter out messages that come from bots, although they cannot detect all bot messages.[16]
Due to the difficulty of recognizing social bots and separating them from "eligible" automation via social media APIs, it is unclear how legal regulation can be enforced. Social bots are expected to play a role in shaping public opinion by autonomously acting as influencers. Some social bots have been used to rapidly spread misinformation, manipulate stock markets, influence opinion on companies and brands, promote political campaigns, and engage in malicious phishing campaigns.[17]
In the United States, some states have started to implement legislation in an attempt to regulate the use of social bots. In 2019, California passed the Bolstering Online Transparency Act (the B.O.T. Act) to make it unlawful to use automated software to appear indistinguishable from humans for the purpose of influencing a social media user’s purchasing and voting decisions.[18] Other states such as Utah and Colorado have passed similar bills to restrict the use of social bots.[19]
The Artificial Intelligence Act (AI Act) in the European Union is the first comprehensive law governing the use of artificial intelligence.[20] The law requires transparency in AI to prevent users from being tricked into believing they are communicating with another human. AI-generated content on social media must be clearly marked as such, preventing social bots from using AI in a manner that mimics human behavior.[21]
The first generation of bots could sometimes be distinguished from real users by their often superhuman capacities to post messages. Later developments have succeeded in imprinting more "human" activity and behavioral patterns in the agent. With enough bots, it might even be possible to achieve artificial social proof. To unambiguously detect social bots as what they are, a variety of criteria[22] must be applied together using pattern detection techniques.[23]
Social bots are becoming increasingly difficult to detect and understand. Their human-like, ever-changing behavior and the sheer volume of bots covering every platform have been factors in the challenge of removing them.[27] Social media sites, like Twitter, are among the most affected, with CNBC reporting that up to 48 million of the 319 million users (roughly 15%) were bots in 2017.[28]
Botometer[29] (formerly BotOrNot) is a public Web service that checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. The system leverages over a thousand features.[30][31] An early method for detecting spam bots was to set up honeypot accounts that post nonsensical content, which may get reposted (retweeted) by the bots.[32] However, bots evolve quickly, and detection methods have to be updated constantly, because otherwise they may become useless after a few years.[33] One method is the use of Benford's Law for predicting the frequency distribution of significant leading digits to detect malicious bots online. This approach was first introduced at the University of Pretoria in 2020.[34] Another method is artificial-intelligence-driven detection. Sub-categories of this type of detection include active learning loop flow, feature engineering, unsupervised learning, supervised learning, and correlation discovery.[27]
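To make the Benford's Law idea concrete, the sketch below is a minimal illustration, not the University of Pretoria system: it tests whether the leading digits of an account's network statistics (here assumed to be already-collected follower counts of the account's friends) match Benford's expected distribution, using a chi-square statistic.

```python
import math
from collections import Counter

from scipy.stats import chisquare


def leading_digit(n: int) -> int:
    """Return the first significant digit of a positive integer."""
    return int(str(abs(n))[0])


def benford_test(counts: list[int]):
    """Chi-square test of observed leading digits against Benford's law."""
    digits = Counter(leading_digit(c) for c in counts if c > 0)
    total = sum(digits.values())
    observed = [digits.get(d, 0) for d in range(1, 10)]
    # Benford's law: P(first digit = d) = log10(1 + 1/d)
    expected = [total * math.log10(1 + 1 / d) for d in range(1, 10)]
    return chisquare(f_obs=observed, f_exp=expected)


# Hypothetical follower counts of an account's friends; a small p-value flags
# a network whose statistics deviate from the naturally occurring distribution.
print(benford_test([112, 1340, 29, 187, 2405, 33, 151, 960, 18, 274]))
```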
Some bot operations work together in a synchronized way. For example, ISIS used Twitter to amplify its Islamic content through numerous orchestrated accounts, which pushed an item onto the Hot List news,[35] thus amplifying the selected news to a larger audience.[36] Such synchronized bot accounts can be used as a tool of propaganda as well as of stock market manipulation.[37]
Instagram reached a billion active monthly users in June 2018,[38] but of those 1 billion active users, it was estimated that up to 10% were being run by automated social bots. While malicious propaganda posting bots are still popular, many individual users use engagement bots to propel themselves to a false virality, making them seem more popular on the app. These engagement bots can like, watch, follow, and comment on the users' posts.[39]
Around the time the platform reached the 1 billion monthly user plateau, Facebook (Instagram and WhatsApp's parent company) planned to hire 10,000 people to provide additional security to its platforms; this would include combatting the rising number of bots and malicious posts on the platforms.[40] Due to increased security on the platform and the detection methods used by Instagram, some botting companies are reporting issues with their services because Instagram imposes interaction limit thresholds based on past and current app usage, and many payment and email platforms deny the companies access to their services, preventing potential clients from being able to purchase them.[41]
Twitter's bot problem is caused by the ease of creating and maintaining bots. The ease of creating accounts and the many APIs that allow for complete automation of accounts have led to excessive numbers of organizations and individuals using these tools to push their own agendas.[28][42] CNBC claimed that about 15% of the 319 million Twitter users in 2017 were bots; the exact estimate was 48 million.[28] As of July 7, 2022, Twitter claims that it removes 1 million spam bots from its platform every day.[43]
Some bots are used to automate scheduled tweets, download videos, set reminders and send warnings of natural disasters.[44] Those are examples of bot accounts, but Twitter's API allows real accounts (individuals or organizations) to use certain levels of bot automation on their accounts and even encourages their use to improve user experiences and interactions.[45]
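As a hedged illustration of such permitted automation, the sketch below posts a benign, clearly automated reminder through the official API using the third-party tweepy library. The credentials are placeholders obtained from the developer portal, and the permissions actually granted depend on the developer account's access level.

```python
import tweepy

# Placeholder credentials; posting requires an account with write access.
client = tweepy.Client(
    consumer_key="YOUR_KEY",
    consumer_secret="YOUR_SECRET",
    access_token="YOUR_TOKEN",
    access_token_secret="YOUR_TOKEN_SECRET",
)

# A routine, clearly automated post; real deployments would trigger this from
# a scheduler such as cron rather than running continuously.
client.create_tweet(text="Automated daily reminder: check today's weather advisories.")
```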
In 2025, Meta announced it would be creating an AI product that helps users create AI characters on Instagram and Facebook, allowing these characters to have bios and profile pictures and to generate and share "AI-powered content" on the platforms.[46][47][48] Bot accounts managed by Meta began to be identified by the public around January 1, 2025,[49][50] with social media users noting that they appeared to be unblockable by human accounts and came with blue ticks to indicate they had been verified by Meta as trustworthy profiles.[51]
SocialAI, an app released on September 18, 2024, was created for the sole purpose of chatting with AI bots, without human interaction.[52] Its creator was Michael Sayman, a former product lead at Google who also worked at Facebook, Roblox, and Twitter.[53] An article on the Ars Technica website linked SocialAI to the dead Internet theory.[54]
|
https://en.wikipedia.org/wiki/Social_bot
|
The social data revolution is the shift in human communication patterns towards increased personal information sharing and its related implications, made possible by the rise of social networks in the early 2000s. This phenomenon has resulted in the accumulation of unprecedented amounts of public data.[1]
This large and frequently updated data source has been described as a new type of scientific instrument for the social sciences.[2] Several independent researchers have used social data to "nowcast" and forecast trends such as unemployment, flu outbreaks,[3] the mood of whole populations,[4] travel spending and political opinions in a way that is faster, more accurate and cheaper than standard government reports or Gallup polls.[2]
Social data refers to data individuals create that is knowingly and voluntarily shared by them. Cost and overhead previously rendered this semi-public form of communication unfeasible, but advances in social networking technology from 2004–2010 have made broader concepts of sharing possible.[5] The types of data users share include geolocation, medical data,[6] dating preferences, open thoughts, interesting news articles, etc.
The social data revolution not only enables new business models like the ones on Amazon.com but also provides large opportunities to improve decision-making for public policy and international development.[7]
The analysis of large amounts of social data leads to the field of computational social science. Classic examples include the study of media content[8] or social media content.[3][4][9]
Every internet activity leaves behind traces of data (a digital footprint) which can be used to learn more about the user.[10] As use of the internet becomes more widespread, the datafication of the world is progressing rapidly: around 16 zettabytes of data are currently produced per year, and 163 zettabytes are expected for the year 2025.[11] This has led to data becoming a critical commodity.[10] This ties together all societal actors: public institutions, private firms, and individuals, each relying on data in a unique way.
Governments have been collecting data for centuries to ensure the continuance of institutional systems, by limiting the risk of defaulted credit, collecting tax based on income, and providing the necessary infrastructure in consideration of their citizens' demographic distribution.[12] In its beginnings, this data entailed written information for record keeping and control, including a census system.[12]
This analogue process was very time- and cost-intensive, leaving little room for interpreting larger data sets.[12]Meanwhile, corporate technological developments have moved this offline data into the digital age, allowing visualization and data analytics.[12][10]In the public sphere, connecting the survey and poll methodologies with database computing, resulted in the ability to gather and store large data sets on individuals.[10]
Over the last few decades, the internet has shifted from being used mostly as a source of information about the world to being primarily used for communication, user-generated content, data sharing, and community building.[13] This is what many consider to be the development of "Web 2.0". Social network sites such as Facebook and YouTube are the foundation of Web 2.0 and the shift to social data sharing.[13]
Early examples of social data websites are Craigslist and the wishlists of Amazon.com. Both enable users to communicate information to anybody who is looking for it. They differ in their approach to identity. Craigslist leverages the power of anonymity, while Amazon.com leverages the power of persistent identity, based on the history of the customer with the firm. The job market is even being shaped by the information people share about themselves on sites like LinkedIn and Facebook.[14]
Examples of more sophisticated social data sites are Twitter and Facebook. On Twitter, sending a message or tweet is as simple as sending an SMS text message. Twitter made this C2W, customer to the world: any tweet a user sends can potentially be read by the entire world. Facebook focuses on interactions between friends, C2C in traditional language. It provides many ways of collecting data from its users: "tag" a friend in a photo, "comment" on what they posted, or just "like" it. These data are the basis for sophisticated models of the relationships between users. They can be used to significantly increase the relevance of what is shown to the user, and for advertising purposes.[15]
By 2009, the popularity of social networking sites had increased to four times what it had been in 2005.[16] As of 2013, Twitter has over 250 million users sharing almost 500 million tweets per day, and Facebook has well over one billion users around the world.[17]
Companies often use the data that is shared via social networking sites and other data-sharing avenues.[18] Social networking sites, for example, can sell user data to advertisers and other entities, which can then use it to influence consumer decisions.[13] Data mining is also used to gather this information.[18]
While websites and other applications were the origins of this data collection, with improvements in technology many devices used in daily life can collect data on individuals and are therefore increasing the amount of personal data that is available (e.g. smartphones, smartwatches, music devices).[19][20]
This growth of people's digital identity – the information available via these electronic sources – is being used by companies and organizations to improve products and services and to reduce costs by targeting what consumers want and expect.[20] The data that can be gathered can include shopping experiences, social media preferences, demographic information and more.[18]
Using this data allows for better personalization of products and has become an expected and vital aspect of product use and production.[19] The data that is accessible about consumers can be used to infer their behavioral patterns.[21] For example, location information is used to assess when and where consumers go, so that ads and promotions can be targeted based on which stores consumers visit.[21] Online retailers have also gained insight into how to better personalize the online shopping experience through data gathered during the online transaction.[22]
Businesses can even use consumer data to determine whether different shelf spacing of products has an effect on consumer purchasing decisions as well as assess potential cross-item marketing potentials based on items often purchased together.[23]
While businesses and advertisers often take advantage of the consumer data available, consumers also use other users' information for their purchase decisions. Social commerce sites are where consumers share product/service experiences, opinions and other information.[24] A famous example of such a site is Pinterest, which has over 100 million users.[24] These sites and other online sources of product/brand information are influential on consumers' purchasing decisions.[25] It is estimated that about 67% of online customers use this information in making their purchase decisions.[24] These sites create an environment that consumers consider trusted, since the information comes from other consumers.[24]
With the vast amount of accessible data about individuals, the potential uses of this information are growing.
The healthcare sector has many potential uses for this data. Information gathered from social media and other social data sharing sources can be used to predict the flu, disease outbreaks, how emergency responses are handled, and more.[26] With the use of Twitter and geotags, medical researchers can evaluate the health of a particular neighborhood and use that information to provide better outreach and services.[26] Medtronic has developed a digital blood glucose meter that lets health care providers and patients know about low glucose levels.[19]
Social data can also be used to assess reactions to crises.[27] After Hurricane Sandy, researchers used Twitter to evaluate the emotions and issues that those affected were facing.[27] This information can potentially be used to help better prepare for and respond to future crises.
This data can be used to assist with urban planning. The city of Boston has used rider information from Uber to improve transportation planning and road maintenance.[19]
Using social data for research purposes has led to the development of computational social science. Computational social science combines social science, computer science, and network science.[28]This field emerged in 2009.[29]Before the rise of social data and the technological advances that supported it, researchers were limited to a narrow view of information based on individuals since their primary form of research relied on interviews.[29]With the vast amount of social data available today, researchers can now analyze a wider group and can obtain a broader view of information. They can use social networks, cell phone data, and perform online experiments that allow them to gather more information than before.[29]
With the amount of data about individuals accessible from many sources, privacy has become a major concern. Security breaches of customer and other social information, such as the compromise of more than 56 million Home Depot customers' credit card information,[19] have heightened concern about privacy with social data. How companies are using the personal information gathered, and its potential misuse, is a concern for the majority of consumers.[19][20] Despite this, many people do not know how social networking sites and other sources are using and selling their data.[30] In a 2014 study, only 25% of online users knew that their location could be accessed and only 14% knew that their web-surfing history could be accessed and shared.[19]
Even though privacy concerns are a critical factor in people's sharing of personal information on the internet and in overall internet involvement,[22] most people are willing to share this information if the benefits of doing so outweigh the potential privacy and security costs.[18][20] Consumers enjoy the personalization of products and services that is possible because of this information gathering and, despite the concerns, continue to use them.[19]
"From a macro-perspective, it is expected that Big Data-informed decision-making will have a similar positive effect on efficiency and productivity as ICT have had during the recent decade."
In his study of the data revolution in international development, Martin Hilbert, Social Sciences Professor at UC Davis, argued that the natural next step from the information societies fueled by ICT since the late 1990s is knowledge societies informed by Big Data analysis. Decision-making informed by big data analysis has improved both efficiency and productivity in the developed world. Hilbert examines the challenges and potential of the data revolution on "the unruly world of international development."[7]
Hilbert identified four types of data available in large quantities by 2013: words, locations, nature, and behavior.[7]
Individual interactions with the internet, such as words in comments, social media postings, and Google search term volumes, offer an increasingly large source of big data. Statistics are typically generated through a census or a probability survey, for example the Annual Social and Economic Supplement (ASEC), Current Population Survey (CPS), American Community Survey (ACS), and National Health Interview Survey (NHIS) in the United States, or through administrative records, such as payroll, unemployment, Social Security income taxes, scanner data, credit card data and other commercial transaction records.[31]
"Google has analyzed clusters of search terms by region in the United States to predict flu outbreaks faster than was possible using hospital admission records."
Weatherhead University Professor Gary King described how the revolution is not just regarding the quantity of data available but in the ability to do something with the data to benefit society.[32]
Global Positioning System (GPS)-enabled mobile tablets and phones, Radio-frequency identification (RFID) chips (part of Automatic identification and data capture (AIDC) technologies), telematics, location-based games, etc. provide data on absolute location and relative movement.
Hilbert categorizes data on natural processes under 'nature'; this includes sensors that provide data on moisture in the air and temperature.[7]
Data can be generated from user behavior in multiplayer online games,[7] such as League of Legends, World of Warcraft, Minecraft, Call of Duty, and Dota 2. Nathan Eagle, a computer scientist at the Santa Fe Institute in New Mexico, began using cellphones in the early 2000s to collect accurate, large-scale data about real social interactions.[33][34][35] The project was named one of the "10 Technologies Most Likely To Change The Way We Live" by the MIT Technology Review.[36]
|
https://en.wikipedia.org/wiki/Social_data_revolution
|
The social influence bias is an asymmetric herding effect on online social media platforms which makes users overcompensate for negative ratings but amplify positive ones. Driven by the desire to be accepted within a specific group, it surrounds the idea that people alter certain behaviors to match those of the people within a group.[1] It is therefore an umbrella term for various types of cognitive biases. Some social influence bias types include the bandwagon effect, authority bias, the groupthink effect, social comparison bias, social media bias and more.[1] Understanding these biases helps us understand the term overall.
However, the composition of the term "social influence bias" requires critical examination to understand the way that it affects individuals' and groups' lives. The term "influence" carries two different stigmas. On one hand, it surrounds the idea that people show their true inner selves when "under the influence". On the other hand, it proposes the idea that people are not their own selves when "under the influence". These tend to be constructions made by people, which also tend to fit the situation based on their own perspectives. So, even in social terms, both sides must be examined to understand whether we truly are affected by context, or whether we remain ourselves and behave in terms of our own selves. The term "influence" does not necessarily say that there lies greater strength in our inner self's desires and decisions, nor does it say that external factors have the greater power.[2] In a similar manner, both social and non-social judgments are associated with anxiety, but the same cannot necessarily be said in the case of social conformity.[3] So, the gray areas within this topic beg the question, "What does social influence bias say about us, and does it affect us all in the same way?"
Media bias is reflected in search systems in social media. Kulshrestha and her team found through research in 2018 that the top-ranked results returned by these search engines can influence users' perceptions when they conduct searches for events or people, which is particularly reflected in political bias and polarizing topics.[4] Fueled by confirmation bias, online echo chambers allow users to be steeped within their own ideology. Because social media is tailored to your interests and your selected friends, it is an easy outlet for political echo chambers.[5]
Social media bias is also reflected in the hostile media effect. Social media has a place in disseminating news in modern society, where viewers are exposed to other people's comments while reading news articles. In their 2020 study, Gearhart and her team showed that viewers' perceptions of bias increased and perceptions of credibility decreased after seeing comments with which they held different opinions.[6]
In observational data, how social influence affects collective judgment is challenging to fully understand. Positive social influence can accumulate and result in a rating bubble, while negative social influence is neutralized by crowd correction.[7] This phenomenon was first described in a paper written by Lev Muchnik,[8] Sinan Aral[9] and Sean J. Taylor[10] in 2014,[11] and the question was later revisited by Cicognani et al., whose experiment reinforced Muchnik and his co-authors' results.[12]
Online customer reviews are trusted sources of information in various contexts such as online marketplaces, dining, accommodation, movies, or digital products. However, these online ratings are not immune to herd behavior, which means that subsequent reviews are not independent of each other. Because preceding opinions are visible to a new reviewer on many such sites, he or she can be heavily influenced by the antecedent evaluations when deciding about a certain product, service or online content.[13] This form of herding behavior inspired Muchnik, Aral and Taylor to conduct their experiment on influence in social contexts.
Muchnik, Aral, and Taylor designed a large-scale randomized experiment to measure social influence on user reviews. The experiment was conducted on a social news aggregation website similar to Reddit. The study lasted 5 months; the authors randomly assigned 101,281 comments to one of the following treatment groups: up-treated (4,049), down-treated (1,942), or control (the proportions reflect the observed ratio of up- and down-votes). Comments in the first group were given an up-vote upon creation, comments in the second group got a down-vote upon creation, and the comments in the control group remained untouched. A vote is equivalent to a single rating (+1 or -1). As other users are unable to trace a user’s votes, they were unaware of the experiment. Due to randomization, comments in the control and the treatment groups did not differ in terms of expected rating. The treated comments were viewed more than 10 million times and rated 308,515 times by successive users.[11]
The up-vote treatment increased the probability of up-voting by the first viewer by 32% over the control group, while the probability of down-voting did not change compared to the control group, which means that users did not correct the random positive rating. The upward bias remained in place for the observed 5-month period. The accumulating herding effect increased the comment’s mean rating by 25% compared to the control group comments. Positively manipulated comments received higher ratings at all parts of the distribution, which means that they were also more likely to collect extremely high scores.[14]
The negative manipulation created an asymmetric herd effect: although the probability of subsequent down-votes was increased by the negative treatment, the probability of up-voting also grew for these comments. The community performed a correction which neutralized the negative treatment and resulted in final mean ratings that did not differ from the control group. The authors also compared the final mean scores of comments across the most active topic categories on the website. The observed positive herding effect was present in the "politics," "culture and society," and "business" subreddits, but was not applicable to "economics," "IT," "fun," and "general news".[11]
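The asymmetry described above can be illustrated with a toy simulation. The sketch below is not the authors' model; the voting probabilities are invented assumptions chosen so that positive scores attract further up-votes (herding) while negative scores attract corrective up-votes.

```python
import random

random.seed(0)


def simulate_comment(initial_vote: int, n_voters: int = 100) -> int:
    """Random walk over sequential votes with score-dependent probabilities."""
    score = initial_vote
    for _ in range(n_voters):
        if score > 0:
            p_up = 0.60  # herding: positive scores attract more up-votes
        elif score < 0:
            p_up = 0.55  # correction: the crowd offsets negative scores
        else:
            p_up = 0.50
        score += 1 if random.random() < p_up else -1
    return score


# Up-treated comments end with inflated mean scores, while the crowd's
# corrective up-voting pulls down-treated comments back toward the control.
for label, vote in [("up-treated", 1), ("control", 0), ("down-treated", -1)]:
    mean_score = sum(simulate_comment(vote) for _ in range(2000)) / 2000
    print(label, round(mean_score, 1))
```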
The skewed nature of online ratings makes review outcomes different from what they would be without the social influence bias. A 2009 experiment[15] by Hu, Zhang and Pavlou showed that the distribution of reviews of a certain product made by unconnected individuals is approximately normal; however, the rating of the same product on Amazon followed a J-shaped distribution with twice as many five-star ratings as other ratings. Cicognani, Figini and Magnani came to similar conclusions after their experiment conducted on a tourism services website: positive preceding ratings influenced raters' behavior more than mediocre ones.[12] Positive crowd correction makes community-based opinions upward-biased.
|
https://en.wikipedia.org/wiki/Social_influence_bias
|
Media bias occurs when journalists and news producers show bias in how they report and cover news. The term "media bias" implies a pervasive or widespread bias contravening the standards of journalism, rather than the perspective of an individual journalist or article.[1] The direction and degree of media bias in various countries is widely disputed.[2]
Practical limitations to media neutrality include the inability of journalists to report all available stories and facts, and the requirement that selected facts be linked into a coherent narrative.[3] Government influence, including overt and covert censorship, biases the media in some countries, for example China, North Korea, Syria and Myanmar.[4][5] Politics and media bias may interact with each other; the media has the ability to influence politicians, and politicians may have the power to influence the media. This can change the distribution of power in society.[6] Market forces may also cause bias. Examples include bias introduced by the ownership of media, including a concentration of media ownership, the subjective selection of staff, or the perceived preferences of an intended audience.
Assessing possible bias is one aspect of media literacy, which is studied at schools of journalism and university departments (including media studies, cultural studies, and peace studies). Other focuses beyond political bias include international differences in reporting, as well as bias in reporting of particular issues such as economic class or environmental interests. Academic findings around bias can also differ significantly from public discourse and understanding of the term.[7]
In the 2017 Oxford Handbook of Political Communication, S. Robert Lichter described how in academic circles, media bias is more of a hypothesis to explain various patterns in news coverage than any fully-elaborated theory,[7]and that a variety of potentially overlapping types of bias have been proposed that remain widely debated.
Various hypotheses of media bias have been proposed.
An ongoing and unpublished research project named "The Media Bias Taxonomy" is attempting to assess the various definitions and meanings of media bias. While still ongoing, it attempts to summarize the domain into distinct subcategories: linguistic bias (encompassing linguistic intergroup bias, framing bias, epistemological bias, bias by semantic properties, and connotation bias); text-level context bias (featuring statement bias, phrasing bias, and spin bias); reporting-level context bias (highlighting selection bias, coverage bias, and proximity bias); cognitive biases (such as selective exposure and partisan bias); and related concepts like framing effects, hate speech, sentiment analysis, and group biases (encompassing gender bias, racial bias, and religion bias). The authors emphasize the complex nature of detecting and mitigating bias across different media content and contexts.[29][better source needed]
John Milton's 1644 pamphlet Areopagitica, a Speech for the Liberty of Unlicensed Printing was one of the first publications advocating freedom of the press.[30]
In the 19th century, journalists began to recognize the concept of unbiased reporting as an integral part of journalistic ethics. This coincided with the rise of journalism as a powerful social force. Even today, the most conscientiously objective journalists cannot avoid accusations of bias.[31][page needed]
Like newspapers, the broadcast media (radio and television) have been used as a mechanism for propaganda from their earliest days, a tendency made more pronounced by the initial ownership of broadcast spectrum by national governments. Although a process of media deregulation has placed the majority of the western broadcast media in private hands, there still exists a strong government presence, or even monopoly, in the broadcast media of many countries across the globe. At the same time, the concentration of media ownership in private hands, and frequently amongst a comparatively small number of individuals, has also led to accusations of media bias.[citation needed]
There are many examples of accusations of bias being used as a political tool, sometimes resulting in government censorship.[original research?][globalize]
Not all accusations of bias are political. Science writer Martin Gardner has accused the entertainment media of anti-science bias. He claimed that television programs such as The X-Files promote superstition.[9] In contrast, the Competitive Enterprise Institute, which is funded by businesses, accuses the media of being biased in favor of science and against business interests, and of credulously reporting science that shows that greenhouse gasses cause global warming.[39]
While most accusations of bias tend to revolve around ideological disagreements, other forms of bias are cast as structural in nature. There is little agreement on how they operate or originate but some involve economics, government policies, norms, and the individual creating the news.[40]Some examples, according to Cline (2009) include commercial bias, temporal bias, visual bias, bad news bias, narrative bias, status quo bias, fairness bias, expediency bias, class bias and glory bias (or the tendency to glorify the reporter).[41]
There is also a growing economics literature on mass media bias, both on the theoretical and the empirical side. On the theoretical side the focus is on understanding to what extent the political positioning of mass media outlets is mainly driven by demand or supply factors. This literature was surveyed by Andrea Prat of Columbia University and David Stromberg of Stockholm University in 2013.[42]
When a media organization prefers that consumers take particular actions, the resulting slant is supply-driven bias.
Supply-driven bias has several implications.[15]
An example of supply-driven bias is Zinman and Zitzewitz's study of snowfall reporting. Ski resorts tend to be biased in snowfall reporting, reporting higher snowfall than official forecasts.[43][better source needed]
David Baron suggests a game-theoretic model of mass media behaviour in which, given that the pool of journalists systematically leans towards the left or the right, mass media outlets maximise their profits by providing content that is biased in the same direction as their employees.[44]
Herman and Chomsky (1988) cite sources of supply-driven bias, including reliance on official sources, funding from advertising, efforts to discredit independent media ("flak"), and "anti-communist" ideology, resulting in news that favors U.S. corporate interests.[45]
Demand from media consumers for a particular type of bias is known as demand-driven bias. Consumers tend to favor biased media based on their preferences, an example of confirmation bias.[15]
Three major factors shape this choice for consumers.
Demand-side incentives are often not related to distortion. Competition can still affect the welfare and treatment of consumers, but it is not very effective in changing bias compared to the supply side.[15]
In demand-driven bias, the preferences and attitudes of readers can be monitored on social media, and mass media write news that caters to readers based on them. Mass media skew news in pursuit of viewership and profits, which leads to media bias. Readers are also easily attracted to lurid news, even though it may be biased and inaccurate.
Dong, Ren, and Nickerson investigated Chinese stock-related news and weibos from 2013–2014 on Sina Weibo and Sina Finance (4.27 million pieces of news and 43.17 million weibos) and found that news that aligns with Weibo users' beliefs is more likely to attract readers. The information in biased reports also influences the decision-making of the readers.[46]
In Raymond and Taylor's test of weather forecast bias, they investigated weather reports in the New York Times during the games of the baseball team the Giants from 1890 to 1899. Their findings suggest that the New York Times produced biased weather forecasts depending on the region in which the Giants played: when they played at home in Manhattan, predictions of sunny days increased. From this study, Raymond and Taylor found that the bias pattern in New York Times weather forecasts was consistent with demand-driven bias.[43][better source needed]
Sendhil Mullainathan and Andrei Shleifer of Harvard University constructed a behavioural model in 2005, which is built around the assumption that readers and viewers hold beliefs that they would like to see confirmed by news providers, which they argue the market then provides.[47]
Demand-driven models evaluate to what extent media bias stems from companies providing consumers what they want.[48] Stromberg posits that because wealthier viewers result in more advertising revenue, the media ends up targeted to whiter and more conservative consumers, while wealthier urban markets may be more liberal and produce an opposite effect, in newspapers in particular.[49]
Perceptions of media bias may also be related to the rise of social media. The rise of social media has undermined the economic model of traditional media. The number of people who rely upon social media has increased and the number who rely on print news has decreased.[50] Studies of social media and disinformation suggest that the political economy of social media platforms has led to a commodification of information on social media. Messages are prioritized and rewarded based on their virality and shareability rather than their truth,[51] promoting radical, shocking click-bait content.[52] Social media influences people in part because of psychological tendencies to accept incoming information, to take feelings as evidence of truth, and to not check assertions against facts and memories.[53]
Media bias in social media is also reflected in the hostile media effect. Social media has a place in disseminating news in modern society, where viewers are exposed to other people's comments while reading news articles. In their 2020 study, Gearhart and her team showed that viewers' perceptions of bias increased and perceptions of credibility decreased after seeing comments with which they held different opinions.[54]
Within the United States, Pew Research Center reported that 64% of Americans believed that social media had a toxic effect on U.S. society and culture in July 2020. Only 10% of Americans believed that it had a positive effect on society. Some of the main concerns with social media lie with the spread of deliberately false information and the spread of hate and extremism. Social scientists explain the growth of misinformation and hate as a result of the increase in echo chambers.[55]
Fueled by confirmation bias, online echo chambers allow users to be steeped within their own ideology. Because social media is tailored to your interests and your selected friends, it is an easy outlet for political echo chambers.[56] Another Pew Research poll in 2019 showed that 28% of US adults "often" find their news through social media, and 55% of US adults get their news from social media either "often" or "sometimes".[57] Additionally, more people are reported as going to social media for their news as the COVID-19 pandemic restricted politicians to online campaigns and social media live streams. GCF Global encourages online users to avoid echo chambers by interacting with different people and perspectives along with avoiding the temptation of confirmation bias.[58][59]
Yu-Ru and Wen-Ting's research looks into how liberals and conservatives conducted themselves on Twitter after three mass shooting events. Although both showed negative emotions towards the incidents, they differed in the narratives they pushed. The two sides often contrasted in what they deemed the root cause, along with who they deemed the victims, heroes, and villains. There was also a decrease in any conversation that was considered proactive.[60]
Media scholar Siva Vaidhyanathan, in his book Anti-Social Media: How Facebook Disconnects Us and Undermines Democracy (2018), argues that on social media networks the most emotionally charged and polarizing topics usually predominate, and that "If you wanted to build a machine that would distribute propaganda to millions of people, distract them from important issues, energize hatred and bigotry, erode social trust, undermine journalism, foster doubts about science, and engage in massive surveillance all at once, you would make something a lot like Facebook."[61][62]
In a 2021 report, researchers at New York University's Stern Center for Business and Human Rights found that Republicans' frequent argument that social media companies like Facebook and Twitter have an "anti-conservative" bias is false and lacks any reliable evidence supporting it; the report found that right-wing voices are in fact dominant on social media and that the claim that these platforms have an anti-conservative lean "is itself a form of disinformation."[63][64]
A 2021 study in Nature Communications examined political bias on social media by assessing the degree to which Twitter users were exposed to content on the left and right – specifically, exposure on the home timeline (the "news feed"). The study found that conservative Twitter accounts are exposed to content on the right, whereas liberal accounts are exposed to moderate content, shifting those users' experiences toward the political center.[65] The study determined: "Both in terms of information to which they are exposed and content they produce, drifters initialized with Right-leaning sources stay on the conservative side of the political spectrum. Those initialized with Left-leaning sources, on the other hand, tend to drift toward the political center: they are exposed to more conservative content and even start spreading it."[65] These findings held true for both hashtags and links.[65] The study also found that conservative accounts are exposed to substantially more low-credibility content than other accounts.[65]
A 2022 study in PNAS, using a long-running massive-scale randomized experiment, found that the political right enjoys higher algorithmic amplification than the political left in six out of seven countries studied. In the US, algorithmic amplification favored right-leaning news sources.[66]
Media bias is also reflected in search systems in social media. Kulshrestha and her team found through research in 2018 that the top-ranked results returned by these search engines can influence users' perceptions when they conduct searches for events or people, which is particularly reflected in political bias and polarizing topics.[67]
Tanya Pamplone warns that since much of international journalism takes place in English, there can be instances where stories and journalists from countries where English is not taught have difficulty entering the global conversation.[68]
Language may also introduce a more subtle form of bias. The selection of metaphors and analogies, or the inclusion of personal information in one situation but not another can introduce bias, such as a gender bias.[69]
The Satanic panic, a moral panic and episode of national hysteria that emerged in the U.S. in the 1980s (and spread thereafter to Canada, Britain, and Australia), was reinforced by tabloid media and infotainment.[70] Scholar Sarah Hughes, in a study published in 2016, argued that the panic "both reflected and shaped a cultural climate dominated by the overlapping worldviews of politically active conservatives" whose ideology "was incorporated into the panic and reinforced through" tabloid media, sensationalist television and magazine reporting, and local news.[70] Although the panic dissipated in the 1990s after it was discredited by journalists and the courts, Hughes argues that it has had an enduring influence on American culture and politics even decades later.[70]
In 2012, Huffington Post columnist Jacques Berlinerblau argued that secularism has often been misinterpreted in the media as another word for atheism.[71]
According to Stuart A. Wright in 1997, there are six factors that contribute to media bias against minority religions: first, the knowledge and familiarity of journalists with the subject matter; second, the degree of cultural accommodation of the targeted religious group; third, limited economic resources available to journalists; fourth, time constraints; fifth, sources of information used by journalists; and finally, the front-end/back-end disproportionality of reporting. According to Yale Law professor Stephen Carter, "it has long been the American habit to be more suspicious of – and more repressive toward – religions that stand outside the mainline Protestant-Roman Catholic-Jewish troika that dominates America's spiritual life." As for front-end/back-end disproportionality, Wright says: "news stories on unpopular or marginal religions frequently are predicated on unsubstantiated allegations or government actions based on faulty or weak evidence occurring at the front-end of an event. As the charges weighed in against material evidence, these cases often disintegrate. Yet rarely is there equal space and attention in the mass media given to the resolution or outcome of the incident. If the accused are innocent, often the public is not made aware."[72][non-primary source needed][undue weight?–discuss]
Academic studies tend not to confirm the popular media narrative of liberal journalists producing a left-leaning media bias in the U.S., though some studies suggest economic incentives may have that effect. Instead, the studies reviewed by S. Robert Lichter generally found the media to be a conservative force in politics.[73]
Critics of media bias tend to point out how a particular bias benefits existing power structures, undermines democratic outcomes and fails to inform people with the information they need to make decisions around public policy.[74]
Experiments have shown that media bias affects behavior and more specifically influences the readership's political ideology. A study found higher politicization rates with increased exposure to the Fox News channel,[75] while a 2009 study found a weakly-linked decrease in support for the Bush administration when participants were given a free subscription to the right-leaning The Washington Times or the left-leaning The Washington Post.[76]
Perceptions of media bias and trust in the media changed significantly from 1985 to 2011 in the US. Pew studies reported that the percentage of Americans who trusted that news media “get their facts straight” dropped from 55% in 1985 to 25% in 2011. Similarly, the percentage of Americans who trusted that news organizations would deal fairly with all sides when dealing with political and social issues dropped from 34% in 1985 to 16% in 2011. By 2011 almost two-thirds of respondents considered news organizations to be “politically biased in their reporting”, up from 45% in 1985.[21] Similar decreases in trust have been reported by Gallup: in 2022, half of Americans responded that they believed news organizations would deliberately attempt to mislead them.[77]
Jonathan M. Ladd (2012), who has conducted intensive studies of media trust and media bias, concluded that the primary cause of belief in media bias is telling people that particular media are biased. People who are told that a medium is biased tend to believe that it is biased, and this belief is unrelated to whether that medium is actually biased or not. The only other factor with as strong an influence on belief that media is biased, he found, was extensive coverage of celebrities. A majority of people see such media as biased, while at the same time preferring media with extensive coverage of celebrities.[78]
NPR's ombudsman wrote a 2011 article about how to note the political leanings of think tanks or other groups that the average listener might not know much about before citing a study or statistic from an organization.[79]
Polis (or Pol.is) is a social media website that allows people to share their opinions and ideas while elevating ideas that have more consensus.[80] By September 2020, it had helped to form the core of dozens of pieces of legislation passed in Taiwan.[80] Proponents had sought out a way to inform the government with the opinions of citizens between elections while also providing an online outlet for citizens that was less divisive and more informative than social media and other large websites.[80][81]
Attempts have also been made to utilize machine learning to analyze the bias of text.[82] For example, person-oriented framing analysis attempts to identify frames, i.e., "perspectives", in news coverage on a topic by determining how each person mentioned in the topic's coverage is portrayed.[83][84]
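As a minimal illustration of the general idea, and not of the person-oriented framing system cited above, the sketch below trains a toy bag-of-words classifier to separate loaded from neutral phrasing. The tiny training set is invented; a real system would need thousands of labeled articles and far richer features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training snippets: loaded versus neutral phrasing of news prose.
articles = [
    "The reckless policy wastes taxpayer money on a failing program.",
    "The bold policy invests public funds in a promising program.",
    "Officials released the quarterly budget figures on Tuesday.",
    "Critics slammed the disastrous, chaotic rollout of the plan.",
    "The committee scheduled hearings for the first week of March.",
]
labels = ["biased", "biased", "neutral", "biased", "neutral"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

# Score unseen sentences against the learned vocabulary of loaded wording.
print(model.predict(["The plan's rollout was announced on Monday."]))
```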
Another approach, matrix-based news aggregation can help to reveal differences in media coverage between different countries, for example.[85][non-primary source needed]
A technique used to avoid bias is the "point/counterpoint" or "round table", an adversarial format in which representatives of opposing views comment on an issue. This approach theoretically allows diverse views to appear in the media. However, the person organizing the report still has the responsibility to choose reporters or journalists that represent a diverse or balanced set of opinions, to ask them non-prejudicial questions, and to edit or arbitrate their comments fairly. When done carelessly, a point/counterpoint can be as unfair as a simple biased report, by suggesting that the "losing" side lost on its merits. Besides these challenges, exposing news consumers to differing viewpoints seems to be beneficial for a balanced understanding and more critical assessment of current events and latent topics.[83] Using this format can also lead to accusations that the reporter has created a misleading appearance that viewpoints have equal validity (sometimes called "false balance"). This may happen when a taboo exists around one of the viewpoints, or when one of the representatives habitually makes claims that are easily shown to be inaccurate.[citation needed]
The CBC and Radio Canada, its French language counterpart, are governed by the 1991 Broadcasting Act, which states programming should be "varied and comprehensive, providing balance of information...provide a reasonable opportunity for the public to be exposed to the expression of differing views on matters of public concern."[86]
|
https://en.wikipedia.org/wiki/Media_bias#Social_media_bias
|
Vicarious trauma after viewing media develops after an individual learns or hears about someone else experiencing a traumatic event. The information they hear may have a negative psychological impact on the person, even though they did not experience the trauma themselves.[1]
Over the last fifty years, there has been an increase in the different types of media that are accessible to the public.[2] Most people use online search engines, social media, or other online news outlets to find out what is going on in the world.[3] This increase can lead to people easily viewing negative images and stories about traumatic events that they would not have been exposed to otherwise. One thing to consider is how the dissemination of this information may be impacting the mental health of people who identify with the victims of the violence they hear and see through the media. The viewing of these traumatic videos and stories can lead to the vicarious traumatization of the viewers.[4][5][6]
Research on vicarious trauma has focused on how mental health providers, medical workers, and first responders respond to the trauma they hear about in their everyday work experiences.[1][7] While these people do not directly experience the trauma, they have symptoms similar to those of an individual diagnosed with post-traumatic stress disorder.[1] Some of those symptoms include hypervigilance, difficulties sleeping, changes in how they view the world and themselves, and intrusive images of the trauma.[7]
As research on vicarious trauma has expanded, researchers and journalists have begun to analyze how it might impact the general public. One of the ways information about traumatic events is dispersed is through the media, which includes news broadcasts and social media applications. The New York Times commented that even though traumatic events have happened since the dawn of time, the news, and more recently social media, is what allows people across the world to know about major events.[8][9] The difference between major news organizations and social media is that most news organizations discuss how viewing traumatic or violent events impacts their staff and consumers.[10] Some media organizations also make a point to flag content that could be considered disturbing to their viewers, to decrease the amount of violent and traumatic content they release online.[10] While major news outlets often regulate what they post, they still show the aftermath of traumatic events on their websites and in their newspapers. Examples include pictures of the twin towers after 9/11, the Boston bombing, the Rodney King video, and footage of the L.A. riots.
While major media companies were once the main source through which people received information about major events, people have also begun turning to social media to stay updated. Since the information is posted by private individuals, they are able to post unedited footage that may contain graphic and traumatic material on their social media platforms.[10] There is also the risk of having distressing content appear on someone's page as an advertisement while they are browsing material that does not relate to the traumatic event,[8] which can make such content difficult to avoid, since smartphones are constantly updating news around major events that happen in society.[11]
Due to the increase of online social interactions, researchers have questioned the impact of indirect online contact on the emotions and thoughts of online users. While past studies have found that emotions can spread between people during direct social contact due to concepts like mimicry,[12] researchers were unsure whether the same could happen through indirect contact made over social media. Coviello et al. (2014) found that people's posts on social media influenced the emotions and behaviors of other people who were their friends or who followed their online account.[12] They also found that people tended to use language similar to the initial post they saw when responding or further commenting on their own posts, which causes them to further spread the same emotionally valent message to others.[13][5] This research expanded the earlier finding that people's emotions are influenced by nonverbal communication, like the facial expressions and body language of the people around them, to show that they can also be influenced by text-only communication.[12][13]
As discussed in the section above, emotional contagion can happen through different forms of indirect contact with media. Over the last decade, researchers have found data to support the idea that some people are vicariously traumatized when viewing or reading media pertaining to a traumatic event.[4][5][11][14] Studies have questioned whether media leads to a greater impact on the development of some symptoms of vicarious trauma and whether a specific type of media has a greater impact than others. Holman et al. (2013) found that people who watched six or more hours of media coverage up to a week after the Boston bombings had higher stress levels than people who were directly exposed to the bombing.[4] Goodwin et al. (2013) found that the participants in their study showed greater stress reactions when they took in information about the trauma from social media, compared to those who used more traditional forms of media.[15]
Researchers have shown that social media is a major risk factor for developing trauma symptoms,[15][16]or even being diagnosed with post-traumatic stress disorder.[6]The frequency of exposure to traumatic or disturbing information through media is related to the development of anxiety and PTSD-related symptoms.[17]While the initial reaction to viewing such media may cause acute stress symptoms, these generally decrease over time. It is repeated exposure to the distressing information or images that may result in the development of longer-lasting symptoms.[17]
|
https://en.wikipedia.org/wiki/Vicarious_trauma_after_viewing_media
|
In trait theory, the Big Five personality traits (sometimes known as the five-factor model of personality or the OCEAN or CANOE models) are a group of five characteristics used to study personality:[1]
The Big Five traits did not arise from studying an existing theory of personality; rather, they were an empirical finding in early lexical studies that English personality-descriptive adjectives clustered together under factor analysis into five unique factors.[3][4]The factor analysis indicates that these five factors can be measured, but further studies have suggested revisions and critiques of the model. Cross-language studies have found a sixth Honesty-Humility factor, suggesting a replacement by the HEXACO model of personality structure.[5]A study of short-form constructs found that the agreeableness and openness constructs were ill-defined in a larger population, suggesting that these traits should be dropped and replaced by more specific dimensions. The same study argued that labels such as "neuroticism" are ill-fitting and that the traits are more properly thought of as unnamed dimensions: "Factor A", "Factor B", and so on.[6]
Despite these issues with its formulation, the five-factor approach has been enthusiastically and internationally embraced, becoming central to much of contemporary personality research. Many subsequent factor analyses, variously formulated and expressed in a variety of languages, have repeatedly reported the finding of five largely similar factors. The five-factor approach has been portrayed as a fruitful scientific achievement, a fundamental advance in the understanding of human personality. Some have claimed that the five factors of personality are "an empirical fact, like the fact that there are seven continents on earth and eight American Presidents from Virginia".[7]Others, such as Jack Block, have expressed concerns over the uncritical acceptance of the approach.[8]
William McDougall, writing in 1932, put forward a conjecture that "five distinguishable but separable factors" could be identified when looking at personality. His suggestions, "intellect, character, temperament, disposition and temper", have been seen as "anticipating" the adoption of the Big Five model in subsequent years.[9]The model was built on understanding the relationship between personality and academic behaviour.[10]It was defined by several independent sets of researchers who analysed words describing people's behaviour.[9]These researchers first studied relationships between a large number of words related to personality traits. They reduced these word lists by a factor of 5–10 and then used factor analysis to group the remaining traits (with data mostly based upon people's estimations, in self-report questionnaires and peer ratings) in order to find the basic factors of personality.[11][12][13][14][15]
The initial model was advanced in 1958 by Ernest Tupes and Raymond Christal, research psychologists at the Lackland Air Force Base in Texas, but did not reach a wider scholarly audience until the 1980s. In 1990, J.M. Digman advanced his five-factor model of personality, which Lewis Goldberg placed at the highest level of organisation.[16]These five overarching domains have been found to contain most known personality traits and are assumed to represent the basic structure behind them all.[17]
At least four sets of researchers have worked independently for decades to reflect personality traits in language and have mainly identified the same five factors: Tupes and Christal were first, followed by Goldberg at the Oregon Research Institute,[18][19][20][21][22]Cattell at the University of Illinois,[13][23][24][25]and finally Costa and McCrae.[26][27][28][29]These four sets of researchers used somewhat different methods in finding the five traits, so the sets of five factors have varying names and meanings. However, all have been found to be strongly correlated with their corresponding factors.[30][31][32][33][34]Studies indicate that the Big Five traits are not nearly as powerful in predicting and explaining actual behaviour as the more numerous facets or primary traits.[35][36]
Each of the Big Five personality traits contains two separate, but correlated, aspects reflecting a level of personality below the broad domains but above the many facet scales also making up part of the Big Five.[37]The aspects are labelled as follows: Volatility and Withdrawal for Neuroticism; Enthusiasm and Assertiveness for Extraversion; Intellect and Openness for Openness to Experience; Industriousness and Orderliness for Conscientiousness; and Compassion and Politeness for Agreeableness.[37]
In 1884, British scientist Sir Francis Galton became the first person known to consider deriving a comprehensive taxonomy of human personality traits by sampling language.[11]The idea that this may be possible is known as the lexical hypothesis. In 1936, American psychologists Gordon Allport of Harvard University and Henry Odbert of Dartmouth College implemented Galton's hypothesis. They organised for three anonymous people to categorise adjectives from Webster's New International Dictionary and a list of common slang words. The result was a list of 4504 adjectives they believed were descriptive of observable and relatively permanent traits.[38]
In 1943, Raymond Cattell of Harvard University took Allport and Odbert's list and reduced it to roughly 160 terms by eliminating words with very similar meanings. To these, he added terms from 22 other psychological categories, and additional "interest" and "abilities" terms. This resulted in a list of 171 traits. From this he used factor analysis to derive 60 "personality clusters or syndromes" and an additional 7 minor clusters.[39]Cattell then narrowed this down to 35 terms, and later added a 36th factor in the form of an IQ measure. Through factor analysis from 1945 to 1948, he created 11- or 12-factor solutions.[40][41][42]
In 1947, Hans Eysenck of University College London published his book Dimensions of Personality. He posited that the two most important personality dimensions were "Extraversion" and "Neuroticism", a term that he coined.[43]
In July 1949, Donald Fiske of the University of Chicago used 22 terms adapted from Cattell's 1947 study and, through surveys of male university students and statistical analysis, derived five factors: "Social Adaptability", "Emotional Control", "Conformity", "Inquiring Intellect", and "Confident Self-expression".[44]In the same year, Cattell, with Maurice Tatsuoka and Herbert Eber, found 4 additional factors, which they believed consisted of information that could only be provided through self-rating. With this understanding, they created the sixteen-factor 16PF Questionnaire.[45][46][47][48][49]
In 1953, John W. French of Educational Testing Service published an extensive meta-analysis of personality trait factor studies.[50]
In 1957, Ernest Tupes of the United States Air Force undertook a personality trait study of US Air Force officers. Each was rated by their peers using Cattell's 35 terms (or in some cases, the 30 most reliable terms).[51][52]In 1958, Tupes and Raymond Christal began a US Air Force study by taking 37 personality factors and other data found in Cattell's 1947 paper, Fiske's 1949 paper, and Tupes' 1957 paper.[53]Through statistical analysis, they derived five factors they labeled "Surgency", "Agreeableness", "Dependability", "Emotional Stability", and "Culture".[54][55]In addition to the influence of Cattell and Fiske's work, they strongly noted the influence of French's 1953 study.[54]Tupes and Christal further tested and explained their 1958 work in a 1961 paper.[56][14]
Warren Norman[57]of the University of Michigan replicated Tupes and Christal's work in 1963. He relabeled "Surgency" as "Extroversion or Surgency", and "Dependability" as "Conscientiousness". He also found four subordinate scales for each factor.[15]Norman's paper was much more widely read than Tupes and Christal's papers had been. Norman's later Oregon Research Institute colleague Lewis Goldberg continued this work.[58]
In the 4th edition of the 16PF Questionnaire released in 1968, 5 "global factors" derived from the 16 factors were identified: "Extraversion", "Independence", "Anxiety", "Self-control" and "Tough-mindedness".[59]16PF advocates have since called these "the original Big 5".[60]
During the 1970s, the changing zeitgeist made publication of personality research difficult. In his 1968 book Personality and Assessment, Walter Mischel asserted that personality instruments could not predict behavior with a correlation of more than 0.3. Social psychologists like Mischel argued that attitudes and behavior were not stable, but varied with the situation, and claimed that predicting behavior from personality instruments was impossible.
In 1978, Paul Costa and Robert McCrae of the National Institutes of Health published a book chapter describing their Neuroticism-Extroversion-Openness (NEO) model. The model was based on the three factors in its name.[61]They used Eysenck's concept of "Extroversion" rather than Carl Jung's.[62]Each factor had six facets. The authors expanded their explanation of the model in subsequent papers.
Also in 1978, British psychologist Peter Saville of Brunel University applied statistical analysis to 16PF results, and determined that the model could be reduced to five factors: "Anxiety", "Extraversion", "Warmth", "Imagination" and "Conscientiousness".[63]
At a 1980 symposium in Honolulu, Lewis Goldberg, Naomi Takemoto-Chock, Andrew Comrey, and John M. Digman reviewed the available personality instruments of the day.[64]In 1981, Digman and Takemoto-Chock of the University of Hawaii reanalysed data from Cattell, Tupes, Norman, Fiske and Digman. They re-affirmed the validity of the five factors, naming them "Friendly Compliance vs. Hostile Non-compliance", "Extraversion vs. Introversion", "Ego Strength vs. Emotional Disorganization", "Will to Achieve" and "Intellect". They also found weak evidence for the existence of a sixth factor, "Culture".[65]
Peter Saville and his team included the five-factor "Pentagon" model as part of the Occupational Personality Questionnaires (OPQ) in 1984. This was the first commercially available Big Five test.[66]Its factors are "Extroversion", "Vigorous", "Methodical", "Emotional Stability", and "Abstract".[67]
This was closely followed by another commercial test, the NEO PI three-factor personality inventory, published by Costa and McCrae in 1985. It used the three NEO factors. The methodology employed in constructing the NEO instruments has since been subject to critical scrutiny.[68]: 431–33
Emerging methodologies increasingly confirmed personality theories during the 1980s. Though personality measures generally failed to predict single instances of behavior, researchers found that they could predict patterns of behavior by aggregating large numbers of observations.[69]As a result, correlations between personality and behavior increased substantially, and it became clear that "personality" did in fact exist.[70]
In 1992, the NEO PI evolved into the NEO PI-R, adding the factors "Agreeableness" and "Conscientiousness",[58]and becoming a Big Five instrument. This set the names for the factors that are now most commonly used. The NEO maintainers call their model the "Five Factor Model" (FFM). Each NEO personality dimension has six subordinate facets.
Wim Hofstee at the University of Groningen used a lexical hypothesis approach with the Dutch language to develop what became the International Personality Item Pool in the 1990s. Further development in Germany and the United States saw the pool based on three languages. Its questions and results have been mapped to various Big Five personality typing models.[71][72]
Kibeom Lee and Michael Ashton released a book describing their HEXACO model in 2004.[73]It adds a sixth factor, "Honesty-Humility", to the five (which it calls "Emotionality", "Extraversion", "Agreeableness", "Conscientiousness", and "Openness to Experience"). Each of these factors has four facets.
In 2007, Colin DeYoung, Lena C. Quilty and Jordan Peterson concluded that the 10 aspects of the Big Five may have distinct biological substrates.[37]This was derived through factor analyses of two data samples with the International Personality Item Pool, followed by cross-correlation with scores derived from 10 genetic factors identified as underlying the shared variance among the Revised NEO Personality Inventory facets.[74]
By 2009, personality and social psychologists generally agreed that both personal and situational variables are needed to account for human behavior.[75]
A FFM-associated test was used by Cambridge Analytica, and was part of the "psychographic profiling"[76]controversy during the 2016 US presidential election.[77][78]
When factor analysis is applied to personality survey data, semantic associations between aspects of personality and specific terms are often applied to the same person. For example, someone described as conscientious is more likely to be described as "always prepared" rather than "messy". These associations suggest five broad dimensions used in common language to describe the human personality, temperament, and psyche.[16][79]
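To make the lexical procedure concrete, here is a minimal sketch in Python (illustrative only: the simulated responses and the use of scikit-learn's FactorAnalysis are assumptions of this example, not the method of any particular study) showing how five latent factors can be extracted from a respondent-by-adjective rating matrix:

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated survey: 1000 respondents rating themselves on 40 adjectives (1-5).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(1000, 40)).astype(float)

# Standardize each item so that factoring operates on correlations.
responses = (responses - responses.mean(axis=0)) / responses.std(axis=0)

# Extract five latent factors; with real data, adjectives describing similar
# behaviour (e.g. "talkative", "sociable") would load on the same factor.
fa = FactorAnalysis(n_components=5, random_state=0).fit(responses)
print(fa.components_.shape)  # (5, 40): loadings of 40 adjectives on 5 factors

With random data the loadings are meaningless; the point is only the shape of the computation, in which each row of the loading matrix corresponds to one candidate personality dimension.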
Beneath each proposed global factor, there are a number of correlated and more specific primary factors. For example, extraversion is typically associated with qualities such as gregariousness, assertiveness, excitement-seeking, warmth, activity, and positive emotions.[80]These traits are not black and white; each one is treated as a spectrum.[81]
Openness to experience is a general appreciation for art, emotion, adventure, unusual ideas, imagination, curiosity, and variety of experience. People who are open to experience are intellectually curious, open to emotion, sensitive to beauty, and willing to try new things. Compared to closed people, they tend to be more creative and more aware of their feelings. They are also more likely to hold unconventional beliefs. Open people can be perceived as unpredictable or lacking focus, and are more likely to engage in risky behaviour or drug-taking.[82]Moreover, individuals with high openness are said to pursue self-actualisation specifically by seeking out intense, euphoric experiences. Conversely, those with low openness seek fulfilment through perseverance and are characterised as pragmatic and data-driven, sometimes even perceived as dogmatic and closed-minded. Some disagreement remains about how to interpret and contextualise the openness factor, as there is a lack of biological support for this particular trait: in brain-imaging studies of volume changes, openness was the only one of the five traits not to show a significant association with any brain region.[83]
Conscientiousness is a tendency to be self-disciplined, act dutifully, and strive for achievement against measures or outside expectations. It is related to people's level of impulse control, regulation, and direction. High conscientiousness is often perceived as being stubborn and focused. Low conscientiousness is associated with flexibility and spontaneity, but can also appear as sloppiness and lack of reliability.[85]High conscientiousness indicates a preference for planned rather than spontaneous behaviour.[86]
Extraversion is characterised by breadth of activities (as opposed to depth), surgency from external activities/situations, and energy creation from external means.[87]The trait is marked by pronounced engagement with the external world. Extraverts enjoy interacting with people, and are often perceived as energetic. They tend to be enthusiastic and action-oriented. They possess high group visibility, like to talk, and assert themselves. Extraverts may appear more dominant in social settings than introverts in the same settings.[88]
Introverts have lower social engagement and energy levels than extraverts. They tend to seem quiet, low-key, deliberate, and less involved in the social world. Their lack of social involvement should not be interpreted as shyness or depression, but as greater independence of their social world than extraverts. Introverts need less stimulation and more time alone than extraverts. This does not mean that they are unfriendly or antisocial; rather, they are aloof and reserved in social situations.[89]
Generally, people are a combination of extraversion and introversion, with personality psychologist Hans Eysenck suggesting a model by which differences in their brains produce these traits.[88]: 106
Agreeableness is the general concern for social harmony. Agreeable individuals value getting along with others. They are generally considerate, kind, generous, trusting and trustworthy, helpful, and willing to compromise their interests with others.[89]Agreeable people also have an optimistic view of human nature. Being agreeable helps people cope with stress.[90]
Disagreeable individuals place self-interest above getting along with others. They are generally unconcerned with others' well-being and are less likely to extend themselves for other people. Sometimes their skepticism about others' motives causes them to be suspicious, unfriendly, and uncooperative.[91]Disagreeable people are often competitive or challenging, which can be seen as argumentative or untrustworthy.[85]
Because agreeableness is a social trait, research has shown that one's agreeableness positively correlates with the quality of relationships with one's team members. Agreeableness also positively predicts transformational leadership skills. In a study conducted among 169 participants in leadership positions in a variety of professions, individuals were asked to take a personality test and be directly evaluated by supervised subordinates. Very agreeable leaders were more likely to be considered transformational rather than transactional. Although the relationship was not strong (r = 0.32, β = 0.28, p < 0.01), it was the strongest of the Big Five traits. However, the same study could not predict leadership effectiveness as evaluated by the leader's direct supervisor.[92]
Conversely, agreeableness has been found to be negatively related to transactional leadership in the military. A study of Asian military units showed that agreeable people are more likely to be poor transactional leaders.[93]Therefore, with further research, organisations may be able to determine an individual's potential for performance based on their personality traits. For instance, in their journal article "Which Personality Attributes Are Most Important in the Workplace?", Paul Sackett and Philip Walmsley claim that conscientiousness and agreeableness are "important to success across many different jobs".[94]
Neuroticism is the tendency to have strong negative emotions, such as anger, anxiety, or depression.[95]It is sometimes called emotional instability, or is reversed and referred to as emotional stability. According to Hans Eysenck's (1967) theory of personality, neuroticism is associated with low tolerance for stress or a strong dislike of change.[96]Neuroticism is a classic temperament trait that had been studied in temperament research for decades before it was adapted by the Five Factor Model.[97]Neurotic people are emotionally reactive and vulnerable to stress. They are more likely to interpret ordinary situations as threatening, and can perceive minor frustrations as hopelessly difficult. Their negative emotional reactions tend to persist for unusually long periods of time, which means they are often in a bad mood. For instance, neuroticism is connected to pessimism toward work, to certainty that work hinders personal relationships, and to higher levels of anxiety from the pressures at work.[98]Furthermore, neurotic people may display more skin-conductance reactivity than calm and composed people.[96][99]These problems in emotional regulation can make a neurotic person think less clearly, make worse decisions, and cope less effectively with stress. Being disappointed with one's life achievements can make one more neurotic and increase one's chances of falling into clinical depression. Moreover, neurotic individuals tend to experience more negative life events,[95][100]but neuroticism also changes in response to positive and negative life experiences.[95][100]Neurotic people also tend to have worse psychological well-being.[101]
At the other end of the scale, less neurotic individuals are less easily upset and are less emotionally reactive. They tend to be calm, emotionally stable, and free from persistent negative feelings. Freedom from negative feelings does not mean that low scorers experience a lot of positive feelings; that is related to extraversion instead.[102]
Neuroticism is similar but not identical to being neurotic in the Freudian sense (i.e., neurosis). Some psychologists prefer to call neuroticism "emotional instability" to differentiate it from the term "neurotic" in a career test.
The factors that influence a personality are called the determinants of personality; they shape the traits a person develops over the course of development from childhood.
There are debates between temperament researchers and personality researchers as to whether biologically based differences define a concept of temperament or a part of personality. The presence of such differences in pre-cultural individuals (such as animals or young infants) suggests that they belong to temperament, since personality is a socio-cultural concept. For this reason, developmental psychologists generally interpret individual differences in children as an expression of temperament rather than personality.[103]Some researchers argue that temperaments and personality traits are age-specific demonstrations of virtually the same internal qualities.[104][105]Some believe that early childhood temperaments may become adolescent and adult personality traits as individuals' basic genetic characteristics interact with their changing environments to various degrees.[103][104][106]
Researchers of adult temperament point out that, like sex, age, and mental illness, temperament is based on biochemical systems, whereas personality is a product of the socialisation of an individual possessing these four types of features. Temperament interacts with socio-cultural factors but, like sex and age, cannot be controlled or easily changed by these factors.[107][108][109][110]It is therefore suggested that temperament (neurochemically based individual differences) should be kept as an independent concept for further studies and not be confused with personality (culturally based individual differences, reflected in the origin of the word "persona" (Latin) as a "social mask").[111][112]
Moreover, temperament refers to dynamic features of behaviour (energetic, tempo, sensitivity, and emotionality-related), whereas personality is to be considered a psycho-social construct comprising the content characteristics of human behaviour (such as values, attitudes, habits, preferences, personal history, and self-image).[108][109][110]Temperament researchers point out that the creators of the Big Five model paid little attention to existing temperament research, leading to an overlap between its dimensions and dimensions described much earlier in multiple temperament models. For example, neuroticism reflects the traditional temperament dimension of emotionality studied by Jerome Kagan's group since the 1960s, and extraversion was first introduced as a temperament type by Jung in the 1920s.[110][113]
A 1996 behavioural genetics study of twins suggested that heritability (the degree of variation in a trait within a population that is due to genetic variation in that population) and environmental factors both influence all five factors to the same degree.[114]Among four twin studies examined in 2003, the mean heritability was calculated for each trait, and it was concluded that heritability influenced the five factors broadly. The self-report estimates were as follows: openness to experience was estimated to have a 57% genetic influence, extraversion 54%, conscientiousness 49%, neuroticism 48%, and agreeableness 42%.[115]
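As a back-of-the-envelope illustration of how such heritability figures can be derived (Falconer's formula is a standard textbook approximation, not necessarily the method used in the cited studies, and the correlations below are invented for the example), heritability is estimated as twice the difference between monozygotic (MZ) and dizygotic (DZ) twin correlations:

h^2 = 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}), \qquad \text{e.g. } r_{\mathrm{MZ}} = 0.50,\ r_{\mathrm{DZ}} = 0.23 \ \Rightarrow\ h^2 = 2(0.50 - 0.23) = 0.54,

i.e. about 54% of trait variance attributed to genetic variation, comparable in magnitude to the self-report estimates quoted above.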
The Big Five personality traits have been assessed in some non-human species, but the methodology is debatable. In one series of studies, human ratings of chimpanzees using the Hominoid Personality Questionnaire revealed factors of extraversion, conscientiousness and agreeableness, as well as an additional factor of dominance, across hundreds of chimpanzees in zoological parks, a large naturalistic sanctuary, and a research laboratory. Neuroticism and openness factors were found in an original zoo sample, but were not replicated in a new zoo sample or in other settings (perhaps reflecting the design of the CPQ).[116]A study review found that markers for the three dimensions extraversion, neuroticism, and agreeableness were found most consistently across different species, followed by openness; only chimpanzees showed markers for conscientious behavior.[117]
A study completed in 2020 concluded that dolphins have some personality traits similar to humans. Both are large-brained, intelligent animals, but they have evolved separately for millions of years.[118]
Research on the Big Five, and on personality in general, has focused primarily on individual differences in adulthood rather than in childhood and adolescence, and has often included temperament traits.[103][104][106]Recently, there has been growing recognition of the need to study child and adolescent personality trait development in order to understand how traits develop and change throughout the lifespan.[119]
Recent studies have begun to explore the developmental origins and trajectories of the Big Five among children and adolescents, especially those that relate to temperament.[103][104][106]Many researchers have sought to distinguish between personality and temperament.[120]Temperament often refers to early behavioral and affective characteristics that are thought to be driven primarily by genes.[120]Models of temperament often include four trait dimensions: surgency/sociability, negative emotionality, persistence/effortful control, and activity level.[120]Some of these differences in temperament are evident at, if not before, birth.[103][104]For example, both parents and researchers recognize that some newborn infants are peaceful and easily soothed while others are comparatively fussy and hard to calm.[104]Unlike temperament, however, many researchers view the development of personality as gradually occurring throughout childhood.[120]Contrary to some researchers who question whether children have stable personality traits, Big Five or otherwise,[121]most researchers contend that there are significant psychological differences between children that are associated with relatively stable, distinct, and salient behavior patterns.[103][104][106]
The structure, manifestations, and development of the Big Five in childhood and adolescence have been studied using a variety of methods, including parent- and teacher-ratings,[122][123][124]preadolescent and adolescent self- and peer-ratings,[125][126][127]and observations of parent-child interactions.[106]Results from these studies support the relative stability of personality traits across the human lifespan, at least from preschool age through adulthood.[104][106][127][128]More specifically, research suggests that four of the Big Five – namely Extraversion, Neuroticism, Conscientiousness, and Agreeableness – reliably describe personality differences in childhood, adolescence, and adulthood.[104][106][127][128]However, some evidence suggests that Openness may not be a fundamental, stable part of childhood personality. Although some researchers have found that Openness in children and adolescents relates to attributes such as creativity, curiosity, imagination, and intellect,[129]many researchers have failed to find distinct individual differences in Openness in childhood and early adolescence.[104][106]Potentially, Openness may (a) manifest in unique, currently unknown ways in childhood or (b) may only manifest as children develop socially and cognitively.[104][106]Other studies have found evidence for all of the Big Five traits in childhood and adolescence as well as two other child-specific traits: Irritability and Activity.[130]Despite these specific differences, the majority of findings suggest that personality traits – particularly Extraversion, Neuroticism, Conscientiousness, and Agreeableness – are evident in childhood and adolescence and are associated with distinct social-emotional patterns of behavior that are largely consistent with adult manifestations of those same personality traits.[104][106][127][128]Some researchers have proposed that youth personality is best described by six trait dimensions: neuroticism, extraversion, openness to experience, agreeableness, conscientiousness, and activity.[131]Despite some preliminary evidence for this "Little Six" model,[120][131]research in this area has been delayed by a lack of available measures.
Previous research has found evidence that most adults become more agreeable and conscientious and less neurotic as they age.[132]This has been referred to as the maturation effect.[105]Many researchers have sought to investigate how trends in adult personality development compare to trends in youth personality development.[131]Two main population-level indices have been important in this area of research: rank-order consistency and mean-level consistency. Rank-order consistency indicates the relative placement of individuals within a group.[133]Mean-level consistency indicates whether groups increase or decrease on certain traits throughout the lifetime.[132]
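The distinction between the two indices is easy to make concrete. A minimal Python sketch (the scores below are invented, and Spearman rank correlation is one common operationalization of rank-order consistency, though published studies vary):

import numpy as np
from scipy.stats import spearmanr

# Hypothetical conscientiousness scores for the same ten people at two ages.
age14 = np.array([3.1, 2.4, 4.0, 3.6, 2.9, 3.3, 4.2, 2.1, 3.8, 3.0])
age18 = np.array([3.4, 2.6, 4.3, 3.9, 3.0, 3.5, 4.5, 2.3, 4.0, 3.2])

# Rank-order consistency: do individuals keep their relative placement?
rho, _ = spearmanr(age14, age18)
print(f"rank-order consistency (Spearman rho): {rho:.2f}")      # 1.00 here

# Mean-level change: does the group as a whole rise or fall on the trait?
print(f"mean-level change: {age18.mean() - age14.mean():+.2f}")  # +0.23 here

Note that the two indices are independent: in this toy example every individual keeps exactly the same rank while the group mean still rises.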
Findings from these studies indicate that, consistent with adult personality trends, youth personality becomes increasingly stable in terms of rank-order throughout childhood.[131]Unlike adult personality research, which indicates that people become more agreeable, conscientious, and emotionally stable with age,[132]some findings in youth personality research have indicated that mean levels of agreeableness, conscientiousness, and openness to experience decline from late childhood to late adolescence.[131]The disruption hypothesis, which proposes that biological, social, and psychological changes experienced during youth result in temporary dips in maturity, has been proposed to explain these findings.[120][131]
In Big Five studies, extraversion has been associated with surgency.[103]Children with high Extraversion are energetic, talkative, social, and dominant with children and adults, whereas children with low extraversion tend to be quiet, calm, inhibited, and submissive to other children and adults.[104]Individual differences in extraversion first manifest in infancy as varying levels of positive emotionality.[134]These differences in turn predict social and physical activity during later childhood and may represent, or be associated with, the behavioral activation system.[103][104]In children, Extraversion/Positive Emotionality includes four sub-traits: three of these (activity, sociability, and shyness) are similar to the previously described traits of temperament;[135][97]the other is dominance.
Many studies of longitudinal data, which correlate people's test scores over time, and cross-sectional data, which compare personality levels across different age groups, show a high degree of stability in personality traits during adulthood, especially for Neuroticism, which is often regarded as a temperament trait,[147]mirroring longitudinal research on the same traits in temperament.[97]Personality has been shown to stabilize in working-age individuals within about four years of starting work. There is also little evidence that adverse life events have any significant impact on the personality of individuals.[148]More recent research and meta-analyses of previous studies, however, indicate that change occurs in all five traits at various points in the lifespan. The new research shows evidence for a maturation effect. On average, levels of agreeableness and conscientiousness typically increase with time, whereas extraversion, neuroticism, and openness tend to decrease.[149]Research has also demonstrated that changes in Big Five personality traits depend on the individual's current stage of development. For example, levels of agreeableness and conscientiousness demonstrate a negative trend during childhood and early adolescence before trending upwards during late adolescence and into adulthood.[119]In addition to these group effects, there are individual differences: different people demonstrate unique patterns of change at all stages of life.[150]
In addition, some research (Fleeson, 2001) suggests that the Big Five should not be conceived of as dichotomies (such as extraversion vs. introversion) but as continua. Each individual has the capacity to move along each dimension as circumstances (social or temporal) change. Someone is therefore not simply on one end of each trait dichotomy but is a blend of both, exhibiting some characteristics more often than others.[151]
Research on personality in older age has suggested that as individuals enter their elder years (79–86), those with lower IQ see a rise in extraversion, but a decline in conscientiousness and physical well-being.[152]
Cross-cultural research has shown some patterns of gender differences on responses to the NEO-PI-R and the Big Five Inventory.[153][154]For example, women consistently report higher Neuroticism, Agreeableness, warmth (an extraversion facet) and openness to feelings, and men often report higher assertiveness (a facet of extraversion) and openness to ideas as assessed by the NEO-PI-R.[155]
A study of gender differences in 55 nations using the Big Five Inventory found that women tended to be somewhat higher than men in neuroticism, extraversion, agreeableness, and conscientiousness. The difference in neuroticism was the most prominent and consistent, with significant differences found in 49 of the 55 nations surveyed.[156]
Gender differences in personality traits are largest in prosperous, healthy, and more gender-egalitarian nations. The explanation for this, as stated by the researchers of a 2001 paper, is that actions by women in individualistic, egalitarian countries are more likely to be attributed to their personality rather than to ascribed gender roles, as they would be within collectivist, traditional countries.[155]
Measured differences in the magnitude of sex differences between more and less developed world regions were driven by differences between men, not women, in these respective regions. That is, men in highly developed world regions were less neurotic, less extraverted, less conscientious and less agreeable compared to men in less developed world regions. Women, on the other hand, tended not to differ in personality traits across regions.[156]
Frank Sulloway argues that firstborns are more conscientious, more socially dominant, less agreeable, and less open to new ideas compared to siblings born later. Large-scale studies using random samples and self-report personality tests, however, have found milder effects than Sulloway claimed, or no significant effects of birth order on personality.[157][158]A study using the Project Talent data, a large-scale representative survey of American high school students with 272,003 eligible participants, found statistically significant but very small effects (the average absolute correlation between birth order and personality was .02) of birth order on personality, such that firstborns were slightly more conscientious, dominant, and agreeable, while also being less neurotic and less sociable.[159]Parental socioeconomic status and participant gender had much larger correlations with personality.
In 2002, the Journal of Psychology published a study exploring the relationship between the five-factor model and Universal-Diverse Orientation (UDO) in counselor trainees (Thompson, Brossart, and Mivielle, 2002). UDO is a social attitude marked by strong awareness and acceptance of both the similarities and the differences among individuals (Miville, Romas, Johnson, and Lon, 2002). The study found that counselor trainees who are more open to the idea of creative expression among individuals (a facet of Openness to Experience, Openness to Aesthetics) are more likely to work with a diverse group of clients and to feel comfortable in their role.[160]
Individual differences in personality traits are widely understood to be conditioned by cultural context.[88]: 189
Research into the Big Five has been pursued in a variety of languages and cultures, such as German,[161]Chinese,[162]and South Asian.[163][164]For example, Thompson has claimed to find the Big Five structure across several cultures using an international English language scale.[165]Cheung, van de Vijver, and Leong (2011) suggest, however, that the Openness factor is particularly unsupported in Asian countries and that a different fifth factor is identified.[166]
Sopagna Eap et al. (2008) found that European-American men scored higher than Asian-American men on extroversion, conscientiousness, and openness, while Asian-American men scored higher than European-American men on neuroticism.[167]Benet-Martínez and Karakitapoglu-Aygün (2003) arrived at similar results.[168]
Recent work has found relationships between Geert Hofstede's cultural factors (Individualism, Power Distance, Masculinity, and Uncertainty Avoidance) and the average Big Five scores in a country.[169]For instance, the degree to which a country values individualism correlates with its average extraversion, whereas people living in cultures that accept large inequalities in their power structures tend to score somewhat higher on conscientiousness.[170][171]
A 2017 study found that countries' average personality trait levels are correlated with their political systems. Countries with higher average trait Openness tended to have more democratic institutions, an association that held even after factoring out other relevant influences such as economic development.[172]
Attempts to replicate the Big Five have succeeded in some countries but not in others. Some research suggests, for instance, that Hungarians do not have a single agreeableness factor.[173]Other researchers have found evidence for agreeableness but not for other factors.[174]
Some diseases cause changes in personality. For example, although gradual memory impairment is the hallmark feature of Alzheimer's disease, a systematic review of personality changes in Alzheimer's disease by Robins Wahlin and Byrne, published in 2011, found systematic and consistent trait changes mapped to the Big Five. The largest change observed was a decrease in Conscientiousness. The next most significant changes were an increase in Neuroticism and a decrease in Extraversion; Openness and Agreeableness also decreased. These changes in personality could assist with early diagnosis.[175]
A study published in 2023 found that the Big Five personality traits may also influence the quality of life experienced by people with Alzheimer's disease and other dementias after diagnosis. In this study, people with dementia who had lower levels of Neuroticism self-reported higher quality of life than those with higher levels of Neuroticism, while those with higher levels of the other four traits self-reported higher quality of life than those with lower levels of these traits. This suggests that, as well as assisting with early diagnosis, the Big Five personality traits could help identify people with dementia who are potentially more vulnerable to adverse outcomes, and could inform personalized care planning and interventions.[176]
As of 2002, there were over fifty published studies relating the FFM to personality disorders.[177]Since that time, quite a number of additional studies have expanded on this research base and provided further empirical support for understanding the DSM personality disorders in terms of the FFM domains.[178]
In her review of the personality disorder literature published in 2007, Lee Anna Clark asserted that "the five-factor model of personality is widely accepted as representing the higher-order structure of both normal and abnormal personality traits".[179]However, other researchers disagree that this model is widely accepted (see the section Critique below) and suggest that it simply replicates early temperament research.[110][180]Noticeably, FFM publications never compare their findings to temperament models even though temperament and mental disorders (especially personality disorders) are thought to be based on the same neurotransmitter imbalances, just to varying degrees.[110][181][182][183]
The five-factor model was claimed to significantly predict all ten personality disorder symptoms and outperform the Minnesota Multiphasic Personality Inventory (MMPI) in the prediction of borderline, avoidant, and dependent personality disorder symptoms.[184]However, most predictions related to an increase in Neuroticism and a decrease in Agreeableness, and therefore did not differentiate between the disorders very well.[185]
Converging evidence from several nationally representative studies has established three classes of mental disorders which are especially common in the general population: depressive disorders (e.g., major depressive disorder (MDD), dysthymic disorder),[187]anxiety disorders (e.g., generalized anxiety disorder (GAD), post-traumatic stress disorder (PTSD), panic disorder, agoraphobia, specific phobia, and social phobia),[187]and substance use disorders (SUDs).[188][189]The Five Factor personality profiles of users of different drugs may differ.[190]For example, the typical profile for heroin users is high Neuroticism and Openness with low Agreeableness and Conscientiousness (N↑, O↑, A↓, C↓), whereas for ecstasy users the high level of N is not expected but E is higher (E↑, O↑, A↓, C↓).[190]
These common mental disorders (CMDs) have been empirically linked to the Big Five personality traits, neuroticism in particular. Numerous studies have found that high neuroticism scores significantly increase one's risk of developing a common mental disorder.[191][192]A large-scale meta-analysis (n > 75,000) examining the relationship between all of the Big Five personality traits and common mental disorders found that low conscientiousness yielded consistently strong effects for each common mental disorder examined (i.e., MDD, dysthymic disorder, GAD, PTSD, panic disorder, agoraphobia, social phobia, specific phobia, and SUD).[193]This finding parallels research on physical health, which has established that conscientiousness is the strongest personality predictor of reduced mortality and is highly negatively correlated with making poor health choices.[194][195]With regard to the other personality domains, the meta-analysis found that all common mental disorders examined were characterized by high neuroticism, most exhibited low extraversion, only SUD was linked to agreeableness (negatively), and no disorders were associated with Openness.[193]A meta-analysis of 59 longitudinal studies showed that high neuroticism predicted the development of anxiety, depression, substance abuse, psychosis, schizophrenia, and non-specific mental distress, even after adjustment for baseline symptoms and psychiatric history.[196]
Five major models have been proposed to explain the nature of the relationship between personality and mental illness. There is currently no single "best model"; each has received at least some empirical support. The models are not mutually exclusive: more than one may be operating for a particular individual, and different mental disorders may be explained by different models.[196][197]
To examine how the Big Five personality traits are related to subjective health outcomes (positive and negative mood, physical symptoms, and general health concern) and objective health conditions (chronic illness, serious illness, and physical injuries), Jasna Hudek-Knezevic and Igor Kardum conducted a study on a sample of 822 healthy volunteers (438 women and 384 men).[201]Of the Big Five personality traits, they found neuroticism most related to worse subjective health outcomes and optimistic control to better subjective health outcomes. For objective health conditions, the associations found were weak, except that neuroticism significantly predicted chronic illness, whereas optimistic control was more closely related to physical injuries caused by accidents.[201]
Being highly conscientious has been associated with as much as five additional years of life.[195]The Big Five personality traits also predict positive health outcomes.[202]In an elderly Japanese sample, conscientiousness, extraversion, and openness were related to lower risk of mortality.[203]
Higher conscientiousness is associated with lower obesity risk. In already obese individuals, higher conscientiousness is associated with a higher likelihood of becoming non-obese over a five-year period.[204]
Personality plays an important role in academic achievement. A study of 308 undergraduates who completed the Five Factor Inventory and the Inventory of Learning Processes and reported their GPA suggested that conscientiousness and agreeableness have a positive relationship with all four learning styles (synthesis-analysis, methodical study, fact retention, and elaborative processing), whereas neuroticism shows an inverse relationship with all four. Moreover, extraversion and openness were positively related to elaborative processing. The Big Five personality traits accounted for 14% of the variance in GPA, and learning styles for an additional 3%, suggesting that both personality traits and learning styles contribute to academic performance. Furthermore, reflective learning styles (synthesis-analysis and elaborative processing) mediated the relationship between openness and GPA. These results indicate that intellectual curiosity significantly enhances academic performance if students combine their scholarly interest with thoughtful information processing.[205]
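The "14% of the variance" figure corresponds to the R² of a regression of GPA on the five trait scores. A hedged sketch of that computation in Python (the data below are simulated to mimic an effect of roughly that size; they are not the study's data):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 308                                    # same sample size as the study
traits = rng.normal(size=(n, 5))           # simulated O, C, E, A, N scores
# GPA driven weakly by conscientiousness (column 1) plus noise.
gpa = 3.0 + 0.15 * traits[:, 1] + rng.normal(scale=0.4, size=n)

model = LinearRegression().fit(traits, gpa)
print(f"R^2 = {model.score(traits, gpa):.2f}")  # share of GPA variance explained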
A recent study of Israeli high-school students found that those in the gifted program systematically scored higher on openness and lower on neuroticism than those not in the gifted program. While not a measure of the Big Five, gifted students also reported less state anxiety than students not in the gifted program.[206]Specific Big Five personality traits predict learning styles in addition to academic success.
Studies conducted on college students have concluded that hope, which is linked to agreeableness,[207]conscientiousness, neuroticism, and openness,[207]has a positive effect on psychological well-being. Individuals high in neurotic tendencies are less likely to display hopeful tendencies, and neuroticism is negatively associated with well-being.[208]Personality can sometimes be flexible, and measuring the Big Five traits of individuals as they enter certain stages of life may predict their educational identity. Recent studies suggest that an individual's personality can affect their educational identity.[209]
Learning styles have been described as "enduring ways of thinking and processing information".[205]
In 2008, the Association for Psychological Science (APS) commissioned a report concluding that no significant evidence exists that learning-style assessments should be included in the education system.[210]Thus it is premature, at best, to conclude that the evidence links the Big Five to "learning styles", or "learning styles" to learning itself.
However, the APS report also suggested that not all possible learning styles have been examined and that there could exist learning styles worthy of being included in educational practices. There are studies that conclude that personality and thinking styles may be intertwined in ways that link thinking styles to the Big Five personality traits.[211]There is no general consensus on the number or specification of particular learning styles, but there have been many different proposals.
As one example, Schmeck, Ribich, and Ramanaiah (1997) defined four types of learning styles:[212]
When all four facets are present in the classroom, each is likely to improve academic achievement.[205]By identifying learning strategies in individuals, learning and academic achievement can be improved, and a deeper understanding of information processing can be gained.[213]This model asserts that students develop either agentic/shallow processing or reflective/deep processing. Deep processors are more often found to be more conscientious, intellectually open, and extraverted than shallow processors. Deep processing is associated with appropriate study methods (methodical study) and a stronger ability to analyze information (synthesis analysis), whereas shallow processors prefer structured fact-retention learning styles and are better suited for elaborative processing.[205]The main functions of these four specific learning styles are as follows:
Openness has been linked to learning styles that often lead to academic success and higher grades, such as synthesis analysis and methodical study. Because conscientiousness and openness have been shown to predict all four learning styles, this suggests that individuals who possess characteristics like discipline, determination, and curiosity are more likely to engage in all of the above learning styles.[205]
According to research carried out by Komarraju, Karau, Schmeck & Avdic (2011), conscientiousness and agreeableness are positively related to all four learning styles, whereas neuroticism is negatively related to all four. Furthermore, extraversion and openness were positively related only to elaborative processing, and openness itself correlated with higher academic achievement.[205]
In addition, a previous study by psychologist Mikael Jensen has shown relationships between the Big Five personality traits, learning, and academic achievement. According to Jensen, all personality traits except neuroticism are associated with learning goals and motivation. Openness and conscientiousness incline individuals to learn to a high degree even when the learning goes unrecognized, while extraversion and agreeableness have similar effects.[214]Conscientiousness and neuroticism also incline individuals to perform well in front of others for a sense of credit and reward, while agreeableness leads individuals to avoid this strategy of learning.[214]Jensen's study concludes that individuals who score high on the agreeableness trait will likely learn just to perform well in front of others.[214]
Besides openness, all Big Five personality traits helped predict the educational identity of students. Based on these findings, scientists are beginning to see that the Big Five traits might have a large influence on academic motivation, which in turn helps predict a student's academic performance.[209]
Some authors have suggested that Big Five personality traits combined with learning styles can help predict some of the variation in the academic performance and academic motivation of an individual, which can then influence their academic achievement.[215]This may be because individual differences in personality represent stable approaches to information processing. For instance, conscientiousness has consistently emerged as a stable predictor of success in exam performance, largely because conscientious students experience fewer study delays.[209]Conscientiousness shows a positive association with the four learning styles because students with high levels of conscientiousness develop focused learning strategies and appear to be more disciplined and achievement-oriented.
When the relationship between the five-factor personality traits and academic achievement in distance education settings was examined, the openness personality trait was found to be the most important variable with a positive relationship to academic achievement in distance education environments. In addition, the self-discipline, extraversion, and adaptability personality traits were generally found to be positively related to academic achievement, while the most important personality trait with a negative relationship to academic achievement emerged as neuroticism. The results generally show that individuals who are organized, planned, and determined, and who are oriented toward new ideas and independent thinking, have increased success in distance education environments. On the other hand, individuals with tendencies toward anxiety and stress generally have lower academic success.[216][217][218]
Researchers have long suggested that work is more likely to be fulfilling to the individual and beneficial to society when there is alignment between the person and their occupation.[219]For instance, software programmers and scientists often rank high on Openness to experience and tend to be intellectually curious, think in symbols and abstractions, and find repetition boring.[220]Psychologists and sociologists rank higher on Agreeableness and Openness than economists and jurists.[221]
It is believed that the Big Five traits are predictors of future performance outcomes to varying degrees. Specific facets of the Big Five traits are also thought to be indicators of success in the workplace, and each individual facet can give a more precise indication of the nature of a person. Different facets are needed for different occupations, and various facets of the Big Five traits can predict people's success in different environments. For example, a person's estimated level of success in a job requiring public speaking will differ from their estimated success in one-on-one interactions, depending on which facets that person possesses.[36]
Job outcome measures include job and training proficiency and personnel data.[222]However, research demonstrating such prediction has been criticized, in part because of the apparently low correlation coefficients characterizing the relationship between personality and job performance. A 2007 article states: "The problem with personality tests is ... that the validity of personality measures as predictors of job performance is often disappointingly low. The argument for using personality tests to predict performance does not strike me as convincing in the first place."[223]
Such criticisms were put forward by Walter Mischel,[224]whose publication caused a two-decade-long crisis in personality psychometrics. However, later work demonstrated that the correlations obtained by psychometric personality researchers were actually very respectable by comparative standards,[225]and that the economic value of even incremental increases in prediction accuracy is exceptionally large, given the vast differences in performance among those who occupy complex job positions.[226]
Research has suggested that individuals who are considered leaders typically exhibit lower levels of neuroticism, higher levels of openness, and balanced levels of conscientiousness and extraversion.[227][228][229]Further studies have linked professional burnout to neuroticism, and extraversion to enduring positive work experience.[230]Studies have linked national innovation, leadership, and ideation to openness to experience and conscientiousness.[231]Occupational self-efficacy has also been shown to be positively correlated with conscientiousness and negatively correlated with neuroticism.[228]Some research has also suggested that the conscientiousness of a supervisor is positively associated with an employee's perception of abusive supervision.[232]Others have suggested that low agreeableness and high neuroticism are traits more related to abusive supervision.[233]
Openness is positively related to proactivity at the individual and the organizational levels but negatively related to team and organizational proficiency; these effects were found to be completely independent of one another. Openness also shows a negative correlation with Conscientiousness.[234]
Agreeableness is negatively related to individual task proactivity, and is typically associated with lower career success and poorer ability to cope with conflict. However, the trait also carries benefits, including higher subjective well-being, more positive interpersonal interactions and helping behavior, and lower conflict, deviance, and turnover.[234]Furthermore, attributes related to Agreeableness are important for workforce readiness across a variety of occupations and performance criteria.[94]Research has suggested that those who are high in agreeableness are less successful at accumulating income.[235]
Extraversion is associated with greater leadership emergence and effectiveness, as well as higher job and life satisfaction. However, extraversion can also lead to more impulsive behavior, more accidents, and lower performance in certain jobs.[234]
Conscientiousness is highly predictive of job performance in general[94]and is positively related to all forms of work role performance, including job satisfaction, greater leadership effectiveness, and lower turnover and deviant behavior. However, the trait is associated with reduced adaptability, slower learning in the initial stages of skill acquisition, and, when combined with low agreeableness, greater interpersonal abrasiveness.[234]Nor is more or extreme conscientiousness always necessarily better: there appears to be a link between conscientiousness and obsessive-compulsive personality disorder (OCPD), so selecting employees for a moderate level of conscientiousness may actually provide the best occupational outcome.[236]
Neuroticism is negatively related to all forms of work role performance and increases the chance of engaging in risky behaviors.[237][234]
Two theories have been integrated in an attempt to account for these differences in work role performance. Trait activation theory posits that trait levels predict future behavior, that trait levels differ between people, and that work-related cues activate traits, which leads to work-relevant behaviors. Role theory suggests that role senders provide cues to elicit desired behaviors: role senders give workers cues for expected behaviors, which in turn activate personality traits and work-relevant behaviors. In essence, the expectations of the role sender lead to different behavioral outcomes depending on the trait levels of individual workers, and because people differ in trait levels, responses to these cues will not be universal.[237]
Since 2020, remote work has become increasingly prevalent, brought on by the COVID-19 pandemic, and research has shown that the Big Five personality traits influence remote work as well. Gavoille and Hazans found that conscientiousness (β=0.06) and openness to experience (β=0.021) are both positively correlated with willingness to work remotely and with worker productivity in a remote setting, with openness the weaker predictor, whereas extraversion (β=-0.038) correlates negatively with willingness to work remotely. They also found that gender did not play a role in the relationships of conscientiousness and extraversion with willingness to work from home.[238]Similarly, Wright investigated the influence of the Big Five on soft skills in the remote workplace, such as effort and cooperation. She divided soft skills into two groups, Task Performance and Contextual Performance, each with three subgroups. Task Performance concerns specific job responsibilities and the cognitive tasks associated with the job; its subgroups were Job Knowledge, Organizational Skills, and Efficiency. Wright found that Job Knowledge did not correlate with any Big Five trait, that Organizational Skill correlated significantly only with conscientiousness (t=7.952, p=.001), and that Efficiency correlated significantly and positively with conscientiousness (t=3.8, p=.001) and negatively with neuroticism (t=-2.6, p=.008). Contextual Performance concerns non-core job requirements, such as perceived effort and cooperation; its subgroups were Persistent Effort, Cooperation, and Organizational Conscientiousness. Wright found that Persistent Effort correlated positively with openness (t=2.4, p=.014) and conscientiousness (t=3.1, p=.002) and negatively with neuroticism (t=-3.2, p=.001), that Cooperation correlated positively with extraversion (t=2.6, p=.009) and conscientiousness (t=2.82, p=.005), and that Organizational Conscientiousness correlated positively with agreeableness (t=4.059, p<.001) and conscientiousness (t=4.511, p<.001).[239]
Researchers have also examined whether the Big Five affect remote-worker burnout, health, and engagement. Olsen et al. found that when remote work days are increased, individuals high in extraversion start to struggle with work engagement (β=-.094, p<.03), and individuals higher in neuroticism are more likely to have poorer health (-.23), lower work engagement (-.18), and more sick leave (.38).[240]However, Olsen also found that conscientiousness, coupled with an increase in remote work days, can lead to a decrease in general health, in contrast to the benefits listed above. Similarly, Para et al. found that individuals higher in neuroticism (β=.138, p<.05) tend to have higher remote work exhaustion (RWE), while conscientiousness (β=-.336, p<.001) and agreeableness (β=-.267, p<.001) were negatively correlated with RWE, meaning such individuals were more resilient against RWE over long spans of remote work days.[241]The authors attributed the resilience of conscientious individuals to their being hard-working and dependable, and that of agreeable individuals to the circumstances under which the study was conducted, the at-home quarantine due to COVID-19: individuals high in agreeableness did well with the forced contact of quarantine, and this transferred over to their work.
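Coefficients of the kind quoted above (a β per trait with a t statistic and p value) come from an ordinary least-squares regression of the outcome on the trait scores. A minimal sketch with statsmodels, on simulated data whose signs merely mimic the reported pattern (all numbers here are illustrative, not the studies' data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500  # hypothetical sample size

# Hypothetical standardized trait scores:
# columns are neuroticism, conscientiousness, agreeableness.
traits = rng.normal(size=(n, 3))

# Simulated remote-work exhaustion with signs mimicking the reported pattern.
rwe = (0.14 * traits[:, 0] - 0.34 * traits[:, 1]
       - 0.27 * traits[:, 2] + rng.normal(size=n))

fit = sm.OLS(rwe, sm.add_constant(traits)).fit()
print(fit.params[1:])   # slope estimates (the reported betas)
print(fit.tvalues[1:])  # t statistics
print(fit.pvalues[1:])  # p values
```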
Various researchers have explored the association between the Big Five and relationship satisfaction in romantic relationships.[242][243][244]A meta-analysis found higher marital satisfaction when a spouse scored lower in neuroticism (.22) and higher in agreeableness (.15) and conscientiousness (.12). The correlations were weak, but the pattern held equally for both genders. Much like that meta-analysis, a study of self-reported Big Five traits showed that those with higher levels of agreeableness, emotional stability, conscientiousness, and extraversion had higher marital satisfaction (.20). The same study found little to no difference in marital satisfaction whether the two partners had similar or different levels of the traits.[245]
O'Brien and colleagues[246]examined the association between the Big Five and romantic relationships by investigating participants' commitment levels. The three levels of commitment are affective commitment (emotional attachment), continuance commitment (financial considerations), and normative commitment (ethical and moral responsibilities). The commitment levels were based on the taxonomy of organizational commitment[247]and the conceptual models of marital commitment of Johnson[248]and Johnson et al.[249]A sample of 122 individuals currently in a committed relationship responded to a 50-item personality questionnaire from the International Personality Item Pool (IPIP, 2006) and a commitment questionnaire modified from Allen.[247]The key findings showed that participants high in Extraversion reported high levels of affective commitment, and that those high in Extraversion were also higher in Openness to Experience. Conscientiousness demonstrated a negative relationship with continuance commitment. While Extraversion and Agreeableness were positively correlated with each other, no significant relationships were found between Agreeableness and any of the commitment measures. The findings also indicated gender differences: women with lower levels of Openness to Experience were often paired with partners who scored higher in Extraversion, men who exhibited strong affective commitment were more likely to be in relationships with women high in Conscientiousness, and women whose partners showed high affective commitment tended to be higher in both Conscientiousness and Emotional Stability.
Asselmann and Sprecht[250]examined the association between the Big Five (BFI-S) and romantic relationships through major life events, using waves from 2005, 2009, 2013, and 2017 with a sample of 49,932 participants in Germany. The major life events considered were (1) moving in with a partner, (2) getting married, (3) separating, and (4) getting divorced. Among the components of a person's life satisfaction, marital satisfaction has been shown to matter more than job, health, or social satisfaction.[251]The key findings showed that more extraverted individuals, and less agreeable and less emotionally stable women, were more likely to move in with a partner. Men were more extraverted in the years before moving in and became gradually more open and more conscientious after moving in. Less agreeable men were more likely to get married, and individuals who married became less open in the first three years after the marriage. Women became more extraverted after separating; men with lower emotional stability, and women who were both less emotionally stable and more extraverted, were more prone to relationship breakups. Individuals who divorced were less agreeable in the years before the divorce, and personality may change after such events: both men and women who experienced separation or divorce became less emotionally stable in the following years. The results implied that high agreeableness is no guarantee of a long-lasting romantic relationship, as less agreeable individuals were more likely to experience both positive and negative major romantic events.[250]Entering a long-term romantic relationship can also kick-start personality development in young adults aged 20–30 as they face new social situations and expectations: high levels of trait neuroticism at the beginning of a relationship have been observed to decrease over the first eight years, while other Big Five traits, such as Conscientiousness and Agreeableness, increase in long-term relationships.[252]
The Big Five model also has applications in political psychology, where studies have linked the traits to political identification. Several studies have found that individuals who score high in Conscientiousness are more likely to hold a right-wing political identification.[253][254][255]At the opposite end of the spectrum, a strong correlation has been identified between high scores in Openness to Experience and a left-leaning ideology.[253][256][257]While agreeableness, extraversion, and neuroticism have not been consistently linked to either conservative or liberal ideology, with studies producing mixed results, these traits are promising for analyzing the strength of an individual's party identification.[256][257]However, correlations between the Big Five and political beliefs, while present, tend to be small, with one study finding correlations ranging from 0.14 to 0.24.[258]
The predictive effects of the Big Five personality traits relate mostly to social functioning and rules-driven behavior and are not very specific for the prediction of particular aspects of behavior. For example, temperament researchers have noted that high neuroticism precedes the development of all common mental disorders[196]and is not associated with personality.[111]Further evidence is required to fully uncover the nature of, and differences between, personality traits, temperament, and life outcomes. Social and contextual parameters also play a role in outcomes, and the interaction between the two is not yet fully understood.[259]
Though the effect sizes are small, high Agreeableness, Conscientiousness, and Extraversion among the Big Five personality traits relate to general religiosity, while Openness relates negatively to religious fundamentalism and positively to spirituality. High Neuroticism may be related to extrinsic religiosity, whereas intrinsic religiosity and spirituality reflect Emotional Stability.[260]
Several measures of the Big Five exist.
The most frequently used measures of the Big Five comprise either items that are self-descriptive sentences[174]or, in the case of lexical measures, items that are single adjectives.[2]Due to the length of sentence-based and some lexical measures, short forms have been developed and validated for use in applied research settings where questionnaire space and respondent time are limited, such as the 40-item balanced International English Big-Five Mini-Markers[165]or a very brief (10-item) measure of the Big Five domains.[262]Research has suggested that some methodologies for administering personality tests are inadequate in length and provide insufficient detail to truly evaluate personality; usually, longer, more detailed questions give a more accurate portrayal of personality.[265]At the same time, shorter questionnaires may be sufficient to obtain a reasonable estimate of Big Five scores when the questions are carefully selected and statistical imputation is used.[266]The five-factor structure has been replicated in peer reports,[267]though many of the substantive findings rely on self-reports.
Much of the evidence on measures of the Big Five relies on self-report questionnaires, which makes self-report bias and falsification of responses difficult to deal with and account for.[263]It has been argued that Big Five tests do not create an accurate personality profile because the responses given are not true in all cases and can be falsified.[268]For example, questionnaires answered by potential employees might be slanted toward answers that paint the respondent in the best light.[269]
Research suggests that a relative-scored Big Five measure, in which respondents must make repeated choices between equally desirable personality descriptors, may be a viable alternative to traditional Big Five measures for accurately assessing personality traits, especially when lying or biased responding is present.[264]When compared with a traditional Big Five measure for its ability to predict GPA and creative achievement under both normal and "fake good" response conditions, the relative-scored measure significantly and consistently predicted these outcomes under both conditions, whereas the Likert questionnaire lost its predictive ability in the faking condition. The relative-scored measure thus proved less affected by biased responding than the Likert measure of the Big Five.
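Scoring such a forced-choice (relative-scored) measure is a matter of tallying, across item pairs, how often the respondent picks each trait's descriptor; the resulting scores are ipsative, summing to the number of items, which is what blunts uniform "fake good" inflation. A toy sketch, with invented item pairings:

```python
from collections import Counter

# Hypothetical forced-choice items: each pairs descriptors from two traits,
# and the respondent must pick the one that describes them better.
items = [
    ("conscientiousness", "extraversion"),
    ("openness", "agreeableness"),
    ("extraversion", "neuroticism"),
    ("agreeableness", "conscientiousness"),
    ("neuroticism", "openness"),
]

def score(choices):
    """Relative (ipsative) scores: how often each trait's descriptor was
    chosen. choices[i] is 0 or 1, indexing the pick within items[i]."""
    tally = Counter()
    for (first, second), pick in zip(items, choices):
        tally[(first, second)[pick]] += 1
    return tally

print(score([0, 0, 0, 1, 1]))  # one respondent's picks
```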
Andrew H. Schwartz analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age.[270]
The proposed Big Five model has been subjected to considerable critical scrutiny in a number of published studies.[271][272][273][274][275][276][68][277][111]One prominent critic of the model has been Jack Block at the University of California, Berkeley. In response to Block, the model was defended in a paper published by Costa and McCrae.[278]This was followed by a number of published critical replies from Block.[279][280][8]
It has been argued that there are limitations to the scope of the Big Five model as an explanatory or predictive theory.[68][277]It has also been argued that measures of the Big Five account for only 56% of the normal personality trait sphere (without even considering the abnormal personality trait sphere).[68]Moreover, the static Big Five[281]is not theory-driven; it is merely a statistically driven investigation of certain descriptors that tend to cluster together, often based on less-than-optimal factor analytic procedures.[68]: 431–33[111]Measures of the Big Five constructs appear to show some consistency across interviews, self-descriptions, and observations, and this static five-factor structure seems to be found across a wide range of participants of different ages and cultures.[282]However, while genotypic temperament trait dimensions might appear across different cultures, the phenotypic expression of personality traits differs profoundly across cultures as a function of the different socio-cultural conditioning and experiential learning that takes place within different cultural settings.[283]
Moreover, the fact that the Big Five model was based on the lexical hypothesis (i.e. on verbal descriptors of individual differences) points to strong methodological flaws in the model, especially in its largest factors, Extraversion and Neuroticism. First, there is a natural pro-social bias in people's verbal evaluations: language is an invention of group dynamics, developed to facilitate socialization and the exchange of information and to synchronize group activity. This social function of language creates a sociability bias in verbal descriptors of human behavior: there are more words for social than for physical or even mental aspects of behavior. The sheer number of such descriptors causes them to group into the largest factor in any language, and this grouping has nothing to do with how core systems of individual differences are actually organized. Second, there is a negativity bias in emotionality (most emotions have negative affectivity), and languages contain more words for negative than for positive emotions. This asymmetry in emotional valence creates another bias in language. Experiments using the lexical approach have indeed demonstrated that lexical material skews the resulting dimensionality according to the sociability bias of language and the negativity bias of emotionality, grouping all evaluations around these two dimensions.[275]This means that the two largest dimensions of the Big Five model might be merely an artifact of the lexical approach the model employed.
One common criticism is that the Big Five does not explain all of human personality. Some psychologists have dissented from the model precisely because they feel it neglects other domains of personality, such as religiosity, manipulativeness/machiavellianism, honesty, sexiness/seductiveness, thriftiness, conservativeness, masculinity/femininity, snobbishness/egotism, sense of humour, and risk-taking/thrill-seeking.[276][284]Dan P. McAdams has called the Big Five a "psychology of the stranger", because they refer to traits that are relatively easy to observe in a stranger; other aspects of personality that are more privately held or more context-dependent are excluded from the Big Five.[285]Block has pointed to several less-recognized but successful efforts to specify aspects of character not subsumed by the model.[8]
There is debate over what counts as personality and what does not, and the nature of the questions in a survey greatly influences the outcome. Multiple particularly broad question databases have failed to produce the Big Five as the top five traits.[286]
In many studies, the five factors are not fully orthogonal to one another; that is, the five factors are not independent.[287][288]Orthogonality is viewed as desirable by some researchers because it minimizes redundancy between the dimensions. This is particularly important when the goal of a study is to provide a comprehensive description of personality with as few variables as possible.
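The orthogonality question is easy to probe empirically: correlate respondents' factor scores and inspect the off-diagonal entries, which would all be near zero for truly independent factors. A minimal sketch on synthetic scores in which two factors are deliberately made to overlap:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical factor scores: one column per Big Five dimension.
scores = rng.normal(size=(1000, 5))
scores[:, 1] += 0.4 * scores[:, 0]  # induce overlap between two factors

r = np.corrcoef(scores, rowvar=False)  # 5 x 5 inter-factor correlations
off_diag = r[~np.eye(5, dtype=bool)]
print(np.abs(off_diag).max())  # near 0 would indicate orthogonal factors
```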
The model is inappropriate for studying early childhood, as language is not yet developed.[8]
Factor analysis, the statistical method used to identify the dimensional structure of observed variables, lacks a universally recognized basis for choosing among solutions with different numbers of factors.[3]A five-factor solution depends on some degree of interpretation by the analyst, and a larger number of factors may underlie these five. This has led to disputes about the "true" number of factors. Big Five proponents have responded that although other solutions may be viable in a single data set, only the five-factor structure consistently replicates across different studies.[289]Block argues that the use of factor analysis as the exclusive paradigm for conceptualizing personality is too limited.[8]
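The indeterminacy can be illustrated with the simplest retention heuristic, the Kaiser criterion (keep factors whose eigenvalue exceeds 1); scree inspection or parallel analysis applied to the same data can suggest a different count, which is precisely the disputed point. A sketch on random data, where the Kaiser rule alone typically retains far too many factors:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(500, 20))  # hypothetical item responses

corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain factors with eigenvalue > 1. Even on pure noise
# this "retains" several factors, one reason no single rule is decisive.
print((eigenvalues > 1).sum())
```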
Surveys in studies are often online surveys of college students (compare WEIRD bias), and results do not always replicate when run on other populations or in other languages.[290]It is also not clear that different surveys measure the same five factors.[8]
Moreover, the factor analysis on which this model is based is a linear method, incapable of capturing the nonlinear, feedback, and contingent relationships between core systems of individual differences.[275]
https://en.wikipedia.org/wiki/Five_Factor_Model
Personalization (broadly known as customization) consists of tailoring a service or product to accommodate specific individuals. It is sometimes tied to groups or segments of individuals. Personalization involves collecting data on individuals, including web browsing history, web cookies, and location. Various organizations use personalization (along with the opposite mechanism of popularization[1]) to improve customer satisfaction, digital sales conversion, marketing results, branding, and website metrics, as well as for advertising. Personalization acts as a key element in social media[2]and recommender systems, and it influences every sector of society, be it work, leisure, or citizenship.
The idea of personalization is rooted in ancient rhetoric as part of the practice of an agent or communicator being responsive to the needs of the audience. When industrialization influenced the rise of mass communication, the practice of message personalization diminished for a time.
In recent times, there has been a significant increase in the number of mass media outlets that use advertising as a primary revenue stream. These companies gain knowledge about the specific demographic and psychographic characteristics of their readers and viewers,[3]and use this information to personalize the audience's experience and thereby draw customers in with entertainment and information that interests them.
Another aspect of personalization is the increasing relevance of open data on the Internet. Many organizations make their data available on the Internet via APIs, web services, and open data standards. One such example is Ordnance Survey Open Data.[4]Data made available in this way is structured so that it can be inter-connected and reused by third parties.[5]
Data available from a user's social graph may be accessed by third-party application software so that it fits the personalized web page or information appliance.
Several open data standards are in current use on the Internet.
Web pages can be personalized based on their users' characteristics (interests, social category, context, etc.), actions (clicking a button, opening a link, etc.), intents (making a purchase, checking the status of an entity), or any other parameter that is prevalent and associated with an individual, providing a tailored user experience. The experience is not just an accommodation of the user but a relationship between the user and the desires of the site designers in driving specific actions to attain objectives (e.g. increasing sales conversion on a page). The term customization is often used when the site uses only explicit data, such as product ratings or user preferences.
Technically, web personalization can be accomplished by associating a visitor segment with a predefined action; customizing the user experience based on behavioral, contextual, and technical data has been shown to have a positive impact on conversion-rate optimization efforts. Associated actions can be anything from changing the content of a webpage, presenting a modal display, presenting interstitials, or triggering a personalized email, to automating a phone call to the user.
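In code, the segment-to-action association is often just a rule table: classify the visitor from behavioral and contextual data, then look up the predefined action. A minimal sketch; the segment names, thresholds, and action identifiers are all invented for illustration:

```python
# Hypothetical rule table mapping visitor segments to predefined actions.
ACTIONS = {
    "returning_cart_abandoner": "show_discount_modal",
    "first_time_mobile": "show_app_banner",
    "logged_in_high_value": "trigger_personalized_email",
}

def segment(visitor: dict) -> str:
    """Assign a visitor to a segment from behavioral/contextual data."""
    if visitor.get("abandoned_cart") and visitor.get("visits", 0) > 1:
        return "returning_cart_abandoner"
    if visitor.get("device") == "mobile" and visitor.get("visits", 0) <= 1:
        return "first_time_mobile"
    if visitor.get("logged_in") and visitor.get("lifetime_value", 0) > 500:
        return "logged_in_high_value"
    return "default"

visitor = {"device": "mobile", "visits": 1}
print(ACTIONS.get(segment(visitor), "show_default_page"))
```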
According to a 2014 study by the research firm Econsultancy, less than 30% of e-commerce websites had invested in web personalization. However, many companies now offer services for web personalization, as well as web and email recommendation systems based on personalization or on anonymously collected user behavior.[6]
Web personalization falls into many categories.
There are several schools of thought on defining and executing web personalization. A few broad methods include the following:
With implicit personalization, personalization is performed based on data learned from indirect observations of the user. This data can be, for example, items purchased on other sites or pages viewed.[7]With explicit personalization, the web page (or information system) is changed by the user using the features provided by the system. Hybrid personalization combines the above two approaches to leverage both explicit user actions on the system and implicit data.
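A hybrid personalizer can be as simple as a weighted blend of an explicit preference signal with a normalized implicit one. A sketch under that assumption (the 0.7 weighting and the signals themselves are illustrative):

```python
def hybrid_scores(explicit_prefs, implicit_views, weight_explicit=0.7):
    """Blend explicit ratings with implicit view counts into one ranking.

    explicit_prefs: {item: rating in [0, 1]} set directly by the user.
    implicit_views: {item: view count} observed from behavior.
    """
    max_views = max(implicit_views.values(), default=1)
    items = set(explicit_prefs) | set(implicit_views)
    return {
        item: weight_explicit * explicit_prefs.get(item, 0.0)
        + (1 - weight_explicit) * implicit_views.get(item, 0) / max_views
        for item in items
    }

prefs = {"sports": 0.9}             # explicit: the user pinned this section
views = {"tech": 12, "sports": 3}   # implicit: observed page views
print(sorted(hybrid_scores(prefs, views).items(), key=lambda kv: -kv[1]))
```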
Web personalization can be linked to the notion of adaptive hypermedia (AH). The main difference is that the former usually works on what is considered "open corpus hypermedia", while the latter traditionally works on "closed corpus hypermedia". However, recent research directions in the AH domain take both closed and open corpus into account, making the two fields very inter-related.
Personalization is also being considered for use in less open commercial applications to improve the online user experience. Internet activist Eli Pariser has documented personalized search, where Google and Yahoo! News give different results to different people (even when logged out). He also points out that the social media site Facebook changes users' friend feeds based on what it thinks they want to see, creating a clear filter bubble.
Websites use a visitor's location data to adjust content, design, and overall functionality.[8]On an intranet or B2E enterprise web portal, personalization is often based on user attributes such as department, functional area, or role. The term "customization" in this context refers to the ability of users to modify the page layout or specify what content should be displayed.
Digital web maps are also being personalized. Google Maps changes the content of the map based on previous searches and profile information.[9]Technology writer Evgeny Morozov has criticized map personalization as a threat to public space.[10]
Over time, mobile phones have seen increased attention placed on user personalization. Far from the black-and-white screens and monophonic ringtones of the past, smartphones now offer interactive wallpapers and MP3 truetones. In the UK and Asia, WeeMees have become popular: WeeMees are 3D characters used as wallpaper that respond to the tendencies of the user. Video Graphics Array (VGA) picture quality allows people to change their background without hassle and without sacrificing quality. All of these services are downloaded from the provider with the goal of making users feel connected and enhancing their experience while using the phone.[11]
In print media, ranging from magazines to promotional publications, personalization uses databases of individual recipients' information. Not only does the written document address the reader by name, but the advertising is targeted to the recipient's demographics or interests using fields within the database or list,[12]such as "first name", "last name", "company", etc.
The term "personalization" should not be confused with variable data, which is a much more detailed method of marketing that leverages both images and text with the medium, not just fields within a database. Personalized children's books are created by companies who are using and leveraging all the strengths ofvariable data printing (VDP). This allows for full image and text variability within a printed book. With the rise of online 3D printing services including Shapeways and Ponoko, personalization is becoming present in the world of product design.
Promotional items (mugs, T-shirts, keychains, balls, and more) are personalized on a huge scale. Personalized children's storybooks, wherein the child becomes the protagonist, with the name and image of the child personalized, are extremely popular. Personalized CDs for children are also on the market. With the advent of digital printing, personalized calendars that start in any month, birthday cards, greeting cards, e-cards, posters, and photo books can also be easily obtained.
3D printing is a production method that allows unique and personalized items to be created on a global scale. Personalized apparel and accessories, such as jewellery, are increasing in popularity.[13]This kind of customization is also relevant in other areas such as consumer electronics[14]and retail.[15]By combining 3D printing with complex software, a product can easily be customized by the end user.
Mass personalization is custom tailoring by a company in accordance with its end users' tastes and preferences.[16]From a collaborative engineering perspective, mass customization can be viewed as collaborative efforts between customers and manufacturers, who have different sets of priorities and need to jointly search for solutions that best match customers' individual specific needs with manufacturers' customization capabilities.[17][18]The main difference between mass customization and mass personalization is that customization is the ability of a company to allow its customers to create and choose a product which, within limits, adheres to their personal specifications.[19]
For example, a website aware of its user's location and buying habits will offer suggestions tailored to their demographics. Each user is classified by some relevant trait, such as location or age, and then given personalization aimed at that group. The personalization is thus not individual to the singular user; it merely pinpoints a specific trait that matches them with a larger group of people.[20]
A related concept isbehavioral targeting, which also involves using user data—such as browsing or purchase history—to deliver personalized content or advertisements. While similar in intent, behavioral targeting is typically associated with advertising practices, whereas mass personalization encompasses a broader range of applications across product and service delivery.
Predictive personalization is defined as the ability to predict customer behavior, needs, or wants, and to tailor offers and communications very precisely.[21]Social data, particularly structured social data, is one source for this predictive analysis. Predictive personalization is a much more recent form of personalization and can be used to augment current personalization offerings. It has come to play an especially important role for online grocers, where users, especially recurring customers, have come to expect "smart shopping lists": mechanisms that predict the products they need based on customers similar to them and on their own past shopping behavior.[22]
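A "smart shopping list" of this kind can be approximated with user-based collaborative filtering: score each product a customer has not yet bought by how often their most similar customers bought it. A minimal sketch on an invented purchase matrix:

```python
import numpy as np

# Hypothetical purchase matrix: rows = customers, columns = products;
# a 1 means the product appeared in a past order.
purchases = np.array([
    [1, 1, 0, 1],   # customer 0
    [1, 1, 1, 0],   # customer 1
    [0, 1, 1, 1],   # customer 2
])

def predicted_list(purchases, user, k=2):
    """Rank unbought products by how often the k most similar customers
    (cosine similarity of purchase vectors) bought them."""
    norms = np.linalg.norm(purchases, axis=1)
    sims = purchases @ purchases[user] / (norms * norms[user] + 1e-9)
    sims[user] = -1.0  # exclude the customer themselves
    neighbours = np.argsort(sims)[-k:]
    scores = purchases[neighbours].sum(axis=0) * (purchases[user] == 0)
    return np.argsort(scores)[::-1]

print(predicted_list(purchases, user=0))  # product indices, best guess first
```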
The Volume-Control Model offers an analytical framework for understanding how personalization helps to gain power.[1]It links information personalization to the opposite mechanism, information popularization. The model explains how personalization and popularization are employed together (by tech companies, organizations, governments, or even individuals) as complementary mechanisms for gaining economic, political, and social power. Among the social implications of information personalization is the emergence of filter bubbles.
https://en.wikipedia.org/wiki/Personalization
Build to Order (BTO; sometimes referred to as Make to Order or Made to Order (MTO)) is a production approach in which products are not built until a confirmed order for them is received; the end consumer thus determines the time and number of products produced.[1]The ordered product is customized to meet the design requirements of an individual, organization, or business.[2]Such production orders can be generated manually or through inventory/production management programs.[1]BTO is the oldest style of order fulfillment and is the most appropriate approach for highly customized or low-volume products; industries with expensive inventory use this production approach.[1]Moreover, "made to order" products are common in the food service industry, such as at restaurants.
BTO can be considered a Just in Time (JIT) production system, as components or products are delivered only when demanded, in order to reduce wasted time and increase efficiency.[1]
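The defining property is easy to state in code: the production queue is fed only by confirmed customer orders, never by a demand forecast. A toy sketch of that just-in-time trigger (the class and order details are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class BTOLine:
    """Minimal build-to-order sketch: nothing enters the production queue
    until a confirmed customer order arrives."""
    queue: list = field(default_factory=list)

    def confirm_order(self, order_id: str, spec: dict):
        # Production is scheduled only at this point; there is no
        # build-to-stock path that produces ahead of demand.
        self.queue.append({"order": order_id, "spec": spec})

    def next_build(self):
        return self.queue.pop(0) if self.queue else None

line = BTOLine()
line.confirm_order("A-1001", {"colour": "red", "engine": "1.6L"})
print(line.next_build())  # the customized unit now enters assembly
```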
This approach is considered good for highly configured products, e.g. automobiles,[3][4]bicycles, and computer servers, and for products where holding inventories is very expensive, e.g. aircraft. In general, the BTO approach has become more popular in recent years, ever since high-tech companies such as Dell, BMW, Compaq, and Gateway successfully implemented the system in their business operations.[5]
In an automotive context, BTO is a demand-driven production approach in which a product is scheduled and built in response to a confirmed order received for it from a final customer.[6]The final customer refers to a known individual owner, excluding all orders by the original equipment manufacturer (OEM), national sales companies (NSC), dealers or points of sale, bulk orders, and other intermediaries in the supply chain. BTO excludes the order amendment function, whereby forecast orders in the pipeline are amended to customer requirements, as this is seen as another level of sophistication for a build to stock (BTS) system (also known as build to forecast (BTF)).
BTS is the dominant approach used today across many industries and refers to products that are built before a final purchaser has been identified, with production volume driven by historical demand information.[4]This high stock level, endemic across the auto industry, allows some dealers to find an exact or very close match to the customer's desired vehicle within the dealer networks and supplier parks; the vehicle can then be delivered as soon as transport can be arranged, which has been used to justify the stock levels. While providing a rapid response to customer demand, the approach is expensive, mainly in terms of stock but also in transportation, as finished goods are rarely where they are required. Holding stock of such high cash value as finished goods is a key driver of the current crisis in the automotive industry, a crisis that could be eased by implementation of a BTO system.[6]
A BTO system does not mean that all suppliers in the supplier chain should be producing only when a customer order has been confirmed. Clearly, it would not make economic sense for a manufacturer of low value high volume parts to employ BTO. It is appropriate that these should be identified and built to a supplier order, effectively BTS. Part of the challenge in a BTO supplier network is in the identification of which suppliers should be BTO and which BTS. The point in the supply chain when this change occurs is called the ‘decoupling point’. Currently, the majority of automotive supply chains lack a decoupling point and the dominant BTS approach has resulted in billions of dollars of capital being tied up in stock in the supply chain.[4]
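The decoupling-point decision can be sketched as a simple classification over part cost and volume; a real analysis would also weigh lead times, demand variability, and holding costs, so the thresholds below are placeholders, not recommendations:

```python
def fulfilment_policy(unit_cost: float, weekly_demand: int,
                      cost_threshold: float = 100.0,
                      demand_threshold: int = 1000) -> str:
    """Toy decoupling-point heuristic: expensive or low-volume parts are
    built to order; cheap, high-volume parts are built to stock."""
    if unit_cost >= cost_threshold or weekly_demand <= demand_threshold:
        return "BTO"
    return "BTS"

print(fulfilment_policy(unit_cost=450.0, weekly_demand=40))    # e.g. seat module -> BTO
print(fulfilment_policy(unit_cost=0.05, weekly_demand=50000))  # e.g. fastener -> BTS
```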
Some firms build all their products to order, while others build to stock (BTS). Given the widespread proliferation of products, a number of manufacturers take a combined approach, in which some items are BTS and others are BTO, commonly referred to as "hybrid BTO".[7]
The main advantages of the BTO approach in environments of high product variety are the ability to supply the customer with the exact product specification required, the reduction in sales discounts and finished-goods inventory, and a reduction in stock obsolescence risk.
Additionally, flexibility and customer lead time improve to match changes in consumer demand, and a business's cash flow can be increased with BTO.[1]
The main disadvantage of BTO is that manufacturers are susceptible to fluctuations in market demand, leading to reduced capacity utilization in manufacturing. Hence, to ensure effective use of production resources, a BTO approach should be coupled with proactive demand management. Finding the correct balance of BTO and BTS, so as to maintain stock levels appropriate to both the market requirement and operational stability, is a current area of academic research. In retail, a recurring problem is that customers may choose an alternative product that is available at that time and place, as they are not willing to wait for the BTO product to arrive.
Moreover, compared to mass production, customization of products implies higher costs. Thus, price-conscious customers may be turned away, as they do not feel a strong need for customized products and would therefore choose a more standardized product instead.[5]
Related approaches to BTO include the following:
In engineer to order (ETO), after an order is received, part or all of the design starts to be developed. Construction by general contractors and plant construction by engineering companies are categorized as ETO.[8]
This assemble-to-order strategy requires that the basic parts of the product are already manufactured but not yet assembled. Once a customer's order has been received, the parts are quickly assembled and the finished product sent out.[8][9]
Together with the BTS approach, these strategies form the spectrum of order fulfillment strategies a firm can adopt.
https://en.wikipedia.org/wiki/Build_to_order
Custom-fit means personalized with regard to shape and size. A customized product implies the modification of some of its characteristics according to the customer's requirements, as with a custom car. When "fit" is added to the term, however, customization implies adaptation both to the geometric characteristics of the user's body and to individual customer requirements,[1]e.g., the steering wheel of the Formula 1 driver Fernando Alonso.
The custom-fit concept can be understood as the idea of offering one-of-a-kind products that, owing to their intrinsic characteristics and use, can be fully adapted to the user's geometric characteristics in order to meet their requirements.[2]
With this new concept, industry moves from a resource-based to a knowledge-based manufacturing system, and from mass production to individual production. This encourages the Lean Production trend established by Toyota, in other words, efficiency-based production.
Some studies have pointed to the positive impacts this concept would have on society.
As of February 2008, research on the subject included the projects described below.
The process starts with the capture of data directly from the user by CAD techniques, with the ultimate aim of manufacturing products using CAM techniques.
Although all these developments have been of great interest, the RM processes have not fallen behind, owing to the improvement of new rapid prototyping and direct digital manufacturing techniques.
MPP aims to become the equivalent of a high-speed 3D printer that produces three-dimensional objects directly from powder materials. The technique is based on the process principles of xerographic printers (for example, laser or LED printers), which combine electrostatic printing with photography. The MPP process uses the same fundamental principles to build solid objects layer by layer. Layers of powder material are generated by attracting different metal and/or ceramic powders to their respective positions on a charged pattern on a photoreceptor by means of an electrostatic field. The attracted layer is transferred to a punch and transported to the consolidation unit, where each layer of part material is sintered onto the previous one by pressure and heat. The procedure is repeated layer by layer until the three-dimensional object is fully formed and consolidated.
MPP has the ability to print different powders within the same layer and to change progressively from one material to another, i.e., producing a functionally graded material. In addition, MPP uses external pressure to speed the densification process (sintering), which allows manufacturing with a wide range of materials and opens the possibility of producing unique material combinations and microstructures.
The High Viscosity Inkjet Printing machine has several print heads that produce continuous streams of material droplets at high frequency. It is capable of printing multiple materials simultaneously and enables the mixing and grading of materials in any desired combination. This makes it possible to manufacture products from two or more graded materials with no distinct boundary between them, resulting in products with unique mechanical properties.
Dr. Michiel Willemse who is leading the project says, "The process is unique in its capability to print highly viscous, UV curable, resins. Material formulations with viscosities up to 500 mPa•s (at ambient temperature) have been printed successfully. This offers the opportunity to print products with unequaled mechanical properties when compared to any other printing systems."[8]
https://en.wikipedia.org/wiki/Custom-Fit
Dell Inc. is an American technology company that develops, sells, repairs, and supports personal computers (PCs), servers, data storage devices, network switches, software, and computer peripherals including printers and webcams, among other products and services. Dell is based in Round Rock, Texas.
Founded by Michael Dell in 1984, Dell started making IBM clone computers and pioneered selling cut-price PCs directly to customers,[3]managing its supply chain and electronic commerce.[4][5]The company rose rapidly during the 1990s[6]and in 2001 it became the largest global PC vendor for the first time.[7]Dell was a pure hardware vendor until it acquired Perot Systems in 2009 and entered the market for IT services. The company has since expanded its storage and networking systems and, in the late 2000s, began moving from offering computers only to delivering a range of technology for enterprise customers.[8]
Dell is a subsidiary of Dell Technologies, a publicly traded company, as well as a component of the NASDAQ-100 and S&P 500. Dell was ranked 31st on the Fortune 500 list in 2022,[9]up from 76th in 2021.[10]It is also the sixth-largest company in Texas by total revenue, according to Fortune magazine, and the second-largest non-oil company in Texas.[11][12]As of 2024, it is the world's third-largest personal computer vendor by unit sales, after Lenovo and HP.[13]In 2015, Dell acquired the enterprise technology firm EMC Corporation; the two together became divisions of Dell Technologies. Dell EMC sells data storage, information security, virtualization, analytics, and cloud computing.[14]
Michael Dell founded Dell Computer Corporation, doing business as PC's Limited, in 1984 while a student at the University of Texas at Austin,[15]operating from his off-campus dormitory room at Dobie Center.[16]The start-up aimed to sell IBM PC-compatible computers built from stock components. Michael Dell started trading in the belief that, by selling personal computer systems directly to customers, PC's Limited could better understand customers' needs and provide the most effective computing services to meet those needs.[17]Dell dropped out of college upon completion of his freshman year at the University of Texas in order to focus full-time on his fledgling business, after getting about $1,000 in expansion capital from his family.[18]As of April 2021, Dell's net worth was estimated to be over $50 billion (equivalent to $55,470,000,000 in 2023).[19]
In 1985, PC's Limited launched its first computer, the "Turbo PC," priced at US$795 (equivalent to $1,913 in 2023).[20]The Turbo PC featured an Intel 8088-compatible processor with a maximum speed of 8 MHz.[21]PC's Limited marketed these systems through national computer magazines, selling directly to consumers while custom-assembling each unit based on a range of options. This approach allowed them to offer competitive prices compared to retail brands, coupled with the convenience of pre-assembled units, making them one of the early success stories of this business model. The company grossed over $73 million in its first year of operation.
The company dropped the PC's Limited name in 1987 to become Dell Computer Corporation and began expanding globally. The reasoning was that the new name better reflected its presence in the business market and resolved issues with the use of "Limited" in a company name in certain countries.[22]The company set up its first international operations in Britain; 11 more followed within the next four years. In June 1988, Dell Computer's market capitalization grew by $30 million to $80 million (equivalent to $177,850,000 in 2023) following its June 22 initial public offering of 3.5 million shares at $8.50 a share on NASDAQ under the ticker symbol DELL.[23]In 1989, Dell Computer set up its first on-site service programs to compensate for the lack of local retailers prepared to act as service centers.
In 1990, Dell Computer tried selling its products indirectly through warehouse clubs and computer superstores, but met with little success, and the company re-focused on its more successful direct-to-consumer sales model. In 1992, Fortune included Dell Computer Corporation in its list of the world's 500 largest companies, making Michael Dell the youngest CEO of a Fortune 500 company at that time.
In 1993, to complement its own direct sales channel, Dell planned to sell PCs at big-box retail outlets such as Wal-Mart, which would have brought in an additional $125 million (equivalent to $238,100,000 in 2023) in annual revenue. Bain consultant Kevin Rollins persuaded Michael Dell to pull out of these deals, believing they would be money losers in the long run.[24]Margins at retail were thin at best, and Dell left the reseller channel in 1994.[25]Rollins would soon join Dell full-time and eventually become the company's president and CEO.
By the early 1990s, the laptop computer market was both more profitable and faster-growing than the overall personal computer market. After discontinuing its unsuccessful existing products in 1993 and hiring John Medica, who had led development of the very successful Apple PowerBook, the company introduced the Dell Latitude laptop line in 1994.[26]
Originally, Dell did not emphasize the consumer market, due to the higher costs and low profit margins in selling to individuals and households; this changed when the company's Internet site took off in 1996 and 1997.[18]While the industry's average selling price to individuals was going down, Dell's was going up, as second- and third-time computer buyers who wanted powerful computers with multiple features and did not need much technical support were choosing Dell. Dell found an opportunity among PC-savvy individuals who liked the convenience of buying direct, customizing their PC to their means, and having it delivered in days. In early 1997, Dell created an internal sales and marketing group dedicated to serving the home market and introduced a product line designed especially for individual users.[25]
From 1997 to 2004, Dell grew steadily and gained market share from competitors even during industry slumps. During the same period, rival PC vendors such as Compaq, Gateway, IBM, Packard Bell, and AST Research struggled and eventually left the market or were bought out.[28]Dell surpassed Compaq to become the largest PC manufacturer in 1999.[29]Operating costs made up only 10 percent of Dell's $35 billion in revenue in 2002 (equivalent to $56,680,000,000 in 2023), compared with 21 percent of revenue at Hewlett-Packard, 25 percent at Gateway, and 46 percent at Cisco.[30]In 2002, when Compaq merged with Hewlett-Packard (the fourth-place PC maker), the newly combined Hewlett-Packard took the top spot for a time but struggled, and Dell soon regained its lead. Dell grew the fastest in the early 2000s.[4]
In 2002, Dell expanded its product line to include televisions, handhelds, digital audio players, and printers. Chairman and CEO Michael Dell had repeatedly blocked President and COO Kevin Rollins's attempts to lessen the company's heavy dependency on PCs, which Rollins wanted to fix by acquiring EMC Corporation, a move that would eventually occur over 12 years later.[31]
In 2003, at the annual company meeting, the stockholders approved changing the company name to "Dell Inc." to recognize the company's expansion beyond computers.[32]
In 2004, the company announced that it would build a new assembly plant near Winston-Salem, North Carolina; the city and county provided Dell with $37.2 million in incentive packages, and the state provided approximately $250 million (equivalent to $386,600,000 in 2023) in incentives and tax breaks. In July, Michael Dell stepped aside as chief executive officer while retaining his position as chairman of the board.[33]Kevin Rollins, who had held a number of executive posts at Dell, became the new CEO. Despite no longer holding the CEO title, Dell essentially acted as a de facto co-CEO with Rollins.[31]
Under Rollins, Dell purchased the computer hardware manufacturer Alienware in 2006. Dell Inc.'s plan anticipated Alienware continuing to operate independently under its existing management, with Alienware expected to benefit from Dell's efficient manufacturing system.[34]
In 2005, while earnings and sales continued to rise, sales growth slowed considerably, and the company's stock lost 25% of its value that year.[35]By June 2006, the stock traded around US$25, which was 40% down from July 2005, the high-water mark of the company in the post-dotcom era.[36][37]By June 2021, the stock had reached an all-time high of over US$100 per share, reflecting the company's successful transition to a technology service provider that helps customers navigate digital transformation.[38]
The slowing sales growth was attributed to the maturing PC market, which constituted 66% of Dell's sales, and analysts suggested that Dell needed to make inroads into non-PC business segments such as storage, services, and servers. Dell's price advantage was tied to its ultra-lean manufacturing for desktop PCs,[39]but this became less important as savings became harder to find inside the company's supply chain, and as competitors such as Hewlett-Packard and Acer made their PC manufacturing operations more efficient to match Dell, weakening Dell's traditional price differentiation.[40]Throughout the PC industry, declines in prices along with commensurate increases in performance meant that Dell had fewer opportunities to upsell to its customers. As a result, the company was selling a greater proportion of inexpensive PCs than before, which eroded profit margins.[28]The laptop segment had become the fastest-growing part of the PC market, but Dell produced low-cost notebooks in China like other PC manufacturers, which eliminated Dell's manufacturing cost advantages; in addition, Dell's reliance on Internet sales meant that it missed out on growing notebook sales in big-box stores.[36]CNET suggested that Dell was becoming trapped in the increasing commoditization of high-volume, low-margin computers, which prevented it from offering the more exciting devices that consumers demanded.[39]
Despite plans to expand into other global regions and product segments, Dell was heavily dependent on the US corporate PC market: desktop PCs sold to commercial and corporate customers accounted for 32 percent of its revenue, 85 percent of its revenue came from businesses, and 64 percent came from North and South America, according to its 2006 third-quarter results. US shipments of desktop PCs were shrinking, and the corporate PC market, which purchases PCs in upgrade cycles, had largely decided to take a break from buying new systems. The previous cycle had started around 2002, three or so years after companies started buying PCs ahead of the perceived Y2K problems, and corporate clients were not expected to upgrade again until extensive testing of Microsoft's Windows Vista (expected in early 2007), putting the next upgrade cycle around 2008.[41][42]Heavily dependent on PCs, Dell had to slash prices to boost sales volumes while demanding deep cuts from suppliers.[31]
Dell had long stuck by its direct sales model. Consumers had become the main drivers of PC sales in recent years,[42]yet there had been a decline in consumers purchasing PCs through the Web or on the phone, as increasing numbers were visiting consumer electronics retail stores to try out devices first. Dell's rivals in the PC industry, HP, Gateway, and Acer, had a long retail presence and so were well poised to take advantage of the consumer shift.[43]The lack of a retail presence stymied Dell's attempts to offer consumer electronics such as flat-panel TVs and MP3 players.[39]Dell responded by experimenting with mall kiosks, plus quasi-retail stores in Texas and New York.[41]
Dell had a reputation as a company that relied upon supply chain efficiencies to sell established technologies at low prices, rather than as an innovator.[31][43][44]By the mid-2000s many analysts were looking to innovating companies as the next source of growth in the technology sector. Dell's low spending on R&D relative to its revenue (compared to IBM, Hewlett-Packard, and Apple Inc.), which had worked well in the commoditized PC market, prevented it from making inroads into more lucrative segments, such as MP3 players and later mobile devices.[35]Increasing spending on R&D would have cut into the operating margins that the company emphasized.[4]Dell had done well with a horizontal organization focused on PCs when the computing industry moved to horizontal mix-and-match layers in the 1980s, but by the mid-2000s the industry had shifted to vertically integrated stacks delivering end-to-end IT products, and Dell lagged far behind competitors like Hewlett-Packard and Oracle.[40]
Dell's reputation for poor customer service, which was exacerbated as it moved call centers offshore and as its growth outstripped its technical support infrastructure, came under increasing scrutiny on the Web. The original Dell model was known for high customer satisfaction when PCs sold for thousands of dollars, but by the 2000s the company could not justify that level of service when computers in the same line-up sold for hundreds of dollars.[citation needed]Rollins responded by moving Dick Hunter from head of manufacturing to head of customer service. Hunter, who noted that Dell's DNA of cost-cutting "got in the way," aimed to reduce call transfer times and have call center representatives resolve inquiries in one call. By 2006, Dell had spent $100 million in just a few months to improve on this, and rolled out DellConnect to answer customer inquiries more quickly. In July 2006, the company started its Direct2Dell blog, and in February 2007 Michael Dell launched IdeaStorm.com, asking customers for advice, including on selling Linux computers and reducing the promotional "bloatware" on PCs. These initiatives did manage to cut negative blog posts from 49% to 22%, as well as reduce the "Dell Hell" prominent on Internet search engines.[36][45]
There was also criticism that Dell used faulty components for its PCs, particularly the 11.8 million OptiPlex desktop computers sold to businesses and governments from May 2003 to July 2005 that suffered from faulty capacitors.[46] A battery recall in August 2006, prompted by a Dell laptop catching fire, drew much negative attention to the company. Sony was later found responsible for manufacturing the batteries, though a Sony spokesman said the problem concerned the combination of the battery with a charger that was specific to Dell.[47]
2006 marked the first year that Dell's growth was slower than that of the PC industry as a whole. By the fourth quarter of 2006, Dell had lost its title of largest PC manufacturer to Hewlett-Packard, whose Personal Systems Group had been invigorated by a restructuring initiated by its CEO Mark Hurd.[35][48][49]
In August 2005, Dell became the subject of an informal investigation by the United States SEC.[50] In 2006, the company disclosed that the US Attorney for the Southern District of New York had subpoenaed documents related to the company's financial reporting dating back to 2002.[51] The company delayed filing financial reports for the third and fourth fiscal quarters of 2006, and several class-action lawsuits were filed.[52] Dell Inc.'s failure to file its quarterly earnings report could have subjected the company to de-listing from the Nasdaq,[53] but the exchange granted Dell a waiver, allowing the stock to trade normally.[54] In August 2007, the company announced that it would restate its earnings for fiscal years 2003 through 2006 and the first quarter of 2007 after an internal audit found that certain employees had changed corporate account balances to meet quarterly financial targets.[55] In July 2010, the SEC announced charges against several senior Dell executives, including Dell Chairman and CEO Michael Dell, former CEO Kevin Rollins, and former CFO James Schneider, "with failing to disclose material information to investors and using fraudulent accounting to make it falsely appear that the company was consistently meeting Wall Street earnings targets and reducing its operating expenses." Dell Inc. was fined $100 million, with Michael Dell personally fined $4 million.[56]
After four out of five quarterly earnings reports were below expectations, Rollins resigned as president and CEO on January 31, 2007, and founder Michael Dell assumed the role of CEO again.[57]
On March 1, 2007, the company issued a preliminary quarterly earnings report showing gross sales of $14.4 billion, down 5% year-over-year, and net income of $687 million (30 cents per share), down 33%. Net earnings would have declined even more if not for the effects of eliminated employee bonuses, which accounted for six cents per share. NASDAQ extended the company's deadline for filing financials to May 4.[58]
Dell announced a change campaign called "Dell 2.0," reducing headcount and diversifying the company's products.[43][59] Even as chairman of the board after relinquishing the CEO position, Michael Dell had retained significant input into the company during Rollins' years as CEO. With his return as CEO, the company saw changes in operations, the exodus of many senior vice-presidents, and new personnel brought in from outside the company.[41] Michael Dell announced a number of initiatives and plans (part of the "Dell 2.0" initiative) to improve the company's financial performance. These included the elimination of 2006 bonuses for employees (with some discretionary awards), a reduction in the number of managers reporting directly to Michael Dell from 20 to 12, and a reduction of "bureaucracy". Jim Schneider retired as CFO and was replaced by Donald Carty, as the company came under an SEC probe for its accounting practices.[60]
On April 23, 2008, Dell announced the closure of one of its biggest Canadian call-centers in Kanata, Ontario, terminating approximately 1,100 employees, with 500 of those redundancies effective on the spot, and with the official closure of the center scheduled for the summer. The call-center had opened in 2006 after the city of Ottawa won a bid to host it. Less than a year later, Dell had planned to double its workforce there to nearly 3,000 workers and add a new building. These plans were reversed due to a high Canadian dollar that made the Ottawa staff relatively expensive, and as part of Dell's turnaround, which involved moving these call-center jobs offshore to cut costs.[61] The company had also announced the shutdown of its Edmonton, Alberta, office, losing 900 jobs. In total, Dell announced the elimination of about 8,800 jobs, 10% of its workforce, in 2007–2008.[62]
By the late 2000s, Dell's "configure to order" approach to manufacturing (delivering individual PCs configured to customer specifications from its US facilities) was no longer as efficient or competitive with high-volume Asian contract manufacturers as PCs became powerful low-cost commodities.[5][63] Dell closed the plants that produced desktop computers for the North American market, including the Mort Topfer Manufacturing Center in Austin, Texas (original location)[64][65] and Lebanon, Tennessee (opened in 1999), in 2008 and early 2009, respectively. The desktop production plant in Winston-Salem, North Carolina, received US$280 million in incentives from the state (equivalent to $419,900,000 in 2023) and opened in 2005, but ceased operations in November 2010. Dell's contract with the state required it to repay the incentives for failing to meet the conditions, and it sold the North Carolina plant to Herbalife.[66][67][68] Much work was transferred to contract manufacturers in Asia and Mexico, or to some of Dell's own factories overseas.[63] On January 8, 2009, Dell announced the closure of its manufacturing plant in Limerick, Ireland, with the loss of 1,900 jobs and the transfer of production to its plant in Łódź, Poland.[69]
The release of Apple's iPad tablet computer had a negative impact on Dell and other major PC vendors, as consumers switched away from desktop and laptop PCs. Dell's own mobility division did not manage to succeed with smartphones or tablets, whether running Windows or Android.[70][71] The Dell Streak was a commercial and critical failure due to its outdated OS, numerous bugs, and low-resolution screen. InfoWorld suggested that Dell and other OEMs saw tablets as a short-term, low-investment opportunity running Google Android, an approach that neglected the user interface and failed to gain long-term market traction with consumers.[72][73] Dell responded by pushing higher-end PCs, such as the XPS line of notebooks, which do not compete with the Apple iPad and Kindle Fire tablets.[74] The growing popularity of smartphones and tablet computers instead of PCs drove Dell's consumer segment to an operating loss in Q3 2012. In December 2012, Dell suffered its first decline in holiday sales in five years, despite the introduction of Windows 8.[75]
In the shrinking PC industry, Dell continued to lose market share, dropping below Lenovo in 2011 to fall to number three in the world. Dell and fellow American contemporary Hewlett-Packard came under pressure from Asian PC manufacturers Lenovo, Asus, and Acer, all of which had lower production costs and were willing to accept lower profit margins. In addition, while the Asian PC vendors had been improving their quality and design—for instance, Lenovo's ThinkPad series was winning corporate customers away from Dell's laptops—Dell's customer service and reputation had been slipping.[76][77] Dell remained the second-most profitable PC vendor, taking 13 percent of operating profits in the PC industry during Q4 2012, behind Apple's Mac business, which took 45 percent; Hewlett-Packard took seven percent, Lenovo and Asus six percent each, and Acer one percent.[78]
Dell attempted to offset its declining PC business, which still accounted for half of its revenue and generated steady cash flow,[79] by expanding into the enterprise market with servers, networking, software, and services.[80] It avoided many of the acquisition write-downs and management turnover that plagued its chief rival Hewlett-Packard.[71][81] Despite spending $13 billion on acquisitions to diversify its portfolio beyond hardware,[82] the company was unable to convince the market that it could thrive, or make the needed transformation, in the post-PC world,[81] as it suffered continued declines in revenue and share price.[83][84][85][86] Dell's market share in the corporate segment had previously been a "moat" against rivals, but this was no longer the case, as sales and profits fell precipitously.[87]
After several weeks of rumors, which started around January 11, 2013, Dell announced on February 5, 2013, that it had struck a $24.4 billion (equivalent to $31,470,000,000 in 2023) leveraged buyout deal that would delist its shares from the NASDAQ and Hong Kong Stock Exchange and take it private.[88][89][90] Reuters reported that Michael Dell and Silver Lake Partners, aided by a $2 billion loan from Microsoft, would acquire the public shares at $13.65 apiece.[91] The $24.4 billion buyout was projected to be the largest leveraged buyout backed by private equity since the 2008 financial crisis (equivalent to $34,550,000,000 in 2023).[92] It was also the largest technology buyout ever, surpassing the 2006 buyout of Freescale Semiconductor for $17.5 billion (equivalent to $25,450,000,000 in 2023).[92]
Michael Dell said of the February offer, "I believe this transaction will open an exciting new chapter for Dell, our customers and team members".[93] Dell rival Lenovo responded to the buyout, saying, "the financial actions of some of our traditional competitors will not substantially change our outlook."[93]
In March 2013, the Blackstone Group and Carl Icahn expressed interest in purchasing Dell.[94] In April 2013, Blackstone withdrew its offer, citing deteriorating business.[95][96] Other private equity firms such as KKR & Co. and TPG Capital declined to submit alternative bids for Dell, citing the uncertain market for personal computers and competitive pressures, so the "wide-open bidding war" never materialized.[82] Analysts said that the biggest challenge facing Silver Lake would be finding an "exit strategy" to profit from its investment, most likely an eventual IPO to take the company public again, and one warned, "But even if you can get a $25bn enterprise value for Dell, it will take years to get out."[97]
In May 2013, Michael Dell joined his board in voting for the offer.[98]The following August he reached a deal with the special committee on the board for $13.88 per share, a raised price of $13.75 plus a special dividend of 13 cents, as well as a change to the voting rules.[99]The $13.88 cash offer (plus a $.08 per share dividend for the third fiscal quarter) was accepted on September 12[100]and closed on October 30, 2013, ending Dell's 25-year run as a publicly traded company.
After the buyout, the newly private Dell offered a Voluntary Separation Program through which it expected to reduce its workforce by up to seven percent. The reception of the program so exceeded expectations that Dell might be forced to hire new staff to make up for the losses.[101]
On November 19, 2015, Dell, alongsideArm Holdings,Cisco Systems,Intel,Microsoft, andPrinceton University, founded theOpenFog Consortium, to promote interests and development infog computing.[102]
On October 12, 2015, Dell Inc. announced its intent to acquire EMC Corporation in a cash-and-stock deal valued at $67 billion (equivalent to $84,210,000,000 in 2023), which has been considered the largest-ever acquisition in the technology sector.[103][104] As part of the acquisition, Dell took over EMC's 81% stake in the cloud-computing and virtualization company VMware.[105] This combined Dell's enterprise server, personal computer, and mobile businesses with EMC's enterprise storage business in a significant vertical merger of IT giants. Dell paid $24.05 per share of EMC, and $9.05 per share of tracking stock in VMware.[106][107][104]
The announcement came two years after Dell Inc. returned to private ownership, claiming that it faced bleak prospects and would need several years out of the public eye to rebuild its business.[108]It was thought that the company's value had roughly doubled since then.[109]EMC was being pressured byElliott Management, a hedge fund holding 2.2% of EMC's stock, to reorganize their unusual "Federation" structure, in which EMC's divisions were effectively being run as independent companies. Elliott argued[110]this structure deeply undervalued EMC's core "EMC II" data storage business, and that increasing competition between EMC II and VMware products was confusing the market and hindering both companies.The Wall Street Journalestimated that in 2014 Dell had revenue of $27.3 billion (equivalent to $34,610,000,000 in 2023) from personal computers and $8.9 billion from servers, while EMC had $16.5 billion from EMC II, $1 billion fromRSA Security, $6 billion from VMware, and $230 million fromPivotal Software.[111]EMC owned around 80 percent of the stock of VMware.[112]The proposed acquisition maintained VMware as a separate company, held via a newtracking stock, while the other parts of EMC rolled into Dell.[113]Once the acquisition closed, Dell began publishing quarterly financial results again, having ceased these after going private in 2013.[114]
The combined business was expected to address the markets forscale-out architecture,converged infrastructureandprivate cloud computing, playing to the strengths of both EMC and Dell.[111][115]Commentators questioned the deal, withFBR Capital Marketssaying that though it makes a "ton of sense" for Dell, it was a "nightmare scenario that would lack strategic synergies" for EMC.[116]Fortunesaid there was a lot for Dell to like in EMC's portfolio, but "does it all add up enough to justify tens of billions of dollars for the entire package? Probably not."[117]The Registerreported the view ofWilliam Blair & Companythat the merger would "blow up the current IT chess board", forcing other IT infrastructure vendors to restructure to achieve scale and vertical integration.[118]The value of VMware stock fell 10% after the announcement, valuing the deal at around $63–64bn rather than the $67bn originally reported.[119]Key investors backing the deal besides Dell were Singapore'sTemasek HoldingsandSilver Lake Partners.[120]
On September 7, 2016, Dell completed the merger with EMC, which involved the issuance of $45.9 billion (equivalent to $57,140,000,000 in 2023) in debt and $4.4 billion (equivalent to $5,478,000,000 in 2023) of common stock.[121][122]At the time, some analysts claimed that Dell's acquisition of the former Iomega could harm theLenovoEMCpartnership.[123]
In July 2018, Dell announced intentions to become a publicly traded company again by paying $21.7 billion (equivalent to $25,940,000,000 in 2023) in both cash and stock to buy back shares from its stake in VMware, offering shareholders roughly 60 cents on the dollar as part of the deal.[124][105]In November, Carl Icahn (9.3% owner of Dell) sued the company over plans to go public.[125]As a result of pressure from Icahn and otheractivist investors, Dell renegotiated the deal, ultimately offering shareholders about 80% of market value. As part of this deal, Dell once again became a public company, with the original Dell computer business and Dell EMC operating under the newly created parent,Dell Technologies.[105]
Post-acquisition, Dell was re-organized with a new parent company, Dell Technologies, and into three main business divisions: Client Solutions Group, Infrastructure Solutions Group andVMware.[126][127][128]
In January 2021, Dell reported $94 billion (equivalent to $104,280,000,000 in 2023) in sales and $13 billion (equivalent to $14,420,000,000 in 2023) operating cash flow during 2020.[105]
On March 1, 2024, Dell's stock hit an all-time high after earnings. A strong performance from its artificial intelligence unit sent shares up nearly 40%, the stock's highest daily gain since the company went public in 2018.[129] In August 2024, the company announced it would be laying off 12,500 employees, 10% of its workforce, in order to invest in artificial intelligence initiatives.[130]
When Dell acquired Alienware early in 2006, some Alienware systems hadAMDchips. On August 17, 2006, a Dell press release[131]stated that starting in September, Dell Dimension desktop computers would have AMD processors and that later in the year Dell would release a two-socket, quad-processor server using AMDOpteronchips, moving away from Dell's tradition of only offering Intel processors in Dell PCs.
CNET's News.com on August 17, 2006, cited Dell's CEO Kevin Rollins as attributing the move to AMD processors to lower costs and to AMD technology.[132]AMD's senior VP in commercial business, Marty Seyer, stated: "Dell's wider embrace of AMD processor-based offerings is a win for Dell, for the industry and most importantly for Dell customers."
On October 23, 2006, Dell announced new AMD-based servers — the PowerEdge 6950 and thePowerEdgeSC1435.
On November 1, 2006, Dell's website began offering notebooks based on AMD processors (the Inspiron 1501 with a 15.4-inch (390 mm) display) with the choice of a single-core MK-36 processor, dual-core Turion X2 chips or Mobile Sempron.[133]
In 2017, Dell released the Alienware 17 gaming laptop, which was primarily offered with NVIDIA GeForce GTX 1080 graphics.[134]
In 1998,Ralph Naderasked Dell (and five other majorOEMs) to offer alternate operating systems toMicrosoft Windows, specifically includingLinux, for which "there is clearly a growing interest".[135][136]Possibly coincidentally, Dell started offering Linux notebook systems that "cost no more than their Windows 98 counterparts" in 2000,[137]and soon expanded, with Dell becoming "the first major manufacturer to offer Linux across its full product line".[138]However, by early 2001 Dell had "disbanded its Linux business unit."[139]
On February 26, 2007, Dell announced that it had commenced a program to sell and distribute a range of computers with pre-installed Linux distributions as an alternative to Microsoft Windows, indicating that Novell's SUSE Linux would appear first.[140] However, the next day, Dell announced that its previous announcement related only to certifying the hardware as ready to work with Novell SUSE Linux, and that it had no plans to sell systems pre-installed with Linux in the near future.[141] On March 28, 2007, Dell announced that it would begin shipping some desktops and laptops with Linux pre-installed, although it did not specify which Linux distribution or which hardware it would offer first.[142] On April 18, a report appeared suggesting that Michael Dell used Ubuntu on one of his home systems.[143] On May 1, 2007, Dell announced it would ship the Ubuntu Linux distribution.[144] On May 24, 2007, Dell started selling models with Ubuntu Linux 7.04 pre-installed: a laptop, a budget computer, and a high-end PC.[145]
On June 27, 2007, Dell announced on its Direct2Dell blog that it planned to offer more pre-loaded systems (the new Dell Inspiron desktops and laptops). After feedback on the IdeaStorm site supported extending the offering beyond the US market, Dell later announced further international availability.[146] On August 7, 2007, Dell officially announced that it would offer one notebook and one desktop in the UK, France and Germany with Ubuntu "pre-installed". At LinuxWorld 2007, Dell announced plans to provide Novell's SUSE Linux Enterprise Desktop on selected models in China, "factory-installed".[147] On November 30, 2007, Dell reported shipping 40,000 Ubuntu PCs.[148] On January 24, 2008, Dell in Germany, Spain, France, and the United Kingdom launched a second laptop, an XPS M1330 with Ubuntu 7.10, for 849 euro or GBP 599 upwards.[149] On February 18, 2008, Dell announced that the Inspiron 1525 would have Ubuntu as an optional operating system.[150] On February 22, 2008, Dell announced plans to sell Ubuntu in Canada and in Latin America.[151] From September 16, 2008, Dell shipped both Dell Ubuntu Netbook Remix and Windows XP Home versions of the Inspiron Mini 9 and the Inspiron Mini 12. As of November 2009, Dell shipped the Inspiron Mini laptops with Ubuntu version 8.04.[152]
As of 2021, Dell continues to offer select laptops and workstations with Ubuntu Linux pre-installed, under the "Developer Edition" moniker.[153]
The key trends for Dell are (as of the financial year ending late January/early February):[154][155]
Dell's headquarters is located in Round Rock, Texas.[187] As of 2013, the company employed about 14,000 people in central Texas and was the region's largest private employer;[188] its headquarters complex has 2,100,000 square feet (200,000 m2) of space.[189] As of 1999, almost half of the general fund of the city of Round Rock originated from sales taxes generated by the Dell headquarters.[190]
Dell previously had its headquarters in theArboretumcomplex in northern Austin, Texas.[191][192]In 1989 Dell occupied 127,000 square feet (11,800 m2) in the Arboretum complex.[193]In 1990, Dell had 1,200 employees in its headquarters.[191]In 1993, Dell submitted a document to Round Rock officials, titled "Dell Computer Corporate Headquarters, Round Rock, Texas, May 1993 Schematic Design." Despite the filing, during that year the company said that it was not going to move its headquarters.[194]In 1994, Dell announced that it was moving most of its employees out of the Arboretum, but that it was going to continue to occupy the top floor of the Arboretum and that the company's official headquarters address would continue to be the Arboretum. The top floor continued to hold Dell's board room, demonstration center, and visitor meeting room. Less than one month prior to August 29, 1994, Dell moved 1,100 customer support and telephone sales employees to Round Rock.[195]Dell's lease in the Arboretum had been scheduled to expire in 1994.[196]
By 1996, Dell was moving its headquarters to Round Rock.[197] As of January 1996, 3,500 people still worked at the existing Dell headquarters. One building of the Round Rock headquarters, Round Rock 3, had space for 6,400 employees and was scheduled to be completed in November 1996.[198] In 1998, Dell announced that it was going to add two buildings to its Round Rock complex, adding 1,600,000 square feet (150,000 m2) of office space to the complex.[199]
In 2000, Dell announced that it would lease 80,000 square feet (7,400 m2) of space in theLas Cimasoffice complex inunincorporatedTravis County, Texas, between Austin andWest Lake Hills, to house the company's executive offices and corporate headquarters. 100 senior executives were scheduled to work in the building by the end of 2000.[200]In January 2001, the company leased the space in Las Cimas 2, located alongLoop 360. Las Cimas 2 housed Dell's executives, the investment operations, and some corporate functions. Dell also had an option for 138,000 square feet (12,800 m2) of space in Las Cimas 3.[201]After a slowdown in business required reducing employees and production capacity, Dell decided to sublease its offices in two buildings in the Las Cimas office complex.[202]In 2002 Dell announced that it planned to sublease its space to another tenant; the company planned to move its headquarters back to Round Rock once a tenant was secured.[201]By 2003, Dell moved its headquarters back to Round Rock. It leased all of Las Cimas I and II, with a total of 312,000 square feet (29,000 m2), for about a seven-year period after 2003. By that year roughly 100,000 square feet (9,300 m2) of that space was absorbed by new subtenants.[203]
In 2008, Dell switched the power sources of the Round Rock headquarters to more environmentally friendly ones, with 60% of the total power coming fromTXU Energywind farms and 40% coming from the Austin Community Landfill gas-to-energy plant operated byWaste Management, Inc.[189]
Dell facilities in the United States are located in Austin, Texas; Nashua, New Hampshire; Nashville, Tennessee; Oklahoma City, Oklahoma; Peoria, Illinois; Hillsboro, Oregon (Portland area); Winston-Salem, North Carolina; Eden Prairie, Minnesota (Dell Compellent); Bowling Green, Kentucky; Lincoln, Nebraska; and Miami, Florida. Facilities located abroad include Penang, Malaysia; Xiamen, China; Bracknell, UK; Manila, Philippines;[204] Chennai, India;[205] Hyderabad, India; Noida, India; Hortolândia and Porto Alegre, Brazil; Bratislava, Slovakia; Łódź, Poland;[206] Panama City, Panama; Dublin and Limerick, Ireland; Casablanca, Morocco; and Montpellier, France.
The US and India are the only countries that host all of Dell's business functions and provide support globally: research and development, manufacturing, finance, analysis, and customer care.[207] Dell was recognized as "India's Most Desired Brand" in TRA's Most Desired Brands report 2023.
From its early beginnings, Dell operated as a pioneer in the "configure to order" approach to manufacturing—delivering individual PCs configured to customer specifications. In contrast, most PC manufacturers in those times delivered large orders to intermediaries on a quarterly basis.[208]
To minimize the delay between purchase and delivery, Dell has a general policy of manufacturing its products close to its customers. This also allows for implementing ajust-in-time(JIT) manufacturing approach, which minimizesinventorycosts. Low inventory is another signature of the Dell business model—a critical consideration in an industry where components depreciate very rapidly.[209]
Dell's manufacturing process covers assembly, software installation, functional testing (including "burn-in"), and quality control. Throughout most of the company's history, Dell manufactured desktop machines in-house and contracted out the manufacturing of base notebooks for configuration in-house.[210] The company's approach later changed, as cited in the 2006 Annual Report, which states, "We are continuing to expand our use of original design manufacturing partnerships and manufacturing outsourcing relationships." The Wall Street Journal reported in September 2008 that "Dell has approached contract computer manufacturers with offers to sell" their plants.[211] By the late 2000s, Dell's "configure to order" approach to manufacturing (delivering individual PCs configured to customer specifications from its US facilities) was no longer as efficient or competitive with high-volume Asian contract manufacturers as PCs became powerful low-cost commodities.[63]
Assembly of desktop computers for the North American market formerly took place at Dell plants in Austin, Texas (original location) and Lebanon, Tennessee (opened in 1999), which were closed in 2008 and early 2009, respectively. The plant in Winston-Salem, North Carolina, opened in 2005 but ceased operations in November 2010.[67][68] Most of the work that used to take place in Dell's US plants was transferred to contract manufacturers in Asia and Mexico, or to some of Dell's own factories overseas. The Miami, Florida, facility of its Alienware subsidiary remains in operation, while Dell continues to produce its servers (its most profitable products) in Austin, Texas.[63]
Dell assembled computers for the EMEA market at the Limerick facility in the Republic of Ireland, and once employed about 4,500 people in that country. Dell began manufacturing in Limerick in 1991 and went on to become Ireland's largest exporter of goods and its second-largest company and foreign investor. On January 8, 2009, Dell announced that it would move all Dell manufacturing in Limerick to Dell's new plant in the Polish city of Łódź by January 2010.[212] European Union officials said they would investigate a €52.7 million aid package the Polish government used to attract Dell away from Ireland.[213] European Manufacturing Facility 1 (EMF1, opened in 1990) and EMF3 form part of the Raheen Industrial Estate near Limerick. EMF2 (previously a Wang facility, later occupied by Flextronics, situated in Castletroy) closed in 2002,[citation needed] and Dell Inc. consolidated production into EMF3 (EMF1 now contains only offices).[214] Subsidies from the Polish government kept Dell in Łódź for a long time.[215] After assembly ended at the Limerick plant, the Cherrywood Technology Campus in Dublin became the largest Dell office in the republic, with over 1,200 people in sales (mainly UK and Ireland), support (enterprise support for EMEA), and research and development for cloud computing, but no manufacturing except at[216] Dell's Alienware subsidiary, which manufactures PCs at a plant in Athlone, Ireland. Whether this facility will remain in Ireland is not certain.[217] Dell started production at EMF4 in Łódź, Poland, in late 2007.[218]
Dell moved desktop, notebook and PowerEdge server manufacturing for the South American market from theEldorado do Sulplant opened in 1999, to a new plant inHortolândia, Brazil, in 2007.[219]
The corporation markets specific brand names to differentmarket segments.
Its Business/Corporate class includes brands such as OptiPlex (office desktops), Latitude (business laptops), Precision (workstations), Vostro (small-business desktops and laptops), and PowerEdge (servers).
Dell's Home Office/Consumer class includes brands such as Inspiron (mainstream desktops and laptops), XPS (high-end desktops and laptops), and Alienware (gaming systems).
Dell's Peripherals class includes USB key drives, LCD televisions, and printers; its display line includes LCD TVs, plasma TVs, and projectors for HDTV, as well as computer monitors. Dell UltraSharp is a further high-end brand of monitors.
Dell service and support brands include theDell Solution Station(extended domestic support services, previously "Dell on Call"),Dell Support Center(extended support services abroad),Dell Business Support(a commercial service-contract that provides an industry-certified technician with a lower call-volume than in normal queues),Dell Everdream Desktop Management("Software as a service"remote-desktop management, originally a SaaS company founded byElon Musk's cousin,Lyndon Rive, which Dell bought in 2007[221]), andYour Tech Team(a support-queue available to home users who purchased their systems either through Dell's website or through Dell phone-centers).
Discontinued products and brands includeAxim(PDA; discontinued April 9, 2007),[222]Dimension(home and small office desktop computers; discontinued July 2007),Dell Digital Jukebox(MP3 player; discontinued August 2006), Dell PowerApp (application-based servers), Dell Optiplex (desktop and tower computers previously supported to run server and desktop operating systems), Dell Unix (anSVR4-based Unix operating system for its Dell-branded PCs and workstations; discontinued in 1993) and Dell Mobile Connect(Windows Mobile application; discontinued July 31, 2022).[223]
In November 2015, it emerged that several Dell computers had shipped with an identical pre-installedroot certificateknown as "eDellRoot".[224]This raised such security risks as attackers impersonatingHTTPS-protected websites such asGoogleandBank of Americaand malware being signed with the certificate to bypass Microsoft software filtering.[224]Dell apologized and offered a removal tool.[225]
Also in November 2015, a researcher discovered that customers with the diagnostic program Dell Foundation Services installed could be digitally tracked using the unique service tag number assigned to them by the program.[226] This was possible even if a customer enabled private browsing and deleted their browser cookies.[226] Ars Technica recommended that Dell customers uninstall the program until the issue was addressed.[226]
The board consists of nine directors. Michael Dell, the founder of the company, serves as chairman of the board and chief executive officer. Other board members includeDon Carty,Judy Lewent,Klaus Luft,Alex Mandl, andSam Nunn.Shareholderselect the nine board members at meetings, and those board members who do not get a majority of votes must submit a resignation to the board, which will subsequently choose whether or not to accept the resignation. The board of directors usually sets up five committees having oversight over specific matters. These committees include the Audit Committee, which handles accounting issues, including auditing and reporting; the Compensation Committee, which approves compensation for the CEO and other employees of the company; the Finance Committee, which handles financial matters such as proposed mergers and acquisitions; the Governance and Nominating Committee, which handles various corporate matters (including the nomination of the board); and the Antitrust Compliance Committee, which attempts to prevent company practices from violatingantitrustlaws.[citation needed]
Day-to-day operations of the company are run by the Global Executive Management Committee, which setsstrategic direction. Dell has regional senior vice-presidents for countries other than the United States.[citation needed]
Dell advertisements have appeared in several types of media including television, the Internet, magazines,catalogs, and newspapers. Some of Dell Inc's marketing strategies include lowering prices at all times of the year, free bonus products (such as Dell printers), and free shipping to encourage more sales and stave off competitors. In 2006, Dell cut its prices in an effort to maintain its 19.2% market share. This also cut profit margins by more than half, from 8.7 to 4.3 percent. To maintain its low prices, Dell continues to accept most purchases of its products via the Internet and through the telephone network, and to move its customer-care division to India andEl Salvador.[227]
A popular United States television and print ad campaign in the early 2000s featured the actorBen Curtisplaying the part of "Steven", a lightly mischievous blond-haired youth who came to the assistance of bereft computer purchasers. Each television advertisement usually ended with Steven's catch-phrase: "Dude, you're gettin' a Dell!"[228]
A subsequent advertising campaign featuredinternsat Dell headquarters (with Curtis' character appearing in a small cameo at the end of one of the first commercials in this particular campaign).
In 2007, Dell switched advertising agencies in the US fromBBDOtoWorking MotherMedia. In July 2007, Dell released new advertising created by Working Mother to support the Inspiron and XPS lines. The ads featured music from theFlaming LipsandDevowho re-formed especially to record the song in the ad "Work it Out". Also in 2007, Dell began using the slogan "Yours is here" to say that it customizes computers to fit customers' requirements.[229]
Beginning in 2011, Dell began hosting a conference in Austin, Texas, at the Austin Convention Center titled "Dell World". The event featured new technology and services provided by Dell and Dell's partners. In 2011, the event was held October 12–14.[230]In 2012, the event was held December 11–13.[231]In 2013, the event was held December 11–13.[232]In 2014, the event was held November 4–6.[233]
In late 2007, Dell Inc. announced that it planned to expand its program tovalue-added resellers(VARs), giving it the official name of "Dell Partner Direct" and a new Website.[234]
Dell India started an online e-commerce website[235] with its partner GNG Electronics Pvt Ltd (www.compuindia.com),[236] termed the Dell Express Ship Affiliate (DESA).
The main objective was to reduce delivery times. Customers who visit the official Dell India site are given the option to buy online, which redirects them to the Dell affiliate website compuindia.com.[207]
Dell also operates a captive analytics division, Dell Global Analytics (DGA), which supports pricing, web analytics, and supply chain operations. DGA operates as a single, centralized entity with a global view of Dell's business activities. The division supports over 500 internal customers worldwide and claims a quantified impact of over $500 million.[citation needed][237]
In 2008, Dell received press coverage over its claim of having the world's most secure laptops, specifically, its Latitude D630 and Latitude D830.[238]At Lenovo's request, the (US) National Advertising Division (NAD) evaluated the claim, and reported that Dell did not have enough evidence to support it.[239]
Dell opened its first retail stores in India.[207]
In the early 1990s, Dell sold its products throughBest Buy,CostcoandSam's Clubstores in the United States. Dell stopped this practice in 1994, citing low profit margins on the business, exclusively distributing through a direct-sales model for the next decade. In 2003, Dell briefly sold products inSearsstores in the US. In 2007, Dell started shipping its products to major retailers in the US once again, starting withSam's ClubandWal-Mart.Staples, the largest office-supply retailer in the US, and Best Buy, the largest electronics retailer in the US, became Dell retail partners later that same year.
Starting in 2002, Dell openedkiosklocations in the United States to allow customers to examine products before buying them directly from the company. Starting in 2005, Dell expandedkiosklocations to include shopping malls across Australia, Canada, Singapore and Hong Kong. On January 30, 2008, Dell announced it would shut down all 140 kiosks in the US due to expansion into retail stores.[240]By June 3, 2010, Dell had also shut down all of its mall kiosks in Australia.[241]
As of the end of February 2008, Dell products shipped to one of the largest office supply retailers in Canada, Staples Business Depot. In April 2008, Future Shop and Best Buy began carrying a subset of Dell products, such as certain desktops, laptops, printers, and monitors.
Since some shoppers in certain markets show reluctance to purchase technological products through the phone or the Internet, Dell has looked into opening retail operations in some countries in Central Europe and Russia. In April 2007, Dell opened a retail store inBudapest. In October of the same year, Dell opened a retail store in Moscow.
In the UK,HMV's flagshipTrocaderostore has sold Dell XPS PCs since December 2007. From January 2008 the UK stores ofDSGihave sold Dell products (in particular, throughCurrysandPC Worldstores). As of 2008, the large supermarket chainTescohas sold Dell laptops and desktops in outlets throughout the UK.
In May 2008, Dell reached an agreement with the office supply chain Officeworks (part of Coles Group) to stock a few modified models in the Inspiron desktop and notebook range. These models have slightly different model numbers, but almost replicate the ones available from the Dell Store. Dell continued its retail push in the Australian market with its partnership with Harris Technology (another part of Coles Group) in November of the same year. In addition, Dell expanded its retail distribution in Australia through an agreement with the discount electrical retailer The Good Guys, known for "Slashing Prices". Dell agreed to distribute a variety of makes of both desktops and notebooks, including Studio and XPS systems, in late 2008. Dell and Dick Smith Electronics (owned by Woolworths Limited) reached an agreement to expand within Dick Smith's 400 stores throughout Australia and New Zealand in May 2009, one year after the deal with Coles Group-owned Officeworks. The retailer agreed to distribute a variety of Inspiron and Studio notebooks, with minimal Studio desktops from the Dell range. As of 2009, Dell continued to run and operate its various kiosks in 18 shopping centers throughout Australia. On March 31, 2010, Dell announced to Australian kiosk employees that it was shutting down the Australian/New Zealand Dell kiosk program.
In Germany, Dell is selling selected smartphones and notebooks viaMedia Marktand Saturn, as well as some shopping websites.[242]
Dell's major competitors includeLenovo,Hewlett-Packard(HP),Hasee,Acer,Fujitsu,Toshiba,Gateway,Sony,Asus,MSI,Panasonic,SamsungandApple. Dell and its subsidiary, Alienware, compete in the enthusiast market against AVADirect,Falcon Northwest,VoodooPC(a subsidiary of HP), and other manufacturers. In the second quarter of 2006, Dell had between 18% and 19% share of the worldwide personal computer market, compared to HP with roughly 15%.
In late 2006, Dell lost its lead in the PC business to Hewlett-Packard. Both Gartner and IDC estimated that in the third quarter of 2006, HP shipped more units[243] worldwide than Dell did. Dell's 3.6% growth paled in comparison to HP's 15% growth during the same period. The problem got worse in the fourth quarter, when Gartner estimated[244] that Dell PC shipments declined 8.9% (versus HP's 23.9% growth). As a result, at the end of 2006 Dell's overall PC market share stood at 13.9% (versus HP's 17.4%).
IDC reported that Dell lost more server market share than any of the top four competitors in that arena. IDC's Q4 2006 estimates show Dell's share of the server market at 8.1%, down from 9.5% in the previous year. This represents an 8.8% loss year-over-year, primarily to competitors EMC and IBM. As of 2021, Dell is the third-largest PC manufacturer after Lenovo and HP.[245]
In 2001, Dell and EMC entered into a partnership whereby the two companies jointly designed products, and Dell provided support for certain EMC products, including midrange storage systems such as fibre channel and iSCSI storage area networks. The partnership also promoted and sold OEM versions of backup, recovery, replication and archiving software.[246] On December 9, 2008, Dell and EMC announced the multi-year extension, through 2013, of the strategic partnership with EMC. In addition, Dell expanded its product lineup by adding the EMC Celerra NX4 storage system to the portfolio of the Dell/EMC family of networked storage systems, and partnered on a new line of data deduplication products as part of its TierDisk family of data storage devices.[247]
On October 17, 2011, Dell discontinued reselling all EMC storage products, ending the partnership two years early.[248][249] Dell would later acquire and merge with EMC in the largest tech merger to date.
Dell committed to reducing greenhouse gas emissions from its global activities by 40% by 2015, with the 2008 fiscal year as the baseline year.[250]It is listed inGreenpeace's Guide to Greener Electronics that scores leading electronics manufacturers according to their policies on sustainability,climate and energyand how green their products are. In November 2011, Dell ranked 2nd out of 15 listed electronics makers (increasing its score to 5.1 from 4.9, which it gained in the previous ranking from October 2010).[251]
Dell was the first company to publicly state a timeline for the elimination of toxic polyvinyl chloride (PVC) and brominated flame retardants (BFRs), which it planned to phase out by the end of 2009. It revised this commitment and now aims to remove these toxics by the end of 2011, but only in its computing products.[252] In March 2010, Greenpeace activists protested at Dell offices in Bangalore, Amsterdam and Copenhagen, calling for Dell's founder and CEO Michael Dell to "drop the toxics" and claiming that Dell's aspiration to be 'the greenest technology company on the planet'[253] was "hypocritical".[254] Dell launched its first products completely free of PVC and BFRs, the G-Series monitors (G2210 and G2410), in 2009.[255]
In its 2012 report on progress relating toconflict minerals, theEnough Projectrated Dell the eighth-highest of 24 consumer electronics companies.[256]
Dell became the first company in theinformation technology industryto establish a product-recyclinggoal (in 2004) and completed the implementation of its global consumer recycling-program in 2006.[257]On February 6, 2007, the National Recycling Coalition awarded Dell its "Recycling Works" award for efforts to promote producer responsibility.[258]On July 19, 2007, Dell announced that it had exceeded targets in working to achieve a multi-year goal of recovering 275 million pounds of computer equipment by 2009. The company reported the recovery of 78 million pounds (nearly 40,000 tons) of IT equipment from customers in 2006, a 93-percent increase over 2005; and 12.4% of the equipment Dell sold seven years earlier.[259]
On June 5, 2007, Dell set a goal of becoming the greenest technology company on Earth for the long term.[260]The company launched azero-carboninitiative that includes:
Dell reports its environmental performance in an annualCorporate Social Responsibility(CSR) Report that follows theGlobal Reporting Initiative(GRI) protocol. Dell's 2008 CSR report ranked as "Application Level B" as "checked by GRI".[261]
The company aims to reduce its external environmental impact through an energy-efficient evolution of products, and also reduce its direct operational impact through energy-efficiency programs.[citation needed]
In the 1990s, Dell switched from using primarily ATX motherboards and PSUs to using boards and power supplies with mechanically identical but differently wired connectors. This meant customers wishing to upgrade their hardware had to replace parts with scarce Dell-compatible parts instead of commonly available ones. While motherboard power connections reverted to the industry standard in 2003, Dell remains secretive about its motherboard pin-outs for peripherals (such as MMC readers and power on/off switches and LEDs).[262][263]
In 2005, complaints about Dell more than doubled to 1,533, after earnings grew 52% that year.[264]
In 2006, Dell acknowledged that it had problems with customer service. Issues included call transfers[265] of more than 45% of calls and long wait times. Dell's blog detailed the response: "We're spending more than a $100 million—and a lot of blood, sweat, and tears of talented people—to fix this."[266] Later in the year, the company increased its spending on customer service to $150 million.[267] Since 2018, Dell has seen a significant increase in consumer satisfaction, and its customer service has been praised for prompt and accurate answers to most questions, especially those directed to its social media support.[268][269]
On August 17, 2007, Dell Inc. announced that after an internal investigation into its accounting practices it would restate and reduce earnings from 2003 through to the first quarter of 2007 by a total amount of between $50 million and $150 million, or 2 cents to 7 cents per share.[270]The investigation, begun in November 2006, resulted from concerns raised by theU.S. Securities and Exchange Commissionover some documents and information that Dell Inc. had submitted.[271]It was alleged that Dell had not disclosed large exclusivity payments received fromIntelfor agreeing not to buy processors from rival manufacturerAMD. In 2010 Dell finally paid $100 million (equivalent to $136,400,000 in 2023) to settle the SEC's charges of fraud. Michael Dell and other executives also paid penalties and suffered other sanctions, without admitting or denying the charges.[272]
In July 2009, Dell apologized after drawing the ire of the Taiwanese Consumer Protection Commission for twice refusing to honor a flood of orders placed at unusually low prices offered on its Taiwanese website. In the first instance, Dell offered a 19" LCD panel for $15. In the second instance, Dell offered its Latitude E4300 notebook at NT$18,558 (US$580), 70% lower than the usual price of NT$60,900 (US$1,900). Concerning the E4300, rather than honor the discount and take a significant loss, the firm withdrew the orders and offered each customer a voucher of up to NT$20,000 (US$625) in compensation. The consumer rights authorities in Taiwan fined Dell NT$1 million (US$31,250) for customer rights infringements. Many consumers sued the firm for unfair compensation. A court in southern Taiwan ordered the firm to deliver 18 laptops and 76 flat-panel monitors to 31 consumers for NT$490,000 (US$15,120), less than a third of the normal price.[273] The court said the events could hardly be regarded as mistakes, given that the firm had mispriced its products twice on its Taiwanese website within three weeks.[274]
After Michael Dell made a $24.4 billion buyout bid in August 2013 (equivalent to $31,470,000,000 in 2023), activist shareholderCarl Icahnsued the company and its board in an attempt to derail the bid and promote his own forthcoming offer.[275]
In 2020, theAustralian Strategic Policy Instituteaccused at least 82 major brands, including Dell, of being connected to forcedUyghurlabor inXinjiang.[276]
|
https://en.wikipedia.org/wiki/Dell
|
Ineconomicsandmarketing,product differentiation(or simply differentiation) is the process of distinguishing aproductorservicefrom others to make it moreattractiveto a particulartarget market. This involves differentiating it fromcompetitors' products as well as from afirm'sother products. The concept was proposed byEdward Chamberlinin his 1933 book,The Theory of Monopolistic Competition.[1]
Firms have differentresource endowmentsthat enable them to construct specificcompetitive advantagesover competitors.[2]Resource endowments allow firms to be different, which reduces competition and makes it possible to reach newsegmentsof the market. Thus, differentiation is the process of distinguishing the differences of a product or offering from others, to make it more attractive to a particulartarget market.[3]
Although research in aniche marketmay result in changing a product in order to improve differentiation, the changes themselves are not differentiation. Marketing or product differentiation is the process of describing the differences between products or services, or the resulting list of differences. This is done in order to demonstrate the unique aspects of a firm's product and create a sense ofvalue. Marketing textbooks are firm on the point that any differentiation must be valued by buyers[3](a differentiation attempt that is not perceived does not count). The termunique selling propositionrefers toadvertisingto communicate a product's differentiation.[4]
In economics, successful product differentiation leads to competitive advantage and is inconsistent with the conditions for perfect competition, which include the requirement that the products of competing firms should be perfect substitutes. There are three types of product differentiation: simple differentiation, based on a variety of characteristics; horizontal differentiation, based on a single characteristic for which consumers cannot objectively rank quality; and vertical differentiation, based on a single characteristic for which consumers can clearly rank quality.
The brand differences are mostly minor; they can be merely a difference in packaging or an advertising theme. The physical product need not change, but it may. Differentiation is due to buyers perceiving a difference; hence, causes of differentiation may be functional aspects of the product or service, how it is distributed and marketed, or who buys it. The major sources of product differentiation include differences in quality (usually accompanied by differences in price), differences in functional features or design, ignorance of buyers regarding the essential characteristics of products, sales promotion activities, and differences in availability (such as timing and location).
Theobjectiveof differentiation is to develop apositionthat potential customers see as unique. The term is used frequently when dealing withfreemiumbusiness models, in which businesses market a free and paid version of a given product. Given they target thesame group of customers, it is imperative that free and paid versions be effectively differentiated.
Differentiation primarily affects performance through reducing directness of competition: as the product becomes more different, categorization becomes more difficult and hence draws fewer comparisons with its competition. A successful product differentiation strategy will move a product from competing based primarily onpriceto competing on non-price factors (such as product characteristics, distribution strategy, or promotional variables).
Most people would say that the implication of differentiation is the possibility of charging aprice premium; however, this is an over-simplification. If customers value the firm's offer, they will be less sensitive to aspects of competing offers; price may not be one of these aspects. Differentiation makes customers in a given segment have a lower sensitivity to other features (non-price) of the product.[5]
Edward Chamberlin's (1933) seminal work on monopolistic competition mentioned the theory of differentiation, which maintained that, for available products within the same industry, customers may have different preferences. However, a generic strategy of differentiation popularized by Michael Porter (1980) proposed that differentiation is any product (tangible or intangible) perceived as "being unique" by at least one set of customers; hence, it depends on customers' perception of the extent of product differentiation. Even until 1999, the consequences of these concepts were not well understood. Miller (1986) proposed marketing and innovation as two differentiation strategies, which was supported by scholars such as Lee and Miller (1999). Mintzberg (1988) proposed more specific but broad categories: quality, design, support, image, price, and undifferentiated products, which received support from Kotha and Vadlamani (1995). However, the IO literature (Ethiraj & Zhu, 2008; Makadok, 2010, 2011) analyzed the theory more deeply and drew a clear distinction between the wide use of vertical and horizontal differentiation.[6]
Vertical product differentiation can be measured objectively by a consumer. For example, when comparing two similar products, the quality and price can clearly be identified and ranked by the customer. If products A and B have the same price to the consumer, then the market share for each one will be positive, according to the Hotelling model. The major theory in this is that all consumers prefer the higher-quality product if two distinct products are offered at the same price. A product can differ in many vertical attributes, such as its operating speed. What really matters is the relationship between consumers' willingness to pay for improvements in quality and the increase in cost per unit that comes with such improvements. While the quality ranking itself is objective, consumers differ in how much they are willing to pay for improvements in quality.[7] For example, a green product might have a lower or zero negative effect on the environment; however, it may turn out to be inferior to other products in other aspects. Hence, the product's appeal also depends on the way it is advertised and the social pressure felt by a potential consumer. Even one vertical differentiation can be a decisive factor in purchasing.[8]
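A textbook way to formalize vertical differentiation (an illustrative sketch; the utility function and symbols below are assumed rather than taken from the cited sources) gives each consumer a taste parameter θ for quality:

% Vertical differentiation: a consumer with taste \theta > 0 for quality q,
% buying at price p, receives net utility
U(\theta) = \theta q - p
% Facing products 1 and 2 with q_1 > q_2, the consumer buys product 1 iff
\theta q_1 - p_1 \ge \theta q_2 - p_2
\iff \theta \ge \theta^* = \frac{p_1 - p_2}{q_1 - q_2}
% At equal prices (p_1 = p_2), every consumer with \theta > 0 prefers the
% higher-quality product; at p_1 > p_2, the market splits at the threshold \theta^*.

This captures the point above: the quality ranking is common to all consumers, but the willingness to pay for quality (θ) is not.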
Horizontal differentiation seeks to affect an individual's subjective decision-making; that is, the difference cannot be measured in an objective way. Examples include different color versions of the same iPhone or MacBook. A lemon ice cream is not superior to a chocolate ice cream; the choice is based entirely on the user's preference. A restaurant may price all of its desserts at the same price and let the consumer freely choose among them, since all the alternatives cost the same.[9] A clear example of horizontal product differentiation can be seen when comparing Coca-Cola and Pepsi: if priced the same, individuals differentiate between the two based purely on their own taste preference.
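The classic formalization of horizontal differentiation is Hotelling's line model (again a textbook sketch; the notation is assumed, not taken from the text above):

% Hotelling line: consumers are spread uniformly over tastes x \in [0, 1];
% a consumer at x buying from firm i located at x_i for price p_i gets
U_i(x) = v - t\,|x - x_i| - p_i
% With firms at the endpoints (x_1 = 0, x_2 = 1), the indifferent consumer is at
x^* = \tfrac{1}{2} + \frac{p_2 - p_1}{2t}

Here v is the product's gross value and t a "transportation" (taste-mismatch) cost; because consumers trade off price against mismatch, a firm that charges slightly more than its rival loses demand smoothly rather than losing the whole market, and the larger t is, the more differentiated the two products effectively are.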
While product differentiation is typically broken into two types, vertical and horizontal, it is important to note that all products exhibit a combination of both, and that these are not the only ways to define differentiation. Another way to differentiate a product is through spatial differentiation, which uses geographical location as the differentiator.[10] An example of spatial differentiation is a firm locally sourcing inputs and producing its product locally.
According to research combining mathematics and economics, pricing decisions depend on the substitutability between products, and the level of substitutability varies as the degree of differentiation between firms' products changes. A firm cannot charge a higher price if products are good substitutes; conversely, as a product deviates from others in its segment, its producer can begin to charge a higher price. The lower the differentiation, the lower the non-cooperative equilibrium price. For this reason, firms might jointly raise prices above the equilibrium or competitive level by coordinating among themselves through a verbal or written collusion agreement. Firms operating in a market of low product differentiation might not coordinate with others, which increases the incentive to cheat on a collusion agreement: if products are highly substitutable, a firm that slightly lowers its prices can capture a large fraction of the market and obtain short-term profits.[11]
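A minimal numerical sketch of the link between differentiation and the non-cooperative price (an illustrative standard model, not the one used in the cited research) is the differentiated Bertrand duopoly with demand q_i = (a(1 − g) − p_i + g·p_j)/(1 − g²), where g ∈ [0, 1) measures substitutability and marginal cost is zero; maximizing p_i·q_i gives the best response p_i = (a(1 − g) + g·p_j)/2, so the symmetric equilibrium price is p* = a(1 − g)/(2 − g):

# Differentiated Bertrand duopoly (illustrative sketch; parameter values are hypothetical).
# Demand: q_i = (a*(1-g) - p_i + g*p_j) / (1 - g**2), substitutability g in [0, 1).
# The symmetric Nash equilibrium price is p* = a*(1-g) / (2-g):
# less differentiation (higher g) means a lower equilibrium price.

def equilibrium_price(a: float, g: float) -> float:
    """Symmetric equilibrium price; falls as products become closer substitutes."""
    return a * (1 - g) / (2 - g)

a = 10.0  # demand intercept (hypothetical)
for g in (0.0, 0.3, 0.6, 0.9, 0.99):
    print(f"g = {g:4.2f}  ->  p* = {equilibrium_price(a, g):5.2f}")

With g = 0 (independent products) each firm charges the monopoly-like price a/2; as g approaches 1 (near-perfect substitutes) the price falls toward marginal cost, the classic Bertrand outcome, matching the claim that lower differentiation yields a lower non-cooperative equilibrium price.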
Product differentiation within a given market segment can have both positive and negative effects on consumers. From the producer's perspective, building a product different from competitors' can create a competitive advantage that results in higher profits. Through differentiation consumers gain greater value from a product; however, this leads to increased demand and market segmentation, which can have anti-competitive effects on price.[12] From this perspective, greater diversity means more choices, so each individual can purchase a product better suited to them; the downside is that prices within the market segment tend to rise. The level of differentiation between goods can also affect demand. For example, within grocery stores, if a category of goods is relatively undifferentiated, then a large assortment depth leads to lower sales.[13]
During the 1990s, government steps toward deregulation and European integration persuaded banks to compete for deposits on many factors, such as deposit rates, accessibility, and the quality of financial services.[14]
In this example using the Hotelling model, one feature is variety (location) and one is quality (remote access), where remote access means using bank services through postal and telephonic channels, such as arranging payment facilities and obtaining account information. In this model, banks cannot become vertically differentiated without negatively affecting the horizontal differentiation between them.[14]
Horizontal differentiation occurs through the location of a bank's branch. Vertical differentiation, in this example, occurs whenever one bank offers remote access and the other does not. Remote access introduces a negative interaction between the transportation rate and the taste for quality: customers with a higher taste for remote access face a lower transportation rate.[14]
A depositor with a high (low) taste for remote access has low (high) linear transportation costs. Different equilibria emerge as the result of two effects. On the one hand, introducing remote access steals depositors from the competitor because the product specification becomes more appealing (direct effect). On the other hand, banks become closer substitutes (indirect effect). First, banks become closer substitutes as the impact of linear transportation costs decreases. Second, deposit rate competition is affected by the size of the quality difference. These two effects, "stealing" depositors versus "substitutability" between banks, determine the equilibrium. For low and high values of the ratio of quality difference to transportation rate, only one bank offers remote access (specialization). Intermediate (very low) values of the ratio of quality difference to transportation costs yield universal (no) remote access.[14]
This competition is a two-factor game: one factor is the offering of remote access and the other is deposit rates. Hypothetically, two scenarios follow if only one bank offers remote access. First, the bank gains a positive market share for all tastes for remote access, giving rise to horizontal dominance; this occurs when the transportation cost prevails over the quality of service, deposit rate, and time. Second, vertical dominance comes into the picture when the bank not offering remote access captures the entire market of depositors with the lowest preference for remote access; that is, when quality of service, deposit rate, and time prevail over the cost of transportation.[14]
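A stylized sketch of the indifference condition driving these outcomes (the notation is illustrative, not taken from the cited paper): let depositors be located at $x \in [0,1]$ with banks at the endpoints, let bank A alone offer remote access of quality $q$, let $\theta$ be a depositor's taste for remote access, and let the transportation rate $t(\theta)$ be decreasing in $\theta$. With deposit rates $r_A$ and $r_B$, a depositor of type $\theta$ at location $x$ compares

$$u_A = r_A + \theta q - t(\theta)\,x \qquad\text{and}\qquad u_B = r_B - t(\theta)\,(1 - x),$$

so the indifferent depositor sits at

$$x^*(\theta) = \frac{1}{2} + \frac{r_A - r_B + \theta q}{2\,t(\theta)}.$$

The ratio of the quality difference to the transportation rate, $\theta q / t(\theta)$, is exactly the quantity whose magnitude separates the specialization and universal-access equilibria described above.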
|
https://en.wikipedia.org/wiki/Product_differentiation
|
Product managementis the business process of planning, developing, launching, and managing a product or service. It includes the entire lifecycle of a product, from ideation to development togo to market.Product managersare responsible for ensuring that a product meets the needs of its target market and contributes to the business strategy, while managing a product or products at all stages of theproduct lifecycle.Software product managementadapts the fundamentals of product management for digital products.
The concept of product management originates from a 1931 memo by Procter & Gamble PresidentNeil H. McElroy. McElroy, requesting additional employees focused on brand management, needed "Brand Men" who would take on the role of managing products, packaging, positioning, distribution, and sales performance.
The memo defined a brand man's work in terms that, in modern language, amount to: analyzing product distribution, optimizing distribution strategies, diagnosing and solving distribution issues, optimizing product positioning and product marketing, and collaborating with regional distribution managers.
Product managersare responsible for managing a company's product line on a day-to-day basis. As a result, product managers are critical in driving a company's growth, margins, and revenue. They are responsible for the business case, conceptualizing, planning,product development,product marketing, and delivering products to their target market. Depending on the company's size, industry, and history, product management has a variety of functions and roles. Frequently there is anincome statement(or profit and loss) responsibility as a key metric for evaluating product manager performance.
Product managers analyze information including customer research, competitive intelligence, industry analysis, trends, economic signals, and competitive activity,[1]as well as documenting requirements, settingproduct strategy, and creating the roadmap. Product managers align across departments within their company including product design and development, marketing, sales, customer support, and legal.
|
https://en.wikipedia.org/wiki/Product_management
|
3D printing, oradditive manufacturing, is theconstructionof athree-dimensional objectfrom aCADmodel or a digital3D model.[1][2][3]It can be done in a variety of processes in which material is deposited, joined or solidified undercomputer control,[4]with the material being added together (such as plastics, liquids or powder grains being fused), typically layer by layer.
In the 1980s, 3D printing techniques were considered suitable only for the production of functional or aesthetic prototypes, and a more appropriate term for it at the time wasrapid prototyping.[5]As of 2019[update], the precision, repeatability, and material range of 3D printing have increased to the point that some 3D printing processes are considered viable as an industrial-production technology; in this context, the termadditive manufacturingcan be used synonymously with3D printing.[6]One of the key advantages of 3D printing[7]is the ability to produce very complex shapes or geometries that would be otherwise infeasible to construct by hand, including hollow parts or parts with internaltrussstructures to reduce weight while creating less material waste.Fused deposition modeling(FDM), which uses a continuous filament of athermoplasticmaterial, is the most common 3D printing process in use as of 2020[update].[8]
Theumbrella termadditive manufacturing (AM)gained popularity in the 2000s,[9]inspired by the theme of material being added together (in any of various ways). In contrast, the termsubtractive manufacturingappeared as aretronymfor the large family ofmachiningprocesses with materialremovalas their common process. The term3D printingstill referred only to the polymer technologies in most minds, and the termAMwas more likely to be used in metalworking and end-use part production contexts than among polymer, inkjet, or stereolithography enthusiasts.
By the early 2010s, the terms3D printingandadditive manufacturingevolvedsensesin which they were alternate umbrella terms for additive technologies, one being used in popular language by consumer-maker communities and the media, and the other used more formally by industrial end-use part producers, machine manufacturers, and global technical standards organizations. Until recently, the term3D printinghas been associated with machines low in price or capability.[10]3D printingandadditive manufacturingreflect that the technologies share the theme of material addition or joining throughout a 3D work envelope under automated control. Peter Zelinski, the editor-in-chief ofAdditive Manufacturingmagazine, pointed out in 2017 that the terms are still oftensynonymousin casual usage,[11]but some manufacturing industry experts are trying to make a distinction whereby additive manufacturingcomprises3D printing plus other technologies or other aspects of amanufacturing process.[11]
Other terms that have been used as synonyms orhypernymshave includeddesktop manufacturing,rapid manufacturing(as the logical production-level successor torapid prototyping), andon-demand manufacturing(which echoeson-demand printingin the 2D sense ofprinting). The fact that the application of the adjectivesrapidandon-demandto the nounmanufacturingwas novel in the 2000s reveals the long-prevailingmental modelof the previous industrial era during which almost all production manufacturing had involved longlead timesfor laborious tooling development. Today, the termsubtractivehas not replaced the termmachining, insteadcomplementingit when a term that covers any removal method is needed.Agile toolingis the use of modular means to design tooling that is produced by additive manufacturing or 3D printing methods to enable quickprototypingand responses to tooling and fixture needs. Agile tooling uses a cost-effective and high-quality method to quickly respond to customer and market needs, and it can be used inhydroforming,stamping,injection moldingand other manufacturing processes.
The general concept of and procedure to be used in 3D-printing was first described byMurray Leinsterin his 1945 short story "Things Pass By": "But this constructor is both efficient and flexible. I feed magnetronic plastics — the stuff they make houses and ships of nowadays — into this moving arm. It makes drawings in the air following drawings it scans with photo-cells. But plastic comes out of the end of the drawing arm and hardens as it comes ... following drawings only"[12]
It was also described byRaymond F. Jonesin his story, "Tools of the Trade", published in the November 1950 issue ofAstounding Science Fictionmagazine. He referred to it as a "molecular spray" in that story.
In 1971, Johannes F Gottwald patented the Liquid Metal Recorder, U.S. patent 3596285A,[13]a continuous inkjet metal material device to form a removable metal fabrication on a reusable surface for immediate use or salvaged for printing again by remelting. This appears to be the first patent describing 3D printing with rapid prototyping and controlled on-demand manufacturing of patterns.
The patent states:
As used herein the term printing is not intended in a limited sense but includes writing or other symbols, character or pattern formation with an ink. The term ink as used herein is intended to include not only dye or pigment-containing materials, but any flowable substance or composition suited for application to the surface for forming symbols, characters, or patterns of intelligence by marking. The preferred ink is of a hot melt type. The range of commercially available ink compositions which could meet the requirements of the invention are not known at the present time. However, satisfactory printing according to the invention has been achieved with the conductive metal alloy as ink.
But in terms of material requirements for such large and continuous displays, if consumed at theretofore known rates, but increased in proportion to increase in size, the high cost would severely limit any widespread enjoyment of a process or apparatus satisfying the foregoing objects.
It is therefore an additional object of the invention to minimize the use of materials in a process of the indicated class.
It is a further object of the invention that materials employed in such a process be salvaged for reuse.
According to another aspect of the invention, a combination for writing and the like comprises a carrier for displaying an intelligence pattern and an arrangement for removing the pattern from the carrier.
In 1974,David E. H. Joneslaid out the concept of 3D printing in his regular columnAriadnein the journalNew Scientist.[14][15]
Early additive manufacturing equipment and materials were developed in the 1980s.[16]
In April 1980, Hideo Kodama of Nagoya Municipal Industrial Research Institute invented two additive methods for fabricating three-dimensional plastic models with photo-hardening thermoset polymer, where the UV exposure area is controlled by a mask pattern or a scanning fiber transmitter.[17]He filed a patent for this XYZ plotter, which was published on 10 November 1981 (JP S56-144478).[18]His research results were published as journal papers in April and November 1981.[19][20]However, there was no reaction to this series of publications. His device was not highly regarded in the laboratory, and his boss showed no interest. His research budget was just 60,000 yen, or $545, a year. Acquiring the patent rights for the XYZ plotter was abandoned, and the project was terminated.
A US 4323756 patent,method of fabricating articles by sequential deposition, granted on 6 April 1982 to Raytheon Technologies Corp describes using hundreds or thousands of "layers" of powdered metal and a laser energy source and represents an early reference to forming "layers" and the fabrication of articles on a substrate.
On 2 July 1984, American entrepreneurBill Mastersfiled a patent for his computer automated manufacturing process and system (US 4665492).[21]This filing is on record at theUSPTOas the first 3D printing patent in history; it was the first of three patents belonging to Masters that laid the foundation for the 3D printing systems used today.[22][23]
On 16 July 1984,Alain Le Méhauté, Olivier de Witte, and Jean Claude André filed their patent for thestereolithographyprocess.[24]The application of the French inventors was abandoned by the French General Electric Company (now Alcatel-Alsthom) andCILAS(The Laser Consortium).[25]The claimed reason was "for lack of business perspective".[26]
In 1983, Robert Howard started R.H. Research, renamed Howtek, Inc. in February 1984, to develop a color inkjet 2D printer, the Pixelmaster, commercialized in 1986, using thermoplastic (hot-melt) plastic ink.[27]A team was put together, six members[27]from Exxon Office Systems, Danbury Systems Division, an inkjet printer startup, plus some members of the Howtek, Inc. group who became popular figures in the 3D printing industry. One Howtek member, Richard Helinski (patent US5136515A, Method and Means for Constructing Three-Dimensional Articles by Particle Deposition, filed 11/07/1989, granted 8/04/1992), formed a New Hampshire company, C.A.D-Cast, Inc, whose name was changed to Visual Impact Corporation (VIC) on 8/22/1991. A prototype of the VIC 3D printer for this company is available with a video presentation showing a 3D model printed with a single-nozzle inkjet. Another employee, Herbert Menhennett, formed a New Hampshire company, HM Research, in 1991 and introduced the Howtek, Inc. inkjet technology and thermoplastic materials to Royden Sanders of SDI and Bill Masters of Ballistic Particle Manufacturing (BPM), where he worked for a number of years. Both BPM 3D printers and SPI 3D printers use Howtek, Inc. style inkjets and Howtek, Inc. style materials. Royden Sanders licensed the Helinski patent prior to manufacturing the Modelmaker 6 Pro at Sanders Prototype, Inc (SPI) in 1993. James K. McMahon, who was hired by Howtek, Inc. to help develop the inkjet, later worked at Sanders Prototype and now operates Layer Grown Model Technology, a 3D service provider specializing in Howtek single-nozzle inkjet and SDI printer support. James K. McMahon worked with Steven Zoltan, the 1972 drop-on-demand inkjet inventor, at Exxon, and has a 1978 patent that expanded the understanding of single-nozzle design inkjets (Alpha jets) and helped perfect the Howtek, Inc. hot-melt inkjets. This Howtek hot-melt thermoplastic technology is popular with metal investment casting, especially in the 3D printing jewelry industry.[28]Sanders' (SDI) first Modelmaker 6Pro customer was Hitchner Corporation's Metal Casting Technology, Inc in Milford, NH, a mile from the SDI facility, which cast golf clubs and auto engine parts from late 1993 to 1995.
On 8 August 1984, Chuck Hull, later of 3D Systems Corporation,[29]filed his own patent (US4575330, first assigned to UVP, Inc.) for a stereolithography fabrication system, in which individual laminae or layers are added by curing photopolymers with impinging radiation, particle bombardment, chemical reaction or just ultraviolet light lasers. Hull defined the process as a "system for generating three-dimensional objects by creating a cross-sectional pattern of the object to be formed".[30][31]Hull's contribution was the STL (Stereolithography) file format and the digital slicing and infill strategies common to many processes today. In 1986, Charles "Chuck" Hull was granted a patent for this system, and his company, 3D Systems Corporation, was formed; it released the first commercial 3D printer, the SLA-1,[32]in 1987 or 1988.
The technology used by most 3D printers to date—especially hobbyist and consumer-oriented models—isfused deposition modeling, a special application of plasticextrusion, developed in 1988 byS. Scott Crumpand commercialized by his companyStratasys, which marketed its first FDM machine in 1992.[28]
Owning a 3D printer in the 1980s cost upwards of $300,000 ($650,000 in 2016 dollars).[33]
AM processes for metal sintering or melting (such asselective laser sintering,direct metal laser sintering, and selective laser melting) usually went by their own individual names in the 1980s and 1990s. At the time, all metalworking was done by processes that are now called non-additive (casting,fabrication,stamping, andmachining); although plenty ofautomationwas applied to those technologies (such as byrobot weldingandCNC), the idea of a tool or head moving through a 3D work envelope transforming a mass ofraw materialinto a desired shape with a toolpath was associated in metalworking only with processes that removed metal (rather than adding it), such as CNCmilling, CNCEDM, and many others. However, the automated techniques thataddedmetal, which would later be called additive manufacturing, were beginning to challenge that assumption. By the mid-1990s, new techniques for material deposition were developed atStanfordandCarnegie Mellon University, including microcasting[34]and sprayed materials.[35]Sacrificial and support materials had also become more common, enabling new object geometries.[36]
The term3D printingoriginally referred to a powder bed process employing standard and custominkjetprint heads, developed atMITby Emanuel Sachs in 1993 and commercialized by Soligen Technologies, Extrude Hone Corporation, andZ Corporation.[citation needed]
The year 1993 also saw the start of an inkjet 3D printer company initially named Sanders Prototype, Inc and later namedSolidscape, introducing a high-precision polymer jet fabrication system with soluble support structures, (categorized as a "dot-on-dot" technique).[28]
In 1995 theFraunhofer Societydeveloped theselective laser meltingprocess.
In the early 2000s, 3D printers were still largely used only in the manufacturing and research industries, as the technology was relatively young and too expensive for most consumers. The 2000s saw the beginning of larger-scale use of the technology in industry, most often in architecture and medicine, though it was typically used for low-accuracy modeling and testing rather than the production of common manufactured goods or heavy prototyping.[37]
In 2005, users began to design and distribute plans for 3D printers that could print around 70% of their own parts. The original plans were designed by Adrian Bowyer at the University of Bath in 2004 under the project name RepRap (Replicating Rapid-prototyper).[38]
Similarly, in 2006 the Fab@Home project was started by Evan Malone andHod Lipson, another project whose purpose was to design a low-cost and open source fabrication system that users could develop on their own and post feedback on, making the project very collaborative.[39]
Much of the software for 3D printing available to the public at the time wasopen source, and as such was quickly distributed and improved upon by many individual users. In 2009 the Fused Deposition Modeling (FDM) printing process patents expired. This opened the door to a new wave of startup companies, many of which were established by major contributors of these open source initiatives, with the goal of many of them being to start developing commercial FDM 3D printers that were more accessible to the general public.[40]
As the various additive processes matured, it became clear that soon metal removal would no longer be the onlymetalworkingprocess done through a tool or head moving through a 3D work envelope, transforming a mass of raw material into a desired shape layer by layer. The 2010s were the first decade in which metal end-use parts such as engine brackets[41]and large nuts[42]would be grown (either before or instead of machining) injob productionrather thanobligatelybeing machined frombar stockor plate. It is still the case that casting, fabrication, stamping, and machining are more prevalent than additive manufacturing in metalworking, but AM is now beginning to make significant inroads, and with the advantages ofdesign for additive manufacturing, it is clear to engineers that much more is to come.
One place that AM is making a significant inroad is in the aviation industry. With nearly 3.8 billion air travelers in 2016,[43]the demand for fuel efficient and easily produced jet engines has never been higher. For large OEMs (original equipment manufacturers) like Pratt and Whitney (PW) and General Electric (GE) this means looking towards AM as a way to reduce cost, reduce the number of nonconforming parts, reduce weight in the engines to increase fuel efficiency and find new, highly complex shapes that would not be feasible with the antiquated manufacturing methods. One example of AM integration with aerospace was in 2016 when Airbus delivered the first of GE'sLEAPengines. This engine has integrated 3D-printed fuel nozzles, reducing parts from 20 to 1, a 25% weight reduction, and reduced assembly times.[44]A fuel nozzle is the perfect inroad for additive manufacturing in a jet engine since it allows for optimized design of the complex internals and it is a low-stress, non-rotating part. Similarly, in 2015, PW delivered their first AM parts in the PurePower PW1500G to Bombardier. Sticking to low-stress, non-rotating parts, PW selected the compressor stators and synch ring brackets[45]to roll out this new manufacturing technology for the first time. While AM is still playing a small role in the total number of parts in the jet engine manufacturing process, the return on investment can already be seen by the reduction in parts, the rapid production capabilities and the "optimized design in terms of performance and cost".[46]
As technology matured, several authors began to speculate that 3D printing could aid insustainable developmentin the developing world.[47]
In 2012, Filabot developed a system for closing the loop[48]with plastic and allows for any FDM or FFF 3D printer to be able to print with a wider range of plastics.
In 2014,Benjamin S. Cookand Manos M. Tentzeris demonstrated the first multi-material, vertically integrated printed electronics additive manufacturing platform (VIPRE) which enabled 3D printing of functional electronics operating up to 40 GHz.[49]
As the price of printers started to drop, people interested in the technology had more access and freedom to make what they wanted. As of 2014, the price for commercial printers was still high, at over $2,000.[50]
The term "3D printing" originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads layer by layer. More recently, the popular vernacular has started using the term to encompass a wider variety of additive-manufacturing techniques such as electron-beam additive manufacturing and selective laser melting. The United States and global technical standards use the official termadditive manufacturingfor this broader sense.
The most commonly used 3D printing process (46% as of 2018[update]) is a material extrusion technique calledfused deposition modeling, or FDM.[8]While FDM technology was invented after the other two most popular technologies, stereolithography (SLA) and selective laser sintering (SLS), FDM is typically the most inexpensive of the three by a large margin,[citation needed]which lends to the popularity of the process.
As of 2020, 3D printers have reached the level of quality and price that allows most people to enter the world of 3D printing. In 2020 decent quality printers can be found for less than US$200 for entry-level machines. These more affordable printers are usuallyfused deposition modeling(FDM) printers.[51]
In November 2021 a British patient named Steve Verze received the world's first fully 3D-printed prosthetic eye from theMoorfields Eye HospitalinLondon.[52][53]
In April 2024, the world's largest 3D printer, the Factory of the Future 1.0 was revealed at theUniversity of Maine. It is able to make objects 96 feet long, or 29 meters.[54]
In 2024, researchers usedmachine learningto improve the construction of synthetic bone[55]and set a record for shock absorption.[56]
In July 2024, researchers published a paper inAdvanced Materials Technologiesdescribing the development of artificial blood vessels using 3D-printing technology, which are as strong and durable as naturalblood vessels.[57]The process involved using a rotating spindle integrated into a 3D printer to create grafts from a water-based gel, which were then coated in biodegradable polyester molecules.[57]
Additive manufacturing or 3D printing has rapidly gained importance in the field of engineering due to its many benefits. The vision of 3D printing is design freedom, individualization,[58]decentralization[59]and executing processes that were previously impossible through alternative methods.[60]Some of these benefits include enabling faster prototyping, reducing manufacturing costs, increasing product customization, and improving product quality.[61]
Furthermore, the capabilities of 3D printing have extended beyond traditional manufacturing, like lightweight construction,[62]or repair and maintenance[63]with applications in prosthetics,[64]bioprinting,[65]food industry,[66]rocket building,[67]design and art[68]and renewable energy systems.[69]3D printing technology can be used to produce battery energy storage systems, which are essential for sustainable energy generation and distribution.
Another benefit of 3D printing is the technology's ability to produce complex geometries with high precision and accuracy.[70]This is particularly relevant in the field of microwave engineering, where 3D printing can be used to produce components with unique properties that are difficult to achieve using traditional manufacturing methods.[71]
Additive Manufacturing processes generate minimal waste by adding material only where needed, unlike traditional methods that cut away excess material.[72]This reduces both material costs and environmental impact.[73]This reduction in waste also lowers energy consumption for material production and disposal, contributing to a smallercarbon footprint.[74][75]
3D printable models may be created with a computer-aided design (CAD) package, via a 3D scanner, or by a plain digital camera and photogrammetry software. 3D printed models created with CAD result in relatively fewer errors than those created by other methods, and errors in 3D printable models can be identified and corrected before printing.[76]The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D scanning is a process of collecting digital data on the shape and appearance of a real object and creating a digital model based on it.
CAD models can be saved in thestereolithography file format (STL), a de facto CAD file format for additive manufacturing that stores data based on triangulations of the surface of CAD models. STL is not tailored for additive manufacturing because it generates large file sizes of topology-optimized parts and lattice structures due to the large number of surfaces involved. A newer CAD file format, theadditive manufacturing file format (AMF), was introduced in 2011 to solve this problem. It stores information using curved triangulations.[77]
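For illustration, an ASCII STL file is nothing more than a list of triangles, each with a facet normal and three vertices; a minimal single-facet file looks like the sketch below (the geometry is chosen arbitrarily):

```
solid example
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid example
```

Because every triangle is spelled out independently, with no shared-vertex topology or curvature information, finely tessellated lattices translate into very large files, which is the limitation AMF's curved triangulations address.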
Before printing a 3D model from an STL file, it must first be examined for errors. Most CAD applications produce errors in output STL files,[78][79]of the following types: holes, face normal errors, self-intersections, noise shells, and manifold errors.[80][81]
A step in the STL generation known as "repair" fixes such problems in the original model.[82][83]Generally, STLs that have been produced from a model obtained through3D scanningoften have more of these errors[84]as 3D scanning is often achieved by point to point acquisition/mapping.3D reconstructionoften includes errors.[85]
Once completed, the STL file needs to be processed by a piece of software called a "slicer", which converts the model into a series of thin layers and produces aG-codefile containing instructions tailored to a specific type of 3D printer (FDM printers).[86]This G-code file can then be printed with 3D printing client software (which loads the G-code and uses it to instruct the 3D printer during the 3D printing process).
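As a hedged illustration of what a slicer emits, the opening of a G-code file for an FDM printer might look like the following hand-written sketch (temperatures, coordinates, and feed rates are arbitrary, and real slicer output contains far more):

```
G28                 ; home all axes
M104 S200           ; set hotend temperature (material-dependent)
M109 S200           ; wait for the hotend to reach temperature
G1 Z0.2 F300        ; move to the first layer height
G1 X50 Y50 E5 F1500 ; straight move while extruding; E is filament length
G1 Z0.4 F300        ; step up to the next layer and continue
```

The commands shown (G28, M104/M109, G1) are standard in RepRap-style firmware; the client software then streams lines like these to the printer one at a time.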
Printer resolution describes layer thickness and X–Y resolution indots per inch(dpi) ormicrometers(μm). Typical layer thickness is around 100μm(250DPI), although some machines can print layers as thin as 16 μm (1,600 DPI).[87]X–Y resolution is comparable to that oflaser printers. The particles (3D dots) are around 0.01 to 0.1 μm (2,540,000 to 250,000 DPI) in diameter.[88]For that printer resolution, specifying a mesh resolution of0.01–0.03 mmand a chord length≤ 0.016 mmgenerates an optimal STL output file for a given model input file.[89]Specifying higher resolution results in larger files without increase in print quality.
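The dpi figures above follow directly from the definition of the inch (25,400 μm); a one-line conversion, sketched in Python, makes the rounding explicit:

```python
# Dots per inch for a feature of a given size in micrometres;
# one inch = 25,400 micrometres.
def um_to_dpi(size_um: float) -> float:
    return 25400.0 / size_um

print(um_to_dpi(100))  # 254.0   -> quoted above as ~250 DPI
print(um_to_dpi(16))   # 1587.5  -> quoted above as ~1,600 DPI
```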
Construction of a model with contemporary methods can take anywhere from several hours to several days, depending on the method used and the size and complexity of the model. Additive systems can typically reduce this time to a few hours, although it varies widely depending on the type of machine used and the size and number of models being produced simultaneously.
Though the printer-produced resolution and surface finish are sufficient for some applications, post-processing and finishing methods allow for benefits such as greater dimensional accuracy, smoother surfaces, and other modifications such as coloration.
The surface finish of a 3D-printed part can be improved using subtractive methods such as sanding and bead blasting. When smoothing parts that require dimensional accuracy, it is important to take into account the volume of the material being removed.[90]
Some printable polymers, such asacrylonitrile butadiene styrene(ABS), allow the surface finish to be smoothed and improved using chemical vapor processes[91]based onacetoneor similar solvents.
Some additive manufacturing techniques can benefit fromannealingas a post-processing step. Annealing a 3D-printed part allows for better internal layer bonding due to recrystallization of the part. It allows for an increase in mechanical properties, some of which arefracture toughness,[92]flexural strength,[93]impact resistance,[94]andheat resistance.[94]Annealing a component may not be suitable for applications where dimensional accuracy is required, as it can introduce warpage or shrinkage due to heating and cooling.[95]
Additive or subtractive hybrid manufacturing (ASHM) is a method that involves producing a 3D printed part and usingmachining(subtractive manufacturing) to remove material.[96]Machining operations can be completed after each layer, or after the entire 3D print has been completed depending on the application requirements. These hybrid methods allow for 3D-printed parts to achieve better surface finishes and dimensional accuracy.[97]
The layered structure of traditional additive manufacturing processes leads to a stair-stepping effect on part-surfaces that are curved or tilted with respect to the building platform. The effect strongly depends on the layer height used, as well as the orientation of a part surface inside the building process.[98]This effect can be minimized using "variable layer heights" or "adaptive layer heights". These methods decrease the layer height in places where higher quality is needed.[99]
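One common way to implement adaptive layer heights is the cusp-height rule: the stair-step ("cusp") height scales with the layer thickness times the cosine of the angle between the local surface normal and the build axis, so the slicer thins layers where surfaces approach horizontal. A minimal sketch in Python, with parameter names and limits chosen purely for illustration:

```python
import math

def adaptive_layer_height(normal_angle_deg: float,
                          max_cusp_mm: float = 0.05,
                          h_min_mm: float = 0.08,
                          h_max_mm: float = 0.30) -> float:
    """Largest layer height keeping the stair-step (cusp) height below
    max_cusp_mm. normal_angle_deg is the angle between the local surface
    normal and the vertical build axis: 0 deg is a near-horizontal surface
    (worst stair-stepping), 90 deg is a vertical wall (none)."""
    cos_n = abs(math.cos(math.radians(normal_angle_deg)))
    if cos_n < 1e-9:                 # vertical wall: no cusp constraint
        return h_max_mm
    # cusp = layer_height * cos_n, so keep layer_height <= max_cusp / cos_n
    return max(h_min_mm, min(h_max_mm, max_cusp_mm / cos_n))

print(adaptive_layer_height(5))   # shallow surface -> thin layers (h_min)
print(adaptive_layer_height(80))  # steep wall -> thicker layers allowed
```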
Painting a 3D-printed part offers a range of finishes and appearances that may not be achievable through most 3D printing techniques. The process typically involves several steps, such as surface preparation, priming, and painting.[100]These steps help prepare the surface of the part and ensure the paint adheres properly.
Some additive manufacturing techniques are capable of using multiple materials simultaneously. These techniques are able to print in multiple colors and color combinations simultaneously and can produce parts that may not necessarily require painting.
Some printing techniques require internal supports to be built to support overhanging features during construction. These supports must be mechanically removed or dissolved if using a water-soluble support material such asPVAafter completing a print.
Some commercial metal 3D printers involve cutting the metal component off the metal substrate after deposition. A new process for theGMAW3D printing allows for substrate surface modifications to removealuminium[101]orsteel.[102]
Traditionally, 3D printing focused on polymers, due to the ease of manufacturing and handling polymeric materials. However, the method has rapidly evolved to print not only various polymers[104]but also metals[105][106]and ceramics,[107]making 3D printing a versatile option for manufacturing. Layer-by-layer fabrication of three-dimensional physical models is a modern concept that "stems from the ever-growing CAD industry, more specifically the solid modeling side of CAD. Before solid modeling was introduced in the late 1980s, three-dimensional models were created with wire frames and surfaces."[108]In all cases, the layers of material and their properties are controlled by the printer. The three-dimensional material layer is controlled by the deposition rate as set by the printer operator and stored in a computer file. The earliest patented printed material was a hot-melt ink for printing patterns using a heated metal alloy.
Charles Hull filed the first patent on August 8, 1984, to use a UV-cured acrylic resin using a UV-masked light source at UVP Corp to build a simple model. The SLA-1 was the first SL product, announced by 3D Systems at the Autofact Exposition in Detroit in November 1987. The SLA-1 Beta shipped in January 1988 to Baxter Healthcare, Pratt and Whitney, General Motors and AMP. The first production SLA-1 shipped to Precision Castparts in April 1988. The UV resin material changed over quickly to an epoxy-based resin. In both cases, SLA-1 models needed UV oven curing after being rinsed in a solvent cleaner to remove uncured boundary resin. A post-cure apparatus (PCA) was sold with all systems. The early resin printers required a blade to move fresh resin over the model on each layer. The layer thickness was 0.006 inches, and the HeCd laser of the SLA-1 was 12 watts, sweeping across the surface at 30 inches per second. UVP was acquired by 3D Systems in January 1990.[109]
A review of the history shows that a number of materials (resins, plastic powder, plastic filament and hot-melt plastic ink) were used in the 1980s for patents in the rapid prototyping field. Masked-lamp UV-cured resin was also introduced by Cubital's Itzchak Pomerantz in the Solider 5600, laser-sintered thermoplastic powders by Carl Deckard (DTM), and adhesive-laser-cut paper (LOM) stacked to form objects by Michael Feygin, before 3D Systems made its first announcement. Scott Crump was also working with extruded "melted" plastic filament modeling (FDM), and drop deposition had been patented by William E Masters a week after Hull's patent in 1984, but he had to discover thermoplastic inkjets, introduced by the Visual Impact Corporation 3D printer in 1992 using inkjets from Howtek, Inc., before he formed BPM to bring out his own 3D printer product in 1994.[109]
Efforts to achieve multi-material 3D printing range from enhanced FDM-like processes like VoxelJet to novel voxel-based printing technologies like layered assembly.[110]
A drawback of many existing 3D printing technologies is that they only allow one material to be printed at a time, limiting many potential applications that require the integration of different materials in the same object. Multi-material 3D printing solves this problem by allowing objects of complex and heterogeneous arrangements of materials to be manufactured using a single printer. Here, a material must be specified for eachvoxel(or 3D printing pixel element) inside the final object volume.
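In data terms, such a specification can be represented as a dense voxel grid in which every cell stores a material index; the sketch below (array shape and material codes are invented for illustration) shows the idea in Python with NumPy:

```python
import numpy as np

# A 100 x 100 x 100 voxel volume; each entry is a material index
# (0 = empty, 1 = rigid polymer, 2 = elastomer -- codes are arbitrary).
volume = np.zeros((100, 100, 100), dtype=np.uint8)

volume[20:80, 20:80, 0:50] = 1    # a block printed in material 1
volume[20:80, 20:80, 50:60] = 2   # a soft layer in material 2 on top of it

# A multi-material toolpath generator would walk the volume one z-slice
# (layer) at a time and deposit the material named by each voxel.
layer = volume[:, :, 55]
print(np.unique(layer))           # materials present in this layer: [0 2]
```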
The process can be fraught with complications, however, due to the isolated and monolithic nature of the algorithms involved. Some commercial devices have sought to solve these issues, such as by building a Spec2Fab translator, but progress is still very limited.[111]Nonetheless, in the medical industry, a concept of 3D-printed pills and vaccines has been presented.[112]With this new concept, multiple medications can be combined, which is expected to decrease many risks. With more and more applications of multi-material 3D printing, the costs of daily life and of high-technology development will inevitably become lower.
Metallographic materials for 3D printing are also being researched.[113]By classifying each material, CIMP-3D can systematically perform 3D printing with multiple materials.[114]
Using 3D printing and multi-material structures in additive manufacturing has allowed for the design and creation of what is called 4D printing. 4D printing is an additive manufacturing process in which the printed object changes shape with time, temperature, or some other type of stimulation. 4D printing allows for the creation of dynamic structures with adjustable shapes, properties or functionality. The smart/stimulus-responsive materials that are created using 4D printing can be activated to create calculated responses such as self-assembly, self-repair, multi-functionality, reconfiguration and shape-shifting. This allows for customized printing of shape-changing and shape-memory materials.[115]
4D printing has the potential to find new applications and uses for materials (plastics, composites, metals, etc.) and has the potential to create new alloys and composites that were not viable before. The versatility of this technology and materials can lead to advances in multiple fields of industry, including space, commercial and medical fields. The repeatability, precision, and material range for 4D printing must increase to allow the process to become more practical throughout these industries.
To become a viable industrial production option, there are a few challenges that 4D printing must overcome. The challenges of 4D printing include the fact that the microstructures of these printed smart materials must be close to or better than the parts obtained through traditional machining processes. New and customizable materials need to be developed that have the ability to consistently respond to varying external stimuli and change to their desired shape. There is also a need to design new software for the various technique types of 4D printing. The 4D printing software will need to take into consideration the base smart material, printing technique, and structural and geometric requirements of the design.[116]
ISO/ASTM 52900-15 defines seven categories of additive manufacturing (AM) processes within its meaning.[117][118]They are: binder jetting, directed energy deposition, material extrusion, material jetting, powder bed fusion, sheet lamination, and vat photopolymerization.
The main differences between processes are in the way layers are deposited to create parts and in the materials that are used. Each method has its own advantages and drawbacks, which is why some companies offer a choice of powder and polymer for the material used to build the object.[119]Others sometimes use standard, off-the-shelf business paper as the build material to produce a durable prototype. The main considerations in choosing a machine are generally speed, costs of the 3D printer, of the printed prototype, choice and cost of the materials, and color capabilities.[120]Printers that work directly with metals are generally expensive. However, less expensive printers can be used to make a mold, which is then used to make metal parts.[121]
The first process in which three-dimensional material was deposited to form an object used material jetting,[28]or, as it was originally called, particle deposition. Particle deposition by inkjet started with continuous inkjet technology (CIT) in the 1950s and later with drop-on-demand inkjet technology in the 1970s using hot-melt inks. Wax inks were the first three-dimensional materials jetted, and later low-temperature alloy metal was jetted with CIT. Wax and thermoplastic hot-melts were next jetted by DOD. Objects were very small and started with text characters and numerals for signage; an object must have form and be able to be handled. Wax characters that tumbled off paper documents inspired a liquid metal recorder patent to make metal characters for signage in 1971. Thermoplastic color inks (CMYK) were printed with layers of each color to form the first digitally formed layered objects in 1984. The idea of investment casting with solid-ink jetted images or patterns in 1984 led to the first patent to form articles from particle deposition in 1989, issued in 1992.
Some methods melt or soften the material to produce the layers. In fused filament fabrication, also known as fused deposition modeling (FDM), the model or part is produced by extruding small beads or streams of material that harden immediately to form layers. A filament of thermoplastic, metal wire, or other material is fed into an extrusion nozzle head (3D printer extruder), which heats the material and turns the flow on and off. FDM is somewhat restricted in the variation of shapes that may be fabricated. Another technique fuses parts of the layer and then moves upward in the working area, adding another layer of granules and repeating the process until the piece has built up. This process uses the unfused media to support overhangs and thin walls in the part being produced, which reduces the need for temporary auxiliary supports for the piece.[122]Recently, FFF/FDM has expanded to 3D print directly from pellets to avoid the conversion to filament. This process is called fused particle fabrication (FPF), or fused granular fabrication (FGF), and has the potential to use more recycled materials.[123]
Powder bed fusion techniques, or PBF, include several processes such as DMLS,SLS,SLM, MJF andEBM. Powder bed fusion processes can be used with an array of materials and their flexibility allows for geometrically complex structures,[124]making it a good choice for many 3D printing projects. These techniques includeselective laser sintering, with both metals and polymers anddirect metal laser sintering.[125]Selective laser meltingdoes not use sintering for the fusion of powder granules but will completely melt the powder using a high-energy laser to create fully dense materials in a layer-wise method that has mechanical properties similar to those of conventional manufactured metals.Electron beam meltingis a similar type of additive manufacturing technology for metal parts (e.g.titanium alloys). EBM manufactures parts by melting metal powder layer by layer with an electron beam in a high vacuum.[126][127]Another method consists of aninkjet 3D printingsystem, which creates the model one layer at a time by spreading a layer of powder (plasterorresins) and printing a binder in the cross-section of the part using an inkjet-like process. Withlaminated object manufacturing, thin layers are cut to shape and joined. In addition to the previously mentioned methods,HPhas developed theMulti Jet Fusion(MJF) which is a powder base technique, though no lasers are involved. An inkjet array applies fusing and detailing agents which are then combined by heating to create a solid layer.[128]
The binder jetting 3D printing technique involves the deposition of a binding adhesive agent onto layers of material, usually powdered, and then this "green" state part may be cured and even sintered. The materials can be ceramic-based, metal or plastic. This method is also known asinkjet 3D printing. To produce a part, the printer builds the model using a head that moves over the platform base to spread or deposit alternating layers of powder (plasterandresins) and binder. Most modern binder jet printers also cure each layer of binder. These steps are repeated until all layers have been printed. This green part is usually cured in an oven to off-gas most of the binder before being sintered in a kiln with a specific time-temperature curve for the given material(s).
This technology allows the printing of full-color prototypes, overhangs, and elastomer parts. The strength of bonded powder prints can be enhanced by impregnating the spaces within the necked or sintered matrix of powder with other compatible materials, such as wax, thermoset polymer, or even bronze, depending on the powder material.[129][130]
Other methods cure liquid materials using different sophisticated technologies, such asstereolithography.Photopolymerizationis primarily used in stereolithography to produce a solid part from a liquid. Inkjet printer systems like theObjet PolyJetsystem sprayphotopolymermaterials onto a build tray in ultra-thin layers (between 16 and 30 μm) until the part is completed.[131]Each photopolymer layer iscuredwith UV light after it is jetted, producing fully cured models that can be handled and used immediately, without post-curing. Ultra-small features can be made with the 3D micro-fabrication technique used inmultiphotonphotopolymerisation. Due to the nonlinear nature of photo excitation, the gel is cured to a solid only in the places where the laser was focused while the remaining gel is then washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures with moving and interlocked parts.[132]Yet another approach uses a synthetic resin that is solidified usingLEDs.[133]
In Mask-image-projection-based stereolithography, a 3D digital model is sliced by a set of horizontal planes. Each slice is converted into a two-dimensional mask image. The mask image is then projected onto a photocurable liquid resin surface and light is projected onto the resin to cure it in the shape of the layer.[134]Continuous liquid interface productionbegins with a pool of liquidphotopolymerresin. Part of the pool bottom is transparent toultraviolet light(the "window"), which causes the resin to solidify. The object rises slowly enough to allow the resin to flow under and maintain contact with the bottom of the object.[135]In powder-fed directed-energy deposition, a high-power laser is used to melt metal powder supplied to the focus of the laser beam. The powder-fed directed energy process is similar to selective laser sintering, but the metal powder is applied only where material is being added to the part at that moment.[136][137]
Computed axial lithographyis a method for 3D printing based oncomputerised tomography scansto create prints in photo-curable resin. It was developed by a collaboration between theUniversity of California, BerkeleywithLawrence Livermore National Laboratory.[138][139][140]Unlike other methods of 3D printing it does not build models through depositing layers of material likefused deposition modellingandstereolithography, instead it creates objects using a series of 2D images projected onto a cylinder of resin.[138][140]It is notable for its ability to build an object much more quickly than other methods using resins and the ability to embed objects within the prints.[139]
Liquid additive manufacturing(LAM) is a 3D printing technique that deposits a liquid or high viscosity material (e.g. liquid silicone rubber) onto a build surface to create an object which then isvulcanisedusing heat to harden the object.[141][142][143]The process was originally created byAdrian Bowyerand was then built upon by German RepRap.[141][144][145]
A technique calledprogrammable toolinguses 3D printing to create a temporary mold, which is then filled via a conventionalinjection moldingprocess and then immediately dissolved.[146]
In some printers, paper can be used as the build material, resulting in a lower cost to print. During the 1990s some companies marketed printers that cut cross-sections out of special adhesivecoated paperusing a carbon dioxide laser and then laminated them together.
In 2005Mcor Technologies Ltddeveloped a different process using ordinary sheets of office paper, atungsten carbideblade to cut the shape, and selective deposition of adhesive and pressure to bond the prototype.[147]
In powder-fed directed-energy deposition (also known aslaser metal deposition), a high-power laser is used to melt metal powder supplied to the focus of the laser beam. The laser beam typically travels through the center of the deposition head and is focused on a small spot by one or more lenses. The build occurs on anX-Y tablewhich is driven by a tool path created from a digital model to fabricate an object layer by layer. The deposition head is moved up vertically as each layer is completed. Some systems even make use of 5-axis[148][149]or 6-axis systems[150](i.e.articulated arms) capable of delivering material on the substrate (a printing bed, or a pre-existing part[151]) with few to no spatial access restrictions. Metal powder is delivered and distributed around the circumference of the head or can be split by an internal manifold and delivered through nozzles arranged in various configurations around the deposition head. A hermetically sealed chamber filled with inert gas or a local inert shroud gas (sometimes both combined) is often used to shield the melt pool from atmospheric oxygen, to limitoxidationand to better control the material properties. The powder-fed directed-energy process is similar to selective laser sintering, but the metal powder is projected only where the material is being added to the part at that moment. The laser beam is used to heat up and create a "melt pool" on the substrate, in which the new powder is injected quasi-simultaneously. The process supports a wide range of materials including titanium, stainless steel, aluminium, tungsten, and other specialty materials as well as composites and functionally graded materials. The process can not only fully build new metal parts but can also add material to existing parts for example for coatings, repair, and hybrid manufacturing applications.Laser engineered net shaping(LENS), which was developed by Sandia National Labs, is one example of the powder-fed directed-energy deposition process for 3D printing or restoring metal parts.[152][153]
Laser-based wire-feed systems, such as laser metal deposition-wire (LMD-w), feed the wire through a nozzle that is melted by a laser using inert gas shielding in either an open environment (gas surrounding the laser) or in a sealed chamber.Electron beam freeform fabricationuses an electron beam heat source inside a vacuum chamber.
It is also possible to use conventionalgas metal arc weldingattached to a 3D stage to 3-D print metals such as steel, bronze and aluminium.[154][155]Low-cost open sourceRepRap-style 3-D printers have been outfitted withArduino-basedsensorsand demonstrated reasonable metallurgical properties from conventional welding wire as feedstock.[156]
In selective powder deposition, build and support powders are selectively deposited into a crucible, such that the build powder takes the shape of the desired object and support powder fills the rest of the volume in the crucible. Then an infill material is applied so that it comes in contact with the build powder, and the crucible is fired in a kiln at a temperature above the melting point of the infill but below the melting points of the powders. When the infill melts, it soaks the build powder; it does not soak the support powder, because the support powder is chosen so that it is not wettable by the infill. If, at the firing temperature, the atoms of the infill material and the build powder are mutually diffusible, as in the case of copper powder and zinc infill, the resulting material will be a uniform mixture of those atoms, in this case brass. But if the atoms are not mutually diffusible, as in the case of tungsten and copper at 1100 °C, the resulting material will be a composite. To prevent shape distortion, the firing temperature must be below the solidus temperature of the resulting alloy.[157]
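The temperature constraints in that description can be captured in a short schematic check (a sketch only; the melting points below are approximate and the alloy solidus value is illustrative):

```python
def firing_window_ok(t_fire_c: float, melt_infill_c: float,
                     melt_build_c: float, melt_support_c: float,
                     solidus_alloy_c: float) -> bool:
    """Selective powder deposition: the kiln must melt the infill while
    leaving both powders solid, and must stay below the solidus of the
    resulting alloy to avoid shape distortion."""
    return (melt_infill_c < t_fire_c < melt_build_c
            and t_fire_c < melt_support_c
            and t_fire_c < solidus_alloy_c)

# Copper build powder (~1085 C) infiltrated with zinc (~420 C), with an
# assumed refractory support powder and an illustrative alloy solidus:
print(firing_window_ok(t_fire_c=600, melt_infill_c=420,
                       melt_build_c=1085, melt_support_c=1700,
                       solidus_alloy_c=900))   # True: 600 C is in the window
```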
Cryogenic 3D printing is a collection of techniques that forms solid structures by freezing liquid materials while they are deposited. As each liquid layer is applied, it is cooled by the low temperature of the previous layer and printing environment which results in solidification. Unlike other 3D printing techniques, cryogenic 3D printing requires a controlled printing environment. The ambient temperature must be below the material's freezing point to ensure the structure remains solid during manufacturing and the humidity must remain low to prevent frost formation between the application of layers.[158]Materials typically include water and water-based solutions, such asbrine,slurry, andhydrogels.[159][160]Cryogenic 3D printing techniques include rapid freezing prototype (RFP),[159]low-temperature deposition manufacturing (LDM),[161]and freeze-form extrusion fabrication (FEF).[162]
3D printing or additive manufacturing has been used in manufacturing, medical, industry and sociocultural sectors (e.g. cultural heritage) to create successful commercial technology.[163]More recently, 3D printing has also been used in the humanitarian and development sector to produce a range of medical items, prosthetics, spares and repairs.[164]The earliest application of additive manufacturing was on thetoolroomend of the manufacturing spectrum. For example,rapid prototypingwas one of the earliest additive variants, and its mission was to reduce thelead timeand cost of developing prototypes of new parts and devices, which was earlier only done with subtractive toolroom methods such as CNC milling, turning, and precision grinding.[165]In the 2010s, additive manufacturing enteredproductionto a much greater extent.
Additive manufacturing of foodis being developed by squeezing out food, layer by layer, into three-dimensional objects. A large variety of foods are appropriate candidates, such as chocolate and candy, and flat foods such as crackers, pasta,[166]and pizza.[167][168]NASAis looking into the technology in order to create 3D-printed food to limitfood wasteand to make food that is designed to fit an astronaut's dietary needs.[169]In 2018, Italian bioengineerGiuseppe Sciontideveloped a technology allowing the production of fibrous plant-based meat analogues using a custom3D bioprinter, mimicking meat texture and nutritional values.[170][171]
3D printing has entered the world of clothing, with fashion designers experimenting with 3D-printed bikinis, shoes, and dresses.[172]In commercial production,Nikeused 3D printing to prototype and manufacture the 2012 Vapor Laser Talon football shoe for players of American football, andNew Balancehas 3D manufactured custom-fit shoes for athletes.[172][173]3D printing has come to the point where companies are printing consumer-grade eyewear with on-demand custom fit and styling (although they cannot print the lenses). On-demand customization of glasses is possible with rapid prototyping.[174]
In cars, trucks, and aircraft, additive manufacturing is beginning to transform bothunibodyandfuselagedesign and production, andpowertraindesign and production. For example,General Electricuses high-end 3D printers to build parts forturbines.[175]Many of these systems are used for rapid prototyping before mass production methods are employed. Other prominent examples include:
AM's impact on firearms involves two dimensions: new manufacturing methods for established companies, and new possibilities for the making ofdo-it-yourselffirearms. In 2012, the US-based groupDefense Distributeddisclosed plans to design a working plastic3D-printed firearm"that could be downloaded and reproduced by anybody with a 3D printer".[184][185]After Defense Distributed released their plans, questions were raised regarding the effects that 3D printing and widespread consumer-levelCNCmachining[186][187]may have ongun controleffectiveness.[188][189][190][191]Moreover, armor-design strategies can be enhanced by taking inspiration from nature and prototyping those designs easily, using AM.[192]
Surgical uses of 3D printing-centric therapies began in the mid-1990s with anatomical modeling for bony reconstructive surgery planning. Patient-matched implants were a natural extension of this work, leading to truly personalized implants that fit one unique individual.[193]Virtual planning of surgery and guidance using 3D printed, personalized instruments have been applied to many areas of surgery including total joint replacement and craniomaxillofacial reconstruction with great success.[194][195]One example of this is the bioresorbable tracheal splint to treat newborns with tracheobronchomalacia,[196]developed at the University of Michigan. The use of additive manufacturing for serialized production of orthopedic implants (metals) is also increasing due to the ability to efficiently create porous surface structures that facilitate osseointegration. The hearing aid and dental industries are expected to be the biggest areas of future development using custom 3D printing technology.[197]
3D printing is not just limited to inorganic materials; there have been a number of biomedical advancements made possible by 3D printing. As of 2012, 3D bio-printing technology has been studied by biotechnology firms and academia for possible use in tissue engineering applications in which organs and body parts are built using inkjet printing techniques. In this process, layers of living cells are deposited onto a gel medium or sugar matrix and slowly built up to form three-dimensional structures including vascular systems.[198] 3D printing has been considered as a method of implanting stem cells capable of generating new tissues and organs in living humans.[199] In 2018, 3D printing technology was used for the first time to create a matrix for cell immobilization in fermentation. Propionic acid production by Propionibacterium acidipropionici immobilized on 3D-printed nylon beads was chosen as a model study. It was shown that those 3D-printed beads were capable of promoting high-density cell attachment and propionic acid production, which could be adapted to other fermentation bioprocesses.[200]
3D printing has also been employed by researchers in the pharmaceutical field. During the last few years, there has been a surge in academic interest regarding drug delivery with the aid of AM techniques. This technology offers a unique way for materials to be utilized in novel formulations.[201] AM allows for the usage of materials and compounds in the development of formulations in ways that are not possible with conventional techniques in the pharmaceutical field, e.g. tableting, cast-molding, etc. Moreover, one of the major advantages of 3D printing, especially in the case of fused deposition modelling (FDM), is the personalization of the dosage form that can be achieved, thus targeting the patient's specific needs.[202] In the not-so-distant future, 3D printers are expected to reach hospitals and pharmacies in order to provide on-demand production of personalized formulations according to the patients' needs.[203]
3D printing has also been used for medical equipment. During the COVID-19 pandemic, 3D printers were used to supplement the strained supply of PPE through volunteers using their personally owned printers to produce various pieces of personal protective equipment (e.g., frames for face shields).
3D printing, and open source 3D printers in particular, are the latest technologies making inroads into the classroom.[204][205][206] Higher education has proven to be a major buyer of desktop and professional 3D printers, which industry experts generally view as a positive indicator.[207] Some authors have claimed that 3D printers offer an unprecedented "revolution" in STEM education.[208][209] The evidence for such claims comes both from the low-cost ability for rapid prototyping in the classroom by students and from the fabrication of low-cost, high-quality scientific equipment from open hardware designs forming open-source labs.[210] Additionally, libraries around the world have become locations to house smaller 3D printers for educational and community access.[211] Future applications for 3D printing might include creating open-source scientific equipment.[210][212]
In the 2010s, 3D printing became intensively used in the cultural heritage field for preservation, restoration and dissemination purposes.[213] Many European and North American museums have purchased 3D printers and actively recreate missing pieces of their relics[214] and archaeological monuments such as Tiwanaku in Bolivia.[215] The Metropolitan Museum of Art and the British Museum have started using their 3D printers to create museum souvenirs that are available in the museum shops.[216] Other museums, like the National Museum of Military History and Varna Historical Museum, have gone further and sell, through the online platform Threeding, digital models of their artifacts, created using Artec 3D scanners, in a 3D-printing-friendly file format, which everyone can 3D print at home.[217] Morehshin Allahyari, an Iranian-born U.S. artist, considers her use of 3D sculpting processes to reconstruct Iranian cultural treasures a form of feminist activism. Allahyari uses 3D modeling software to reconstruct a series of cultural artifacts that were demolished by ISIS militants in 2014.[218]
The application of 3D printing to the representation of architectural assets poses many challenges. In 2018, the structure of Iran National Bank was traditionally surveyed and modeled in computer graphics software (specifically, Cinema4D) and was optimized for 3D printing. The team tested the technique for the construction of the part, and it was successful. After testing the procedure, the modellers reconstructed the structure in Cinema4D and exported the front part of the model to Netfabb. The entrance of the building was chosen due to 3D printing limitations and the project's budget for producing the maquette. 3D printing was only one of the capabilities enabled by the produced 3D model of the bank, but due to the project's limited scope, the team did not continue modelling for virtual representation or other applications.[219] In 2021, Parsinejad et al. comprehensively compared hand surveying with digital recording (the photogrammetry method) for producing 3D reconstructions ready for 3D printing.[219]
The world's first 3D-printed steel bridge was unveiled in Amsterdam in July 2021. Spanning 12 meters over the Oudezijds Achterburgwal canal, the bridge was created using robotic arms that printed over 4,500 kilograms of stainless steel. It took six months to complete.[220]
3D-printed soft actuators are a growing application of 3D printing technology. These soft actuators are being developed to deal with soft structures and organs, especially in biomedical sectors and wherever interaction between humans and robots is inevitable. The majority of existing soft actuators are fabricated by conventional methods that require manual fabrication of devices, post-processing/assembly, and lengthy iterations until the fabrication matures. To avoid the tedious and time-consuming aspects of the current fabrication processes, researchers are exploring an appropriate manufacturing approach for the effective fabrication of soft actuators. Thus, 3D-printed soft actuators were introduced to revolutionize the design and fabrication of soft actuators with custom geometrical, functional, and control properties in a faster and less expensive way. They also enable the incorporation of all actuator components into a single structure, eliminating the need for external joints, adhesives, and fasteners.
Circuit board manufacturing involves multiple steps, including imaging, drilling, plating, solder mask coating, nomenclature printing and surface finishes. These steps involve many chemicals such as harsh solvents and acids. 3D printing circuit boards removes the need for many of these steps while still producing complex designs.[221] Polymer ink is used to create the layers of the build, while silver polymer is used for creating the traces and holes that allow electricity to flow.[222]

Current circuit board manufacturing can be a tedious process depending on the design. Specified materials are gathered and sent into inner layer processing, where images are printed, developed and etched. The etched cores are typically punched to add lamination tooling. The stack-up, the buildup of a circuit board, is built and sent into lamination, where the layers are bonded. The boards are then measured and drilled. Many steps may differ from this stage; however, for simple designs, the material goes through a plating process to plate the holes and surface. The outer image is then printed, developed and etched. After the image is defined, the material must be coated with a solder mask for later soldering. Nomenclature is then added so components can be identified later. Then the surface finish is added. The boards are routed out of panel form into their singular or array form and then electrically tested. Aside from the paperwork that must be completed to prove the boards meet specifications, the boards are then packed and shipped.

With 3D printing, by contrast, the final outline is defined from the beginning, no imaging, punching or lamination is required, and electrical connections are made with the silver polymer, which eliminates drilling and plating. The final paperwork is also greatly reduced due to the smaller number of materials required to build the circuit board. Complex designs which may take weeks to complete through normal processing can be 3D printed, greatly reducing manufacturing time.
In 2005, academic journals began to report on the possible artistic applications of 3D printing technology.[223] Off-the-shelf machines were increasingly capable of producing practical household items, for example, ornamental objects. Some practical examples include a working clock[224] and gears printed for home woodworking machines, among other purposes.[225] Websites associated with home 3D printing tended to include backscratchers, coat hooks, door knobs, etc.[226] As of 2017, domestic 3D printing was reaching a consumer audience beyond hobbyists and enthusiasts. Several projects and companies are making efforts to develop affordable 3D printers for home desktop use. Much of this work has been driven by and targeted at DIY/maker/enthusiast/early adopter communities, with additional ties to the academic and hacker communities.
Spurred by decreases in price and increases in quality, as of 2019 an estimated 2 million people worldwide had purchased a 3D printer for hobby use.[227]
3D printing has existed for decades within certain manufacturing industries where many legal regimes, including patents, industrial design rights, copyrights, and trademarks, may apply. However, there is not much jurisprudence to say how these laws will apply if 3D printers become mainstream and individuals or hobbyist communities begin manufacturing items for personal use, for non-profit distribution, or for sale.
Any of the mentioned legal regimes may prohibit the distribution of the designs used in 3D printing or the distribution or sale of the printed item. To be allowed to do these things, where active intellectual property is involved, a person would have to contact the owner and ask for a licence, which may come with conditions and a price. However, many patent, design and copyright laws contain a standard limitation or exception for "private" or "non-commercial" use of inventions, designs or works of art protected under intellectual property (IP). That standard limitation or exception may leave such private, non-commercial uses outside the scope of IP rights.
Patents cover inventions, including processes, machines, articles of manufacture, and compositions of matter, and have a finite duration which varies between countries but is generally 20 years from the date of application. Therefore, if a type of wheel is patented, printing, using, or selling such a wheel could be an infringement of the patent.[228]
Copyright covers an expression[229] in a tangible, fixed medium and often lasts for the life of the author plus 70 years thereafter.[230] For example, a sculptor retains copyright over a statue, such that other people cannot then legally distribute designs to print an identical or similar statue without paying royalties, waiting for the copyright to expire, or working within a fair use exception.
When a feature has both artistic (copyrightable) and functional (patentable) merits, and the question has come before a US court, the courts have often held that the feature is not copyrightable unless it can be separated from the functional aspects of the item.[230] In other countries the law and the courts may apply a different approach, allowing, for example, the design of a useful device to be registered (as a whole) as an industrial design on the understanding that, in case of unauthorized copying, only the non-functional features may be claimed under design law, whereas any technical features could only be claimed if covered by a valid patent.
The US Department of Homeland Security and the Joint Regional Intelligence Center released a memo stating that "significant advances in three-dimensional (3D) printing capabilities, availability of free digital 3D printable files for firearms components, and difficulty regulating file sharing may present public safety risks from unqualified gun seekers who obtain or manufacture 3D printed guns" and that "proposed legislation to ban 3D printing of weapons may deter, but cannot completely prevent their production. Even if the practice is prohibited by new legislation, online distribution of these 3D printable files will be as difficult to control as any other illegally traded music, movie or software files."[231]
Attempting to restrict the distribution of gun plans via the Internet has been likened to the futility of preventing the widespread distribution of DeCSS, which enabled DVD ripping.[232][233][234][235] After the US government had Defense Distributed take down the plans, they were still widely available via the Pirate Bay and other file sharing sites.[236] Downloads of the plans from the UK, Germany, Spain, and Brazil were heavy.[237][238] Some US legislators have proposed regulations on 3D printers to prevent them from being used for printing guns.[239][240] 3D printing advocates have suggested that such regulations would be futile, could cripple the 3D printing industry, and could infringe on free speech rights, with early pioneers of 3D printing, such as Professor Hod Lipson, suggesting that gunpowder could be controlled instead.[241][242][243][244][245][246]
Internationally, where gun controls are generally stricter than in the United States, some commentators have said the impact may be more strongly felt since alternative firearms are not as easily obtainable.[247] Officials in the United Kingdom have noted that producing a 3D-printed gun would be illegal under their gun control laws.[248] Europol stated that criminals have access to other sources of weapons but noted that as technology improves, the risks of an effect would increase.[249][250]
In the United States, the FAA has anticipated a desire to use additive manufacturing techniques and has been considering how best to regulate this process.[251] The FAA has jurisdiction over such fabrication because all aircraft parts must be made under FAA production approval or under other FAA regulatory categories.[252] In December 2016, the FAA approved the production of a 3D-printed fuel nozzle for the GE LEAP engine.[253] Aviation attorney Jason Dickstein has suggested that additive manufacturing is merely a production method, and should be regulated like any other production method.[254][255] He has suggested that the FAA's focus should be on guidance to explain compliance, rather than on changing the existing rules, and that existing regulations and guidance permit a company "to develop a robust quality system that adequately reflects regulatory needs for quality assurance".[254]
In 2021, the first standards were issued, e.g. ISO/ASTM 52900-21 (Additive manufacturing general principles, fundamentals and vocabulary) and the above-mentioned ISO/ASTM 52900-15.[117][118] In 2023, ISO/ASTM 52920:2023[256] defined the requirements for industrial additive manufacturing processes and production sites using additive manufacturing, to ensure the required quality level. Earlier, a draft DIN standard had been issued in Germany, DIN SPEC 17071:2019.
Polymer feedstock materials can release ultrafine particles and volatile organic compounds (VOCs) if sufficiently heated, which in combination have been associated with adverse respiratory and cardiovascular health effects. In addition, temperatures of 190 °C to 260 °C are typically reached by an FFF extrusion nozzle, which can cause skin burns. Vat photopolymerization stereolithography printers use high-powered lasers that present a skin and eye hazard, although they are considered nonhazardous during printing because the laser is enclosed within the printing chamber.[257]
3D printers also contain many moving parts, including stepper motors, pulleys, threaded rods, carriages, and small fans, which generally do not have enough power to cause serious injuries but can still trap a user's finger, long hair, or loose clothing. Most desktop FFF 3D printers do not have any added electrical safety features beyond regular internal fuses or external transformers, although the voltages in the exposed parts of 3D printers usually do not exceed 12 V to 24 V, which is generally considered safe.[257]
Research on the health and safety concerns of 3D printing is new and in development due to the recent proliferation of 3D printing devices. In 2017, the European Agency for Safety and Health at Work published a discussion paper on the processes and materials involved in 3D printing, the potential implications of this technology for occupational safety and health, and avenues for controlling potential hazards.[258]
Noise level is measured in decibels (dB) and can vary greatly in home printers, from 15 dB to 75 dB.[259] Some main sources of noise in filament printers are fans, motors and bearings, while in resin printers the fans are usually responsible for most of the noise.[259] Some methods for dampening the noise from a printer are to install vibration isolation, use larger-diameter fans, perform regular maintenance and lubrication, or use a soundproofing enclosure.[259]
Additive manufacturing, still in its infancy, requires manufacturing firms to be flexible, ever-improving users of all available technologies to remain competitive. Advocates of additive manufacturing also predict that this arc of technological development will counter globalization, as end users will do much of their own manufacturing rather than engage in trade to buy products from other people and corporations.[16] The real integration of the newer additive technologies into commercial production, however, is more a matter of complementing traditional subtractive methods than displacing them entirely.[260]
The futurologist Jeremy Rifkin[261] claimed that 3D printing signals the beginning of a third industrial revolution,[262] succeeding the production-line assembly that dominated manufacturing starting in the late 19th century.
Since the 1950s, a number of writers and social commentators have speculated in some depth about the social and cultural changes that might result from the advent of commercially affordable additive manufacturing technology.[263] In recent years, 3D printing has created a significant impact in the humanitarian and development sector. Its potential to facilitate distributed manufacturing is resulting in supply chain and logistics benefits, by reducing the need for transportation, warehousing and wastage. Furthermore, social and economic development is being advanced through the creation of local production economies.[164]
Others have suggested that as more and more 3D printers enter people's homes, the conventional relationship between the home and the workplace might be further eroded.[264] Likewise, it has also been suggested that, as it becomes easier for businesses to transmit designs for new objects around the globe, the need for high-speed freight services might decline.[265] Finally, given the ease with which certain objects can now be replicated, it remains to be seen whether changes will be made to current copyright legislation to protect intellectual property rights with the new technology widely available.
Some call attention to the conjunction of commons-based peer production with 3D printing and other low-cost manufacturing techniques.[266][267][268] The self-reinforced fantasy of a system of eternal growth can be overcome with the development of economies of scope, and here society can play an important role, contributing to the raising of the whole productive structure to a higher plateau of more sustainable and customized productivity.[266] Further, many issues, problems, and threats arise from this democratization of the means of production, especially regarding physical ones.[266] For instance, the recyclability of advanced nanomaterials is still questioned; weapons manufacturing could become easier; not to mention the implications for counterfeiting[269] and intellectual property.[270] It might be maintained that, in contrast to the industrial paradigm, whose competitive dynamics were about economies of scale, commons-based peer production combined with 3D printing could develop economies of scope. While the advantages of scale rest on cheap global transportation, economies of scope share infrastructure costs (intangible and tangible productive resources), taking advantage of the capabilities of the fabrication tools.[266] And following Neil Gershenfeld[271] in that "some of the least developed parts of the world need some of the most advanced technologies", commons-based peer production and 3D printing may offer the necessary tools for thinking globally but acting locally in response to certain needs.
Larry Summers wrote about the "devastating consequences" of 3D printing and other technologies (robots, artificial intelligence, etc.) for those who perform routine tasks. In his view, "already there are more American men on disability insurance than doing production work in manufacturing. And the trends are all in the wrong direction, particularly for the less skilled, as the capacity of capital embodying artificial intelligence to replace white-collar as well as blue-collar work will increase rapidly in the years ahead." Summers recommends more vigorous cooperative efforts to address the "myriad devices" (e.g., tax havens, bank secrecy, money laundering, and regulatory arbitrage) enabling the holders of great wealth to avoid paying income and estate taxes, and to make it more difficult to accumulate great fortunes without requiring "great social contributions" in return, including: more vigorous enforcement of anti-monopoly laws, reductions in "excessive" protection for intellectual property, greater encouragement of profit-sharing schemes that may benefit workers and give them a stake in wealth accumulation, strengthening of collective bargaining arrangements, improvements in corporate governance, strengthening of financial regulation to eliminate subsidies to financial activity, easing of land-use restrictions that may cause the real estate of the rich to keep rising in value, better training for young people and retraining for displaced workers, and increased public and private investment in infrastructure development—e.g., in energy production and transportation.[272]
Michael Spence wrote that "Now comes a ... powerful, wave of digital technology that is replacing labor in increasingly complex tasks. This process of labor substitution and disintermediation has been underway for some time in service sectors—think of ATMs, online banking, enterprise resource planning, customer relationship management, mobile payment systems, and much more. This revolution is spreading to the production of goods, where robots and 3D printing are displacing labor." In his view, the vast majority of the cost of digital technologies comes at the start, in the design of hardware (e.g. 3D printers) and, more importantly, in creating the software that enables machines to carry out various tasks. "Once this is achieved, the marginal cost of the hardware is relatively low (and declines as scale rises), and the marginal cost of replicating the software is essentially zero. With a huge potential global market to amortize the upfront fixed costs of design and testing, the incentives to invest [in digital technologies] are compelling."[273]
Spence believes that, unlike prior digital technologies, which drove firms to deploy underutilized pools of valuable labor around the world, the motivating force in the current wave of digital technologies "is cost reduction via the replacement of labor". For example, as the cost of 3D printing technology declines, it is "easy to imagine" that production may become "extremely" local and customized. Moreover, production may occur in response to actual demand, not anticipated or forecast demand. Spence believes that labor, no matter how inexpensive, will become a less important asset for growth and employment expansion, with labor-intensive, process-oriented manufacturing becoming less effective, and that re-localization will appear in both developed and developing countries. In his view, production will not disappear, but it will be less labor-intensive, and all countries will eventually need to rebuild their growth models around digital technologies and the human capital supporting their deployment and expansion. Spence writes that "the world we are entering is one in which the most powerful global flows will be ideas and digital capital, not goods, services, and traditional capital. Adapting to this will require shifts in mindsets, policies, investments (especially in human capital), and quite possibly models of employment and distribution."[273]
Naomi Wu regards the usage of 3D printing in the Chinese classroom (where rote memorization is standard) to teach design principles and creativity as the most exciting recent development of the technology, and more generally regards 3D printing as being the next desktop publishing revolution.[274]
A printer was donated to the Juan Fernandez Women's Group in 2024 to support women in the remote community in creating parts to fix broken equipment, without having to wait for a ship to import the needed components.[275]
The growth of additive manufacturing could have a large impact on the environment. Traditional subtractive manufacturing methods such as CNC milling create products by cutting away material from a larger block. In contrast, additive manufacturing creates products layer by layer, using the minimum material required to create the product.[276] This has the benefit of reducing material waste, which further contributes to energy savings by avoiding raw material production.[277][278]
Life-cycle assessment of additive manufacturing has estimated that adopting the technology could further lower carbon dioxide emissions, since 3D printing creates localized production, thus reducing the need to transport products and the emissions associated with transport.[279] AM could also allow consumers to create their own replacement parts to fix purchased products, extending those products' lifespan.[280]
By making only the bare structural necessities of products, additive manufacturing also has the potential to make profound contributions to lightweighting.[276] The use of these lightweight components would allow for reductions in the energy consumption and greenhouse gas emissions of vehicles and other forms of transportation.[281] A case study on an airplane component made using additive manufacturing, for example, found that the use of the component saves 63% of relevant energy and carbon dioxide emissions over the course of the product's lifetime.[282]
However, the adoption of additive manufacturing also has environmental disadvantages. Firstly, AM has a high energy consumption compared to traditional processes, due to its use of processes such as lasers and high temperatures for product creation.[283] Secondly, despite additive manufacturing reducing up to 90% of waste compared to subtractive manufacturing, AM can generate waste that is non-recyclable.[284] For example, there are issues with the recyclability of materials in metal AM, as some highly regulated industries such as aerospace often insist on using virgin powder in the creation of safety-critical components.[276] Additive manufacturing has not yet reached its theoretical material efficiency potential of 97%, but it may get closer as the technology continues to increase productivity.[285]
Despite the drawbacks, research and industry are making further strides to support AM's sustainability. Some large FDM printers that melt high-density polyethylene (HDPE) pellets may also accept sufficiently clean recycled material such as chipped milk bottles. In addition, these printers can use shredded material from faulty builds or unsuccessful prototype versions, thus reducing overall project wastage and materials handling and storage. The concept has been explored in the RecycleBot.[286] There are also industrial efforts to produce metal powder from recycled metals.[287]
|
https://en.wikipedia.org/wiki/Rapid_manufacturing
|
Knowledge-based configuration, also referred to as product configuration or product customization, is an activity of customising a product to meet the needs of a particular customer. The product in question may consist of mechanical parts, services, and software. Knowledge-based configuration is a major application area for artificial intelligence (AI), and it is based on modelling of the configurations in a manner that allows the utilisation of AI techniques for searching for a valid configuration to meet the needs of a particular customer.[A 1][A 2][A 3][A 4][A 5][B 1][B 2][B 3]
Knowledge-based configuration (of complex products and services) has a long history as an artificial intelligence application area; see, e.g.,[B 1][A 1][A 6][A 7][A 8][A 9][A 10][A 11] Informally, configuration can be defined as a "special case of design activity, where the artifact being configured is assembled from instances of a fixed set of well-defined component types which can be composed conforming to a set of constraints".[A 2] Such constraints[B 4] represent technical restrictions, restrictions related to economic aspects, and conditions related to production processes. The result of a configuration process is a product configuration (concrete configuration), i.e., a list of instances and, in some cases, connections between these instances. Examples of such configurations are computers to be delivered or financial service portfolio offers (e.g., a combination of a loan and corresponding risk insurance).
Numerous practical configuration problems can be analyzed by the theoretical framework of Najmann and Stein,[A 12] an early axiomatic approach that does not presuppose any particular knowledge representation formalism. One important result of this methodology is that typical optimization problems (e.g. finding a cost-minimal configuration) are NP-complete. Thus they require (potentially) excessive computation time, making heuristic configuration algorithms the preferred choice for complex artifacts (products, services).
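Since finding a cost-minimal configuration is NP-complete, practical configurators routinely trade optimality for speed. The following minimal sketch illustrates one such heuristic; the slots, options, prices, and compatibility rule are invented for illustration and are not drawn from any cited system. The heuristic greedily fills each slot with the cheapest option compatible with the choices made so far, which is fast but, like any greedy method, not guaranteed to find the true optimum.

```python
# Greedy heuristic for near-cost-minimal configuration.
# Catalog and compatibility rule are invented for this sketch.

CATALOG = {
    "chassis": {"small": 40, "large": 70},
    "board":   {"mini": 50, "full": 90},
    "cooler":  {"passive": 10, "active": 30},
}

def compatible(partial, slot, option):
    # Invented technical restriction: a full-size board does not fit
    # into a small chassis.
    if slot == "board" and option == "full":
        return partial.get("chassis") != "small"
    return True

def greedy_configure():
    config, total = {}, 0
    for slot, options in CATALOG.items():
        # Pick the cheapest option that is compatible with choices so far.
        cost, choice = min((c, o) for o, c in options.items()
                           if compatible(config, slot, o))
        config[slot] = choice
        total += cost
    return config, total

print(greedy_configure())
# ({'chassis': 'small', 'board': 'mini', 'cooler': 'passive'}, 100)
```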
Configuration systems,[B 1][A 1][A 2] also referred to as configurators or mass customization toolkits,[A 13] are one of the most successfully applied artificial intelligence technologies. Examples are the automotive industry,[A 9] the telecommunication industry,[A 7] the computer industry,[A 6][A 14] and power electric transformers.[A 8] Starting with rule-based approaches such as R1/XCON,[A 6] model-based representations of knowledge (in contrast to rule-based representations) have been developed that strictly separate product domain knowledge from problem-solving knowledge—examples thereof are the constraint satisfaction problem, the Boolean satisfiability problem, and different answer set programming (ASP) representations. There are two commonly cited conceptualizations of configuration knowledge.[A 3][A 4] The most important concepts in these are components, ports, resources and functions. This separation of product domain knowledge and problem-solving knowledge increased the effectiveness of configuration application development and maintenance,[A 7][A 9][A 10][A 15] since changes in the product domain knowledge do not affect search strategies and vice versa.
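The constraint-based view can be made concrete in a few lines. Below is a minimal sketch of a configuration model as a constraint satisfaction problem, solved with plain backtracking; the component slots, domains, and constraints are invented for illustration rather than taken from any system cited above.

```python
# Configuration as a constraint satisfaction problem (illustrative model).

DOMAINS = {
    "cpu":    ["basic", "fast"],
    "memory": ["8GB", "16GB", "32GB"],
    "psu":    ["300W", "500W"],
}

# Each constraint returns False only when the (possibly partial)
# assignment definitely violates it.
def psu_strong_enough(a):
    if "cpu" in a and "psu" in a:
        return not (a["cpu"] == "fast" and a["psu"] == "300W")
    return True

def fast_cpu_needs_memory(a):
    if "cpu" in a and "memory" in a:
        return not (a["cpu"] == "fast" and a["memory"] == "8GB")
    return True

CONSTRAINTS = [psu_strong_enough, fast_cpu_needs_memory]

def consistent(assignment):
    return all(c(assignment) for c in CONSTRAINTS)

def configure(assignment=None):
    """Backtracking search for a complete, constraint-satisfying configuration."""
    assignment = dict(assignment or {})
    unassigned = [v for v in DOMAINS if v not in assignment]
    if not unassigned:
        return assignment
    var = unassigned[0]
    for value in DOMAINS[var]:
        assignment[var] = value
        if consistent(assignment):
            result = configure(assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Customer requirements enter as a partial assignment:
print(configure({"cpu": "fast"}))
# {'cpu': 'fast', 'memory': '16GB', 'psu': '500W'}
```

Because the product model (DOMAINS, CONSTRAINTS) is separate from the search procedure, the domain knowledge can change without touching the solver, which is exactly the separation the model-based approaches aim for.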
Configurators are also often considered as "open innovation toolkits", i.e., tools that support customers in the product identification phase.[A 16] In this context customers are innovators who articulate their requirements leading to new innovative products.[A 16][A 17][A 18] "Mass confusion"[A 19] – the overwhelming of customers by a large number of possible solution alternatives (choices) – is a phenomenon that often comes with the application of configuration technologies. This phenomenon motivated the creation of personalized configuration environments that take into account a customer's knowledge and preferences.[A 20][A 21]
Core configuration, i.e., guiding the user and checking the consistency of user requirements with the knowledge base, solution presentation, and the translation of configuration results into a bill of materials (BOM) are major tasks to be supported by a configurator.[A 22][B 5][A 5][A 13][A 23] Configuration knowledge bases are often built using proprietary languages.[A 10][A 20][A 24] In most cases knowledge bases are developed by knowledge engineers who elicit product, marketing and sales knowledge from domain experts. Configuration knowledge bases are composed of a formal description of the structure of the product and further constraints restricting the possible feature and component combinations.
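Two of these core tasks can be sketched compactly: checking user requirements for consistency with the knowledge base, and translating a finished configuration into a BOM. The rules, feature names, and part numbers below are hypothetical.

```python
# Illustrative consistency check and BOM translation for a configurator.

RULES = [
    # (description, predicate over the requirements dict)
    ("touring bikes require a rack",
     lambda r: r.get("use") != "touring" or r.get("rack") is True),
]

BOM_MAP = {  # (feature, value) -> (part number, quantity); invented parts
    ("frame", "alloy"):  ("P-1001", 1),
    ("frame", "carbon"): ("P-1002", 1),
    ("rack", True):      ("P-2001", 1),
}

def check(requirements):
    """Return descriptions of violated rules; an empty list means consistent."""
    return [desc for desc, pred in RULES if not pred(requirements)]

def to_bom(configuration):
    """Translate feature/value pairs into (part number, quantity) lines."""
    return [BOM_MAP[item] for item in configuration.items() if item in BOM_MAP]

reqs = {"frame": "carbon", "rack": True, "use": "touring"}
print(check(reqs))   # [] -> consistent with the (toy) knowledge base
print(to_bom(reqs))  # [('P-1002', 1), ('P-2001', 1)]
```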
Configurators known as characteristic-based product configurators use sets of discrete variables that are either binary or have one of several values; these variables define every possible product variation.
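In such a characteristic-based configurator, the variant space is the cross product of the variable domains, filtered by validity conditions. A toy sketch (the variables and the single rule are invented):

```python
# Characteristic-based configurator in miniature: every variant is one
# combination of discrete variable values, filtered by validity rules.
from itertools import product

VARIABLES = {
    "color":   ["red", "blue"],
    "engine":  ["petrol", "electric"],
    "sunroof": [True, False],
}

def valid(variant):
    # Invented rule: electric variants are not offered with a sunroof.
    return not (variant["engine"] == "electric" and variant["sunroof"])

names = list(VARIABLES)
variants = [dict(zip(names, values)) for values in product(*VARIABLES.values())]
sellable = [v for v in variants if valid(v)]
print(len(sellable))  # 6 of the 8 raw combinations survive the rule
```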
Recently, knowledge-based configuration has been extended to service and software configuration. Modeling software configuration has been based on two main approaches: feature modeling[A 25][B 6] and component-connectors.[A 26] The Kumbang domain ontology combines the previous approaches, building on the tradition of knowledge-based configuration.[A 27]
|
https://en.wikipedia.org/wiki/Knowledge-based_Configuration
|
An information society is a society or subculture where the usage, creation, distribution, manipulation and integration of information is a significant activity.[1][2] Its main drivers are information and communication technologies, which have resulted in rapid growth of a variety of forms of information. Proponents of this theory posit that these technologies are impacting most important forms of social organization, including education, economy,[3] health, government,[4] warfare, and levels of democracy.[5] The people who are able to partake in this form of society are sometimes called either computer users or even digital citizens, defined by K. Mossberger as “Those who use the Internet regularly and effectively”. This is one of many dozen internet terms that have been identified to suggest that humans are entering a new and different phase of society.[6]
Some of the markers of this steady change may be technological, economic, occupational, spatial, cultural, or a combination of all of these.[7] Information society is seen as a successor to industrial society. Closely related concepts are the post-industrial society (post-Fordism), post-modern society, computer society and knowledge society, telematic society, society of the spectacle (postmodernism), Information Revolution and Information Age, network society (Manuel Castells) or even liquid modernity.
There is currently no universally accepted concept of what exactly can be defined as an information society and what shall not be included in the term. Most theoreticians agree that the transformation began somewhere between the 1970s, the early-1990s transformations of the Eastern Bloc nations from socialist to capitalist economies, and the 2000s, the period that formed most of today's net principles, and that it is now fundamentally changing the way societies work. Information technology goes beyond the internet, as the principles of internet design and usage influence other areas, and there are discussions about how big the influence of specific media or specific modes of production really is. Frank Webster notes five major types of information that can be used to define an information society: technological, economic, occupational, spatial and cultural.[7] According to Webster, the character of information has transformed the way that we live today. How we conduct ourselves centers around theoretical knowledge and information.[8]
Kasiwulaya and Gomo (Makerere University) argue that information societies are those that have intensified their use of IT for economic, social, cultural and political transformation. In 2005, governments reaffirmed their dedication to the foundations of the Information Society in the Tunis Commitment and outlined the basis for implementation and follow-up in the Tunis Agenda for the Information Society. In particular, the Tunis Agenda addresses the issues of financing of ICTs for development and Internet governance that could not be resolved in the first phase.
Some people, such as Antonio Negri, characterize the information society as one in which people do immaterial labour.[9] By this, they appear to refer to the production of knowledge or cultural artifacts. One problem with this model is that it ignores the material and essentially industrial basis of the society. However, it does point to a problem for workers, namely how many creative people does this society need to function? For example, it may be that you only need a few star performers, rather than a plethora of non-celebrities, as the work of those performers can be easily distributed, forcing all secondary players to the bottom of the market. It is now common for publishers to promote only their best-selling authors and to try to avoid the rest—even if they still sell steadily. Films are becoming more and more judged, in terms of distribution, by their first weekend's performance, in many cases cutting out opportunity for word-of-mouth development.
Michael Buckland characterizes information in society in his book Information and Society. Buckland expresses the idea that information can be interpreted differently from person to person based on that individual's experiences.[10]
Considering that metaphors and technologies of information move forward in a reciprocal relationship, we can describe some societies (especially Japanese society) as an information society because we think of it as such.[11][12]
The word information may be interpreted in many different ways. According to Buckland in Information and Society, most of the meanings fall into three categories of human knowledge: information as knowledge, information as a process, and information as a thing.[13]
Thus, the Information Society refers to the social importance given to communication and information in today's society, where social, economic and cultural relations are involved.[14]
In the Information Society, the process of capturing, processing and communicating information is the main element that characterizes it. Thus, in this type of society, the vast majority of activity will be dedicated to the provision of services, and said services will consist of the processing, distribution or use of information.[14]
The growth of the amount of technologically mediated information has been quantified in different ways, including society's technological capacity to store information, to communicate information, and to compute information.[17] It is estimated that the world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986, which is the informational equivalent of less than one 730-MB CD-ROM per person in 1986 (539 MB per person), to 295 (optimally compressed) exabytes in 2007.[18] This is the informational equivalent of 60 CD-ROMs per person in 2007[19] and represents a sustained annual growth rate of some 25%. The world's combined technological capacity to receive information through one-way broadcast networks was the informational equivalent of 174 newspapers per person per day in 2007.[18]
The world's combined effective capacity to exchange information through two-way telecommunications networks was 281 petabytes of (optimally compressed) information in 1986, 471 petabytes in 1993, 2.2 (optimally compressed) exabytes in 2000, and 65 (optimally compressed) exabytes in 2007, which is the informational equivalent of 6 newspapers per person per day in 2007.[19] The world's technological capacity to compute information with humanly guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986 to 6.4 × 10^12 MIPS in 2007, experiencing the fastest growth rate of over 60% per year during the last two decades.[18]
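Both growth figures follow from the endpoint values as compound annual growth rates over the 21 years from 1986 to 2007:

$$\left(\frac{295\ \mathrm{EB}}{2.6\ \mathrm{EB}}\right)^{1/21} - 1 \approx 0.25, \qquad \left(\frac{6.4\times 10^{12}\ \mathrm{MIPS}}{3.0\times 10^{8}\ \mathrm{MIPS}}\right)^{1/21} - 1 \approx 0.61,$$

i.e., roughly 25% per year for storage and over 60% per year for computation, matching the cited figures.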
James R. Beniger describes the necessity of information in modern society in the following way: “The need for sharply increased control that resulted from the industrialization of material processes through application of inanimate sources of energy probably accounts for the rapid development of automatic feedback technology in the early industrial period (1740–1830)” (p. 174).
“Even with enhanced feedback control, industry could not have developed without the enhanced means to process matter and energy, not only as inputs of the raw materials of production but also as outputs distributed to final consumption.” (p. 175)[6]
One of the first people to develop the concept of the information society was the economist Fritz Machlup. In 1933, Fritz Machlup began studying the effect of patents on research. His work culminated in the study The Production and Distribution of Knowledge in the United States in 1962. This book was widely regarded[20] and was eventually translated into Russian and Japanese. The Japanese have also studied the information society (or jōhōka shakai, 情報化社会).
The issue of technologies and their role in contemporary society has been discussed in the scientific literature using a range of labels and concepts. This section introduces some of them. Ideas of a knowledge or information economy, post-industrial society, postmodern society, network society, the information revolution, informational capitalism, network capitalism, and the like, have been debated over the last several decades.
Fritz Machlup (1962) introduced the concept of the knowledge industry. He began studying the effects of patents on research before distinguishing five sectors of the knowledge sector: education, research and development, mass media, information technologies, and information services. Based on this categorization he calculated that in 1959 29 per cent of the GNP in the USA had been produced in knowledge industries.[21][22]
Peter Drucker has argued that there is a transition from an economy based on material goods to one based on knowledge.[23] Marc Porat distinguishes a primary (information goods and services that are directly used in the production, distribution or processing of information) and a secondary sector (information services produced for internal consumption by government and non-information firms) of the information economy.[24]
Porat uses the total value added by the primary and secondary information sector to the GNP as an indicator for the information economy. The OECD has employed Porat's definition for calculating the share of the information economy in the total economy (e.g. OECD 1981, 1986). Based on such indicators, the information society has been defined as a society where more than half of the GNP is produced and more than half of the employees are active in the information economy.[25]
For Daniel Bell the number of employees producing services and information is an indicator for the informational character of a society. "A post-industrial society is based on services. (…) What counts is not raw muscle power, or energy, but information. (…) A post-industrial society is one in which the majority of those employed are not involved in the production of tangible goods".[26]
Alain Touraine already spoke in 1971 of the post-industrial society. "The passage to postindustrial society takes place when investment results in the production of symbolic goods that modify values, needs, representations, far more than in the production of material goods or even of 'services'. Industrial society had transformed the means of production: post-industrial society changes the ends of production, that is, culture. (…) The decisive point here is that in postindustrial society all of the economic system is the object of intervention of society upon itself. That is why we can call it the programmed society, because this phrase captures its capacity to create models of management, production, organization, distribution, and consumption, so that such a society appears, at all its functional levels, as the product of an action exercised by the society itself, and not as the outcome of natural laws or cultural specificities" (Touraine 1988: 104). In the programmed society also the area of cultural reproduction, including aspects such as information, consumption, health, research, and education, would be industrialized. That modern society is increasing its capacity to act upon itself means for Touraine that society is reinvesting ever larger parts of production and so produces and transforms itself. This makes Touraine's concept substantially different from that of Daniel Bell, who focused on the capacity to process and generate information for efficient society functioning.
Jean-François Lyotard[27] has argued that "knowledge has become the principle [sic] force of production over the last few decades". Knowledge would be transformed into a commodity. Lyotard says that postindustrial society makes knowledge accessible to the layman because knowledge and information technologies would diffuse into society and break up Grand Narratives of centralized structures and groups. Lyotard denotes these changing circumstances as the postmodern condition or postmodern society.
Similarly to Bell, Peter Otto and Philipp Sonntag (1985) say that an information society is a society where the majority of employees work in information jobs, i.e. they have to deal more with information, signals, symbols, and images than with energy and matter. Radovan Richta (1977) argues that society has been transformed into a scientific civilization based on services, education, and creative activities. This transformation would be the result of a scientific-technological transformation based on technological progress and the increasing importance of computer technology. Science and technology would become immediate forces of production (Aristovnik 2014: 55).
Nico Stehr (1994, 2002a, b) says that in the knowledge society a majority of jobs involves working with knowledge. "Contemporary society may be described as a knowledge society based on the extensive penetration of all its spheres of life and institutions by scientific and technological knowledge" (Stehr 2002b: 18). For Stehr, knowledge is a capacity for social action. Science would become an immediate productive force, knowledge would no longer be primarily embodied in machines, but already appropriated nature that represents knowledge would be rearranged according to certain designs and programs (Ibid.: 41–46). For Stehr, the economy of a knowledge society is largely driven not by material inputs, but by symbolic or knowledge-based inputs (Ibid.: 67); there would be a large number of professions that involve working with knowledge, and a declining number of jobs that demand low cognitive skills as well as in manufacturing (Stehr 2002a).
Alvin Toffler also argues that knowledge is the central resource in the economy of the information society: "In a Third Wave economy, the central resource – a single word broadly encompassing data, information, images, symbols, culture, ideology, and values – is actionable knowledge" (Dyson/Gilder/Keyworth/Toffler 1994).
At the end of the twentieth century, the concept of the network society gained importance in information society theory. For Manuel Castells, network logic is, besides information, pervasiveness, flexibility, and convergence, a central feature of the information technology paradigm (2000a: 69ff). "One of the key features of informational society is the networking logic of its basic structure, which explains the use of the concept of 'network society'" (Castells 2000: 21). "As an historical trend, dominant functions and processes in the Information Age are increasingly organized around networks. Networks constitute the new social morphology of our societies, and the diffusion of networking logic substantially modifies the operation and outcomes in processes of production, experience, power, and culture" (Castells 2000: 500). For Castells the network society is the result of informationalism, a new technological paradigm.
Jan Van Dijk (2006) defines the network society as a "social formation with an infrastructure of social and media networks enabling its prime mode of organization at all levels (individual, group/organizational and societal). Increasingly, these networks link all units or parts of this formation (individuals, groups and organizations)" (Van Dijk 2006: 20). For Van Dijk, networks have become the nervous system of society. Whereas Castells links the concept of the network society to capitalist transformation, Van Dijk sees it as the logical result of the increasing widening and thickening of networks in nature and society. Darin Barney uses the term for characterizing societies that exhibit two fundamental characteristics: "The first is the presence in those societies of sophisticated – almost exclusively digital – technologies of networked communication and information management/distribution, technologies which form the basic infrastructure mediating an increasing array of social, political and economic practices. (…) The second, arguably more intriguing, characteristic of network societies is the reproduction and institutionalization throughout (and between) those societies of networks as the basic form of human organization and relationship across a wide range of social, political and economic configurations and associations".[28]
The major critique of concepts such as information society, postmodern society, knowledge society, network society, postindustrial society, etc. that has mainly been voiced by critical scholars is that they create the impression that we have entered a completely new type of society. "If there is just more information then it is hard to understand why anyone should suggest that we have before us something radically new" (Webster 2002a: 259). Critics such as Frank Webster argue that these approaches stress discontinuity, as if contemporary society had nothing in common with society as it was 100 or 150 years ago. Such assumptions would have ideological character because they would fit with the view that we can do nothing about change and have to adapt to existing political realities (Webster 2002b: 267).
These critics argue that contemporary society first of all is still a capitalist society oriented towards accumulating economic, political, and cultural capital. They acknowledge that information society theories stress some important new qualities of society (notably globalization and informatization), but charge that they fail to show that these are attributes of overall capitalist structures. Critics such as Webster insist on the continuities that characterise change. In this way Webster distinguishes between different epochs of capitalism: laissez-faire capitalism of the 19th century, corporate capitalism in the 20th century, and informational capitalism for the 21st century (Webster 2006).
For describing contemporary society based on a new dialectic of continuity and discontinuity, other critical scholars have suggested several terms like:
Other scholars prefer to speak of information capitalism (Morris-Suzuki 1997) or informational capitalism (Manuel Castells 2000, Christian Fuchs 2005, Schmiede 2006a, b). Manuel Castells sees informationalism as a new technological paradigm (he speaks of a mode of development) characterized by "information generation, processing, and transmission" that have become "the fundamental sources of productivity and power" (Castells 2000: 21). The "most decisive historical factor accelerating, channelling and shaping the information technology paradigm, and inducing its associated social forms, was/is the process of capitalist restructuring undertaken since the 1980s, so that the new techno-economic system can be adequately characterized as informational capitalism" (Castells 2000: 18). Castells has added to theories of the information society the idea that in contemporary society dominant functions and processes are increasingly organized around networks that constitute the new social morphology of society (Castells 2000: 500). Nicholas Garnham[31] is critical of Castells and argues that the latter's account is technologically determinist because Castells points out that his approach is based on a dialectic of technology and society in which technology embodies society and society uses technology (Castells 2000: 5sqq). But Castells also makes clear that the rise of a new "mode of development" is shaped by capitalist production, i.e. by society, which implies that technology isn't the only driving force of society.
Antonio Negri and Michael Hardt argue that contemporary society is an Empire that is characterized by a singular global logic of capitalist domination that is based on immaterial labour. With the concept of immaterial labour Negri and Hardt introduce ideas of information society discourse into their Marxist account of contemporary capitalism. Immaterial labour would be labour "that creates immaterial products, such as knowledge, information, communication, a relationship, or an emotional response" (Hardt/Negri 2005: 108; cf. also 2000: 280–303), or services, cultural products, knowledge (Hardt/Negri 2000: 290). There would be two forms: intellectual labour that produces ideas, symbols, codes, texts, linguistic figures, images, etc.; and affective labour that produces and manipulates affects such as a feeling of ease, well-being, satisfaction, excitement, passion, joy, sadness, etc. (Ibid.).
Overall, neo-Marxist accounts of the information society have in common that they stress that knowledge, information technologies, and computer networks have played a role in the restructuration and globalization of capitalism and the emergence of a flexible regime of accumulation (David Harvey 1989). They warn that new technologies are embedded into societal antagonisms that cause structural unemployment, rising poverty, social exclusion, the deregulation of the welfare state and of labour rights, the lowering of wages, welfare, etc.
Concepts such as knowledge society, information society, network society, informational capitalism, postindustrial society, transnational network capitalism, postmodern society, etc. show that there is a vivid discussion in contemporary sociology on the character of contemporary society and the role that technologies, information, communication, and co-operation play in it. Information society theory discusses the role of information and information technology in society, the question which key concepts shall be used for characterizing contemporary society, and how to define such concepts. It has become a specific branch of contemporary sociology.
Information society is the means of sending and receiving information from one place to another.[32] As technology has advanced, so too have the ways people have adapted to share information with each other.
"Second nature" refers a group of experiences that get made over by culture.[33]They then get remade into something else that can then take on a new meaning. As a society we transform this process so it becomes something natural to us, i.e. second nature. So, by following a particular pattern created by culture we are able to recognise how we use and move information in different ways. From sharing information via different time zones (such as talking online) to information ending up in a different location (sending a letter overseas) this has all become a habitual process that we as a society take for granted.[34]
However, through the process of sharing information, vectors have enabled us to spread information even further. Through the use of these vectors, information is able to move and then separate from the initial things that enabled it to move.[35] From here, something called "third nature" has developed. An extension of second nature, third nature is in control of second nature. It expands on what second nature is limited by. It has the ability to mould information in new and different ways. So, third nature is able to "speed up, proliferate, divide, mutate, and beam in on us from elsewhere".[36] It aims to create a balance between the boundaries of space and time (see second nature). This can be seen in the telegraph: it was the first successful technology that could send and receive information faster than a human being could move an object.[37] As a result, different vectors of people have the ability to not only shape culture but create new possibilities that will ultimately shape society.
Therefore, through the use of second nature and third nature society is able to use and explore new vectors of possibility where information can be moulded to create new forms of interaction.[38]
Insociology,informational societyrefers to apost-moderntype of society. Theoreticians likeUlrich Beck,Anthony GiddensandManuel Castellsargue that since the 1970s a transformation fromindustrial societyto informational society has happened on a global scale.[40]
Assteam powerwas the technology standing behind industrial society, soinformation technologyis seen as the catalyst for the changes in work organisation, societal structure and politics occurring in the late 20th century.
In the bookFuture Shock,Alvin Tofflerused the phrasesuper-industrial societyto describe this type of society. Other writers and thinkers have used terms like "post-industrial society" and "post-modern industrial society" with a similar meaning.
A number of terms in current use emphasize related but different aspects of the emerging global economic order. The Information Society intends to be the most encompassing, in that an economy is a subset of a society. The Information Age is somewhat limiting, in that it refers to a 30-year period between the widespread use of computers and the knowledge economy, rather than an emerging economic order. The knowledge era is about the nature of the content, not the socioeconomic processes by which it will be traded. The computer revolution and knowledge revolution refer to specific revolutionary transitions, rather than the end state towards which we are evolving. The Information Revolution relates to the well-known terms agricultural revolution and Industrial Revolution.
One of the central paradoxes of the information society is that it makes information easily reproducible, leading to a variety of freedom/control problems relating to intellectual property. Essentially, business and capital, whose place becomes that of producing and selling information and knowledge, seem to require control over this new resource so that it can effectively be managed and sold as the basis of the information economy. However, such control can prove to be both technically and socially problematic: technically because copy protection is often easily circumvented, and socially because the users and citizens of the information society can prove to be unwilling to accept such absolute commodification of the facts and information that compose their environment.
Responses to this concern range from theDigital Millennium Copyright Actin the United States (and similar legislation elsewhere) which makecopy protection(seeDigital rights management) circumvention illegal, to thefree software,open sourceandcopyleftmovements, which seek to encourage and disseminate the "freedom" of various information products (traditionally both as in "gratis" or free of cost, and liberty, as in freedom to use, explore and share).
Caveat: "information society" is often used by politicians to mean something like "we all use the internet now"; the sociological term information society (or informational society) has deeper implications about changes in societal structure. Because we lack political control of intellectual property, we also lack a concrete map of issues, an analysis of costs and benefits, and functioning political groups unified by common interests that represent the diverse opinions prominent in the information society.[42]
|
https://en.wikipedia.org/wiki/Information_society
|
TheInformation Age[a]is ahistorical periodthat began in the mid-20th century. It is characterized by a rapid shift from traditional industries, as established during theIndustrial Revolution, to an economy centered oninformation technology.[2]The onset of the Information Age has been linked to the development of thetransistorin 1947.[2]This technological advance has had a significant impact on the way information is processed and transmitted.
According to theUnited Nations Public Administration Network, the Information Age was formed by capitalizing oncomputer miniaturizationadvances,[3]which led tomodernizedinformation systems and internet communications as the driving force ofsocial evolution.[4]
There is ongoing debate concerning whether the Third Industrial Revolution has already ended, and if theFourth Industrial Revolutionhas already begun due to the recent breakthroughs in areas such asartificial intelligenceandbiotechnology.[5]This next transition has been theorized to harken the advent of theImagination Age, theInternet of things(IoT), and rapid advancements inmachine learning.
The digital revolution converted technology from analog format to digital format. By doing this, it became possible to make copies that were identical to the original. In digital communications, for example, repeating hardware was able to amplify thedigital signaland pass it on with no loss of information in the signal. Of equal importance to the revolution was the ability to easily move the digital information between media, and to access or distribute it remotely. One turning point of the revolution was the change from analog to digitally recorded music.[6]During the 1980s the digital format of optical compact discs gradually replacedanalogformats, such asvinyl recordsandcassette tapes, as the popular medium of choice.[7]
Humans have manufactured tools for counting and calculating since ancient times, such as theabacus,astrolabe,equatorium, and mechanical timekeeping devices. More complicated devices started appearing in the 1600s, including theslide ruleandmechanical calculators. By the early 1800s, theIndustrial Revolutionhad produced mass-market calculators like thearithmometerand the enabling technology of thepunch card.Charles Babbageproposed a mechanical general-purpose computer called theAnalytical Engine, but it was never successfully built, and was largely forgotten by the 20th century and unknown to most of the inventors of modern computers.
TheSecond Industrial Revolutionin the last quarter of the 19th century developed useful electrical circuits and thetelegraph. In the 1880s,Herman Hollerithdeveloped electromechanical tabulating and calculating devices using punch cards andunit record equipment, which became widespread in business and government.
Meanwhile, variousanalog computersystems used electrical, mechanical, or hydraulic systems to model problems and calculate answers. These included an 1872tide-predicting machine,differential analysers,perpetual calendarmachines, theDeltarfor water management in the Netherlands,network analyzersfor electrical systems, and various machines for aiming military guns and bombs. The construction of problem-specific analog computers continued in the late 1940s and beyond, withFERMIACfor neutron transport,Project Cyclonefor various military applications, and thePhillips Machinefor economic modeling.
Building on the complexity of theZ1andZ2, German inventorKonrad Zuseused electromechanical systems to complete in 1941 theZ3, the world's first working programmable, fully automatic digital computer. Also during World War II, Allied engineers constructed electromechanicalbombesto break GermanEnigma machineencoding. The base-10 electromechanicalHarvard Mark Iwas completed in 1944, and was to some degree improved with inspiration from Charles Babbage's designs.
In 1947, the first workingtransistor, thegermanium-basedpoint-contact transistor, was invented byJohn BardeenandWalter Houser Brattainwhile working underWilliam ShockleyatBell Labs.[8]This led the way to more advanceddigital computers. From the late 1940s, universities, military, and businesses developed computer systems to digitally replicate and automate previously manually performed mathematical calculations, with theLEObeing the first commercially available general-purpose computer.
Digital communicationbecame economical for widespread adoption after the invention of the personal computer in the 1970s.Claude Shannon, aBell Labsmathematician, is credited for having laid out the foundations ofdigitalizationin his pioneering 1948 article,A Mathematical Theory of Communication.[9]
In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer. Their concept forms the basis of CMOS and DRAM technology today.[10] In 1957 at Bell Labs, Frosch and Derick were able to manufacture planar silicon dioxide transistors;[11] later, a team at Bell Labs demonstrated a working MOSFET.[12] The first integrated circuit milestone was achieved by Jack Kilby in 1958.[13]
Other important technological developments included the invention of the monolithicintegrated circuitchip byRobert NoyceatFairchild Semiconductorin 1959,[14]made possible by theplanar processdeveloped byJean Hoerni.[15]In 1963,complementary MOS(CMOS) was developed byChih-Tang SahandFrank WanlassatFairchild Semiconductor.[16]Theself-aligned gatetransistor, which further facilitated mass production, was invented in 1966 by Robert Bower atHughes Aircraft[17][18]and independently by Robert Kerwin,Donald Kleinand John Sarace at Bell Labs.[19]
In 1962 AT&T deployed theT-carrierfor long-haulpulse-code modulation(PCM) digital voice transmission. The T1 format carried 24 pulse-code modulated, time-division multiplexed speech signals each encoded in 64 kbit/s streams, leaving 8 kbit/s of framing information which facilitated the synchronization and demultiplexing at the receiver. Over the subsequent decades the digitisation of voice became the norm for all but the last mile (where analogue continued to be the norm right into the late 1990s).
Following the development ofMOS integrated circuitchips in the early 1960s, MOS chips reached highertransistor densityand lower manufacturing costs thanbipolarintegrated circuits by 1964. MOS chips further increased in complexity at a rate predicted byMoore's law, leading tolarge-scale integration(LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips tocomputingwas the basis for the firstmicroprocessors, as engineers began recognizing that a completecomputer processorcould be contained on a single MOS LSI chip.[20]In 1968, Fairchild engineerFederico Fagginimproved MOS technology with his development of thesilicon-gateMOS chip, which he later used to develop theIntel 4004, the first single-chip microprocessor.[21]It was released byIntelin 1971, and laid the foundations for themicrocomputer revolutionthat began in the 1970s.
MOS technology also led to the development of semiconductorimage sensorssuitable fordigital cameras.[22]The first such image sensor was thecharge-coupled device, developed byWillard S. BoyleandGeorge E. Smithat Bell Labs in 1969,[23]based onMOS capacitortechnology.[22]
The public was first introduced to the concepts that led to the Internet when a message was sent over theARPANETin 1969.Packet switchednetworks such as ARPANET,Mark I,CYCLADES,Merit Network,Tymnet, andTelenet, were developed in the late 1960s and early 1970s using a variety ofprotocols. The ARPANET in particular led to the development of protocols forinternetworking, in which multiple separate networks could be joined into a network of networks.
TheWhole Earthmovement of the 1960s advocated the use of new technology.[24]
In the 1970s, the home computer was introduced,[25] along with time-sharing computers,[26] the video game console, and the first coin-op video games,[27][28] and the golden age of arcade video games began with Space Invaders. As digital technology proliferated, and the switch from analog to digital record keeping became the new standard in business, a relatively new job description was popularized: the data entry clerk. Culled from the ranks of secretaries and typists from earlier decades, the data entry clerk's job was to convert analog data (customer records, invoices, etc.) into digital data.
In developed nations, computers achieved semi-ubiquity during the 1980s as they made their way into schools, homes, business, and industry.Automated teller machines,industrial robots,CGIin film and television,electronic music,bulletin board systems, and video games all fueled what became the zeitgeist of the 1980s. Millions of people purchased home computers, making household names of early personal computer manufacturers such asApple, Commodore, and Tandy. To this day the Commodore 64 is often cited as the best selling computer of all time, having sold 17 million units (by some accounts)[29]between 1982 and 1994.
In 1984, the U.S. Census Bureau began collecting data on computer and Internet use in the United States; their first survey showed that 8.2% of all U.S. households owned a personal computer in 1984, and that households with children under the age of 18 were nearly twice as likely to own one at 15.3% (middle and upper middle class households were the most likely to own one, at 22.9%).[30]By 1989, 15% of all U.S. households owned a computer, and nearly 30% of households with children under the age of 18 owned one.[31]By the late 1980s, many businesses were dependent on computers and digital technology.
Motorola created the first mobile phone, the Motorola DynaTAC, in 1983. However, this device used analog communication; digital cell phones were not sold commercially until 1991, when the 2G network started to be opened in Finland to accommodate the unexpected demand for cell phones that was becoming apparent in the late 1980s.
Compute!magazine predicted thatCD-ROMwould be the centerpiece of the revolution, with multiple household devices reading the discs.[32]
The first truedigital camerawas created in 1988, and the first were marketed in December 1989 in Japan and in 1990 in the United States.[33]By the early 2000s, digital cameras had eclipsed traditional film in popularity.
Digital ink and paintwas also invented in the late 1980s. Disney's CAPS system (created 1988) was used for a scene in 1989'sThe Little Mermaidand for all their animation films between 1990'sThe Rescuers Down Underand 2004'sHome on the Range.
Tim Berners-Leeinvented theWorld Wide Webin 1989.[34]The "Web 1.0 era" ended in 2005, coinciding with the development of further advanced technologies during the start of the 21st century.[35]
The first public digitalHDTVbroadcast was of the1990 World Cupthat June; it was played in 10 theaters in Spain and Italy. However, HDTV did not become a standard until the mid-2000s outside Japan.
The World Wide Web became publicly accessible in 1991; it had previously been available only to government and universities.[36] In 1993 Marc Andreessen and Eric Bina introduced Mosaic, the first web browser capable of displaying inline images[37] and the basis for later browsers such as Netscape Navigator and Internet Explorer. Stanford Federal Credit Union was the first financial institution to offer online internet banking services to all of its members, in October 1994.[38] In 1996 OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe.[39] The Internet expanded quickly, and by 1996 it was part of mass culture and many businesses listed websites in their ads.[citation needed] By 1999, almost every country had a connection, and nearly half of Americans and people in several other countries used the Internet on a regular basis.[citation needed] However, throughout the 1990s, "getting online" entailed complicated configuration, and dial-up was the only connection type affordable by individual users; the present-day mass Internet culture was not possible.
In 1989, about 15% of all households in the United States owned a personal computer.[40]For households with children, nearly 30% owned a computer in 1989, and in 2000, 65% owned one.
Cell phonesbecame as ubiquitous as computers by the early 2000s, with movie theaters beginning to show ads telling people to silence their phones. They also becamemuch more advancedthan phones of the 1990s, most of which only took calls or at most allowed for the playing of simple games.
Text messaging became widely used in the late 1990s worldwide, except in the United States of America, where text messaging did not become commonplace until the early 2000s.[citation needed]
The digital revolution became truly global in this time as well – after revolutionizing society in thedeveloped worldin the 1990s, the digital revolution spread to the masses in thedeveloping worldin the 2000s.
By 2000, a majority of U.S. households had at least one personal computer andinternet accessthe following year.[41]In 2002, a majority of U.S. survey respondents reported having a mobile phone.[42]
In late 2005 the population of the Internet reached 1 billion,[43]and 3 billion people worldwide used cell phones by the end of the decade.HDTVbecame the standard television broadcasting format in many countries by the end of the decade. In September and December 2006 respectively,Luxembourgand theNetherlandsbecame the first countries to completelytransition from analog to digital television. In September 2007, a majority of U.S. survey respondents reported havingbroadband internetat home.[44]According to estimates from theNielsen Media Research, approximately 45.7 million U.S. households in 2006 (or approximately 40 percent of approximately 114.4 million) owned a dedicatedhome video game console,[45][46]and by 2015, 51 percent of U.S. households owned a dedicated home video game console according to anEntertainment Software Associationannual industryreport.[47][48]By 2012, over 2 billion people used the Internet, twice the number using it in 2007.Cloud computinghad entered the mainstream by the early 2010s. In January 2013, a majority of U.S. survey respondents reported owning asmartphone.[49]By 2016, half of the world's population was connected[50]and as of 2020, that number has risen to 67%.[51]
In the late 1980s, less than 1% of the world's technologically stored information was in digital format, while it was 94% in 2007, with more than 99% by 2014.[52]
It is estimated that the world's capacity to store information has increased from 2.6 (optimally compressed)exabytesin 1986, to some 5,000exabytesin 2014 (5zettabytes).[52][53]
Library expansion was calculated in 1945 by Fremont Rider to double in capacity every 16 years, were sufficient space made available.[61] He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on-demand for library patrons and other institutions.
Rider did not foresee, however, the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media, whereby vast increases in the rapidity of information growth would be made possible through automated, potentially lossless digital technologies. Moore's law, formulated around 1965, predicted that the number of transistors in a dense integrated circuit doubles approximately every two years.[62][63]
By the early 1980s, along with improvements incomputing power, the proliferation of the smaller and less expensive personal computers allowed for immediateaccess to informationand the ability toshareandstoreit. Connectivity between computers within organizations enabled access to greater amounts of information.[citation needed]
The world's technological capacity to store information grew from 2.6 (optimallycompressed)exabytes(EB) in 1986 to 15.8 EB in 1993; over 54.5 EB in 2000; and to 295 (optimally compressed) EB in 2007.[52][65]This is the informational equivalent to less than one 730-megabyte(MB)CD-ROMper person in 1986 (539 MB per person); roughly four CD-ROM per person in 1993; twelve CD-ROM per person in the year 2000; and almost sixty-one CD-ROM per person in 2007.[52]It is estimated that the world's capacity to store information has reached 5zettabytesin 2014,[53]the informational equivalent of 4,500 stacks of printed books from the earth to thesun.[citation needed]
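The per-capita equivalences above can be reproduced in a few lines of Python; the population figure below is inferred from the text's own numbers (2.6 exabytes total and 539 MB per person in 1986) rather than taken from an independent source:

```python
# Back out the population implied by the cited figures: 2.6 optimally compressed
# exabytes of worldwide storage in 1986 at 539 MB per person.
EB, MB = 10**18, 10**6
total_1986 = 2.6 * EB
implied_population = total_1986 / (539 * MB)
print(f"Implied 1986 population: {implied_population / 1e9:.1f} billion")  # ~4.8 billion
print(f"730 MB CD-ROMs per person: {539 / 730:.2f}")  # under one disc each in 1986
```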
The amount of digital data stored appears to be growing approximately exponentially, reminiscent of Moore's law. Similarly, Kryder's law holds that the amount of available storage space appears to be growing approximately exponentially.[66][67][68][63]
The world's technological capacity to receive information through one-waybroadcast networkswas 432exabytesof (optimallycompressed) information in 1986; 715 (optimally compressed) exabytes in 1993; 1.2 (optimally compressed)zettabytesin 2000; and 1.9 zettabytes in 2007, the information equivalent of 174 newspapers per person per day.[52]
The world's effective capacity toexchange informationthroughtwo-wayTelecommunications networkswas 281petabytesof (optimally compressed) information in 1986; 471 petabytes in 1993; 2.2 (optimally compressed) exabytes in 2000; and 65 (optimally compressed) exabytes in 2007, the information equivalent of six newspapers per person per day.[52]In the 1990s, the spread of the Internet caused a sudden leap in access to and ability to share information in businesses and homes globally. A computer that cost $3000 in 1997 would cost $2000 two years later and $1000 the following year, due to the rapid advancement of technology.[citation needed]
The world's technological capacity to compute information with human-guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986, to 4.4 × 10^9 MIPS in 1993; to 2.9 × 10^11 MIPS in 2000; and to 6.4 × 10^12 MIPS in 2007.[52] An article featured in the journal Trends in Ecology and Evolution in 2016 reported that:[53]
Digital technology has vastly exceeded the cognitive capacity of any single human being and has done so a decade earlier than predicted. In terms of capacity, there are two measures of importance: the number of operations a system can perform and the amount of information that can be stored. The number of synaptic operations per second in a human brain has been estimated to lie between 10^15 and 10^17. While this number is impressive, even in 2007 humanity's general-purpose computers were capable of performing well over 10^18 instructions per second. Estimates suggest that the storage capacity of an individual human brain is about 10^12 bytes. On a per capita basis, this is matched by current digital storage (5 × 10^21 bytes per 7.2 × 10^9 people).
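The per capita claim in the quotation can be checked directly from its own figures; a minimal sketch (the ~10^12-byte brain estimate is the one quoted above):

```python
# Divide the quoted worldwide digital storage by the quoted population and
# compare with the quoted ~10**12-byte estimate for a single human brain.
digital_bytes, people, brain_bytes = 5e21, 7.2e9, 1e12
per_person = digital_bytes / people
print(f"{per_person:.2e} bytes per person")          # ~6.9e11, on the order of 10**12
print(f"Fraction of one brain: {per_person / brain_bytes:.2f}")  # ~0.69
```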
Genetic code may also be considered part of the information revolution. Now that sequencing has been computerized, a genome can be rendered and manipulated as data. This started with DNA sequencing, invented by Walter Gilbert and Allan Maxam[69] in 1976–1977 and Frederick Sanger in 1977, grew steadily with the Human Genome Project, initially conceived by Gilbert, and finally reached the practical applications of sequencing, such as gene testing, after the discovery by Myriad Genetics of the BRCA1 breast cancer gene mutation. Sequence data in GenBank has grown from the 606 genome sequences registered in December 1982 to the 231 million genomes in August 2021. An additional 13 trillion incomplete sequences are registered in the Whole Genome Shotgun submission database as of August 2021. The information contained in these registered sequences has doubled every 18 months.[70][original research?]
During rare times in human history, there have been periods of innovation that have transformed human life. TheNeolithic Age, the Scientific Age and theIndustrial Ageall, ultimately, induced discontinuous and irreversible changes in the economic, social and cultural elements of the daily life of most people. Traditionally, these epochs have taken place over hundreds, or in the case of the Neolithic Revolution, thousands of years, whereas the Information Age swept to all parts of the globe in just a few years, as a result of the rapidly advancing speed of information exchange.
Between 7,000 and 10,000 years ago, during the Neolithic period, humans began to domesticate animals, farm grains, and replace stone tools with ones made of metal. These innovations allowed nomadic hunter-gatherers to settle down. Villages formed along the Yangtze River in China in 6,500 B.C., and in the Nile River region of Africa and in Mesopotamia (Iraq) in 6,000 B.C. Cities emerged between 6,000 B.C. and 3,500 B.C. The development of written communication (cuneiform in Sumeria and hieroglyphs in Egypt in 3,500 B.C., writing in Egypt in 2,560 B.C., and in Minoa and China around 1,450 B.C.) enabled ideas to be preserved for extended periods and to spread extensively. In all, Neolithic developments, augmented by writing as an information tool, laid the groundwork for the advent of civilization.
The Scientific Age began in the period between Copernicus's 1543 publication of the heliocentric model and Newton's publication of the laws of motion and gravity in Principia in 1687. This age of discovery continued through the 18th century, accelerated by the widespread use of Johannes Gutenberg's moveable type printing press.
The Industrial Age began in Great Britain in 1760 and continued into the mid-19th century. The invention of machines such as the power loom by Edmund Cartwright, the rotating shaft steam engine by James Watt and the cotton gin by Eli Whitney, along with processes for mass manufacturing, came to serve the needs of a growing global population. The Industrial Age harnessed steam and waterpower to reduce the dependence on animal and human physical labor as the primary means of production. Thus, the core of the Industrial Revolution was the generation and distribution of energy from coal and water to produce steam and, later in the 20th century, electricity.
The Information Age also requires electricity to power the global networks of computers that process and store data. However, what dramatically accelerated the pace of the Information Age's adoption, as compared to previous ones, was the speed by which knowledge could be transferred and pervade the entire human family in a few short decades. This acceleration came about with the adoption of a new form of power. Beginning in 1972, engineers devised ways to harness light to convey data through fiber optic cable. Today, light-based optical networking systems at the heart of telecom networks and the Internet span the globe and carry most of the information traffic to and from users and data storage systems.
There are different conceptualizations of the Information Age. Some focus on the evolution of information over the ages, distinguishing between the Primary Information Age and the Secondary Information Age. Information in the Primary Information Age was handled by newspapers, radio and television. The Secondary Information Age was developed by the Internet, satellite televisions and mobile phones. The Tertiary Information Age emerged from the media of the Primary Information Age interconnected with media of the Secondary Information Age, as presently experienced.[71][72][73][74][75][76]
Others classify it in terms of the well-establishedSchumpeterianlong wavesorKondratiev waves. Here authors distinguish three different long-term metaparadigms, each with different long waves. The first focused on the transformation of material, includingstone,bronze, andiron. The second, often referred to asIndustrial Revolution, was dedicated to the transformation of energy, includingwater,steam,electric, andcombustion power. Finally, the most recent metaparadigm aims at transforming information. It started out with the proliferation of communication andstored dataand has now entered the age ofalgorithms, which aims at creating automated processes to convert the existing information into actionable knowledge.[77]
The main feature of the information revolution is the growing economic, social and technological role of information.[78] Information-related activities did not originate with the Information Revolution. They existed, in one form or the other, in all human societies, and eventually developed into institutions, such as the Platonic Academy, Aristotle's Peripatetic school in the Lyceum, the Musaeum and the Library of Alexandria, or the schools of Babylonian astronomy. The Agricultural Revolution and the Industrial Revolution came about when new informational inputs were produced by individual innovators, or by scientific and technical institutions. During the Information Revolution all these activities are experiencing continuous growth, while other information-oriented activities are emerging.
Information is the central theme of several new sciences, which emerged in the 1940s, including Shannon's (1949) Information Theory[79] and Wiener's (1948) Cybernetics. Wiener stated: "information is information, not matter or energy". This aphorism suggests that information should be considered along with matter and energy as the third constituent part of the Universe; information is carried by matter or by energy.[80] By the 1990s some writers believed that changes implied by the Information Revolution would lead not only to a fiscal crisis for governments but also to the disintegration of all "large structures".[81]
The term information revolution may relate to, or contrast with, such widely used terms as Industrial Revolution and Agricultural Revolution. Note, however, that some authors prefer a mentalist to a materialist paradigm. The following fundamental aspects of the theory of information revolution can be given:[82][83]
From a different perspective,Irving E. Fang(1997) identified six 'Information Revolutions': writing, printing, mass media, entertainment, the 'tool shed' (which we call 'home' now), and the information highway. In this work the term 'information revolution' is used in a narrow sense, to describe trends in communication media.[87]
Porat (1976) measured the information sector in the US using theinput-output analysis;OECDhas included statistics on the information sector in the economic reports of its member countries.[88]Veneris (1984, 1990) explored the theoretical, economic and regional aspects of the informational revolution and developed asystems dynamicssimulationcomputer model.[82][83]
These works can be seen as following the path originated with the work of Fritz Machlup who, in his 1962 book "The Production and Distribution of Knowledge in the United States", claimed that the "knowledge industry represented 29% of the US gross national product", which he saw as evidence that the Information Age had begun. He defined knowledge as a commodity and attempted to measure the magnitude of the production and distribution of this commodity within a modern economy. Machlup divided information use into three classes: instrumental, intellectual, and pastime knowledge. He also identified five types of knowledge: practical knowledge; intellectual knowledge, that is, general culture and the satisfying of intellectual curiosity; pastime knowledge, that is, knowledge satisfying non-intellectual curiosity or the desire for light entertainment and emotional stimulation; spiritual or religious knowledge; and unwanted knowledge, accidentally acquired and aimlessly retained.[89]
More recent estimates have reached the following results:[52]
Eventually,Information and communication technology(ICT)—i.e. computers,computerized machinery,fiber optics,communication satellites, the Internet, and other ICT tools—became a significant part of theworld economy, as the development ofoptical networkingandmicrocomputersgreatly changed many businesses and industries.[91][92]Nicholas Negropontecaptured the essence of these changes in his 1995 book,Being Digital,in which he discusses the similarities and differences between products made ofatomsand products made ofbits.[93]
The Information Age has affected the workforce in several ways, such as compelling workers to compete in a global job market. One of the most evident concerns is the replacement of human labor by computers that can do the same jobs faster and more effectively, thus creating a situation in which individuals who perform tasks that can easily be automated are forced to find employment where their labor is not as disposable.[94] This especially creates issues for those in industrial cities, where solutions typically involve lowering working time, which is often highly resisted. Thus, individuals who lose their jobs may be pressed to move up into more indispensable professions (e.g. engineers, doctors, lawyers, teachers, professors, scientists, executives, journalists, consultants), whose members are able to compete successfully in the world market and receive (relatively) high wages.[citation needed]
Along with automation, jobs traditionally associated with the middle class (e.g. assembly line, data processing, management, and supervision) have also begun to disappear as a result of outsourcing.[95] Unable to compete with those in developing countries, production and service workers in post-industrial (i.e. developed) societies either lose their jobs through outsourcing, accept wage cuts, or settle for low-skill, low-wage service jobs.[95] In the past, the economic fate of individuals was tied to that of their nation. For example, workers in the United States were once well paid in comparison to those in other countries. With the advent of the Information Age and improvements in communication, this is no longer the case, as workers must now compete in a global job market, whereby wages are less dependent on the success or failure of individual economies.[95]
In effectuating a globalized workforce, the internet has also allowed for increased opportunity in developing countries, making it possible for workers in such places to provide in-person services, therefore competing directly with their counterparts in other nations. This competitive advantage translates into increased opportunities and higher wages.[96]
The Information Age has affected the workforce in thatautomationand computerization have resulted in higherproductivitycoupled with netjob lossin manufacturing. In the United States, for example, from January 1972 to August 2010, the number of people employed in manufacturing jobs fell from 17,500,000 to 11,500,000 while manufacturing value rose 270%.[97]Although it initially appeared thatjob lossin theindustrial sectormight be partially offset by the rapid growth of jobs in information technology, therecession of March 2001foreshadowed a sharp drop in the number of jobs in the sector. This pattern of decrease in jobs would continue until 2003,[98]and data has shown that, overall, technology creates more jobs than it destroys even in the short run.[99]
Industry has become more information-intensive and less labor- and capital-intensive. This has had important implications for the workforce, as workers have become increasingly productive even as the value of their labor decreases. For the system of capitalism itself, as the value of labor decreases, the value of capital increases.
In theclassical model, investments inhumanandfinancial capitalare important predictors of the performance of a newventure.[100]However, as demonstrated byMark Zuckerbergand Facebook, it now seems possible for a group of relatively inexperienced people with limited capital to succeed on a large scale.[101]
The Information Age was enabled by technology developed in theDigital Revolution, which was itself enabled by building on the developments of theTechnological Revolution.
The onset of the Information Age can be associated with the development oftransistortechnology.[2]The concept of afield-effect transistorwas first theorized byJulius Edgar Lilienfeldin 1925.[102]The first practical transistor was thepoint-contact transistor, invented by the engineersWalter Houser BrattainandJohn Bardeenwhile working forWilliam ShockleyatBell Labsin 1947. This was a breakthrough that laid the foundations for modern technology.[2]Shockley's research team also invented thebipolar junction transistorin 1952.[103][102]The most widely used type of transistor is themetal–oxide–semiconductor field-effect transistor(MOSFET), invented byMohamed M. AtallaandDawon Kahngat Bell Labs in 1960.[104]Thecomplementary MOS(CMOS) fabrication process was developed byFrank WanlassandChih-Tang Sahin 1963.[105]
Before the advent ofelectronics,mechanical computers, like theAnalytical Enginein 1837, were designed to provide routine mathematical calculation and simple decision-making capabilities. Military needs duringWorld War IIdrove development of the first electronic computers, based onvacuum tubes, including theZ3, theAtanasoff–Berry Computer,Colossus computer, andENIAC.
The invention of the transistor enabled the era ofmainframe computers(1950s–1970s), typified by theIBM 360. These large,room-sized computersprovided data calculation andmanipulationthat was much faster than humanly possible, but were expensive to buy and maintain, so were initially limited to a few scientific institutions, large corporations, and government agencies.
Thegermaniumintegrated circuit(IC) was invented byJack KilbyatTexas Instrumentsin 1958.[106]Thesiliconintegrated circuit was then invented in 1959 byRobert NoyceatFairchild Semiconductor, using theplanar processdeveloped byJean Hoerni, who was in turn building onMohamed Atalla's siliconsurface passivationmethod developed atBell Labsin 1957.[107][108]Following the invention of theMOS transistorby Mohamed Atalla andDawon Kahngat Bell Labs in 1959,[104]theMOSintegrated circuit was developed by Fred Heiman and Steven Hofstein atRCAin 1962.[109]Thesilicon-gateMOS IC was later developed byFederico Fagginat Fairchild Semiconductor in 1968.[110]With the advent of the MOS transistor and the MOS IC, transistor technologyrapidly improved, and the ratio of computing power to size increased dramatically, giving direct access to computers to ever smaller groups of people.
The first commercial single-chip microprocessor, the Intel 4004, launched in 1971; it was developed by Federico Faggin using his silicon-gate MOS IC technology, along with Marcian Hoff, Masatoshi Shima and Stan Mazor.[111][112]
Along with electronicarcade machinesandhome video game consolespioneered byNolan Bushnellin the 1970s, the development of personal computers like theCommodore PETandApple II(both in 1977) gave individuals access to computers. However,data sharingbetween individual computers was either non-existent or largelymanual, at first usingpunched cardsandmagnetic tape, and laterfloppy disks.
The first developments for storing data were initially based on photographs, starting withmicrophotographyin 1851 and thenmicroformin the 1920s, with the ability to store documents on film, making them much more compact. Earlyinformation theoryandHamming codeswere developed about 1950, but awaited technical innovations in data transmission and storage to be put to full use.
Magnetic-core memory was developed from the research of Frederick W. Viehe in 1947 and An Wang at Harvard University in 1949.[113][114] With the advent of the MOS transistor, MOS semiconductor memory was developed by John Schmidt at Fairchild Semiconductor in 1964.[115][116] In 1967, Dawon Kahng and Simon Sze at Bell Labs described how the floating gate of an MOS semiconductor device could be used for the cell of a reprogrammable ROM.[117] Following the invention of flash memory by Fujio Masuoka at Toshiba in 1980,[118][119] Toshiba commercialized NAND flash memory in 1987.[120][117]
Copper wire cables transmitting digital data connected computer terminals and peripherals to mainframes, and special message-sharing systems leading to email were first developed in the 1960s. Independent computer-to-computer networking began with ARPANET in 1969. This expanded to become the Internet (a term coined in 1974). Access to the Internet improved with the invention of the World Wide Web in 1991. The capacity expansion from dense wave division multiplexing, optical amplification and optical networking in the mid-1990s led to record data transfer rates. By 2018, optical networks routinely delivered 30.4 terabits/s over a fiber optic pair, the data equivalent of 1.2 million simultaneous 4K HD video streams.[121]
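As a rough sanity check of the cited equivalence (the per-stream bitrate below is derived from the two quoted numbers, not stated in the source):

```python
# 30.4 terabits/s over one fiber pair versus 1.2 million simultaneous 4K streams
# implies roughly 25 Mbit/s per stream, a plausible 4K HD streaming rate.
link_bps = 30.4e12
streams = 1.2e6
print(f"{link_bps / streams / 1e6:.1f} Mbit/s per stream")  # ~25.3
```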
MOSFET scaling, the rapid miniaturization of MOSFETs at a rate predicted byMoore's law,[122]led to computers becoming smaller and more powerful, to the point where they could be carried. During the 1980s–1990s, laptops were developed as a form of portable computer, andpersonal digital assistants(PDAs) could be used while standing or walking.Pagers, widely used by the 1980s, were largely replaced by mobile phones beginning in the late 1990s, providingmobile networkingfeatures to some computers. Now commonplace, this technology is extended todigital camerasand other wearable devices. Starting in the late 1990s,tabletsand thensmartphonescombined and extended these abilities of computing, mobility, and information sharing.Metal–oxide–semiconductor(MOS)image sensors, which first began appearing in the late 1960s, led to the transition from analog todigital imaging, and from analog to digital cameras, during the 1980s–1990s. The most common image sensors are thecharge-coupled device(CCD) sensor and theCMOS(complementary MOS)active-pixel sensor(CMOS sensor).
Electronic paper, which has origins in the 1970s, allows digital information to appear as paper documents.
By 1976, there were several firms racing to introduce the first truly successful commercial personal computers. Three machines, theApple II,Commodore PET 2001andTRS-80were all released in 1977,[123]becoming the most popular by late 1978.[124]Bytemagazine later referred to Commodore, Apple, and Tandy as the "1977 Trinity".[125]Also in 1977,Sord Computer Corporationreleased the Sord M200 Smart Home Computer in Japan.[126]
Steve Wozniak(known as "Woz"), a regular visitor toHomebrew Computer Clubmeetings, designed the single-boardApple Icomputer and first demonstrated it there. With specifications in hand and an order for 100 machines at US$500 each fromthe Byte Shop, Woz and his friendSteve JobsfoundedApple Computer.
About 200 of the machines sold before the company announced the Apple II as a complete computer. It had color graphics, a full QWERTY keyboard, and internal slots for expansion, all mounted in a high-quality streamlined plastic case. The monitor and I/O devices were sold separately. The original Apple II operating system was only the built-in BASIC interpreter contained in ROM. Apple DOS was added to support the diskette drive; the last version was "Apple DOS 3.3".
Its higher price and lack offloating pointBASIC, along with a lack of retail distribution sites, caused it to lag in sales behind the other Trinity machines until 1979, when it surpassed the PET. It was again pushed into 4th place whenAtari, Inc.introduced itsAtari 8-bit computers.[127]
Despite slow initial sales, the lifetime of theApple IIwas about eight years longer than other machines, and so accumulated the highest total sales. By 1985, 2.1 million had sold and more than 4 million Apple II's were shipped by the end of its production in 1993.[128]
Optical communicationplays a crucial role incommunication networks. Optical communication provides the transmission backbone for thetelecommunicationsandcomputer networksthat underlie the Internet, the foundation for theDigital Revolutionand Information Age.
The two core technologies are the optical fiber and light amplification (theoptical amplifier). In 1953, Bram van Heel demonstrated image transmission through bundles ofoptical fiberswith a transparent cladding. The same year,Harold HopkinsandNarinder Singh KapanyatImperial Collegesucceeded in making image-transmitting bundles with over 10,000 optical fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers.
Gordon Gould invented the optical amplifier and the laser, and also established the first optical telecommunications company, Optelecom, to design communication systems. The firm was a co-founder of Ciena Corp., the venture that popularized the optical amplifier with the introduction of the first dense wave division multiplexing system.[129] This massive-scale communication technology has emerged as the common basis of all telecommunications networks[130][failed verification] and, thus, a foundation of the Information Age.[131][132]
Manuel Castells authoredThe Information Age: Economy, Society and Culture. He writes of our global interdependence and the new relationships between economy, state and society, what he calls "a new society-in-the-making." He writes:
"It is in fact, quite the opposite: history is just beginning, if by history we understand the moment when, after millennia of a prehistoric battle with Nature, first to survive, then to conquer it, our species has reached the level of knowledge and social organization that will allow us to live in a predominantly social world. It is the beginning of a new existence, and indeed the beginning of a new age, The Information Age, marked by the autonomy of culture vis-à-vis the material basis of our existence."[133]
Thomas Chatterton Williamswrote about the dangers ofanti-intellectualismin the Information Age in a piece forThe Atlantic. Although access to information has never been greater, most information is irrelevant or insubstantial. The Information Age's emphasis on speed over expertise contributes to "superficial culture in which even the elite will openly disparage as pointless our main repositories for the very best that has been thought."[134]
|
https://en.wikipedia.org/wiki/Information_Age
|
Neuroenhancement or cognitive enhancement is the experimental use of pharmacological or non-pharmacological methods intended to improve cognitive and affective abilities in healthy people who do not have any mental illness.[1][2] Agents or methods of neuroenhancement are intended to produce cognitive, social, psychological, mood, or motor benefits beyond normal functioning.
Pharmacological neuroenhancement agents may include compounds thought to benootropics, such asmodafinil,[1][3]caffeine,[4][5]and other drugs used for treating people withneurological disorders.[6]
Non-pharmacological measures of cognitive enhancement may include behavioral methods (activities, techniques, and changes),[7]non-invasive brain stimulation, which has been used with the intent to improve cognitive and affective functions,[8]andbrain-machine interfaces.[9]
There are many supposed nootropics, most having only small effect sizes in healthy individuals. Neuroenhancement's most common pharmacological agents includemodafinilandmethylphenidate(Ritalin).Stimulantsin general and variousdementia treatments[10]or other neurological therapies[11]may affect cognition.
Neuroenhancement may also occur from:
Enhancers are multidimensional and can be clustered into biochemical, physical, and behavioral enhancement strategies.[17]
Approved for treatingnarcolepsy,obstructive sleep apnea, andshift work sleep disorder,modafinilis a wakefulness-promoting drug used to decrease fatigue, increase vigilance, and reduce daytime sleepiness.[1]Modafinilimproves alertness, attention, long-term memory, and daily performance in people with sleep disorders.[1][18]
In sustained sleep deprivation, repeated use of modafinil helped individuals maintain higher levels of wakefulness than a placebo, but did not help attention and executive function.[1][19] Modafinil may impair one's self-monitoring ability; a common trend found in research studies indicated that participants rated their performances on cognitive tests higher than they actually were, suggesting an "overconfidence" effect.[1][19]
Methylphenidate(MPH), also known as Ritalin, is astimulantthat is used to treatattention-deficit hyperactivity disorder(ADHD). MPH is abused by a segment of the general population, especially college students.[19]
A comparison between the sales of MPH to the number of people for whom it was prescribed revealed a disproportionate ratio, indicating high abuse.[19]MPH may impair cognitive performance.[20]
Studies are too preliminary to determine whether there are any cognitive-enhancing effects of agents such asmemantineoracetylcholinesterase inhibitors(examples:donepezil,galantamine).[6]
Common drugs intended for neuroenhancement are typically well-tolerated by healthy people.[6][19] These drugs are already in mainstream use to treat people with different kinds of psychiatric disorders.
Assessments used to determine potential adverse effects include drop-out rates and subjective ratings.[6][19] The drop-out rates were minimal or non-existent for donepezil, memantine, MPH, and modafinil.[6][19] In the drug trials, participants reported the following adverse reactions to use of donepezil, memantine, MPH, modafinil or caffeine:[5] gastrointestinal complaints (nausea), headache, dizziness, nightmares, anxiety, drowsiness, nervousness, restlessness, sleep disturbances, insomnia,[6] and diuresis.[21] The side effects normally ceased in the course of treatment.[6] Various factors, such as dosage, timing and concurrent behavior, may influence the onset of adverse effects.[6][19]
Neurostimulationmethods are being researched and developed.[8]Results indicate that details of the stimulation procedures are crucial, with some applications impairing rather than enhancing cognition and questions are being raised about whether this approach can deliver any meaningful results for cognitive domains.[8]Stimulation methods include electrical stimulation, magnetic stimulation,optical stimulation with lasers, several forms of acoustic stimulation, and physical methods like forms ofneurofeedback.[8][17]
Applications ofaugmented realitytechnologies may affect general memory enhancement, extending perception and learning-assistance.[22][23][additional citation(s) needed]TheInternetmay be considered a tool for enabling or extending cognition.[24][25][26]However, it is not "a simple, uniform technology, [n]either in its composition, [n]or in its use" and, as "an informational resource, currently fails to enhance cognition", partly due to issues that includeinformation overload,misinformationand persuasion.[27]
Quality standards,validationandauthentication, sampling and lab testing are commonly substandard or absent for products thought to be cognitive enhancers, includingdietary supplements.[28][29][30][31]
Neuroenhancement products or methods are used with the intent to:
Neuroenhancement products are mentioned in entertainment productions, such as Limitless (2011), which to some degree probe and explore the opportunities and threats of using such products.[35]
In general, people under the age of 25 feel that neuroenhancement agents are acceptable or that the decision to use them is to be made individually.[36]Healthcare officials and parents feel concerned due to safety factors, lack of complete information on these agents, and possible irreversible adverse effects; such concerns may reduce the willingness to take such agents.[37]
A 2024 study based on a representative sample of more than 20,000 adults in Germany showed that around 70% of those surveyed had taken substances with the aim of improving mental performance within a year, without a medical prescription.[38]The consumption of caffeinated drinks, such as coffee and energy drinks, was widespread (64% of users), expressly with the aim of improving performance, followed by dietary supplements and home remedies, such asginkgo biloba(31%).[38]Around 4% stated that they had takenprescription drugsfor cognitive enhancement (lifetime prevalence of 6%), corresponding to around 2.5 million users in Germany.[38]
A 2016 German study among 6,454 employees found a rather low lifetime prevalence of cognitive enhancement prescription drug use (namely 3%), while the willingness to take such drugs was found in 10% of respondents.[39] A survey of some 5,000 German university students found a relatively low 30-day prevalence of 1%, while 2% of those sampled used such drugs within the last 6 months, 3% within the last 12 months, and 5% over their lifetimes.[37] Of those students who used such substances during the last 6 months, 39% reported their use once in this period, 24% twice, 12% three times, and 24% more than three times.[37] Consumers of neuroenhancement drugs are more willing to use them again in the future due to positive experiences or a tendency towards addiction.[40]
|
https://en.wikipedia.org/wiki/Neuroenhancement
|
Thewheat and chessboard problem(sometimes expressed in terms of rice grains) is amathematical problemexpressed intextual formas:
If achessboardwere to havewheatplaced upon each square such that one grain were placed on the first square, two on the second, four on the third, and so on (doubling the number of grains on each subsequent square), how many grains of wheat would be on the chessboard at the finish?
The problem may be solved using simple addition. With 64 squares on a chessboard, if the number of grains doubles on successive squares, then the sum of grains on all 64 squares is 1 + 2 + 4 + 8 + ... and so forth for the 64 squares. The total number of grains can be shown to be 2^64 − 1 or 18,446,744,073,709,551,615 (eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred and fifteen).
This exercise can be used to demonstrate how quickly exponential sequences grow, as well as to introduce exponents, zero power, capital-sigma notation, and geometric series. Updated for modern times using pennies and a hypothetical question such as "Would you rather have a million dollars or a penny on day one, doubled every day until day 30?", the formula has been used to explain compound interest. (Doubling would yield over one billion seventy-three million pennies, or over 10 million dollars: 2^30 − 1 = 1,073,741,823.)[1][2]
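A minimal Python sketch of the penny variant, confirming the figure above (the variable names are illustrative):

```python
# One cent on day 1, doubled each day through day 30: the days contribute
# 1, 2, 4, ... cents, a geometric series summing to 2**30 - 1 cents.
total_cents = sum(2**day for day in range(30))
assert total_cents == 2**30 - 1 == 1_073_741_823
print(f"Total: {total_cents:,} cents = ${total_cents / 100:,.2f}")
# -> Total: 1,073,741,823 cents = $10,737,418.23 (versus a flat $1,000,000)
```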
The problem appears in different stories about the invention ofchess. One of them includes the geometric progression problem. The story is first known to have been recorded in 1256 byIbn Khallikan.[3]Another version has the inventor of chess (in some tellingsSessa, an ancient Indian minister) request his ruler give him wheat according to the wheat and chessboard problem. The ruler laughs it off as a meager prize for a brilliant invention, only to have court treasurers report the unexpectedly huge number of wheat grains would outstrip the ruler's resources. Versions differ as to whether the inventor becomes a high-ranking advisor or is executed.[4]
Macdonnell also investigates the earlier development of the theme.[5]
[According toal-Masudi's early history of India],shatranj, or chess was invented under an Indian king, who expressed his preference for this game overbackgammon. [...] The Indians, he adds, also calculated an arithmetical progression with the squares of the chessboard. [...] The early fondness of the Indians for enormous calculations is well known to students of their mathematics, and is exemplified in the writings of the great astronomerĀryabaṭha[sic] (born 476 A.D.). [...] An additional argument for the Indian origin of this calculation is supplied by the Arabic name for the square of the chessboard, (بيت, "beit"), 'house'. [...] For this has doubtless a historical connection with its Indian designationkoṣṭhāgāra, 'store-house', 'granary' [...].
The simple, brute-force solution is just to manually double and add each step of the series:

T_64 = 1 + 2 + 4 + 8 + ... + 4,611,686,018,427,387,904 + 9,223,372,036,854,775,808 = 18,446,744,073,709,551,615

The series may be expressed using exponents:

T_64 = 2^0 + 2^1 + 2^2 + ... + 2^63

and, represented with capital-sigma notation, as:

T_64 = ∑_{i=0}^{63} 2^i.

It can also be solved much more easily using:

T_64 = 2^64 − 1.

A proof of which is: let s be the sum of the series,

s = 2^0 + 2^1 + 2^2 + ... + 2^63.

Multiply each side by 2:

2s = 2^1 + 2^2 + 2^3 + ... + 2^64.

Subtract the original series from each side:

2s − s = 2^64 − 2^0, hence s = 2^64 − 1.

The solution above is a particular case of the sum of a geometric series, given by

a + ar + ar^2 + ... + ar^(n−1) = ∑_{k=0}^{n−1} a·r^k = a(r^n − 1)/(r − 1),

where a is the first term of the series, r is the common ratio and n is the number of terms.

In this problem a = 1, r = 2 and n = 64. Thus,

T_n = 1·(2^n − 1)/(2 − 1) = 2^n − 1

for n being any positive integer.
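The three approaches above can be checked against one another in a few lines of Python, whose arbitrary-precision integers keep the 64-square total exact:

```python
# Wheat-and-chessboard total, computed three equivalent ways.

# 1. Brute force: double the grains square by square and accumulate.
grains, total = 1, 0
for _square in range(64):
    total += grains
    grains *= 2

# 2. Capital-sigma form: sum of 2**i for i = 0..63.
sigma_total = sum(2**i for i in range(64))

# 3. Closed form from the geometric-series formula with a = 1, r = 2, n = 64.
closed_form = 2**64 - 1

assert total == sigma_total == closed_form == 18_446_744_073_709_551_615
print(f"{closed_form:,} grains")
```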
The exercise of working through this problem may be used to explain and demonstrateexponentsand the quick growth ofexponentialandgeometricsequences. It can also be used to illustratesigma notation.
When expressed as exponents, the geometric series is: 2^0 + 2^1 + 2^2 + 2^3 + ... and so forth, up to 2^63. The base of each exponentiation, "2", expresses the doubling at each square, while the exponents represent the position of each square (0 for the first square, 1 for the second, and so on).
The number of grains is the 64th Mersenne number.
In technology strategy, the "second half of the chessboard" is a phrase, coined by Ray Kurzweil,[6] in reference to the point where an exponentially growing factor begins to have a significant economic impact on an organization's overall business strategy. While the number of grains on the first half of the chessboard is large, the amount on the second half is vastly (2^32 > 4 billion times) larger.
The number of grains of wheat on the first half of the chessboard is 1 + 2 + 4 + 8 + ... + 2,147,483,648, for a total of 4,294,967,295 (2^32 − 1) grains, or about 279 tonnes of wheat (assuming 65 mg as the mass of one grain of wheat).[7]
The number of grains of wheat on thesecondhalf of the chessboard is232+ 233+ 234+ ... + 263, for a total of 264− 232grains. This is equal to the square of the number of grains on the first half of the board, plus itself. The first square of the second half alone contains one more grain than the entire first half. On the 64th square of the chessboard alone, there would be 263= 9,223,372,036,854,775,808 grains, more than two billion times as many as on the first half of the chessboard.
On the entire chessboard there would be 264− 1 = 18,446,744,073,709,551,615 grains of wheat, weighing about 1,199,000,000,000metric tons. This is over 1,600 times theglobal production of wheat(729 million metric tons in 2014 and 780.8 million tonnes in 2019).[8]
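The split between the two halves, and the approximate masses quoted above, can be reproduced in a few lines of Python, using the same 65 mg-per-grain assumption as the text:

first_half = 2**32 - 1                           # grains on squares 1-32
second_half = 2**64 - 2**32                      # grains on squares 33-64
assert second_half == first_half**2 + first_half
to_tonnes = 0.065 / 1e6                          # 65 mg per grain, grams to tonnes
print(first_half * to_tonnes)                    # about 279 tonnes
print((first_half + second_half) * to_tonnes)    # about 1.199e12 tonnes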
Carl Sagan titled the second chapter of his final book "The Persian Chessboard" and wrote, referring to bacteria, that "Exponentials can't go on forever, because they will gobble up everything."[9] Similarly, The Limits to Growth uses the story to present suggested consequences of exponential growth: "Exponential growth never can go on very long in a finite space with finite resources."[10]
|
https://en.wikipedia.org/wiki/Second_half_of_the_chessboard
|
A recommender system (RecSys), or a recommendation system (sometimes replacing system with terms such as platform, engine, or algorithm), and sometimes only called "the algorithm",[1] is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user.[2][3][4] Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.[2][5] Modern recommendation systems, such as those used on large social media sites, make extensive use of AI, machine learning, and related techniques to learn the behavior and preferences of each user and tailor their feed accordingly.[6]
Typically, the suggestions refer to various decision-making processes, such as what product to purchase, what music to listen to, or what online news to read.[2] Recommender systems are used in a variety of areas, with commonly recognised examples taking the form of playlist generators for video and music services, product recommenders for online stores, content recommenders for social media platforms, and open web content recommenders.[7][8] These systems can operate using a single type of input, like music, or multiple inputs within and across platforms like news, books and search queries. There are also popular recommender systems for specific topics like restaurants and online dating. Recommender systems have also been developed to explore research articles and experts,[9] collaborators,[10] and financial services.[11]
A content discovery platform is an implemented software recommendation platform which uses recommender system tools. It utilizes user metadata in order to discover and recommend appropriate content, whilst reducing ongoing maintenance and development costs. A content discovery platform delivers personalized content to websites, mobile devices and set-top boxes. A large range of content discovery platforms currently exist for various forms of content ranging from news articles and academic journal articles[12] to television.[13] As operators compete to be the gateway to home entertainment, personalized television is a key service differentiator. Academic content discovery has recently become another area of interest, with several companies being established to help academic researchers keep up to date with relevant academic content and serendipitously discover new content.[12]
Recommender systems usually make use of either or both collaborative filtering and content-based filtering, as well as other systems such as knowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.[14] Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties.[15]
The differences between collaborative and content-based filtering can be demonstrated by comparing two early music recommender systems, Last.fm and Pandora Radio.
Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. This is an example of the cold start problem, and is common in collaborative filtering systems.[17][18][19][20][21][22] Whereas Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).
Recommender systems are a useful alternative to search algorithms since they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data.
Recommender systems have been the focus of several granted patents,[23][24][25][26][27] and there are more than 50 software libraries[28] that support the development of recommender systems, including LensKit,[29][30] RecBole,[31] ReChorus[32] and RecPack.[33]
Elaine Rich created the first recommender system in 1979, called Grundy.[34][35] She looked for a way to recommend books to users that they might like. Her idea was to create a system that asks users specific questions and classifies them into classes of preferences, or "stereotypes", depending on their answers. Depending on users' stereotype membership, they would then get recommendations for books they might like.
Another early recommender system, called a "digital bookshelf", was described in a 1990 technical report by Jussi Karlgren at Columbia University,[36] and implemented at scale and worked through in technical reports and publications from 1994 onwards by Jussi Karlgren, then at SICS,[37][38] and by research groups led by Pattie Maes at MIT,[39] Will Hill at Bellcore,[40] and Paul Resnick, also at MIT,[41][5] whose work with GroupLens was awarded the 2010 ACM Software Systems Award.
Montaner provided the first overview of recommender systems from an intelligent agent perspective.[42] Adomavicius provided a new, alternate overview of recommender systems.[43] Herlocker provides an additional overview of evaluation techniques for recommender systems,[44] and Beel et al. discussed the problems of offline evaluations.[45] Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.[46][47]
One approach to the design of recommender systems that has wide use is collaborative filtering.[48] Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users/items with a rating history similar to the current user or item, they generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of memory-based approaches is the user-based algorithm,[49] while that of model-based approaches is matrix factorization.[50]
A key advantage of the collaborative filtering approach is that it does not rely on machine-analyzable content and therefore it is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used in measuring user similarity or item similarity in recommender systems. For example, the k-nearest neighbor (k-NN) approach[51] and the Pearson Correlation as first implemented by Allen.[52]
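As an illustration, here is a minimal sketch of Pearson-correlation user similarity over a toy ratings dictionary; the data and user names are made up, and this is not Allen's original implementation:

from math import sqrt

ratings = {
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 2, "m3": 5},
}

def pearson(u, v):
    common = ratings[u].keys() & ratings[v].keys()   # co-rated items only
    if len(common) < 2:
        return 0.0
    xs = [ratings[u][i] for i in common]
    ys = [ratings[v][i] for i in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

print(pearson("alice", "bob"))   # about 0.65: broadly similar tastes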
When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection.
Examples of explicit data collection include the following:
Examples of implicit data collection include the following:
Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.[54]
One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.[56]
Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends.[2] Collaborative filtering is still used as part of hybrid systems.
Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences.[57][58] These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features.
In this system, keywords are used to describe the items, and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. It does not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research.
To create a user profile, the system mostly focuses on two types of information: a model of the user's preferences, and a history of the user's interactions with the recommender system.
Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is the tf–idf representation (also called vector space representation).[59] The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector, while other sophisticated methods use machine learning techniques such as Bayesian classifiers, cluster analysis, decision trees, and artificial neural networks in order to estimate the probability that the user is going to like the item.[60]
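To make the pipeline concrete, here is a minimal content-based sketch using scikit-learn's TF-IDF vectorizer: item descriptions become tf–idf vectors, the user profile is the average of the liked items' vectors, and cosine similarity ranks candidates. The item texts are invented for illustration.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    "space opera epic with starships",
    "romantic comedy set in paris",
    "documentary about deep space probes",
    "slapstick comedy road trip",
]
liked = [0]                                           # the user liked item 0

tfidf = TfidfVectorizer().fit_transform(items)        # item profiles
user_profile = np.asarray(tfidf[liked].mean(axis=0))  # average of liked vectors
scores = cosine_similarity(user_profile, tfidf).ravel()
scores[liked] = -1                                    # do not re-recommend
print(items[int(scores.argmax())])                    # the space documentary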
A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful. Still, it would be much more useful when music, videos, products, discussions, etc., from different services, can be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of the hybrid system.
Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both features/aspects of the item and users' evaluation/sentiment toward the item. Features extracted from user-generated reviews serve as improved metadata for items: like metadata, they reflect aspects of the item, but they capture the aspects users actually care about. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender systems utilize various techniques including text mining, information retrieval, sentiment analysis (see also multimodal sentiment analysis) and deep learning.[61]
Most recommender systems now use a hybrid approach, combining collaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model.[43] Several studies have empirically compared the performance of the hybrid with the pure collaborative and content-based methods and demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. These methods can also be used to overcome some of the common problems in recommender systems such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck in knowledge-based approaches.[62]
Netflix is a good example of the use of hybrid recommender systems.[63] The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).
Some hybridization techniques include:
These recommender systems use the interactions of a user within a session[65] to generate recommendations. Session-based recommender systems are used at YouTube[66] and Amazon.[67] They are particularly useful when the history (such as past clicks and purchases) of a user is not available or not relevant in the current user session. Domains where session-based recommendations are particularly relevant include video, e-commerce, travel, music and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) of the user. Techniques for session-based recommendations are mainly based on generative sequential models such as recurrent neural networks,[65][68] transformers,[69] and other deep-learning-based approaches.[70][71]
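A minimal sketch of the recurrent-network idea, using PyTorch: embed the clicked item IDs, run a GRU over the session, and score every catalog item as the possible next click. The architecture and sizes are illustrative, not those of any production system cited above.

import torch
import torch.nn as nn

NUM_ITEMS, EMBED_DIM, HIDDEN_DIM = 1000, 32, 64   # illustrative sizes

class SessionGRU(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_ITEMS, EMBED_DIM)
        self.gru = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, NUM_ITEMS)   # score every item

    def forward(self, session):                # session: (batch, seq_len) of item IDs
        h, _ = self.gru(self.embed(session))   # (batch, seq_len, hidden)
        return self.out(h[:, -1, :])           # next-item scores from the last step

model = SessionGRU()
session = torch.tensor([[3, 17, 42]])          # one session of three clicks
scores = model(session)
top5 = scores.topk(5).indices                  # five likely next items (untrained here)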
The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system, acts in order to receive a reward, for instance, a click or engagement by the user.[66][72][73] One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. In contrast to traditional supervised learning techniques, which are less flexible, reinforcement learning approaches can potentially train models that are optimized directly on metrics of engagement and user interest.[74]
Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information on multiple criteria. Instead of developing recommendation techniques based on a single criterion value (the overall preference of user u for item i), these systems try to predict a rating for unexplored items of u by exploiting preference information on the multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems.[75] See this chapter[76] for an extended introduction.
The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early in the morning, or late at night. The performance of the recommender system therefore depends in part on the degree to which it has incorporated this risk into the recommendation process. One option to manage the issue is DRARS, a system which models context-aware recommendation as a bandit problem. This system combines a content-based technique and a contextual bandit algorithm.[77]
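As a rough illustration of framing recommendation as a bandit problem, here is an epsilon-greedy sketch that learns which recommendation type earns clicks in each context. The contexts, arms, and update rule are illustrative placeholders, not the actual DRARS algorithm.

import random

ARMS = ["news", "music", "podcast"]            # candidate recommendation types
CONTEXTS = ["morning", "meeting", "evening"]   # hypothetical user contexts
EPSILON = 0.1                                  # exploration rate

counts = {(c, a): 0 for c in CONTEXTS for a in ARMS}
values = {(c, a): 0.0 for c in CONTEXTS for a in ARMS}

def choose(context):
    if random.random() < EPSILON:                          # explore
        return random.choice(ARMS)
    return max(ARMS, key=lambda a: values[(context, a)])   # exploit best estimate

def update(context, arm, reward):              # reward: 1 = click, 0 = ignored
    key = (context, arm)
    counts[key] += 1
    values[key] += (reward - values[key]) / counts[key]    # running mean reward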
Mobile recommender systems make use of internet-accessing smartphones to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research, as mobile data is more complex than the data recommender systems often have to deal with. It is heterogeneous and noisy, exhibits spatial and temporal auto-correlation, and has validation and generality problems.[78]
There are three factors that could affect mobile recommender systems and the accuracy of prediction results: the context, the recommendation method, and privacy.[79] Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available).
One example of a mobile recommender system is the approach taken by companies such as Uber and Lyft to generate driving routes for taxi drivers in a city.[78] This system uses GPS data of the routes that taxi drivers take while working, which includes location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits.
Generative recommenders (GR) represent an approach that transforms recommendation tasks into sequential transduction problems, where user actions are treated like tokens in a generative modeling framework. In one method, known as HSTU (Hierarchical Sequential Transduction Units),[80] high-cardinality, non-stationary, and streaming datasets are efficiently processed as sequences, enabling the model to learn from trillions of parameters and to handle user action histories orders of magnitude longer than before. By turning all of the system's varied data into a single stream of tokens and using a custom self-attention approach instead of traditional neural network layers, generative recommenders make the model much simpler and less memory-hungry. As a result, it can improve recommendation quality in test simulations and in real-world tests, while being faster than previous Transformer-based systems when handling long lists of user actions. Ultimately, this approach allows the model's performance to grow steadily as more computing power is used, laying a foundation for efficient and scalable "foundation models" for recommendations.
One of the events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was given to the BellKor's Pragmatic Chaos team under tiebreaking rules.[81]
The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.:[82]
Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.
Many benefits accrued to the web due to the Netflix project. Some teams have taken their technology and applied it to other markets. Some members from the team that finished second place founded Gravity R&D, a recommendation engine company that is active in the RecSys community.[81][83] 4-Tell, Inc. created a Netflix project–derived solution for ecommerce websites.
A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database (IMDb).[84] As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and the Video Privacy Protection Act by releasing the datasets.[85] This, as well as concerns from the Federal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010.[86]
Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure the effectiveness of recommender systems, and compare different approaches, three types of evaluations are available: user studies, online evaluations (A/B tests), and offline evaluations.[45]
The commonly used metrics are the mean squared error and root mean squared error, the latter having been used in the Netflix Prize. Information retrieval metrics such as precision and recall or DCG are useful to assess the quality of a recommendation method. Diversity, novelty, and coverage are also considered as important aspects in evaluation.[87] However, many of the classic evaluation measures are highly criticized.[88]
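A minimal sketch of two of these measures, RMSE for rating prediction and precision@k for ranked lists, on made-up data:

import numpy as np

def rmse(pred, truth):
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def precision_at_k(ranked, relevant, k):
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / k

print(rmse([3.5, 4.0, 2.0], [4, 4, 1]))                       # rating accuracy
print(precision_at_k(["a", "b", "c", "d"], {"a", "c"}, k=3))  # 2/3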
Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm in offline data will be imprecise.
User studies are rather small-scale: a few dozen or hundreds of users are presented with recommendations created by different recommendation approaches, and then the users judge which recommendations are best.
In A/B tests, recommendations are typically shown to thousands of users of a real product, and the recommender system randomly picks at least two different recommendation approaches to generate recommendations. The effectiveness is measured with implicit measures of effectiveness such as conversion rate or click-through rate.
Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.[89]
The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, it may be assumed that a recommender system is effective if it is able to recommend as many articles as possible that are contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers.[90][91][92][45] For instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests.[92][93] A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms.[94] Often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction.[95] This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module.[90][96] Researchers have concluded that the results of offline evaluations should be viewed critically.[97]
Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of factors that are also important.
Recommender systems are notoriously difficult to evaluate offline, with some researchers claiming that this has led to a reproducibility crisis in recommender systems publications. The topic of reproducibility seems to be a recurrent issue in some machine learning publication venues, but does not have a considerable effect beyond the world of scientific publication. In the context of recommender systems, a 2019 paper surveyed a small number of hand-picked publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), and showed that on average less than 40% of articles could be reproduced by the authors of the survey, with as little as 14% in some conferences. The article considers a number of potential problems in today's research scholarship and suggests improved scientific practices in that area.[110][111][112] More recent work on benchmarking a set of the same methods came to qualitatively very different results,[113] whereby neural methods were found to be among the best-performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions in several recent recommender system challenges (WSDM,[114] RecSys Challenge[115]). Moreover, neural and deep learning methods are widely used in industry, where they are extensively tested.[116][66][67]

The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results," and that evaluations are "not handled consistently".[117] Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge [...] often because the research lacks the [...] evaluation to be properly judged and, hence, to provide meaningful contributions."[118] As a consequence, much research about recommender systems can be considered as not reproducible.[119] Hence, operators of recommender systems find little guidance in the current research for answering the question of which recommendation approaches to use in a recommender system.

Said and Bellogín conducted a study of papers published in the field, as well as benchmarked some of the most popular frameworks for recommendation, and found large inconsistencies in results, even when the same algorithms and data sets were used.[120] Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation:[119] "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research."
Artificial intelligence (AI) applications in recommendation systems are advanced methodologies that leverage AI technologies to enhance the performance of recommendation engines. AI-based recommenders can analyze complex data sets, learning from user behavior, preferences, and interactions to generate highly accurate and personalized content or product suggestions.[121] The integration of AI in recommendation systems has marked a significant evolution from traditional recommendation methods. Traditional methods often relied on inflexible algorithms that could suggest items based on general user trends or apparent similarities in content. In comparison, AI-powered systems have the capability to detect patterns and subtle distinctions that may be overlooked by traditional methods.[122] These systems can adapt to specific individual preferences, thereby offering recommendations that are more aligned with individual user needs. This approach marks a shift towards more personalized, user-centric suggestions.
Recommendation systems widely adopt AI techniques such as machine learning, deep learning, and natural language processing.[123] These advanced methods enhance system capabilities to predict user preferences and deliver personalized content more accurately. Each technique contributes uniquely. The following sections introduce specific AI models utilized by recommendation systems, illustrating their theories and functionalities.[citation needed]
Collaborative filtering (CF) is one of the most commonly used recommendation system algorithms. It generates personalized suggestions for users based on explicit or implicit behavioral patterns to form predictions.[124] Specifically, it relies on external feedback such as star ratings, purchase history, and so on to make judgments. CF makes predictions about users' preferences based on similarity measurements. Essentially, the underlying theory is: "if user A is similar to user B, and if A likes item C, then it is likely that B also likes item C."
There are many models available for collaborative filtering. For AI-applied collaborative filtering, a common model is called K-nearest neighbors (k-NN). The idea is to find the k users most similar to the target user and combine their ratings to predict the target user's preference, as in the sketch below.
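A minimal sketch of user-based k-NN collaborative filtering on a toy rating matrix (rows are users, columns are items, 0 means unrated; the data and k are illustrative):

import numpy as np

R = np.array([[5, 3, 0, 4],
              [4, 0, 4, 5],
              [1, 1, 5, 0],
              [5, 4, 0, 5]], dtype=float)

def cosine(u, v):
    mask = (u > 0) & (v > 0)              # compare co-rated items only
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict(user, item, k=2):
    # k most similar users who have rated the item, weighted by similarity
    sims = [(cosine(R[user], R[v]), v) for v in range(len(R))
            if v != user and R[v, item] > 0]
    top = sorted(sims, reverse=True)[:k]
    norm = sum(s for s, _ in top)
    return sum(s * R[v, item] for s, v in top) / norm if norm else None

print(predict(user=0, item=2))            # estimate user 0's rating of item 2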
An artificial neural network (ANN) is a deep learning model structure which aims to mimic a human brain. ANNs comprise a series of neurons, each responsible for receiving and processing information transmitted from other interconnected neurons.[125] Similar to a human brain, these neurons will change activation state based on incoming signals (training input and backpropagated output), allowing the system to adjust activation weights during the network learning phase. An ANN is usually designed to be a black-box model. Unlike regular machine learning where the underlying theoretical components are formal and rigid, the collaborative effects of neurons are not entirely clear, but modern experiments have shown the predictive power of ANNs.
ANNs are widely used in recommendation systems for their power to utilize various data. Other than feedback data, ANNs can incorporate non-feedback data which is too intricate for collaborative filtering to learn, and this unique structure allows ANNs to identify extra signals from non-feedback data to boost user experience.[123] Following are some examples:
The Two-Tower model is a neural architecture[126] commonly employed in large-scale recommendation systems, particularly for candidate retrieval tasks.[127] It consists of two neural networks: a user tower that encodes user features into an embedding, and an item tower that does the same for item features.
The outputs of the two towers are fixed-length embeddings that represent users and items in a shared vector space. A similarity metric, such as dot product or cosine similarity, is used to measure the relevance between a user and an item.
This model is highly efficient for large datasets as embeddings can be pre-computed for items, allowing rapid retrieval during inference. It is often used in conjunction with ranking models for end-to-end recommendation pipelines.
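A minimal sketch of the retrieval pattern with NumPy, reducing each tower to a single linear map for brevity; real towers are trained neural networks, and the weights below are random placeholders:

import numpy as np

rng = np.random.default_rng(0)

W_user = rng.normal(size=(8, 4))   # maps 8 user features -> 4-dim embedding
W_item = rng.normal(size=(6, 4))   # maps 6 item features -> 4-dim embedding

def user_tower(u):                 # u: (8,) user feature vector
    return u @ W_user

# Item embeddings are precomputed once, which is what makes retrieval fast.
items = rng.normal(size=(100, 6))  # 100 items with 6 features each
item_emb = items @ W_item          # (100, 4) precomputed embeddings

u_emb = user_tower(rng.normal(size=8))
scores = item_emb @ u_emb                   # dot-product relevance scores
top10 = np.argsort(scores)[::-1][:10]       # candidate retrieval: top 10 items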
Natural language processing is a series of AI algorithms that make natural human language accessible and analyzable to a machine.[128] It is a fairly modern technique inspired by the growing amount of textual information. For application in recommendation systems, a common case is the Amazon customer review: Amazon analyzes the feedback comments from each customer and reports relevant data to other customers for reference. Recent years have witnessed the development of various text analysis models, including latent semantic analysis (LSA), singular value decomposition (SVD), latent Dirichlet allocation (LDA), etc. Their uses have consistently aimed to provide customers with more precise and tailored recommendations.
An emerging market for content discovery platforms is academic content.[129][130] Approximately 6,000 academic journal articles are published daily, making it increasingly difficult for researchers to balance time management with staying up to date with relevant research.[12] Though traditional academic search tools such as Google Scholar or PubMed provide a readily accessible database of journal articles, content recommendation in these cases is performed in a 'linear' fashion, with users setting 'alarms' for new publications based on keywords, journals or particular authors.
Google Scholar provides an 'Updates' tool that suggests articles by using a statistical model that takes a researcher's authored papers and citations as input.[12] Whilst these recommendations have been noted to be extremely good, this poses a problem for early-career researchers, who may lack a sufficient body of work to produce accurate recommendations.[12]
In contrast to an engagement-based ranking system employed by social media and other digital platforms, a bridging-based ranking optimizes for content that is unifying instead of polarizing.[131][132] Examples include Polis and Remesh, which have been used around the world to help find more consensus around specific political issues.[132] Twitter has also used this approach for managing its community notes,[133] which YouTube planned to pilot in 2024.[134][135] Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm.[136]
As the connected television landscape continues to evolve, search and recommendation are seen as having an even more pivotal role in the discovery of content.[137] With broadband-connected devices, consumers are projected to have access to content from linear broadcast sources as well as internet television. Therefore, there is a risk that the market could become fragmented, leaving it to the viewer to visit various locations and find what they want to watch in a way that is time-consuming and complicated for them. By using a search and recommendation engine, viewers are provided with a central 'portal' from which to discover content from several sources in just one location.
|
https://en.wikipedia.org/wiki/Content_discovery_platform
|
DHS media monitoring services is a proposed United States Department of Homeland Security database to keep track of 290,000 global news sources and media influencers to monitor sentiment.
Privacy and free speech advocates have criticized the project's far-reaching scope, likening it to a panopticon.[1][2][3][4][5][6][7][8] The DHS has replied that "Despite what some reporters may suggest, this is nothing more than the standard practice of monitoring current events in the media. Any suggestion otherwise is fit for tin foil hat wearing, black helicopter conspiracy theorists."[9][5] The service would also look at trade and industry publications; local, national and international outlets; and social media, according to documents. The plans also encompass media coverage being tracked in more than 100 languages including Arabic, Chinese, and Russian, with instant translation of articles into English. The DHS Media Monitoring plan would allow for "24/7 access to a password protected, media influencer database, including journalist, editors, correspondents, social media influencers, bloggers etc" to identify "any and all media coverage related to the Department of Homeland Security or a particular event."[10]
The DHS has noted that agencies under its purview already operate similar databases.[11] Several news organizations have noted that similar services, though narrower in scope, already exist and that the proposed DHS service would be the norm within the news industry.[12][13]
Several organizations have come out opposing the creation of the service, including the Occupy movement[14] and the Reporters Committee for Freedom of the Press.[15]
Beginning in January 2010, the NOC launched Media Monitoring Capability (MMC) pilots using social media monitoring related to specific mission-related incidents and international events. These pilots were conducted to help fulfill the NOC's statutory responsibility to provide situational awareness and to access potentially valuable public information within the social media realm. Prior to implementation of each social media pilot, the DHS Privacy Office and OPS developed detailed standards and procedures for reviewing information on social media web sites.[16]
In February 2012, the House of Representatives held a hearing on concerns about countering cyber-terrorism, as well as other acts of criminal activity, whilst maintaining the privacy rights of Americans. The DHS was questioned on its methodology and usage of social media services. In one example, DHS used multiple social networking sites, including Facebook and Twitter, three different blogs, and reader comments in newspapers to capture the reaction of residents to a possible plan to bring Guantanamo detainees to a local prison in Standish, Michigan.[17]
|
https://en.wikipedia.org/wiki/DHS_media_monitoring_services
|
Targeted advertising[1] or data-driven marketing is a form of advertising, including online advertising, that is directed towards an audience with certain traits, based on the product or person the advertiser is promoting.[2]
These traits can either be demographic, with a focus on race, economic status, sex, age, generation, level of education, income level, and employment, or psychographic, focused on consumer values, personality, attitude, opinion, lifestyle, and interests.[1] This focus can also entail behavioral variables, such as browser history, purchase history, and other recent online activities. Algorithmic targeting aims to eliminate wasted ad impressions.[3]
Traditional forms of advertising, including billboards, newspapers, magazines, and radio channels, are progressively being replaced by online advertisements.[4]
Through the emergence of new online channels, the usefulness of targeted advertising is increasing because companies aim to minimize wasted advertising.[4] Most targeted new media advertising currently uses second-order proxies for targets, such as tracking online or mobile web activities of consumers, associating historical web page consumer demographics with new consumer web page access, using a search word as the basis of implied interest, or contextual advertising.[5]
Companies have technology that allows them to gather information about web users.[4] By tracking and monitoring what websites users visit, internet service providers can directly show ads that are relevant to the consumer's preferences. Most of today's websites use these targeting technologies to track users' internet behavior, and there is much debate over the privacy issues present.[6]
Search engine marketing uses search engines to reach target audiences. For example, Google's remarketing campaigns are a type of targeted marketing where advertisers use the IP addresses of computers that have visited their websites to remarket their ad specifically to users who have previously been on their website, whilst those users browse websites that are part of the Google Display Network or search for keywords related to a product or service on the Google search engine.[7] Dynamic remarketing can improve targeted advertising, as the ads can include the products or services that the consumers have previously viewed on the advertisers' websites.[8]
Google Ads includes different platforms. The Search Network displays the ads on 'Google Search, other Google sites such as Maps and Shopping, and hundreds of non-Google search partner websites that show ads matched to search results'.[8] 'The Display Network includes a collection of Google websites (like Google Finance, Gmail, Blogger, and YouTube), partner sites, and mobile sites and apps that show adverts from Google Ads matched to the content on a given page.'[8]
These two kinds of advertising networks can be beneficial for each specific goal of the company, or type of company. For example, the search network can benefit a company to reach consumers actively searching for a particular product or service.
Other ways advertising campaigns can target the user include using browser history and search history. For example, if the user types promotional pens into a search engine such as Google, ads for promotional pens will appear at the top of the page above the organic listings. These ads will be geo-targeted to the area of the user's IP address, showing the product or service in the local area or surrounding regions. The higher ad position is often rewarded to the ad with the higher quality score.[9] The ad quality is affected by the five components of the quality score:[10]
When ranked based on these criteria, it will affect the advertiser by improving ad auction eligibility, the actual cost per click (CPC), ad position, and ad position bid estimates; to summarise, the better the quality score, the better the ad position and the lower the costs.
Google uses its Display Network to track what users are looking at and to gather information about them. When a user visits a website that uses the Google Display Network, it will send a cookie to Google, showing information on the user, what they have searched, and where they are from (found by the IP address), and then builds a profile around them, allowing Google to target ads to the user more specifically.
For example, if a user often visits promotional companies' websites that sell promotional pens, Google will gather data from the user such as age, gender, location, and other demographic information, as well as information on the websites visited; the user will then be put into a category of promotional products, allowing Google to easily display ads on websites the user visits relating to promotional products.[11]
Social media targeting is a form of targeted advertising that uses general targeting attributes such as geotargeting, behavioral targeting, and socio-psychographic targeting, and gathers the information that consumers have provided on each social media platform.
According to the media users' view history, customers who are interested in the criteria will be automatically targeted by the advertisements of certain products or services.[12] For example, Facebook collects massive amounts of user data from surveillance infrastructure on its platforms.[13] Information such as a user's likes, view history, and geographic location is leveraged to micro-target consumers with personalized products.
Paid advertising on Facebook works by helping businesses to reach potential customers by creating targeted campaigns.[14]
Social media also creates profiles of the consumer and only needs to look at one place, the user's profile, to find all interests and 'likes'.
For example, Facebook lets advertisers target using broad characteristics like gender, age, and location. Furthermore, it allows more narrow targeting based on demographics, behavior, and interests (see a comprehensive list of Facebook's different types of targeting options[15]).
Advertisements can be targeted to specific consumers watching digital cable,[16] smart TVs, or over-the-top video.[17] Targeting can be done according to age, gender, location, or personal interests in films, etc.[18]
Cable box addresses can be cross-referenced with information from data brokers like Acxiom, Equifax, and Experian, including information about marriage, education, criminal record, and credit history. Political campaigns may also match against public records such as party affiliation and which elections and party primaries the viewer has voted in.[17]
Since the early 2000s, advertising has been pervasive online and more recently in the mobile setting. Targeted advertising based on mobile devices allows more information about the consumer to be transmitted: not just their interests, but also information about their location and time.[19] This allows advertisers to produce advertisements that cater to their schedule and a more specific changing environment.
The most straightforward method of targeting is content/contextual targeting. This is when advertisers put ads in a specific place, based on the relative content present.[6] Another name used is content-oriented advertising, as it corresponds to the context being consumed.
This targeting method can be used across different mediums; for example, an online article about purchasing homes might carry an advert associated with this context, like an ad for home insurance. This is usually achieved through an ad matching system that analyses the contents on a page or finds keywords and presents a relevant advert, sometimes through pop-ups.[20]
Sometimes the ad matching system can fail, as it can neglect to tell the difference between positive and negative correlations. This can result in placing contradictory adverts which are not appropriate to the content.[20]
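A minimal sketch of keyword-based ad matching of the kind described above; the ad inventory, keywords, and scoring are illustrative:

ADS = {
    "home insurance": {"home", "mortgage", "house", "property"},
    "flight deals":   {"travel", "flight", "holiday", "airport"},
}

def match_ad(page_text):
    words = set(page_text.lower().split())
    # pick the ad whose keywords overlap most with the page content
    best = max(ADS, key=lambda ad: len(ADS[ad] & words))
    return best if ADS[best] & words else None

print(match_ad("Thinking of purchasing a home? Compare mortgage rates."))
# 'home insurance'; note this naive overlap cannot tell positive from
# negative contexts, which is exactly the failure mode described above.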
Technical targeting is associated with the user's own software or hardware status. The advertisement is altered depending on the user's available network bandwidth; for example, if a user is on a mobile phone that has a limited connection, the ad delivery system will display a version of the ad that is smaller, for a faster data transfer rate.[6]
Addressable advertising systems serve ads directly based on demographic, psychographic, or behavioral attributes associated with the consumer(s) exposed to the ad. These systems are always digital and must be addressable, in that the endpoint that serves the ad (set-top box, website, or digital sign) must be capable of rendering an ad independently of any other endpoints, based on consumer attributes specific to that endpoint at the time the ad is served.
Addressable advertising systems, therefore, must use consumer traits associated with the endpoints as the basis for selecting and serving ads.[21]
According to the Journal of Marketing, more than 1.8 billion clients spent a minimum of 118 minutes daily on web-based networking media in 2016.[22] Nearly 77% of these clients interact with the content through likes, commenting, and clicking on links related to content. With this astounding buyer trend, advertisers need to choose the right time to schedule content to maximize advertising efficiency.
To determine what time of day is most effective for scheduling content, it is essential to know when the brain is most effective at retaining memory. Research in chronopsychology has found that time of day impacts diurnal variation in a person's working memory accessibility, and has discovered the enactment of inhibitory procedures that build working memory effectiveness during times of low working memory accessibility. Working memory is known to be vital for language perception, learning, and reasoning,[23][24] providing us with the capacity for storing, retrieving, and processing immediate information.
For many people, working memory accessibility is good when they wake at the beginning of the day, most reduced in mid-evening, and moderate at night.[25]
Sociodemographic targeting focuses on the characteristics of consumers. These include their age, generation, gender, salary, and nationality.[6] The idea is to target users specifically, using this collected data, for example, targeting a male in the age bracket of 18–24. Facebook and other social media platforms use this form of targeting by showing advertisements relevant to the user's demographic on their account; this can show up in the form of banner ads, mobile ads, or commercial videos.[26]
This type of advertising involves targeting different users based on their geographic location. IP addresses can signal the location of a user and can usually transfer the location through ZIP codes.[6] Locations are then stored for users in static profiles, thus advertisers can easily target these individuals based on their geographic location.
A location-based service (LBS) is a mobile information service that allows spatial and temporal data transmission and can be used to an advertiser's advantage.[27] This data can be harnessed from applications on the device (mobile apps like Uber) that allow access to location information.[28]
This type of targeted advertising focuses on localizing content; for example, a user could be prompted with options for activities in the area, such as places to eat or nearby shops. Although producing advertising off consumer location-based services can improve the effectiveness of delivering ads, it can raise issues with the user's privacy.[29]
Behavioral targeting is centered around the activity/actions of users and is more easily achieved on web pages.[30][31] Information from browsing websites can be collected from data mining, which finds patterns in users' search history. Advertisers using this method believe it produces ads that will be more relevant to users, thus leading consumers to be more likely influenced by them.[32]
If a consumer was frequently searching for plane ticket prices, the targeting system would recognize this and start showing related adverts across unrelated websites, such as airfare deals on Facebook. Its advantage is that it can target individual interests, rather than target groups of people whose interests may vary.[6]
When a consumer visits a website, the pages they visit, the amount of time they view each page, the links they click on, the searches they make, and the things that they interact with allow sites to collect that data, and other factors, to create a 'profile' that links to that visitor's web browser. As a result, site publishers can use this data to create defined audience segments based on visitors who have similar profiles.
When visitors return to a specific site or a network of sites using the same web browser, those profiles can be used to allow marketers and advertisers to position their online ads and messaging in front of those visitors who exhibit a greater level of interest and intent for the products and services being offered.
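A minimal sketch of turning page-view events keyed by a browser cookie ID into audience segments; the categories and the repeat-visit threshold are illustrative:

from collections import Counter

events = [
    ("cookie123", "football"), ("cookie123", "menswear"),
    ("cookie123", "football"), ("cookie123", "business"),
]

def segment(cookie_id, log, threshold=2):
    visits = Counter(cat for cid, cat in log if cid == cookie_id)
    # the segments this visitor qualifies for, by repeated interest
    return [cat for cat, n in visits.items() if n >= threshold]

print(segment("cookie123", events))   # ['football']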
Behavioral targeting has emerged as one of the main technologies used to increase the efficiency and profits of digital marketing and advertisements, as media providers can provide individual users with highly relevant advertisements. On the theory that properly targeted ads and messaging will fetch more consumer interest, publishers can charge a premium for behaviorally targeted ads, and marketers can achieve better campaign results.
Behavioral marketing can be used on its own or in conjunction with other forms of targeting.[15] Many practitioners also refer to this process as "audience targeting".[33]
While behavioral targeting can enhance ad effectiveness, it also raises privacy concerns.[34] Users may feel uncomfortable with the idea of their online behavior being tracked and used for advertising purposes. Striking a balance between personalization and privacy is crucial.[35]
Behavioral targeting may also be applied to any online property on the premise that it either improves the visitor experience or benefits the online property, typically through increased conversion rates or increased spending levels. The early adopters of this technology/philosophy were editorial sites such as HotWired,[36][37] online advertising[38] with leading online ad servers,[39] and retail or other e-commerce websites, as a technique for increasing the relevance of product offers and promotions on a visitor-by-visitor basis. More recently, companies outside this traditional e-commerce marketplace have started to experiment with these emerging technologies.
The typical approach to this starts by using web analytics or behavioral analytics to break down the range of all visitors into several discrete channels. Each channel is then analyzed and a virtual profile is created to deal with each channel.
These profiles can be based around personas that give the website operators a starting point in terms of deciding what content, navigation, and layout to show to each of the different personas. When it comes to the practical problem of successfully delivering the profiles correctly, this is usually achieved by either using a specialist content behavioral platform or by bespoke software development.
Most platforms identify visitors by assigning a unique ID cookie to every visitor to the site, thereby allowing them to be tracked throughout their web journey; the platform then makes a rules-based decision about what content to serve.
Self-learning onsite behavioral targeting systems will monitor visitor response to site content and learn what is most likely to generate a desired conversion event. Good content for each behavioral trait or pattern is often established using numerous simultaneous multivariate tests. Onsite behavioral targeting requires a relatively high level of traffic before statistical confidence levels can be reached regarding the probability of a particular offer generating a conversion from a user with a set behavioral profile. Some providers have been able to do so by leveraging their large user base, such as Yahoo!. Some providers use a rules-based approach, allowing administrators to set the content and offers shown to those with particular traits.
According to research, behavioral targeting provides little benefit at a huge privacy cost: when targeting for gender, the targeted guess is 42% accurate, which is less than a random guess. When targeting for gender and age, the accuracy is 24%.[40]
Advertising networks use behavioral targeting in a different way than individual sites. Since they serve many advertisements across many different sites, they can build up a picture of the likely demographic makeup of internet users.[41] Data from a visit to one website can be sent to many different companies, including Microsoft and Google subsidiaries, Facebook, Yahoo, many traffic-logging sites, and smaller ad firms.[42]
This data can sometimes be sent to more than 100 websites and shared with business partners, advertisers, and other third parties for business purposes. The data is collected using cookies, web beacons and similar technologies, and/or third-party ad serving software, to automatically collect information about site users and site activity. Some servers even record the page that referred you to them, the websites you visit after them, which ads you see, and which ads you click on.[43]
Online advertising uses cookies, a tool used specifically to identify users, as a means of delivering targeted advertising by monitoring the actions of a user on the website. For this purpose, the cookies used are called tracking cookies. An ad network company such as Google uses cookies to deliver advertisements adjusted to the interests of the user, control the number of times that the user sees an ad, and "measure" whether they are advertising the specific product to the customer's preferences.[44]
This data is collected without attaching people's names, addresses, email addresses, or telephone numbers, but it may include device-identifying information such as the IP address, MAC address, web browser information, cookie, or other device-specific unique alphanumerical ID of your computer; some stores may also create guest IDs to go along with the data.
Cookies are used to control displayed ads and to track browsing activity and usage patterns on sites. This data is used by companies to infer people's age, gender, and possible purchase interests so that they can make customized ads that you would be more likely to click on.[45]
An example would be a user seen on football sites, business sites, and male fashion sites. A reasonable guess would be to assume the user is male. Demographic analyses of individual sites, provided either internally (user surveys) or externally (Comscore/Netratings), allow the networks to sell audiences rather than sites.[46] Although advertising networks were used to sell this product, this was based on picking the sites where the audiences were. Behavioral targeting allows them to be slightly more specific about this.
In the work titledAn Economic Analysis of Online Advertising Using Behavioral Targeting,[31]Chen and Stallaert (2014) study the economic implications when an online publisher engages in behavioral targeting. They consider that the publisher auctions off an advertising slot and are paid on acost-per-clickbasis. Chen and Stallaert (2014) identify the factors that affect the publisher'srevenue, the advertisers' payoffs, andsocial welfare. They show that revenue for the online publisher in some circumstances can double when behavioral targeting is used.
Increased revenue for the publisher is not guaranteed: in some cases, the prices of advertising and hence the publisher's revenue can be lower, depending on the degree of competition and the advertisers' valuations. They identify two effects associated with behavioral targeting: a competitive effect and a propensity effect. The relative strength of the two effects determines whether the publisher's revenue is positively or negatively affected. Chen and Stallaert (2014) also demonstrate that, although social welfare is increased and small advertisers are better off under behavioral targeting, the dominant advertiser might be worse off and reluctant to switch from traditional advertising.
In 2006, BlueLithium (now Yahoo! Advertising) examined, in a large online study, the effects of behavior-targeted advertisements based on contextual content. The study used 400 million "impressions", or advertisements conveyed across behavioral and contextual borders. Specifically, nine behavioral categories (such as "shoppers" or "travelers"[47]) with over 10 million "impressions" were observed for patterns across the content.[48]
All measures for the study were taken in terms of click-through rates (CTR) and "action-through rates" (ATR), or conversions. For every impression a user receives, each click-through contributes to the CTR data, and each time the user follows through with or converts on the advertisement, they add "action-through" data.
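As a hedged illustration of the two metrics (the raw counts below are invented, not the study's data):

```python
# Click-through rate (CTR) and "action-through rate" (ATR) from raw counts.
impressions = 1_000_000   # ads served
clicks = 2_300            # click-throughs
conversions = 150         # "action-throughs" (conversions)

ctr = clicks / impressions
atr = conversions / impressions

print(f"CTR: {ctr:.4%}")  # CTR: 0.2300%
print(f"ATR: {atr:.4%}")  # ATR: 0.0150%
```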
Results from the study show that advertisers looking for traffic on their advertisements should focus on behavioral targeting in context. Likewise, if they are looking for conversions on the advertisements, behavioral targeting out of context is the most effective process.[47]The data helped determine an "across-the-board rule of thumb";[47]however, results fluctuated widely by content categories. Overall results from the researchers indicate that the effectiveness of behavioral targeting is dependent on the goals of the advertiser and the primary target market the advertiser is trying to reach.
Through the use of analytic tools, marketers attempt to understand customer behavior and make informed decisions based on the data.[49] E-commerce retailers use data-driven marketing to try to improve customer experience and increase sales. One example cited in the Harvard Business Review is Vineyard Vines, a fashion brand with brick-and-mortar stores and an online product catalog. The company has used an artificial intelligence (AI) platform to gain knowledge about its customers from actions taken or not taken on the e-commerce site. Email or social media communications are automatically triggered at certain points, such as cart abandonment. This information is also used to refine search engine marketing.[50]
Advertising provides advertisers with a direct line of communication with existing and prospective consumers. Using a combination of words and/or pictures, an advertisement generally aims to act as a "medium of information" (David Ogilvy[51]), making the means of delivery, and to whom the information is delivered, most important. Advertising should define how and when structural elements of advertisements influence receivers, recognizing that receivers are not all the same and thus may not respond in a single, similar manner.[52]
Targeted advertising serves the purpose of placing particular advertisements before specific groups to reach consumers who would be interested in the information. Advertisers aim to reach consumers as efficiently as possible with the belief that it will result in a more effective campaign. By targeting, advertisers can identify when and where the ad should be positioned to achieve maximum profits. This requires an understanding of how customers' minds work (see also neuromarketing) to determine the best channel by which to communicate.
Types of targeting include, but are not limited to, advertising based on demographics, psychographics, behavioral variables, and contextual targeting.
Behavioral advertising is the most common form of targeting used online. Internet cookies are sent back and forth between an internet server and the browser, which allows a user to be identified and their progression tracked. Cookies provide details on what pages a consumer visits, the amount of time spent viewing each page, the links clicked on, and the searches and interactions made.
From this information, the cookie issuer gathers an understanding of the user's browsing tendencies and interests, generating a profile. By analyzing the profile, advertisers can create defined audience segments based upon users who returned similar information and hence have similar profiles. Tailored advertising is then placed in front of the consumer based on what organizations working on behalf of the advertisers assume are the interests of the consumer.[53]
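A minimal sketch of how a cookie issuer might turn logged page visits into a profile and then into audience segments; the category taxonomy and threshold are hypothetical:

```python
from collections import Counter

def build_profile(visited_categories: list) -> Counter:
    """Tally visits per interest category logged against the user's cookie."""
    return Counter(visited_categories)

def assign_segments(profile: Counter, min_visits: int = 3) -> set:
    """Place the user in every segment whose category was visited often enough."""
    return {category for category, n in profile.items() if n >= min_visits}

visits = ["autos", "autos", "travel", "autos", "travel"]
print(assign_segments(build_profile(visits)))  # {'autos'} -> automotive ads
```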
These advertisements are formatted to appear on pages and in front of users they would most likely appeal to, based on those profiles. For example, under behavioral targeting, if a user is known to have recently visited several automotive shopping and comparison sites, based on the data recorded by cookies stored on the user's computer, the user can then be served automotive-related advertisements when visiting other sites.[54]
Behavioral advertising is reliant on data both wittingly and unwittingly provided by users and is made up of two different forms: one involving the delivery of advertising based on an assessment of the user's web movements, the second involving the examination of communication and information as it passes through the gateways of internet service providers.[citation needed]
Demographic targeting was the first and most basic form of targeting used online. It involves segmenting an audience into more specific groups using parameters such as gender, age, ethnicity, annual income, and parental status. All members of the group share a common trait.
When an advertiser wishes to run a campaign aimed at a specific group of people, the campaign is intended only for the group that contains the traits at which it is targeted. Having finalized the advertiser's demographic target, a website or a website section is chosen as the medium because a large proportion of the targeted audience uses that form of media.[citation needed]
Segmentation using psychographics is based on an individual's personality, values, interests, and lifestyle. A study concerning what forms of media people use, conducted by the Entertainment Technology Center at the University of Southern California, the Hallmark Channel, and E-Poll Market Research, concludes that a better predictor of media usage is the user's lifestyle.
Researchers concluded that while cohorts of these groups may have similar demographic profiles, they may have different attitudes and media usage habits.[55] Psychographics can provide further insight by distinguishing an audience into specific groups using their traits. Recognizing this, advertisers can begin to target customers, knowing that factors other than age, for example, provide greater insight into the customer.
Contextual advertising is a strategy to place advertisements on media vehicles, such as specific websites or print magazines, whose themes are relevant to the promoted products.[56]: 2 Advertisers apply this strategy to narrow-target their audiences.[57][56] Advertisements are selected and served by automated systems based on the identity of the user and the displayed content of the media. The advertisements are displayed across the user's different platforms and are chosen based on searches for keywords, appearing either on a web page or as pop-up ads. It is a form of targeted advertising in which the content of an ad is in direct correlation with the content of the webpage the user is viewing.
Retargeting is where advertisers use behavioral targeting to produce ads that follow users after they have looked at or purchased a particular item. An example is store catalogs, where stores subscribe customers to their email system after a purchase, hoping to draw attention to more items and encourage repeat purchases.
The best-known example of retargeting is ads that follow users across the web, showing them the same items they have looked at in the hope that they will purchase them. Retargeting is a very effective process; by analyzing consumers' activities with the brand, advertisers can address their consumers' behavior appropriately.[58]
Every brand, service, or product has a personality: how it is viewed by the public and the community. Marketers create these personalities to match the personality traits of their target market.[1] Marketers and advertisers do so because when consumers can relate to the characteristics of a brand, service, or product, they are more likely to feel connected to it and purchase it.[citation needed]
Advertisers are aware that different people lead different lives, have different lifestyles, and have different wants and needs at different times in their consumers' lives; thus, individual differences can be compensated for. Advertisers who base their segmentation on psychographic characteristics promote their product as the solution to these wants and needs. Segmentation by lifestyle considers where the consumer is in their life cycle and which preferences are associated with that life stage.[citation needed]
Psychographic segmentation also includes opinions on religion, gender, politics, sporting and recreational activities, views on the environment, and arts and cultural issues. The views that the market segments hold and the activities they participate in will have an impact on the products and services they purchase, and will affect how they respond to the message.
Alternatives to behavioral advertising and psychographic targeting include geographic targeting and demographic targeting.
When advertisers want to efficiently reach as many consumers as possible, they use a six-step process.
Alternatives to behavioral advertising include audience targeting, contextual targeting, and psychographic[59] targeting.
Targeting aims to improve the effectiveness of advertising and reduce the wastage created by sending advertising to consumers who are unlikely to purchase that product. Targeted advertising or improved targeting may lead to lower advertising costs and expenditures.[60]
The effects of advertising on society and those targeted are all implicitly underpinned by the consideration of whether advertising compromises autonomous choice.[61]
Those arguing for the ethical acceptability of advertising claim that, because of the commercially competitive context of advertising, the consumer has a choice over what to accept and what to reject.
Humans have the cognitive competence and are equipped with the necessary faculties to decide whether to be affected by adverts.[62] Those arguing against note, for example, that advertising can make us buy things we do not want or that, as advertising is enmeshed in a capitalist system, it presents only choices based on a consumerist-centered reality, thus limiting exposure to non-materialist lifestyles.
Although the effects of targeted advertising are mainly focused on those targeted, it can also affect those outside the target segment. Unintended audiences often view an advertisement targeted at other groups and start forming judgments and decisions regarding the advertisement, and even the brand and company behind it; these judgments may affect future consumer behavior.[63]
The Network Advertising Initiative conducted a study[64] in 2009 measuring the pricing and effectiveness of targeted advertising. It revealed that targeted advertising:
However, other studies show that targeted advertising, at least by gender,[1]is not effective.
One of the major difficulties in measuring the economic efficiency of targeting, however, is being able to observe what would have happened in the absence of targeting, since the users targeted by advertisers are more likely to convert than the general population. Farahat and Bailey[65] exploit a large-scale natural experiment on Yahoo! allowing them to measure the true economic impact of targeted advertising on brand searches and clicks. They find, assuming the cost per 1000 ad impressions (CPM) is $1, that:
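The measurement idea behind such experiments is incremental lift against an unexposed holdout group; a minimal sketch with invented numbers (not the study's findings):

```python
# Incremental lift: compare conversion rates of exposed vs. holdout users.
exposed_users, exposed_conversions = 100_000, 900
holdout_users, holdout_conversions = 100_000, 600

exposed_rate = exposed_conversions / exposed_users   # 0.90%
holdout_rate = holdout_conversions / holdout_users   # 0.60%

lift = (exposed_rate - holdout_rate) / holdout_rate
print(f"incremental lift: {lift:.0%}")  # incremental lift: 50%
```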
Research shows that content marketing in 2015 generated three times as many leads as traditional outbound marketing while costing 62% less,[66] showing how being able to advertise to targeted consumers is becoming the ideal way to advertise to the public. Other statistics show that 86% of people skip television adverts and 44% of people ignore direct mail, which also shows how advertising to the wrong group of people can be a waste of resources.[66]
Proponents of targeted advertising argue that there are advantages for both consumers and advertisers:
Targeted advertising benefits consumers because advertisers can effectively attract consumers by using their purchasing and browsing habits; this enables ads to be more apparent and useful for customers. Having ads that are related to the interests of consumers allows the message to be received directly through effective touchpoints. An example of how targeted advertising benefits consumers is that if someone sees an ad targeted to them for something similar to an item they have previously viewed online and were interested in, they are more likely to buy it.
Consumers can benefit from targeted advertising in the following ways:
Intelligence agencies worldwide can more easily, and without exposing their personnel to the risks of HUMINT, track targets at sensitive locations such as military bases or training camps by simply purchasing location data from commercial providers, who collect it from mobile devices with geotargeting enabled that are used by the operatives present at these places.[68]
Location data can be extremely valuable and must be protected. It can reveal details about the number of users in a location, user and supply movements, daily routines (user and organizational), and can expose otherwise unknown associations between users and locations.
Advertisers benefit from targeted advertising through reduced resource costs and more effective ads that attract consumers with a strong appeal to the products. Targeted advertising allows advertisers to reduce the cost of advertising by minimizing "wasted" advertisements shown to non-interested consumers. Targeted advertising captures the attention of the consumers it is aimed at, resulting in a higher return on investment for the company.
Because behavioral advertising enables advertisers to more easily determine user preferences and purchasing habits, the ads will be more pertinent and useful for consumers. By creating a more efficient and effective manner of advertising to the consumer, an advertiser benefits greatly in the following ways:
Using information from consumers can benefit the advertiser by enabling a more efficient campaign; targeted advertising is proven to work both effectively and efficiently.[69] Advertisers don't want to waste time and money advertising to the "wrong people".[60] Through technological advances, the internet has allowed advertisers to target consumers beyond the capabilities of traditional media and to target a significantly larger audience.[70]
The main advantage of using targeted advertising is that it can help minimize wasted advertising by using detailed information about the individuals for whom a product is intended.[71] If consumers are shown ads that are targeted at them, it is more likely they will be interested and click on them. 'Know thy consumer' is a simple principle used by advertisers: when businesses know information about consumers, it can be easier to target them and get them to purchase their product.
Some consumers do not mind if their information is used and are more accepting of ads with easily accessible links. This is because they may appreciate adverts tailored to their preferences rather than generic ads. They are more likely to be directed to products they want, and possibly purchase them, in turn generating more income for the business doing the advertising.
Targeted advertising has raised controversies, most particularly regarding privacy rights and policies. Because behavioral targeting focuses on specific user actions such as site history, browsing history, and buying behavior, it has raised user concern that all activity is being tracked.
Privacy International, a UK-based registered charity that defends and promotes the right to privacy across the world, suggests that from any ethical standpoint such interception of web traffic must be conditional on explicit and informed consent, and that action must be taken where organizations can be shown to have acted unlawfully.[citation needed]
A survey conducted in the United States by the Pew Internet & American Life Project between January 20 and February 19, 2012, revealed that most Americans are not in favor of targeted advertising, seeing it as an invasion of privacy. Indeed, 68% of those surveyed said they are "not okay" with targeted advertising because they do not like having their online behavior tracked and analyzed.
Another issue with targeted advertising is the lack of 'new' advertisements of goods or services. Because all ads are tailored to user preferences, different products will not be introduced to the consumer. Hence, in this case, the consumer is at a loss, as they are not exposed to anything new.
Advertisers concentrate their resources on the consumer, which can be very effective when done right.[72] When advertising doesn't work, the consumer can find it creepy and start wondering how the advertiser learned the information about them.[26] Consumers can have concerns over ads targeted at them that are too personal for comfort, feeling a need for control over their data.[73]
In targeted advertising, privacy is a complicated issue due to the type of protected user information and the number of parties involved. The three main parties involved in online advertising are the advertiser, the publisher, and the network. People tend to want to keep their previously browsed websites private, yet users' 'clickstreams' are transferred to advertisers who work with ad networks. The user's preferences and interests are visible through their clickstream, and their behavioral profile is generated.[74]
As of 2010, many people found this form of advertising concerning and saw these tactics as manipulative and discriminatory.[74] As a result, several methods have been introduced to avoid advertising.[4] Internet users employing ad blockers are rapidly growing in number. The average global ad-blocking[75] rate in early 2018 was estimated at 27 percent. Greece is at the top of the list, with more than 40% of internet users admitting to using ad-blocking software. Among the technical population, ad-blocking reaches 58%.[76]
Targeted advertising raises privacy concerns. Targeted advertising is performed by analyzing consumers' activities through online services such as HTTP cookies and data mining, both of which can be seen as detrimental to consumers' privacy. Marketers research consumers' online activity for targeted advertising campaigns like programmatic and SEO.
Consumers' privacy concerns revolve around today's unprecedented tracking capabilities and whether to trust their trackers. Consumers may feel uncomfortable with sites knowing so much about their activity online. Targeted advertising aims to increase promotions' relevance to potential buyers, delivering ad campaign executions to specified consumers at critical stages in the buying decision process. This potentially limits a consumer's awareness of alternatives and reinforces selective exposure.
Consumers may start avoiding certain sites and brands if they keep getting served the same advertisements, and they may feel they are being watched too much or may start getting annoyed with certain brands. Due to the increased use of tracking cookies all over the web, many sites now have cookie notices that pop up when a visitor lands on a site. The notice informs the visitor about the use of cookies, how they affect the visitor, and the visitor's options regarding what information the cookies can obtain.
As of 2019, many online users and advocacy groups were concerned about privacy issues around targeted advertising, because it requires aggregation of large amounts of personal data, including highly sensitive data such as sexual orientation or sexual preferences, health issues, and location, which is then traded between hundreds of parties in the process of real-time bidding.[77][78]
This is a controversy that the behavioral targeting industry is trying to contain through education, advocacy, and product constraints to keep all information non-personally identifiable or to obtain permission from end-users.[79] AOL created animated cartoons in 2008 to explain to its users that their past actions may determine the content of ads they see in the future.[80]
Canadian academics at the University of Ottawa's Canadian Internet Policy and Public Interest Clinic have demanded that the federal privacy commissioner investigate online profiling of Internet users for targeted advertising.[81]
The European Commission (via Commissioner Meglena Kuneva) has also raised several concerns related to online data collection (of personal data), profiling, and behavioral targeting, and is looking to "enforce existing regulation".[82]
In October 2009 it was reported that a recent survey carried out by the University of Pennsylvania and the Berkeley Center for Law and Technology found that a large majority of US internet users rejected the use of behavioral advertising.[83] Several research efforts by academics and others as of 2009 have demonstrated that data that is supposedly anonymized can be used to identify real individuals.[84]
In December 2010, online tracking firm Quantcast agreed to pay $2.4M to settle a class-action lawsuit over its use of 'zombie' cookies to track consumers. These zombie cookies, which were on partner sites such as MTV, Hulu, and ESPN, would regenerate to continue tracking the user even if they were deleted.[85] Other uses of such technology include Facebook and its use of the Facebook Beacon to track users across the internet, later to be used for more targeted advertising.[86] Tracking mechanisms without consumer consent are generally frowned upon; however, tracking of consumer behavior online or on mobile devices is key to digital advertising, which is the financial backbone of most of the internet.
In March 2011, it was reported that the online ad industry would begin working with the Council of Better Business Bureaus to start policing itself as part of its program to monitor and regulate how marketers track consumers online, also known as behavioral advertising.[87]
Since at least the mid-2010s, many users of smartphones or other mobile devices have advanced the theory that technology companies are using microphones in the devices to record personal conversations for purposes of targeted advertising.[88] Such theories are often accompanied by personal anecdotes involving advertisements with apparent connections to prior conversations.[89] Facebook has denied the practice, and Mark Zuckerberg denied it in congressional testimony.[90] Google has also denied using ambient sound or conversations to target advertising.[91] Technology experts who have investigated the claims have described them as unproven and unlikely.[91][92][93] An alternative explanation for apparent connections between conversations and subsequent advertisements is the fact that technology companies track user behavior and interests in many ways other than via microphones.[94]
In December 2023, 404 Media reported that Cox Media Group was advertising a service to marketing professionals called "Active Listening", which involved the ability to listen to microphones installed in smartphones, smart TVs, and other devices in order to target ads to consumers.[95][96] A pitch deck promoting the capability stated that it targeted "Google/Bing" and that Cox Media Group was a Google Premier Partner.[97] Meta, Amazon, Google, and Microsoft all denied using the service.[98] In response to questions from 404 Media, Google stated that it had removed Cox Media Group from its Partners Program after a review.[97] Cox Media removed the material from their website and denied listening to any conversations.[99]
Contemporary data-driven marketing can be traced back to the 1980s and the emergence of database marketing, which increased the ease of personalizing customer communications.[100]
|
https://en.wikipedia.org/wiki/Behavioral_targeting
|
The California Online Privacy Protection Act of 2003 (CalOPPA),[1] effective as of July 1, 2004 and amended in 2013, is the first state law in the United States requiring commercial websites on the World Wide Web and online services to include a privacy policy on their website. According to this California state law, under the Business and Professions Code, Division 8 Special Business Regulations, Chapter 22 Internet Privacy Requirements, operators of commercial websites that collect Personally Identifiable Information (PII) from California residents are required to conspicuously post and comply with a privacy policy that meets specific requirements.[2] A website operator who fails to post their privacy policy within 30 days of being notified about noncompliance will be deemed in violation. PII includes information such as name, street address, email address, telephone number, date of birth, Social Security number, or other details about a person that could allow a consumer to be contacted physically or online.
According to the act, the operator of a website must post a distinctive and easily found link to the website's privacy policy, commonly listed under the heading "Your California Privacy Rights". The privacy policy must detail the kinds of information gathered by the website, how the information will or could be shared with other parties, and, if such a process exists, the process users can use to review and make changes to their stored information. It also must include the policy's effective date and describe any changes made since then.
The owner of a website can be subject to legal action under CalOPPA within 30 days of being notified for not posting the privacy policy or not meeting the law's criteria. The owner could be faulted for negligence, possibly even conscious negligence, in failing to comply with the act, which ultimately results in charges filed against them for this noncompliance.[3]
CalOPPA non-compliance violations may be reported to the California Attorney General's office via their website.[4][2]
The act is broad in scope, reaching well beyond California's borders. Neither the web server nor the company that created the website has to be in California in order to fall under the scope of the law; the website only has to be accessible by California residents.[5] Many American websites thus include a boilerplate disclaimer, usually under the titled hyperlink "Your California Privacy Rights", in their site's footer section by default for all-page access.[6]
As it does not contain enforcement provisions of its own, CalOPPA is expected to be enforced through California's Unfair Competition Law (UCL),[7] which prohibits unlawful, unfair, or fraudulent business acts or practices. The UCL may be enforced for violations of CalOPPA by government officials seeking civil penalties or equitable relief, or by private parties seeking private claims.[8]
In May 2007, getting to Google's privacy policy required clicking on "About Google" on its home page, which brought up a page that included a link to its privacy policy. New York Times reporter Saul Hansell posted a blog entry[9] raising questions about Google's compliance with this act. A coalition of privacy groups also sent a letter[10] to Google's CEO, Eric Schmidt, questioning the absence of a privacy policy link on its home page. According to Electronic Privacy Information Center director Marc Rotenberg, a lawsuit challenging Google's privacy policy practices as a violation of California law was not filed in the hope that their informal complaints could be resolved through discussions.[11] Later, Google added a direct link to its privacy policy on its homepage.[12]
Assembly Bill 370 (Muratsuchi), which was signed into law in 2013, amended CalOPPA to require new privacy policy disclosures for websites and online services that track visitors. Tracking was defined in the legislative analysis of the bill as "the monitoring of an individual across multiple websites to build a profile of behavior and interests."[13][14] It required privacy policies to either contain a disclosure, or link to a disclosure on a separate page, detailing how websites respond to the Do Not Track header and "other mechanisms that provide consumers the ability to exercise choice regarding the collection of personally identifiable information about an individual consumer’s online activities over time and across third-party Web sites or online services", if websites track the personally identifiable information of users. It also required privacy policies to disclose whether websites allow third parties to engage in cross-site tracking of their users. See Cal. Assembly Bill 370, which became effective on January 1, 2014.
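What the amendment requires is disclosure of how a site responds to Do Not Track, not any particular response; still, the mechanics of the signal itself are simple. A hypothetical server-side check of the header (the function name and its policy are illustrative only):

```python
# Detecting the Do Not Track signal; how a site acts on it is the site's
# own policy, which is what the amended CalOPPA requires it to disclose.
def tracking_allowed(request_headers: dict) -> bool:
    """Return False when the browser sent the 'DNT: 1' header."""
    return request_headers.get("DNT") != "1"

print(tracking_allowed({"DNT": "1"}))               # False -- user opted out
print(tracking_allowed({"User-Agent": "Example"}))  # True  -- header absent
```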
On February 6, 2013, Assembly Member Ed Chau introduced AB 242, which would have amended the act to impose additional requirements on privacy policies.[15] The amendments would require:
AB 242 died in the Assembly Judiciary Committee.[16]
|
https://en.wikipedia.org/wiki/California_Privacy_Rights
|
Digital marketing is the component of marketing that uses the Internet and online-based digital technologies such as desktop computers, mobile phones, and other digital media and platforms to promote products and services.[2][3]
It has significantly transformed the way brands and businesses utilize technology for marketing since the 1990s and 2000s. As digital platforms became increasingly incorporated into marketing plans and everyday life,[4] and as people increasingly used digital devices instead of visiting physical shops,[5][6] digital marketing campaigns have become prevalent, employing combinations of methods. Some of these methods include: search engine optimization (SEO), search engine marketing (SEM), content marketing, influencer marketing, content automation, campaign marketing, data-driven marketing, e-commerce marketing, social media marketing, social media optimization, e-mail direct marketing, display advertising, e-books, and optical disks and games. Digital marketing extends to non-Internet channels that provide digital media, such as television, mobile phones (SMS and MMS), callbacks, and on-hold mobile ringtones.[7]
The extension to non-Internet channels differentiates digital marketing from online marketing.[8]
Digital marketing effectively began in 1990 when the Archie search engine was created as an index for FTP sites. In the 1980s, the storage capacity of computers was already large enough to store huge volumes of customer information. Companies started choosing online techniques, such as database marketing, rather than limited list brokers.[9] Databases allowed companies to track customers' information more effectively, transforming the relationship between buyer and seller.
In the 1990s, the term digital marketing was coined.[citation needed] The first clickable banner ad, the "You Will" campaign by AT&T, went live in 1994, and over the first four months, 44% of all people who saw it clicked on the ad.[10][11] Early digital marketing efforts focused on simple HTML websites and the burgeoning practice of email marketing, which allowed for direct communication with consumers.[12]
In the 2000s, with increasing numbers of Internet users and the birth of the iPhone, customers began searching for products and making decisions about their needs online first, instead of consulting a salesperson, which created a new problem for the marketing department of a company.[13] In addition, a survey in 2000 in the United Kingdom found that most retailers still needed to register their own domain address.[14] These problems encouraged marketers to find new ways to integrate digital technology into market development. At the same time, Pay-Per-Click advertising, introduced by Google AdWords in 2000, allowed businesses to target specific keywords, making digital marketing more measurable and cost-effective.[15]
The mid-2000s saw the emergence of social media platforms like Facebook (2004), YouTube (2005), and Twitter (2006). These platforms revolutionized digital marketing by facilitating direct and interactive engagement with consumers. In 2007, marketing automation was developed as a response to the ever-evolving marketing climate. Marketing automation is the process by which software is used to automate conventional marketing processes.[16] Marketing automation helps companies segment customers, launch multichannel marketing campaigns, and provide personalized information for customers,[16] based on their specific activities. In this way, users' activity (or lack thereof) triggers a personal message that is customized to the user in their preferred platform. However, despite the benefits of marketing automation, many companies are struggling to adopt it for their everyday uses correctly.[17][page needed]
Digital marketing became more sophisticated in the 2000s and the 2010s, when[18][19] the proliferation of devices capable of accessing digital media led to sudden growth.[20] Statistics produced in 2012 and 2013 showed that digital marketing was still growing.[21][22] With the development of social media in the 2000s, such as LinkedIn, Facebook, YouTube, and Twitter, consumers became highly dependent on digital electronics in their daily lives.[23] Therefore, they expected a seamless user experience across different channels when searching for product information. The change in customer behavior improved the diversification of marketing technology.[24]
Digital media growth was estimated at 4.5 trillion online ads served annually, with digital media spending growing 48% in 2010.[25] An increasing portion of advertising stems from businesses employing Online Behavioural Advertising (OBA) to tailor advertising for internet users, but OBA raises concerns about consumer privacy and data protection.[20]
Nonlinear marketing, a form of interactive marketing, is a long-term approach that involves businesses gathering data about users’ online behavior and maintaining visibility across various digital platforms.[26]
Unlike traditional marketing, which typically uses one-way communication methods such as print, television, and radio advertisements, nonlinear digital marketing aims to engage potential customers through multiple online channels.[27]
As consumer knowledge has increased and demand for more tailored offerings has grown, many organizations have adjusted their outreach strategies. This has included adopting omnichannel and nonlinear marketing methods to help ensure brand visibility, customer engagement, and broader reach.[28]
Nonlinear marketing strategies focus on customizing advertising across different platforms[29]and personalizing messages for individual consumers, rather than addressing a single, uniform audience.[30]
Tactics may include:[23]
Some studies indicate that consumer responses to traditional marketing approaches are becoming less predictable for businesses.[31]According to a 2018 study, nearly 90% of online consumers in the United States researched products and brands online before visiting the store or making a purchase.[32]The Global Web Index estimated that in 2018, a little more than 50% of consumers researched products on social media.[33]Businesses often rely on individuals portraying their products in a positive light on social media, and may adapt their marketing strategy to target people with large social media followings in order to generate such comments.[34]In this manner, businesses can use consumers to advertise their products or services, decreasing the cost for the company.[35]
One of the key objectives of modern digital marketing is to raise brand awareness, the extent to which customers and the public are familiar with and recognize a particular brand.
Enhancing brand awareness is important in digital marketing, and marketing in general, because of its impact on brand perception and consumer decision-making. According to the 2015 essay, "Impact of Brand on Consumer Behavior":
"Brand awareness, as one of the fundamental dimensions ofbrand equity, is often considered to be a prerequisite of consumers’ buying decision, as it represents the main factor for including a brand in theconsideration set. Brand awareness can also influence consumers’ perceived risk assessment and their confidence in the purchase decision, due to familiarity with the brand and its characteristics."[36]
Recent trends show that businesses and digital marketers are prioritizing brand awareness, focusing their digital marketing efforts more on cultivating brand recognition and recall than in previous years. This is evidenced by a 2019 Content Marketing Institute study, which found that 81% of digital marketers had worked on enhancing brand recognition over the past year.[37]
Another Content Marketing Institute survey revealed that 89% of B2B marketers now believe improving brand awareness to be more important than efforts directed at increasing sales.[38]
Increasing brand awareness is a focus of digital marketing strategy for a number of reasons:
Digital marketing strategies may include the use of one or more online channels and techniques (omnichannel) to increase brand awareness among consumers.
Building brand awareness may involve such methods/tools as:
Search engine optimization techniques may be used to improve the visibility of business websites and brand-related content for common industry-related search queries.[46]
The importance of SEO for increasing brand awareness is said to correlate with the growing influence of search results and search features like featured snippets, knowledge panels, and local SEO on customer behavior.[47]
SEM, also known as PPC advertising, involves the purchase of ad space in prominent, visible positions atop search results pages and websites. Search ads have been shown to have a positive impact on brand recognition, awareness, and conversions.[48]
33% of searchers who click on paid ads do so because they directly respond to their particular search query.[49]
Social media marketing is characterized by its constant engagement with consumers, emphasizing content creation and interaction skills. It involves real-time monitoring, analysis, summarization, and management of the marketing process, performed via platforms like Hootsuite or Sprout Social, which support these activities and allow adjustments to marketing strategies based on real-time feedback from the market and consumers.[50][51] 70% of marketers list increasing brand awareness as their number one goal for marketing on social media platforms.[citation needed] As of 2021, LinkedIn has been added as one of the most-used social media platforms by business leaders for its professional networking capabilities.[52]
56% of marketers believe personalized content – brand-centered blogs, articles, social updates, videos, and landing pages – improves brand recall and engagement.[53]
One of the major changes that occurred in traditional marketing was the "emergence of digital marketing"; this led to the reinvention of marketing strategies in order to adapt to this major change.
As digital marketing is dependent on technology, which is ever-evolving and fast-changing, the same features should be expected of digital marketing developments and strategies. This portion is an attempt to qualify or segregate the notable highlights existing and being used as of press time.[when?]
To summarize, Pull digital marketing is characterized by consumers actively seeking marketing content, while Push digital marketing occurs when marketers send messages without that content being actively sought by the recipients.
An important consideration today while deciding on a strategy is that the digital tools have democratized the promotional landscape.
Six principles for building online brand content:[59]
Tourism marketing: advanced tourism, responsible and sustainable tourism, social media and online tourism marketing, and geographic information systems. As a broader research field, it matures and attracts more diverse and in-depth academic research.[60]
The new digital era has enabled brands to selectively target customers who may potentially be interested in their brand, or to target them based on previous browsing interests. Businesses can use social media to select the age range, location, gender, and interests of those to whom they would like their targeted post to be shown. Furthermore, based on a customer's recent search history, they can be ‘followed’ on the internet so they see advertisements from similar brands, products, and services.[61] This allows businesses to target the specific customers that they know and feel will most benefit from their product or service, something that had limited capabilities up until the digital era.
Digital marketing activity is still growing across the world according to the headline global marketing index. A study published in September 2018 found that global outlays on digital marketing tactics were approaching $100 billion.[62] Digital media continues to grow rapidly; while marketing budgets are expanding, traditional media is declining.[63] Digital media helps brands reach consumers to engage with their product or service in a personalized way. Five current industry practices that are often ineffective are prioritizing clicks; balancing search and display; understanding mobiles; targeting, viewability, brand safety, and invalid traffic; and cross-platform measurement.[64] Why these practices are ineffective, and ways to make them effective, are discussed around the following points.
Prioritizing clicks refers to display click ads; although advantageous for being ‘simple, fast and inexpensive’, click-through rates for display ads in the United States were only 0.10 percent in 2016. This means roughly one in a thousand display ads is clicked, so clicks alone have little effect; marketing companies should not use click counts alone to evaluate the effectiveness of display advertisements.[64]
Balancing search and display for digital display ads is important; marketers tend to look at the last search and attribute all of the effectiveness to it. This, in turn, disregards other marketing efforts, which establish brand value within the consumer's mind. ComScore determined, through drawing on online data produced by over one hundred multichannel retailers, that digital display marketing poses strengths when compared with, or positioned alongside, paid search.[64] This is why it is advised that when someone clicks on a display ad, the company opens a landing page rather than its home page. A landing page typically has something to draw the customer in to search beyond this page. Marketers commonly see increased sales among people exposed to a search ad, but how many people can be reached with a display campaign compared to a search campaign should also be considered. Multichannel retailers have an increased reach if the display is considered in synergy with search campaigns. Overall, both search and display aspects are valued, as display campaigns build awareness for the brand, so that more people are likely to click on the digital ads when a search campaign is running.[64]
Understanding mobile devices is a significant aspect of digital marketing because smartphones and tablets are now responsible for 64% of the time US consumers spend online.[64] Apps present a big opportunity as well as a challenge for marketers because, firstly, the app needs to be downloaded and, secondly, the person needs to actually use it. This may be difficult as ‘half the time spent on smartphone apps occurs on the individual's single most used app, and almost 85% of their time on the top four rated apps’.[64] Mobile advertising can assist in achieving a variety of commercial objectives, and it is effective due to taking over the entire screen, and voice or status is likely to be considered highly. However, the message must not be seen or thought of as intrusive.[64] Disadvantages of digital media used on mobile devices include limited creative capabilities and reach, although there are many positive aspects, including the user's entitlement to select product information, digital media's flexible message platform, and the potential for direct selling.[65]
The number of marketing channels continues to expand, and measurement practices are growing in complexity. A cross-platform view must be used to unify audience measurement and media planning. Market researchers need to understand how the omni-channel affects consumers' behavior, even though advertisements on a consumer's device do not get measured. Significant aspects of cross-platform measurement involve deduplication and understanding that an incremental level has been reached with another platform, rather than delivering more impressions to people who have previously been reached.[64] An example is ‘ESPN and comScore partnered on Project Blueprint discovering the sports broadcaster achieved a 21% increase in unduplicated daily reach thanks to digital advertising’.[64] The television and radio industries are the electronic media that compete with digital and other technological advertising. Yet television advertising does not compete directly with online digital advertising, due to its ability to cross platforms with digital technology. Radio also gains power through cross-platform, online streaming content. Television and radio continue to persuade and affect the audience across multiple platforms.[66]
Targeting, viewability, brand safety, and invalid traffic are all aspects used by marketers to help advocate digital advertising. Cookies, a tracking tool within desktop devices, cause difficulty; shortcomings include deletion by web browsers, the inability to sort between multiple users of a device, inaccurate estimates for unique visitors, overstated reach, misunderstood frequency, and problems with ad servers, which cannot distinguish between when cookies have been deleted and when consumers have not previously been exposed to an ad. Due to the inaccuracies influenced by cookies, demographics in the target market are low and vary.[64] Another element affected in digital marketing is ‘viewability’, or whether the ad was actually seen by the consumer. Many ads are not seen by a consumer and may never reach the right demographic segment. Brand safety is another issue: whether or not the ad was shown in the context of unethical or offensive content. Recognizing fraud when an ad is exposed is another challenge marketers face. This relates to invalid traffic, as premium sites are more effective at detecting fraudulent traffic, although non-premium sites are more of a problem.[64]
Digital marketing channels are systems based on the Internet that can create, accelerate, and transmit product value from producer to consumer terminal through digital networks.[67][68] Digital marketing is facilitated by multiple digital marketing channels; an advertiser's core objective is to find channels that result in maximum two-way communication and a better overall ROI for the brand. There are multiple digital marketing channels available, namely:[69]
It is important for a firm to reach out to consumers and create a two-way communication model, as digital marketing allows consumers to give feedback to the firm on a community-based site or directly via email.[84] Firms should seek this long-term communication relationship by using multiple forms of channels and by using promotional strategies related to their target consumer, as well as word-of-mouth marketing.[84]
Possible benefits of digital marketing include:
Digital marketing used to rely primarily on the self-regulation included in the ICC Code,[88] which included rules that apply to marketing communications using digital interactive media. However, self-regulation has proved largely ineffective,[89][90] leading to the consolidation of market power in a few firms, including Google, which has been determined to hold monopolies in search marketing and digital advertising.[91][92] While self-regulation codes still exist, government regulation is increasing in multiple jurisdictions, including California's legislation on targeted advertising online.[93] In Europe, digital marketing is regulated through multiple codes, of which the most important is the Digital Services Act,[94] which entered into force on 17 February 2024. Other regulations focus on user privacy and data management, such as the General Data Protection Regulation (GDPR).[95]
Digital marketing planning is a term used in marketing management. It describes the first stage of forming a digital marketing strategy for the wider digital marketing system. The difference between digital and traditional marketing planning is that it uses digitally based communication tools and technology such as Social, Web, Mobile, and Scannable Surface.[96][97] Nevertheless, both are aligned with the vision and the mission of the company and the overarching business strategy.[98]
Dr. Dave Chaffey, an author on marketing topics, has suggested that successful digital marketing strategies involve digital marketing planning (DMP), a three-stage approach: Opportunity, Strategy, and Action. This generic strategic approach often has phases of situation review, goal setting, strategy formulation, resource allocation, and monitoring.[98]
To create an effective DMP, a business first needs to review the marketplace and set "SMART" (Specific, Measurable, Actionable, Relevant, and Time-Bound) objectives.[99] They can set SMART objectives by reviewing the current benchmarks and key performance indicators (KPIs) of the company and competitors. It is pertinent that the analytics used for the KPIs be customized to the type, objectives, mission, and vision of the company.[100][101]
Companies can scan for marketing and sales opportunities by reviewing their own outreach as well as influencer outreach. This gives them a competitive advantage because they are able to analyse their co-marketers' influence and brand associations.[102]
To seize the opportunity, the firm should summarize its current customers' personas and purchase journey; from this, it is able to deduce its digital marketing capability.[103]
To create a planned digital strategy, the company must review its digital proposition (what it is offering to consumers) and communicate it using digital customer-targeting techniques. The company must therefore define its online value proposition (OVP), meaning it must express clearly what it is offering customers online, e.g., brand positioning.
The company should also (re)select target market segments and personas and define digital targeting approaches.
After doing this effectively, it is important to review the marketing mix for online options. The marketing mix comprises the 4Ps: Product, Price, Promotion, and Place.[104][105] Some academics have added three additional elements to the traditional 4Ps of marketing (Process, People, and Physical evidence), making it the 7Ps of marketing.[106]
The third and final stage requires the firm to set a budget and management systems. These must be measurable touchpoints, such as the audience reached across all digital platforms. Furthermore, marketers must ensure the budget and management systems are integrating the paid, owned, and earned media of the company.[107] The Action and final stage of planning also requires the company to set in place measurable content creation, e.g. oral, visual, or written online media.[108]
One way marketers can reach out to consumers and understand their thought process is through what is called an empathy map. An empathy map is a four-step process. The first step is to ask the questions that the consumer would be thinking in their demographic. The second step is to describe the feelings that the consumer may be having. The third step is to think about what the consumer would say in their situation. The final step is to imagine what the consumer will try to do based on the other three steps. This map lets marketing teams put themselves in their target demographic's shoes.[109] Web analytics are also a very important way to understand consumers. They show the habits that people have online for each website.[110] One particular form of these analytics is predictive analytics, which helps marketers figure out what route consumers are on. This uses the information gathered from other analytics and then creates different predictions of what people will do so that companies can strategize on what to do next, according to the people's trends.[111]
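As a toy illustration of the predictive-analytics idea, here is a sketch that predicts a visitor's likely next step from previously observed click paths; the sessions and state names are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical click paths gathered from past visitor sessions.
past_sessions = [
    ["home", "product", "cart", "checkout"],
    ["home", "product", "cart", "abandon"],
    ["home", "search", "product", "cart", "checkout"],
]

# Count observed transitions between consecutive steps.
transitions = defaultdict(Counter)
for session in past_sessions:
    for here, nxt in zip(session, session[1:]):
        transitions[here][nxt] += 1

def predict_next(state: str) -> str:
    """Most common next step observed after this state."""
    return transitions[state].most_common(1)[0][0]

print(predict_next("cart"))  # checkout -- 2 of the 3 observed carts converted
```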
The "sharing economy" refers to an economic pattern that aims to obtain a resource that is not fully used.[114]Nowadays, thesharing economyhas had an unimagined effect on many traditional elements including labor, industry, and distribution system.[114]This effect is not negligible that some industries are obviously under threat.[114][115]The sharing economy is influencing the traditional marketing channels by changing the nature of some specific concept including ownership, assets, and recruitment.[115]
Digital marketing channels and traditional marketing channels are similar in function in that the value of the product or service is passed from the original producer to the end user by a kind of supply chain.[116] Digital marketing channels, however, consist of internet systems that create, promote, and deliver products or services from producer to consumer through digital networks.[117] Increasing changes to marketing channels have been a significant contributor to the expansion and growth of the sharing economy.[117] Such changes to marketing channels have prompted unprecedented and historic growth.[117] In addition to this typical approach, the built-in control, efficiency, and low cost of digital marketing channels are essential features in the application of the sharing economy.[116]
Digital marketing channels within the sharing economy are typically divided into three domains: e-mail, social media, and search engine marketing (SEM).[117]
Other emerging digital marketing channels, particularly branded mobile apps, have excelled in the sharing economy.[117]Branded mobile apps are created specifically to initiate engagement between customers and the company. This engagement is typically facilitated through entertainment, information, or market transaction.[117]
|
https://en.wikipedia.org/wiki/Digital_marketing
|
Hypertargeting refers to the ability to deliver advertising content to specific interest-based segments in a network. MySpace coined the term in November 2007[1] with the launch of their SelfServe advertising solution (later called myAds[2]), described on their site as "enabling online marketers to tap into self-expressed user information to target campaigns like never before."
Hypertargeting is also the ability on social network sites to target ads based on very specific criteria. This is an important step towards precision performance marketing.
The first MySpace HyperTarget release offered advertisers the ability to direct their ads to 10 categories self-identified by users in their profiles, including music, sports, and movies. In July 2007 the targeting options expanded to 100 subcategories. Rather than simply targeting movie lovers, for example, advertisers could send ads based on the preferred genres like horror, romance, or comedy. By January 2010, MySpace HyperTarget involved 5 algorithms across 1,000 segments.
According to an article by Harry Gold in online publisher ClickZ,[3] the general field of hypertargeting draws information from 3 sources:
Facebook, a popular social network, offers an ad targeting service through their Social Ads platform. Ads can be hypertargeted to users based on keywords from their profiles, pages they're fans of, events they responded to, or applications used. Some of these examples involve the use of behavioral targeting.[4]
By 2009, hypertargeting became an accepted industry term.[5] In 2010, the International Consumer Electronics Show (CES), the world's largest consumer technology tradeshow, dedicated three sessions to the topic:[citation needed]
|
https://en.wikipedia.org/wiki/Hypertargeting
|
Internet manipulation is the use of online digital technologies, including algorithms, social bots, and automated scripts, for commercial, social, military, or political purposes.[1] Internet and social media manipulation are the prime vehicles for spreading disinformation due to the importance of digital platforms for media consumption and everyday communication.[2] When employed for political purposes, internet manipulation may be used to steer public opinion,[3] polarise citizens,[4] circulate conspiracy theories,[5] and silence political dissidents. Internet manipulation can also be done for profit, for instance, to harm corporate or political adversaries and improve brand reputation.[6] Internet manipulation is sometimes also used to describe the selective enforcement of Internet censorship[7][8] or selective violations of net neutrality.[9]
Internet manipulation for propaganda purposes with the help of data analysis and internet bots in social media is calledcomputational propaganda.
Internet manipulation often aims to change user perceptions and their corresponding behaviors.[5] Since the early 2000s, the notion of cognitive hacking has meant a cyberattack that aims to change human behavior.[10][11] Today, fake news, disinformation attacks, and deepfakes can secretly affect behavior in ways that are difficult to detect.[12] It has been found that content evoking high-arousal emotions (e.g. awe, anger, anxiety, or content with hidden sexual meaning) is more viral, and that content perceived as surprising, interesting, or useful is also more likely to spread.[13]
Providing and perpetuating simple explanations for complex circumstances may be used for online manipulation. Such explanations are often easier to believe, arrive ahead of any adequate investigation, and achieve higher virality than complex, nuanced explanations and information.[14] (See also: Low-information rationality)
Prior collective ratings of web content influence one's own perception of it. A 2015 experiment showed that the perceived beauty of a piece of artwork in an online context varies with external influence: confederate ratings were manipulated in opinion and credibility for participants who were asked to evaluate the artwork.[15] Furthermore, on Reddit, it has been found that content that initially gets a few down- or upvotes often continues going negative, or vice versa. This is referred to as "bandwagon/snowball voting" by Reddit users and administrators.[16]
Echo chambers and filter bubbles might be created by website administrators or moderators locking out people with differing viewpoints, by establishing certain rules, or by the typical member viewpoints of online sub-communities or Internet "tribes".
Fake news does not even need to be read to have an effect: its headlines and sound bites alone act through sheer quantity and emotional charge.[citation needed] The apparent prevalence of specific points, views, issues, and people can be amplified,[17] stimulated, or simulated. (See also: Mere-exposure effect.) Clarifications, conspiracy busting, and fake-news exposure often come late, when the damage is already done, and/or do not reach the bulk of the audience of the associated misinformation.[18][better source needed]
Social media activities and other data can be used to analyze the personality of people and predict their behaviour and preferences.[19][20] Michal Kosinski developed such a procedure.[19] Such profiles can be used to tailor media or information to a person's psyche, e.g. via Facebook. According to reports, this may have played an integral part in Donald Trump's 2016 election win.[19][21] (See also: Targeted advertising, Personalized marketing)
Due to overabundance of online content,social networking platformsandsearch engineshave leveragedalgorithmsto tailor and personalize users' feeds based on their individual preferences. However, algorithms also restrict exposure to different viewpoints and content, leading to the creation ofecho chambersorfilter bubbles.[5][22]
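As a toy illustration of how engagement-driven ranking can produce such narrowing, the sketch below reinforces whatever the user clicks; the topics and the update rule are illustrative and do not represent any platform's actual algorithm.

```python
# A toy "filter bubble": the feed ranks topics by the user's past clicks,
# and each click reinforces that topic, so the feed drifts toward a single
# viewpoint even though it started out balanced.
from collections import Counter

topics = ["politics-left", "politics-right", "sports", "science"]
clicks = Counter({t: 1 for t in topics})  # start with uniform interest

for round_ in range(5):
    # Rank topics by accumulated clicks; show the top of the feed.
    feed = sorted(topics, key=lambda t: clicks[t], reverse=True)
    shown = feed[0]
    clicks[shown] += 1  # the user clicks what is shown most prominently
    print(f"round {round_}: feed top = {shown}, interest = {dict(clicks)}")
# After a few rounds one topic dominates the feed entirely.
```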
With the help of algorithms, filter bubbles influence users' choices and perception of reality by giving the impression that a particular point of view or representation is widely shared. Following the 2016 referendum on membership of the European Union in the United Kingdom and the United States presidential elections, this gained attention as many individuals confessed their surprise at results that seemed very distant from their expectations. The range of pluralism is influenced by the personalized individualization of the services and the way it diminishes choice.[23] Five manipulative verbal influences have been found in media texts: self-expression, semantic speech strategies, persuasive strategies, swipe films, and information manipulation. The vocabulary toolkit for speech manipulation includes euphemism, mood vocabulary, situational adjectives, slogans, verbal metaphors, etc.[24]
Research on echo chambers by Flaxman, Goel, and Rao,[25] Pariser,[26] and Grömping[27] suggests that use of social media and search engines tends to increase ideological distance among individuals.
Comparisons between online and off-linesegregationhave indicated how segregation tends to be higher in face-to-face interactions with neighbors, co-workers, or family members,[28]and reviews of existing research have indicated how availableempirical evidencedoes not support the most pessimistic views aboutpolarization.[29]A 2015 study suggested that individuals' own choices drive algorithmic filtering, limiting exposure to a range of content.[30]While algorithms may not be causing polarization, they could amplify it, representing a significant component of the new information landscape.[31]
The Joint Threat Research Intelligence Group (JTRIG), a unit of the Government Communications Headquarters (GCHQ), the British intelligence agency,[32] was revealed as part of the global surveillance disclosures in documents leaked by former National Security Agency contractor Edward Snowden.[33] Its mission scope includes using "dirty tricks" to "destroy, deny, degrade [and] disrupt" enemies.[33][34] Core tactics include injecting false material onto the Internet in order to destroy the reputation of targets, and manipulating online discourse and activism. Methods may include posting material to the Internet and falsely attributing it to someone else, pretending to be a victim of the target individual whose reputation is to be destroyed, and posting "negative information" on various forums.[35]
Known as "Effects" operations, the work of JTRIG had become a "major part" of GCHQ's operations by 2010.[33]The unit's online propaganda efforts (named "Online Covert Action"[citation needed]) utilize "mass messaging" and the "pushing [of] stories" via the medium ofTwitter,Flickr,FacebookandYouTube.[33]Online "false flag" operations are also used by JTRIG against targets.[33]JTRIG have also changed photographs onsocial mediasites, as well as emailing and texting colleagues and neighbours with "unsavory information" about the targeted individual.[33]In June 2015, NSA files published byGlenn Greenwaldrevealed new details about JTRIG's work at covertly manipulating online communities.[36]The disclosures also revealed the technique of "credential harvesting", in which journalists could be used to disseminate information and identify non-British journalists who, once manipulated, could give information to the intended target of a secret campaign, perhaps providing access during an interview.[33]It is unknown whether the journalists would be aware that they were being manipulated.[33]
Furthermore, Russia is frequently accused of financing "trolls" to post pro-Russian opinions across the Internet.[37]TheInternet Research Agencyhas become known for employing hundreds of Russians to post propaganda online under fake identities in order to create the illusion of massive support.[38]In 2016 Russia was accused of sophisticated propaganda campaigns to spreadfake newswith the goal of punishing Democrat Hillary Clinton and helping Republican Donald Trump during the 2016 presidential election as well as undermining faith in American democracy.[39][40][41]
In a 2017 report,[42] Facebook publicly stated that its site had been exploited by governments to manipulate public opinion in other countries, including during the presidential elections in the US and France.[17][43][44] It identified three main components of an information operations campaign: targeted data collection, content creation, and false amplification. These include stealing and exposing information that is not public; spreading stories, false or real, to third parties through fake accounts; and coordinating fake accounts to manipulate political discussion, such as amplifying some voices while repressing others.[45][46]
In 2016, Andrés Sepúlveda disclosed that he had manipulated public opinion to rig elections in Latin America. According to him, with a budget of $600,000 he led a team of hackers that stole campaign strategies, manipulated social media to create false waves of enthusiasm and derision, and installed spyware in opposition offices to help Enrique Peña Nieto, a right-of-center candidate, win the election.[47][48]
In the run up toIndia's 2014 elections, both the Bharatiya Janata party (BJP) and the Congress party were accused of hiring "political trolls" to talk favourably about them on blogs and social media.[37]
The Chinese government is also believed to run a so-called "50-cent army" (a reference to how much they are said to be paid) and the "Internet Water Army" to reinforce favourable opinion towards it and theChinese Communist Party(CCP) as well as to suppress dissent.[37][49]
In December 2014 the Ukrainian information ministry was launched to counter Russian propaganda with one of its first tasks being the creation of social media accounts (also known as thei-Army) and amassing friends posing as residents of eastern Ukraine.[37][50]
Twittersuspended a number of bot accounts that appeared to be spreading pro-Saudi Arabiantweets about the disappearance of Saudi dissident journalistJamal Khashoggi.[51]
A report by Mediapart claimed that the UAE, through a secret services agent named Mohammed, was using a Switzerland-based firm, Alp Services, to run manipulation campaigns against Emirati opponents. Alp Services head Mario Brero used fictitious accounts that published fake articles under pseudonyms to attack Qatar and Muslim Brotherhood networks in Europe. The UAE assigned Alp to publish at least 100 articles per year that were critical of Qatar.[52]
Internet manipulation is also used within business and marketing as a way to influence consumers. Writing in Forbes, Stu Sjouwerman discusses how disinformation can spread rapidly across social platforms, suggesting that legitimate and fake news are blurring together and that both are being weaponized in large-scale campaigns to influence the general population.[53]
Hackers, hired professionals and private citizens have all been reported to engage in internet manipulation usingsoftware, includingInternet botssuch associal bots,votebotsandclickbots.[54]In April 2009,Internet trollsof4chanvotedChristopher Poole, founder of the site, as the world'smost influential person of 2008with 16,794,368 votes by an openInternet pollconducted byTimemagazine.[55]The results were questioned even before the poll completed, as automated voting programs and manualballot stuffingwere used to influence the vote.[56][57][58]4chan's interference with the vote seemed increasingly likely, when it was found thatreading the first letterof the first 21 candidates in the poll spelled out a phrase containing two 4chanmemes: "Marblecake. Also,The Game".[59]
Jokesters and politically orientedhacktivistsmay share sophisticated knowledge of how to manipulate the Web and social media.[60]
InWiredit was noted that nation-state rules such as compulsory registration and threats of punishment are not adequate measures to combat the problem of online bots.[61]
To guard against the issue of prior ratings influencing perception several websites such asReddithave taken steps such as hiding the vote-count for a specified time.[16]
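A minimal sketch of that countermeasure, with a hypothetical one-hour threshold; real sites tune the window per community.

```python
# Hide a post's score until it is old enough that early votes cannot seed
# a bandwagon effect. The threshold is a hypothetical value.
import time

HIDE_SCORE_SECONDS = 3600  # hide scores for the first hour

def displayed_score(post, now=None):
    """Return the score shown to readers, or a placeholder for young posts."""
    now = now or time.time()
    if now - post["created_at"] < HIDE_SCORE_SECONDS:
        return "score hidden"
    return str(post["upvotes"] - post["downvotes"])

fresh = {"created_at": time.time(), "upvotes": 3, "downvotes": 8}
aged = {"created_at": time.time() - 7200, "upvotes": 42, "downvotes": 5}

print(displayed_score(fresh))  # -> "score hidden"
print(displayed_score(aged))   # -> "37"
```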
Some other potential measures under discussion are flagging posts for being likely satire or false.[62]For instance in December 2016 Facebook announced that disputed articles will be marked with the help of users and outside fact checkers.[63]The company seeks ways to identify 'information operations' and fake accounts and suspended 30,000 accounts before the presidential election in France in a strike against information operations.[17]
Inventor of theWorld Wide WebTim Berners-Leeconsiders putting few companies in charge of deciding what is or is not true a risky proposition and states thatopennesscan make the web more truthful. As an example he points toWikipediawhich, while not being perfect, allows anyone to edit with the key to its success being not just the technology but also the governance of the site. Namely, it has an army of countless volunteers and ways of determining what is or is not true.[64]
Furthermore, various kinds of software may be used to combat this problem, such as fact-checking software or voluntary browser extensions that record every website one reads, or that use the browsing history to deliver corrections to those who have read a fake story after some consensus has been reached on its falsehood.[original research?]
Furthermore,Daniel Suarezasks society to valuecriticalanalytic thinkingand suggests education reforms such as the introduction of 'formal logic' as a discipline in schools and training inmedia literacyand objective evaluation.[62]
According to a study of the Oxford Internet Institute, at least 43 countries around the globe have proposed or implemented regulations specifically designed to tackle different aspects of influence campaigns, including fake news, social media abuse, and election interference.[65]
In Germany, during the period preceding the elections in September 2017, all major political parties save AfD publicly announced that they would not use social bots in their campaigns. Additionally, they committed to strongly condemning such usage of online bots.
Moves towards regulation on social media have been made: three German states, Hessen, Bavaria, and Saxony-Anhalt, proposed in early 2017 a law under which social media users could face prosecution if they violate the terms and conditions of a platform. For example, the use of a pseudonym on Facebook, or the creation of a fake account, would be punishable by up to one year's imprisonment.[66]
In early 2018, the Italian Communications Agency AGCOM published a set of guidelines on its website, targeting the elections in March that same year. The six main topics are:[67]
In November 2018, a law against the manipulation of information was passed in France. The law stipulates that during campaign periods:[68]
In April 2018, the Malaysian parliament passed the Anti-Fake News Act. It defined fake news as 'news, information, data and reports which is or are wholly or partly false.'[69]This applied to citizens or those working at a digital publication, and imprisonment of up to 6 years was possible. However, the law was repealed after heavy criticism in August 2018.[70]
In May 2018, President Uhuru Kenyatta signed into law the Computer and Cybercrimes bill, that criminalised cybercrimes including cyberbullying and cyberespionage. If a person "intentionally publishes false, misleading or fictitious data or misinforms with intent that the data shall be considered or acted upon as authentic," they are subject to fines and up to two years imprisonment.[71]
German chancellor Angela Merkel has called on the Bundestag to deal with the possibilities of political manipulation by social bots or fake news.[72]
This article incorporates text from afree contentwork. Licensed under CC BY SA 3.0 IGO (license statement/permission). Text taken fromWorld Trends in Freedom of Expression and Media Development Global Report 2017/2018, 202, University of Oxford, UNESCO.
|
https://en.wikipedia.org/wiki/Internet_manipulation
|
Digital marketing is the component of marketing that uses the Internet and online-based digital technologies such as desktop computers, mobile phones, and other digital media and platforms to promote products and services.[2][3]
It has significantly transformed the way brands and businesses utilize technology formarketingsince the 1990s and 2000s. As digital platforms became increasingly incorporated into marketing plans and everyday life,[4]and as people increasingly useddigital devicesinstead of visiting physical shops,[5][6]digital marketing campaigns have become prevalent, employing combinations of methods. Some of these methods include:search engine optimization(SEO),search engine marketing(SEM),content marketing,influencer marketing, content automation, campaign marketing,data-driven marketing,e-commercemarketing,social media marketing,social media optimization,e-mail direct marketing,display advertising,e-books, andoptical disksand games. Digital marketing extends to non-Internet channels that provide digital media, such astelevision,mobile phones(SMSandMMS), callbacks, and on-hold mobile ringtones.[7]
The extension to non-Internet channels differentiates digital marketing fromonline marketing.[8]
Digital marketing effectively began in 1990 when theArchie search enginewas created as anindexforFTPsites. In the 1980s, the storage capacity ofcomputerswas already large enough to store huge volumes of customer information. Companies started choosing online techniques, such asdatabase marketing, rather than limitedlist brokers.[9]Databasesallowed companies to track customers' information more effectively, transforming the relationship between buyer and seller.
In the 1990s, the termdigital marketingwas coined.[citation needed]The first clickablebanner ad, the "You Will" campaign byAT&T, went live in 1994, and over the first four months, 44% of all people who saw it clicked on the ad.[10][11]Early digital marketing efforts focused on simpleHTMLwebsites and the burgeoning practice of email marketing, which allowed for direct communication with consumers.[12]
In the 2000s, with increasing numbers of Internet users and the birth of the iPhone, customers began searching for products and making decisions about their needs online first, instead of consulting a salesperson, which created a new problem for the marketing department of a company.[13] In addition, a survey in 2000 in the United Kingdom found that most retailers had not yet registered their own domain address.[14] These problems encouraged marketers to find new ways to integrate digital technology into market development. At the same time, pay-per-click advertising, introduced by Google AdWords in 2000, allowed businesses to target specific keywords, making digital marketing more measurable and cost-effective.[15]
The mid-2000s saw the emergence of social media platforms like Facebook (2004), YouTube (2005), and Twitter (2006). These platforms revolutionized digital marketing by facilitating direct and interactive engagement with consumers. In 2007, marketing automation was developed as a response to the ever-evolving marketing climate. Marketing automation is the process by which software is used to automate conventional marketing processes.[16] It helps companies segment customers, launch multichannel marketing campaigns, and provide personalized information for customers[16] based on their specific activities. In this way, a user's activity (or lack thereof) triggers a personal message customized to the user on their preferred platform. However, despite the benefits of marketing automation, many companies struggle to adopt it correctly in everyday use.[17][page needed]
Digital marketing became more sophisticated in the 2000s and the 2010s, when[18][19] the proliferation of devices capable of accessing digital media led to sudden growth.[20] Statistics produced in 2012 and 2013 showed that digital marketing was still growing.[21][22] With the development of social media in the 2000s, such as LinkedIn, Facebook, YouTube, and Twitter, consumers became highly dependent on digital electronics in their daily lives.[23] Therefore, they expected a seamless user experience across different channels for searching product information. The change in customer behavior improved the diversification of marketing technology.[24]
Digital mediagrowth was estimated at 4.5 trillion online ads served annually with digital media spending at 48% growth in 2010.[25]An increasing portion of advertising stems from businesses employingOnline Behavioural Advertising(OBA) to tailor advertising for internet users, but OBA raises concerns aboutconsumer privacyanddata protection.[20]
Nonlinear marketing, a form of interactive marketing, is a long-term approach that involves businesses gathering data about users’ online behavior and maintaining visibility across various digital platforms.[26]
Unlike traditional marketing, which typically uses one-way communication methods such as print, television, and radio advertisements, nonlinear digital marketing aims to engage potential customers through multiple online channels.[27]
As consumer knowledge has increased and demand for more tailored offerings has grown, many organizations have adjusted their outreach strategies. This has included adopting omnichannel and nonlinear marketing methods to help ensure brand visibility, customer engagement, and broader reach.[28]
Nonlinear marketing strategies focus on customizing advertising across different platforms[29]and personalizing messages for individual consumers, rather than addressing a single, uniform audience.[30]
Tactics may include:[23]
Some studies indicate that consumer responses to traditional marketing approaches are becoming less predictable for businesses.[31]According to a 2018 study, nearly 90% of online consumers in the United States researched products and brands online before visiting the store or making a purchase.[32]The Global Web Index estimated that in 2018, a little more than 50% of consumers researched products on social media.[33]Businesses often rely on individuals portraying their products in a positive light on social media, and may adapt their marketing strategy to target people with large social media followings in order to generate such comments.[34]In this manner, businesses can use consumers to advertise their products or services, decreasing the cost for the company.[35]
One of the key objectives of modern digital marketing is to raisebrand awareness, the extent to which customers and the public are familiar with and recognize a particular brand.
Enhancing brand awareness is important in digital marketing, and marketing in general, because of its impact on brand perception and consumer decision-making. According to the 2015 essay, "Impact of Brand on Consumer Behavior":
"Brand awareness, as one of the fundamental dimensions ofbrand equity, is often considered to be a prerequisite of consumers’ buying decision, as it represents the main factor for including a brand in theconsideration set. Brand awareness can also influence consumers’ perceived risk assessment and their confidence in the purchase decision, due to familiarity with the brand and its characteristics."[36]
Recent trends show that businesses and digital marketers are prioritizing brand awareness, focusing more on their digital marketing efforts on cultivating brand recognition and recall than in previous years. This is evidenced by a 2019 Content Marketing Institute study, which found that 81% of digital marketers have worked on enhancing brand recognition over the past year.[37]
Another Content Marketing Institute survey revealed that 89% ofB2Bmarketers now believe improving brand awareness to be more important than efforts directed at increasing sales.[38]
Increasing brand awareness is a focus of digital marketing strategy for a number of reasons:
Digital marketing strategies may include the use of one or more online channels and techniques (omnichannel) to increase brand awareness among consumers.
Building brand awareness may involve such methods/tools as:
Search engine optimization techniques may be used to improve the visibility of business websites and brand-related content for common industry-related search queries.[46]
The importance ofSEOto increase brand awareness is said to correlate with the growing influence of search results and search features like featured snippets, knowledge panels, and local SEO on customer behavior.[47]
SEM, also known asPPCadvertising, involves the purchase of ad space in prominent, visible positions atop search results pages and websites. Search ads have been shown to have a positive impact on brand recognition, awareness and conversions.[48]
33% of searchers who click on paid ads do so because they directly respond to their particular search query.[49]
Social media marketing is characterized by its constant engagement with consumers, emphasizing content creation and interaction skills. It involves real-time monitoring, analysis, summarization, and management of the marketing process, performed via platforms likeHootsuiteorSprout Social, which support these activities and allow adjustments to marketing strategies based on real-time feedback from the market and consumers.[50][51]70% of marketers list increasing brand awareness as their number one goal for marketing on social media platforms.[citation needed]As of 2021,LinkedInhas been added as one of the most-used social media platforms by business leaders for its professional networking capabilities.[52]
56% of marketers believe personalized content – brand-centered blogs, articles, social updates, videos, landing pages – improves brand recall and engagement.[53]
One of the major changes in traditional marketing was the "emergence of digital marketing", which led to the reinvention of marketing strategies in order to adapt to this major change.
As digital marketing is dependent on technology, which is ever-evolving and fast-changing, the same features should be expected of digital marketing developments and strategies. This section attempts to identify and categorize the notable strategies in existence and use as of press time.[when?]
To summarize, Pull digital marketing is characterized by consumers actively seeking marketing content while Push digital marketing occurs when marketers send messages without that content being actively sought by the recipients.
An important consideration today while deciding on a strategy is that the digital tools have democratized the promotional landscape.
Six principles for building online brand content:[59]
Tourism marketing covers advanced tourism; responsible and sustainable tourism; social media and online tourism marketing; and geographic information systems, as the broader research field matures and attracts more diverse and in-depth academic research.[60]
The new digital era has enabled brands to selectively target customers who may potentially be interested in their brand, based in part on previous browsing interests. Businesses can use social media to select the age range, location, gender, and interests of those they would like a targeted post to reach. Furthermore, based on a customer's recent search history, they can be 'followed' around the internet so that they see advertisements from similar brands, products, and services.[61] This allows businesses to target the specific customers that they know will most benefit from their product or service, a capability that was limited before the digital era.
Digital marketing activity is still growing across the world, according to the headline Global Marketing Index. A study published in September 2018 found that global outlays on digital marketing tactics were approaching $100 billion.[62] Digital media continues to grow rapidly; while marketing budgets are expanding, traditional media is declining.[63] Digital media helps brands reach consumers and engage them with their product or service in a personalized way. Five areas outlined as current industry practices that are often ineffective are: prioritizing clicks; balancing search and display; understanding mobile; targeting, viewability, brand safety and invalid traffic; and cross-platform measurement.[64] Why these practices are often ineffective, and ways to make these aspects effective, are discussed in the following points.
Prioritizing clicks refers to evaluating display ads by clicks. Although click ads are advantageous in being 'simple, fast and inexpensive', the click-through rate for display ads in 2016 was only 0.10 percent in the United States, meaning roughly one click per thousand impressions, so clicks alone say little about an ad's effect. This suggests that marketing companies should not rely only on clicks to evaluate the effectiveness of display advertisements.[64]
Balancing search and display for digital display ads is important: marketers tend to look at the last search and attribute all of the effectiveness to it. This, in turn, disregards other marketing efforts that establish brand value within the consumer's mind. ComScore determined, by drawing on online data produced by over one hundred multichannel retailers, that digital display marketing poses strengths when compared with, or positioned alongside, paid search.[64] This is why it is advised that when someone clicks on a display ad, the company opens a landing page rather than its home page. A landing page typically has something to draw the customer in to search beyond this page. Marketers commonly see increased sales among people exposed to a search ad, but the reach achievable with a display campaign compared to a search campaign should also be considered. Multichannel retailers have an increased reach if display is considered in synergy with search campaigns. Overall, both search and display aspects are valued, as display campaigns build awareness of the brand so that more people are likely to click on digital ads when a search campaign is running.[64]
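The "last search" problem described above is a question of attribution. The sketch below contrasts last-touch attribution with a simple linear model over a hypothetical customer journey; real attribution models are considerably more elaborate.

```python
# Last-touch attribution gives all credit to the final channel; a linear
# model spreads credit evenly across every touchpoint in the journey.
# The journey and conversion value below are hypothetical.
journey = ["display", "social", "search"]  # touchpoints in order
conversion_value = 120.0

def last_touch(journey, value):
    """All credit to the final touchpoint (here: search)."""
    return {journey[-1]: value}

def linear(journey, value):
    """Equal credit to every touchpoint in the journey."""
    share = value / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

print(last_touch(journey, conversion_value))  # {'search': 120.0}
print(linear(journey, conversion_value))      # 40.0 each: display ad that
                                              # built awareness now gets credit
```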
Understanding mobile devices is a significant aspect of digital marketing because smartphones and tablets are now responsible for 64% of the time US consumers are online.[64] Apps provide a big opportunity, as well as a challenge, for marketers because the app first needs to be downloaded and then the person needs to actually use it. This may be difficult as 'half the time spent on smartphone apps occurs on the individual's single most used app, and almost 85% of their time on the top four rated apps'.[64] Mobile advertising can assist in achieving a variety of commercial objectives, and it is effective due to taking over the entire screen, and voice or status is likely to be considered highly. However, the message must not be seen or thought of as intrusive.[64] Disadvantages of digital media used on mobile devices include limited creative capabilities and reach, although there are many positive aspects, including the user's entitlement to select product information, digital media creating a flexible message platform, and the potential for direct selling.[65]
The number of marketing channels continues to expand, and measurement practices are growing in complexity. A cross-platform view must be used to unify audience measurement and media planning. Market researchers need to understand how the omni-channel affects consumers' behavior, although advertisements on a consumer's device do not always get measured. Significant aspects of cross-platform measurement involve deduplication and understanding that incremental reach has been achieved with another platform, rather than delivering more impressions against people who have previously been reached.[64] An example is 'ESPN and comScore partnered on Project Blueprint discovering the sports broadcaster achieved a 21% increase in unduplicated daily reach thanks to digital advertising'.[64] The television and radio industries are the electronic media that compete with digital and other technological advertising. Yet television advertising is not directly competing with online digital advertising, as it is able to cross platforms with digital technology. Radio also gains power through cross-platform use, in online streaming content. Television and radio continue to persuade and affect the audience across multiple platforms.[66]
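A minimal sketch of the deduplication idea, assuming user identities can be matched across platforms (itself a hard problem): unduplicated reach is the size of the union of per-platform audiences, not the sum of their impressions.

```python
# Cross-platform deduplication: the user IDs are hypothetical stand-ins
# for a matched cross-platform identifier.
audiences = {
    "tv":      {"u1", "u2", "u3", "u4"},
    "desktop": {"u3", "u4", "u5"},
    "mobile":  {"u4", "u5", "u6"},
}

summed = sum(len(a) for a in audiences.values())    # naive, double-counts
unduplicated = len(set.union(*audiences.values()))  # deduplicated reach

print(f"summed per-platform 'reach': {summed}")     # 10
print(f"unduplicated reach: {unduplicated}")        # 6

# Incremental reach of mobile beyond tv + desktop:
incremental = audiences["mobile"] - (audiences["tv"] | audiences["desktop"])
print(f"incremental users from mobile: {len(incremental)}")  # 1 (u6)
```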
Targeting, viewability, brand safety, and invalid traffic are all aspects used by marketers to help advocate digital advertising. Cookies, a tracking tool within desktop devices, are a form of digital advertising that causes difficulty; shortcomings include deletion by web browsers, the inability to sort between multiple users of a device, inaccurate estimates of unique visitors, overstated reach, misunderstood frequency, and problems with ad servers, which cannot distinguish between cookies that have been deleted and consumers who have not previously been exposed to an ad. Due to the inaccuracies introduced by cookies, demographics in the target market are low and vary.[64] Another element affected is 'viewability', or whether the ad was actually seen by the consumer. Many ads are never seen by a consumer and may never reach the right demographic segment. Brand safety is the issue of whether the ad appeared in the context of unethical or offensive content. Recognizing fraud when an ad is exposed is another challenge marketers face. This relates to invalid traffic: premium sites are more effective at detecting fraudulent traffic, while non-premium sites are more of a problem.[64]
Digital marketing channels are systems based on the Internet that can create, accelerate, and transmit product value from producer to consumer terminal through digital networks.[67][68] Digital marketing is facilitated by multiple digital marketing channels; as an advertiser, one's core objective is to find the channels that result in maximum two-way communication and a better overall ROI for the brand. There are multiple digital marketing channels available.[69]
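As a rough sketch of comparing channels by overall ROI, computed as (attributed revenue minus spend) divided by spend; the channel names and figures are hypothetical.

```python
# Rank hypothetical channels by ROI = (revenue - spend) / spend.
channels = {
    "email":       {"spend": 1_000.0, "revenue": 4_500.0},
    "social":      {"spend": 3_000.0, "revenue": 6_000.0},
    "paid_search": {"spend": 5_000.0, "revenue": 9_000.0},
}

def roi(spend, revenue):
    return (revenue - spend) / spend

for name, c in sorted(channels.items(),
                      key=lambda kv: roi(kv[1]["spend"], kv[1]["revenue"]),
                      reverse=True):
    print(f"{name}: ROI = {roi(c['spend'], c['revenue']):.0%}")
# -> email: 350%, social: 100%, paid_search: 80%
```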
It is important for a firm to reach out to consumers and create a two-way communication model, as digital marketing allows consumers to give feedback to the firm on a community-based site or directly via email.[84] Firms should seek this long-term communication relationship by using multiple forms of channels and by using promotional strategies related to their target consumer, as well as word-of-mouth marketing.[84]
Possible benefits of digital marketing include:
Digital marketing used to rely primarily on self-regulation included in the ICC Code,[88]which included rules that apply to marketing communications using digital interactive media. However, self-regulation has proved largely ineffective,[89][90]leading to the consolidation of market power in a few firms, includingGoogle, which has been determined to hold monopolies in search marketing and digital advertising.[91][92]While self-regulation codes still exist, government regulation is increasing in multiple jurisdictions, including California's legislation on targeting advertising online.[93]In Europe, digital marketing is regulated through multiple codes, of which the most important is theDigital Services Act,[94]which entered into force on 17 February, 2024. Other regulations focus on user privacy and data management such as theGeneral Data Protection Regulation(GDPR).[95]
Digital marketing planning is a term used in marketing management. It describes the first stage of forming a digital marketing strategy for the widerdigital marketing system. The difference between digital and traditional marketing planning is that it uses digitally based communication tools and technology such as Social, Web, Mobile, Scannable Surface.[96][97]Nevertheless, both are aligned with the vision, the mission of the company and the overarching business strategy.[98]
Dr. Dave Chaffey, an author on marketing topics, has suggested that successful digital marketing strategies require digital marketing planning (DMP), a three-stage approach: Opportunity, Strategy, and Action. This generic strategic approach often has phases of situation review, goal setting, strategy formulation, resource allocation, and monitoring.[98]
To create an effective DMP, a business first needs to review the marketplace and set "SMART" (Specific, Measurable, Actionable, Relevant, and Time-Bound) objectives.[99]They can set SMART objectives by reviewing the current benchmarks andkey performance indicators(KPIs) of the company and competitors. It is pertinent that the analytics used for the KPIs be customized to the type, objectives, mission, and vision of the company.[100][101]
Companies can scan for marketing and sales opportunities by reviewing their own outreach as well as influencer outreach. This gives them a competitive advantage because they are able to analyse their co-marketers' influence and brand associations.[102]
To seize the opportunity, the firm should summarize its current customers' personas and purchase journey; from this, it is able to deduce its digital marketing capability.[103]
To create a planned digital strategy, the company must review its digital proposition (what it is offering to consumers) and communicate it using digital customer-targeting techniques. It must therefore define its online value proposition (OVP): the company must express clearly what it is offering customers online, e.g., brand positioning.
The company should also (re)select target market segments and personas and define digital targeting approaches.
After doing this effectively, it is important to review the marketing mix for online options. The marketing mix comprises the 4Ps – Product, Price, Promotion, and Place.[104][105] Some academics have added three additional elements to the traditional 4Ps – People, Process, and Physical evidence – making it the 7Ps of marketing.[106]
The third and final stage requires the firm to set a budget and management systems. These must be measurable touchpoints, such as the audience reached across all digital platforms. Furthermore, marketers must ensure the budget and management systems are integrating the paid, owned, and earned media of the company.[107]The Action and final stage of planning also requires the company to set in place measurable content creation e.g. oral, visual or written online media.[108]
One way marketers can reach out to consumers and understand their thought process is through what is called an empathy map. An empathy map is a four-step process. The first step is to ask the questions that a consumer in the target demographic would be thinking. The second step is to describe the feelings the consumer may be having. The third step is to consider what the consumer would say in their situation. The final step is to imagine what the consumer will try to do based on the other three steps. The map exists so that marketing teams can put themselves in their target demographic's shoes.[109] Web analytics are also an important way to understand consumers: they show the habits people have online for each website.[110] One particular form of these analytics is predictive analytics, which helps marketers anticipate what route consumers are on. It uses the information gathered from other analytics to create predictions of what people will do, so that companies can decide on their next steps according to observed trends.[111]
The "sharing economy" refers to an economic pattern that aims to obtain a resource that is not fully used.[114]Nowadays, thesharing economyhas had an unimagined effect on many traditional elements including labor, industry, and distribution system.[114]This effect is not negligible that some industries are obviously under threat.[114][115]The sharing economy is influencing the traditional marketing channels by changing the nature of some specific concept including ownership, assets, and recruitment.[115]
Digital marketing channels and traditional marketing channels are similar in function in that the value of the product or service is passed from the original producer to the end user through a kind of supply chain.[116] Digital marketing channels, however, consist of internet-based systems that create, promote, and deliver products or services from producer to consumer through digital networks.[117] Changes to marketing channels have been a significant contributor to the expansion and growth of the sharing economy, prompting unprecedented growth.[117] In addition to this typical approach, the built-in control, efficiency, and low cost of digital marketing channels are essential features in the application of the sharing economy.[116]
Digital marketing channels within the sharing economy are typically divided into three domains: e-mail, social media, and search engine marketing (SEM).[117]
Other emerging digital marketing channels, particularly branded mobile apps, have excelled in the sharing economy.[117] Branded mobile apps are created specifically to initiate engagement between customers and the company. This engagement is typically facilitated through entertainment, information, or market transactions.[117]
|
https://en.wikipedia.org/wiki/Internet_marketing
|
Personal branding is a strategic process aimed at creating, positioning, and maintaining a positive public perception of oneself by leveraging unique individual characteristics and presenting a differentiated narrative to a target audience.[1] The concept is rooted in two main theoretical foundations: marketing theory and self-presentation behaviors. Personal branding is often framed in marketing terms such as 'product,' 'added value,' and 'promise,' highlighting its parallels with product branding and its focus on distinctiveness and market positioning. Conversely, definitions rooted in self-presentation focus on personal identity, reputation, and managing one's image, underscoring how people present themselves to influence how others perceive them.[2] Success in personal branding is viewed as the result of effective self-packaging.[3] It is more about self-promotion than true self-expression. The distinction between the two lies in the fact that self-promotion is deliberate in every regard, as the person consciously crafts their image or persona, whereas self-expression can sometimes arise unintentionally from promotion.
The idea of positioning a personal or professional identity appeared in the 1981 book Positioning: The Battle for Your Mind, by Al Ries and Jack Trout.[4] More specifically, Chapter 20, "Positioning Yourself and Your Career", argues that one can benefit from using positioning strategy to advance one's career.
Business writerTom Petersis credited as coining the phrase "personal branding" as part of his "Brand You" philosophy, introduced in his 1999 bookThe Brand You 50,[5]which expanded on his original 1997 article, "The Brand Called You".[6][7]
In their 2003 bookBe Your Own Brand, marketers David McNally and Karl Speak describe a personal brand as "a perception or emotion, maintained by somebody other than you, that describes the total experience of having a relationship with you".[8]
A personal brand is a widely recognized, consistent perception or impression of an individual based on their experience, expertise, competencies, actions and/or achievements within a community, industry, or the marketplace at large.[9]Some individuals link theirpersonal namesor pseudonyms with their businesses as seen with currentPresident of the United Statesandreal estatemogulDonald Trump, who uses his name on properties and enterprises likeTrump Tower.Celebritiesmay also leverage their social status to support organizations for financial or social gain. For example,Kim Kardashianendorsesbrands and products through hermedia influence.[10]
The relationship between brands andconsumersis dynamic and must be constantly refined. This continuous process demonstrates theambivalenceofconsumerism.[11]
Personal branding has gained significance due to the use ofthe Internet, associal mediaandonline identitiesaffect the physical world. Effective personal branding involves highlighting one’s knowledge, experience, and skills to establish a credible image.[12][13]Authenticity, professionalism, and responsiveness are crucial traits when communicating online, as they create trust and consistency.[12][13][14]Maintaining a consistent portrayal across both professional and personal platforms reinforces a coherent brand image, whileunprofessional behaviouron anysocial mediaplatform can harm career prospects.[12][15]Individuals maintain a unified brand by avoiding conflicting portrayals, and where necessary, separating personal and professional social media identities helps maintainprivacy.[16][15]
With the rise of social media, managing a personal brand has become more accessible. Platforms like Facebook, Twitter, Instagram, and personal blogs are used to build and maintain brand consistency across all media, which ensures effective brand management.[17] Establishing a target audience and focusing on an area of specialization help maintain and preserve the brand. Creating original content engages the audience, and staying informed within one's field builds expertise.[18] Publishing content across various channels helps individuals gain recognition and followers, and staying relevant keeps the audience engaged by reinforcing one's position as an expert.[18]
General professional profiles like LinkedIn, and company- or industry-specific networks such as Slack, allow a person to improve their self-branding, specifically in finding a job or improving their professional standing. As an open online source, social media has become a place filled with rich, resourceful information with which to target user identities.[19]
Employers are increasingly using social media tools to vet applicants before offering theminterviews. Practices include searching an applicant's history on sites such asFacebookandTwitter, and conductingbackground checksusingsearch enginesand other tools.[20]To effectively promote a personal brand, individuals should focus on presenting a comprehensive professional profile. Hence along with a standoutresumethat highlights skills and accomplishments, a customizedcover letter, references, anelevator speech, and aLinkedInprofile showcasing expertise need to be included.[21]Additionally, maintaining a professional presence on social media platforms likeFacebookandTwitter, and linking these to apersonal websitewith relevant content, strengthens one’s overall brand image and visibility.[21]
According to Alberto Chinchilla Abadías "it is advisable for the company to train its workers and managers in communication and digital skills in order to effectively use these technologies".[22]
Building a brand and an online presence within internal corporate networks allows individuals to connect with their colleagues, not only socially but also professionally. This kind of interaction allows for employees to build up their personal brand relative to other employees, as well as spur innovation within the company as more people can learn from one another.[23][24]
Some social media sites, likeTwitter, can have a flattened, all-encompassing audience that can be composed of professional and personal contacts, which then can be seen as a more "'professional' environment with potential professional costs".[25]Because of its explicitly public nature, Twitter becomes a double-sided platform that can be utilized in different ways depending on the amount of censorship a user decides on.[26]
Aside from professional aspirations, personal branding can also be used on personal-level social networks to boost popularity. The online self is used as a marketing and promotional tool to brand an individual as a type of person; success on virtual platforms then becomes "online social value [that could transform] to real rewards in the offline world."[27] When branding themselves on social media, people consider three factors: "crafting physical footprint, creating digital footprint, and communicating the message."[28] A prominent example of a self-made, self-branded social media icon is Tila Tequila, who rose to prominence in 2006 on the Myspace network, gaining more than 1.5 million friends through expertly marketing her personal brand.[29]
As social media has become a vehicle for self-branding, these moguls have begun to situate the maintenance of their online brand as a job, which brings about new ways to think about work and labor.[30]The logic of online sites and the presence of feedback means that one's online presence is viewed by others using the same rubric to judge brands: evaluation, ranking, and judgment. Thus, social media network sites serve as complex, technologically mediated venues for the branding of the self.[30]
Visual identitycan be an essential part of personal branding as it shapes how individuals are perceived and remembered.[31]The visual representation of a brand, including elements likecolor schemesandtypography, has the power to evoke specific emotions and influence perceptions.[31]Consistent visual identity, through images and graphics, creates brand differentiation and recognition.[32]Thoughtfulphotographyand cohesivedesignsstrengthen visual identity, making a brand more relatable and trustworthy.[32]This cohesive presentation supports brand consistency,loyalty, and relatability.[32]
Personal branding involves the practice of self-disclosure, and this transparency is part of what Foucault would call "the proper care of the self".[11]In this sense, disclosure refers to the details of one's everyday life for other's consumption, while transparency is the effect of this kind of disclosure. Transparency essentially works to give viewers a complete view of one's authentic self.[11]
Digitally aided disclosure, which involves building a self-brand on a social network site, relies on traditional discourses of the authentic self as one that is transparent, without artifice, and open to others. Authenticity is viewed both as residing inside the self and as demonstrated by allowing the outside world access to one's inner self.[11] It is interesting to consider the idea of authenticity alongside disclosure, and the freedom social networks allow in disclosing an inauthentic self. All the while, these postings form a digital archive of the self, through which a brand could be crafted by others.
Personal branding has been widely promoted as a tool for achieving professional success. Numerousself-helpbooks, programs, personal coaches, and articles emphasize the importance of crafting an individual brand, often framed around ideas of authenticity and personal fulfillment.[33]Proponents suggest that these strategies help individuals highlight their strengths and differentiate themselves in competitive environments.
However, critics argue that personal branding contributes to the commodification of the self.[34]In this view, individuals are treated as products, with their identities marketed and consumed similarly to commercial goods. This perspective suggests that efforts to express authenticity may paradoxically become artificial, as the presentation of the self is shaped by audience expectations and platform logic.[33]
While personal branding can enhance visibility and help employers assess a candidate’s skills and cultural fit, it may also create pressure to conform to specific norms or engage in performative behavior.[35][36]Scholars have pointed to the tension between expressing a genuine self and tailoring that expression for strategic advantage. The use of social media further complicates this dynamic, as profiles, blogs, and personal websites form part of a public-facing portfolio that can be interpreted and evaluated by others.[27]
Erving Goffman'sself-presentation theory explores how individuals seek to control the impressions others form of them. The theory introduces the concepts offront stageandback stageto distinguish between public and private behaviors. In the context of personal branding,front stagerefers to the curated presentation of the self, which is often seen on social media platforms where individuals actively shape how they are perceived by others. Public figures, including celebrities and athletes, commonly use these platforms to cultivate a consistent and strategic personal image.
Thebackstage, in contrast, encompasses behaviors or attitudes that are concealed from public view. Disclosures that occur outside the intended brand image may contradict the curated persona and potentially harm public perception. Public controversies resulting from private comments made public, such as the case of formerLos Angeles ClippersownerDonald Sterling, illustrate how unfiltered backstage behavior can conflict with and damage a carefully managed personal brand.[37]
Through Goffman’s framework, personal branding can be interpreted as a form of performance in which individuals selectively share content to reinforce a desired identity. This process is amplified in digital environments. Audiences often evaluate online personas in ways similar to how they assess commercial brands, focusing on visibility, consistency, and perceived authenticity.
|
https://en.wikipedia.org/wiki/Personal_branding
|
Real-time bidding (RTB) is a means by which advertising inventory is bought and sold on a per-impression basis, via instantaneous programmatic auction, similar to financial markets. With real-time bidding, online advertising buyers bid on an impression and, if the bid is won, the buyer's ad is instantly displayed on the publisher's site.[2] Real-time bidding lets advertisers manage and optimize ads from multiple ad networks, allowing them to create and launch advertising campaigns, prioritize networks, and allocate percentages of unsold inventory, known as backfill.[3]
Real-time bidding is distinguishable from static auctions in that it operates per impression, whereas static auctions bundle groups of up to several thousand impressions.[4] RTB is promoted as being more effective than static auctions for both advertisers and publishers in terms of advertising inventory sold, though the results vary by execution and local conditions. RTB has largely replaced the traditional model.
Research suggests that RTB digital advertising spend will reach $23.5 billion in the United States in 2018 compared to $6.3 billion spent in 2014.[5]
RTB requires the collection, accumulation, and dissemination of data about users and their activities for operating the bidding process, for profiling users to "enrich" bid requests, and for ancillary functions such as fraud detection. As a consequence, RTB has led to a range of privacy concerns,[6][7] and has attracted attention from data protection authorities (DPAs).[8] According to a report by the UK's DPA, the ICO, companies involved in RTB "were collecting and trading information such as race, sexuality, health status or political affiliation" without consent from affected users.[9] Simon McDougall of the ICO reported in June 2019 that "sharing people's data with potentially hundreds of companies, without properly assessing and addressing the risk of these counterparties, raises questions around the security and retention of this data."[10]
In 2019, 12 NGOs complained about RTB to a range of regulators in the Union,[11]leading to a decision in February 2022 where the Belgian Data Protection Authority found a range of illegality in aspects of a system used to authorise much of RTB in the EU under theGDPR, the Transparency and Consent Framework produced by theInteractive Advertising Bureau Europe.[12]The Dutch DPA has since indicated that websites and other actors in the Netherlands should cease using RTB to profile users.[13]The Belgian DPA's decision has been described as "an atomic bomb",[14]with some academic commentators arguing that the RTB would require fundamental restructuring in order for a system such as the TCF to be able to authorise it under the decision.[15]
Since RTB works throughmachine-to-machinecommunication, it has been gamed by malicious actors aiming to extract money from theprogrammatic commerceofonline advertisingby monetizingfake news websites[16]and other forms of made-for-advertising websites that extract rents viaad fraud.[1]
A typical transaction begins with a user visiting a website. This triggers a bid request that can include various pieces of data such as the user's demographic information, browsing history, location, and the page being loaded. The request goes from the publisher to an ad exchange, which submits it and the accompanying data to multiple advertisers who automatically submit bids in real time to place their ads. Advertisers bid on each ad impression as it is served. The impression goes to the highest bidder and their ad is served on the page.[citation needed]
The bidding happens autonomously, with advertisers setting maximum bids and budgets for an advertising campaign. The criteria for bidding on particular types of consumers can be very complex, taking into account everything from very detailed behavioural profiles to conversion data.[citation needed]Probabilistic models can be used to determine the probability of a click or a conversion given the user's history data (the so-called user journey). This probability can then be used to determine the size of the bid for the respective advertising slot.[17]
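To make the bid-sizing step concrete, here is a minimal Python sketch, not any particular platform's logic: the bid is set to the expected value of the impression, given a predicted click probability and an assumed value per click. The function name and figures are illustrative assumptions.

```python
# Illustrative sketch of expected-value bidding in RTB (hypothetical names/values).
def compute_bid(click_probability: float, value_per_click: float,
                max_bid: float) -> float:
    """Bid the expected value of the impression, capped at the campaign max."""
    expected_value = click_probability * value_per_click
    return min(expected_value, max_bid)

# Example: a 0.4% predicted CTR and a $2.00 value per click give an
# expected impression value of $0.008 (roughly an $8 CPM).
bid = compute_bid(click_probability=0.004, value_per_click=2.00, max_bid=0.05)
print(f"bid per impression: ${bid:.4f}")
```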
Demand-side platforms(DSPs) give buyers direct RTB access to multiple sources of inventory. They typically streamline ad operations with applications that simplify workflow and reporting. DSPs are directed at advertisers. The technology that powers an ad exchange can also provide the foundation for a DSP, allowing for synergy between advertising campaigns.[4]
The primary distinction between an ad network and a DSP is that DSPs have the technology to determine the value of an individual impression in real time (less than 100 milliseconds) based on what is known about a user's history.[18]
Large publishers often manage multiple advertising networks and usesupply-side platforms(SSPs) to manage advertising yield. Supply-side platforms utilize data generated from impression-level bidding to help tailor advertising campaigns. Applications to manage ad operations are also often bundled into SSPs. SSP technology is adapted from ad exchange technology.[4]
An individual's browser history is more difficult to determine on mobile devices.[18]This is due to technical limitations that continue to make the type of targeting and tracking available on the desktop essentially impossible on smartphones and tablets. The lack of a universal cookie alternative for mobile web browsing also limits the growth and feasibility of programmatic ad buying. Mobile real time bidding also lacks universal standards.[19]
|
https://en.wikipedia.org/wiki/Real-time_bidding
|
Real-time marketing is marketing performed "on-the-fly" to determine an appropriate or optimal approach to a particular customer at a particular time and place. It is a form of inbound marketing, grounded in market research, that seeks the most appropriate offer for a given customer sales opportunity, reversing traditional outbound marketing (or interruption marketing), which aims to acquire appropriate customers for a given 'pre-defined' offer. The dynamic 'just-in-time' decision making behind a real-time offer aims to exploit a given customer interaction, whether defined by website clicks or a verbal contact centre conversation.[1]
Real-time marketing techniques developed during the mid-1990s following the initial deployment ofcustomer relationship management(CRM) solutions in majorretail banking,investment bankingandtelecommunicationscompanies. The intrinsic and prevailing 'heavyweight' nature of the key CRM vendors at this time, who were generally focused on major back and front office system integration projects, provided an opportunity for niche players within thecampaign management applicationarena.
The implementation of real-time marketing solutions through the late 1990s would typically involve a 10- to 14-week delivery project with 1-2FTEexpert consultants and often would follow an earlier outbound marketing solution implementation. This relatively lightweight delivery model had obvious appeal within the vendor sales cycle and customer procurement context but was ultimately to prove a disincentive for majorsystems integrationservices providers to partner with real-time marketing vendors.
Real-time marketing solution implementation classically involves the server-side installation of a multithreaded core decisioning application server with a transaction-biased interaction schema, together with supporting client components such as a fat-client desktop campaign studio / rules editor, a browser-based marketing user reporting interface, and enterprise application APIs such as web services / Java components. Vendors typically also provide legacy interfaces for COM, sockets and HTTP integration.
Vendor solution approaches to real-time learning naturally vary, but commonly the underlying models utilize a naive Bayesian probability classifier, recognizing that despite their apparently oversimplified assumptions, these classifiers have worked well in many complex real-world situations. To help gain acceptance with in-house specialist data mining stakeholders, the real-time solutions also support external model scores and their execution within offer decision making.
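As a rough illustration of this approach, the sketch below hand-rolls a tiny naive Bayes scorer for offer acceptance. The feature names, training records, and Laplace-smoothing choice are invented for the example and do not reflect any vendor's implementation.

```python
# Minimal naive Bayes sketch for real-time offer decisioning (illustrative only).
from collections import defaultdict

def train(records):
    """records: list of (features_dict, accepted_bool)."""
    class_counts = defaultdict(int)
    feature_counts = defaultdict(int)
    for features, accepted in records:
        class_counts[accepted] += 1
        for name, value in features.items():
            feature_counts[(accepted, name, value)] += 1
    return class_counts, feature_counts

def p_accept(features, class_counts, feature_counts):
    total = sum(class_counts.values())
    scores = {}
    for cls, count in class_counts.items():
        score = count / total
        for name, value in features.items():
            # Laplace smoothing so unseen feature values do not zero the product.
            score *= (feature_counts[(cls, name, value)] + 1) / (count + 2)
        scores[cls] = score
    return scores[True] / (scores[True] + scores[False])

history = [
    ({"segment": "saver", "channel": "web"}, True),
    ({"segment": "saver", "channel": "branch"}, False),
    ({"segment": "borrower", "channel": "web"}, True),
    ({"segment": "borrower", "channel": "web"}, False),
]
cc, fc = train(history)
print(p_accept({"segment": "saver", "channel": "web"}, cc, fc))  # 0.6
```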
The dotcom bubble 'bust' of 2000 inhibited the further development and implementation of item-based collaborative filtering techniques. Having been incorporated within real-time marketing solutions through the 1990s, these filtering techniques would otherwise have been immediately attractive to online retailers managing hundreds of thousands (or millions) of products, as opposed to a retail bank with a hundred propositions across savings, credit card and mortgage product lines.
Over time, it became apparent to solution vendors and maturing customers alike that 'traditional' outbound and emergent inbound marketing initiatives should be consolidated within a coherent and coordinated enterprise marketing strategy. To this end, a class of marketing application known as marketing resource management (MRM) which 'sits above' real-time marketing, began to emerge during the early 21st Century, albeit in a fairly bespoke and implementation-specific guise. The essence of this abstraction layer is that the MRM application orchestrates strategy, stakeholder sign-off, budgeting, program planning, campaign execution and effectiveness reporting across inbound real-time and outbound marketing disciplines.
The term "real-time marketing" has the potential weakness of self-limiting the underlying decisioning server capability to cross-sell and up-sell despite the observation that this particular function is generally the most compelling aspect of the application class. Vendors therefore found themselves re-branding real-time marketing products to suggest a more holistic appreciation of enterprise interaction decision management.
In some respects, these early real-time marketing customer implementations were ahead of their time despite acknowledged revenue realization within theearly adopters.[2]
Hosted real-time marketing solutions are an obvious and increasingly prevalent means of provisioning organizational demand for this critical enterprise capability. A remaining challenge for such solution vendors is to fully convince enterprise clients that the customer data profile (often comprising up to 1000 source or derived attributes) involved in the decision making and targeting processes is fully secure. Packaged 'private' cloud solutions are already appearing alongside 'cloud sourcing' management consultancies.
Gartner's Top 10 Technologies predictions for 2011[3]suggest that, whatever the nomenclature, real-time marketing will continue to evolve, crucially embracing mobile platforms underpinned by an awareness of customer context, location and social networking (collective intelligence) implications.
|
https://en.wikipedia.org/wiki/Real-time_marketing
|
Relationship marketingis a form of marketing developed fromdirect response marketingcampaigns that emphasizescustomer retentionandsatisfactionrather than sales transactions.[1][2]It differentiates from other forms of marketing in that it recognises the long-term value ofcustomer relationshipsand extends communication beyond intrusiveadvertisingand sales promotional messages.[3]
With the growth of the Internet and mobile platforms, relationship marketing has continued to evolve as technology opens more collaborative and social communication channels such as tools for managing relationships with customers that go beyond demographics and customer service data collection. Relationship marketing extends to include inbound marketing, a combination ofsearch optimizationand strategic content,public relations, social media and application development.
Relationship marketing refers to an arrangement where both the buyer and seller have an interest in a more satisfying exchange. This approach aims to transcend the post-purchase-exchange process with a customer in order to make richer contact by providing a morepersonalisedpurchase, and using the experience to create stronger ties. A main focus on a long-term relationship with customers differentiates relationship marketing from other marketing techniques.
The technique was first proposed by American marketing scholars Berry (1983) and Jackson (1985). Berry (1983) argued in a conference about the field of service marketing that relationship marketing is a marketing activity for enterprises to obtain, maintain and promote effective relationships with customers. After a long-term study on the marketing process of the service industry, it was concluded that the ultimate goal of enterprise marketing is not only to develop new customers but also to focus on maintaining existing customers. Ultimately, the goal is to improve the long-term interests of both parties through cooperative relationships. The study also argues that the cost of maintaining an old customer is far lower than the cost of developing a new customer and that maintaining a relationship with old consumers is more economical than developing new customers.
Jackson (1985) further modified the concept in the aspect of industry marketing. He argued that the essence of relationship marketing is to attract, establish and maintain a close relationship with enterprise customers. Furthermore, other studies have concluded that the essence of relationship marketing is the actual maintenance of existing customers, which creates long-term interest in a product. This research conclusion has been generally recognised since the original proposal for relationship marketing. The research scope, however, is limited to the relationship with old customers, easily ignoring the dynamic development of customers because long-term customers are developed from new customers. If an enterprise is restricted to the maintenance of existing customers, it is impossible for it to achieve any progress or compete in the market since it cannot attract long-term customers in the first place.
From asocial anthropologicalperspective, relationship marketing theory and practice can be interpreted ascommodity exchangethat instrumentalises features ofgift exchange.[4]Marketers, consciously or intuitively, are recognizingreciprocity, a 'pre-modern' form of exchange, and have begun to use it.
Thus, relationship marketing revolves around gaining loyal customers. According to Liam Alvey, relationship marketing can be applied where there are competitive product alternatives for customers to choose from and an ongoing desire for that product.[5]Research studying relationship marketing suggests that companies can do this through one of the three value strategies: best price, best product or best service. Hence companies can relay their relationship marketing message through value statements.[6]
The practice of relationship marketing has been facilitated by several generations of customer relationship management software, which track and analyze each customer's preferences and activities. For example, an automobile manufacturer maintaining a database of when and how repeat customers buy their products, including data concerning their choices and purchase financing, can more efficiently develop one-to-one marketing offers and product benefits. Moreover, extensive use of such software is found in web applications. A consumer shopping profile can be built as a person shops online and is then used to compute his likely preferences. These curated and predicted offerings can then be presented to the customer through cross-sell, email recommendation and other channels.
Relationship marketing has also migrated back into direct mail. Marketers can use the technological capabilities of digital, toner-based printing presses to produce unique, personalised pieces for each recipient throughvariable data printing. They can personalise documents by information contained in their databases, including name, address, demographics, purchase history and dozens to hundreds of other variables. The result is a printed piece that reflects the individual needs and preferences of each recipient, increasing the relevance of the piece and increasing the response rate.
Additionally, relationship marketing has been strongly influenced byreengineering. According to process reengineering theory, organizations should be structured according to complete tasks and processes rather than functions. Thuscross-function teamsshould be responsible for a whole process from beginning to end rather than having the work go from one functional department to another, whereas traditional marketing uses the functional (or 'silo') department approach where stages of production are handled by different departments. The legacy of traditional marketing can still be seen in the traditional four Ps of themarketing mix:pricing,product management,promotion, andplacement. According to Gordon (1999), the marketing mix approach is too limited to provide a usable framework for assessing and developing customer relationships in many industries and should be replaced by the relationship marketing alternative model where the focus is on customers, relationships and interaction over time rather than markets and products.
In contrast, relationship marketing is cross-functional, organised around processes that involve all aspects of an organization.[7]Some commentators prefer to call relationship marketing 'relationship management' because it involves much more than that which is included in normal marketing.[citation needed]
Because of its broad scope, relationship marketing can be effective in many contexts. As well as being relevant to 'for profit' businesses, research indicates that relationship marketing can be useful for organizations in the voluntary sector[8]and in the public sector.[9][10]
Martin Christopher, Adrian Payne and David Ballantyne at theCranfield School of Managementclaim that relationship marketing has the potential to forge a synthesis between quality management, customer service management and marketing.[11]
Relationship marketing relies on the communication and acquisition of consumer requirements solely from existing customers in a mutually beneficial exchange usually involving permission for contact by the customer through anopt-insystem.[12]With particular relevance tocustomer satisfaction, the relative price and quality of goods and services produced or sold through a company alongside customer service generally determine the amount of sales relative to that of competing companies. Although groups targeted through relationship marketing may be large, accuracy of communication and overall relevance to the customer remains higher than that of direct marketing.
A principle of relationship marketing is the retention of customers in order to ensure repeated trade from preexisting customers by satisfying requirements above those of competing companies through a mutually beneficial relationship.[12][13]This technique balances new customers and opportunities with current and existing customers tomaximise profitand counteracts the leaky bucket theory of business, where new customers in older direct marketing-oriented businesses are gained at the expense of the loss of older customers.[14][15]This process of 'churning' is less economically viable than retaining all or the majority of customers using both direct and relationship management because securing new customers requires more investment.[16]
Many companies in competing markets redirect or allocate large amounts of resources towards customer retention. In markets with increasing competition, attracting new customers may cost up to five times more than retaining current customers because direct or 'offensive' marketing requires much more to cause defection from competitors.[16]However, it is suggested that because extensive classic marketing theories center on means of attracting customers and creating transactions rather than maintaining them, the predominant usage of direct marketing used in the past is now gradually being used more with relationship marketing as the latter's importance becomes more recognizable.[16]
Reichheldand Sasser (1990) claim that a 5% improvement incustomer retentioncan cause an increase in profitability of between 25 and 85 percent in terms ofnet present valuedepending on the industry.[17]However, Carrol and Reichheld dispute these calculations, claiming that they result from faulty cross-sectional analysis.[18]Research by John Fleming and Jim Asplund indicates that engaged customers generate 1.7 times more revenue than normal customers while having engaged employees and engaged customers returns a revenue gain of 3.4 times the normal return.[19]
According to Buchanan and Gilles, the increased profitability associated with customer retention efforts occurs because of several factors once a relationship has been established with a customer:
The relationship ladder ofcustomer loyaltygroups types of customers according to their level of loyalty. The ladder's first rung consists of prospects, non-customers who are likely to become customers in the future. This is followed by the successive rungs of customer, client, supporter, advocate, and partner. The relationship marketer's objective is to 'help' customers climb the ladder as high up as possible. This usually involves providing more personalised service and providing service quality that exceeds expectations at each step.
Customer retention efforts involve multiple considerations:
A technique to calculate the value to a firm of a sustained customer relationship has been developed. This calculation is typically calledcustomer lifetime value, a prediction of thenet profitof a customer's relationship with a company.
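A common simplified formulation sums the discounted expected margin over a projection horizon; the sketch below uses assumed figures for margin, retention rate, and discount rate purely for illustration.

```python
# Simple customer lifetime value sketch: discounted margin over a retention horizon.
def customer_lifetime_value(annual_margin: float, retention_rate: float,
                            discount_rate: float, years: int) -> float:
    """Sum the expected discounted margin over the projection horizon."""
    return sum(
        annual_margin * (retention_rate ** t) / ((1 + discount_rate) ** t)
        for t in range(years)
    )

# A customer yielding $100/year, retained at 80% per year, 10% discount rate:
print(round(customer_lifetime_value(100.0, 0.80, 0.10, years=10), 2))
```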
Retention strategies may also include building barriers tocustomer switchingbyproduct bundling(combining several products or services into one package and offering them at a single price),cross-selling(selling related products to current customers), cross-promotions (giving discounts or otherpromotional incentivesto purchasers of related products),loyalty programs(giving incentives for frequent purchases), increasing switching costs (adding termination costs such as mortgage termination fees), and integrating computer systems of multiple organizations (primarily in industrial marketing).
Many relationship marketers use a team-based approach due to the concept that the more points of contact between the organization and customer, the stronger the bond and the more secure the relationship.
Relationship marketing and traditional or transactional marketing are not mutually exclusive, and there is no need for a conflict between them. In practice, a relationship-oriented marketer still has choices depending on the situation. Most firms blend the two approaches in order to reach a short-term marketing goal or long-termmarketing strategy.[21]Many products have a service component to them, which has been growing in recent decades.
Relationship marketing aims to strengthen the relationship with clients and secure them. Morgan and Hunt (1994) distinguished between economic and social exchange on the basis of exchange theory and concluded that the basic guarantee of social exchange is the spirit of the contract: trust and commitment. The transition from economic exchange theory to social exchange theory reduces the prevalence of one-time transactions.
In addition, the theoretical core of enterprise relationship marketing in this period is the cooperative relationship based on commitment, which defines relationship marketing from the perspective of exchange theory and emphasizes that relationship marketing is an activity related to the progress, maintenance and development of all marketing activities. The theory holds that trading relationships are built on trust and commitment and that these form the basis of marketing activities aimed at establishing long-term relations. Factors affecting cooperation between the two sides include communication, power, cost and benefit, and opportunistic behavior, but the relationship effect is formed mainly by trust and commitment. Moreover, Copulsky and Wolf (1990) introduced terminology such as 'one to one' marketing that leverages IT to target customers with specific offers.[22]
Enterprises have an incentive to improve the effectiveness of their customer relationships. When access to data and information that improves the customer relationship is cheap, enterprises willingly pay that cost. Due to the development of communication and Internet technology, information costs have decreased substantially. Liker and Klamath (1998) brought the relationship between enterprises and suppliers into the scope of relationship marketing, claiming that in the marketing process manufacturers make suppliers assume corresponding responsibilities and enable them to exploit technological and resource advantages in the production process, improving their marketing innovation.
Lukas and Ferrell (2000) believe that the implementation of customer-oriented marketing can greatly promote marketing innovation and encourage enterprises to break through the traditional relationship model between enterprises and customers and propose new products. Lethe (2006) confirms the relationship between enterprises and customers through observation of benchmarking customer research, finding a positive correlation with innovation. He posits that good relations between enterprises and customers result in more efficient benchmarking, identify potential new products, reduce the cost of new product development and increase market acceptance of products. He also proposes that all relationships established with relevant parties for enterprise marketing are centered on the establishment of good customer relations: the core concept of relationship marketing is maintaining a relationship with customers.
Guinness (1994) propounds that relationship marketing is a consciousness that regards the marketing process as the interaction between enterprises and various aspects of relationships and networks. According to his research, an enterprise faces four types of relations: its relationship with the macro-environment, its relationship with the micro-environment, market relations and relations with a special market. In addition, enterprises implementing relationship marketing are often able to use networks to promote all aspects of relationship coordination and progress.[citation needed]
Relationship marketing stresses internal marketing, which is using a marketing orientation within an organization itself. Many relationship marketing attributes like collaboration, loyalty and trust determine internal customers' words and actions. According to this theory, every employee, team and department in the company is simultaneously a supplier and a customer of services and products. An employee obtains a service at a point in thevalue chainand then provides a service to another employee further along the value chain. If internal marketing is effective, every employee both provides and receives exceptional service to and from other employees. It also helps employees understand the significance of their roles and how their roles relate to others'. If implemented well, it can encourage every employee to see the process in terms of the customer's perception of value and the organization's strategic mission. Further, an effective internal marketing program is a prerequisite for effective external marketing efforts (W. George 1990).[23]
Christopher, Payne and Ballantyne (1991) identify six markets which they note as central to relationship marketing: internal markets, supplier markets, recruitment markets, referral markets, influence markets and customer markets.[11]They refer to Berry's work (1983), which drew attention to the need to engage with internal stakeholders such as employees to ensure that they are capable and willing to deliver the value proposition on offer.[24]
Referral marketingis the development and implementation of a marketing plan in order to stimulate referrals. Marketing to suppliers is aimed at ensuring a long-term conflict-free relationship in which all parties understand the others' needs and exceed their expectations. Such a strategy can reduce costs and improve quality. Meanwhile,Influence marketsinvolve a wide range of sub-markets, including government regulators, standards bodies, lobbyists, stockholders, bankers, venture capitalists, financial analysts, stockbrokers, consumer associations, environmental associations and labor associations. These activities are typically carried out by thepublic relationsdepartment, but relationship marketers believe that marketing to all six markets is the responsibility of everyone in an organization. Each market may require its own explicit strategies andmarketing mix.[citation needed]
Live-in marketing (LIM) is a variant of marketing and advertising in which the target consumer is allowed to sample or use a product in a relaxed atmosphere over a long period of time. Much like product placement in film and television, LIM was developed as a means to reach select target demographics in a non-invasive and much less garish manner than traditional advertising. While LIM represents an entirely untapped avenue of marketing, it is not an entirely novel idea. With the rising popularity of experiential and event marketing in North America and Europe and the relatively high ROI in terms of advertising dollars spent on experiential marketing compared to traditional big media advertising, industry analysts see LIM as a natural progression.[25]
LIM functions around the premise that marketing or advertising agencies aim to appeal to companies'target demographic. Avenues such as sponsorship or direct product placement and sampling are explored in turn. Unlike traditional event marketing, LIM suggests that end-users can sample the product or service in a comfortable and relaxed atmosphere. The theory posits that the end-user will have as positive as possible an interaction with the given brand, thereby leading toword-of-mouthcommunication and potential future purchases.[26]
|
https://en.wikipedia.org/wiki/Relationship_marketing
|
Targeted advertising[1] or data-driven marketing is a form of advertising, including online advertising, that is directed towards an audience with certain traits, based on the product or person the advertiser is promoting.[2]
These traits can be either demographic, with a focus on race, economic status, sex, age, generation, level of education, income level, and employment, or psychographic, focused on consumer values, personality, attitude, opinion, lifestyle, and interests.[1]The focus can also be on behavioral variables, such as browser history, purchase history, and other recent online activities. Algorithmic targeting aims to eliminate wasted impressions by reaching only likely prospects.[3]
Traditional forms of advertising, includingbillboards, newspapers, magazines, and radio channels, are progressively becoming replaced by online advertisements.[4]
Through the emergence of new online channels, the usefulness of targeted advertising is increasing because companies aim to minimize wasted advertising.[4]Most targetednew mediaadvertising currently uses second-order proxies for targets, such astrackingonline or mobile web activities of consumers, associating historical web page consumer demographics with new consumer web page access, using a search word as the basis of implied interest, orcontextual advertising.[5]
Companies have technology that allows them to gather information about web users.[4]By tracking and monitoring what websites users visit, internet service providers can directly show ads that are relevant to the consumer's preferences. Most of today's websites use these targeting technologies to track users' internet behavior, and there is much debate over the privacy issues this presents.[6]
Search engine marketing usessearch enginesto reach target audiences. For example,Google's Remarketing Campaigns are a type of targeted marketing where advertisers use theIP addressesof computers that have visited their websites to remarket their ad specifically to users who have previously been on their website whilst they browse websites that are a part of theGoogle display network, or when searching for keywords related to a product or service on the Google search engine.[7]Dynamic remarketing can improve targeted advertising as the ads can include the products or services that the consumers have previously viewed on the advertisers' websites within the ads.[8]
Google Adsincludes different platforms. The Search Network displays the ads on 'Google Search, other Google sites such as Maps and Shopping, and hundreds of non-Google search partner websites that show ads matched to search results'.[8]'The Display Network includes a collection of Google websites (likeGoogle Finance,Gmail,Blogger, andYouTube), partner sites, and mobile sites and apps that show adverts from Google Ads matched to the content on a given page.'[8]
These two kinds of advertising networks can be beneficial for each specific goal of the company, or type of company. For example, the search network can benefit a company to reach consumers actively searching for a particular product or service.
Other ways advertising campaigns can target the user include browser history and search history. For example, if the user types promotional pens into a search engine such as Google, ads for promotional pens will appear at the top of the page above the organic listings. These ads will be geo-targeted to the area of the user's IP address, showing the product or service in the local area or surrounding regions. A higher ad position is typically awarded to ads with a higher quality score.[9]Ad quality is affected by the 5 components of the quality score:[10]
When ranked based on these criteria, the quality score affects the advertiser's ad auction eligibility, actual cost per click (CPC), ad position, and ad position bid estimates; to summarise, the better the quality score, the better the ad position and the lower the costs.
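The mechanics are often described with a simplified generalized second-price model: ads are ranked by bid multiplied by quality score, and each winner pays just enough to keep its position. The sketch below follows that textbook formulation; actual production auctions differ in their details, and the reserve price here is a placeholder assumption.

```python
# Simplified second-price logic for search ad auctions (illustrative only):
# rank by bid x quality score; the winner pays just enough to hold its position.
def run_auction(entries):
    """entries: list of (advertiser, max_cpc_bid, quality_score)."""
    ranked = sorted(entries, key=lambda e: e[1] * e[2], reverse=True)
    results = []
    for i, (name, bid, qs) in enumerate(ranked):
        if i + 1 < len(ranked):
            next_rank = ranked[i + 1][1] * ranked[i + 1][2]
            price = min(bid, next_rank / qs + 0.01)
        else:
            price = 0.01  # stand-in for the auction's real reserve price
        results.append((name, round(price, 2)))
    return results

# Advertiser A wins despite a lower bid, thanks to a higher quality score.
print(run_auction([("A", 2.00, 8.0), ("B", 3.00, 4.0), ("C", 1.00, 6.0)]))
# [('A', 1.51), ('B', 1.51), ('C', 0.01)]
```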
Google uses its display network to track what users are looking at and to gather information about them. When a user visits a website on the Google display network, a cookie is sent to Google containing information on the user: what they have searched for and where they are from (determined by the IP address). Google then builds a profile around the user, allowing it to target ads to that user more specifically.
For example, if a user often visits promotional companies' websites that sell promotional pens, Google will gather data from the user, such as age, gender, location, and other demographic information, as well as information on the websites visited. The user will then be put into a category of promotional products, allowing Google to easily display ads relating to promotional products on websites the user visits.[11]
Social media targeting is a form of targeted advertising that uses general targeting attributes such as geotargeting, behavioral targeting, and socio-psychographic targeting, and combines them with the information consumers have provided on each social media platform.
According to the media users' view history, customers who are interested in the criteria will be automatically targeted by the advertisements of certain products or services.[12]For example,Facebookcollects massive amounts of user data from surveillance infrastructure on its platforms.[13]Information such as a user's likes, view history, and geographic location is leveraged to micro-target consumers with personalized products.
Paid advertising on Facebook works by helping businesses to reach potential customers by creating targeted campaigns.[14]
Social media platforms also create profiles of the consumer; an advertiser needs to look in only one place, the user's profile, to find all of the user's interests and 'likes'.
For example, Facebook lets advertisers target using broad characteristics like gender, age, and location. It also allows narrower targeting based on demographics, behavior, and interests (see a comprehensive list of Facebook's different types of targeting options[15]).
Advertisements can be targeted to specific consumers watchingdigital cable,[16]Smart TVs, orover-the-top video.[17]Targeting can be done according to age, gender, location, or personal interests in films, etc.[18]
Cable box addresses can be cross-referenced with information from data brokers like Acxiom, Equifax, and Experian, including information about marriage, education, criminal record, and credit history. Political campaigns may also match against public records such as party affiliation and which elections and party primaries the viewer has voted in.[17]
Since the early 2000s, advertising has been pervasive online and more recently in the mobile setting. Targeted advertising based on mobile devices allows more information about the consumer to be transmitted: not just their interests, but also information about their location and the time of day.[19]This allows advertisers to produce advertisements tailored to the consumer's schedule and to a more specific, changing environment.
The most straightforward method of targeting is content/contextual targeting. This is when advertisers put ads in a specific place, based on the relative content present.[6]Another name used is content-oriented advertising, as it corresponds to the context being consumed.
This targeting method can be used across different mediums; for example, an online article about purchasing a home might carry an associated advert such as one for home insurance. This is usually achieved through an ad-matching system that analyses the contents of a page or finds keywords and presents a relevant advert, sometimes through pop-ups.[20]
Sometimes the ad-matching system can fail, as it may neglect to tell the difference between positive and negative correlations. This can result in contradictory adverts being placed that are not appropriate to the content.[20]
Technical targeting is associated with the user's own software or hardware status. The advertisement is altered depending on the user's availablenetwork bandwidth, for example, if a user is on a mobile phone that has a limited connection, the ad delivery system will display a version of the ad that is smaller for a faster data transfer rate.[6]
Addressable advertising systems serve ads directly based on demographic, psychographic, or behavioral attributes associated with the consumer(s) exposed to the ad. These systems are always digital and must beaddressablein that the endpoint that serves the ad (set-top box, website, or digital sign) must be capable of rendering an ad independently of any other endpoints based on consumer attributes specific to that endpoint at the time the ad is served.
Addressable advertising systems, therefore, must use consumer traits associated with the endpoints as the basis for selecting and serving ads.[21]
According to the Journal of Marketing, more than 1.8 billion users spent a minimum of 118 minutes daily on web-based social media in 2016.[22]Nearly 77% of these users interact with the content through likes, comments, and clicks on links related to content. With this astounding buyer trend, advertisers need to choose the right time to schedule content in order to maximize advertising efficiency.
To determine what time of day is most effective for scheduling content, it is essential to know when the brain is most effective at retaining memory. Research in chronopsychology has shown that time of day affects diurnal variation in a person's working memory accessibility and has found that inhibitory processes are enacted to increase working memory effectiveness during times of low accessibility. Working memory is known to be vital for language perception, learning, and reasoning,[23][24]providing the capacity to store, retrieve, and process information quickly.
For many people, working memory accessibility is highest when they get up at the beginning of the day, lowest in mid-evening, and moderate at night.[25]
Sociodemographic targeting focuses on the characteristics of consumers, including their age, generation, gender, salary, and nationality.[6]The idea is to target users specifically using this collected data, for example targeting a male in the age bracket of 18–24. Facebook and other social media platforms use this form of targeting by showing advertisements relevant to the user's demographic on their account; these can show up in the form of banner ads, mobile ads, or commercial videos.[26]
This type of advertising involves targeting different users based on their geographic location. IP addresses can signal the location of a user, usually at the granularity of a ZIP code.[6]Locations are then stored in static user profiles, so advertisers can easily target these individuals based on their geographic location.
Alocation-based service(LBS) is a mobile information service that allows spatial and temporal data transmission and can be used to an advertiser's advantage.[27]This data can be harnessed from applications on the device (mobile apps likeUber) that allow access to the location information.[28]
This type of targeted advertising focuses on localizing content; for example, a user could be prompted with options for activities in the area, such as places to eat or nearby shops. Although building advertising on consumer location-based services can improve the effectiveness of delivering ads, it can raise issues with the user's privacy.[29]
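A minimal sketch of the IP-to-location step might look like the following; the lookup table and ad inventory are invented for the example, since real systems rely on commercial IP-geolocation databases.

```python
# Geotargeting sketch: resolve an IP address to a ZIP code via a lookup table,
# then pick a localized ad. The table and ads below are hypothetical.
IP_TO_ZIP = {"203.0.113.7": "10001", "198.51.100.4": "94103"}
LOCAL_ADS = {"10001": "NYC pizza delivery", "94103": "SF bike shop"}

def pick_local_ad(ip_address: str, default_ad: str = "national brand ad") -> str:
    """Serve the ad mapped to the user's ZIP, falling back to a national ad."""
    zip_code = IP_TO_ZIP.get(ip_address)
    return LOCAL_ADS.get(zip_code, default_ad)

print(pick_local_ad("203.0.113.7"))  # NYC pizza delivery
print(pick_local_ad("192.0.2.55"))   # national brand ad (unknown IP)
```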
Behavioral targetingis centered around the activity/actions of users and is more easily achieved on web pages.[30][31]Information from browsing websites can be collected fromdata mining, which finds patterns in users' search history. Advertisers using this method believe it produces ads that will be more relevant to users, thus leading consumers to be more likely influenced by them.[32]
If a consumer was frequently searching for plane ticket prices, the targeting system would recognize this and start showing related adverts across unrelated websites, such as airfare deals on Facebook. Its advantage is that it can target individual interests, rather than target groups of people whose interests may vary.[6]
When a consumer visits a website, the pages they visit, the amount of time they view each page, the links they click on, the searches they make, and the things they interact with allow the site to collect that data and, together with other factors, create a 'profile' that links to that visitor's web browser. As a result, site publishers can use this data to create defined audience segments based on visitors who have similar profiles.
When visitors return to a specific site or a network of sites using the same web browser, those profiles can be used to allow marketers and advertisers to position their online ads and messaging in front of those visitors who exhibit a greater level of interest and intent for the products and services being offered.
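The profile-to-segment step can be illustrated with a short sketch; the cookie IDs, content categories, and threshold below are assumptions for the example, not a description of any real platform.

```python
# Illustrative sketch of cookie-profile segmentation from page-view events.
from collections import Counter, defaultdict

page_views = [
    ("cookie_123", "autos"), ("cookie_123", "autos"), ("cookie_123", "news"),
    ("cookie_456", "travel"), ("cookie_456", "travel"), ("cookie_456", "travel"),
]

# Build one interest profile per browser cookie.
profiles = defaultdict(Counter)
for visitor, category in page_views:
    profiles[visitor][category] += 1

def segment(profile: Counter, min_views: int = 2) -> list[str]:
    """A visitor joins every segment whose category they viewed often enough."""
    return [cat for cat, n in profile.items() if n >= min_views]

for visitor, profile in profiles.items():
    print(visitor, "->", segment(profile))
# cookie_123 -> ['autos']; cookie_456 -> ['travel']
```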
Behavioral targeting has emerged as one of the main technologies used to increase the efficiency and profits of digital marketing and advertisements, as media providers can provide individual users with highly relevant advertisements. On the theory that properly targeted ads and messaging will fetch more consumer interest, publishers can charge a premium for behaviorally targeted ads and marketers can achieve greater returns on their spending.
Behavioral marketing can be used on its own or in conjunction with other forms of targeting.[15]Many practitioners also refer to this process as "audience targeting".[33]
While behavioral targeting can enhance ad effectiveness, it also raises privacy concerns.[34]Users may feel uncomfortable with the idea of their online behavior being tracked and used for advertising purposes. Striking a balance between personalization and privacy is crucial.[35]
Behavioral targeting may also be applied to any online property on the premise that it either improves the visitor experience or benefits the online property, typically through increased conversion rates or increased spending levels. The early adopters of this technology/philosophy were editorial sites such as HotWired,[36][37]online advertising[38]with leading online ad servers,[39]and retail or other e-commerce websites, as a technique for increasing the relevance of product offers and promotions on a visitor-by-visitor basis. More recently, companies outside this traditional e-commerce marketplace have started to experiment with these emerging technologies.
The typical approach to this starts by using web analytics or behavioral analytics to break down the range of all visitors into several discrete channels. Each channel is then analyzed and a virtual profile is created for each channel.
These profiles can be based around personas, which give website operators a starting point for deciding what content, navigation, and layout to show to each persona. When it comes to the practical problem of delivering the profiles correctly, this is usually achieved either by using a specialist content behavioral platform or through bespoke software development.
Most platforms identify visitors by assigning a unique ID cookie to every visitor to the site thereby allowing them to be tracked throughout their web journey, the platform then makes a rules-based decision about what content to serve.
Self-learning onsite behavioral targeting systems will monitor visitor response to site content and learn what is most likely to generate a desired conversion event. The best content for each behavioral trait or pattern is often established using numerous simultaneous multivariate tests. Onsite behavioral targeting requires a relatively high level of traffic before statistical confidence levels can be reached regarding the probability of a particular offer generating a conversion from a user with a given behavioral profile. Some providers have been able to do so by leveraging a large user base, such as Yahoo!. Some providers use a rules-based approach, allowing administrators to set the content and offers shown to those with particular traits.
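The traffic requirement comes down to interval width: with few impressions, the conversion-rate estimate is too uncertain to choose between offers. A sketch using a normal-approximation confidence interval (with invented figures) makes the point:

```python
# Why onsite behavioral targeting needs traffic: confidence-interval width.
import math

def conversion_interval(conversions: int, impressions: int, z: float = 1.96):
    """95% confidence interval for a conversion rate (normal approximation)."""
    p = conversions / impressions
    half_width = z * math.sqrt(p * (1 - p) / impressions)
    return max(0.0, p - half_width), min(1.0, p + half_width)

print(conversion_interval(12, 400))      # low traffic: interval spans ~1.3-4.7%
print(conversion_interval(1200, 40000))  # high traffic: interval spans ~2.8-3.2%
```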
According to research, behavioral targeting provides little benefit at a huge privacy cost: when targeting for gender, the targeted guess is 42% accurate, which is less than a random guess; when targeting for gender and age, the accuracy drops to 24%.[40]
Advertising networksuse behavioral targeting in a different way than individual sites. Since they serve many advertisements across many different sites, they can build up a picture of the likely demographic makeup of internet users.[41]Data from a visit to one website can be sent to many different companies, includingMicrosoftandGooglesubsidiaries,Facebook,Yahoo, many traffic-logging sites, and smaller ad firms.[42]
This data can sometimes be sent to more than 100 websites and shared with business partners, advertisers, and other third parties for business purposes. The data is collected usingcookies,web beaconsand similar technologies, and/or a third-party ad serving software, to automatically collect information about site users and site activity. Some servers even record the page that referred you to them, the websites you visit after them, which ads you see, and which ads you click on.[43]
Online advertising uses cookies, a tool used specifically to identify users, as a means of delivering targeted advertising by monitoring the actions of a user on the website. For this purpose, the cookies used are calledtrackingcookies. An ad network company such as Google uses cookies to deliver advertisements adjusted to the interests of the user, control the number of times that the user sees an ad, and "measure" whether they are advertising the specific product to the customer's preferences.[44]
This data is collected without attaching people's names, addresses, email addresses, or telephone numbers, but it may include device-identifying information such as the IP address, MAC address, web browser information, cookie, or other device-specific unique alphanumerical ID of the computer; some stores may also create guest IDs to go along with the data.
Cookies are used to control displayed ads and to track browsing activity and usage patterns on sites. This data is used by companies to infer people's age, gender, and possible purchase interests so that they can make customized ads that you would be more likely to click on.[45]
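The frequency-control function mentioned above can be sketched as a simple cap keyed on a cookie-derived user ID; the cap value and names below are hypothetical.

```python
# Sketch of cookie-based frequency capping: stop serving a campaign's ad
# once a given user has seen it a set number of times.
FREQUENCY_CAP = 3  # max impressions of one campaign per user (assumed value)

def should_serve(impression_store: dict, user_id: str, campaign_id: str) -> bool:
    """Serve only if this user is below the campaign's impression cap."""
    key = (user_id, campaign_id)
    seen = impression_store.get(key, 0)
    if seen >= FREQUENCY_CAP:
        return False
    impression_store[key] = seen + 1
    return True

store: dict = {}
print([should_serve(store, "user_1", "camp_A") for _ in range(5)])
# [True, True, True, False, False]
```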
An example would be a user seen on football sites, business sites, and male fashion sites. A reasonable guess would be to assume the user is male. Demographic analyses of individual sites provided either internally (user surveys) or externally (Comscore\Netratings) allow the networks to sell audiences rather than sites.[46]Although advertising networks were used to sell this product, this was based on picking the sites where the audiences were. Behavioral targeting allows them to be slightly more specific about this.
In the work titled An Economic Analysis of Online Advertising Using Behavioral Targeting,[31]Chen and Stallaert (2014) study the economic implications when an online publisher engages in behavioral targeting. They consider a publisher that auctions off an advertising slot and is paid on a cost-per-click basis. Chen and Stallaert (2014) identify the factors that affect the publisher's revenue, the advertisers' payoffs, and social welfare. They show that revenue for the online publisher can in some circumstances double when behavioral targeting is used.
Increased revenue for the publisher is not guaranteed: in some cases, the prices of advertising and hence the publisher's revenue can be lower, depending on the degree ofcompetitionand the advertisers' valuations. They identify two effects associated with behavioral targeting: acompetitive effectand apropensity effect.The relative strength of the two effects determines whether the publisher's revenue is positively or negatively affected. Chen and Stallaert (2014) also demonstrate that, although social welfare is increased and small advertisers are better off under behavioral targeting, the dominant advertiser might be worse off and reluctant to switch from traditional advertising.
In 2006, BlueLithium (now Yahoo! Advertising) in a large online study, examined the effects of behavior-targeted advertisements based on contextual content. The study used 400 million "impressions", or advertisements conveyed across behavioral and contextual borders. Specifically, nine behavioral categories (such as "shoppers" or "travelers"[47]) with over 10 million "impressions" were observed for patterns across the content.[48]
All measures for the study were taken in terms of click-through rates (CTR) and "action-through rates" (ATR), or conversions: each time a user clicks through on an impression they receive, the click contributes to CTR data, and each time they go through with or convert on the advertisement, they add action-through data.
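Both rates are straightforward ratios over impressions; the short sketch below computes them for a hypothetical campaign.

```python
# CTR and ATR as defined above, computed for invented campaign figures.
def ctr(clicks: int, impressions: int) -> float:
    return clicks / impressions

def atr(actions: int, impressions: int) -> float:
    return actions / impressions

impressions, clicks, actions = 1_000_000, 2_500, 120
print(f"CTR: {ctr(clicks, impressions):.3%}")   # 0.250%
print(f"ATR: {atr(actions, impressions):.4%}")  # 0.0120%
```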
Results from the study show that advertisers looking for traffic on their advertisements should focus on behavioral targeting in context. Likewise, if they are looking for conversions on the advertisements, behavioral targeting out of context is the most effective process.[47]The data helped determine an "across-the-board rule of thumb";[47]however, results fluctuated widely by content categories. Overall results from the researchers indicate that the effectiveness of behavioral targeting is dependent on the goals of the advertiser and the primary target market the advertiser is trying to reach.
Through the use of analytic tools, marketers attempt to understand customer behavior and make informed decisions based on the data.[49]E-commerce retailers use data driven marketing to try and improvecustomer experienceand increase sales. One example cited in theHarvard Business Reviewis Vineyard Vines, a fashion brand with brick-and-mortar stores and anonline product catalog. The company has used anartificial intelligence(AI) platform to gain knowledge about its customers from actions taken or not taken on the e-commerce site. Email orsocial mediacommunications are automatically triggered at certain points, such as cart abandonment. This information is also used to refine search engine marketing.[50]
Advertising provides advertisers with a direct line of communication with existing and prospective consumers. By using a combination of words and/or pictures the general aim of the advertisement is to act as a "medium of information" (David Ogilvy[51]) making the means of delivery and to whom the information is delivered most important. Advertising should define how and when structural elements of advertisements influence receivers, knowing that all receivers are not the same and thus may not respond in a single, similar manner.[52]
Targeted advertising serves the purpose of placing particular advertisements before specific groups to reach consumers who would be interested in the information. Advertisers aim to reach consumers as efficiently as possible with the belief that it will result in a more effective campaign. By targeting, advertisers can identify when and where the ad should be positioned to achieve maximum profits. This requires an understanding of how customers' minds work (see alsoneuromarketing) to determine the best channel by which to communicate.
Types of targeting include, but are not limited to, advertising based on demographics, psychographics, behavioral variables, and contextual targeting.
Behavioral advertising is the most common form of targeting used online.Internet cookiesare sent back and forth between an internet server and the browser, which allows a user to be identified or to track their progressions. Cookies provide details on what pages a consumer visits, the amount of time spent viewing each page, the links clicked on; and searches and interactions made.
From this information, the cookie issuer gathers an understanding of the user's browsing tendencies and interests, generating a profile. By analyzing the profile, advertisers can create defined audience segments based upon users who return similar information, and hence have similar profiles. Tailored advertising is then placed in front of the consumer based on what organizations working on behalf of the advertisers assume are the interests of the consumer.[53]
These advertisements have been formatted to appear on pages and in front of users that they would most likely appeal to based on their profiles. For example, under behavioral targeting, if a user is known to have recently visited several automotive shopping and comparison sites based on the data recorded by cookies stored on the user's computer, the user can then be served automotive-related advertisements when visiting other sites.[54]
Behavioral advertising is reliant on data both wittingly and unwittingly provided by users and is made up of two different forms: one involving the delivery of advertising based on an assessment of user's web movements; the second involving the examination of communication and information as it passes through the gateways ofinternet service providers.[citation needed]
Demographic targeting was the first and most basic form of targeting used online. It involves segmenting an audience into more specific groups using parameters such as gender, age, ethnicity, annual income, parental status, etc. All members of the group share a common trait.
When an advertiser wishes to run a campaign aimed at a specific group of people, that campaign is intended only for the group containing the traits at which the campaign is targeted. Having finalized the demographic target, a website or a website section is chosen as the medium because a large proportion of the targeted audience utilizes that form of media.[citation needed]
Segmentation using psychographics is based on an individual's personality, values, interests, and lifestyle. A recent study concerning what forms of media people use, conducted by the Entertainment Technology Center at the University of Southern California, the Hallmark Channel, and E-Poll Market Research, concludes that the user's lifestyle is a better predictor of media usage.
Researchers concluded that while cohorts of these groups may have similar demographic profiles, they may have different attitudes and media usage habits.[55]Psychographics can provide further insight by distinguishing an audience into specific groups using their traits. Once advertisers acknowledge this, they can target customers with the recognition that factors other than age, for example, provide greater insight into the customer.
Contextual advertising is a strategy to place advertisements on media vehicles, such as specific websites or print magazines, whose themes are relevant to the promoted products.[56]: 2Advertisers apply this strategy to narrow-target their audiences.[57][56]Advertisements are selected and served by automated systems based on the identity of the user and the displayed content of the media. The advertisements will be displayed across the user's different platforms and are chosen based on searches for keywords; appearing as either a web page or pop-up ads. It is a form of targeted advertising in which the content of an ad is in direct correlation to the content of the webpage the user is viewing.
Retargeting is where advertisers use behavioral targeting to produce ads that follow users after users have looked at or purchased a particular item. An example of this is store catalogs, where stores subscribe customers to their email system after a purchase hoping that they draw attention to more items for continuous purchases.
The main example of retargeting that has earned a reputation with most people is ads that follow users across the web, showing them the same items they have looked at, in the hope that they will purchase them. Retargeting is a very effective process: by analyzing consumers' activities with the brand, advertisers can address their consumers' behavior appropriately.[58]
Every brand, service, or product has a personality of its own: how it is viewed by the public and the community. Marketers create these personalities to match the personality traits of their target market.[1]Marketers and advertisers create these personalities because when consumers can relate to the characteristics of a brand, service, or product, they are more likely to feel connected to the product and purchase it.[citation needed]
Advertisers are aware that different people lead different lives, have different lifestyles, and have different wants and needs at different times in their lives; thus individual differences can be compensated for. Advertisers who base their segmentation on psychographic characteristics promote their product as the solution to these wants and needs. Segmentation by lifestyle considers where the consumer is in their life cycle and which preferences are associated with that life stage.[citation needed]
Psychographic segmentation also includesopinionson religion, gender, politics, sporting and recreational activities, views on the environment, and arts and cultural issues. The views that themarket segmentshold and the activities they participate in will have an impact on the products and services they purchase and it will affect how they respond to the message.
Alternatives to behavioral advertising and psychographic targeting include geographic targeting and demographic targeting.
When advertisers want to efficiently reach as many consumers as possible, they use a six-step process.
Alternatives to behavioral advertising include audience targeting, contextual targeting, andpsychographic[59]targeting.
Targeting aims to improve the effectiveness of advertising and reduce the wastage created by sending advertising to consumers who are unlikely to purchase that product. Targeted advertising or improved targeting may lead to lower advertising costs and expenditures.[60]
The effects of advertising on society and those targeted are all implicitly underpinned by the consideration of whether advertising compromises autonomous choice.[61]
Those arguing for the ethical acceptability of advertising claim that, because of the commercially competitive context of advertising, the consumer has a choice over what to accept and what to reject.
Humans have the cognitive competence and the necessary faculties to decide whether to be affected by adverts.[62]Those arguing against note, for example, that advertising can make us buy things we do not want, or that, because advertising is enmeshed in a capitalist system, it presents only choices based on a consumerist-centered reality, limiting exposure to non-materialist lifestyles.
Although the effects of targeted advertising are mainly focused on those targeted, it can also affect those outside the target segment. Unintended audiences often view an advertisement targeted at other groups and start forming judgments and decisions about the advertisement, and even about the brand and company behind it; these judgments may affect future consumer behavior.[63]
The Network Advertising Initiative conducted a study[64]in 2009 measuring the pricing and effectiveness of targeted advertising. It revealed that targeted advertising:
However, other studies show that targeted advertising, at least by gender,[1]is not effective.
One of the major difficulties in measuring the economic efficiency of targeting, however, is being able to observe what would have happened in the absence of targeting since the users targeted by advertisers are more likely to convert than the general population. Farahat and Bailey[65]exploit a large-scale natural experiment on Yahoo! allowing them to measure the true economic impact of targeted advertising on brand searches and clicks. They find, assuming the cost per 1000 ad impressions (CPM) is $1, that:
Research shows that content marketing in 2015 generated three times as many leads as traditional outbound marketing but cost 62% less,[66]showing how being able to advertise to targeted consumers is becoming the ideal way to advertise to the public. Other statistics show that 86% of people skip television adverts and 44% of people ignore direct mail, which also shows how advertising to the wrong group of people can be a waste of resources.[66]
Proponents of targeted advertising argue that there are advantages for both consumers and advertisers:
Targeted advertising benefits consumers because advertisers can effectively attract consumers by using their purchasing and browsing habits, which makes ads more apparent and useful for customers. Having ads that are related to the interests of consumers allows the message to be received directly through effective touchpoints. An example of how targeted advertising benefits consumers is that if someone sees an ad targeted to them for something similar to an item they have previously viewed online and were interested in, they are more likely to buy it.
Consumers can benefit from targeted advertising in the following ways:
Intelligence agencies worldwide can track targets at sensitive locations such as military bases or training camps more easily, and without exposing their personnel to the risks of HUMINT, by simply purchasing location data from commercial providers, who collect it from mobile devices with geotargeting enabled used by the operatives present at these places.[68]
Location data can be extremely valuable and must be protected. It can reveal details about the number of users in a location, user and supply movements, daily routines (user and organizational), and can expose otherwise unknown associations between users and locations.
Advertisers benefit from targeted advertising through reduced resource costs and more effective ads that attract consumers with a strong appeal to the advertised products. Targeted advertising allows advertisers to reduce the cost of advertising by minimizing "wasted" advertisements shown to non-interested consumers. Targeted advertising captures the attention of the consumers it is aimed at, resulting in a higher return on investment for the company.
Because behavioral advertising enables advertisers to more easily determine user preferences and purchasing habits, the ads will be more pertinent and useful for consumers. By creating a more efficient and effective manner of advertising to the consumer, an advertiser benefits greatly in the following ways:
Using information from consumers can benefit the advertiser by supporting a more efficient campaign; targeted advertising has been shown to work both effectively and efficiently.[69]Advertisers do not want to waste time and money advertising to the "wrong people".[60]Through technological advances, the internet has allowed advertisers to target consumers beyond the capabilities of traditional media and to reach a significantly larger audience.[70]
The main advantage of using targeted advertising is that it can help minimize wasted advertising by using detailed information about the individuals for whom a product is intended.[71]If consumers are shown ads that are targeted at them, it is more likely they will be interested and click on them. 'Know thy consumer' is a simple principle used by advertisers: when businesses know information about consumers, it can be easier to target them and get them to purchase their product.
Some consumers do not mind if their information is used and are more accepting of ads with easily accessible links. This is because they may appreciate adverts tailored to their preferences rather than generic ads. They are more likely to be directed to products they want, and possibly to purchase them, in turn generating more income for the business doing the advertising.
Targeted advertising has raised controversies, most particularly regardingprivacy rightsand policies. With behavioral targeting focusing on specific user actions such as site history, browsing history, and buying behavior, this has raised user concern that all activity is being tracked.
Privacy International, a UK-based registered charity that defends and promotes the right to privacy across the world, suggests that from any ethical standpoint such interception of web traffic must be conditional on explicit and informed consent, and that action must be taken where organizations can be shown to have acted unlawfully.[citation needed]
A survey conducted in the United States by thePew Internet & American Life Projectbetween January 20 and February 19, 2012, revealed that most Americans are not in favor of targeted advertising, seeing it as an invasion of privacy. Indeed, 68% of those surveyed said they are "not okay" with targeted advertising because they do not like having their online behavior tracked and analyzed.
Another issue with targeted advertising is the lack of 'new' advertisements of goods or services. Because all ads are tailored to user preferences, no different products will be introduced to the consumer. Hence, in this case, the consumer is at a loss, as they are not exposed to anything new.
Advertisers concentrate their resources on the consumer, which can be very effective when done right.[72]When advertising misses its mark, the consumer can find it creepy and start wondering how the advertiser learned the information about them.[26]Consumers can have concerns over ads targeted at them that feel too personal for comfort, creating a need for control over their data.[73]
In targeted advertising, privacy is a complicated issue due to the type of protected user information and the number of parties involved. The three main parties involved in online advertising are the advertiser, the publisher, and the network. People tend to want to keep their previously browsed websites private, although users' clickstreams are being transferred to advertisers who work with ad networks. The user's preferences and interests are visible through their clickstream, and their behavioral profile is generated.[74]
As of 2010, many people found this form of advertising concerning and saw these tactics as manipulative and discriminatory.[74]As a result, several methods have been introduced to avoid advertising.[4]Internet users employing ad blockers are rapidly growing in number. The average global ad-blocking[75]rate in early 2018 was estimated at 27 percent. Greece is at the top of the list, with more than 40% of internet users admitting to using ad-blocking software. Among the technical population, ad-blocking reaches 58%.[76]
Targeted advertising raises privacy concerns. Targeted advertising is performed by analyzing consumers' activities through online services such as HTTP cookies and data mining, both of which can be seen as detrimental to consumers' privacy. Marketers research consumers' online activity for targeted advertising campaigns like programmatic and SEO.
Consumers' privacy concerns revolve around today's unprecedented tracking capabilities and whether to trust their trackers. Consumers may feel uncomfortable with sites knowing so much about their activity online. Targeted advertising aims to increase promotions' relevance to potential buyers, delivering ad campaign executions to specified consumers at critical stages in the buying decision process. This potentially limits a consumer's awareness of alternatives and reinforces selective exposure.
Consumers may start avoiding certain sites and brands if they keep being served the same advertisements, and may feel they are being watched too closely or become annoyed with certain brands. Due to the increased use of tracking cookies across the web, many sites now display cookie notices that pop up when a visitor lands on a site. The notice informs the visitor about the use of cookies, how they affect the visitor, and the visitor's options regarding what information the cookies can obtain.
As of 2019, many online users and advocacy groups were concerned about privacy issues around targeted advertising, because it requires aggregation of large amounts of personal data, including highly sensitive data such as sexual orientation or sexual preferences, health issues, and location, which is then traded between hundreds of parties in the process of real-time bidding.[77][78]
This is a controversy that the behavioral targeting industry is trying to contain through education, advocacy, and product constraints to keep all information non-personally identifiable or to obtain permission from end-users.[79]AOL created animated cartoons in 2008 to explain to its users that their past actions may determine the content of ads they see in the future.[80]
Canadian academics at the University of Ottawa's Canadian Internet Policy and Public Interest Clinic have recently demanded that the federal privacy commissioner investigate the online profiling of Internet users for targeted advertising.[81]
The European Commission (via Commissioner Meglena Kuneva) has also raised several concerns related to online data collection (of personal data), profiling, and behavioral targeting, and is looking to "enforce existing regulation".[82]
In October 2009 it was reported that a survey carried out by the University of Pennsylvania and the Berkeley Center for Law and Technology found that a large majority of US internet users rejected the use of behavioral advertising.[83]Several research efforts by academics and others as of 2009 demonstrated that data that is supposedly anonymized can be used to identify real individuals.[84]
In December 2010, online tracking firm Quantcast agreed to pay $2.4M to settle a class-action lawsuit over its use of 'zombie' cookies to track consumers. These zombie cookies, which were on partner sites such as MTV, Hulu, and ESPN, would regenerate to continue tracking the user even after being deleted.[85]Other uses of such technology include Facebook's use of the Facebook Beacon to track users across the internet for later use in more targeted advertising.[86]Tracking mechanisms without consumer consent are generally frowned upon; however, tracking of consumer behavior online or on mobile devices is key to digital advertising, which is the financial backbone of most of the internet.
In March 2011, it was reported that the online ad industry would begin working with the Council of Better Business Bureaus to start policing itself as part of its program to monitor and regulate how marketers track consumers online, also known as behavioral advertising.[87]
Since at least the mid-2010s, many users of smartphones and other mobile devices have advanced the theory that technology companies use the devices' microphones to record personal conversations for purposes of targeted advertising.[88]Such theories are often accompanied by personal anecdotes involving advertisements with apparent connections to prior conversations.[89]Facebook has denied the practice, and Mark Zuckerberg denied it in congressional testimony.[90]Google has also denied using ambient sound or conversations to target advertising.[91]Technology experts who have investigated the claims have described them as unproven and unlikely.[91][92][93]An alternative explanation for apparent connections between conversations and subsequent advertisements is that technology companies track user behavior and interests in many ways other than via microphones.[94]
In December 2023, 404 Media reported that Cox Media Group was advertising a service to marketing professionals called "Active Listening", which involved the ability to listen to microphones installed in smartphones, smart TVs, and other devices in order to target ads to consumers.[95][96]A pitch deck promoting the capability stated that it targeted "Google/Bing" and that Cox Media Group was a Google Premier Partner.[97]Meta, Amazon, Google, and Microsoft all denied using the service.[98]In response to questions from 404 Media, Google stated that it had removed Cox Media Group from its Partners Program after a review.[97]Cox Media removed the material from its website and denied listening to any conversations.[99]
Contemporary data-driven marketing can be traced back to the 1980s and the emergence of database marketing, which increased the ease of personalizing customer communications.[100]
|
https://en.wikipedia.org/wiki/Targeted_advertising
|
Variable data printing (VDP), also known as variable information printing (VIP) or variable imaging (VI), is a form of digital printing, including on-demand printing, in which elements such as text, graphics, and images may be changed from one printed piece to the next, without stopping or slowing down the printing process, using information from a database or external file.[1]For example, a set of personalized letters, each with the same basic layout, can be printed with a different name and address on each letter. Variable data printing is mainly used for direct marketing, customer relationship management, advertising, invoicing, and applying addressing[2]on self-mailers, brochures, or postcard campaigns.
VDP is a direct outgrowth of digital printing, which harnesses computer databases, digital print devices, and highly effective software to create high-quality, full-color documents with a look and feel comparable to conventional offset printing. Variable data printing enables the mass customization of documents via digital print technology, as opposed to the 'mass production' of a single document using offset lithography. Instead of producing 10,000 copies of a single document, delivering a single message to 10,000 customers, variable data printing could print 10,000 unique documents with customized messages for each customer.
There are several levels of variable printing. The most basic level involves changing the salutation or name on each copy, much like mail merge. More complicated variable data printing uses 'versioning', where there may be differing amounts of customization for different markets, with text and images changing for groups of addresses based upon which segment of the market is being addressed. Finally, there is full variability printing, where the text and images can be altered for each individual address. All variable data printing begins with a basic design that defines static elements and variable fields for the pieces to be printed. While the static elements appear exactly the same on each piece, the variable fields are filled in with text or images as dictated by a set of application and style rules and the information contained in the database.
There are three main operational methodologies for variable data printing.[3]
In one methodology, a static document is loaded into printer memory. The printer is instructed, through the print driver or raster image processor (RIP), to always print the static document when sending any page out to the printer driver or RIP. Variable data can then be printed on top of the static document. This methodology is the simplest way to execute VDP; however, its capability is less than that of a typical mail merge.[4]
A second methodology is to combine the static and variable elements into print files, prior to printing, using standard software. This produces a conventional (and potentially huge) print file[5]with every image being merged into every page. A shortcoming of this methodology is that running many very large print files can overwhelm the RIP's processing capability. When this happens, printing speeds might become slow enough to be impractical for a print job of more than a few hundred pages.
A third methodology is to combine the static and variable elements into print files, prior to printing, using specialized VDP software. This produces optimized print files, such as PDF/VT, PostScript, or PPML,[5]that maximize print speed, since the RIP only needs to process static elements once.[6]
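To make the merge step concrete, here is a minimal sketch of combining a static template with variable fields from a database-style record set. The field names and sample data are invented, and production VDP software would emit optimized formats such as PDF/VT or PPML rather than plain text.

```python
# Sketch of the merge-before-printing idea: one static template,
# one variable record per recipient, one unique piece per record.
import csv
import io
from string import Template

template = Template("Dear $name,\nYour $product order will ship to $city.\n")

# Inline stand-in for the external database or data file.
data = io.StringIO(
    "name,product,city\n"
    "Alice,stapler,Leeds\n"
    "Bob,toner,Perth\n"
)

for row in csv.DictReader(data):     # one database record per recipient
    print(template.substitute(row))  # one unique "printed" piece per record
```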
There are many software packages available to merge text and images into VDP print files. Some are stand-alone software packages like SYNC Infographic VDP Generator; however, most of the advanced VDP software packages are actually plug-in modules for one or more publishing software packages such as Adobe Creative Suite.[7]
Besides VDP software, other software packages may be necessary for VDP print projects. Mailing software is necessary in the United States (United States Postal Service) and Canada to take advantage of reduced postage for bulk mailing.[2]Used prior to VDP print file creation, mailing software presorts and validates mailing addresses and generates bar codes for them. Pieces can then be printed in the proper sequence for sorting by postal code. In Canada, Canada Post now offers a 'Machineable'[8]personalized mail category which does not require addresses to be sorted into any specific order before mailing, thereby reducing the need for specialized sorting software to obtain optimal postage rates.
Software to manage data quality (e.g. for duplicate removal or handling of bad records) and uniformity may also be needed.[9]In lieu of purchasing software, various companies provide an assortment of VDP-related print file, mailing and data services.
The difference between variable data printing (VDP) and traditional printing is the personalization that is involved. Personalization allows a company to connect to its customers. Variable data printing is more than a variable name or address in a printed piece; in the past, a variable name would have been effective, because it was a new concept at the time. In today’s world, marketers expect personalization to reflect the interests of the customer. In order for VDP to be successful, the company must first know something about the customer. For example, a customer who loves baseball might be given a VDP postcard containing an image of their favorite baseball player. Compared to a generically printed marketing postcard, such a VDP postcard is more likely to be effective, because the customer is more likely to read the material that it carries. Conversely, an example of an ineffective VDP piece would entail mailing a postcard to the same customer with an image of a soccer player. If the customer has no interest in soccer, then he or she might or might not pay attention to the postcard. The ultimate goal is to attract the customer's attention to a sales pitch of some type, with the intent of generating demand for a product or service (which might be something that the customer has no need for, but which the advertising manages to convince the customer to pay for anyway).
As a communication tool, personalization enables the company to communicate in such a way as to develop business relationships with prospective customers and also to maintain relationships with their current customers. In this way, a prospect who is converted to a customer can then be converted to a loyal customer, who continues to buy goods or services from that company. A company that produces good-quality products or provides useful services will retain the loyal customers that it has created.[10]
Another benefit of VDP is the increase in the response rate and the reduction in response time. Because personalization more effectively catches the attention of the consumer, the response rate of a mail campaign increases. Personalization also decreases the response time, because the mailed piece generally has a more profound and more meaningful effect on the consumer. This effect, in turn, induces the consumer to respond more quickly, especially if the mailed piece contains acall to actionsuch as a time-limited offer with a clearly enunciated deadline. In contrast, a mailed piece that is not eye-catching may instead be set aside and forgotten until a later date. Therefore, in such a case, it might take weeks or longer for a response to be obtained, if one is in fact obtained at all.[10]
Variable data printing can be combined with other platforms, such as PURLs, email blasts, and QR codes; all three platforms are considered marketing tools. Many people have found the benefit of combining all of these platforms in order to have a successful campaign. Email blasts and PURLs allow a company to find out information about their consumers. An email blast usually doesn't contain much personalization, although it can. The bulk of the personalization is seen in a PURL, a personalized uniform resource locator (URL). In short, it is a landing page, and it is where most of the knowledge about the consumer will be gained. The email blast will contain a PURL, which leads the consumer to a personalized page, where a company can gain information about the consumer through the requested information. A QR code can be added to a mailed piece; it works like an email blast, directing the consumer to a website. The integration of these three platforms can help a campaign.[10]
The origin of the term variable data printing is widely credited to Frank Romano, Professor Emeritus, School of Print Media, at the College of Imaging Arts and Sciences at Rochester Institute of Technology. Romano does not explicitly take credit for coining the term[11]but points to his use of it as early as 1969 and its appearance in the 1999 book, "Personalized and Database Printing", which he authored with David Broudy.[11]
The concept of merging static document elements and variable document elements predates the term and has seen various implementations, ranging from simple desktop mail merge to complex mainframe applications in the financial and banking industry. In the past, the term VDP has been most closely associated with digital printing machines. However, in recent years the application of this technology has spread to web pages, emails, and mobile messaging.
|
https://en.wikipedia.org/wiki/Variable_data_printing
|
Dynamic pricing, also referred to as surge pricing, demand pricing, time-based pricing, or variable pricing, is a revenue management pricing strategy in which businesses set flexible prices for products or services based on current market demand. It usually entails raising prices during periods of peak demand and lowering prices during periods of low demand.[1]
As a pricing strategy, it encourages consumers to make purchases during periods of low demand (such as buying tickets well in advance of an event or buying meals outside of lunch and dinner rushes)[1]and disincentivizes them during periods of high demand (such as using less electricity during peak electricity hours).[2][3]In some sectors, economists have characterized dynamic pricing as having welfare improvements over uniform pricing and contributing to a more optimal allocation of limited resources.[4]Its usage often stirs public controversy, as people frequently think of it as price gouging.[5]
Businesses are able to change prices based on algorithms that take into account competitor pricing, supply and demand, and other external factors in the market. Dynamic pricing is a common practice in several industries, such as hospitality, tourism, entertainment, retail, electricity, and public transport. Each industry takes a slightly different approach to dynamic pricing based on its individual needs and the demand for the product.
Cost-plus pricing is the most basic method of pricing. A store simply charges consumers the cost required to produce a product plus a predetermined amount of profit. Cost-plus pricing is simple to execute, but it considers only internal information when setting the price and does not factor in external influences like market reactions, the weather, or changes in consumer value. A dynamic pricing tool can make it easier to update prices, but it will not update them often if the user does not account for external information such as competitors' market prices.[6]Due to its simplicity, this is the most widely used method of pricing, with around 74% of companies in the United States employing it.[7]Although widely used, usage is skewed: companies facing a high degree of competition use this strategy the most, while manufacturing companies tend to use it the least.[7]
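The arithmetic behind cost-plus pricing is a single markup step, as in this small sketch; the 40% markup is an arbitrary example value, not a recommended figure.

```python
# Cost-plus pricing: unit cost plus a predetermined profit margin.
def cost_plus_price(unit_cost: float, markup: float = 0.40) -> float:
    return round(unit_cost * (1 + markup), 2)

print(cost_plus_price(25.00))  # -> 35.0
```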
Businesses that want to price competitively monitor their competitors' prices and adjust accordingly. This is called competitor-based pricing. In retail, the competitor that many companies watch is Amazon, which changes prices frequently throughout the day. Amazon is a market leader in retail that changes prices often,[8]which encourages other retailers to alter their prices to stay competitive. Such online retailers use price-matching mechanisms like price trackers.[9]The retailer gives the end-user a price-match option, and upon its selection an online bot searches for the lowest price across various websites and offers a price lower than the lowest.[10]
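A simple price-matching rule of the kind described, undercutting the lowest observed competitor price while never dropping below cost, might be sketched as follows. The prices and undercut step are illustrative, not any retailer's actual mechanism.

```python
# Toy price-matching rule: beat the lowest competitor price by a small
# step, floored at our own cost so we never sell at a loss.
def match_price(competitor_prices: list[float], cost: float,
                undercut: float = 0.01) -> float:
    candidate = min(competitor_prices) - undercut
    return round(max(candidate, cost), 2)

# Competitor prices would come from a scraper or price tracker;
# here they are hard-coded for illustration.
print(match_price([19.99, 21.50, 18.75], cost=15.00))  # -> 18.74
```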
Such pricing behavior depends on market conditions, as well as a firm's planning. Although a firm in a highly competitive market is compelled to cut prices, that is not always the case. In conditions of high competition but a stable market, and with a long-term view, firms have been predicted to cooperate on price rather than undercut each other.[11]
Ideally, companies should ask for a price that is equal to the value a consumer attaches to a product. This is called value-based pricing. As this value can differ from person to person, it is difficult to uncover the perfect value and have a differentiated price for every person. However, consumers' willingness to pay can be used as a proxy for perceived value. With the price elasticity of products, companies can calculate how many consumers are willing to pay for the product at each price point. Products with high elasticities are highly sensitive to changes in price, while products with low elasticities are less sensitive to price changes (ceteris paribus). Subsequently, products with low elasticity are typically valued more by consumers, if everything else is equal. The dynamic aspect of this pricing method is that elasticities change with respect to product, category, time, location, and retailer. With the price elasticity of products and the margin of the product, retailers can use this method in their pricing strategy to aim for volume, revenue, or profit maximization strategies.[12]
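One textbook way to turn an elasticity estimate into a price is the markup rule for a constant-elasticity demand curve q = a·p^(−e), under which the profit-maximizing price is cost × e/(e − 1) when e > 1. The sketch below assumes the elasticity has already been estimated elsewhere and is not any particular retailer's algorithm.

```python
# Elasticity-based pricing under constant-elasticity demand q = a*p**(-e):
# maximizing (p - cost) * a * p**(-e) gives p* = cost * e / (e - 1).
def optimal_price(unit_cost: float, elasticity: float) -> float:
    if elasticity <= 1:
        raise ValueError("markup rule requires elastic demand (e > 1)")
    return round(unit_cost * elasticity / (elasticity - 1), 2)

print(optimal_price(unit_cost=10.0, elasticity=3.0))  # -> 15.0 (50% markup)
print(optimal_price(unit_cost=10.0, elasticity=1.5))  # -> 30.0 (less elastic, higher markup)
```

Note how a lower elasticity (less price-sensitive demand) justifies a higher markup, which matches the observation above that low-elasticity products are typically valued more by consumers.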
There are two types of bundle pricing strategies: one from the consumer's point of view, and one from the seller's point of view. From the seller's point of view, an end product's price depends on whether it is bundled with something else, which bundle it belongs to, and sometimes on which customers it is offered to. This strategy is adopted by print-media houses and other subscription-based services. The Wall Street Journal, for example, offers a standalone price if an electronic mode of delivery is purchased, and a discount when it is bundled with print delivery.[10]
Many industries, especially online retailers, change prices depending on the time of day. Most retail customers shop during weekly office hours (between 9 AM and 5 PM), so many retailers will raise prices during the morning and afternoon, then lower prices during the evening.[13]
Time-based pricing of services such as the provision of electric power includes:[14][15]
Peak fit pricing is best used for products that are inelastic in supply, where suppliers can fully anticipate demand growth and can thus charge differently for service during systematic periods of time.
A utility with regulated prices may develop a time-based pricing schedule based on an analysis of its long-run costs, such as operation and investment costs. A utility such as electricity (or another service), operating in a market environment, may be auctioned on a competitive market; time-based pricing will typically reflect price variations on the market. Such variations include regular oscillations due to the demand patterns of users; supply issues (such as the availability of intermittent natural resources like water flow or wind); and exceptional price peaks. Price peaks reflect strained conditions in the market (possibly augmented by market manipulation, as during the California electricity crisis) and convey a possible lack of investment. Extreme events include the default by Griddy after the 2021 Texas power crisis.
Time-based pricing is the standard method of pricing in the tourism industry. Higher prices are charged during the peak season or during special event periods. In the off-season, hotels may charge only the operating costs of the establishment, whereas investments and any profit are gained during the high season (this is the basic principle of long-run marginal cost pricing; see also long run and short run).
Hotels and other players in the hospitality industry use dynamic pricing to adjust the cost of rooms and packages based on the supply and demand needs at a particular moment.[16]The goal of dynamic pricing in this industry is to find the highest price that consumers are willing to pay. Another name for dynamic pricing in the industry is demand pricing. This form of price discrimination is used to try to maximize revenue based on the willingness to pay of different market segments. It features price increases when demand is high and decreases to stimulate demand when it is low. Having a variety of prices based on the demand at each point in the day makes it possible for hotels to generate more revenue by bringing in customers at the different price points they are willing to pay.
Airlines change prices often depending on the day of the week, time of day, and number of days before the flight.[17]For airlines, dynamic pricing factors in different components such as how many seats a flight has, departure time, and average cancellations on similar flights.[18]A 2022 study in Econometrica estimated that dynamic pricing was beneficial for "early-arriving, leisure consumers at the expense of late-arriving, business travelers. Although dynamic pricing ensures seat availability for business travelers, these consumers are then charged higher prices. When aggregated over markets, welfare is higher under dynamic pricing than under uniform pricing."[4]
Congestion pricing is often used in public transportation and road pricing, where a higher price at peak periods is used to encourage more efficient use of the service or time-shifting to cheaper or free off-peak travel. For example, the San Francisco Bay Bridge charges a higher toll during rush hour and on the weekend, when drivers are more likely to be traveling.[19]This is an effective way to boost revenue when demand is high, while also managing demand, since drivers unwilling to pay the premium will avoid those times. The London congestion charge discourages automobile travel to Central London during peak periods. The Washington Metro and Long Island Rail Road charge higher fares at peak times. The tolls on the Custis Memorial Parkway vary automatically according to the actual number of cars on the roadway, and at times of severe congestion can reach almost $50.[citation needed]
Dynamic pricing is also used by Uber and Lyft.[20]Uber's system for "dynamically adjusting prices for service" measures supply (Uber drivers) and demand (passengers hailing rides by use of smartphones), and prices fares accordingly.[21]Ride-sharing companies such as Uber and Lyft have increasingly incorporated dynamic pricing into their operations. This strategy enables these businesses to balance prices for both drivers and passengers by adjusting them in real time in response to supply and demand: when there is strong demand for rides, rates go up to encourage more drivers to offer their services, and when demand is low, prices go down to draw in more passengers.
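As a toy illustration of how a supply-demand ratio can drive a surge multiplier, consider the following sketch; the thresholds and cap are invented, and actual ride-sharing algorithms are proprietary and far more elaborate.

```python
# Toy surge multiplier driven by the ratio of ride requests to
# available drivers, clamped between 1.0 (no surge) and a cap.
def surge_multiplier(requests: int, drivers: int, cap: float = 3.0) -> float:
    if drivers == 0:
        return cap  # no supply at all: charge the maximum allowed surge
    ratio = requests / drivers
    return round(min(max(1.0, ratio), cap), 2)

print(surge_multiplier(requests=180, drivers=100))  # -> 1.8
print(surge_multiplier(requests=60, drivers=100))   # -> 1.0 (no surge)
```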
Some professional sports teams use dynamic pricing structures to boost revenue. Dynamic pricing is particularly important in baseball because MLB teams play around twice as many games as some other sports and in much larger venues.[22]
Sports that are outdoors have to factor weather into pricing strategy, in addition to the date of the game, date of purchase, and opponent.[23]Tickets for a game during inclement weather will sell better at a lower price; conversely, when a team is on a winning streak, fans will be willing to pay more.
Dynamic pricing was first introduced to sports by Qcue, a start-up software company from Austin, Texas, and Major League Baseball club the San Francisco Giants. The San Francisco Giants implemented a pilot of 2,000 seats in the View Reserved and Bleachers sections and moved on to dynamically pricing the entire venue for the 2010 season. Qcue currently works with two-thirds of Major League Baseball franchises, not all of which have implemented a full dynamic pricing structure, and for the 2012 postseason, the San Francisco Giants, Oakland Athletics, and St. Louis Cardinals became the first teams to dynamically price postseason tickets. While behind baseball in terms of adoption, the National Basketball Association, National Hockey League, and NCAA have also seen teams implement dynamic pricing. Outside of the U.S., it has since been adopted on a trial basis by some clubs in the Football League.[24]Scottish Premier League club Heart of Midlothian introduced dynamic pricing for the sale of their season tickets in 2012, but supporters complained that they were being charged significantly more than the advertised price.[25]
Retailers, and online retailers in particular, adjust the price of their products according to competitors, time, traffic, conversion rates, and sales goals.[26][27]
Supermarkets often use dynamic pricing strategies to manage perishable inventory, such as fresh produce and meat products, that has a limited shelf life. By adjusting prices based on factors like expiration dates and current inventory levels, retailers can minimize waste and maximize revenue. Additionally, the widespread adoption of electronic shelf labels in grocery stores has made it easier to implement dynamic pricing strategies in real time, enabling retailers to respond quickly to changing market conditions and consumer preferences.[28]These labels also make it easier for grocery stores to mark up high-demand items (e.g., making it more expensive to purchase ice in warmer weather).[29]
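A markdown schedule for perishable stock can be as simple as deepening the discount as the expiration date approaches, as in this illustrative sketch; the steps and percentages are invented, not any retailer's policy.

```python
# Toy markdown schedule for perishables: the closer to expiry,
# the deeper the discount, to clear stock before it becomes waste.
def markdown_price(base_price: float, days_to_expiry: int) -> float:
    if days_to_expiry <= 0:
        discount = 0.75   # last-day clearance
    elif days_to_expiry <= 2:
        discount = 0.50
    elif days_to_expiry <= 5:
        discount = 0.25
    else:
        discount = 0.0    # full price while still fresh
    return round(base_price * (1 - discount), 2)

print(markdown_price(4.00, days_to_expiry=1))  # -> 2.0
```

With electronic shelf labels, a schedule like this can in principle be pushed to the shelf automatically rather than applied by hand.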
Theme parks have also recently adopted this pricing model. Disneyland and Disney World adopted the practice in 2016, and Universal Studios followed suit.[30]Since the supply of parks is limited and new rides cannot be added based on surges in demand, the model followed by theme parks resembles that of the hotel industry. During summertime, when demand is rather inelastic, the parks charge higher prices, whereas ticket prices in winter are less expensive.[31]
Dynamic pricing is often criticized as price gouging.[32][33]Dynamic pricing is widely unpopular among consumers, as some feel it tends to favour particular buyers.[34][35][36]While the intent of surge pricing is generally driven by demand-supply dynamics, some instances have proven otherwise. Some businesses utilise modern technologies (big data and IoT) to adopt dynamic pricing strategies, where collection and analysis of real-time private data occur almost instantaneously.[37][38][39][40]
As modern data analysis technology develops rapidly, enabling detection of a person's browsing history, age, gender, location, and preferences, some consumers fear "unwanted privacy invasions and data fraud", as the extent to which their information is used is often undisclosed or ambiguous.[41]Even with firms' disclaimers stating that private information will be used strictly for data collection and promising that no third-party distribution will occur, a few cases of corporate misconduct can disrupt consumers' perceptions.[42]Some consumers are simply skeptical of general information collection outright, due to the potential for "data leakages and misuses", which may impact suppliers' long-term profitability through reduced customer loyalty.[43]
Consumers can also develop perceptions of price fairness or unfairness, whereby different prices offered to individuals for the same products can affect customers' perceptions of price fairness.[41][43][44]Studies found that the ease of learning other individuals' purchase prices induced consumers to sense price unfairness and report lower satisfaction when others paid less than they did. However, when consumers were price-advantaged, development of trust and increased repurchase intentions were observed.[44][45][46]Other research indicated that price fairness perceptions varied depending on consumers' privacy sensitivity and the nature of the dynamic pricing used, such as individual pricing, segment pricing, location-data pricing, and purchase-history pricing.[41]
Amazon engaged in price discrimination for some customers in the year 2000, showing different prices at the same time for the same item to different customers, potentially violating the Robinson–Patman Act.[47]When this incident was criticised, Amazon issued a public apology with refunds to almost 7,000 customers but did not cease the practice.[42]
During the COVID-19 pandemic, prices of certain items in high demand were reported to quadruple, garnering negative attention.[48]Although Amazon denied claims of any such manipulation and blamed a few sellers for raising prices on essentials such as sanitizers and masks, prices of essential products 'sold by Amazon' had also seen a hefty rise, which Amazon claimed was the result of a software malfunction.[48]
Uber's surge pricing has also been criticized. In 2013, when New York was in the midst of a storm, Uber users saw fares rise to eight times the usual rates.[49][50]This incident attracted public backlash, with Salman Rushdie among other public figures criticizing the move.[34]
After this incident, the company started placing caps on how high surge pricing can go during times of emergency, starting in 2015.[51]Drivers have been known to hold off on accepting rides in an area until surge pricing forces fares up to a level satisfactory to them.[52]
In 2024, Wendy's announced plans to test dynamic pricing in certain American locations during 2025. This pricing method was included with plans to redesign menu boards,[53]and these changes were announced to stakeholders.[54]The company received significant online backlash for this decision. In response, Wendy's stated that the intended implementation was limited to reducing prices during low-traffic periods.[55]
|
https://en.wikipedia.org/wiki/Variable_pricing
|
Customer experience, sometimes abbreviated to CX, is the totality of cognitive, affective, sensory, and behavioral responses of a customer during all stages of the consumption process, including the pre-purchase, consumption, and post-purchase stages.[1][2][3]
Different dimensions of customer experience include senses, emotions, feelings, perceptions, cognitive evaluations, involvement, memories, spiritual components, and behavioral intentions.[4][1][5]The pre-consumption anticipation experience can be described as the amount of pleasure or displeasure received from savoring future events, while the remembered experience is related to a recollection of memories about previous events and experiences of a product or service.[6][7][8]
According to Forrester Research (via Fast Company), the foundational elements of a remarkable customer experience consist of six key disciplines: strategy, customer understanding, design, measurement, governance, and culture.[9]A company's ability to deliver an experience that sets it apart in the eyes of its customers will increase the amount of consumer spending with the company and inspire loyalty to its brand. According to Jessica Sebor, "Loyalty is now driven primarily by a company's interaction with its customers and how well it delivers on their wants and needs".[10]
Barbara E. Kahn, Wharton's Professor of Marketing,[11]has established an evolutionary approach to customer experience, positioning it as the third of four stages of any company in terms of its customer-centricity maturity. These progressive phases are:
In today's competitive climate, more than just low prices and innovative products are required to survive in the retail business. Customer experience involves every point of contact a company has with a customer, including interactions with the products or services of the business. Customer experience has emerged as a vital strategy for all retail businesses facing competition.[12]According to Holbrook and Hirschman[13](1982), customer experience can be defined as the whole event that a customer comes into contact with when interacting with a certain business. This experience often affects the emotions of the customer. The whole experience occurs when the interaction takes place through the stimulation of goods and services consumed.[12]
In 1994 Steve Haeckel and Lou Carbone further refined the original concept and collaborated on a seminal early article on experience management, titled "Engineering Customer Experiences", where they defined experience as "the 'take-away' impression formed by people's encounters with products, services and businesses — a perception produced when humans consolidate sensory information." They argued that the new approach must focus on total experience as the key customer value proposition.[14]
The marketing perspective on the type of experience is put forward by Pine & Gilmore,[15]who state that an experience can be unique, meaning different individuals will not have the same level of experience; an experience that is not memorable to the person will not be remembered over a period of time. Certain types of experiences may involve different aspects of the individual person, such as emotional, physical, intellectual, or even spiritual aspects.
Customer experience is the stimulation a company creates for the senses of consumers. This means that a company and its brand can control the stimuli they give to the consumer's senses, and can thereby shape the consumers' reaction resulting from the stimulation process, producing the customer experience the company intends.[12]
Kotler et al. 2013, (p. 283) say that customer experience is about, "Adding value for customers buying products and services through customer participation and connection, by managing all aspects of the encounter". The encounter includes touchpoints. Businesses can create and modify touchpoints so that they are suited to their consumers, which changes or enhances the customers' experience. Creating an experience for the customer can lead to greater brand loyalty and brand recognition in the form of logos, colour, smell, touch, taste, etc.[16][17][18]
However, customer experience management, and in particular design for experiences, is not only relevant for the private sector but also increasingly important in the public sector, especially in the age of digitalization, where public service users co-create value by integrating resources from multiple sources.[19]In this context, organizations need to understand not only their service users but also the network of actors and how public services fit into the wider value constellation and people's activities.
Customer experience is divided into realms and domains by various scholars. Pine and Gilmore introduced four realms of experience: esthetic, escapist, entertainment, and educational.[20]
Entertainment Realm: In this realm, businesses create experiences that captivate customers by providing entertainment and amusement. It goes beyond traditional products or services, aiming to engage and delight customers through memorable and immersive experiences.
Educational Realm: This realm focuses on educating customers and enhancing their knowledge during their interactions with a brand. It involves providing valuable information, insights, and learning opportunities, fostering a sense of personal growth and understanding.
Esthetic Realm: The esthetic realm emphasizes the visual and sensory aspects of the customer experience. It involves creating visually appealing and sensory-rich environments, products, or services that stimulate the senses and elicit positive emotional responses.
Escapist Realm: In this realm, businesses offer customers an escape from their everyday lives. It involves creating experiences that transport customers to different worlds or realities, allowing them to temporarily disconnect from their usual routines and responsibilities.
There are many elements in the shopping experience associated with a customer's experience. Customer service, a brand's ethical ideals, and the shopping environment are examples of factors that affect a customer's experience. Understanding and effectively developing a positive customer experience has become a staple within businesses and brands to combat growing competition (Andajani, 2015[12]). Many consumers are well informed and able to easily compare two similar products or services; consumers therefore look for experiences that can fulfil their intentions (Ali, 2015[21]). A brand that can provide this gains a competitive advantage over its competition. A study by Ali (2015[21]) found that developing a positive behavioural culture created a greater competitive advantage in the long term. He looked at the customer experience at resort hotels and discovered that providing the best hotel service was not sufficient. To optimise a customer's experience, management must also consider peace of mind and relaxation, recognition and escapism, involvement, and hedonics. The overall customer experience must be considered. The development of a positive customer experience is important as it increases the chances of a customer making continued purchases and develops brand loyalty (Kim & Yu, 2016[22]). Brand loyalty can turn customers into advocates, resulting in a long-term relationship between both parties (Ren, Wang & Lin, 2016[23]). This promotes word-of-mouth and turns the customer into a touchpoint for the brand. Potential customers can develop opinions through another's experiences. Males and females respond differently to brands and will therefore experience the same brand differently. Males respond effectively to relational, behavioural, and cognitive experiences, whereas females respond more strongly to behavioural, cognitive, and affective experiences in relation to branded apps. If female consumers are the target market, an app advert focused on the emotion of the product will provide an effective customer experience (Kim & Yu, 2016[22]).
Today, retail stores tend to exist in shopping areas such as malls or shopping districts; very few operate in areas alone (Tynan, McKechnie & Hartly, 2014[24]). Customer experience is not limited to the purchase alone; it includes all activities that may influence a customer's experience with a brand (Andajani, 2015[12]). Therefore, the reputation of the shopping centre in which a store is located will affect a brand's customer experience. At the same time, it is important to provide a seamless, integrated experience that goes beyond individual transactions and enhances overall brand perception.[25]This is an example of the shopping environment affecting a customer's experience. A study by Hart, Stachow and Cadogan (2013[26]) found that a consumer's opinion of a town centre can affect the opinion of the retail stores operating within it, both negatively and positively. They shared an example of a town centre's management team developing synergy between the surrounding location and the retail stores. A location bound with historical richness could provide an opportunity for the town centre and local businesses to connect at a deeper level with their customers. They suggested that town centre management and retail outlets should work cooperatively to develop an effective customer experience. This will result in all stores benefiting from customer retention and loyalty.
Another effective way to develop a positive customer experience is by actively engaging a customer with an activity. The human and physical components of an experience are very important (Ren, Wang & Lin, 2016[23]). Customers are able to recall active, hands-on experiences much more effectively and accurately than passive activities. This is because customers in these moments are, by definition, the 'experts of use'.[27]Participants within a study were able to recount previous luxury driving experiences due to their high involvement. However, this can also have a negative effect on the customer's experience. Just as active, hands-on experiences can greatly develop value creation, they can also greatly facilitate value destruction (Tynan, McKechnie & Hartly, 2014[24]). This is related to a customer's satisfaction with their experience. By understanding what causes satisfaction or dissatisfaction with a customer's experience, management can appropriately implement changes within their approach (Ren, Wang & Lin, 2016[23]). A study on the customer experience in budget hotels revealed interesting results. Customer satisfaction was largely influenced by tangible and sensory dimensions, including cleanliness, shower comfort, and room temperature, to name a few. As budget hotels are cheap, customers expected the basic elements to be satisfactory and the luxury elements to be non-existent. If these dimensions did not reach an appropriate standard, satisfaction would decline, resulting in a negative experience (Ren, Wang & Lin, 2016[23]).
Customer experience management (CEM or CXM) is the process that companies use to oversee and track all interactions with a customer during their relationship. This involves the strategy of building around the needs of individual customers.[28]According to Jeananne Rae, companies are realizing that "building great consumer experiences is a complex enterprise, involving strategy, integration of technology, orchestrating business models, brand management and CEO commitment".[29]
In 2020, the global CEM market was valued at $7.54 billion, and is expected to grow with a CAGR of 17.5% from 2021 to 2028.[30]Top companies in the customer experience industry include:[30]
According to Bernd Schmitt, "the term 'Customer Experience Management' represents the discipline, methodology and/or process used to comprehensively manage a customer's cross-channel exposure, interaction and transaction with a company, product, brand or service."[32]Harvard Business Review blogger Adam Richardson says that a company must define and understand all dimensions of the customer experience in order to have long-term success.[33][need quotation to verify]
Although 80% of businesses state that they offer a "great customer experience," according to author James Allen, this contrasts with the 8% of customers expressing satisfaction with their experience. Allen asserts that for companies to meet the demands of providing an exceptional customer experience, they must be able to execute the "Three Ds":
CEM has been recognized as the future of the customer service and sales industry. Companies are using this approach to anticipate customer needs and adopt the mindset of the customer.[35]
CEM depicts a business strategy designed to manage the customer experience and gives benefits to both retailers and customers.[36]CEM can be monitored through surveys, targeted studies, observational studies, or "voice of customer" research.[37]It captures the instant response of the customer to its encounters with the brand or company. Customer surveys, customer contact data, internal operations process and quality data, and employee input are all sources of "voice of customer" data that can be used to quantify the cost of inaction on customer experience issues.[38]
The aim of CEM is to optimize the customer experience by gaining the loyalty of current customers in a multi-channel environment and ensuring they are completely satisfied. It also aims to turn current customers into advocates who promote the brand to potential customers, a word-of-mouth form of marketing.[39]However, common efforts at improving CEM can have the opposite effect.[40]
Utilizing surroundings includes using visuals, displays and interactivity to connect with customers and create an experience (Kotler, et al. 2013, p. 283). CEM can be related to customer journey mapping, a concept pioneered by Ron Zemke and Chip Bell.[41]Customer journey mapping is a design tool used to track customers' movements through different touchpoints with the business in question. It maps out the first encounters people may have with the brand and shows the different routes people can take through the different channels or marketing (e.g. online, television, magazine, newspaper). Integrated marketing communications (IMC) is also being used to manage the customer experience; IMC is about sending a consistent message across all platforms, including advertising, personal selling, public relations, direct marketing, and sales promotion (Kotler et al. 2013, p. 495).[16][42][43]
CEM holds great importance in terms of research, which shows that academic work is not as applicable and usable as the practice behind it. Typically, to make the best use of CEM and ensure its accuracy, the customer journey must be viewed from the actual perspective of customers, not the business or organization.[44]It should be noted that there is no specific set of rules or steps to follow, as companies in their various industries will have different strategies. Therefore, development of the conceptual and theoretical aspects is needed, based on customers' perspectives on the brand experience. This can be seen through different scholarly research.[45]The reason interest in CEM has increased so significantly is that businesses are looking for competitive differentiation.[46]Businesses want to be more profitable and see this as a means to do so. Hence, businesses want to offer a better experience to their customers and to manage this process efficiently. To gain success, a business needs to understand its customers. To fully utilise the models used in practice, academic research can assist the practical aspect. This, along with recognising past customer experiences, can help manage future experiences.
A good indicator of customer satisfaction is the Net Promoter Score (NPS). This indicates, on a scale of zero to ten, whether a customer would recommend a business to other people. Those who give scores of nine and ten are called promoters and will recommend the product to others, while at the other end of the spectrum are detractors, those who give a score of zero to six. Subtracting the percentage of detractors from the percentage of promoters gives the Net Promoter Score. Businesses with higher scores are likely to be more successful and give a better customer experience.[39]
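The calculation itself is straightforward, as this short sketch shows:

```python
# Net Promoter Score from 0-10 survey responses:
# promoters score 9-10, detractors score 0-6, and
# NPS = %promoters - %detractors.
def net_promoter_score(responses: list[int]) -> float:
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses), 1)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 10]))
# 3 promoters, 2 detractors out of 7 responses -> NPS of about 14.3
```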
Not all aspects of CEM can be controlled by the business (e.g. other people and the influence they have).[36]In addition, little substantial academic research exists to support CEM claims. The use ofartificial intelligence in customer experiencehas slowly been increasing in recent years.[47][48]Chatbotsare often seen as the first phase of this development.[49]
The classical linearcommunication modelincludes having one sender or source sending out a message that goes through the media (television, magazines) and then to the receiver. The classical linear model is a form of mass marketing that targets a large number of people, of whom only a few may be customers; this is a form of non-personal communication (Dahlen, et al. 2010, p. 39). The adjusted model shows the source sending a message either to the media or directly to one or more opinion leaders or opinion formers (a model, actress, credible source, trusted figure in society, or YouTuber/reviewer), who then send a decoded message to the receiver (Dahlen et al. 2010, p. 39). The adjusted model is a form of interpersonal communication in which feedback is almost instantaneous with receiving the message. The adjusted model means that there are many more platforms of marketing with the use ofsocial media, which connects people with more touchpoints. Marketers use the digital experience to enhance the customer experience (Dahlen et al. 2010, p. 40). Enhancing digital experiences influences changes to the CEM, the customer journey map and IMC. The adjusted model allows marketers to communicate a message designed specifically for the 'followers' of the particular opinion leader or opinion former, sending a personalised message and creating a digital experience.[50]
Persuasion techniques are used to send messages intended to shape an experience. Marcom Projects (2007) identified fivemind shapersthat describe how humans perceive what they are shown.
Mind shapers can be seen through the use of the adjusted communication model, which allows the source/sender to create a perception for the receiver (Dahlen, Lange, & Smith, 2010, p. 39).
Mind shapers can take either of two routes to persuasion.
Marketers can target human thought processes to create greater experiences; they can do so either by simplifying the process or by creating interactive steps to support it (Campbell & Kirmani, 2000).[51][52][53]
According to Das[54](2007),customer relationship management(CRM) is the "establishment, development, maintenance and optimization of long-term mutually valuable relationships between consumers and organizations". The official definition of CRM by the Customer Relationship Management Research Center is "a strategy used to learn more about the customers' needs and behaviours in order to develop stronger relationships with them". The purpose of this strategy is to change the approach to customers and improve the experience for the consumer by making the supplier more aware of their buying habits and frequencies.
The D4 Company Analysis is an audit tool that considers the four aspects of strategy, people, technology and processes in the design of a CRM strategy. The analysis includes four main steps.
In the classical marketing model, marketing is deemed to be a funnel: at the beginning of the process (in the "awareness" stage) there are many brands competing for the attention of the customer, and this number is reduced through the differentpurchasing stages. Marketing is an action of "pushing" the brand through a few touchpoints (for example through TV ads).
Since the rise of theWorld Wide Weband smartphone applications, there are many more touchpoints from new content serving platforms (Facebook, Instagram, Twitter, YouTube etc.), individual online presences (such as websites, forums, blogs, etc.) and dedicated smartphone applications.
As a result, this process has become a type of "journey".
The channels associated with customers and sales aremultichannelin nature. Due to the growth and importance of social media and digital advancement, these aspects need to be understood by businesses to be successful in this era of customer journeys. With tools such as Facebook, Instagram and Twitter having such prominence, there is a constant stream of data that needs to be analysed to understand this journey.[56]Business flexibility and responsiveness are vital in the ever-changing digital customer environment, as customers are constantly connected to businesses and their products. Customers are now instant product experts due to various digital outlets and form their own opinions on how and where to consume products and services.[57]Businesses use customer values and create a plan to gain acompetitive advantage. Businesses use the knowledge of customers to guide the customer journey to their products and services.[57]
Due to the shift in customer experience, in 2014 Wolny & Charoensuksai highlighted three behaviours that show how decisions can be made in this digital journey. The Zero Moment of Truth is the first interaction a customer has with a service or product. This moment affects the consumer's choice to explore a product further or not at all. These moments can occur on anydigital device.Showroominghighlights how a consumer will view a product in a physical store but then decide to leave the store empty-handed and buy online instead. This consumer decision may be due to the ability to compare multiple prices online. At the opposing end of the spectrum iswebrooming, where consumers research a product online with regard to quality and price but then decide to purchase in store. These three channels need to be understood by businesses, because customers expect businesses to be readily available to cater to their specific needs and purchasing behaviours.[56]
In marketing, the notion ofcustomer journeyportrays the process customers go through to establish a commercial relationship with a firm.[58]The journey emphasizestouchpoints, which are the moments in which firms can interact with their current or potential customers.[59]Managers use visualizations called customer journey mapping (CJM) to represent the sequences of interactions between firms and customers and to identify opportunities for interaction. Understanding CJM also allows corporations to reduce"friction", or potential issues for the customer.[60]
CJM has subsequently become one of the most widely used tools forservice designand has been utilized as a tool for visualizing intangible services. A customer journey map shows the story of the customer's experience. It not only identifies key interactions that the customer has with the organization, but it also brings in the user's feelings, motivations, and questions for each of the touchpoints. Finally, a customer journey map has the objective of teaching organizations more about theircustomers.[61]To map a customer journey, it is important to consider the company's customers (buyer persona), thecustomer journey's time frame, channels (telephone, email, in-app messages, social media, forums, recommendations), first actions (problem acknowledgment), and last actions (recommendations or subscription renewal). Customer journey maps are good storytelling conduits – they communicate to the brand the journey, along with the emotional quotient, that the customer experiences at every stage of the buyer journey.[62]
Customer journey maps take into account people's mental models (how things should behave), the flow of interactions, and possible touchpoints. They may combine user profiles, scenarios, and user flows; and reflect the thought patterns, processes, considerations, paths, and experiences that people go through in their daily lives.[63]
Mapping thecustomer journeyhelps organizations understand how prospects and customers use the various channels and touchpoints, how the organization is perceived, and how the organization would like its customers and prospects' experiences to be. By understanding the latter, it is possible to design an optimal experience that meets the expectations of major customer groups, achieves competitive advantage, and supports the attainment of desired customer experience objectives.[63]Increased customer retention is another benefit of a carefully designed and executed customer experience strategy.
Journey mapping or journey orchestration has recently benefitted from the growth of AI technology. Solutions that allow AI to enhance complex customer journeys have become available in the last decade. Until recently, all customer journey mapping was human-led, but the use of artificial intelligence in customer experience is now on the rise.[64]
Retail environment factors include social features, design, and ambiance.[36]This can result in enhanced pleasure while shopping, thus a positive customer experience and more likely chances of the customer revisiting the store in the future. The same retail environment may produce varied outcomes and emotions, depending on what the consumer is looking for. For example, a crowded retail environment may be exciting for a consumer seeking entertainment, but create an impression of inattentive customer service and frustration to a consumer who may need help looking for a specific product to meet an immediate need.[65]
Environmental stimuli such as lighting and music can influence a consumer's decision to stay longer in the store, therefore increasing the chances of purchasing.[36]For example, a retail store may have dim lights and soothing music which may lead a consumer to experience the store as relaxing and calming.
Today's consumers are consistently connected through the development of technological innovation in the retail environment. This has led to the increased use of digital-led experiences in their purchase journey both in-store and online that inspire and influence the sales process.[66]For example,Rebecca Minkoffhas installed smart mirrors in their fitting rooms that allow the customers to browse for products that may complement what they are trying on.[67]These mirrors also hold an extra feature, aself-checkoutsystem where the customer places the item on anRFID-powered table, which then sends the products to aniPadthat is used to check out.[68]
External and internal variables in a retail environment can also affect a consumer's decision to visit the store. External variables include window displays such as posters and signage, or product exposure that can be seen by the consumer from outside of the store.[65]Internal variables include flooring, decoration and design. These attributes of a retail environment can either encourage or discourage a consumer from approaching the store.
Sales experience is a subset of the customer experience. Whereas customer experience encompasses the sum of all interactions between an organization and a customer over the entire relationship, sales experience is focused exclusively on the interactions that take place during the sales process and up to the point that a customer decides to buy.
Customer experience tends to be owned by the marketing function within an organization,[69]which therefore has little control over or focus on what happens before a customer decides to buy.
Sales experience is concerned with the buyer's journey up to and including the point at which the buyer makes a purchase decision. Sales is a very important touchpoint for the overall customer experience, as this is where the most human interaction takes place.
|
https://en.wikipedia.org/wiki/Customer_experience
|
Marketing automationrefers to software platforms and technologies designed formarketingdepartments and organizations to automate repetitive tasks[1]and consolidate multi-channel (email,SMS,chatbot, social media) interactions, tracking andweb analytics,lead scoring, campaign management and reporting into one system.[2]It often integrates withcustomer relationship management(CRM) andcustomer data platform(CDP) software.[3]
Marketing automation trackstop-of-funnelactivities to drive prospects to sales. This is contrasted with CRM, which manages information about the prospect and their position in the sales cycle.[3]
The use of marketing automation makes processes that would otherwise have been performed manually much more efficient, and makes new processes possible. Marketing automation can be defined as a process in which technology is used to automate repetitive tasks that are undertaken on a regular basis in a marketing campaign.
Marketing automation platforms allow marketers to automate and simplify client communication by managing complex omnichannel marketing strategies with a single tool. Marketing automation assists greatly in areas like lead generation, segmentation, lead nurturing and lead scoring, relationship marketing, cross-sell and upsell, retention, and marketing ROI measurement. Effective marketing automation tools leverage data from a separate or integrated CRM to understand customer impact and preferences.
There are three categories of marketing automation software:
Advertising Automation
Advancedworkflow automation
On 25 May 2018, theGeneral Data Protection Regulationcame into effect in the EU.[5]This has had a large impact on the way marketing teams and organizations can manage their consumer data. Any organization using marketing automation tracking is required to ask consent from the consumer, as well as to provide transparency on how the data will be processed.
Similarly, the California Consumer Privacy Act (CCPA), which took effect on January 1, 2020, introduced strict data privacy laws for residents of California.[6]The CCPA grants consumers the right to know what personal data is collected, the purpose of collection, and with whom it is shared. It also allows consumers to opt-out of the sale of their personal information.
The CCPA affects marketing automation in several respects.
These regulations reflect a global trend towards stronger data privacy laws, which is reshaping marketing automation practice.
Consumers are directly impacted by marketing automation.[7]Consumers provide data to companies, and companies use algorithms to determine which products and services to market towards the consumer. The products and services are personalized based on the collected data for each individual. Some interpret the use of marketing automation as an efficient customer experience,[8]while others interpret it as a loss of autonomy for the consumer.[9][10]
Marketing automation solutions provide three key functions.[11]
After a user visits a merchant's website and navigates away, an automated email can be triggered to be sent to that user. The user can be reminded of an abandoned shopping cart or a subscription that is about to expire, or be welcomed if they are a new customer.Couponsand messages can be tailored based on past purchases.[3]
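A minimal sketch of this kind of trigger logic is shown below; the event names, templates, and the send_email stand-in are hypothetical, not the API of any particular platform.

```python
from datetime import datetime, timedelta

# Hypothetical per-visitor events of the kind a platform might record.
events = [
    {"user": "a@example.com", "type": "cart_abandoned", "at": datetime(2024, 5, 1, 10, 0)},
    {"user": "b@example.com", "type": "signup", "at": datetime(2024, 5, 1, 11, 0)},
]

# Map event types to follow-up templates, as described above.
TRIGGERS = {
    "cart_abandoned": "abandoned_cart_reminder",
    "signup": "welcome_new_customer",
    "subscription_expiring": "renewal_notice",
}

def send_email(address, template):
    # Stand-in for a call to a real email service.
    print(f"sending '{template}' to {address}")

def run_triggers(events, now, delay=timedelta(hours=1)):
    """Send the mapped template once an event is at least `delay` old."""
    for event in events:
        template = TRIGGERS.get(event["type"])
        if template and now - event["at"] >= delay:
            send_email(event["user"], template)

run_triggers(events, now=datetime(2024, 5, 1, 12, 30))
```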
Software can also automate the creation of product landing pages and chatbots for customer support.[3]
According toGartner, the B2B automation market was valued at $2.1 billion in 2020 and more than $2.74 billion in 2021.[12][13][3]As of August 2021, Gartner identified several vendors as B2B marketing automation leaders.[13]
|
https://en.wikipedia.org/wiki/Marketing_automation
|
TheBogardus social distance scaleis apsychological testingscalecreated byEmory S. Bogardusto empirically measure people's willingness to participate in social contacts of varying degrees of closeness with members of diverse social groups, such as racial andethnic groups.
The scale asks people the extent to which they would be accepting of each group across a series of social contacts of increasing distance; a score of 1.00 for a group is taken to indicate no social distance.
The Bogardus social distance scale is a cumulative scale (aGuttman scale), because agreement with any item implies agreement with all preceding items.
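A minimal sketch of how such a cumulative scale can be scored and checked follows; the scoring convention (rank of the closest accepted contact, with 7.0 for exclusion) is an illustrative approximation rather than an official instrument.

```python
# Answers to six contact items, ordered from closest (kinship by
# marriage) to most distant (visitor to my country); True = accept.
answers = [False, True, True, True, True, True]

def social_distance(accepts):
    """Bogardus-style score: rank of the closest accepted contact.

    1.0 indicates no social distance; 7.0 (no item accepted) is read
    here as exclusion from the country.
    """
    for rank, accepted in enumerate(accepts, start=1):
        if accepted:
            return float(rank)
    return 7.0

def is_guttman_consistent(accepts):
    """On a cumulative scale, accepting a closer contact implies
    accepting every more distant one: no False may follow a True."""
    first_yes = accepts.index(True) if True in accepts else len(accepts)
    return all(accepts[first_yes:])

print(social_distance(answers))        # 2.0
print(is_guttman_consistent(answers))  # True
```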
Research by Bogardus, conducted first in 1925 and repeated in 1946, 1956, and 1966, shows that the extent of social distancing in the US decreased slightly over that period and that fewer distinctions were being made among groups. The study was replicated again in 2005. The results supported the existence of this tendency, showing that the mean level of social distance had continued to decrease compared with the previous studies.[1]
For Bogardus, social distance is a function of affective distance between the members of two groups: ‘‘[i]n social distance studies the center of attention is on the feeling reactions of persons toward other persons and toward groups of people.’’[2]Thus, for him, social distance is essentially a measure of how much or littlesympathythe members of a group feel for another group.
|
https://en.wikipedia.org/wiki/Bogardus_social_distance_scale
|
Consensus-based assessmentexpands on the common practice ofconsensus decision-makingand the theoretical observation that expertise can be closely approximated by large numbers of novices or journeymen. It creates a method for determiningmeasurement standardsfor very ambiguous domains of knowledge, such asemotional intelligence, politics, religion, values and culture in general. From this perspective, the shared knowledge that forms cultural consensus can be assessed in much the same way as expertise or general intelligence.
Consensus-based assessment is based on a simple finding: that samples of individuals with differing competence (e.g., experts and apprentices) rate relevant scenarios, usingLikert scales, with similar mean ratings. Thus, from the perspective of a CBA framework, cultural standards for scoring keys can be derived from the population that is being assessed. Peter Legree and Joseph Psotka, working together over the past decades, proposed that psychometric g could be measured unobtrusively through survey-like scales requiring judgments. This could use either the deviation score for each person from the group or expert mean, or a Pearson correlation between their judgments and the group mean. The two techniques are perfectly correlated. Legree and Psotka subsequently created scales that requested individuals to estimate word frequency; judge binary probabilities of good continuation; identify knowledge implications; and approximate employment distributions. The items were carefully identified to avoid objective referents, and therefore the scales required respondents to provide judgments that were scored against broadly developed, consensual standards. Performance on this judgment battery correlated approximately 0.80 with conventional measures of psychometric g. The response keys were consensually derived. Unlike mathematics or physics questions, the selection of items, scenarios, and options to assess psychometric g was guided roughly by a theory that emphasized complex judgment, but the explicit keys were unknown until the assessments had been made: they were determined by the average of everyone's responses, using deviation scores, correlations, or factor scores.
One way to understand the connection between expertise and consensus is to consider that for many performance domains, expertise largely reflects knowledge derived from experience. Since novices tend to have fewer experiences, their opinions err in various inconsistent directions. However, as experience is acquired, the opinions of journeymen through to experts become more consistent. According to this view, errors are random. Ratings data collected from large samples of respondents of varying expertise can thus be used to approximate the average ratings a substantial number of experts would provide were many experts available. Because the standard deviation of a mean will approach zero as the number of observations becomes very large, estimates based on groups of varying competence will provide converging estimates of the best performance standards. The means of these groups’ responses can be used to create effective scoringrubrics, or measurement standards to evaluate performance. This approach is particularly relevant to scoring subjective areas of knowledge that are scaled using Likert response scales, and the approach has been applied to develop scoring standards for several domains where experts are scarce.
In practice, analyses have demonstrated high levels of convergence between expert and CBA standards with values quantifying those standards highly correlated (PearsonRs ranging from .72 to .95), and with scores based on those standards also highly correlated (Rs ranging from .88 to .99) provided the sample size of both groups is large (Legree, Psotka, Tremble & Bourne, 2005). This convergence between CBA and expert referenced scores and the associated validity data indicate that CBA and expert based scoring can be used interchangeably, provided that the ratings data are collected using large samples of experts and novices or journeymen.
CBA is often computed by using the Pearson R correlation of each person's Likert scale judgments across a set of items against the mean of all people's judgments on those same items. The correlation is then a measure of that person's proximity to the consensus. It is also sometimes computed as a standardized deviation score from the consensus means of the group. These two procedures are mathematically isomorphic. If culture is considered to be shared knowledge, and the mean of the group's ratings on a focused domain of knowledge is considered a measure of the cultural consensus in that domain, then both procedures assess CBA as a measure of an individual person's cultural understanding.
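The following sketch illustrates both computations with NumPy on invented ratings; for simplicity each person's own ratings are left in the consensus mean, which a careful analysis might exclude.

```python
import numpy as np

# Invented data: 5 respondents rating 8 Likert items (1-7).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(5, 8)).astype(float)

consensus = ratings.mean(axis=0)  # item-wise group mean = scoring key

# Correlation form: each person's Pearson r with the consensus.
cba_corr = np.array(
    [np.corrcoef(person, consensus)[0, 1] for person in ratings]
)

# Deviation form: distance from the consensus, negated so that higher
# values mean closer to the consensus, as with the correlation.
cba_dev = -np.sqrt(((ratings - consensus) ** 2).mean(axis=1))

print(np.round(cba_corr, 2))
print(np.round(cba_dev, 2))
```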
However, it may be that the consensus is not evenly distributed over all subordinate items about a topic. Perhaps the knowledge content of the items is distributed over domains with differing consensus. For instance, conservatives who are libertarians may feel differently about invasion of privacy than conservatives who feel strongly about law and order. In fact, standardfactor analysisbrings this issue to the fore.
In either centroid orprincipal components analysis(PCA) the first factor scores are created by multiplying each rating by the correlation of the factor (usually the mean of all standardized ratings for each person) against each item's ratings. This multiplication weights each item by the correlation of the pattern of individual differences on each item (the component scores). If consensus is unevenly distributed over these items, some items may be more focused on the overall issues of the common factor. If an item correlates highly with the pattern of overall individual differences, then it is weighted more strongly in the overall factor scores. This weighting implicitly also weights the CBA score, since it is those items that share a common CBA pattern of consensus that are weighted more in factor analysis.
The transposed orQ methodologyfactor analysis, created byWilliam Stephenson (psychologist), brings this relationship out explicitly. CBA scores are statistically isomorphic to the component scores in PCA for a Q factor analysis. They are the loading of each person's responses on the mean of all people's responses. So, Q factor analysis may provide a superior CBA measure if it can be used first to select the people who represent the dominant dimension, over items that best represent a subordinate attribute dimension of a domain (such as liberalism in a political domain). Factor analysis can then provide the CBA of individuals along that particular axis of the domain.
In practice, when items are not easily created and arrayed to provide a highly reliable scale, the Q factor analysis is not necessary, since the original factor analysis should also select those items that have a common consensus. So, for instance, in a scale of items for political attitudes, the items may ask about attitudes toward big government; law and order; economic issues; labor issues; or libertarian issues. Which of these items most strongly bear on the political attitudes of the groups polled may be difficult to determine a priori. However, since factor analysis is a symmetric computation on the matrix of items and people, the original factor analysis of items, (when these are Likert scales) selects not just those items that are in a similar domain, but more generally, those items that have a similar consensus. The added advantage of this factor analytic technique is that items are automatically arranged along a factor so that the highest Likert ratings are also the highest CBA standard scores. Once selected, that factor determines the CBA (component) scores.
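A bare-bones sketch of the Q-mode computation is given below: people, not items, are correlated, and each person's loading on the first factor plays the role of a CBA score. Rotation and careful factor extraction, which a real Q analysis would include, are omitted, and the data are invented.

```python
import numpy as np

# Invented data: each of 20 people rates 10 items as a shared
# consensus profile plus personal noise.
rng = np.random.default_rng(1)
shared = rng.normal(4.0, 1.0, size=10)
ratings = shared + rng.normal(0.0, 0.8, size=(20, 10))

# Q-mode: correlate the *rows* (people) with one another.
person_corr = np.corrcoef(ratings)              # 20 x 20 matrix

# First principal factor of the person correlations.
eigvals, eigvecs = np.linalg.eigh(person_corr)  # eigenvalues ascending
loadings = eigvecs[:, -1]
if loadings.sum() < 0:                          # eigenvector sign is arbitrary
    loadings = -loadings

# Each loading indexes a person's proximity to the shared pattern.
print(np.round(loadings, 2))
```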
The most common critique of CBA standards is to question how an average could possibly be a maximal standard. This critique argues that CBA is unsuitable for maximum-performance tests of psychological attributes, especially intelligence. Even so, CBA techniques are routinely employed in various measures of non-traditional intelligences (e.g., practical, emotional, social, etc.). Detailed critiques are presented in Gottfredson (2003) and MacCann, Roberts, Matthews, & Zeidner (2004) as well as elsewhere in the scientific literature.
|
https://en.wikipedia.org/wiki/Consensus_based_assessment
|
Thediamond of oppositesis a type of two-dimensional plot used inpsychodramagroups. This tool can illuminate the presence of contradictions in processes that cannot be detected by any single questionnaire item using a traditional format such as theLikert scale. The diamond of opposites is asociometricscaling methodthat simultaneously measures positive and negative responses to a statement.
Psychodrama/Sociometry:
The psychological approach to counseling and exploring issues, both personal and in a wider social context, was founded byJ.L. Morenoin the 1920s. His unique approach to therapy and social change at that time involved using theater and roleplay to assist individuals and groups to change or improve their life circumstances, along with a somewhat lesser-known approach, called sociometry, for measuring social dynamics in groups and effecting change in groups and society. Sociometry measures the connections between individuals in any group, from small groups to world issues. Measuring connections and feelings within a group assists individuals within the group to make desired or needed changes, and also provides a wealth of information about the dynamics in any particular group or situation. The diamond of opposites is one type of sociometric assessment.
Unlike traditional question formats, especially thesemantic differentialformat where the respondent must choose a point on a one-dimensional scale anchored by two semantically opposite terms, the diamond of opposites allows the respondent to express attraction and repulsion independently. In this format, the stem describes an object, person or situation in relation to which the respondent is asked to indicate their degree of both attraction and repulsion. The two variables are plotted on two orthogonal axes.
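The sketch below shows why the two independent axes matter: a respondent who reports both high attraction and high repulsion is distinguishable from an indifferent one, which a single bipolar item would conflate. The 0–10 ranges and the cutoff are illustrative choices, not part of the published method.

```python
# Each response records attraction and repulsion independently (0-10).
responses = [
    {"who": "member A", "attraction": 9, "repulsion": 1},
    {"who": "member B", "attraction": 8, "repulsion": 7},
    {"who": "member C", "attraction": 1, "repulsion": 2},
]

def quadrant(response, cutoff=5):
    """Label the region of the two-dimensional plot the response falls in."""
    high_a = response["attraction"] >= cutoff
    high_r = response["repulsion"] >= cutoff
    if high_a and high_r:
        return "ambivalent (a contradiction a one-dimensional item would hide)"
    if high_a:
        return "attracted"
    if high_r:
        return "repelled"
    return "indifferent"

for r in responses:
    print(r["who"], "->", quadrant(r))
```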
|
https://en.wikipedia.org/wiki/Diamond_of_opposites
|
Anemployment websiteis awebsitethat deals specifically withemploymentorcareers. Many employment websites are designed to allowemployersto post job requirements for a position to be filled and are commonly known as job boards. Other employment sites offer employer reviews, career and job-search advice, and describe different job descriptions or employers. Through a job website, a prospective employee can locate and fill out ajob applicationor submitresumesover the Internet for the advertised position.
The Online Career Center was developed in 1992 byBill Warren[1]as a non-profit organization backed by forty major corporations to allow job hunters to post their resumes and forrecruitersto post job openings.[2]
In 1994, Robert J. McGovern began NetStart Inc. as software sold to companies for listing job openings on their websites and managing the incoming e-mails those listings generated. After an influx of two million dollars in investment capital,[3]he then moved this software to its own web address, at first listing the job openings from the companies who utilized the software.[4]NetStart Inc. changed its name in 1998 to operate under the name of its software,CareerBuilder.[5]The company received a further influx of seven million dollars from investment firms such asNew Enterprise Associatesto expand its operations.[6]
Six major newspapers joined forces in 1995 to list their classified sections online. The service was called CareerPath.com and featured help-wanted listings from the Los Angeles Times, the Boston Globe, Chicago Tribune, the New York Times, San Jose Mercury News and the Washington Post.[7]
The industry attempted to reach a broader, less tech-savvy base in 1998 whenHotjobs.comattempted to buy aSuper Bowlspot, but Fox rejected the ad for being in poor taste. The ad featured a janitor at a zoo sweeping out the elephant cage, completely unbeknownst to the animal. The elephant sits down briefly, and when it stands back up the janitor has disappeared, suggesting the worker is now stuck in the elephant's anus. The ad was meant to illustrate the plight of those stuck in jobs they hate, and to offer a solution through the company's Web site.[8]
In 1999,Monster.comran three 30-second Super Bowl ads at a cost of four million dollars.[9]One ad, which featured children speaking like adults and drolly intoning their dreams of working at various dead-end jobs to humorous effect, was far more popular than rival Hotjobs.com's ad about a security guard who transitions from a low-paying security job to the same job at a fancier building.[10]Soon thereafter, Monster.com was elevated to the top spot among online employment sites.[11]Hotjobs.com's ad wasn't as successful, but it gave the company enough of a boost for itsIPOin August.[12]
After being purchased in a joint venture byKnight RidderandTribune Companyin July 2000,[13]CareerBuilder absorbed competitor boards CareerPath.com and then Headhunter.net, which had already acquired CareerMosaic. Even with these aggressive mergers, CareerBuilder still trailed behind the number one employment site Jobsonline.com, number two Monster.com and number three Hotjobs.com.[14]
Monster.com made a move in 2001 to purchase Hotjobs.com for $374 million instock, but were unsuccessful due toYahoo's unsolicited cash and stock bid of $430 million late in the year. Yahoo had previously announced plans to enter the job board business, but decided to jump start that venture by purchasing the established brand.[15]In February 2010, Monster acquired HotJobs from Yahoo for $225 million.[16]
Ajob boardis awebsitethat facilitatesjob hunting; such sites range from large-scale generalist boards to niche job boards for job categories such asengineering,legal,insurance,social work,teaching,mobile appdevelopment, as well as cross-sector categories such asgreen jobs,ethical jobsandseasonal jobs. Users can typically upload theirrésumésand submit them to potentialemployersandrecruitersfor review, while employers and recruiters can post job ads and search for potential employees.
The termjob search enginemight refer to a job board with asearch enginestyle interface, or to a web site that actually indexes and searches other web sites.
Niche job boards are starting to play a bigger role in providing more targeted job vacancies to candidates and more targeted applicants to employers. Job boards for airport jobs and federal jobs, among others, provide a focused way of narrowing the search and reducing the time needed to apply to the most appropriate role.USAJobs.govis the United States' official website for jobs. It gathers job listings from over 500 federal agencies.[17]
Some web sites are simplysearch enginesthat collect results from multiple independent job boards. This is an example of bothmetasearch(since these are search engines which search other search engines) andvertical search(since the searches are limited to a specific topic - job listings).
Some of these newsearch enginesprimarily index traditional job boards. These sites aim to provide a "one-stop shop" for job-seekers who don't need to search the underlying job boards. In 2006, tensions developed between the job boards and severalscraper sites:Craigslistbanned scrapers from its job classifieds, andMonster.comspecifically banned scrapers by adopting arobots exclusion standardon all its pages, while other boards embraced them.
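For illustration, the Python standard library can parse such a robots exclusion file and tell a crawler what it may fetch; the rules and URLs below are invented, not Monster.com's actual policy.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt of the kind a job board might publish
# to ban scrapers from its listings.
rules = """
User-agent: *
Disallow: /jobs/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A well-behaved scraper checks before fetching each page.
print(parser.can_fetch("JobScraperBot", "https://example.com/jobs/12345"))  # False
print(parser.can_fetch("JobScraperBot", "https://example.com/about"))       # True
```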
Industry specific posting boards are also appearing. These consolidate all the vacancies in a very specific industry. The largest "niche" job board isDice.comwhich focuses on the IT industry. Many industry and professional associations offer members a job posting capability on the association website.
An employer review website is a type of employment website where past and currentemployeespost comments about their experiences working for a company or organization. An employer review website usually takes the form of aninternet forum. Typical comments are aboutmanagement,working conditions, andpay. Although employer review websites may produce links to potential employers, they do not necessarily list vacancies.[18][19]
Although many sites that provide access to job advertisements include pages with advice about writing resumes and CVs, performing well in interviews, and other topics of interest to job seekers there are sites that specialize in providing information of this kind, rather than job opportunities. One such isWorking in Canada. It does provide links to theCanadian Job Bank. However, most of its content is information about local labor markets (in Canada), requirements for working in various occupations, information about relevant laws and regulations, government services and grants, and so on. Most items could be of interest to people in various roles and conditions including those considering career options, job seekers, employers and employees.
Employment sites typically charge fees to employers for listing job postings. Often these are flat fees for a specific duration (30 days, 60 days, etc.). Other sites may allow employers to post basic listings for free, but charge a fee for more prominent placement of listings in search results. Employment sites like job aggregators use "pay-per-click" orpay-for-performancemodels, where the employer listing the job pays for clicks on the listing.[20][21]
In Japan, some sites have come under fire for allowing employers to list a job for free for an initial duration, then charging exorbitant fees after the free period expires. Most of these sites seem to have appeared within the last year in response to the labor shortage in Japan.[22]
Many job search engines and job boards encourage users to post theirresumeand contact details. While this is attractive for the site operators (who sell access to the resume bank toheadhuntersand recruiters), job-seekers should exercise caution in uploading personal information, since they have no control over where their resume will eventually be seen. Their resume may be viewed by a current employer or, worse, by criminals who may use information from it to amass and sell personal contact information, or even perpetrateidentity theft.[23][24]
|
https://en.wikipedia.org/wiki/Employer_review_website
|
TheMinnesota Multiphasic Personality Inventory(MMPI) is astandardizedpsychometrictest of adultpersonalityandpsychopathology.[1]A version for adolescents also exists, theMMPI-A, and was first published in 1992.[2]Psychologistsand other mental health professionals use various versions of the MMPI to help develop treatment plans, assist withdifferential diagnosis, help answer legal questions (forensic psychology), screen job candidates during thepersonnel selectionprocess, or as part of atherapeutic assessmentprocedure.[3]
The original MMPI was developed byStarke R. HathawayandJ. C. McKinley, faculty of theUniversity of Minnesota, and first published by theUniversity of Minnesota Pressin 1943.[4]It was replaced by an updated version, the MMPI-2, in 1989 (Butcher, Dahlstrom, Graham, Tellegen, and Kaemmer).[5]An alternative version of the test, theMMPI-2Restructured Form (MMPI-2-RF), published in 2008, retains some aspects of the traditional MMPI assessment strategy, but adopts a different theoretical approach topersonality testdevelopment. The newest version (MMPI-3) was released in 2020.[6]
The original authors of the MMPI were American psychologistStarke R. Hathawayand American neurologistJ. C. McKinley. The MMPI is copyrighted by theUniversity of Minnesota.
The MMPI was designed as an adult measure ofpsychopathologyand personality structure in 1939. Many additions and changes to the measure have been made over time to improve the interpretability of the original clinical scales. Additionally, there have been changes in the number of items in the measure, and other adjustments that reflect its current use as a tool for assessing modern conceptions of psychopathology andpersonality disorders.[7]Several developmental changes stand out as historically significant.
The MMPI-2-RF is a streamlined measure. Retaining only 338 of the original 567 items, its hierarchical scale structure provides non-redundant information across 51 scales that are easily interpretable.Validity scaleswere retained (revised), two new validity scales have been added (Fs in 2008 and RBS in 2011), and there are new scales that capture somatic complaints. All of the MMPI-2-RF's scales demonstrate either increased or equivalent construct and criterion validity compared to their MMPI-2 counterparts.[10][12][13]
Current versions of the test (MMPI-2 and MMPI-2-RF) can be completed onoptical scanforms or administered directly to individuals on the computer. The MMPI-2 can generate a Score Report or an Extended Score Report, which includes the Restructured Clinical scales from which the Restructured Form was later developed.[9]The MMPI-2 Extended Score Report includes scores on the original clinical scales as well as Content, Supplementary, and other subscales of potential interest to clinicians. Additionally, the MMPI-2-RF computer scoring offers an option for the administrator to select a specific reference group with which to contrast and compare an individual's obtained scores; comparison groups include clinical, non-clinical, medical, forensic, and pre-employment settings, to name a few. The newest version of the Pearson Q-Local computer scoring program offers the option of converting MMPI-2 data into MMPI-2-RF reports as well as numerous other new features. Use of the MMPI is tightly controlled. Any clinician using the MMPI is required to meet specific test publisher requirements in terms of training and experience, must pay for all administration materials including the annual computer scoring license and is charged for each report generated by computer.
In 2018, the University of Minnesota Press commissioned development of the MMPI-3, which was to be based in part on the MMPI-2-RF and include updated normative data. It was published in December 2020.[14][15]
The original MMPI was developed on a scale-by-scale basis in the late 1930s and early 1940s.[16]Hathaway and McKinley used an empirical [criterion] keying approach, with clinical scales derived by selecting items that were endorsed by patients known to have been diagnosed with certainpathologies.[17][18][19][20][21]The difference between this approach and other test development strategies used around that time was that it was in many ways atheoretical (not based on any particular theory) and thus the initial test was not aligned with the prevailingpsychodynamictheories. Theory in some ways affected the development process, if only because the candidate test items and patient groups on which scales were developed were affected by prevailing personality and psychopathological theories of the time.[22]The approach to MMPI development ostensibly enabled the test to capture aspects of human psychopathology that were recognizable and meaningful, despite changes in clinical theories.
However, the MMPI had flaws of validity that were soon apparent and could not be overlooked indefinitely. Thecontrol groupfor its original testing consisted of a small number of individuals, mostly young, white, and married men and women from rural areas of the Midwest. (The racial makeup of the respondents reflected the ethnic makeup of that time and place.) The MMPI also faced problems with its terminology, which was not relevant to the population that the test was intended to measure. It became necessary for the MMPI to measure a more diverse range of potential mental health problems, such as "suicidal tendencies, drug abuse, and treatment-related behaviors."[23]
The first major revision of the MMPI was the MMPI-2, which was standardized on a new national sample of adults in the United States and released in 1989.[8]The new standardization was based on 2,600 individuals from a more representative background than the MMPI.[24]It is appropriate for use with adults 18 and over. Subsequent revisions of certain test elements have been published, and a wide variety of subscales were introduced over many years to help clinicians interpret the results of the original 10 clinical scales. The current MMPI-2 has 567 items and usually takes between one and two hours to complete, depending on reading level. It is designed to require a 4.6 grade (Flesch–Kincaid) reading level.[24]There is an infrequently used abbreviated form of the test that consists of the MMPI-2's first 370 items.[25]The shorter version has been mainly used in circumstances that have not allowed the full version to be completed (e.g., illness or time pressure), but the scores available on the shorter version are not as extensive as those available in the 567-item version. The original form of the MMPI-2 is the third most frequently utilized test in the field of psychology, behind the most usedIQandachievement tests.
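For reference, the Flesch–Kincaid grade level is a simple formula over words, sentences, and syllables; the sketch below implements it with a rough vowel-group syllable count, so its output only approximates published calculators.

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59. Syllables are approximated by
    counting vowel groups, so results are rough."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(
        max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words
    )
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(round(fk_grade("I like my work. I feel tired most days."), 1))
```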
A version of the test designed for adolescents ages 14 to 18, the MMPI-A, was released in 1992. The youth version was developed to improve measurement of personality, behavior difficulties, and psychopathology among adolescents. It addressed limitations of using the original MMPI among adolescent populations.[26]Twelve- to thirteen-year-old children were assessed and could not adequately understand the question content so the MMPI-A is not meant for children younger than 14. People who are 18 and no longer in high school may appropriately be tested with the MMPI-2.[27]
Some concerns related to use of the MMPI with youth included inadequate item content, lack of appropriatenorms, and problems with extreme reporting. For example, many items were written from an adult perspective, and did not cover content critical to adolescents (e.g., peers, school). Likewise, adolescent norms were not published until the 1970s, and there was not consensus on whether adult or adolescent norms should be used when the instrument was administered to youth. Finally, the use of adult norms tended to overpathologize adolescents, who demonstrated elevations on most original MMPI scales (e.g., T scores greater than 70 on the F validity scale; marked elevations on clinical scales 8 and 9). Therefore, an adolescent version was developed and tested during the restandardization process of the MMPI, which resulted in the MMPI-A.[26]
The MMPI-A has 478 items. It includes the original 10 clinical scales (Hs, D, Hy, Pd, Mf, Pa, Pt, Sc, Ma, Si), six validity scales (?, L, F, F1, F2, K, VRIN, TRIN), 31 Harris Lingoes subscales, 15 content component scales (A-anx, A-obs, A-dep, A-hea, A-ain, A-biz, A-ang, A-cyn, A-con, A-lse, A-las, A-sod, A-fam, A-sch, A-trt), the Personality Psychopathology Five (PSY-5) scales (AGGR, PSYC, DISC, NEGE, INTR), three socialintroversionsubscales (Shyness/Self-Consciousness, Social Avoidance, Alienation), and six supplementary scales (A, R, MAC-R, ACK, PRO, IMM). There is also a short form of 350 items, which covers the basic scales (validity and clinical scales). The validity, clinical, content, and supplementary scales of the MMPI-A have demonstrated adequate to strongtest-retest reliability, internal consistency, and validity.[26]
A four-factor model (similar to that of all the MMPI instruments) was chosen for the MMPI-A.
The MMPI-A normative and clinical samples included 805 males and 815 females, ages 14 to 18, recruited from eight schools across the United States, and 420 males and 293 females, ages 14 to 18, recruited from treatment facilities in Minneapolis, Minnesota. Norms were prepared by standardizing raw scores using a uniformt-scoretransformation, which was developed byAuke Tellegenand adopted for the MMPI-2. This technique preserves the positive skew of scores but also allows percentile comparison.[26]
Strengths of the MMPI-A include the use of adolescent norms, appropriate and relevant item content, inclusion of a shortened version, a clear and comprehensive manual,[28]and strong evidence of validity.[29][30]
Critiques of the MMPI-A include a non-representative clinical norms sample, overlap in what the clinical scales measure, irrelevance of the mf scale,[28]as well as long length and high reading level of the instrument.[30]
The MMPI-A is one of the most commonly used instruments among adolescent populations.[30]
A restructured form of the MMPI-A, theMMPI-A-RFwas published in 2016.
The University of Minnesota Press published a new version of the MMPI-2, the MMPI-2 Restructured Form (MMPI-2-RF), in 2008.[31]The MMPI-2-RF builds on the Restructured Clinical (RC) scales developed in 2003,[9]and subsequently subjected to extensive research,[32]with an overriding goal of improveddiscriminant validity, or the ability of the test to reliably differentiate between clinical syndromes or diagnoses. Most of the MMPI and MMPI-2 Clinical Scales are relatively heterogeneous, i.e., they measure diverse groupings of signs and symptoms, such that an elevation on Scale 2 (Depression), for example, may or may not indicate a depressive disorder.[a]The MMPI-2-RF scales, on the other hand, are fairly homogeneous and are designed to more precisely measure distinct symptom constellations or disorders. From a theoretical perspective, the MMPI-2-RF scales rest on an assumption that psychopathology is a homogeneous condition that is additive.[33]
Advances in psychometric theory, test development methods, and statistical analyses used to develop the MMPI-2-RF were not available when the MMPI was developed.
The MMPI-3 was released in December 2020. Its primary goals were to enhance the item pool, update the test norms, optimize existing scales, and introduce new scales (that assess disordered eating, compulsivity, impulsivity, and self-importance).[34]It features a new, nationally representative normative sample, selected to match projections for race and ethnicity, education, and age. Spanish language norms are available for use with the U.S. Spanish translation of the MMPI-3.[35]
The original clinical scales were designed to measure common diagnoses of the era.
Code types are a combination of the two or three (and, according to a few authors, even four) highest-scoring clinical scales (e.g. scales 4, 8 and 6 form the code type 486). Code types are interpreted as a single, wider-ranging elevation, rather than by interpreting each scale individually. For profiles without defined code types, interpretation should focus on the individual scales.[36]
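A small sketch of how a code type could be derived from scale scores follows; the T-scores are invented, and the cutoff of 65 is a common elevation convention used here purely for illustration.

```python
# Invented T-scores on the ten clinical scales, keyed by scale number.
t_scores = {1: 58, 2: 62, 3: 55, 4: 78, 5: 50, 6: 71, 7: 60, 8: 74, 9: 52, 0: 48}

def code_type(t_scores, top=3, cutoff=65):
    """Return the highest-scoring elevated scales as a digit string
    (e.g. scales 4, 8, 6 -> '486'), or None if nothing is elevated."""
    elevated = sorted(
        (s for s in t_scores if t_scores[s] >= cutoff),
        key=lambda s: t_scores[s],
        reverse=True,
    )[:top]
    return "".join(str(s) for s in elevated) if elevated else None

print(code_type(t_scores))  # '486'
```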
This scale comes from the Minnesota Multiphasic Personality Inventory-2 (MMPI-2), where 50 statements compose the Psychopathic Deviate subscale. The 50 statements must be answered in true or false format as applied to one's self.[37]
The Psychopathic Deviate scale measures general social maladjustment and the absence of strongly pleasant experiences. The items on this scale tap into complaints about family and authority figures in general, self-alienation, social alienation and boredom.[38]
When diagnosing psychopathy, the MMPI-2's Psychopathic Deviate scale is considered one of the traditional personality tests that contain subscales relating to psychopathy, though they assess relatively non-specific tendencies towards antisocial or criminal behavior.[39]
The clinical scales are heterogeneous in their item content. To assist clinicians in interpreting the scales, researchers have developed subscales of more homogeneous items within each scale. The Harris–Lingoes (1955) subscales were one of the most widely used results of this approach[40]and were included in the MMPI-2[41]and MMPI-A.[42]
The Restructured Clinical scales were designed to be psychometrically improved versions of the original clinical scales, which were known to contain a high level of interscale correlation and overlapping items, and were confounded by the presence of an overarching factor that has since been extracted and placed in a separate scale (demoralization).[43]The RC scales measure the core constructs of the original clinical scales. Critics of the RC scales assert they have deviated too far from the original clinical scales, the implication being that previous research done on the clinical scales will not be relevant to the interpretation of the RC scales. Researchers on the RC scales respond that the RC scales predict pathology in their designated areas better than their concordant original clinical scales while using significantly fewer items and maintaining equal or higher internal consistency, reliability and validity. Further, unlike the original clinical scales, the RC scales are not saturated with the primary factor (demoralization, now captured in RCdem), which frequently produced diffuse elevations and made interpretation of results difficult. Finally, the RC scales have lower interscale correlations and, in contrast to the original clinical scales, contain no interscale item overlap.[44]The removal, through sophisticated psychometric methods, of the common variance spread across the older clinical scales due to a general factor common to psychopathology was described as aparadigm shiftin personality assessment.[45][46]Critics of the new scales argue that the removal of this common variance makes the RC scales less ecologically valid (less like real life), because real patients tend to present complex patterns of symptoms. Proponents of the MMPI-2-RF counter that this potential problem is addressed by being able to view elevations on other RC scales, which are less saturated with the general factor and therefore more transparent and much easier to interpret.
Thevalidity scalesin all versions of the MMPI-2 (MMPI-2 and RF) contain three basic types of validity measures: those that were designed to detect non-responding or inconsistent responding (CNS, VRIN, TRIN), those designed to detect when clients are over-reporting or exaggerating the prevalence or severity of psychological symptoms (F, FB, FP, FBS), and those designed to detect when test-takers are under-reporting or downplaying psychological symptoms (L, K, S). A new addition to the validity scales for the MMPI-2-RF is an over-reporting scale of somatic symptoms (FS), alongside revised versions of the MMPI-2 validity scales (VRIN-r, TRIN-r, F-r, FP-r, FBS-r, L-r, and K-r). The MMPI-2-RF does not include the S or FB scales, and the F-r scale now covers the entirety of the test.[48]
Although elevations on the clinical scales are significant indicators of certain psychological conditions, it is difficult to determine exactly what specific behaviors the high scores are related to. The content scales of the MMPI-2 were developed for the purpose of increasing the incremental validity of the clinical scales.[52]The content scales contain items intended to provide insight into specific types of symptoms and areas of functioning that the clinical scales do not measure, and are supposed to be used in addition to the clinical scales to interpret profiles. They were developed by Butcher, Graham, Williams and Ben-Porath using rational and statistical procedures similar to those used by Wiggins, who developed the original MMPI content scales.[52][53]
The items on the content scales contain obvious content and therefore are susceptible to response bias – exaggeration or denial of symptoms, and should be interpreted with caution. T scores greater than 65 on any content scale are considered high scores.[54]
The MMPI-2 and MMPI-A included subscales for some of the content scales to further specify the results. For example,Depression (DEP)was broken down intoLack of drive (DEP1),Dysphoria (DEP2),Self-depreciation (DEP3)andSuicidal ideation (DEP4).[56]
To supplement these multidimensional scales and to assist in interpreting the frequently seen diffuse elevations due to the general factor (removed in the RC scales),[57][58]the supplemental scales were also developed, with the more frequently used being the substance abuse scales (MAC-R, APS, AAS), designed to assess the extent to which a client admits to or is prone toabusing substances, and the A (anxiety) and R (repression) scales, developed by Welsh after conducting afactor analysisof the original MMPI item pool.
The PSY-5 is a set of scales measuring dimensional traits of personality disorders, originally developed from factor analysis of the personality disorder content of theDiagnostic and Statistical Manual of Mental Disorders.[60]Originally, these scales were titled: Aggressiveness, Psychoticism, Constraint, Negative Emotionality/Neuroticism, and Positive Emotionality/Extraversion;[60]however, in the most current edition of the MMPI-2 and MMPI-2-RF, the Constraint and Positive Emotionality scales have been reversed and renamed as Disconstraint and Introversion / Low Positive Emotionality.[61]
Across several large samples including clinical, college, and normative populations, the MMPI-2 PSY-5 scales showed moderate internal consistency and intercorrelations comparable with the domain scales on the NEO-PI-R Big Five personality measure.[60]Also, scores on the MMPI-2 PSY-5 scales appear to be similar across genders,[60]and the structure of the PSY-5 has been reproduced in a Dutch psychiatric sample.[62]
The Minnesota Multiphasic Personality Inventory – Adolescent – Restructured Form (MMPI-A-RF) is a broad-band instrument used to psychologically evaluate adolescents.[64]It was published in 2016 and was primarily authored by Robert P. Archer, Richard W. Handel, Yossef S. Ben-Porath, and Auke Tellegen. It is a revised version of the Minnesota Multiphasic Personality Inventory – Adolescent (MMPI-A). Like the MMPI-A, this version is intended for use with adolescents aged 14–18 years old. It consists of 241 true-false items which produce scores on 48 scales: 6 Validity scales (VRIN-r, TRIN-r, CRIN, F-r, L-r, K-r), 3 Higher-Order scales (EID, THD, BXD), 9 Restructured Clinical scales (RCd, RC1, RC2, RC3, RC4, RC6, RC7, RC8, RC9), 25 Specific Problem scales, and revised versions of the MMPI-A PSY-5 scales (AGGR-r, PSYC-r, DISC-r, NEGE-r, INTR-r).[65]It also features 14 critical items, including 7 regarding depression and suicidal ideation.[65]
The MMPI-A-RF was designed to address limitations of its predecessor, such as the scale heterogeneity and item overlap of the original clinical scales. The weaknesses of the clinical scales resulted in intercorrelations of several MMPI-A scales and limited discriminant validity of the scales. To address the issues with the clinical scales, the MMPI-A underwent a revision similar to the restructuring of the MMPI-2 to the MMPI-2-RF. Specifically, a demoralization scale was developed, and each clinical scale underwent exploratory factor analysis to identify its distinctive components.[65]
Additionally, the Specific Problems (SP) scales were developed. Whereas the RC scales provide a broad overview of psychological problems (e.g., low positive emotions or symptoms of depression; antisocial behavior; bizarre thoughts), the SP scales offered narrow, focused descriptions of the problems the individual reported he or she was experiencing. The MMPI-2-RF SP Scales were used as a template. First, corresponding items from the MMPI-2-RF were identified in the MMPI-A, and then 58 items unique to the MMPI-A were added to the item pool. This way, the MMPI-A-RF SP scales could maintain continuity with the MMPI-2-RF but in addition address issues specific to adolescent problems. After a preliminary set of SP scales were developed based on their content, each scale went through statistical tests (factor analysis) to ensure they did not overlap or relate too strongly to the RC demoralization scale.[66]Additional statistical analyses were put in place to make sure each SP scale contained items that were strongly related (correlated) with its scale and less strongly associated with other scales; in the end, each item appeared on only one SP scale. These scales were developed to provide additional information in association with the RC scales, but SP scales are not subscales and can be interpreted even when the related RC scale is not elevated.[66]
As noted above, 25 SP scales were developed. Of these, 19 have the same names as the corresponding MMPI-2-RF SP scales, although the specific items that construct SP scales vary per form. The following 5 scales were unique to the MMPI-A-RF: Obsessions/Compulsions (OCS), Antisocial Attitudes (ASA), Conduct Problems (CNP), Negative Peer Influence (NPI), and Specific Fears (SPF).
The SP scales were organized into four groupings: Somatic/Cognitive, Internalizing, Externalizing, and Interpersonal Scales. The Somatic/Cognitive scales (MLS, GIC, HPC, NUC, and COG) share their names with the SP scales on the MMPI-2-RF, are related to RC1, and focus on aspects of physical health and functioning. There are nine Internalizing scales. The first three (HLP, SFD, and NFC) are related to aspects of demoralization, or the general sense of unhappiness, and the remaining scales (OCS, STW, AXY, ANP, BRF, SPF) assess for Dysfunctional Negative Emotions (e.g., a tendency toward worry, fearfulness, and anxiety). Six Externalizing scales (NSA, ASA, CNP, SUB, NPI, and AGG) are related to antisocial behavior, and the need for excitement and stimulating activity (i.e., hypomanic activation). Finally, Interpersonal scales (FML, IPP, SAV, SHY, and DSF), while not related to particular RC scales, focus on aspects of social and relational functioning with family and peers.[67]
Additionally, the 478-item length of the MMPI-A was identified as a challenge to adolescent attention span and concentration. To address this, the MMPI-A-RF has fewer than half the items of the MMPI-A.[65]
Higher-Order (H-O) Scales were introduced with the MMPI-2-RF, and they are identical in the MMPI-A-RF and the MMPI-3. Their function is to assess problems in three general areas of functioning: affective, cognitive (thought), and behavioral.[68]
The MMPI-2-RF includes two Interest Scales. The Aesthetic-Literary Interests (AES) scale rates interest in literature, music, theatre, and the like, and the Mechanical-Physical Interests (MEC) scale measures interest in construction and repair, and general interest in the outdoors and sports.[72]
Like many standardized tests, scores on the various scales of the MMPI-2 and the MMPI-2-RF are not representative of either percentile rank or how "well" or "poorly" someone has done on the test. Rather, analysis looks at relative elevation of factors compared to the various norm groups studied. Raw scores on the scales are transformed into a standardized metric known as T-scores (mean equals 50, standard deviation equals 10), making interpretation easier for clinicians. Test manufacturers and publishers ask test purchasers to prove they are qualified to purchase the MMPI/MMPI-2/MMPI-2-RF and other tests.[73]
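As a rough sketch of the arithmetic involved (not the publisher's actual scoring procedure), a linear T-score rescales a raw score against the norm group's mean and standard deviation. Note that the MMPI-2 in fact uses uniform T-scores for many scales, a non-linear variant that equates percentiles across scales, and the norm values below are placeholders:

```python
def t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Linear T-score: rescaled so the norm group has mean 50 and SD 10."""
    return 50 + 10 * (raw - norm_mean) / norm_sd

# Placeholder norm-group values, for illustration only:
print(t_score(raw=28, norm_mean=20, norm_sd=5))  # 66.0, i.e. 1.6 SD above the norm mean
```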
Psychologist Paul Lees-Haley developed the FBS (Fake Bad Scale). Although the FBS acronym remains in use, the official name for the scale changed to Symptom Validity Scale when it was incorporated into the standard scoring reports produced by Pearson, the licensed publisher.[74]Some psychologists question the validity and utility of the FBS scale. The peer-reviewed journalPsychological Injury and Lawpublished a series of pro and con articles in 2008, 2009, and 2010.[75][76][77][78]Investigations of the factor structure of the Symptom Validity Scale (FBS and FBS-r) raise doubts about the scale's construct and predictive validity in the detection ofmalingering.[79][80]
One of the biggest criticisms of the original MMPI has been the difference in scores between white and non-white respondents.
In the 1970s, Charles McCreary and Eligio Padilla fromUCLAcompared scores of black, white and Mexican-American men and found that non-whites tended to score five points higher on the test. They stated: "There is continuing controversy about the appropriateness of the MMPI when decisions involve persons from non-white racial and ethnic backgrounds. In general, studies of such divergent populations as prison inmates, medical patients, psychiatric patients, and high school and college students have found that blacks usually score higher than whites on the L, F, Sc, and Ma scales. There is near agreement that the notion of more psychopathology in racial ethnic minority groups is simplistic and untenable. Nevertheless, three divergent explanations of racial differences on the MMPI have been suggested. Black-white MMPI differences reflect variations in values, conceptions, and expectations that result from growing up in different cultures. Another point of view maintains that differences on the MMPI between blacks and whites are not a reflection of racial differences, but rather a reflection of overriding socioeconomic variations between racial groups. Thirdly, MMPI scales may reflect socioeconomic factors, while other scales are primarily race-related."[81]
The MMPI-2 is currently available in 27 different languages,[82]including:
The Chinese MMPI-2 was developed by Fanny M. Cheung, Weizhen Song, and Jianxin Zhang for Hong Kong and adapted for use in the mainland.[83]The Chinese MMPI was used as the base instrument, and items that were unchanged in the MMPI-2 were retained from it. New items on the Chinese MMPI-2 underwent translation from English to Chinese and then back-translation from Chinese to English to establish uniformity of the items and their content. The psychometrics are robust, with the Chinese MMPI-2 having high reliability (a measure of whether the results of the scale are consistent). Reliability coefficients were found to be over 0.8 for the test in Hong Kong and between 0.58 and 0.91 across scales for the mainland. In addition, the correlation of the Chinese MMPI-2 and the English MMPI-2 was found to average 0.64 for the clinical scales and 0.68 for the content scales, indicating that the Chinese MMPI-2 is an effective tool for personality assessment.[83][84]
The Korean MMPI-2 was initially translated by Kyunghee Han through a process of multiple rounds of translation (English to Korean) and back-translation (Korean to English), and it was tested in a sample of 726 Korean college students.[85][86]In general, the test-retest reliabilities in the Korean sample were comparable to those in the American sample. For both cultural samples, the median test-retest reliabilities were found to be higher for females than for males: 0.75 for Korean males and 0.78 for American males, versus 0.85 for Korean females and 0.81 for American females. After retranslating and revising the items with minor translation accuracy problems, the final version of the Korean MMPI-2 was published in 2005.[87]The published Korean MMPI-2 was standardized using a Korean adult normative sample whose demographics were similar to the 2000 Korean Census data. Compared to the U.S. norm, the scale means of the Korean norm were significantly elevated; however, the reliabilities and validity of the Korean MMPI-2 were still found to be comparable with those of the English MMPI-2. The Korean MMPI-2 was further validated using a Korean psychiatric sample from inpatient and outpatient facilities of Samsung National Hospital in Seoul. The internal consistency of the MMPI-2 scales for the psychiatric sample was comparable to the results obtained from the normative samples. Robust validity of the Korean MMPI-2 scales was evidenced by correlations with the SCL-90-R scales, behavioral correlates, and therapist ratings.[88]The Korean MMPI-2-RF was published in 2011, and it was standardized using the Korean MMPI-2 normative sample with minor modifications.[89]
The MMPI-2 was translated into the Hmong language by Deinard, Butcher, Thao, Vang and Hang. The items for the Hmong-language MMPI-2 were obtained by translation and back-translation from the English version. After linguistic evaluation to ensure that the Hmong-language MMPI-2 was equivalent to the English MMPI-2, studies were conducted to assess whether the scales meant and measured the same concepts across the two languages. The findings from the Hmong-language and English MMPI-2 were found to be equivalent, indicating that the results obtained for a person tested with either version were very similar.[90]
As of January 2025, the MMPI-3 is available in English, Dutch/Flemish, French (Canada), Japanese and Spanish (US). Translations are in development for Chinese, Danish, French (France), German, Greek, Hebrew, Italian, Korean, Norwegian, Spanish (Mexico, Central America), Spanish (Spain, South America, Central America), and Swedish.[91]
|
https://en.wikipedia.org/wiki/F-scale_(MMPI)
|
In the analysis of multivariate observations designed to assess subjects with respect to an attribute, a Guttman scale (named after Louis Guttman) is a single (unidimensional) ordinal scale for the assessment of the attribute, from which the original observations may be reproduced. The discovery of a Guttman scale in data depends on their multivariate distribution's conforming to a particular structure (see below). Hence, a Guttman scale is a hypothesis about the structure of the data, formulated with respect to a specified attribute and a specified population, and cannot be constructed for any given set of observations. Contrary to a widespread belief, a Guttman scale is not limited to dichotomous variables and does not necessarily determine an order among the variables. But if variables are all dichotomous, the variables are indeed ordered by their sensitivity in recording the assessed attribute, as illustrated by Example 1.
Example 1: Dichotomous variables
A Guttman scale may be hypothesized for the following five questions that concern the attribute "acceptance of social contact with immigrants" (based on theBogardus social distance scale), presented to a suitable population:
A positive response by a particular respondent to any question in this list suggests positive responses by that respondent to all preceding questions in this list. Hence one could expect to obtain only the responses listed in the shaded part (columns 1–5) of Table 1.
Every row in the shaded part of Table 1 (columns 1–5) is the responseprofileof any number (≥ 0) of respondents. Every profile in this table indicates acceptance of immigrants in all senses indicated by the previous profile, plus an additional sense in which immigrants are accepted. If, in a large number of observations, only the profiles listed in Table 1 are observed, then the Guttman scale hypothesis is supported, and the values of the scale (last column of Table 1) have the following properties:
A Guttman scale, if supported by data, is useful for efficiently assessing subjects (respondents, test-takers, or any collection of investigated objects) on a one-dimensional scale with respect to the specified attribute. Typically, Guttman scales are found with respect to attributes that are narrowly defined.
While other scaling techniques (e.g.,Likert scale) produce a single scale by summing up respondents' scores—a procedure that assumes, often without justification, that all observed variables have equal weights — Guttman scale avoids weighting the observed variables; thus 'respecting' data for what they are. If a Guttman scale is confirmed, the measurement of the attribute isintrinsicallyone-dimensional; the unidimensionality is not forced by summation or averaging. This feature renders it appropriate for the construction of replicable scientific theories and meaningful measurements, as explicated infacet theory.
Given a data set of N subjects observed with respect to n ordinal variables, each having any finite number (≥ 2) of numerical categories ordered by increasing strength of a pre-specified attribute, let a_ij be the score obtained by subject i on variable j, and define the list of scores that subject i obtained on the n variables, a_i = a_i1 ... a_in, to be the profile of subject i. (The number of categories may be different in different variables; and the order of the variables in the profiles is not important but should be fixed.)
Define:
Two profiles, a_s and a_t, are equal, denoted a_s = a_t, iff a_sj = a_tj for all j = 1 ... n.
Profile a_s is greater than profile a_t, denoted a_s > a_t, iff a_sj ≥ a_tj for all j = 1 ... n and a_sj' > a_tj' for at least one variable, j'.
Profiles a_s and a_t are comparable, denoted a_s S a_t, iff a_s = a_t, or a_s > a_t, or a_t > a_s.
Profiles a_s and a_t are incomparable, denoted a_s $ a_t, if they are not comparable (that is, for at least one variable, j', a_sj' > a_tj', and for at least one other variable, j'', a_tj'' > a_sj'').
For data sets where the categories of all variables are similarly ordered numerically (from high to low or from low to high) with respect to a given attribute, a Guttman scale is defined simply thus:
Definition: A Guttman scale is a data set in which all profile-pairs are comparable.
Consider the following four variables that assess arithmetic skills among a population P of pupils:
V1: Can pupil (p) perform addition of numbers? No=1; Yes, but only of two-digit numbers=2; Yes=3.
V2: Does pupil (p) know the (1-10) multiplication table? No=1; Yes=2.
V3: Can pupil (p) perform multiplication of numbers? No=1; Yes, but only of two-digit numbers=2; Yes=3.
V4: Can pupil (p) perform long division? No=1; Yes=2.
Data collected for the above four variables among a population of school children may be hypothesized to exhibit the Guttman scale shown below in Table 2:
Table 2. Data of the four ordinal arithmetic-skill variables, hypothesized to form a Guttman scale; the last column of the table lists each profile's scale score.
The set of profiles hypothesized to occur (the shaded part of Table 2) illustrates the defining feature of the Guttman scale, namely, that any pair of profiles is comparable. Here too, if the hypothesis is confirmed, a single scale score reproduces a subject's responses on all the observed variables.
Any ordered set of numbers could serve as the scale. In this illustration we chose the sum of the profile scores. According to facet theory, only in data that conform to a Guttman scale may such a summation be justified.
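A minimal sketch of the definition in Python (the function names are mine, not standard terminology): the data form a Guttman scale exactly when every pair of observed profiles is comparable, and the profiles below are one plausible hypothesized set for the arithmetic-skills example.

```python
from itertools import combinations

def comparable(a, b):
    """True iff a >= b componentwise or b >= a componentwise (equal profiles count)."""
    return all(x >= y for x, y in zip(a, b)) or all(y >= x for x, y in zip(a, b))

def is_guttman_scale(profiles):
    """True iff every pair of observed profiles is comparable."""
    return all(comparable(a, b) for a, b in combinations(profiles, 2))

# Hypothesized profiles for (V1, V2, V3, V4); scale score = sum of the profile.
profiles = [(1, 1, 1, 1), (2, 1, 1, 1), (3, 1, 1, 1), (3, 2, 1, 1),
            (3, 2, 2, 1), (3, 2, 3, 1), (3, 2, 3, 2)]
print(is_guttman_scale(profiles))   # True
print([sum(p) for p in profiles])   # [4, 5, 6, 7, 8, 9, 10]
```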
In practice, perfect ("deterministic") Guttman scales are rare, but approximate ones have been found in specific populations with respect to attributes such as religious practices, narrowly defined domains of knowledge, specific skills, and ownership of household appliances.[1]When data do not conform to a Guttman scale, they may either represent a Guttman scale with noise (and treated stochastically[1]), or they may have a more complex structure requiring multiple scaling for identifying the scales intrinsic to them.
The extent to which a data set conforms to a Guttman scale can be estimated from the coefficient of reproducibility,[2][3]of which there are a few versions, depending on statistical assumptions and limitations. Guttman's original definition of the reproducibility coefficient, C_R, is simply 1 minus the ratio of the number of errors to the number of entries in the data set. And, to ensure that there is a range of responses (not the case if all respondents endorsed only one item), the coefficient of scalability is used.[4]
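In this simplest version the coefficient is one line of arithmetic; the sketch below assumes the "errors" (deviations from the nearest perfect scale pattern) have already been counted, since the published variants of the coefficient differ mainly in how errors are counted. Conventionally, values of about 0.90 or higher are taken as support for the scale hypothesis.

```python
def coefficient_of_reproducibility(n_errors: int, n_subjects: int, n_items: int) -> float:
    """Guttman's C_R: 1 minus the ratio of errors to total entries in the data set."""
    return 1 - n_errors / (n_subjects * n_items)

# e.g. 30 errors in a data set of 100 subjects answering 5 items:
print(coefficient_of_reproducibility(30, 100, 5))  # 0.94
```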
In Guttman scaling are found the beginnings of item response theory which, in contrast to classical test theory, acknowledges that items in questionnaires do not all have the same level of difficulty. Non-deterministic (i.e., stochastic) models have been developed, such as the Mokken scale and the Rasch model. The Guttman scale has been generalized to the theory and procedures of "multiple scaling", which identifies the minimum number of scales needed for satisfactory reproducibility.
As a procedure that ties substantive contents to logical aspects of data, the Guttman scale heralded the advent of facet theory developed by Louis Guttman and his associates.
Guttman's[3]original definition of a scale also allows for the exploratory scaling analysis of qualitative variables (nominal variables, or ordinal variables that do not necessarily belong to a pre-specified common attribute). This definition of a Guttman scale relies on the prior definition of a simple function.
For a totally ordered set X, say 1, 2, ..., m, and another finite set Y with k elements, k ≤ m, a function from X to Y is a simple function if X can be partitioned into k intervals that are in one-to-one correspondence with the values of Y.
A Guttman scale may then be defined for a data set of n variables, with the jth variable having k_j (qualitative, not necessarily ordered) categories, thus:
Definition: A Guttman scale is a data set for which there exists an ordinal variable, X, with a finite number m of categories, say 1, ..., m, with m ≥ max_j(k_j), and a permutation of the subjects' profiles such that each variable in the data set is a simple function of X.
Despite its seeming elegance and appeal for exploratory research, this definition has not been sufficiently studied or applied.
|
https://en.wikipedia.org/wiki/Guttman_scale
|
Phrase completion scales are a type of psychometric scale used in questionnaires. Developed in response to the problems associated with Likert scales, phrase completions are concise, unidimensional measures that tap ordinal-level data in a manner that approximates interval-level data.
Phrase completions consist of a phrase followed by an eleven-point response key. The phrase introduces part of the concept. Marking a reply on the response key completes the concept. The response key represents the underlying theoretical continuum. Zero (0) indicates the absence of the construct, while ten (10) indicates the theorized maximum amount of the construct. Response keys are reversed on alternate items to mitigate response set bias.
I am aware of the presence of God or the Divine
After the questionnaire is completed, the scores on the individual items are summed to create a test score for the respondent. Hence, phrase completions, like Likert scales, are often considered to be summative scales.
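A minimal scoring sketch with made-up item responses: reverse-keyed items are reflected around the response key (r becomes 10 − r) before summing, so that a high total always means more of the construct.

```python
def phrase_completion_score(responses, reversed_items):
    """Sum 0-10 responses, re-reflecting reverse-keyed items (r -> 10 - r)."""
    return sum(10 - r if i in reversed_items else r
               for i, r in enumerate(responses))

# Four made-up items; the response keys of items 1 and 3 were reversed.
print(phrase_completion_score([7, 2, 9, 4], reversed_items={1, 3}))  # 7 + 8 + 9 + 6 = 30
```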
The response categories represent an ordinal level of measurement. Ordinal-level data, however, varies in terms of how closely it approximates interval-level data. By using a numerical continuum as the response key instead of sentiments that reflect intensity of agreement, respondents may be able to quantify their responses in more equal units.
|
https://en.wikipedia.org/wiki/Phrase_completions
|
A rating scale is a set of categories designed to obtain information about a quantitative or a qualitative attribute. In the social sciences, particularly psychology, common examples are the Likert response scale and 0–10 rating scales, where a person selects the number that reflects the perceived quality of a product.
A rating scale is a method that requires the rater to assign a value, sometimes numeric, to the rated object, as a measure of some rated attribute.
All rating scales can be classified into one of these types:
Some data are measured at the ordinal level. Numbers indicate the relative position of items, but not the magnitude of difference between them. Attitude and opinion scales are usually ordinal; one example is the Likert response scale.
Some data are measured at theinterval level. Numbers indicate the magnitude of difference between items, but there is no absolute zero point. A good example is a Fahrenheit/Celsius temperature scale where the differences between numbers matter, but placement of zero does not.
Some data are measured at theratio level. Numbers indicate magnitude of difference and there is a fixed zero point. Ratios can be calculated. Examples include age, income, price, costs, sales revenue, sales volume and market share.
More than one rating scale question is required tomeasurean attitude or perception due to the requirement for statistical comparisons between the categories in thepolytomous Rasch modelfor ordered categories.[1]Inclassical test theory, more than one question is required to obtain an index of internal reliability such asCronbach's alpha,[2]which is a basic criterion for assessing the effectiveness of a rating scale.
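For reference, Cronbach's alpha can be computed from a respondents-by-items score matrix with the standard formula alpha = (k / (k − 1)) × (1 − sum of item variances / variance of total scores); the ratings below are made up for illustration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of item ratings."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up ratings from four respondents on three related items.
ratings = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 1]])
print(round(cronbach_alpha(ratings), 2))  # ~0.99: these items are strongly intercorrelated
```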
Rating scales are used widely online in an attempt to provide indications of consumer opinions of products. Examples of sites which employ rating scales are IMDb, Epinions.com, Yahoo! Movies, Amazon.com, BoardGameGeek and TV.com, which use a rating scale from 0 to 100 in order to obtain "personalised film recommendations".
In almost all cases, online rating scales only allow one rating per user per product, though there are exceptions such as Ratings.net, which allows users to rate products in relation to several qualities. Most online rating facilities also provide few or no qualitative descriptions of the rating categories, although again there are exceptions such as Yahoo! Movies, which labels each of the categories between F and A+, and BoardGameGeek, which provides explicit descriptions of each category from 1 to 10. Often, only the top and bottom categories are described, such as on IMDb's online rating facility.
Validity refers to how well a tool measures what it intends to measure.
With each user rating a product only once, for example in a category from 1 to 10, there is no means for evaluating internal reliability using an index such as Cronbach's alpha. It is therefore impossible to evaluate the validity of the ratings as measures of viewer perceptions. Establishing validity would require establishing both reliability and accuracy (i.e., that the ratings represent what they are supposed to represent). The degree of validity of an instrument is determined through the application of logic and/or statistical procedures: "A measurement procedure is valid to the degree that it measures what it proposes to measure."
Another fundamental issue is that online ratings usually involve convenience sampling, much like television polls; i.e., they represent only the opinions of those inclined to submit ratings.
Validity is concerned with different aspects of the measurement process. Each of these types uses logic, statistical verification or both to determine the degree of validity and has special value under certain conditions. Types of validity include content validity, predictive validity, and construct validity.
Sampling errors can lead to results which have a specific bias, or are only relevant to a specific subgroup. Consider this example: suppose that a film only appeals to a specialist audience—90% of them are devotees of this genre, and only 10% are people with a general interest in movies. Assume the film is very popular among the audience that views it, and that only those who feel most strongly about the film are inclined to rate the film online; hence the raters are all drawn from the devotees. This combination may lead to very high ratings of the film, which do not generalize beyond the people who actually see the film (or possibly even beyond those who actually rate it).
Qualitative descriptions of categories improve the usefulness of a rating scale. For example, if only the points 1–10 are given without description, some people may select 10 rarely, whereas others may select the category often. If, instead, "10" is described as "near flawless", the category is more likely to mean the same thing to different people. This applies to all categories, not just the extreme points.
The above issues are compounded when aggregated statistics such as averages are used for lists and rankings of products. User ratings are at best ordinal categorizations. While it is not uncommon to calculate averages or means for such data, doing so cannot be justified, because calculating an average requires equal intervals, i.e., the same numeric difference must represent the same difference in perceived quality at every point on the scale.
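A tiny demonstration of the problem: relabelling the categories with any other order-preserving numbers is legitimate for ordinal data, yet it can change which product "wins" on average, which shows that the averages were never meaningful comparisons to begin with.

```python
ratings_a = [5, 5, 1, 1]   # a polarizing product
ratings_b = [3, 3, 3, 3]   # a consistently middling product

def mean(xs):
    return sum(xs) / len(xs)

print(mean(ratings_a), mean(ratings_b))   # 3.0 3.0 -- a tie

# An order-preserving recode (1 -> 1, 3 -> 4, 5 -> 5) keeps the ordinal
# information intact but breaks the tie.
recode = {1: 1, 3: 4, 5: 5}
print(mean([recode[r] for r in ratings_a]),   # 3.0
      mean([recode[r] for r in ratings_b]))   # 4.0
```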
More developed methodologies include Choice Modelling or Maximum Difference methods, the latter being related to the Rasch model due to the connection between Thurstone's law of comparative judgement and the Rasch model.
An international collaborative research effort[3]has introduced a data-driven algorithm for rating scale reduction. It is based on the area under the receiver operating characteristic curve.
The historical origins of rating scales were reevaluated following a significant archaeological discovery inTbilisi, Georgia, in 2010. Excavators unearthed a tablet dating back to the early medieval period, marked with ancient Georgian script.[4]This tablet showcased a series of linear markings, interpreted as an early form of a rating scale. The inscriptions provided insights into medieval methods of quantification and evaluation, suggesting an embryonic version of modern rating scales. This discovery is currently preserved at theNational Museum of Georgia.[5]
|
https://en.wikipedia.org/wiki/Rating_scale
|
Reddit (/ˈrɛdɪt/ RED-it) is an American proprietary social news aggregation and forum social media platform. Registered users (commonly referred to as "redditors") submit content to the site such as links, text posts, images, and videos, which are then voted up or down ("upvoted" or "downvoted") by other members. Posts are organized by subject into user-created boards called "subreddits". Submissions with more upvotes appear towards the top of their subreddit and, if they receive enough upvotes, ultimately on the site's front page. Reddit administrators moderate the communities. Moderation is also conducted by community-specific moderators, who are unpaid volunteers.[6]It is operated by Reddit, Inc., based in San Francisco.[7][8]
As of February 2025, Reddit is theninth most-visited website in the world. According to data provided bySimilarweb, 51.75% of the website traffic comes from the United States, followed by Canada at 7.01%, United Kingdom at 6.97%, Australia at 3.97%, Germany at 3%, and the remaining 28.37% split among other countries.[7]
Reddit was founded by University of Virginia roommates Steve Huffman and Alexis Ohanian, as well as Aaron Swartz, in 2005. Condé Nast Publications acquired the site in October 2006. In 2011, Reddit became an independent subsidiary of Condé Nast's parent company, Advance Publications.[9]Reddit debuted on the stock market on the morning of March 21, 2024, with the ticker symbol RDDT.[10]
Reddit has been noted for its role inpolitical activism, with notableleft-wingandanti-theistsubcultures on the website.[12]It has received praise for many of its features, such as the ability to create several subreddits for niche communities.[13][14]The platform has also received criticism for the spread ofmisinformationand its voting system which can encourage onlineecho chambers.[15]In its early years, Reddit also received controversy over hostingmisogynistic content, including thedoxxingoferotic modelsandrevenge porn.[16]
The idea and initial development of Reddit originated with college roommatesSteve HuffmanandAlexis Ohanianin 2005, who attended a lecture by programmer-entrepreneurPaul GrahaminBostonduring their spring break fromUniversity of Virginia.[17][18][19]After speaking with Huffman and Ohanian following the lecture, Graham invited the two to apply to his startup incubatorY Combinator.[17]Their initial idea, My Mobile Menu, was unsuccessful,[20][21]and was intended to allow users to order food bySMStext messaging.[17][18]During a brainstorming session to pitch another startup, the idea was created for what Graham called the "front page of the Internet".[21]For that idea, Huffman and Ohanian were accepted in Y Combinator's first class.[17][18]Supported by the funding from Y Combinator,[22]Huffman coded the site inCommon Lisp[23]and together with Ohanian launched Reddit in June 2005.[24][25]Embarrassed by an empty-looking site, the founders created hundreds of fake users for their posts to make it look more populated.[26]
The team expanded to include Christopher Slowe in November 2005. Between November 2005 and January 2006, Reddit merged with Aaron Swartz's company Infogami, and Swartz became an equal owner of the resulting parent company, Not A Bug.[27][28]Swartz then helped rewrite the software running Reddit using web.py, a web framework he developed. In his blog post "Rewriting Reddit",[29]Swartz explained that the switch from Lisp to Python, using web.py, was driven by a desire for simplicity, maintainability, and performance, a change he defended against skepticism and critique from the Lisp community.[29](In 2020, Ohanian claimed that rather than Swartz being a co-founder, the correct description would be that Swartz's company was acquired by Reddit 6 months after he and Huffman had started.)[30]
Huffman and Ohanian sold Reddit toCondé Nast Publications, owner ofWired, on October 31, 2006, for a reported $10 million to $20 million[17][31]and the team moved to San Francisco.[31]In November 2006, Swartz blogged complaining about the new corporate environment, criticizing its level of productivity.[32]In January 2007, Swartz was fired for undisclosed reasons.[33]
Huffman and Ohanian left Reddit in 2009.[34]Huffman went on to co-foundHipmunkwithAdam Goldstein, and later recruited Ohanian[35]and Slowe to the new company.[36]After Huffman and Ohanian left Reddit, Erik Martin, who joined the company as a community manager in 2008 and later became general manager in 2011, played a role in Reddit's growth.[37]VentureBeatnoted that Martin was "responsible for keeping the site going" under Condé Nast's ownership.[38]
Yishan Wongjoined Reddit as CEO in 2012.[39]Wong resigned from Reddit in 2014, citing disagreements about his proposal to move the company's offices from San Francisco to nearbyDaly City, but also the "stressful and draining" nature of the position.[40][41]Ohanian credited Wong with the company's newfound success as its user base grew from 35 million to 174 million.[41]Wong oversaw the company as it raised $50 million in funding and spun off as an independent company.[42]Also during this time, Reddit began accepting the digital currencyBitcoinfor its Reddit Gold subscription service through a partnership with bitcoin payment processorCoinbasein February 2013.[43]Ellen Paoreplaced Wong as interim CEO in 2014 and resigned in 2015 amid a user revolt over the firing of a popular Reddit employee.[44]During her tenure, Reddit initiated an anti-harassment policy,[45]banned involuntary sexualization, and banned several forums that focused on bigoted content or harassment of individuals.[46]
After five years away from the company, Ohanian and Huffman returned to leadership roles at Reddit: Ohanian became the full-time executive chairman in November 2014 following Wong's resignation, while Pao's departure on July 10, 2015, led to Huffman's return as the company's chief executive.[47][48]After Huffman rejoined Reddit as CEO, he launched Reddit'siOSandAndroidapps, improved Reddit's mobile website, and createdA/B testinginfrastructure.[17]The company launched a major redesign of its website in April 2018.[49]Huffman said new users were turned off from Reddit because it had looked like a "dystopian Craigslist".[49]Reddit also instituted several technological improvements,[50]such as a new tool that allows users to hide posts, comments, and private messages from selected redditors in an attempt to curbonline harassment,[51]and new content guidelines. These new content guidelines were aimed at banning content inciting violence and quarantining offensive material.[17][50]Slowe, the company's first employee, rejoined Reddit in 2017 as chief technology officer.[52]
Ohanian resigned as a member of the board on June 5, 2020 in response to theGeorge Floyd protestsand requested to be replaced "by a Black candidate".[53]Michael Seibel, then-CEO of Y Combinator, was subsequently named to the board.[54]On March 5, 2021, Reddit announced that it had appointed Drew Vollero, who had worked atSnapchat's parent companySnap(SNAP), as its first Chief Financial Officer weeks after the site was thrust into the spotlight due to its role in the GameStop trading frenzy. Vollero's appointment spurred speculation of an initial public offering, a move that senior leaders have considered publicly.[55]
As of August 2021, Reddit was valued at more than $10 billion following a $410 million funding round.[56]The company was looking to hire investment bankers and lawyers to assist in making an initial public offering. However, CEO Steve Huffman said the company had not decided on the timing for when to go public.[57]In December 2021, Reddit revealed that it had confidentially filed for an initial public offering with the U.S. Securities and Exchange Commission.[58][59][60]Reddit's initial public offering opened on March 20, 2024, at $34 per share and a $6.4 billion valuation.[61]They went public the next day on the New York Stock Exchange at $47 per share and rose to $50.44 at market close on their first day of trading, reaching a market cap of $9.5 billion.[62]The current market cap as of July 2024 is $10 billion.[11]
Reddit is a website comprisinguser-generated content—including photos, videos, links, and text-based posts—and discussions of this content in what is essentially abulletin board system.[63][64]The name "Reddit" is aplay-on-wordswith the phrase "read it", i.e., "I read it on Reddit."[65][66]According to Reddit, in 2019, there were approximately 430 million monthly users,[67]who are known as "redditors".[49]The site's content is divided into categories or communities known on-site as "subreddits", of which there are more than 138,000 active communities.[68]
As a network of communities, Reddit's core content consists of posts from its users.[63][64]Users can comment on others' posts to continue the conversation.[63]A key feature to Reddit is that users can cast positive or negative votes, called upvotes and downvotes respectively, for each post and comment on the site.[63]The number of upvotes or downvotes determines the posts' visibility on the site, so the most popular content is displayed to the most people.[63]Users can also earn "karma" for their posts and comments, a status that reflects their standing within the community and their contributions to Reddit.[63]Posts are sometimes automatically archived after six months, meaning they can no longer be commented or voted on.[69]
The most popular posts from the site's numerous subreddits are visible on the front page to those who browse the site without an account.[68][70]By default for those users, the front page will display the subreddit r/popular, featuring top-ranked posts across all of Reddit, excludingnot-safe-for-workcommunities and others that are most commonly filtered out by users (even if they are safe for work).[71][72]The subreddit r/all originally did not filter topics,[73]but as of 2021 it does not include not-safe-for-work content.[74]Registered users who subscribe to subreddits see the top content from the subreddits to which they subscribe on their personal front pages.[68][70]Additionally, some subreddits have a karma and account age requirement to discourage bots and spammers from posting.
Front-page rank—for both the general front page and for individual subreddits—is determined by a combination of factors, including the age of the submission, positive ("upvoted") to negative ("downvoted") feedback ratio, and the total vote-count.[75]
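For illustration, the "hot" formula in Reddit's formerly open-source codebase combined exactly these ingredients: a logarithmically damped net score plus a linear time term, so a post needs roughly ten times the net votes to offset about 12.5 hours of age. The sketch below follows that published formula; the production ranking has since evolved, so treat it as illustrative rather than current.

```python
from datetime import datetime, timedelta, timezone
from math import log10

def hot(ups: int, downs: int, posted: datetime) -> float:
    """Hot rank per Reddit's old open-source code: log-damped score plus age term."""
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = posted.timestamp() - 1134028003  # offset from a fixed epoch in late 2005
    return round(sign * order + seconds / 45000, 7)

# A tenfold score advantage adds +1 to the rank; ~12.5 hours of recency adds the same.
now = datetime.now(timezone.utc)
print(hot(150, 0, now) > hot(1500, 0, now - timedelta(hours=13)))  # True
```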
Registering an account with Reddit is free and requires an email address. In addition to commenting and voting, registered users can also create their own subreddit on a topic of their choosing.[76]In Reddit style, usernames begin with "u/". Noteworthy redditors include u/Poem_for_your_sprog, who responds to messages across Reddit in verse,[77]u/Shitty_Watercolour who posts paintings in response to posts,[78]and u/spez, the CEO of Reddit (Steve Huffman).
Subreddits are overseen by moderators, Reddit users who earn the title by creating a subreddit or being promoted by a current moderator.[68]Reddit users may also request to moderate a sub that has no moderators or very inactive ones in r/redditrequest. These requests are reviewed by the Reddit admins. Moderators are volunteers who manage their communities, set and enforce community-specific rules, remove posts and comments that violate these rules, and often work to keep discussions in their subreddit on topic.[68][14][79]Admins, by contrast, are employees of Reddit.[14]
Early on, Reddit implemented shadow banning, purportedly to address spam accounts, while saying, "it's still the only tool we have to punish people who break the rules".[80][81]In 2015, Reddit added an account suspension feature, which they said would replace sitewide shadowbans; however, moderators are still able to shadowban users or their individual posts.[82]
Reddit releases transparency reports annually, which include information such as how many posts have been taken down by moderators and for what reason. They also detail requests that law enforcement agencies have made for information about users or to take down content.[83]In 2020, Reddit removed 6% of posts made on the platform (approx. 233 million). More than 99% of removals were marked as spam, with the remainder made up of a mix of other offensive content. Around 131 million posts were removed by the automated moderator, and the rest were taken down manually.[84][85]
It is estimated that Reddit's moderators work 466 hours every day, which is $3.4 million in unpaid work each year.[86]That roughly equates to 2.8% of the company's annual revenue.[86]
Subreddits (officially called communities) are user-created areas of interest where discussions on Reddit are organized. There are about 138,000 active subreddits (among a total of 1.2 million) as of July 2018.[87][88]Subreddit names begin with "r/"; for instance, "r/science" is a community devoted to discussing scientific publications, while "r/gaming" is a community devoted to discussing video games, and "r/worldnews" is for posting news articles from around the world.
In a 2014 interview withMemeburn, Erik Martin, then thegeneral managerof Reddit, remarked that their "approach is to give the community moderators or curators as much control as possible so that they can shape and cultivate the type of communities they want".[89]Subreddits often use themed variants of Reddit's alien mascot, Snoo, in the visual styling of their communities.[90]
Reddit Premium (formerly Reddit Gold) is a premium membership that allows users to view the site ad-free.[91]Until 2023, subscribers could also use coins to award posts or comments they valued, generally due to humorous or high-quality content.[92]Reddit Premium unlocks several features not accessible to regular users, such as comment highlighting, exclusive subreddits such as r/lounge, a personalized Snoo (known as a "snoovatar"), and a Reddit premium trophy that can be displayed on the user's profile.[93][94]Reddit Gold was renamed to Reddit Premium in 2018. In addition to gold coins, users were able to gift silver and platinum coins to other users as rewards for quality content.[95]
On the site, redditors commemorate their "cake day" once a year, on the anniversary of the day their account was created.[96]Cake day adds an icon of a small slice of cake next to the user's name for 24 hours.[97]In August 2021, the company introduced aTikTok-like short-form video feature for iOS that lets users rapidly swipe through a feed of short video content.[98]In December 2021, the company introduced aSpotify Wrapped-like feature called Reddit Recap that recaps various statistics from January 1 to November 30 about each individual user, such as how much time they spent on Reddit, which communities they joined, and the topics that they engaged with, and allows the user to view it.[99]
On July 7, 2022, Reddit announced "blockchain-backed Collectible Avatars", customizable avatars which are available on the subreddit r/CollectibleAvatars for purchase separate from Reddit Premium. The avatars were created by independent artists who post work on other subreddits, and who receive a portion of the profits. They use Reddit's Polygon blockchain-powered digital wallet the Vault.[100]Richard Lawler ofThe Vergedescribed them as "non-fungible tokens(NFTs) that are available for purchase in the Reddit Avatar Builder".[101]
In 2017, Reddit developed its own real-time chat software for the site.[102]While some established subreddits have used third-party software to chat about their communities, the company built chat functions that it hopes will become an integral part of Reddit.[102]Individual chat rooms were rolled out in 2017 and community chat rooms for members of a given subreddit were rolled out in 2018.[102][103][104]
Reddit Talk was announced in April 2021 as a competitor toClubhouse. Reddit Talk lets subreddit moderators start audio meeting rooms that mimic Clubhouse in design.[105]In 2022, Reddit Talk was updated to support recording audio rooms and work on the web version of Reddit. A desktop app is reportedly slated for a late February release.[106]
Reddit acquired MeaningCloud, a natural language processing company, in June 2022.[107][108]In February 2024, Reddit announced a partnership with Google, in a deal worth about $60 million per year, to license its real-time user content to train Google's AI model. The partnership also lets Reddit access Google's "Vertex AI" service, which would help improve search results on Reddit.[109][110]In May 2024, it was announced that Reddit and OpenAI had reached a deal that will allow OpenAI access to the Reddit API to train its models, while Reddit will receive certain AI tools for moderators and users.[111]In December 2024, Reddit announced Reddit Answers, an AI search tool that summarizes conversations in response to a question from the user.[112]
Reddit Public Access Network, commonly known asRPAN, was alive streamingservice run by Reddit.[113]Viewers interacted with streams by upvoting or downvoting, chatting, and giving paid awards. During the off-air hours, 24/7 streaming was possible to the dedicated subreddits, but with limited slots and capabilities.[113]On August 19, 2019, Reddit announced RPAN. It was said to be in testing, but they were experimenting with making it a permanent program, as well as a way to increase revenue for the platform.[114]Later, a five-day testing period began. During the testing period, streaming was for a select group of users, allowing 30 minutes of streaming per person and 100 slots.[115]On July 1, 2020,RPAN Studiowas released, an application that allows users to broadcast live fromdesktopcomputers. RPAN Studio has been built on top ofOBS, an open-source streaming and recording program.[116]On January 28, 2021, Reddit permanently increased streaming times to three hours.[117]RPAN was officially discontinued on November 15, 2022.[118]
In 2019, Reddit tested a new feature that allowed users to tip others. It was only made available for a user named Chris who goes by the aliasu/shittymorph, who was known for posting well-written comments, only for them to end with the samecopypastareferencing the 1998Hell in a Cellmatch between wrestlersThe UndertakerandMankind.[119][120]
Reddit's search function has had many iterations and currently uses Lucidworks Fusion for its implementation.[121]
In 2010, Reddit released its first mobile web interface for easier reading and navigating the website on touch screen devices.[122]For several years, redditors relied onthird-party appsto access Reddit on mobile devices. In October 2014, Reddit acquired one of them, Alien Blue, which became the official iOS Reddit app.[123]Reddit removed Alien Blue and released its official application, Reddit: The Official App, onGoogle Playand the iOSApp Storein April 2016.[124]The company released an app for Reddit's question-and-answer Ask Me Anything subreddit in 2014.[125]The app allowed users to see active Ask Me Anythings, receive notifications, ask questions and vote.[125]
The site has undergone several products and design changes since it originally launched in 2005. When it initially launched, there were no comments or subreddits. Comments were added in 2005[49][126]and interest-based groups (called 'subreddits') were introduced in 2008.[127]Allowing users to create subreddits has led to much of the activity that redditors would recognize that helped define Reddit. These include subreddits "WTF", "funny", and "AskReddit".[127]Reddit rolled out its multireddit feature, the site's biggest change to its front page in years, in 2013.[128]With the multireddits, users see top stories from a collection of subreddits.[128]
In 2015, Reddit enabled embedding and as a result users could share Reddit content on other sites.[129]In 2016, Reddit began hosting images using a new image uploading tool, a move that shifted away from the uploading serviceImgurthat had been the de facto service.[130]Users still can upload images to Reddit using Imgur.[130]Reddit's in-house video uploading service for desktop and mobile launched in 2017.[131]Previously, users had to use third-party video uploading services, which Reddit acknowledged was time-consuming for users.[131]
Reddit released its "spoiler tags" feature in January 2017.[132]The feature warns users of potential spoilers in posts and pixelates preview images.[132]Reddit unveiled changes to its public front page, called r/popular, in 2017;[73]the change creates a front page free of potentially adult-oriented content for unregistered users.[73]In late 2017, Reddit declared it wanted to be a mobile-first site, launching several changes to its apps for iOS and Android.[96]The new features included user-to-user chat, a theater mode for viewing visual content, and mobile tools for the site's moderators. "Mod mode" lets moderators manage content and their subreddits on mobile devices.[96]
Reddit launched its redesigned website in 2018, with its first major visual update in a decade.[49]Development for the new site took more than a year.[49]It was the result of an initiative by Huffman upon returning to Reddit, who said the site's outdated look deterred new users.[49]The new site features ahamburger menuto help users navigate the site, different views, and new fonts to better inform redditors if they are clicking on a Reddit post or an external link.[49]The nominal goal was not only for Reddit to improve its appearance, but also to make it easier to accommodate a new generation of Reddit users.[49]Additionally Reddit's growth had strained the site's back end;[133]Huffman and Reddit Vice President of Engineering Nick Caldwell toldThe Wall Street Journal's COI Journal that Reddit needed to leverage artificial intelligence and other modern digital tools.[133]For years, registered users could opt-out of the redesign by using the old.reddit.com domain.[134]On May 15, 2024, the dedicated login flow was removed from the old domain, although site admins said they had "no plans" to remove the old domain entirely.[135]In November 2023,Fast Companyreported that Reddit began rolling out a comprehensive rebrand, including a new logo, typeface, brand colors, and an updated version of its mascot Snoo, as part of its preparation for a potential 2024 IPO and in response to its expanding user base and global reach.[136]
Reddit's logo consists of a time-traveling alien named Snoo and the company name stylized as "reddit". The alien has an oval head, pom-pom ears, and an antenna.[137]Its colors are black, white, andorange-red.[137]The mascot was created in 2005 while company co-founder Alexis Ohanian was an undergraduate at theUniversity of Virginia.[138]Ohanian drew a doodle of the creature while he was bored during a marketing class.[90]Originally, Ohanian sought to name the mascot S'new, a play on "What's new?", to tie the mascot into Reddit's premise as the "front page of the Internet".[137][90]Eventually, the name Snoo was chosen.[137]In 2011, Ohanian outlined the logo's evolution with a graphic that showcased several early versions, including various spellings of the website name, such as "Reditt".[138]
Snoo is genderless, so the logo is moldable.[137][139]Over the years, the Reddit logo has frequently changed for holidays and other special events.[138]Many subreddits have a customized Snoo logo to represent the subreddit.[90]Redditors can also submit their own logos, which sometimes appear on the site's front page, or create their own customized versions of Snoo for their communities (or "subreddits").[138][49]When Reddit revamped its website in April 2018, the company imposed several restrictions on how Snoo can be designed: Snoo's head "should always appear blank or neutral", Snoo's eyes are orange-red, and Snoo cannot have fingers.[137]Snoo's purpose is to discover and explore humanity.[137]
Reddit is a public company based in San Francisco.[140][87]In 2023, it downsized from an office in the Mid-Market neighborhood[141]to an office in the South of Market neighborhood.[142]Reddit doubled its headcount in 2017;[143]as of 2018, it employed approximately 350 people.[87]In 2017, the company was valued at $1.8 billion during a $200 million round of new venture funding.[144][42]The company was previously owned by Condé Nast, but was spun off as an independent company.[42]As of April 2018, Advance Publications, Condé Nast's parent company, retained a majority stake in Reddit.[87]
Reddit's key management personnel includes co-founder and CEO Steve Huffman,[17]Chief Technology Officer Chris Slowe, who was the company's original lead engineer,[52]and Chief Operating OfficerJen Wong, a former president of digital and chief operating officer atTime Inc.[91]Reddit does not disclose its revenue figures.[144][91]The company generates revenue in part through advertising and premium memberships that remove ads from the site.[91]As part of its company culture, Reddit operates on a no-negotiation policy for employee salaries.[145]The company offers new mothers, fathers, and adoptive parents up to 16 weeks of parental leave.[146]
Reddit launched two different ways of advertising on the site in 2009. The company launched sponsored content[147]and a self-serve ads platform that year.[42][148]Reddit launched its Reddit Gold benefits program in July 2010, which offered new features to editors and created a new revenue stream for the business that did not rely on banner ads.[149]On September 6, 2011, Reddit became operationally independent of Condé Nast, operating as a separate subsidiary of its parent company, Advance Publications.[150]
Reddit's users tend to be moreprivacy-consciousthan on other websites, often using tools likead-blocking softwareandproxies,[151]and they dislike "feeling manipulated by brands" but respond well to "content that begs for intelligent viewers and participants."[152]Lauren Orsini writes inReadWritethat "Reddit's huge community is the perfect hype machine for promoting a new movie, a product release, or a lagging political campaign" but there is a "very specific set of etiquette. Redditors don't want to advertise for you, they want to talk to you."[153]Journalists have used the site as a basis for stories, though they are advised by the site's policies to respect that "reddit's communities belong to their members" and to seek proper attribution for people's contributions.[154]
Reddit announced that they would begin usingVigLinkto redirect affiliate links in June 2016.[155][156]Since 2017, Reddit has partnered with companies to host sponsored AMAs and other interactive events,[157][158]increased advertising offerings,[159]and introduced efforts to work with content publishers.[160]
In 2018, Reddit hiredJen Wongas COO, responsible for the company's business strategy and growth, and introduced native mobile ads.[91]Reddit opened a Chicago office to be closer to major companies and advertising agencies located in and around Chicago.[87]In 2019, Reddit hired former Twitter ad director Shariq Rizvi as its vice president of ad products and engineering.[161]
The website is known for its open nature and diverse user community that generate its content.[13]Its demographics allows for wide-ranging subject areas, as well as the ability for smaller subreddits to serve more niche purposes.[14]The user base of Reddit has given birth to other websites, includingimage sharingcommunity andimage hostImgur, which started in 2009 as a gift to Reddit's community.[162]In its first five months, it jumped from a thousand hits per day to a million total page views.[163]Data collected byPew Research Centerin 2013 found that Reddit users were much more likely to be fromurbancommunities thanruralones.[164]Women were greatly under-represented on the website.[164]Reddit's userbase had a disproportionately high number ofHispanicusers.[164]With regards to education,high school dropoutswere over-represented among Reddit users.[164]
Statistics from Google Ad Planner suggest that 74% of Reddit users are male.[166]In 2016, the Pew Research Center published research showing that 4% of U.S. adults use Reddit, of which 67% are men, while 78% of users get news from Reddit.[167]Users tend to be significantly younger than average, with less than 1% of users being 65 or over.[167]Politically, 43% of Reddit users surveyed by Pew Research Center in 2016 identified as liberal, with 38% identifying as moderate and 19% as conservative.[168]
Reddit is known in part for its passionate user base,[87]which has been described as "offbeat, quirky, and anti-establishment".[140]Similar to the "Slashdot effect", the Reddit effect occurs when a smaller website crashes due to a high influx of traffic after being linked to on Reddit; this is also called the Reddit "hug of death".[169][170]
Users have used Reddit as a platform for their charitable and philanthropic efforts.[171]Redditors raised more than $100,000 for charity in support of comedians Jon Stewart's and Stephen Colbert's Rally to Restore Sanity and/or Fear; more than $180,000 for Haiti earthquake relief efforts; and delivered food pantries' Amazon wish lists.[172][171][173]In 2010, Christians, Muslims, and atheists held a friendly fundraising competition, in which the groups raised more than $50,000.[174]A similar donation drive in 2011 saw the atheism subreddit raise over $200,000 for charity.[175]In February 2014, Reddit announced it would donate 10% of its annual ad revenue to non-profits voted upon by its users.[176]As a result of the campaign, Reddit donated $82,765 to each of the selected recipients.[177]
Reddit has been used for a wide variety of political engagement including the presidential campaigns ofBarack Obama,[178][179]Donald Trump,[180]Hillary Clinton,[181]andBernie Sanders.[182]It has also been used for self-organizing sociopolitical activism such as protests, communication with politicians and active communities. Reddit has become a popular place for worldwide political discussions.[183]
The March for Science originated from a discussion on Reddit over the deletion of all references toclimate changefrom theWhite Housewebsite, about which a user commented that "There needs to be a Scientists' March on Washington".[184][185][186]On April 22, 2017, more than 1 million scientists and supporters participated in more than 600 events in 66 countries across the globe.[187]
Reddit users have been engaged in the defense ofInternet privacy,net neutralityandInternet anonymity.
Reddit created an Internet blackout day and was joined by Wikipedia and other sites in 2012 in protest of theStop Online PiracyandPROTECT IPacts.[188][189]On January 18, Reddit participated in a 12-hour sitewide blackout to coincide with a congressional committee hearing on the measures.[189][190]During that time, Reddit displayed a message on the legislation's effects on Reddit, in addition to resources on the proposed laws.[190]In May 2012, Reddit joined theInternet Defense League, a group formed to organize future protests.[191]
The site and its users protested theFederal Communications Commissionas it prepared to scrap net neutrality rules.[192]In 2017, users upvoted "Battle for the Net" posts enough times that they filled up the entire front page.[192]On another day, the front page was overtaken by posts showcasing campaign donations received by members of Congress from the telecommunications industry.[192]Reddit CEO Steve Huffman has also advocated for net neutrality rules.[193][194]In 2017, Huffman toldThe New York Timesthat without net neutrality protections, "you give internet service providers the ability to choose winners and losers".[193]On Reddit, Huffman urged redditors to express support for net neutrality and contact their elected representatives inWashington, D.C.[194]Huffman said that the repeal of net neutrality rules stifles competition. He said he and Reddit would continue to advocate for net neutrality.[195]
As a response to Glenn Beck's August 28, 2010, Restoring Honor rally, in September 2010 Reddit users started a movement to persuade satirist Stephen Colbert to hold a counter-rally in Washington, D.C.[196] The movement, which came to be called "Restoring Truthiness", was started by user mrsammercer in a post where he described waking up from a dream in which Stephen Colbert was holding a satirical rally in D.C.[197] Over $100,000 was raised for charity to gain Colbert's attention.[172] The campaign was mentioned on-air several times, and when the Rally to Restore Sanity and/or Fear was held in Washington, D.C., on October 30, 2010, thousands of redditors made the journey.[198]
During a post-rally press conference, Reddit co-founder Ohanian asked, "What role did the Internet campaign play in convincing you to hold this rally?" Jon Stewart responded by saying that, though it was a very nice gesture, he and Colbert had already thought of the idea and the deposit for using the National Mall had already been paid during the summer, so it acted mostly as a "validation of what we were thinking about attempting".[199] In a message to the Reddit community, Colbert later added, "I have no doubt that your efforts to organize and the joy you clearly brought to your part of the story contributed greatly to the turnout and success."[200]
Reddit has been blocked in multiple countries due to Internet censorship by their governments. As of October 2023, Reddit is blocked in Indonesia, China, North Korea, and Turkey, and partially blocked in Bangladesh. Reddit was blocked in Russia in 2015 and later unblocked.
Since May 2014, Reddit has been blocked in Indonesia by the Ministry of Communication and Information Technology for hosting content containing nudity.[201][202]
In August 2015, the Federal Drug Control Service of Russia determined that Reddit was promoting conversations about psychedelic drugs. Roskomnadzor banned the website, citing advice on how to grow magic mushrooms as the reason. The Russian government had previously asked Reddit to remove drug-related posts but received no response. The site was later unblocked.[203][204]
ISPs in India were found to be blocking traffic to Reddit for intermittent periods in some regions in 2019.[205]
Over the years, Reddit has run multiple pranks and events for April Fools' Day. Since 2013, they have often taken the form of massive social experiments. Noteworthy events include The Button in 2015, a global "button" that could only be clicked once per user; it attracted more than a million clicks.[206]
The 2017 experiment, r/place, involved collaborative pixel art: millions of users worked together in communities to place pixels one at a time and build a larger canvas. The experiment was very successful and was repeated in the April Fools' events of 2022 and 2023.[207][208]
In AMAs, or "Ask Me Anything" interviews, held on r/IAmA and other subreddits, users can pose questions to interviewees.[209] Notable participants include former United States President Barack Obama (while campaigning for the 2012 election),[210] Bill Gates (multiple times),[211] and Donald Trump (also while campaigning).[212] AMAs have featured CEO Steve Huffman,[213] as well as figures from entertainment industries around the world (including Priyanka Chopra and George Clooney),[214][215] literature (Margaret Atwood),[216] space (Buzz Aldrin),[217] privacy (Edward Snowden),[218] fictional characters (including Borat and Cookie Monster), and others, such as experts who answered questions about the transgender community.[219] The Atlantic wrote that an AMA "imports the aspirational norms of honesty and authenticity from pseudonymous Internet forums into a public venue".[13]
RedditGifts was a program that offered gift exchanges throughout the year.[220] The fan-made RedditGifts site was created in 2009 for a Secret Santa exchange among Reddit users, which has since become the world's largest[221] and set a Guinness World Record.[139] In 2009, 4,500 redditors participated.[221] For the 2010 holiday season, 92 countries were involved in the secret Santa program; there were 17,543 participants, and $662,907.60 was collectively spent on gift purchases and shipping costs.[222][223][224] In 2014, about 200,000 users from 188 countries participated.[225] Several celebrities have participated in the program, including Bill Gates,[226] Alyssa Milano,[227] and Snoop Dogg.[228] Eventually, the secret Santa program expanded to various other occasions through RedditGifts, which Reddit acquired in 2011.[221]
As with most public online forums, Reddit is vulnerable to the use of disruptive or manipulative practices by its members, from sources such as troll farms, click farms, and astroturfing.
Another example is brigading, notable in the case of Reddit as the site is often cited as the origin of the practice and of the use of the word in this context.[229][230] Though all of these practices are, in some form, against the rules of Reddit's content policy,[231] at least in the case of brigading they are not always malicious in intent. A notable example is the case of "Mr. Splashy Pants", when organized brigading of another website by redditors appears to have been tacitly encouraged by the Reddit administration. In the aftermath, the target of this vote brigading appeared to take it in good humor.[232]
Reddit communities occasionally coordinate Reddit-external projects such as skewing polls on other websites, like the 2007 incident when Greenpeace allowed web users to decide the name of a humpback whale it was tracking. Reddit users voted en masse to name the whale "Mister Splashy Pants", and Reddit administrators encouraged the prank by changing the site logo to a whale during the voting. In December of that year, Mister Splashy Pants was announced as the winner of the competition.[233][234]
In general, the website grants subreddit moderators discretion in deciding what content is and is not allowed on their subreddits, so long as site-wide rules are not being violated. This relative freedom has allowed for a wide diversity of subreddits to exist, and some of them have attracted controversy.[235]
Many of the default subreddits are highly moderated, with the "science" subreddit banning climate change denialism,[236] and the "news" subreddit banning opinion pieces and columns.[237] Reddit has changed its site-wide editorial policies several times, sometimes in reaction to controversies.[238][239][240][241] Reddit has historically been a platform for objectionable but legal content, and in 2011, news media covered the way that jailbait was being shared on the site before the site changed its policies to explicitly ban "suggestive or sexual content featuring minors".[242] Following some controversial incidents of Internet vigilantism, Reddit introduced a strict rule against the online publication of non-public personally identifying information (a common internet harassment tool colloquially known as doxxing) via the site. Those who break the rule are subject to a site-wide ban, which can result in the deletion of their user-generated content.
Due to Reddit's decentralized moderation, user anonymity, and lack of fact-checking systems, the platform is highly prone to spreading misinformation and disinformation.[243] It has been suggested that those who use Reddit should exercise caution in taking user-created, unsourced content as fact.[244] Concerns have been raised in particular about dangerous medical misinformation on the platform.[15][245] A 2022 study of 300 comments and posts discussing urinary tract infections found that fewer than 1% cited a source for their content, and several contained harmful medical misinformation that may dissuade readers from seeking medical care or lead to dangerous self-medication, such as proposing fasting as a cure for UTIs.[245]
Reddit communities exhibit the echo chamber effect, in which repeated unsourced statements come to be accepted among the community as fact, leading to distorted worldviews among users.[246] It has been suggested that since 2019, Russian state-sponsored troll accounts and bots have engaged in a broad campaign to take over subreddits, such as r/antiwar.[247]
After the Boston Marathon bombing in April 2013, Reddit faced criticism after users wrongly identified a number of people as suspects in the subreddit r/FindBostonBombers.[248] Notable among the misidentified bombing suspects was Sunil Tripathi, a student reported missing before the bombings took place. A body reported to be Sunil's was found in the Providence River in Rhode Island on April 25, according to the Rhode Island Health Department. The cause of death was not immediately known, but authorities said they did not suspect foul play.[249] The family later confirmed Tripathi's death was a result of suicide.[250] Reddit general manager Erik Martin later issued an apology for this behavior, criticizing the "online witch hunts and dangerous speculation" that took place on the website.[251] The incident was later referenced in the season 5 episode of the CBS TV series The Good Wife titled "Whack-a-Mole",[252] as well as in The Newsroom.[253][254]
In August 2014, private sexual photos from the celebrity photo hack were widely disseminated across the site.[255][256] A dedicated subreddit, "TheFappening", was created for this purpose[257] and contained links to most if not all of the criminally obtained explicit images.[258][259][260][261] Some images of McKayla Maroney and Liz Lee were identified by redditors and outside commentators as child pornography because the photos were taken when the women were underage.[262] The subreddit was banned on September 6.[263] The scandal led to wider criticism concerning the website's administration from The Verge and The Daily Dot.[264][265]
After Ellen Pao became CEO in 2014, she was initially a target of criticism by users who objected to the deletion of content critical of herself and her husband.[266] Later, on June 10, 2015, Reddit shut down the 150,000-subscriber "fatpeoplehate" subreddit and four others, citing issues related to harassment.[267] This move was seen as very controversial; some commenters said that the bans went too far, while others said that the bans did not go far enough.[268] One of the latter complaints concerned a subreddit that was "expressing support" for the perpetrator of the Charleston church shooting.[269] Responding to the accusations of "skewed enforcement", Reddit reaffirmed its commitment to free expression and stated, "There are some subreddits with very little viewership that get highlighted repeatedly for their content, but those are a tiny fraction of the content on the site."
On July 2, 2015, Reddit began experiencing a series of blackouts as moderators set popular subreddit communities to private, in an event dubbed "AMAgeddon", a portmanteau of AMA ("ask me anything") and Armageddon. This was done in protest of the recent firing of Victoria Taylor, an administrator who helped organize citizen-led interviews with famous people on the popular AMA subreddit. Organizers of the blackout also expressed resentment about the recent severance of communication between Reddit and the moderators of subreddits.[270] The blackout intensified on July 3 when former community manager David Croach gave an AMA about being fired. Before deleting his posts, he stated that Ellen Pao had dismissed him with one year of health coverage when he had cancer and did not recover quickly enough.[271][272] Following this, a Change.org petition to remove Pao as CEO of Reddit Inc. reached over 200,000 signatures.[273][274][275] Pao posted a response on July 3, as well as an extended version of it on July 6, in which she apologized for bad communication and for not delivering on promises. She also apologized on behalf of the other administrators and noted that problems had already existed over the past several years.[276][277][278][279] On July 10, Pao resigned as CEO and was replaced by former CEO and co-founder Steve Huffman.[280]
In August 2015, Steve Huffman introduced a policy which led to the banning of several offensive and sexual communities. Included in the ban was lolicon, which Huffman referred to as "animated CP [child porn]".[281] Some subreddits had also been "quarantined" for having "highly-offensive or upsetting content", such as r/European, r/swedenyes, r/drawpeople, r/kiketown, r/blackfathers, r/greatapes, and r/whitesarecriminals.[282]
In April 2023, Reddit announced its intention to charge large fees for its application programming interface (API), a feature of the site that had existed for free since 2008,[283] causing an ongoing dispute. The move forced multiple third-party applications to shut down and threatened accessibility applications and moderation tools.[284] On May 31, Apollo developer Christian Selig stated that Reddit's pricing would force him to cease development on the app. The resulting outcry from the Reddit community led to a planned protest from June 12 to 14, in which moderators would make their communities private or restrict posting.[285] Following the release of an internal memo from Reddit CEO Steve Huffman and continued defiance from Reddit, some moderators continued their protest.[286] Alternate forms of protest emerged in the days following the initial blackout: upon reopening, users of r/pics, r/gifs, and r/aww voted to exclusively post about comedian John Oliver.[287] Multiple subreddits labeled themselves as not safe for work (NSFW), affecting advertisements and resulting in administrators removing the entire moderation team of some subreddits.[288] The protest has been compared to a strike.[289]
r/place had its third launch on July 20, 2023. The launch was heavily protested by users and developers because it followed the 2023 Reddit API controversy, in which Reddit CEO Steve Huffman's decision to make API access prohibitively expensive for third-party app developers drew widespread condemnation.[290][291]
In February 2017, Reddit banned the alt-right subreddit r/altright for violating its terms of service, more specifically for attempting to share private information about the man who attacked alt-right figure Richard B. Spencer.[292][293] The forum's users and moderators accused Reddit administrators of having political motivations for the ban.[294][295]
After the 2021 storming of the United States Capitol, Reddit banned the subreddit r/DonaldTrump, citing repeated policy violations and the potential influence the community had on those who participated in or supported the storming.[296] The move followed similar actions from social media platforms such as Twitter, YouTube, and TikTok.[297] The ban was criticized by those who believed it furthered an agenda and amounted to censorship of conservative ideologies.[298] The subreddit had over 52,000 members just before it was banned.[299]
In May 2016, CEO Steve Huffman said in an interview at the TNW Conference that, unlike Facebook, which "only knows what [its users are] willing to declare publicly", Reddit knows its users' "dark secrets".[300][301][302] At around the same time, the website's "values" page was updated regarding its "privacy" section. The video reached the top of the website's main feed.[302][303] Shortly thereafter, announcements concerning new advertisement content drew criticism on the website.[304][305] In September, a user named "mormondocuments" released thousands of administrative documents belonging to the Church of Jesus Christ of Latter-day Saints, an action driven by the ex-Mormon and atheist communities on Reddit. Previously, on April 22, the same user had announced his plans to do so. Church officials commented that the documents did not contain anything confidential.[306][307]
On November 23, Huffman admitted to having replaced his own username with the usernames of r/The_Donald moderators in many insulting comments.[308][309] He did so by editing insulting comments made towards him, making it appear as if the insults were directed at the moderators of r/The_Donald.[310] On November 24, The Washington Post reported Reddit had banned the "Pizzagate" conspiracy board from the site, stating it violated the site's policy against posting personal information of others, triggering a wave of criticism from users on r/The_Donald, who felt the ban amounted to censorship.[311] After the forum was banned from Reddit, the words "we don't want witchhunts on our site" appeared on the former page of the Pizzagate subreddit.[312][313]
On November 30, Huffman announced changes to the algorithm of Reddit's r/all page to block "stickied" posts from a number of subreddits, such as r/The_Donald. In the announcement, he also apologized for personally editing posts by users from r/The_Donald, and declared his intention to take action against "hundreds of the most toxic users" of Reddit and "communities whose users continually cross the line".[6][314][315]
In March 2018, it was revealed that Huffman had hidden Russian troll activity from users.[316]
In February 2019, the Chinese company Tencent invested $150 million into Reddit.[317][318] This resulted in a large backlash from Reddit users, who were worried about potential censorship.[319][320][321] Many posts featuring subjects censored in China, such as Tiananmen Square, Tank Man, and Winnie the Pooh, gained popularity on Reddit.[318][321][322]
In late August 2021, more than 70 subreddits went private to protest against COVID-19 misinformation on Reddit, as well as Reddit's refusal to delete subreddits undermining the severity of the pandemic.[323][324] A 2021 letter from the United States Senate to Reddit CEO Steve Huffman expressed concern about the spread of COVID-19 misinformation on the platform.[15]
In January 2025, over 100 Reddit communities banned users from posting links from the X social media site after Elon Musk, its CEO, made an arm gesture at a speech which critics claimed was a Nazi salute.[325] The Verge reported that Musk had "privately pressur[ed]" Reddit CEO Steve Huffman to moderate content critical of him and the Trump administration, and that after their exchange, Reddit took action and temporarily banned r/WhitePeopleTwitter due to "policy violations".[326][327]
On March 5, 2025, Reddit announced that it would issue warnings to users who upvote "violent content" and "may consider" taking other actions against them. The Verge reported two days later that Reddit's automatic moderation tool had been flagging the word "Luigi" as "potentially violent", including in comments or contexts unrelated to Luigi Mangione, the suspect in the killing of the UnitedHealthcare CEO. The moderator of r/popculture, a subreddit with over 125,000 members, stated that Reddit's AutoModerator system flagged a comment about Luigi's Mansion because it included the word "Luigi" and instructed them to "check for violence"; other comments that mentioned "Luigi", even in non-violent contexts, were also flagged.[328][329]
On July 12, 2018, the creator and head moderator of the GamerGate subreddit, r/KotakuInAction, removed all of the moderators and set the forum to private, alleging it to have become "infested with racism and sexism". A Reddit employee restored the forum and its moderators an hour later.[330][331]
During the George Floyd protests in early June 2020, over 800 moderators signed an open letter demanding a policy banning hate speech, a shutdown of racist and sexist subreddits, and more employee support for moderation. Bloomberg News pointed out the company's slow reaction to r/watchpeopledie, a subreddit dedicated to videos of people dying in accidents and other situations, and the harassment that accompanied new unmoderated features like icons for purchase and public chats.[332]
On June 29, 2020, Reddit updated its content policy and introduced rules aimed at curbing the presence of communities it believed to be "promoting hate",[333] and banned approximately 2,000 subreddits that were found to be in violation of the new guidelines on the same day.[334] Larger subreddits affected by the bans included r/The_Donald,[335] r/GenderCritical[336] (the platform's largest and most active anti-transgender radical feminist subreddit),[337] and r/ChapoTrapHouse (a far-left subreddit originally created by fans of the podcast Chapo Trap House).[336] Some media outlets and political commentators also condemned the banning of the r/The_Donald and r/ChapoTrapHouse subreddits as a violation of the right to free political expression.[338]
In February 2013, Betabeat published a post that noted the influx of multinational corporations like Costco, Taco Bell, Subaru, and McDonald's posting branded content on Reddit that was made to appear as if it were original content from legitimate Reddit users.[339] PAN Communications wrote that marketers want to "infiltrate the reddit community on behalf of their brand," but emphasized that "self-promotion is frowned upon," and Reddit's former director of communications noted that the site is "100 percent organic."[340][341][342][343] She recommended that advertisers design promotions that "spark conversations and feedback."[344] She also recommended that businesses use AMAs to get attention for public figures, but cautioned, "It is important to approach AMAs carefully and be aware that this may not be a fit for every project or client."[345] Nissan ran a successful branded content promotion offering users free gifts to publicize a new car,[346][347] though the company was later ridiculed for suspected astroturfing when the CEO only answered puff-piece questions on the site.[348][349] Taylor described these situations as "high risk", noting: "We try hard to educate people that they have to treat questions that may seem irreverent or out of left field the same as they would questions about the specific project they are promoting."[350]
In March 2021, Reddit users discovered that Aimee Challenor, an English politician who had been suspended from two UK political parties, had been hired as an administrator for the site. Her first suspension, from the Green Party, came for retaining her father as her campaign manager after his arrest on child sexual abuse charges. She was later suspended from the Liberal Democrats after tweets describing pedophilic fantasies were discovered on her partner's Twitter account. Reddit banned a moderator for posting a news article which mentioned Challenor, and some Reddit users alleged that Reddit was removing all mention of Challenor. Many subreddits, including r/Music, which had 27 million subscribers, and 46 other subreddits with over 1 million subscribers, went private in protest.[351][352][353][354] On March 24, Reddit CEO Steve Huffman said that Challenor had been inadequately vetted before being hired and that Reddit would review its relevant internal processes. Huffman attributed the user suspensions to over-indexing on anti-harassment measures.[353] Challenor was also removed from her role as a Reddit admin.[355]
The GameStop short squeeze was primarily organized on the subreddit r/wallstreetbets in January 2021.[356]
Reddit Moons, a site-specific cryptocurrency launched in May 2020, saw a surge in value in 2023, at one point in mid-2023 rising past 50 cents per Moon. The token then crashed by more than 90% after Reddit announced on October 17, 2023 that it would be "wound down" on November 8, allegedly due to scaling and regulatory issues; the Reddit-centric coins DONUT and BRICK also crashed upon the news.[357]
In June 2023, the BlackCat hacker gang claimed responsibility for a February 2023 breach of Reddit's systems. On its data leak site, it claimed that it had stolen 80 GB of compressed data and demanded a $4.5 million ransom from Reddit. Unlike typical ransomware campaigns, the attack did not involve data encryption.[358]
In September 2024, the Federal Trade Commission released a report summarizing nine company responses (including Reddit's) to orders the agency had issued under Section 6(b) of the Federal Trade Commission Act of 1914, requiring the companies to provide information about their collection and use of user and non-user data, including data on children and teenagers. The report found that the companies' data practices left individuals vulnerable to identity theft, stalking, unlawful discrimination, emotional distress and mental health issues, social stigma, and reputational harm.[359][360][361]
In 2025, researchers from the University of Zurich conducted an experiment on the debate subreddit r/changemyview. The researchers deployed AI-run Reddit accounts that posed as humans and actively pushed desired viewpoints in order to study how AI could influence opinions among human participants. The experiment ran without the consent or knowledge of the subreddit's moderators for four months, until one of the researchers informed them. Critics argued the experiment was unethical because it involved impersonation and involuntarily used redditors as experiment participants.[362]
|
https://en.wikipedia.org/wiki/Reddit
|
In psychology and sociology, the Thurstone scale was the first formal technique to measure an attitude. It was developed by Louis Leon Thurstone in 1928, originally as a means of measuring attitudes towards religion. Today it is used to measure attitudes towards a wide variety of issues. The technique uses a number of statements about a particular issue, and each statement is given a numerical value indicating how favorable or unfavorable it is judged to be. These numerical values are prepared ahead of time by the researcher and not shown to the test subjects. The subjects then check each of the statements with which they agree, and a mean score of those statements' values is computed, indicating their attitude.
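A minimal sketch of this scoring step in Python (the statement values and the respondent's endorsements are invented for illustration):

# Hypothetical judge-assigned scale values for five attitude statements
# (higher = more favorable toward the issue).
statement_values = {"s1": 1.5, "s2": 3.2, "s3": 5.0, "s4": 7.8, "s5": 9.4}

# Statements this respondent checked as ones they agree with.
endorsed = ["s3", "s4", "s5"]

# The respondent's attitude score is the mean of the endorsed values.
attitude = sum(statement_values[s] for s in endorsed) / len(endorsed)
print(attitude)  # 7.4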
Thurstone's method of pair comparisons can be considered a prototype of a normal distribution-based method for scaling dominance matrices. Even though the theory behind this method is quite complex (Thurstone, 1927a), the algorithm itself is straightforward. For the basic Case V, the frequency dominance matrix is translated into proportions and interfaced with the standard scores. The scale is then obtained as a left-adjusted column marginal average of this standard score matrix (Thurstone, 1927b). The underlying rationale for the method and the basis for the measurement of the "psychological scale separation between any two stimuli" derives from Thurstone's Law of comparative judgment (Thurstone, 1928).
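A compact sketch of the Case V computation in Python (the frequency matrix is invented for illustration, and anchoring the lowest stimulus at zero is one common reading of "left-adjusted"):

import numpy as np
from scipy.stats import norm

# Hypothetical frequency dominance matrix for 4 stimuli and 60 judges:
# F[i, j] = number of judges who preferred stimulus j over stimulus i,
# so F[i, j] + F[j, i] equals the number of judges.
F = np.array([
    [ 0, 40, 50, 55],
    [20,  0, 45, 50],
    [10, 15,  0, 40],
    [ 5, 10, 20,  0],
])
n_judges = 60

# Translate frequencies into proportions; by convention a stimulus
# "dominates itself" half the time.
P = F / n_judges
np.fill_diagonal(P, 0.5)

# Interface the proportions with standard scores via the inverse normal CDF.
Z = norm.ppf(P)

# The scale value of each stimulus is the column marginal average of Z,
# shifted so that the lowest-valued stimulus sits at zero.
scale = Z.mean(axis=0)
scale -= scale.min()
print(np.round(scale, 3))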
The principal difficulty with this algorithm is its indeterminacy with respect to one-zero proportions, which return z values as plus or minus infinity, respectively. The inability of the pair comparisons algorithm to handle these cases imposes considerable limits on the applicability of the method.
The most frequent recourse when 1.00–0.00 frequencies are encountered is their omission. Thus, e.g., Guilford (1954, p. 163) has recommended not using proportions more extreme than .977 or .023, and Edwards (1957, pp. 41–42) has suggested that "if the number of judges is large, say 200 or more, then we might use pij values of .99 and .01, but with less than 200 judges, it is probably better to disregard all comparative judgments for which pij is greater than .98 or less than .02." Since the omission of such extreme values leaves empty cells in the Z matrix, the averaging procedure for arriving at the scale values cannot be applied, and an elaborate procedure for the estimation of unknown parameters is usually employed (Edwards, 1957, pp. 42–46). An alternative solution to this problem was suggested by Krus and Kennedy (1977).
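Continuing the sketch above, a minimal guard against this indeterminacy clips proportions to Edwards' suggested bounds before the z-transform; masking the extreme cells and estimating the missing parameters, as described in the text, is the more faithful alternative:

# 0.00 or 1.00 proportions would map to z = -inf or +inf, so clip to
# Edwards' recommended limits of .02 and .98 before transforming.
P_clipped = np.clip(P, 0.02, 0.98)
Z = norm.ppf(P_clipped)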
With later developments in psychometric theory, it has become possible to employ direct methods of scaling such as application of the Rasch model or unfolding models such as the Hyperbolic Cosine Model (HCM) (Andrich & Luo, 1993). The Rasch model has a close conceptual relationship to Thurstone's law of comparative judgment (Andrich, 1978), the principal difference being that it directly incorporates a person parameter. Also, the Rasch model takes the form of a logistic function rather than a cumulative normal function.
|
https://en.wikipedia.org/wiki/Thurstone_scale
|
Brand safety is a set of measures that aim to protect the image and reputation of brands from the negative or damaging influence of questionable or inappropriate content when advertising online.
In response to ads being placed next to undesirable content, companies have cut advertising budgets,[1]and pulled ads from online advertising and social media platforms.[2][3][4]
The global digital advertising industry considers the "Dirty Dozen" categories to avoid:[5]
The Interactive Advertising Bureau (IAB) added a 13th category: fake news.[6] In addition, companies will often define specific unsafe categories based on the brand itself.[citation needed]
Some online advertising tools allow advertisers to avoid their ads appearing alongside unwanted contexts; this feature is typically referred to as brand safety. For example, within the Google Marketing Platform, additional protection can be set up using Campaign Manager 360: if the automated auction still selects an advertiser's ad for placement alongside certain contexts, a default image set by the advertiser is displayed instead of the actual creative.[citation needed]
To ensure brand safety, advertisers can buy ad space directly from trusted publishers, allowing them to address brand safety concerns directly.[7] Advertisers and publishers may also employ third-party vendors of brand safety services that can be integrated into the advertising system.[8] Other common preventive measures are blacklists of unsafe sites to avoid, or whitelists of safe sites for advertising. The ads.txt (Authorized Digital Sellers) initiative from the IAB is designed to allow online media buyers to check the validity of the sellers from whom they buy.[9]
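As a rough illustration of the kind of check ads.txt enables (the domain and seller account below are placeholders, not real endorsements; per the IAB specification, each record is a comma-separated line of the form "ad-system-domain, seller-account-id, DIRECT|RESELLER[, certification-authority-id]"):

import urllib.request

def fetch_ads_txt(domain: str):
    """Download a publisher's ads.txt and parse its records."""
    with urllib.request.urlopen(f"https://{domain}/ads.txt", timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line or "=" in line:           # skip variables like contact=...
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append((fields[0].lower(), fields[1], fields[2].upper()))
    return records

def is_authorized(records, ad_system: str, seller_id: str) -> bool:
    """True if the seller account is listed for that ad system (DIRECT or RESELLER)."""
    return any(r[0] == ad_system.lower() and r[1] == seller_id for r in records)

# Hypothetical usage:
# records = fetch_ads_txt("example.com")
# print(is_authorized(records, "google.com", "pub-0000000000000000"))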
Ad agencies, such as The Interpublic Group of Companies and Comscore, have used media watchdog companies like Ad Fontes Media and NewsGuard to make sure that their clients' ads are placed with "credible" news sources.[10]
|
https://en.wikipedia.org/wiki/Brand_safety
|
Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient".[2][3][4] Censorship can be conducted by governments[5] and private institutions.[6] When an individual such as an author or other creator engages in censorship of their own works or speech, it is referred to as self-censorship. General censorship occurs in a variety of different media, including speech, books, music, films, and other arts, the press, radio, television, and the Internet, for a variety of claimed reasons including national security, to control obscenity, pornography, and hate speech, to protect children or other vulnerable groups, to promote or restrict political or religious views, and to prevent slander and libel. Specific rules and regulations regarding censorship vary between legal jurisdictions and/or private organizations.
Socrates, while defying attempts by the Athenian state to censor his philosophical teachings, faced charges that led to his death. The conviction is recorded by Plato: in 399 BC, Socrates went on trial[8] and was subsequently found guilty of both corrupting the minds of the youth of Athens and of impiety (asebeia,[9] "not believing in the gods of the state"),[10] and was sentenced to death by hemlock.[11][12][13]
Socrates' student, Plato, is said to have advocated censorship in his essay on The Republic, which opposed the existence of democracy. In contrast to Plato, the Greek playwright Euripides (480–406 BC) defended the true liberty of freeborn men, including the right to speak freely. In 1766, Sweden became the first country to abolish censorship by law.[14]
Censorship has been criticized throughout history for being unfair and hindering progress.[citation needed]In a 1997 essay on Internet censorship, social commentator Michael Landier explains that censorship is counterproductive as it prevents the censored topic from being discussed. Landier expands his argument by claiming that those who impose censorship must consider what they censor to be true, as individuals believing themselves to be correct would welcome the opportunity to disprove those with opposing views.[15]
Censorship is often used to impose moral values on society, as in the censorship of material considered obscene. English novelist E. M. Forster was a staunch opponent of censoring material on the grounds that it was obscene or immoral, raising the issue of moral subjectivity and the constant changing of moral values. When the 1928 novel Lady Chatterley's Lover was put on trial in 1960, Forster wrote:[16]
Lady Chatterley's Lover is a literary work of importance... I do not think that it could be held obscene, but am in a difficulty here, for the reason that I have never been able to follow the legal definition of obscenity. The law tells me that obscenity may deprave and corrupt, but as far as I know, it offers no definition of depravity or corruption.
Proponents have sought to justify it using different rationales for various types of information censored:
In wartime, explicit censorship is carried out with the intent of preventing the release of information that might be useful to an enemy. Typically it involves keeping times or locations secret, or delaying the release of information (e.g., an operational objective) until it is of no possible use to enemy forces. The moral issues here are often seen as somewhat different, as the proponents of this form of censorship argue that the release of tactical information usually presents a greater risk of casualties among one's own forces and could possibly lead to loss of the overall conflict.[citation needed]
During World War I, letters written by British soldiers had to go through censorship. This consisted of officers going through letters with a black marker and crossing out anything which might compromise operational secrecy before the letter was sent.[22] The World War II catchphrase "Loose lips sink ships" was used as a common justification to exercise official wartime censorship and encourage individual restraint when sharing potentially sensitive information.[23]
An example of "sanitization" policies comes from the USSR under Joseph Stalin, where publicly used photographs were often altered to remove people whom Stalin had condemned to execution. Though past photographs may have been remembered or kept, this deliberate and systematic alteration of all of history in the public mind is seen as one of the central themes of Stalinism and totalitarianism.[citation needed]
Censorship is occasionally carried out to aid authorities or to protect an individual, as with some kidnappings when attention and media coverage of the victim can sometimes be seen as unhelpful.[24]
Religious censorship is a form of censorship where freedom of expression is controlled or limited using religious authority or on the basis of the teachings of the religion.[25] This form of censorship has a long history and is practiced in many societies and by many religions. Examples include the Galileo affair, the Edict of Compiègne, the Index Librorum Prohibitorum (list of prohibited books), and the condemnation of Salman Rushdie's novel The Satanic Verses by Iranian leader Ayatollah Ruhollah Khomeini. Images of the Islamic figure Muhammad are also regularly censored. In some secular countries, this is sometimes done to prevent hurting religious sentiments.[26]
The content of school textbooks is often an issue of debate, since their target audiences are young people. The term whitewashing is commonly used to refer to revisionism aimed at glossing over difficult or questionable historical events, or a biased presentation thereof. The reporting of military atrocities in history is extremely controversial, as in the case of the Holocaust (or Holocaust denial), the Bombing of Dresden, the Nanking Massacre as found with Japanese history textbook controversies, the Armenian genocide, the Tiananmen Square protests of 1989, and the Winter Soldier Investigation of the Vietnam War.
In the context of secondary school education, the way facts and history are presented greatly influences the interpretation of contemporary thought, opinion and socialization. One argument for censoring the type of information disseminated is based on the inappropriate quality of such material for the younger public. The use of the "inappropriate" distinction is in itself controversial, as what counts as inappropriate has changed heavily over time. A Ballantine Books version of the book Fahrenheit 451, which is the version used by most school classes,[27] contained approximately 75 separate edits, omissions, and changes from the original Bradbury manuscript.
In February 2006, a National Geographic cover was censored by the Nashravaran Journalistic Institute. The offending cover was about the subject of love, and a picture of an embracing couple was hidden beneath a white sticker.[28]
Economically induced censorship is a type of censorship enacted by economic markets to favor some kinds of information and disregard others. It is also caused by market forces which privatize and commodify certain information that is not accessible to the general public, primarily because of the cost associated with commodified information such as academic journals, industry reports, and pay-to-use repositories.[29]
The concept was illustrated as a censorship pyramid[30] conceptualized primarily by Julian Assange, along with Andy Müller-Maguhn, Jacob Appelbaum, and Jérémie Zimmermann, in the book Cypherpunks.
Self-censorship is the act of censoring or classifying one's own discourse. This is done out of fear of, or deference to, the sensibilities or preferences (actual or perceived) of others, and without overt pressure from any specific party or institution of authority. Self-censorship is often practiced by film producers, film directors, publishers, news anchors, journalists, musicians, and other kinds of authors, including individuals who use social media.[32]
According to a Pew Research Center and Columbia Journalism Review survey, "About one-quarter of the local and national journalists say they have purposely avoided newsworthy stories, while nearly as many acknowledge they have softened the tone of stories to benefit the interests of their news organizations. Fully four-in-ten (41%) admit they have engaged in either or both of these practices."[33]
Threats to media freedom have shown a significant increase in Europe in recent years, according to a study published in April 2017 by the Council of Europe. These threats create a fear of physical or psychological violence, and the ultimate result is self-censorship by journalists.[34]
Copy approval is the right to read and amend an article, usually an interview, before publication. Many publications refuse to give copy approval, but it is increasingly common practice when dealing with publicity-anxious celebrities.[35] Picture approval is the right given to an individual to choose which photos will be published and which will not. Robert Redford is well known for insisting upon picture approval. Writer approval is when writers are chosen based on whether they will write flattering articles or not. Hollywood publicist Pat Kingsley is known for banning certain writers who wrote undesirably about one of her clients from interviewing any of her other clients.[36]
Flooding the public, often through online social networks, with false or misleading information is sometimes called "reverse censorship". American legal scholar Tim Wu has explained that this type of information control, sometimes by state actors, can "distort or drown out disfavored speech through the creation and dissemination of fake news, the payment of fake commentators, and the deployment of propaganda robots."[37]
Soft censorship or indirect censorship is the practice of influencing news coverage by applying financial pressure on media companies that are deemed critical of a government or its policies, and rewarding media outlets and individual journalists who are seen as friendly to the government.[38]
Book censorship can be enacted at the national or sub-national level, and can carry legal penalties for their infraction. Books may also be challenged at a local, community level. As a result, books can be removed from schools or libraries, although these bans do not typically extend outside of that area.
Aside from the usual justifications of pornography and obscenity, some films are censored due to changing racial attitudes or political correctness in order to avoid ethnic stereotyping and/or ethnic offense, despite their historical or artistic value. One example is the still-withdrawn "Censored Eleven" series of animated cartoons, which may have been innocent then, but are "incorrect" now.[39]
Film censorship is carried out by various countries, whether by censoring producers or by restricting citizens. For example, in China the film industry censors LGBT-related films; filmmakers must resort to finding funds from international investors such as the Ford Foundation, or produce through an independent film company.[40]
Music censorship has been implemented by states, religions, educational systems, families, retailers, and lobbying groups – and in most cases it violates international conventions of human rights.[41]
Censorship of maps is often employed for military purposes. For example, the technique was used in former East Germany, especially for the areas near the border to West Germany, in order to make attempts at defection more difficult. Censorship of maps is also applied by Google Maps, where certain areas are grayed out or blacked out, or are purposely left outdated with old imagery.[42]
Art is loved and feared because of its evocative power. Destroying or oppressing art can potentially justify its meaning even more.[43]
British photographer and visual artist Graham Ovenden's photos and paintings were ordered to be destroyed by a London magistrates' court in 2015 for being "indecent",[44] and their copies had been removed from the online Tate gallery.[45]
A 1980 Israeli law banned artwork composed of the four colours of the Palestinian flag,[46] and Palestinians were arrested for displaying such artwork or even for carrying sliced melons with the same pattern.[47][48][49]
Moath al-Alwi is a Guantanamo Bay prisoner who creates model ships as an expression of art. Alwi does so with the few tools he has at his disposal, such as dental floss and shampoo bottles, and he is also allowed to use a small pair of scissors with rounded edges.[50] A few of Alwi's pieces are on display at John Jay College of Criminal Justice in New York, alongside other artworks created by other inmates. The displayed artwork might be the only way for some of the inmates to communicate with the outside world. However, the military has since introduced a policy that does not allow artwork at the Guantanamo Bay military prison to leave the prison. The artwork created by Alwi and other prisoners is now government property and can be destroyed or disposed of in whatever way the government chooses, making it no longer the artist's property.[51]
Around 300 artists in Cuba are fighting for their artistic freedom due to new censorship rules that Cuba's government has in place for artists. In December 2018, following the introduction of new rules that would ban music performances and artwork not authorized by the state, performance artist Tania Bruguera was detained upon arriving in Havana and released after four days.[52]
An example of extreme state censorship was the Nazis' requirement that art be used as propaganda. Art was only allowed to be used as a political instrument to control people, and failure to act in accordance with the censors was punishable by law, sometimes even fatally. The Degenerate Art Exhibition was a historical instance of this; its goal was to advertise Nazi values and slander others.[53]
Internet censorship is control or suppression of the publishing or accessing of information on the Internet. It may be carried out by governments or by private organizations, either at the behest of the government or on their own initiative. Individuals and organizations may engage in self-censorship on their own or due to intimidation and fear.
The issues associated with Internet censorship are similar to those for offline censorship of more traditional media. One difference is that national borders are more permeable online: residents of a country that bans certain information can find it on websites hosted outside the country. Thus censors must work to prevent access to information even though they lack physical or legal control over the websites themselves. This in turn requires the use of technical censorship methods that are unique to the Internet, such as site blocking and content filtering.[59]
Furthermore, the Domain Name System (DNS), a critical component of the Internet, is dominated by a few centralized entities. The most widely used DNS root is administered by the Internet Corporation for Assigned Names and Numbers (ICANN).[60][61] As an administrator, ICANN has the right to shut down and seize domain names when it deems necessary, and in most cases the direction comes from governments. This has been the case with Wikileaks shutdowns[62] and name-seizure events such as the ones executed by the National Intellectual Property Rights Coordination Center (IPR Center) managed by Homeland Security Investigations (HSI).[63] This makes internet censorship easy for authorities, as they control what should or should not be on the Internet. Some activists and researchers have started opting for alternative DNS roots, though the Internet Architecture Board (IAB)[64] does not support these DNS root providers.
Unless the censor has total control over all Internet-connected computers, such as in North Korea or Cuba, total censorship of information is very difficult or impossible to achieve due to the underlying distributed technology of the Internet. Pseudonymity and data havens (such as Freenet) protect free speech using technologies that guarantee material cannot be removed and that prevent the identification of authors. Technologically savvy users can often find ways to access blocked content. Nevertheless, blocking remains an effective means of limiting access to sensitive information for most users when censors, such as those in China, are able to devote significant resources to building and maintaining a comprehensive censorship system.[59]
Views about the feasibility and effectiveness of Internet censorship have evolved in parallel with the development of the Internet and censorship technologies:
A BBC World Service poll of 27,973 adults in 26 countries, including 14,306 Internet users,[68] was conducted between 30 November 2009 and 7 February 2010. The head of the polling organization felt, overall, that the poll showed that:
The poll found that nearly four in five (78%) Internet users felt that the Internet had brought them greater freedom, that most Internet users (53%) felt that "the internet should never be regulated by any level of government anywhere", and almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right (50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion).[70]
The rising use of social media in many nations has led to the emergence of citizens organizing protests through social media, sometimes called "Twitter Revolutions". The most notable of these social media-led protests were the Arab Spring uprisings, starting in 2010. In response to the use of social media in these protests, the Tunisian government began a hack of Tunisian citizens' Facebook accounts, and reports arose of accounts being deleted.[71]
Automated systems can be used to censor social media posts, and therefore limit what citizens can say online. This most notably occurs in China, where social media posts are automatically censored depending on content. In 2013, Harvard political science professor Gary King led a study to determine what caused social media posts to be censored and found that posts mentioning the government were not more or less likely to be deleted whether they were supportive or critical of the government. Posts mentioning collective action were more likely to be deleted than those that did not mention collective action.[72] Currently, social media censorship appears primarily as a way to restrict Internet users' ability to organize protests. For the Chinese government, seeing citizens unhappy with local governance is beneficial, as state and national leaders can replace unpopular officials. King and his researchers were able to predict when certain officials would be removed based on the number of unfavorable social media posts.[73]
Research has shown that criticism is tolerated on Chinese social media sites and is not censored unless it has a higher chance of spurring collective action. Whether the criticism is supportive or unsupportive of the state's leaders does not matter; the main priority in censoring certain social media posts is to make sure that no large actions are taken because of something that was said on the internet. Posts that challenge the Party's leading political role in the Chinese government are more likely to be censored due to the challenge they pose to the Chinese Communist Party.[74]
In December 2022, Elon Musk, owner and CEO of Twitter, released internal documents from the social media microblogging site to journalists Matt Taibbi, Michael Shellenberger, and Bari Weiss. The analysis of these files, collectively called the Twitter Files, explored the content moderation and visibility filtering carried out in collaboration with the Federal Bureau of Investigation on the Hunter Biden laptop controversy.
On the platform TikTok, certain hashtags have been categorized by the platform's code in ways that determine how viewers can or cannot interact with the content or hashtag specifically. Some shadowbanned tags include #acab, #GayArab, and #gej, due to their referencing of certain social movements and LGBTQ identity. As TikTok guidelines become more localized around the world, some experts believe that this could result in more censorship than before.[75]
Since the early 1980s, advocates of video games have emphasized their use as an expressive medium, arguing for their protection under the laws governing freedom of speech and also as an educational tool. Detractors argue that video games are harmful and therefore should be subject to legislative oversight and restrictions. Many video games have certain elements removed or edited due to regional rating standards.[76][77] For example, in the Japanese and PAL versions of No More Heroes, blood splatter and gore are removed from the gameplay. Decapitation scenes are implied, but not shown. Scenes of missing body parts after having been cut off are replaced with the same scene, but showing the body parts fully intact.[78]
Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some form of surveillance.[79]Even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can have a "chilling effect" and lead to self-censorship.[80]
The former Soviet Union maintained a particularly extensive program of state-imposed censorship. The main organ for official censorship in the Soviet Union was the Chief Agency for Protection of Military and State Secrets, generally known by its Russian acronym as the Glavlit. The Glavlit handled censorship matters arising from domestic writings of just about any kind – even beer and vodka labels. Glavlit censorship personnel were present in every large Soviet publishing house or newspaper; the agency employed some 70,000 censors to review information before it was disseminated by publishing houses, editorial offices, and broadcasting studios. No mass medium escaped Glavlit's control. All press agencies and radio and television stations had Glavlit representatives on their editorial staffs.[81]
Sometimes, public knowledge of the existence of a specific document is subtly suppressed, a situation resembling censorship. The authorities taking such action will justify it by declaring the work to be "subversive" or "inconvenient". An example is Michel Foucault's 1978 text Sexual Morality and the Law (later republished as The Danger of Child Sexuality), originally published as La loi de la pudeur [literally, "the law of decency"]. This work defends the decriminalization of statutory rape and the abolition of age-of-consent laws.[citation needed]
When a publisher comes under pressure to suppress a book, but has already entered into a contract with the author, they will sometimes effectively censor the book by deliberately ordering a small print run and making minimal, if any, attempts to publicize it. This practice became known in the early 2000s as privishing (private publishing).[82]
Censorship for individual countries is measured by Freedom House's (FH) Freedom of the Press report,[84] Reporters Without Borders' (RWB) Press Freedom Index,[85] and V-Dem's government censorship effort index. Censorship aspects are measured by Freedom on the Net[54] and OpenNet Initiative (ONI) classifications.[83] Censorship by country collects information on censorship, internet censorship, press freedom, freedom of speech, and human rights by country and presents it in a sortable table, together with links to articles with more information. In addition to countries, the table includes information on former countries, disputed countries, political sub-units within countries, and regional organizations.
In French-speaking Belgium, politicians considered far-right are banned from live media appearances such as interviews or debates.[86][87]
Very little is formally censored in Canada, aside from "obscenity" (as defined in the landmark criminal case of R v Butler), which is generally limited to pornography and child pornography depicting and/or advocating non-consensual sex, sexual violence, degradation, or dehumanization, in particular that which causes harm (as in R v Labaye). Most films are simply subject to classification by the British Columbia Film Classification Office under the non-profit Crown corporation Consumer Protection BC, whose classifications are officially used by the provinces of British Columbia, Saskatchewan, Ontario, and Manitoba.[88]
Cuban media used to be operated under the supervision of the Communist Party's Department of Revolutionary Orientation, which "develops and coordinates propaganda strategies".[89] Connection to the Internet is restricted and censored.[90]
The People's Republic of China employs sophisticated censorship mechanisms, referred to as the Golden Shield Project, to monitor the internet. Popular search engines such as Baidu also remove politically sensitive search results.[91][92][93]
Strict censorship existed in the Eastern Bloc.[94] Throughout the bloc, the various ministries of culture held a tight rein on their writers.[95] Cultural products there reflected the propaganda needs of the state.[95] Party-approved censors exercised strict control in the early years.[96] In the Stalinist period, even the weather forecasts were changed if they suggested that the sun might not shine on May Day.[96] Under Nicolae Ceauşescu in Romania, weather reports were doctored so that the temperatures were not seen to rise above or fall below the levels which dictated that work must stop.[96]
Possession and use of copying machines was tightly controlled in order to hinder the production and distribution of samizdat, illegal self-published books and magazines. Possession of even a single samizdat manuscript, such as a book by Andrei Sinyavsky, was a serious crime which might involve a visit from the KGB. Another outlet for works which did not find favor with the authorities was publishing abroad.
Amid declining car sales in 2020, France banned a television ad by a Dutch bike company, saying the ad "unfairly discredited the automobile industry".[97]
The Constitution of India guarantees freedom of expression, but places certain restrictions on content, with a view towards maintaining communal and religious harmony, given the history of communal tension in the nation.[98] According to the Information Technology Rules 2011, objectionable content includes anything that "threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign states or public order".[99] Notably, many pornographic websites are blocked in India.
Iraq under Baathist Saddam Hussein had much the same techniques of press censorship as did Romania under Nicolae Ceauşescu, but with greater potential violence.[100]
During the GHQ occupation of Japan after World War II, any criticism of the Allies' pre-war policies, of SCAP, of the Far East Military Tribunal, or of inquiries against the United States, as well as any direct or indirect reference to the role played by the Allied High Command in drafting Japan's new constitution or to the censorship of publications, movies, newspapers and magazines, was subject to sweeping censorship, purges, and media blackouts.[101]
During the four years (September 1945–November 1949) in which the CCD was active, 200 million pieces of mail and 136 million telegrams were opened, and telephones were tapped 800,000 times. Because no criticism of the occupying forces was allowed, including criticism of crimes such as the dropping of the atomic bombs or rape and robbery by US soldiers, strict checks were carried out. Those who were caught were put on a blacklist called the watchlist, and the persons and organizations to which they belonged were investigated in detail, making it easier to dismiss or arrest these "disturbing elements".[102]
Under subsections 48(3) and (4) of the Penang Islamic Religious Administration Enactment 2004, non-Muslims in Malaysia are penalized for using, writing, or publishing the following words, in any form, version or translation in any language, or for using them in any publicity material in any medium: "Allah", "Firman Allah", "Ulama", "Hadith", "Ibadah", "Kaabah", "Qadhi'", "Illahi", "Wahyu", "Mubaligh", "Syariah", "Qiblat", "Haji", "Mufti", "Rasul", "Iman", "Dakwah", "Wali", "Fatwa", "Imam", "Nabi", "Sheikh", "Khutbah", "Tabligh", "Akhirat", "Azan", "Al Quran", "As Sunnah", "Auliya'", "Karamah", "False Moon God", "Syahadah", "Baitullah", "Musolla", "Zakat Fitrah", "Hajjah", "Taqwa" and "Soleh".[103][104][105]
On 4 March 2022, Russian President Vladimir Putin signed into law a bill introducing prison sentences of up to 15 years for those who publish "knowingly false information" about the Russian military and its operations, leading some media outlets in Russia to stop reporting on Ukraine or to shut down entirely.[106][107] Although the 1993 Russian Constitution has an article expressly prohibiting censorship,[108] the Russian censorship apparatus Roskomnadzor ordered the country's media to use only information from Russian state sources or face fines and blocks.[109] As of December 2022, more than 4,000 people had been prosecuted under the "fake news" laws in connection with the Russian invasion of Ukraine.[110]
Novaya Gazeta's editor-in-chief Dmitry Muratov was awarded the 2021 Nobel Peace Prize for his "efforts to safeguard freedom of expression". In March 2022, Novaya Gazeta suspended its print activities after receiving a second warning from Roskomnadzor.[111]
According to Christian Mihr, executive director of Reporters Without Borders, "censorship in Serbia is neither direct nor transparent, but is easy to prove."[112] Mihr cites numerous examples of censorship and self-censorship in Serbia.[112] In his view, Serbian prime minister Aleksandar Vučić has proved "very sensitive to criticism, even on critical questions", as in the case of Natalija Miletic, a correspondent for Deutsche Welle Radio, who questioned him in Berlin about the media situation in Serbia and about allegations that some ministers in the Serbian government had plagiarized their diplomas, and who later received threats and was the subject of offensive articles in the Serbian press.[112]
Multiple news outlets have accused Vučić of anti-democratic strongman tendencies.[113][114][115][116][117] In July 2014, journalists' associations voiced concern about the freedom of the media in Serbia, and Vučić came under criticism.[118][119]
In September 2015, five members of the United States Congress (Eddie Bernice Johnson, Carlos Curbelo, Scott Perry, Adam Kinzinger, and Zoe Lofgren) informed Vice President of the United States Joseph Biden that Aleksandar's brother, Andrej Vučić, was leading a group responsible for deteriorating media freedom in Serbia.[120]
In the Republic of Singapore, Section 33 of the Films Act originally banned the making, distribution and exhibition of "party political films", on pain of a fine not exceeding $100,000 or imprisonment for a term not exceeding two years.[121] The Act further defines a "party political film" as any film or video
In 2001, the short documentary A Vision of Persistence, about opposition politician J. B. Jeyaretnam, was also banned for being a "party political film". The makers of the documentary, all lecturers at the Ngee Ann Polytechnic, later submitted written apologies and withdrew the documentary from being screened at the 2001 Singapore International Film Festival in April, having been told they could be charged in court.[122] Another short documentary, Singapore Rebel by Martyn See, which documented Singapore Democratic Party leader Dr Chee Soon Juan's acts of civil disobedience, was banned from the 2005 Singapore International Film Festival on the same grounds, and See was investigated for possible violations of the Films Act.[123]
This law, however, is often disregarded when such political films are made in support of the ruling People's Action Party (PAP). Channel NewsAsia's five-part documentary series on Singapore's PAP ministers in 2005, for example, was not considered a party political film.[124]
Exceptions are also made for political films concerning the political parties of other nations. Films such as Michael Moore's 2004 documentary Fahrenheit 9/11 are thus allowed to screen regardless of the law.[125]
In March 2009, the Films Act was amended to allow party political films, as long as they are deemed factual and objective by a consultative committee. Some months later, this committee lifted the ban on Singapore Rebel.[126]
Independent journalism did not exist in the Soviet Union until Mikhail Gorbachev became its leader. Gorbachev adopted glasnost (openness), a political reform aimed at reducing censorship; before glasnost, all reporting was directed by the Communist Party or related organizations. Pravda, the predominant newspaper in the Soviet Union, had a monopoly. Foreign newspapers were available only if they were published by communist parties sympathetic to the Soviet Union.
Online access to all language versions of Wikipedia was blocked in Turkey on 29 April 2017 by Erdoğan's government.[127]
Article 299 of the Turkish Penal Code makes it illegal to "insult the President of Turkey". A person convicted of violating this article can be sentenced to a prison term of between one and four years, and if the violation was made in public the sentence can be increased by a sixth.[128] Prosecutions often target critics of the government, independent journalists, and political cartoonists.[129] Between 2014 and 2019, 128,872 investigations were launched for this offense and prosecutors opened 27,717 criminal cases.[130]
From December 1956 until 1974, the Irish republican political party Sinn Féin was banned from participating in elections by the Northern Ireland Government.[131] From 1988 until 1994, the British government prevented the UK media from broadcasting the voices (but not the words) of Sinn Féin and ten Irish republican and Ulster loyalist groups.[132]
In the United States, most forms of censorship are self-imposed rather than enforced by the government. The government does not routinely censor material, although state and local governments often restrict what is provided in libraries and public schools.[133] In addition, the distribution, receipt, and transmission (but not mere private possession) of obscene material may be prohibited by law. Furthermore, under FCC v. Pacifica Foundation, the FCC has the power to prohibit the transmission of indecent material over broadcast media. Additionally, critics of campaign finance reform in the United States say this reform imposes widespread restrictions on political speech.[134][135]
In 1973, a military coup took power in Uruguay, and the state practiced censorship. For example, writer Eduardo Galeano was imprisoned and later forced to flee. His book Open Veins of Latin America was banned by the right-wing military government, not only in Uruguay but also in Chile and Argentina.[136]
https://en.wikipedia.org/wiki/Censorship
False advertising is the act of publishing, transmitting, distributing or otherwise publicly circulating an advertisement containing a false claim or statement, made intentionally or recklessly, to promote the sale of property, goods or services.[3] A false advertisement can be classified as deceptive if the advertiser deliberately misleads the consumer, rather than making an unintentional mistake. A number of governments use regulations or other laws and methods to limit false advertising.
False advertising can take one of two broad forms: an advertisement may be factually wrong, or intentionally misleading. Both types of false advertising may be presented in a number of ways.
Photo manipulation is a technique often used in the cosmetics field and in weight-loss commercials[6] to advertise false (or non-typical) results and give consumers a false impression of a product's capabilities. Photo manipulation can alter the audience's perception of a product's effectiveness;[7] for example, makeup advertisements may use airbrushed photos. Another example is the use of darkroom exposure techniques to darken or lighten photographs. Some manipulation techniques are praised for their impressive artwork, whereas others are looked down upon, especially in cases where viewers are deceived.[citation needed]
Hidden fees can be a way for companies to trick unwary consumers into paying more than a product's advertised price, increasing profits without visibly raising prices.[8] Fine print may be used to obscure fees and surcharges in advertising. Another way to hide fees is to exclude shipping costs when listing the price of goods online, making an item look less expensive than it actually is.[9] A number of hotels charge resort fees, which are not typically included in the advertised price of a room.
Some products are sold with fillers, which increase the legal weight of a product with something that costs the producer very little compared to what the consumer thinks they are buying. Some food advertisements use this technique in products such as meat, which can be injected with broth or brine (up to 15 percent), or TV dinners filled with gravy (or other sauce) instead of meat. Malt and ham have been used as filler in peanut butter.[10] Non-meat fillers may be high in carbohydrates and low in nutritional value; one example is a cereal binder, which usually contains flour and oatmeal.[11] Some products may come in a large container which is mostly empty, leading a consumer to believe that the total amount of food is greater than it is.[12]
Another form of deceptive advertising falsifies the quality or origin of a product. If an advertiser shows a product with a certain quality but knows the product has defects or is not of the same quality, they are falsely advertising the product. Producers may misrepresent where a product is manufactured, saying (for example) that it was produced in the United States when it was produced in another country.[13]
The labels "diet," "low fat," "sugar-free," "healthy" and "good for you" are often associated with products which claim to improve health. Advertisers, aware of consumer desire to live healthier and longer, describe their products accordingly.Food advertisinginfluences consumer preferences and shopping habits.[15]Highlighting certain ingredients may mislead consumers into thinking they are buying healthy products when, in fact, they are not.[16]Dannon'sActivia yogurt was advertised as scientifically proven to boost the immune system, and was sold at a much higher price. The company was ordered to pay $45 million in damages to consumers after a lawsuit.[17]
Food companies may end up in court for using misleading tactics such as:[18]
Many US advertisements for dietary supplements include the disclaimer "This product is not intended to diagnose, treat, cure, or prevent any disease",[22] since products intended to diagnose, treat, cure, or prevent disease must undergo FDA testing and approval.
Companies use a number of advertising techniques to assert that their products are the best available.[23] One of the most common marketing tactics is comparative advertising, where "the advertised brand is explicitly compared with one or more competing brands and the comparison is obvious to the audience."[24] Laws about comparative advertising have changed in the United States; perhaps the most drastic change occurred with the 1946 Lanham Act, the backbone of all cases involving false advertisement. Marketing strategies have become more aggressive, however, and the provisions of the Lanham Act have become outdated.
USCA §1125 was passed in 2012 as an addition to the Lanham Act, clarifying issues about comparative advertising. Anyone who uses words, symbols or misleading descriptions of fact in commerce which are likely to cause consumer confusion about their own product, or misrepresents the nature, characteristics or qualities of their own (or another's) product, is civilly liable.[25] USCA §1125 addresses gaps in the Lanham Act, but is not a perfect remedy. Advertisements that present false descriptions of fact are considered deceptive, with no additional evidence required; when an advertisement makes a factual (but misleading) claim, however, evidence of confusion of an average consumer is needed.[26]
Puffing (or puffery) is exaggerating a product's worth with meaningless or unsubstantiated terms, language based on opinion rather than fact,[27] or the manipulation of data.[28] Examples include superlatives such as "greatest of all time," "best in town," and "out of this world," or a restaurant's claim that it had "the world's best-tasting food."[29]
Puffing is not an illegal form of false advertising and may be seen as a humorous way to attract consumer attention.[29] Puffing may be used as a defense against charges of deceptive advertising when it is formatted as opinion rather than fact.[30] Omitted or incomplete information is characteristic of puffery.[31]
Terms used in advertising may be used imprecisely. Depending on the jurisdiction, "organic" food may not have a clear legal definition, and "light" has been used to describe foods low in calories, sugars, carbohydrates, or salt, foods light in texture or viscosity, or even foods light in color. Labels such as "all-natural" are frequently used but essentially meaningless.
Before the Family Smoking Prevention and Tobacco Control Act, tobacco companies regularly used terms like low tar, light, ultra-light and mild to imply that such products had less detrimental effects on health. In 2009, the United States banned manufacturers from labeling tobacco products with these terms.[32] When the U.S. United Egg Producers used an "Animal Care Certified" logo on egg cartons, the Better Business Bureau said that it misled consumers by implying a higher level of animal care than was actually the case.[33]
In 2010, Kellogg's Rice Krispies cereal claimed that it could improve a child's immunity. The company was forced to discontinue such claims.[34] In 2015, Kellogg's advertised its Kashi products as "all natural" when they contained a number of artificial ingredients; Kellogg's paid $5 million to settle a lawsuit.[35]
"Better" means that one item is superior to another in some way; "best" means that it is superior to all others in some way. Advertisers often fail to specify the basis on which products are compared (such as price, size or quality) and, in the case of "better," what the product is compared to (a competitor's product, an earlier version of their product, or nothing at all). Without defining the terms "better" and "best", they become meaningless. An ad that says, "Our cold medicine is better" could be claiming that it is an improvement over taking nothing at all. Another often-seen example is "better than the leading brand", with a statistic attached; the "leading brand", however, is undefined.
In an inconsistent comparison, an item is compared with others only in terms of favorable attributes; this conveys the false impression that it is the best of all products overall. One variant is a website which lists competitors whose price for a particular item is higher, ignoring competitors whose price is lower.
A common example is the serving suggestion pictures on food-product boxes, which include ingredients other than those included in the package. The "serving suggestion" disclaimer is a legal requirement for an illustration including items not included in the purchase, but if a customer fails to notice (or understand) the caption, they may assume that all depicted items are included.
Some advertised photos of hamburgers convey the impression that the food is larger than it really is, and foods are "styled" to appear unrealistically appetizing.[36] Products sold unassembled or unfinished may have a picture of the finished product, without a picture of what the customer is actually buying. Video-game commercials may include what are essentially short CGI films, with considerably better graphics than the actual game.[37]
Consumers may buy an item based on the color they saw in an advertisement. When used to make people think food is riper, fresher, healthier, or otherwise more desirable than it really is, food coloring may be deceptive. When combined with added sugar or corn syrup, bright colors convey a subconscious impression of healthy, ripe fruit, full of antioxidants and phytochemicals.
One variation is packaging which obscures the color of the foods within, such as red mesh bags holding yellow oranges or grapefruit, which then appear to be a ripe orange or red.[38] Regularly stirring minced meat on sale at a deli can keep the surface meat red (and apparently fresh) when, left unstirred, it would otherwise oxidize and brown, showing its true age.
Angel dusting is a process in which an ingredient that would be beneficial in a certain quantity is instead added in an insignificant quantity that has no consumer benefit. The advertiser then says that the product contains that ingredient, misleading a consumer into expecting that they will experience the benefit. For example, a cereal may claim that it contains "12 essential vitamins and minerals," but the amount of each may be only one percent (or less) of the Reference Daily Intake, providing virtually no nutritional benefit.
A number of products are advertised with some form of the statement "chemical free" or "no chemicals." Because everything on Earth is made up of chemicals except for a few elementary particles formed by radioactive decay or present in minute quantities from solar wind and sunlight, a chemical-free product is impossible. The label can indicate that a product contains no synthetic or exceptionally harmful chemicals but, because the word "chemical" carries a stigma,[39] it is often used without clarification.
Bait-and-switch is a deceptive marketing tactic generally used to lure customers into a store. A company will advertise a product in an attractive way (the bait). The product is unavailable for some reason, however, and the company tries to sell something more expensive than what was originally advertised (the switch). Although only a small percentage of shoppers will buy the more expensive product, an advertiser may still profit.[40]
Bait advertising is also used in other contexts; in an online job advertisement, a potential candidate may be deceived about working conditions, pay, or other variables. An airline may "bait" a potential client with a bargain before raising the price or redirecting them to a more expensive flight.[41]
Businesses can avoid charges of misleading or deceptive conduct by following a few guidelines:
In some countries, such as Australia, bait advertising can have severe legal penalties.[44]
If a company does not say what it will do if a product fails to meet expectations, it is generally free to do little or nothing. This is due to a legal technicality which states that a contract cannot be enforced unless it provides a basis for determining a breach and for providing a remedy for a breach.[45] Fraud in crowdfunding communities such as Indiegogo and Kickstarter can be difficult to prosecute.[46]
Advertisers can falsely claim that there is no risk in trying their product. They may charge a customer's credit card for a product, offering a full refund if not satisfied. However, the customer may not receive the product; they may be billed for things they did not want; they may be unable to call the company to authorize a return; they may not be refunded an item's shipping and handling costs, or they may have to pay for return shipping.
Mirror neurons are found in several sections of the human brain.[47] They are responsible for mirroring a behavior (or movement) seen in others. In marketing, mirror neurons have been used to stimulate consumers to do what those in advertisements do.
In subliminal advertising, products (or ideas) are advertised to consumers without their knowledge. Its purpose is to induce a consumer to buy an advertised item while they are unaware that they are being influenced into making a purchase. This form of advertising exploits a consumer's subliminal state.[48]
The United States federal government regulates advertising through the Federal Trade Commission[49] (FTC) with truth-in-advertising laws,[50] and enables private litigation through a number of laws, most significantly the Lanham Act (trademark and unfair competition). Specifically, under Section 43(a), false advertising is an actionable civil claim, and a party successful in such a suit may be awarded damages or may be entitled to injunctive relief.[51] To bring a false advertising claim, the plaintiff must demonstrate that the defendant made a false or misleading statement about its own or another's product, that the statement had at least a tendency to deceive a substantial portion of the intended audience, and that there was a likelihood of injury to the plaintiff, among other elements.[51]
The goal is prevention rather than punishment, reflecting the difference between civil and criminal law. A typical remedy is ordering an advertiser to stop its illegal acts, or to include disclosure of additional information which eliminates potentially deceptive material. Corrective advertising may be mandated,[52][53] but no fines or prison time are imposed except in the rare instances where an advertiser refuses to stop despite an order to do so.[54]
New York's General Business Law § 349 covers an essential part of false-advertising regulation, declaring that "Deceptive acts or practices in the conduct of any business, trade or commerce or in the furnishing of any service in this state are hereby declared unlawful."[55] In Chimienti v. Wendy's Int'l, LLC,[56] the plaintiff failed to demonstrate that he suffered injury and did not sufficiently allege that the advertisements were materially misleading, so the claims were dismissed under the New York General Business Law. A cause of action to recover damages pursuant to General Business Law § 349 has three elements: first, that the challenged act or practice was consumer-oriented; second, that it was misleading in a material way; and third, that the plaintiff suffered injury as a result of the deceptive act.[57]
In 1905, Samuel Hopkins Adams released a series of articles detailing misleading claims by the patent medicine industry. The public outcry resulting from the articles led to the creation of the Food and Drug Administration the following year.[58]
In 1941, the Supreme Court reviewed Federal Trade Commission v. Bunte Brothers, Inc. under Section 5 of the Federal Trade Commission Act of 1914 with regard to "unfair or deceptive acts or practices".[59] The court reviewed three false-advertising cases in 2013 and 2014: Static Control v. Lexmark (concerning who may sue under the Lanham Act), ONY, Inc. v. Cornerstone Therapeutics, Inc.,[60] and POM Wonderful LLC v. Coca-Cola Co.[further explanation needed]
State governments have a number of unfair-competition laws which regulate false advertising, trademarks, and related issues. Many are similar to those of the FTC and may be copied so closely that they are known as "little FTC acts."[61] According to the National Consumer Law Center, these laws – known as "unfair, deceptive, or abusive acts and practices laws" (UDAAP or UDAP laws)[62] – vary widely in the protection they offer consumers.[63] In California, one such statute is the Unfair Competition Law (UCL).[64] The UCL "borrows heavily from section 5 of the Federal Trade Commission Act" and has developed a body of case law.[65]
Civil penalties may range from thousands to millions of dollars, and advertisers are sometimes ordered to provide all customers who purchased the product with a partial (or full) refund. Corrective advertising, disclosures, and other informational remedies may also be ordered. Advertisers may have to warn buyers of false statements in advertisements, make clear disclosures in future advertisements, or provide customers with other information to correct misinformation in an original ad.[66]
Advertising in the UK is regulated under the Consumer Protection from Unfair Trading Regulations 2008[67] (CPR), the de facto successor of the Trade Descriptions Act 1968. The regulations are designed to implement the Unfair Commercial Practices Directive, part of a common set of European minimum standards for consumer protection which legally bind advertisers in England, Scotland, Wales, and Northern Ireland.[68][67] The regulations, which focus on business-to-consumer (B2C) interactions, include a table for assessing unfairness. Evaluations are made against four tests in the regulations which indicate deceptive advertising:
These elements of deceptive advertising may impair a consumer's ability to make an informed decision, limiting their freedom of choice. The system resembles FTC regulation of behavioral advertising in prohibiting false and deceptive messaging, unfair and unethical commercial practices, and the omission of important information; it differs in monitoring aggressive sales practices (regulation seven), which include high-pressure practices that go beyond persuasion. Harassment and coercion are not defined but are interpreted as any undue physical and psychological pressure (in advertising). Each case is analyzed individually, allowing the authority to promote compliance with its enforcement policies, priorities, and available resources.
The CPR mandates a different standards authority for each country:
The Australian Competition and Consumer Commission (also known as the ACCC) is responsible for ensuring that all businesses and consumers act in accordance with the Competition and Consumer Act 2010 and fair-trade and consumer-protection laws (ACCC, 2016).[69] Each state and territory has its own consumer-protection or consumer-affairs agency:[69]
The ACCC is designed to assist consumers, businesses, industries, and infrastructure nationwide. It assists consumers by making available the rights, regulations, obligations, and procedures for refunds and returns, complaints, faulty products, and guarantees of products and services. It also develops laws and guidelines in relation to unfair practices and misleading or deceptive conduct.[69]
There are many similarities in the laws and regulations between the Australian ACCC, New Zealand's FTA, the U.S. FTC, and the United Kingdom's CPR. The goals of these policies are to support fair trade and competition and to reduce deceptive and false practices in advertising. A number of countries have agreements with the International Consumer Protection and Enforcement Network (ICPEN).[77]
The Fair Trading Act 1986 aims to promote fair competition and trading in New Zealand.[78][79] The act prohibits certain conduct in trade, provides for the disclosure of information available to the consumer relating to the supply of goods and services, and regulates product safety. Although it does not require businesses to provide all information to consumers in every circumstance, businesses are obliged to ensure that the information they provide is accurate and that important information is not withheld from consumers.[78][80]
A number of sales practices intended to mislead consumers are illegal under the Fair Trading Act.[81][80] The act also applies to certain activities whether or not the parties are "in trade," such as employment advertising, pyramid selling, and the supply of products covered by product-safety and consumer-information standards.[78]
Consumers and businesses can rely on and take legal action under the act. Consumers may contact the trader and assert their rights stated in the act. If the issues are not resolved, the consumer (or anyone else) can take legal action under the act. The Commerce Commission is also empowered to take enforcement action when allegations are sufficiently serious to meet its criteria.
There are also four consumer-information standards:[80]
The voluntary Advertising Standards Council of India (ASCI) was established in 1985 to evaluate the truth and fairness of advertisements. The ASCI also aims to ensure that ads are respectful of widely accepted principles of public decency. It draws on a number of codes and laws, including the Press Council Act of 1978, the News Broadcasters Association's Code of Conduct, the Young Persons Act of 1956, the Consumer Protection Act of 1986, the Drugs and Cosmetics Act of 1940, and the Food Safety and Standards Act of 2006.[82] Surrogate advertising is a major misleading-advertising tactic in India; many companies use it to advertise betting, gambling, online fantasy gaming, and casino apps.[83]
https://en.wikipedia.org/wiki/False_advertising
Greenwashing (a compound word modeled on "whitewash"), also called green sheen,[1][2] is a form of advertising or marketing spin that deceptively uses green PR and green marketing to persuade the public that an organization's products, goals, or policies are environmentally friendly.[3][4][5] Companies that intentionally adopt greenwashing communication strategies often do so to distance themselves from their environmental lapses or those of their suppliers.[6] Firms engage in greenwashing for two primary reasons: to appear legitimate and to project an image of environmental responsibility to the public.[7] Because there "is no harmonised definition of greenwashing", a determination that it is occurring in a given instance may be subjective.[8]
Greenwashing occurs when an organization spends significantly more resources on "green" advertising than on environmentally sound practices.[9] Many corporations use greenwashing to improve public opinion of their brands. Complex corporate structures can further obscure the bigger picture.[10] Corporations attempt to capitalize on consumers' environmental guilt.[11] Critics of the practice suggest that the rise of greenwashing, paired with ineffective regulation, contributes to consumer skepticism of all green claims and diminishes the power of the consumer to drive companies toward greener manufacturing processes and business operations.[12] Greenwashing covers up unsustainable corporate agendas and policies.[13] Highly public accusations of greenwashing have contributed to the term's increasing use.[14]
Greenwashing has recently increased to meet consumer demand for environmentally friendly goods and services. New regulations, laws, and guidelines put forward by organizations such as the Committee of Advertising Practice in the UK aim to discourage companies from using greenwashing to deceive consumers.[15] At the same time, activists have been increasingly inclined to accuse companies of greenwashing, with inconsistent standards as to what activities merit such an accusation.[8]
Activities deemed to be characteristic of greenwashing can vary by time and place, product, and the opinions or expectations of the person making the determination.[8]
According to the United Nations, greenwashing can present itself in many ways:
TerraChoice, an environmental consulting division of UL, described "seven sins of greenwashing" in 2007 to "help consumers identify products that made misleading environmental claims":[17]
The organization noted that by 2010, approximately 95% of consumer products in the U.S. claiming to be green were discovered to commit at least one of these sins.[18][19]
The origins of greenwashing can be traced to several different moments. For example, Keep America Beautiful was a campaign founded by beverage manufacturers and others in 1953.[20] The campaign focused on recycling and littering, diverting attention away from corporate responsibility to protect the environment. The objective was to forestall regulation of disposable containers, such as the regulation established by Vermont.[21]
In the mid-1960s, the environmental movement gained momentum, particularly after the publication of Rachel Carson's landmark Silent Spring. The book marked a turning point in public awareness of the environment and inspired citizen action. It prompted many companies to seek a new, cleaner, greener image through advertising. Jerry Mander, a former Madison Avenue advertising executive, called this new form of advertising "ecopornography."[22]
The first Earth Day was held on 22 April 1970. Most companies did not actively participate in the initial Earth Day events because environmental issues were not a major corporate priority and there was a sense of skepticism or resistance toward the movement's message. Nevertheless, some industries began to advertise themselves as friendly to the environment. For example, public utilities were estimated to have spent around $300 million advertising themselves as clean and green companies, eight times what they spent on pollution-reduction research.[23][24]
The term "greenwashing" was coined by New YorkenvironmentalistJay Westerveldin a 1986 essay about thehotel industry'spractice of placing notices in bedrooms promoting the reuse of towels to "save the environment". He noted that these institutions often made little or no effort toward reducing energy waste, although towel reuse saved them laundry costs. He concluded that the fundamental objective was most frequently increased profit. He labeled this and other profitable-but-ineffective "environmentally-conscientious" acts as "greenwashing".[25]
In 1991, a study published in the Journal of Public Policy and Marketing (American Marketing Association) found that 58% of environmental ads had at least one deceptive claim. Another study found that 77% of people said a company's environmental reputation affected whether they would buy its products. One-fourth of all household products marketed around Earth Day advertised themselves as being green and environmentally friendly. In 1998, the Federal Trade Commission created the "Green Guidelines", which defined terms used in environmental marketing. The following year, the FTC found the Nuclear Energy Institute's environmentally-clean claims invalid but did nothing about the ads because they were outside the agency's jurisdiction. This caused the FTC to realize it needed new, clear, enforceable standards. In 1999, the word "greenwashing" was added to the Oxford English Dictionary.[23][24]
Days before the 1992 Earth Summit in Rio de Janeiro, Greenpeace released the Greenpeace Book on Greenwash, which described the corporate takeover of the UN conference and provided case studies of the contrast between corporate polluters and their rhetoric. Third World Network published an expanded version of that report, "Greenwash: The Reality Behind Corporate Environmentalism."
In 2002, during the World Summit on Sustainable Development in Johannesburg, the Greenwashing Academy hosted the Greenwash Academy Awards. The ceremony awarded companies like BP and ExxonMobil, and even the U.S. Government, for their elaborate greenwashing ads and support for greenwashing.[23][24] A European Union study from 2020 found that over 50% of examined environmental claims in the EU were vague, misleading or unfounded, and 40% were unsubstantiated.[26]
Many companies have committed to reducing their greenhouse gas emissions to net zero following the establishment of the Paris Agreement in 2015. A net-zero emissions level means that any emissions given off by a company would be offset by carbon eliminators in the natural world (otherwise known as carbon sinks). In practice, however, many companies are not actually cutting emissions; they instead produce infeasible plans and improve other things rather than their emissions, failing to uphold their commitments and ultimately producing no positive change.[16]
Some companies communicate and publicize unsubstantiated claims of ethics or social responsibility, and practice greenwashing, which increases consumer cynicism and mistrust.[85] By using greenwashing, companies can present their business as more ecologically sustainable than it is. According to a policy report, greenwashing includes risks such as misleading advertisements and public communications, misleading ESG credentials, and false or misleading carbon-credit claims.[86]
Legal analyses of corruption and integrity risks in climate solutions show that regulations are significantly weaker for misleading ESG credentials than for climate washing and advertising standards. Despite imposed obligations, ESG rating agencies and ESG auditors are not regulated in any of the reviewed jurisdictions. Factors such as the lack of oversight of third-party environmental service providers, the opacity of internal scoring methodologies, and the lack of alignment and consistency around ESG assessments can create opportunities for misleading or unsubstantiated claims and, in the worst cases, bribery or fraud.[86]
Greenwashing is a relatively new area of research within psychology, and there is not yet consensus among studies on how greenwashing affects consumers and stakeholders. Because recently published studies vary in country and geography, discrepancies in consumer behavior between studies could be attributed to cultural or geographic differences.
Researchers found that consumers significantly favor environmentally friendly products over their greenwashed counterparts.[87] A survey by LendingTree found that 55% of Americans are willing to spend more money on products they perceive to be more sustainable and eco-friendly.[88]
Consumer perceptions of greenwashing are also mediated by the level of greenwashing they are exposed to.[89] Other research suggests that few consumers notice greenwashing, particularly when they perceive the company or brand as reputable. When consumers perceive green advertising as credible, they develop more positive attitudes towards the brand, even when the advertising is greenwashed.[90]
Other research suggests that consumers with more green concern are better able to tell the difference between honest green marketing and greenwashed advertising; the greater the green concern, the stronger the intention not to purchase from companies perceived to engage in greenwashed advertising. When consumers use word-of-mouth to communicate about a product, green concern strengthens the negative relationship between the consumer's intent to purchase and the perception of greenwashing.[91]
Research suggests that consumers distrust companies that greenwash because they view the act as deceptive. If consumers perceive that a company would realistically benefit from a green marketing claim being true, then it is more likely that the claim and the company will be seen as genuine.[92]
Consumers' willingness to purchase green products decreases when they perceive that green attributes compromise product quality, making greenwashing potentially risky, even when the consumer or stakeholder is not skeptical of green messaging. Words and phrases often used in green messaging and greenwashing, such as "gentle," can lead consumers to believe the green product is less effective than a non-green option.[93]
Eco-labels can be given to a product by an external organization or by the company itself. This has raised concerns because companies can label a product as green or environmentally friendly by selectively disclosing positive attributes of the product while not disclosing environmental harms.[94] Consumers expect to see eco-labels from both internal and external sources but perceive labels from external sources to be more trustworthy. Researchers from the University of Twente found that uncertified or greenwashed internal eco-labels may still contribute to consumer perceptions of a responsible company, with consumers attributing internal motivation to a company's internal eco-labeling.[95] Other research connecting attribution theory and greenwashing found that consumers often perceive green advertising as greenwashing, attributing the green messaging to corporate self-interest. Green advertising can backfire, particularly when the advertised environmental claim does not match a company's environmental engagement.[96]
Researchers working with consumer perception, psychology, and greenwashing note that companies should "walk the walk" regarding green advertising and behavior to avoid the negative connotations and perceptions of greenwashing. Green marketing, labeling, and advertising are most effective when they match a company's environmental engagement. This is also mediated by the visibility of those environmental engagements: if consumers are unaware of a company's commitment to sustainability or environmentally-conscious ethos, they cannot factor greenness into their assessment of the company or product.[97]
Exposure to greenwashing can make consumers indifferent to or generate negative feelings toward green marketing. Thus, genuinely green businesses must work harder to differentiate themselves from those who use false claims. Nevertheless, consumers may react negatively to valid sustainability claims because of negative experiences with greenwashing.[98]
Conversely, concerns about the perception of genuine efforts to develop more environmentally friendly practices can lead to "greenhushing", where a company avoids publicizing these efforts out of concern that they will be accused of greenwashing anyway.[8]
Companies may pursue environmental certification to avoid greenwashing through independent verification of their green claims. For example, the Carbon Trust Standard launched in 2007 with the stated aim "to end 'greenwash' and highlight firms that are genuine about their commitment to the environment."[99]
There have been attempts to reduce the impact of greenwashing by exposing it to the public.[100] The Greenwashing Index, created by the University of Oregon in partnership with EnviroMedia Social Marketing, allowed the public to upload and rate examples of greenwashing, but it was last updated in 2012.[101]
Research published in the Journal of Business Ethics in 2011 shows that sustainability ratings might deter greenwashing. The results showed that higher sustainability ratings lead to significantly higher brand reputation than lower sustainability ratings, a trend that held regardless of the company's level of corporate social responsibility (CSR) communications. This finding suggests that consumers pay more attention to sustainability ratings than to CSR communications or greenwashing claims.[102]
The World Federation of Advertisers released six new guidelines for advertisers in 2022 to prevent greenwashing. These approaches encourage credible environmental claims and more sustainable outcomes.[103]
Worldwide regulations on misleading environmental claims vary from criminal liability to fines or voluntary guidelines.
The Australian Trade Practices Act punishes companies that provide misleading environmental claims. Any organization found guilty could face up to A$6 million in fines.[104] In addition, the guilty party must pay for all expenses incurred while setting the record straight about their product or company's actual environmental impact.[105]
Canada's Competition Bureau, along with the Canadian Standards Association, discourages companies from making "vague claims" about their products' environmental impact. Any claims must be backed up by "readily available data."[105]
The European Anti-Fraud Office (OLAF) handles investigations that have an environmental or sustainability element, such as the misspending of EU funds intended for green products and the counterfeiting and smuggling of products with the potential to harm the environment and health. It also handles illegal logging and the smuggling of precious wood and timber into the EU (wood laundering).[106]
In January 2021, the European Commission, in cooperation with national consumer protection authorities, published a report on its annual survey of consumer websites investigated for violations of EU consumer protection law.[107] The study examined green claims across a wide range of consumer products, concluding that for 42 percent of the websites examined, the claims were likely false and misleading and could well constitute actionable unfair commercial practices.[108]
Amid escalating concerns about the authenticity of corporate ecological sustainability claims, greenwashing has emerged as a significant issue and exposes real gaps in sustainable finance regulation. ESMA has outlined the correlation between the growth of ESG-related funds and greenwashing: the number of funds incorporating vague ESG-related language in their names has risen sharply since the Paris Agreement (2015), effectively and deceptively attracting more investors.[109]
The 2020–2024 agenda of DG FISMA addresses concerns about greenwashing by reconciling two objectives: increasing capital for sustainable investments and bolstering trust and investor protection in European financial markets.[110]
The European Union struck a provisional agreement mandating new reporting rules for companies with over 250 staff and a turnover of over €40 million. They must disclose environmental, social, and governance (ESG) information, which will help combat greenwashing. These requirements took effect in 2024.[111] In 2023, the European Commission introduced a proposed ESG regulation aimed at bolstering transparency and integrity in ESG ratings.[112]
In June 2024, the Federal Constitutional Court of Germany ruled that companies using "climate neutral" in advertising must define what the term means; otherwise, the phrase is too vague to be permitted.[113]
Norway's consumer ombudsman has targeted automakers who claim their cars are "green," "clean," or "environmentally friendly" with some of the world's strictest advertising guidelines. Consumer Ombudsman official Bente Øverli said: "Cars cannot do anything good for the environment except less damage than others." Manufacturers risk fines if they fail to drop misleading advertisements. Øverli said she did not know of other countries going so far in cracking down on cars and the environment.[114][115][116][117]
The Green Leaf Certification is an evaluation method created by the Association of Southeast Asian Nations (ASEAN) as a metric that rates hotels' environmental performance.[118] In Thailand, this certification is believed to help curb the greenwashing associated with green hotels. Eco hotels, or "green hotels", are hotels that have adopted sustainable, environmentally-friendly practices in their hospitality operations.[119] Since the development of the tourism industry in the ASEAN region, Thailand has surpassed its neighboring countries in inbound tourism, with 9 percent of Thailand's direct GDP contributions coming from the travel and tourism industry in 2015.[120] Because of this growth and the reliance on tourism as an economic pillar, Thailand developed "responsible tourism" in the 1990s to promote the well-being of local communities and the environment affected by the industry.[118] However, studies show that green hotel companies' principles and environmental perceptions contradict the basis of corporate social responsibility in responsible tourism.[118][121] Against this context, the Green Leaf Certification aims to keep the hotel industry and its supply chains accountable for corporate social responsibility regarding sustainability by having an independent international organization evaluate a hotel and rate it from one to five leaves.[122]
The Competition and Markets Authority is the UK's primary competition and consumer authority. In September 2021, it published a Green Claims Code to protect consumers from misleading environmental claims and businesses from unfair competition.[123] In May 2024, the Financial Conduct Authority introduced anti-greenwashing rules covering sustainability claims made by regulated firms that market financial products or services.[124]
The Federal Trade Commission (FTC) provides voluntary guidelines for environmental marketing claims. These guidelines give the FTC the right to prosecute false and misleading claims, but they are not themselves enforceable; they are instead intended to be followed voluntarily:
The FTC announced in 2010 that it would update its guidelines for environmental marketing claims in an attempt to reduce greenwashing.[126] The revision to the FTC's Green Guides reflects a wide range of public input, including hundreds of consumer and industry comments on previously proposed revisions, and offers clear guidance on what constitutes misleading information while demanding clear factual evidence.[108]
According to FTC Chairman Jon Leibowitz, "The introduction of environmentally-friendly products into the marketplace is a win for consumers who want to purchase greener products and producers who want to sell them." Leibowitz added that such a win-win can only operate if marketers' claims are straightforward and proven.[127]
In 2013, the FTC began enforcing these revisions. It cracked down on six companies; five of the cases concerned false or misleading advertising surrounding the biodegradability of plastics. The FTC charged ECM Biofilms, American Plastic Manufacturing, CHAMP, Clear Choice Housewares, and Carnie Cap with misrepresenting the biodegradability of their plastics treated with additives.[128]
The FTC charged a sixth company, AJM Packaging Corporation, with violating a commission consent order prohibiting companies from using advertising claims based on a product or packaging being "degradable, biodegradable, or photodegradable" without reliable scientific information.[128] The FTC now requires companies to disclose and provide the information that qualifies their environmental claims, to ensure transparency.
The issue of green marketing and consumerism in China has gained significant attention as the country faces environmental challenges. According to "Green Marketing and Consumerism in China: Analyzing the Literature" by Qingyun Zhu and Joseph Sarkis, China has implemented environmental protection laws to regulate the business and commercial sector. Regulations such as the Environmental Protection Law and the Circular Economy Promotion Law contain provisions prohibiting false advertising (greenwashing).[129][130] The Chinese government has issued regulations and standards to govern green advertising and labeling, including the Guidelines for Green Advertising Certification, the Guidelines for Environmental Labeling and Eco-Product Certification, and the Standards for Environmental Protection Product Declaration. These guidelines promote transparency in green marketing and prevent false or misleading claims. The Guidelines for Green Advertising Certification require that green advertising claims be truthful, accurate, and verifiable.[131] The guidelines and certifications also require that eco-labels be based on scientific and technical evidence, contain no false or misleading information, and be easy to understand so as not to confuse or deceive consumers. These regulations are designed to protect consumers, while China's climate crisis and sustainability challenges remain critical and require ongoing attention.
To address this practice, in November 2016 the General Office of the State Council introduced legislation to promote the development of green products, encourage companies to adopt sustainable practices, and call for a unified standard for what could be labeled green.[132] This was a general plan, with no specifics on implementation; however, together with similarly worded legislation and plans issued around that time, it signaled a push toward a unified green-product standard.[133] Until then, green products had been covered by various standards and guidelines developed by different government agencies or industry associations, resulting in a lack of consistency and coherence. For example, the Ministry of Environmental Protection of China (now the Ministry of Ecology and Environment) issued specifications in 2000, but these guidelines were limited and not widely recognized by industry or consumers. It was not until 2017, with the launch of GB/T (a set of national standards and recommendations), that a widespread guideline was set for what constitutes green manufacturing and a green supply chain.[134][135] Expanding on these guidelines, in 2019 the State Administration for Market Regulation (SAMR) created regulations for Green Product Labels, symbols used on products to mark that they meet certain environmentally friendly criteria and that certification agencies have verified their manufacturing process.[136][137] The standards and coverage for green products have increased over time, with changes and improvements to green-product standardization still occurring in 2023.[135]
In China, the Greenpeace campaign focuses on the problem of air pollution, aiming to address the severe air pollution prevalent in many Chinese communities. The campaign has been working to raise awareness about air pollution's health and environmental impacts, advocate for more robust government policies and regulations to reduce emissions, and encourage a shift toward clean and renewable energy sources.[138] "From 2011 to 2016, we linked global fast fashion brands to toxic chemical pollution in China through their manufacturers. Many multinational companies and local suppliers have stopped using toxic and harmful chemicals. They included Adidas, Benetton, Burberry, Esprit, H&M, Puma, and Zara, among others." The campaign has involved various activities, including scientific research, public education, and advocacy, and has organized public awareness events to engage both consumers and policymakers, urging them to take action to improve air quality. "In recent years, Chinese Communist Party general secretary Xi Jinping has committed to controlling the expansion of coal power plants. He has also pledged to stop building new coal power abroad." The campaign seeks to drive public and government interest toward stricter air pollution control measures, promote clean energy technology, and contribute to health, wellness, and sustainability in China; the health of Chinese citizens is at the forefront of this issue, as air pollution is a critical problem in the nation. China's Greenpeace campaigns, like those in other countries, are part of Greenpeace's global efforts to address environmental challenges and promote sustainability.
"Bluewashing" is a similar term. However, instead of falsely advertising environmentally friendly practices, companies are advertising corporate social responsibility. For example, companies are saying they are fighting for human rights while practicing very unethical production practices such as paying factory employees next to nothing.[139]
Carbon emission trading can be similar to greenwashing in that it gives an environmentally friendly impression but can be counterproductive if carbon is priced too low, or if large emitters are given "free credits." For example, Bank of America subsidiary MBNA offers "Eco-Logique" MasterCards that reward Canadian customers with carbon offsets when they use them. Customers may feel that they are nullifying their carbon footprint by purchasing goods with these cards, but only 0.5% of the purchase price goes to buy carbon offsets; the rest of the interchange fee still goes to the bank.[140]
Greenscamming describes an organization or product taking on a name that falsely implies environmental friendliness. It is related to both greenwashing and greenspeak.[141]This is analogous to aggressive mimicry in biology.[142][143]
Greenscamming is used in particular by industrial companies and associations that deploy astroturfing organisations to try to dispute scientific findings that threaten their business model. One example is the denial of man-made global warming promoted by companies in the fossil energy sector, partly through specially founded greenscamming organizations.[citation needed]
One reason to establish greenscamming organizations is that openly communicating the benefits of environmentally damaging activities is difficult. Sociologist Charles Harper stresses that it would be hard to market a group called "Coalition to Trash the Environment for Profit". Anti-environment initiatives must therefore give their front organizations deliberately deceptive names if they want to be successful, as surveys[citation needed]show that environmental protection enjoys a social consensus. However, the danger of being exposed as an anti-environmental initiative carries a considerable risk that the greenscamming activities will backfire and prove counterproductive for their initiators.[144]
Greenscamming organizations are active in organized climate denial.[142]An important financier of greenscamming organizations was the oil company ExxonMobil, which financially supported more than 100 climate denial organizations and spent about 20 million U.S. dollars on greenscamming groups.[145]James Lawrence Powell identified the "admirable" designations of many of these organizations as their most striking common feature, as most sounded very rational. He cites a list of climate denial organizations drawn up by the Union of Concerned Scientists, which includes 43 organizations funded by Exxon. None had a name that would lead one to infer that climate change denial was its "raison d'être". The list is headed by Africa Fighting Malaria, whose website features articles and commentaries opposing ambitious climate mitigation concepts, even though the dangers of malaria could be exacerbated by global warming.[146]
Examples of greenscamming organizations include the National Wetlands Coalition, Friends of Eagle Mountain, The Sahara Club, The Alliance for Environment and Resources, The Abundant Wildlife Society of North America, the Global Climate Coalition, the National Wilderness Institute, the Environmental Policy Alliance of the Center for Organizational Research and Education, and the American Council on Science and Health.[143][147]Behind these ostensible environmental protection organizations lie the interests of business sectors. For example, oil drilling companies and real estate developers support the National Wetlands Coalition, while the Friends of Eagle Mountain is backed by a mining company that wants to convert open-cast mines into landfills. The Global Climate Coalition was backed by commercial enterprises that fought against government-imposed climate protection measures. Other greenscam organizations include the U.S. Council for Energy Awareness, backed by the nuclear industry; the Wilderness Impact Research Foundation, representing the interests of loggers and ranchers; and the American Environmental Foundation, representing the interests of landowners.[148]
Another greenscam organization is Northwesterners for More Fish, which had a budget of $2.6 million in 1998. This group opposed conservation measures for endangered fish that restricted the interests of energy companies, aluminum companies, and the region's timber industry, and tried to discredit environmentalists who promoted fish habitats.[143]The Center for the Study of Carbon Dioxide and Global Change, the National Environmental Policy Institute, and the Information Council on the Environment, funded by the coal industry, are also greenscamming organizations.[145]
In Germany, this form of mimicry or deception is used by the "European Institute for Climate and Energy" (EIKE), whose name suggests that it is an important scientific research institution.[149]In fact, EIKE is not a scientific institution at all, but a lobby organization that neither maintains an office nor employs climate scientists; instead it disseminates fake news on climate issues on its website.[150]
|
https://en.wikipedia.org/wiki/Greenwashing
|
Impression management is a conscious or subconscious process in which people attempt to influence the perceptions of other people about a person, object or event by regulating and controlling information in social interaction.[1]It was first conceptualized by Erving Goffman in 1956 in The Presentation of Self in Everyday Life, and then was expanded upon in 1967.
Impression management behaviors include accounts (providing "explanations for a negative event to escape disapproval"), excuses (denying "responsibility for negative outcomes"), and opinion conformity ("speak(ing) or behav(ing) in ways consistent with the target"), along with many others.[2]By utilizing such behaviors, those who partake in impression management are able to control others' perception of them or events pertaining to them. Impression management is possible in nearly any situation, such as in sports (wearing flashy clothes or trying to impress fans with their skills), or on social media (only sharing positive posts). Impression management can be used with either benevolent or malicious intent.
Impression management is usually used synonymously with self-presentation, in which a person tries to influence the perception of their image. The notion of impression management was first applied to face-to-face communication, but was later expanded to apply to computer-mediated communication. The concept of impression management is applicable to academic fields of study such as psychology and sociology as well as practical fields such as corporate communication and media.
The foundation and the defining principles of impression management were created by Erving Goffman in The Presentation of Self in Everyday Life. Impression management theory states that one tries to alter the perceptions others form of oneself according to one's goals. In other words, the theory is about how individuals wish to present themselves, in a way that satisfies their needs and goals. Goffman "proposed to focus on how people in daily work situations present themselves and, in so doing, what they are doing to others", and he was "particularly interested in how a person guides and controls how others form an impression of them and what a person may or may not do while performing before them".[3]
Impression management can be found in all social interactions, whether real or imaginary, and is governed by a range of factors. The characteristics of a given social situation are important; specifically, the surrounding cultural norms determine the appropriateness of particular nonverbal behaviors.[4]The actions and exchange have to be appropriate to the targets, and within that culture's norms. Thus, the nature of the audience and its relationship with the speaker influences the way impression management is realized.
The awareness of being a potential subject of monitoring is also crucial. A person's goals inform the strategies of impression management, and can influence how they are received. This leads to distinct ways of presenting the self. Self-efficacy describes whether a person is convinced that it is possible to convey the intended impression.[5]Conmen, for instance, can rely on their ability to emanate self-assuredness in the process of gaining a mark's trust.
There is evidence that, all other things being equal, people are more likely to pay attention to faces associated with negative gossip compared to those with neutral or positive associations.[6]This contributes to a body of work indicating that, far from being objective, human perceptions are shaped by unconscious brain processes that determine what they "choose" to see or ignore—even before a person is consciously aware of it. The findings also add to the idea that the brain evolved to be particularly sensitive to "bad guys" or cheaters—fellow humans who undermine social life by deception, theft or other non-cooperative behavior.[6]
There are many methods behind self-presentation, including self-disclosure (identifying what makes you "you" to another person), managing appearances (trying to fit in), ingratiation, aligning actions (making one's actions seem appealing or understandable), and alter-casting (imposing identities onto other people). Maintaining a version of self-presentation that is generally considered to be attractive can help to increase one's social capital; this method is commonly used at networking events. These self-presentation methods can also be used by corporations for impression management with the public.[1][7]
Self-presentation is conveying information about oneself – or an image of oneself – to others. There are two types and motivations of self-presentation, described below.
Self-presentation is expressive. Individuals construct an image of themselves to claim personal identity, and present themselves in a manner consistent with that image. If they feel that this expression is restricted, they often exhibit reactance or become defiant, trying to assert their freedom against those who would seek to curtail it. An example of this dynamic is someone who grew up with extremely strict or controlling parental figures. The child in this situation may feel that their identity and emotions have been suppressed, which may cause them to behave negatively towards others.
Self-presentation strategies can be either defensive or assertive (also described as protective versus acquisitive).[12]Whereas defensive strategies include behaviours like avoidance of threatening situations or means of self-handicapping, assertive strategies refer to more active behaviour like the verbal idealisation of the self, the use of status symbols or similar practices.[13]

These strategies play important roles in one's maintenance of self-esteem.[14]One's self-esteem is affected by their evaluation of their own performance and their perception of how others react to their performance. As a result, people actively portray impressions that will elicit self-esteem enhancing reactions from others.[15]
In 2019, as filtered photos came to be perceived as deceptive by users, PlentyOfFish, along with other dating sites, started to ban filtered images.[16]
Goffman argued in his 1967 book, Interaction Ritual, that people participate in social interactions by performing a "line", or "pattern of verbal and nonverbal acts", which is created and maintained by both the performer and the audience.[17]By enacting a line effectively, the person gains positive social value, which is also called "face". The success of a social interaction will depend on whether the performer has the ability to maintain face.[3]As a result, a person is required to display a kind of character by becoming "someone who can be relied upon to maintain himself as an interactant, poised for communication, and to act so that others do not endanger themselves by presenting themselves as interactants to him".[17]Goffman analyses how a human being in "ordinary work situations presents himself and his activity to others, the ways in which he guides and controls the impression they form of him, and the kinds of things he may and may not do while sustaining his performance before them".[18]
When Goffman turned to focus on people physically presented in a social interaction, the "social dimension of impression management certainly extends beyond the specific place and time of engagement in the organization". Impression management is "a social activity that has individual and community implications".[3]We call it "pride" when a person displays a good showing from duty to himself, while we call it "honor" when he "does so because of duty to wider social units, and receives support from these duties in doing so".[17]
Another approach to moral standards that Goffman pursues is the notion of "rules of conduct", which "can be partially understood as obligations or moral constraints". These rules may be substantive (involving laws, morality, and ethics) or ceremonial (involving etiquette).[3]Rules of conduct play an important role when a relationship "is asymmetrical and the expectations of one person toward another are hierarchical."[3]
Goffman presented impression management dramaturgically, explaining the motivations behind complex human performances within a social setting based on a play metaphor.[19]Goffman's work incorporates aspects of a symbolic interactionist perspective,[20]emphasizing a qualitative analysis of the interactive nature of the communication process. Impression management requires the physical presence of others. Performers who seek certain ends in their interest must "work to adapt their behavior in such a way as to give off the correct impression to a particular audience" and "implicitly ask that the audience take their performance seriously".[3]Goffman proposed that, while among other people, an individual always strives to control the impression that others form of him or her in order to achieve individual or social goals.[21]
The actor, shaped by the environment and target audience, sees interaction as a performance. The objective of the performance is to provide the audience with an impression consistent with the desired goals of the actor.[22]Thus, impression management is also highly dependent on the situation.[23]In addition to these goals, individuals differ in their responses to the interactive environment: some may be non-responsive to an audience's reactions, while others actively respond to audience reactions in order to elicit positive results. These differences in response towards the environment and target audience are called self-monitoring.[24]Another factor in impression management is self-verification, the act of conforming the audience to the person's self-concept.
The audience can be real or imaginary. IM style norms, part of the mental programming received through socialization, are so fundamental that we usually do not notice our expectations of them. While an actor (speaker) tries to project a desired image, an audience (listener) might attribute a resonant or discordant image. An example is provided by situations in which embarrassment occurs and threatens the image of a participant.[25]
Goffman proposes that performers "can use dramaturgical discipline as a defense to ensure that the 'show' goes on without interruption."[3]Goffman contends that dramaturgical discipline includes a number of such practices.[3]
In business, "managing impressions" normally "involves someone trying to control the image that a significant stakeholder has of them". The ethics of impression management has been hotly debated on whether we should see it as an effective self-revelation or as cynicalmanipulation.[3]Some people insist that impression management can reveal a truer version of the self by adopting the strategy of being transparent. Becausetransparency"can be provided so easily and because it produces information of value to the audience, it changes the nature of impression management from being cynically manipulative to being a kind of useful adaptation".
Virtue signalling is used within groups to criticize their own members for valuing outward appearance over substantive action (action with a real or permanent, rather than apparent or temporary, effect).
Psychological manipulation is a type of social influence that aims to change the behavior or perception of others through abusive, deceptive, or underhanded tactics.[26]By advancing the interests of the manipulator, often at another's expense, such methods could be considered exploitative, abusive, devious, and deceptive. The process of manipulation involves bringing an unknowing victim under the domination of the manipulator, often using deception, and using the victim to serve the manipulator's own purposes.
Machiavellianism is a term that some social and personality psychologists use to describe a person's tendency to be unemotional, and therefore able to detach himself or herself from conventional morality and hence to deceive and manipulate others.[27](See also Machiavellianism in the workplace.)
Lying is a destructive force: it can manipulate an environment and allow the liar to behave as a narcissist. A person's mind can be manipulated into believing such antics are true even though they are purely deceptive and unethical.[28]Theories suggest manipulation can have a large effect on the dynamics of a relationship: a mistrustful disposition can trigger disapproving attitudes and behavior, and while relationships with a positive force provide a greater exchange, relationships with poor moral values tend toward detachment and disengagement.[29]Dark personalities and manipulation belong to the same entity; manipulation intervenes between a person and their attainable goals when their perspective is focused only on self-centeredness.[30]Such a personality invites a range of erratic behaviors that can corrupt the mind into practicing violent acts, resulting in rage and physical harm.[31]
Public relations ethics. Professionals serve both the public interest and the private interests of businesses, associations, non-profit organizations, and governments. This dual obligation gave rise to heated debates among scholars of the discipline and practitioners over its fundamental values; the conflict represents the main ethical predicament of public relations.[40] In 2000, the Public Relations Society of America (PRSA) responded to the controversy by acknowledging in its new code of ethics "advocacy" – for the first time – as a core value of the discipline.[40]
The field of public relations is largely unregulated, but many professionals voluntarily adhere to the code of conduct of one or more professional bodies to avoid exposure for ethical violations.[41] The Chartered Institute of Public Relations, the Public Relations Society of America, and The Institute of Public Relations are a few organizations that publish an ethical code. Still, Edelman's 2003 semi-annual trust survey found that only 20 percent of survey respondents from the public believed paid communicators within a company were credible.[42] Individuals in public relations are growing increasingly concerned with their company's marketing practices, questioning whether they agree with the company's social responsibility. They seek more influence over marketing and more of a counseling and policy-making role. On the other hand, individuals in marketing are increasingly interested in incorporating publicity as a tool within the realm of marketing.[43]
According to Scott Cutlip, the social justification for public relations is the right for an organization to have a fair hearing of their point of view in the public forum, but to obtain such a hearing for their ideas requires a skilled advocate.[44]
Marketing and communications strategist, Ira Gostin, believes there is a code of conduct when conducting business and using public relations. Public relations specialists have the ability to influence society. Fact-checking and presenting accurate information is necessary to maintain credibility with employers and clients.[45]
Public Relations Code of Ethics
The Public Relations Student Society of America has established a set of fundamental guidelines that people within the public relations profession should practice and use in their business atmosphere. These values are:
Advocacy: Serving the public interest by acting as responsible advocates for the clientele. This can occur by displaying the marketplace of ideas, facts and viewpoints to aid informed public debate.
Honesty: Standing by the truth and accuracy of all facts in the case and advancing those statements to the public.
Expertise: To become and stay informed of the specialized knowledge needed in the field of Public Relations. Taking that knowledge and improving the field through development, research and education. Meanwhile, professionals also build their understanding, credibility, and relationships to understand various audiences and industries.
Independence: Provide unbiased work to those that are represented while being accountable for all actions.
Loyalty: Stay devoted to the client while remembering that there is a duty to still serve the public interest.
Fairness: Honorably conduct business with any and all clients, employers, competitors, peers, vendors, media, and the general public, respecting all opinions and the right of free expression.[46]
International Public Relations Code of Ethics
Beyond the ethics put in place in the United States, there are also international ethics intended to ensure proper and legal worldwide communication. There are broad codes used specifically for international forms of public relations, and more specific codes from different countries. For example, some countries have associations that create ethics and standards for communication across the country.
The International Association of Business Communicators (founded in 1971),[47] also known as IABC, has its own set of ethics to enforce guidelines ensuring that international communication is legal, ethical, and in good taste. Principles that members of the IABC board follow include:
Having proper and legal communication
Being understanding and open to other people's cultures, values, and beliefs
Creating communication that is accurate and trustworthy, to ensure mutual respect and understanding
IABC members use the following list of ethical principles in order to improve the values of communication throughout the world:[47]
Being credible and honest
Keeping up with information to ensure accuracy of communication
Understanding free speech and respecting this right
Having sensitivity towards other people's thoughts, beliefs, and way of life
Not taking part in unethical behaviors
Obeying policies and laws
Giving proper credit to resources used for communication
Ensuring private information is protected (not used for personal gain) and, if publicized, ensuring proper legal measures are put in place
Not accepting gifts, benefits, or payments for their work or services
Creating and spreading only results that are attainable and that they can deliver
Being fully truthful to other people, and to themselves
Media is a major resource in the public relations career, especially in news networks. As a public relations specialist, having accurate information is therefore very important, and crucial to society as a whole.
Spin
Main article: Spin (public relations)
Spin has been interpreted historically to mean overt deceit meant to manipulate the public, but since the 1950s it has shifted to describing a "polishing of the truth."[48] Today, spin refers to providing a certain interpretation of information meant to sway public opinion.[49] Companies may use spin to create the appearance that the company or other events are going in a slightly different direction than they actually are.[48] Within the field of public relations, spin is seen as a derogatory term, interpreted by professionals as meaning blatant deceit and manipulation.[50][51] Skilled practitioners of spin are sometimes called "spin doctors."
In PR! A Social History of Spin, Stuart Ewen argues that public relations can be a real menace to democracy because it renders public discourse powerless. Corporations are able to hire public relations professionals, transmit their messages through the media channels, and exercise a huge amount of influence upon the individual, who is defenseless against such a powerful force. He claims that public relations is a weapon for capitalist deception and that the best way to resist is to become media literate and use critical thinking when interpreting the various mediated messages.[52]
According to Jim Hoggan, "public relations is not by definition 'spin'. Public relations is the art of building good relationships. You do that most effectively by earning trust and goodwill among those who are important to you and your business... Spin is to public relations what manipulation is to interpersonal communications. It's a diversion whose primary effect is ultimately to undermine the central goal of building trust and nurturing a good relationship."[53]
The techniques of spin include selectively presenting facts and quotes that support ideal positions (cherry picking), the so-called "non-denial denial", phrasing in a way that presumes unproven truths, euphemisms for drawing attention away from items considered distasteful, and ambiguity in public statements. Another spin technique involves the careful choice of timing in the release of certain news so it can take advantage of prominent events in the news.
Negative
See also: Negative campaigning
Negative public relations, also called dark public relations (DPR), 'black hat PR', and in some earlier writing "Black PR", is a process of destroying the target's reputation and/or corporate identity. The objective in DPR is to discredit someone who may pose a threat to the client's business or be a political rival. DPR may rely on IT security, industrial espionage, social engineering, and competitive intelligence. Common techniques include using dirty secrets from the target and producing misleading facts to fool a competitor.[54][55][56][57] In politics, a decision to use negative PR is also known as negative campaigning.
The social psychologist, Edward E. Jones, brought the study of impression management to the field of psychology during the 1960s and extended it to include people's attempts to control others' impression of their personal characteristics.[32]His work sparked an increased attention towards impression management as a fundamental interpersonal process.
The concept of self is important to the theory of impression management as the images people have of themselves shape and are shaped by social interactions.[33]Our self-concept develops from social experience early in life.[34]Schlenker (1980) further suggests that children anticipate the effect that their behaviours will have on others and how others will evaluate them. They control the impressions they might form on others, and in doing so they control the outcomes they obtain from social interactions.

Social identity refers to how people are defined and regarded in social interactions.[35]Individuals use impression management strategies to influence the social identity they project to others.[34]The identity that people establish influences their behaviour in front of others, others' treatment of them and the outcomes they receive. Therefore, in their attempts to influence the impressions others form of themselves, a person plays an important role in affecting his social outcomes.[36]

Social interaction is the process by which we act and react to those around us. In a nutshell, social interaction includes those acts people perform toward each other and the responses they give in return.[37]The most basic function of self-presentation is to define the nature of a social situation (Goffman, 1959). Most social interactions are very role governed. Each person has a role to play, and the interaction proceeds smoothly when these roles are enacted effectively. People also strive to create impressions of themselves in the minds of others in order to gain material and social rewards (or avoid material and social punishments).[38]
Understanding how one's impression management behavior might be interpreted by others can also serve as the basis for smoother interactions and as a means for solving some of the most insidious communication problems among individuals of different racial/ethnic and gender backgrounds (Sanaria, 2016).[1][39]
"People are sensitive to how they are seen by others and use many forms of impression management to compel others to react to them in the ways they wish" (Giddens, 2005, p. 142). An example of this concept is easily illustrated through cultural differences. Different cultures have diverse thoughts and opinions on what is consideredbeautiful or attractive. For example, Americans tend to findtan skinattractive, but in Indonesian culture,pale skinis more desirable.[40]It is also argued that Women in India use different impression management strategies as compared to women in western cultures (Sanaria, 2016).[1]
Another illustration of how people attempt to control how others perceive them is the clothing they wear. A person in a leadership position strives to be respected, and dresses in order to control and maintain that impression. This illustration can also be adapted to a cultural scenario. The clothing people choose to wear says a great deal about the person and the culture they represent. For example, most Americans are not overly concerned with conservative clothing; most are content with tee shirts, shorts, and showing skin. The exact opposite is true on the other side of the world: "Indonesians are both modest and conservative in their attire" (Cole, 1997, p. 77).[40]
One way people shape their identity is through sharing photos on social media platforms. The ability to modify photos with certain technologies, such as Photoshop, helps them achieve their idealized images.[41]
Companies use cross-cultural training (CCT) to facilitate effective cross-cultural interaction. CCT can be defined as any procedure used to increase an individual's ability to cope with and work in a foreign environment. Training employees in culturally consistent and specific impression management (IM) techniques provides an avenue for the employee to consciously switch from an automatic, home-culture IM mode to an IM mode that is culturally appropriate and acceptable. Second, training in IM reduces the uncertainty of interaction with foreign nationals and increases the employee's ability to cope by reducing unexpected events.[39]
Impression management theory can also be used in health communication. It can be used to explore how professionals 'present' themselves when interacting on hospital wards and also how they employ front stage and backstage settings in their collaborative work.[42]
In the hospital wards, Goffman's front stage and backstage performances are divided into 'planned' and 'ad hoc' rather than 'official' and 'unofficial' interactions.[42]
Results show that interprofessional interactions in this setting are often based less on planned front stage activities than on ad hoc backstage activities. While the former may, at times, help create and maintain an appearance of collaborative interprofessional 'teamwork', conveying a sense of professional togetherness in front of patients and their families, they often serve little practical function. These findings have implications for designing ways to improve interprofessional practice on acute hospital wards where there is no clearly defined interprofessional team, but rather a loose configuration of professionals working together in a collaborative manner around a particular patient. In such settings, interventions that aim to improve ad hoc as well as planned forms of communication may be more successful than those intended only to improve planned communication.[42]
The hyperpersonal model of computer-mediated communication (CMC) posits that users exploit the technological aspects of CMC in order to enhance the messages they construct to manage impressions and facilitate desired relationships. The most interesting aspect of the advent of CMC is how it reveals basic elements of interpersonal communication, bringing into focus fundamental processes that occur as people meet and develop relationships relying on typed messages as the primary mechanism of expression. "Physical features such as one's appearance and voice provide much of the information on which people base first impressions face-to-face, but such features are often unavailable in CMC. Various perspectives on CMC have suggested that the lack of nonverbal cues diminishes CMC's ability to foster impression formation and management, or argued impressions develop nevertheless, relying on language and content cues. One approach that describes the way that CMC's technical capacities work in concert with users' impression development intentions is the hyperpersonal model of CMC (Walther, 1996). As receivers, CMC users idealize partners based on the circumstances or message elements that suggest minimal similarity or desirability. As senders, CMC users selectively self-present, revealing attitudes and aspects of the self in a controlled and socially desirable fashion. The CMC channel facilitates editing, discretion, and convenience, and the ability to tune out environmental distractions and re-allocate cognitive resources in order to further enhance one's message composition. Finally, CMC may create dynamic feedback loops wherein the exaggerated expectancies are confirmed and reciprocated through mutual interaction via the bias-prone communication processes identified above."[43]
According to O'Sullivan's (2000) impression management model of communication channels, individuals will prefer to use mediated channels rather than face-to-face conversation in face-threatening situations. Within his model, this trend is due to channel features that allow for control over exchanged social information. Later work extended O'Sullivan's model by explicating information control as a media affordance, arising from channel features and social skills, that enables an individual to regulate and restrict the flow of social information in an interaction, and presented a scale to measure it. One dimension of the information control scale, expressive information control, positively predicted channel preference for recalled face-threatening situations. This effect remained after controlling for social anxiousness and power relations in relationships. O'Sullivan's model argues that some communication channels may help individuals manage this struggle and therefore be more preferred as those situations arise. It is based on the assumption that channels with features that allow fewer social cues, such as reduced nonverbal information or slower exchange of messages, afford an individual a better ability to manage the flow of complex, ambiguous, or potentially difficult conversations.[44]Individuals manage what information about them is known, or isn't known, to control others' impressions of them. Anyone who has given the bathroom a quick cleaning when anticipating the arrival of their mother-in-law (or a date) has managed their impression. For an example from information and communication technology use, inviting someone to view a person's webpage before a face-to-face meeting may predispose them to view the person a certain way when they actually meet.[3]
The impression management perspective offers potential insight into how corporate stories could build the corporate brand, by influencing the impressions that stakeholders form of the organization. The link between themes and elements of corporate stories and IM strategies/behaviours indicates that these elements will influence audiences' perceptions of the corporate brand.[45]
Corporate storytelling is suggested to help demonstrate the importance of the corporate brand to internal and external stakeholders, and create a position for the company against competitors, as well as help a firm to bond with its employees (Roper and Fill, 2012). The corporate reputation is defined as a stakeholder's perception of the organization (Brown et al., 2006), and Dowling (2006) suggests that if the story causes stakeholders to perceive the organization as more authentic, distinctive, expert, sincere, powerful, and likeable, then it is likely that this will enhance the overall corporate reputation.
Impression management theory is a relevant perspective to explore the use of corporate stories in building the corporate brand. The corporate branding literature notes that interactions with brand communications enable stakeholders to form an impression of the organization (Abratt and Keyn, 2012), and this indicates that IM theory could also therefore bring insight into the use of corporate stories as a form of communication to build the corporate brand. Exploring the IM strategies/behaviors evident in corporate stories can indicate the potential for corporate stories to influence the impressions that audiences form of the corporate brand.[45]
Firms use more subtle forms of influencing outsiders' impressions of firm performance and prospects, namely by manipulating the content and presentation of information in corporate documents with the purpose of "distort[ing] readers' perceptions of corporate achievements" [Godfrey et al., 2003, p. 96]. In the accounting literature this is referred to as impression management. The opportunity for impression management in corporate reports is increasing. Narrative disclosures have become longer and more sophisticated over the last few years. This growing importance of descriptive sections in corporate documents provides firms with the opportunity to overcome information asymmetries by presenting more detailed information and explanation, thereby increasing their decision-usefulness. However, it also offers an opportunity for presenting financial performance and prospects in the best possible light, thus having the opposite effect. In addition to the increased opportunity for opportunistic discretionary disclosure choices, impression management is also facilitated in that corporate narratives are largely unregulated.[citation needed]
The medium of communication influences the actions taken in impression management. Self-efficacy can differ according to whether the attempt to convince somebody is made through face-to-face interaction or by e-mail.[24]Communication via devices like telephone, e-mail or chat is governed by technical restrictions, so that the way people express personal features can change. This often shows how far people will go.
The affordances of a certain medium also influence the way a user self-presents.[46]Communication via a professional medium such as e-mail would result in professional self-presentation.[47]The individual would use greetings, correct spelling, grammar and capitalization as well as scholastic language. Personal communication mediums such as text-messaging would result in a casual self-presentation where the user shortens words, includes emojis and selfies, and uses less academic language.
Another example of impression management theory in play is present in today's world of social media. Users are able to create a profile and share whatever they like with their friends, family, or the world. Users can choose to omit negative life events and highlight positive events if they so please.[48]
Social media usage among American adults grew from 5% in 2005 to 69% in 2018.[49]Facebook is the most popular social media platform, followed by Instagram, LinkedIn, and Twitter.[49]
Social networking users will employ protective self-presentations for image management. Users will use subtractive and repudiate strategies to maintain a desired image.[50]Subtractive strategy is used to untag an undesirable photo on Social Networking Sites. In addition to un-tagging their name, some users will request the photo to be removed entirely. Repudiate strategy is used when a friend posts an undesirable comment about the user. In response to an undesired post, users may add another wall post as an innocence defense. Michael Stefanone states that "self-esteem maintenance is an important motivation for strategic self-presentation online."[50]Outside evaluations of their physical appearance, competence, and approval from others determines how social media users respond to pictures and wall posts. Unsuccessful self-presentation online can lead to rejection and criticism from social groups. Social media users are motivated to actively participate in SNS from a desire to manage their online image.[51]
Online social media presence often varies with respect to users' age, gender, and body weight. While men and women tend to use social media in comparable degrees, both uses and capabilities vary depending on individual preferences as well as perceptions of power or dominance.[52]In terms of performance, men tend to display characteristics associated with masculinity as well as more commanding language styles.[52]In much the same way, women tend to present feminine self-depictions and engage in more supportive language.[52]
With respect to usage across age variances, many children develop digital and social media literacy skills around 7 or 8 and begin to form online social relationships via virtual environments designed for their age group.[52]The years between thirteen and fifteen demonstrate high social media usage that begins to become more balanced with offline interactions as teens learn to navigate both their online and in-person identities which may often diverge from one another.[52]
Social media platforms often provide a great degree of social capital during the college years and later.[52]College students are motivated to use Facebook for impression management, self-expression, entertainment, communication and relationship maintenance.[53]College students sometimes rely on Facebook to build a favorable online identity, which contributes to greater satisfaction with campus life.[53]In building an online persona, college students sometimes engage in identity manipulation, including altering personality and appearance, to increase their self-esteem and appear more attractive to peers.[54]Since risky behavior is frequently deemed attractive by peers, college students often use their social media profiles to gain approval by highlighting instances of risky behavior, like alcohol use and unhealthy eating.[55]Users present risky behavior as signs of achievement, fun, and sociability, participating in a form of impression management aimed at building recognition and acceptance among peers.[55]During middle adulthood, users tend to display greater levels of confidence and mastery in their social media connections, while older adults tend to use social media for educational and supportive purposes.[52]These myriad factors influence how users form and communicate their online personas. In addition, TikTok has influenced college students and adults to create their own self-image on a social media platform. A positive aspect is that college students and adults use it to build their own brand for business and entertainment purposes, giving them a chance to pursue stardom and build an audience for revenue.[56]Media fatigue is a negative effect caused by maintaining a social media presence, and social anxiety stemming from low self-esteem can strain the self-identity of those in the media limelight before targeted audiences.[57]
According to Marwick, social profiles create implications such as "context collapse" for presenting oneself to the audience. The concept of 'context collapse' suggests that social technologies make it difficult to vary self-presentation based on environment or audience. "Large sites such as Facebook and Twitter group friends, family members, coworkers, and acquaintances together under the umbrella term 'friends'."[58]In a way, this context collapse is aided by a notion of performativity as characterized by Judith Butler.
Impression management is also influential in the political spectrum. "Political impression management" was coined in 1972 by sociologist Peter M. Hall, who defined the term as the art of making a candidate look electable and capable (Hall, 1972). This is due in part to the importance of appearing "presidential": appearance, image, and narrative are a key part of a campaign, and thus impression management has always been a huge part of winning an election (Katz 2016). Social media has evolved to be part of the political process, thus political impression management is becoming more challenging as the online image of the candidate often now lies in the hands of the voters themselves.
The evolution of social media has increased the way in which political campaigns are targeting voters and how influential impression management is when discussing political issues and campaigns.[59]Political campaigns continue to use social media as a way to promote their campaigns and share information about who they are to make sure to lead the conversation about their political platform.[60]Research has shown that political campaigns must create clear profiles for each candidate in order to convey the right message to potential voters.[61]
In professional settings, impression management is usually primarily focused on appearing competent,[62]but also involves constructing and displaying an image of oneself that others find socially desirable and believably authentic.[63][64]People manage impressions by their choice of dress, dressing either more or less formally, and this impacts perceptions their coworkers and supervisors form.[65]The process includes a give and take; the person managing their impression receives feedback as the people around them interact with the self they are presenting and respond, either favorably or negatively.[64]Research has shown impression management to be impactful in the workplace because the perceptions co-workers form of one another shape their relationships and indirectly influence their ability to function well as teams and achieve goals together.[66]
In their research on impression management among leaders, Peck and Hogue define "impression management as conscious or unconscious, authentic or inauthentic, goal-directed behavior individuals engage in to influence the impression others form of them in social interactions."[66]Using those three dimensions, labelled "automatic" vs. "controlled", "authentic" vs. "inauthentic", and "pro-self" vs. "pro-social", Peck and Hogue formed a typology of eight impression management archetypes.[66]They suggest that while no one archetype stands out as the sole correct or ideal way to practice impression management as a leader, types rooted in authenticity and pro-social goals, rather than self-focused goals, create the most positive perceptions among followers.[66]
Impression management strategies employed in the workplace also involve deception, and the ability to recognize deceptive acts impacts the supervisor-subordinate relationship as well as coworker relationships.[67]When it comes to workplace behaviors,ingratiationis the major focus of impression management research.[68]Ingratiation behaviors are those that employees engage in to elicit a favorable impression from a supervisor.[69][70]These behaviors can have a negative or positive impact on coworkers and supervisors, and this impact is dependent on how ingratiating is perceived by the target and those who observe the ingratiating behaviors.[69][70]The perception that follows an ingratiation act is dependent on whether the target attributes the behavior to the authentic-self of the person performing the act, or to impression management strategies.[71]Once the target is aware that ingratiation is resulting from impression management strategies, the target will perceive ethical concerns regarding the performance.[71]However, if the target attributes the ingratiation performance to the actor's authentic-self, the target will perceive the behavior as positive and not have ethical concerns.[71]
Workplace leaders that are publicly visible, such as CEOs, also perform impression management with regard to stakeholders outside their organizations. In a study comparing online profiles of North American and European CEOs, research showed that while education was referenced similarly in both groups, profiles of European CEOs tended to be more professionally focused, while North American CEO profiles often referenced the CEO's public life outside business dealings, including social and political stances and involvement.[62]
Employees also engage in impression management behaviors to conceal or reveal personal stigmas. How these individuals approach the disclosure of their stigma(s) impacts coworkers' perceptions of the individual, as well as the individual's perception of themselves, and thus affects likeability amongst coworkers and supervisors.[72]
On a smaller scale, many individuals choose to participate in professional impression management beyond the sphere of their own workplace. This may take place through informal networking (either face-to-face or using computer-mediated communication) or channels built to connect professionals, such as professional associations or job-related social media sites like LinkedIn.
Impression management can distort the results of empirical research that relies on interviews and surveys, a phenomenon commonly referred to as "social desirability bias". Impression management theory nevertheless constitutes a field of research on its own.[73]When it comes to practical questions concerning public relations and the way organizations should handle their public image, the assumptions provided by impression management theory can also provide a framework.[74]
An examination of different impression management strategies acted out by individuals facing criminal trials, where the trial outcomes could range from a death sentence to life in prison or acquittal, has been reported in the forensic literature.[75]The Perri and Lichtenwald article examined female psychopathic killers, who as a group were highly motivated to manage the impression that attorneys, judges, mental health professionals and, ultimately, a jury had of the murderers and the murders they committed. It provides legal case illustrations of the murderers combining and/or switching from one impression management strategy, such as ingratiation or supplication, to another as they worked towards their goal of diminishing or eliminating any accountability for the murders they committed.
Since the 1990s, researchers in the area of sport and exercise psychology have studied self-presentation. Concern about how one is perceived has been found to be relevant to the study of athletic performance. For example, anxiety may be produced when an athlete is in the presence of spectators. Self-presentational concerns have also been found to be relevant to exercise. For example, the concerns may elicit motivation to exercise.[76]
More recent research investigating the effects of impression management on social behaviour showed that social behaviours (e.g. eating) can serve to convey a desired impression to others and enhance one's self-image. Research on eating has shown that people tend to eat less when they believe that they are being observed by others.[77]
|
https://en.wikipedia.org/wiki/Impression_management
|
Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the network, forming a peer-to-peer network of nodes.[1]In addition, a personal area network (PAN) is also by nature a type of decentralized peer-to-peer network, typically between two devices.[2]
Peers make a portion of their resources, such as processing power, disk storage, or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts.[3]Peers are both suppliers and consumers of resources, in contrast to the traditional client–server model in which the consumption and supply of resources are divided.[4]
While P2P systems had previously been used in many application domains,[5]the architecture was popularized by the Internet file sharing system Napster, originally released in 1999.[6]P2P is used in many protocols such as BitTorrent file sharing over the Internet[7]and in personal networks like Miracast displaying and Bluetooth radio.[8]The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general.
While P2P systems had previously been used in many application domains,[5]the concept was popularized by file sharing systems such as the music-sharing application Napster. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems".[9]The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1.[10]
Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was also more open than the present day: two machines connected to the Internet could send packets to each other without firewalls and other security measures.[11][9][page needed]This contrasts with the broadcasting-like structure of the web as it has developed over the years.[12][13][14]As a precursor to the Internet, ARPANET was a successful peer-to-peer network where "every participating node could request and serve content". However, ARPANET was not self-organized, and it could not "provide any means for context or content-based routing beyond 'simple' address-based routing."[14]
Later, Usenet, a distributed messaging system that is often described as an early peer-to-peer architecture, was established. It was developed in 1979 as a system that enforces a decentralized model of control.[15]The basic model is a client–server model from the user or client perspective that offers a self-organizing approach to newsgroup servers. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of email clients and their direct connections is strictly a client–server relationship.[16]
In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster.[14]Napster was the beginning of peer-to-peer networks as we know them today, where "participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions".[14]
A peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network.[17]This model of network arrangement differs from the client–server model where communication is usually to and from a central server. A typical example of a file transfer that uses the client–server model is the File Transfer Protocol (FTP) service, in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests.
Peer-to-peer networks generally implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network.[18]Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers can communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent from the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured (or as a hybrid between the two).[19][20][21]
Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other.[22](Gnutella, Gossip, and Kazaa are examples of unstructured P2P protocols.)[23]
Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay.[24]Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of "churn"—that is, when large numbers of peers are frequently joining and leaving the network.[25][26]
However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to reach as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses more CPU/memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content it manages, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers, so any peer searching for it is likely to succeed. But if a peer is looking for rare data shared by only a few other peers, the search is highly unlikely to be successful.[27]
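A toy simulation makes the cost of flooding visible. The topology, node degree, and TTL values below are arbitrary assumptions chosen for illustration, not measurements of any real network:

```python
# A minimal sketch of TTL-limited query flooding in an unstructured overlay.
# It shows the two problems named above: message count grows quickly, and
# rare items may still not be found.
import random

random.seed(1)
N = 200
peers = {i: random.sample([j for j in range(N) if j != i], 4) for i in range(N)}
holders = set(random.sample(range(N), 2))       # rare item: only 2 peers have it

def flood(origin, ttl):
    """Return (found?, number of query messages sent)."""
    frontier, visited, messages = {origin}, {origin}, 0
    for _ in range(ttl):
        nxt = set()
        for p in frontier:
            for q in peers[p]:
                messages += 1                   # every forwarded query costs traffic
                if q not in visited:
                    visited.add(q)
                    nxt.add(q)
        if visited & holders:
            return True, messages
        frontier = nxt
    return False, messages

print(flood(origin=0, ttl=3))   # often (False, ...) for rare items
print(flood(origin=0, ttl=6))   # higher TTL: many more messages, still no guarantee
```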
Instructured peer-to-peer networksthe overlay is organized into a specific topology, and the protocol ensures that any node can efficiently[28]search the network for a file/resource, even if the resource is extremely rare.[23]
The most common type of structured P2P networks implement adistributed hash table(DHT),[4][29]in which a variant ofconsistent hashingis used to assign ownership of each file to a particular peer.[30][31]This enables peers to search for resources on the network using ahash table: that is, (key,value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key.[32][33]
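The ownership rule at the heart of consistent hashing can be sketched in a few lines. The node names and keys below are invented, and real DHTs such as Chord layer routing tables on top of this basic ring, but the key-to-node assignment works as shown:

```python
# A minimal consistent-hashing sketch: node IDs and keys are hashed onto the
# same circular space, and each key is owned by the first node clockwise from
# its position.
import hashlib
from bisect import bisect_right

def h(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        """First node clockwise from the key's hash (wrapping around)."""
        i = bisect_right(self.points, (h(key), ""))
        return self.points[i % len(self.points)][1]

ring = Ring(["node-A", "node-B", "node-C", "node-D"])
for key in ["song.mp3", "paper.pdf", "video.mkv"]:
    print(key, "->", ring.owner(key))
# Adding or removing one node only reassigns the keys adjacent to it on the
# ring, which is what makes moderate churn survivable at all.
```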
However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors[34] that satisfy specific criteria. This makes them less robust in networks with a high rate of churn (i.e. with large numbers of nodes frequently joining and leaving the network).[26][35] More recent evaluations of P2P resource discovery solutions under real workloads have pointed out several issues with DHT-based solutions, such as the high cost of advertising/discovering resources and static and dynamic load imbalance.[36]
Notable distributed networks that use DHTs include Tixati (an alternative to BitTorrent's distributed tracker), the Kad network, the Storm botnet, and YaCy. Prominent research projects include the Chord project, Kademlia, the PAST storage utility, P-Grid (a self-organized and emerging overlay network), and the CoopNet content distribution system.[37] DHT-based networks have also been widely utilized for efficient resource discovery[38][39] in grid computing systems, as they aid in resource management and the scheduling of applications.
Hybrid models combine peer-to-peer and client–server models.[40] A common hybrid model is to have a central server that helps peers find each other. Spotify was an example of a hybrid model until 2014.[41] There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by pure unstructured peer-to-peer networks. Currently, hybrid models perform better than either pure unstructured or pure structured networks because certain functions, such as searching, require centralized functionality but benefit from the decentralized aggregation of nodes that unstructured networks provide.[42]
CoopNet (Cooperative Networking) was a proposed system for off-loading serving to peers who have recently downloaded content, proposed by computer scientists Venkata N. Padmanabhan and Kunwadee Sripanidkulchai, working at Microsoft Research and Carnegie Mellon University.[43][44] When a server experiences an increase in load, it redirects incoming peers to other peers who have agreed to mirror the content, thus off-loading the server. All of the information is retained at the server. This system exploits the fact that the bottleneck is more likely in outgoing bandwidth than in the CPU, hence its server-centric design. It assigns peers to other peers that are 'close in IP' (e.g. in the same prefix range) in an attempt to exploit locality. If multiple peers are found with the same file, the node is directed to choose the fastest of its neighbors. Streaming media is transmitted by having clients cache the previous stream and then transmit it piece-wise to new nodes.
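The redirection logic described above might be sketched roughly as follows. The load threshold and the /16 "same prefix" locality rule are assumptions made for illustration, not CoopNet's actual parameters:

```python
# A rough sketch of redirect-based load shedding in the style described above:
# an overloaded server hands new clients off to recent downloaders whose IP
# shares a prefix with theirs. All values here are hypothetical.
from collections import defaultdict

recent_downloaders = defaultdict(list)   # content -> [peer IPs that have it]
LOAD_LIMIT = 100                         # hypothetical capacity threshold

def prefix(ip: str) -> str:
    return ".".join(ip.split(".")[:2])   # crude locality: same /16

def handle_request(client_ip, content, current_load):
    if current_load <= LOAD_LIMIT:
        recent_downloaders[content].append(client_ip)
        return ("SERVE", None)           # server serves and remembers the peer
    nearby = [p for p in recent_downloaders[content]
              if prefix(p) == prefix(client_ip)]
    if nearby:
        return ("REDIRECT", nearby[-1])  # hand off to a nearby mirror peer
    return ("SERVE", None)               # no suitable peer: serve anyway

print(handle_request("10.1.2.3", "clip.avi", current_load=10))   # ('SERVE', None)
print(handle_request("10.1.9.9", "clip.avi", current_load=500))  # ('REDIRECT', '10.1.2.3')
```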
Peer-to-peer systems pose unique challenges from acomputer securityperspective. Like any other form ofsoftware, P2P applications can containvulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable toremote exploits.[45]
Since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", or denial of service attacks. Examples of common routing attacks include "incorrect lookup routing", whereby malicious nodes deliberately forward requests incorrectly or return false results; "incorrect routing updates", where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information; and "incorrect routing network partition", where new nodes that bootstrap via a malicious node are placed in a partition of the network populated by other malicious nodes.[45]
The prevalence of malware varies between different peer-to-peer protocols.[46] Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on the gnutella network contained some form of malware, whereas only 3% of the content on OpenFT contained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% on gnutella, and 65% on OpenFT). Another study analyzing traffic on the Kazaa network found that 15% of a 500,000-file sample were infected by one or more of the 365 different computer viruses tested for.[47]
Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on the FastTrack network, the RIAA managed to introduce faked chunks into downloads and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing.[48] Consequently, the P2P networks of today have seen an enormous increase in their security and file-verification mechanisms. Modern hashing, chunk verification, and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.[49]
The decentralized nature of P2P networks increases robustness because it removes thesingle point of failurethat can be inherent in a client–server based system.[50]As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, but not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down.
There are both advantages and disadvantages in P2P networks related to the topic of databackup, recovery, and availability. In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces.
For example, YouTube has been pressured by the RIAA, MPAA, and the entertainment industry to filter out copyrighted content. Because server–client networks can monitor and manage content availability, they tend to offer more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable for sharing unpopular files, because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point.[51]
In a P2P network, the community of users is entirely responsible for deciding which content is available. Unpopular files eventually disappear and become unavailable as fewer people share them. Popular files, however, are widely and easily distributed. Popular files on a P2P network are more stable and available than files on central networks. In a centralized network, a simple loss of connection between the server and clients can cause a failure, but in P2P networks, the connections between every node must be lost to cause a data-sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its own backup system. Because of the lack of central authority in P2P networks, forces such as the recording industry, RIAA, MPAA, and the government are unable to delete or stop the sharing of content on P2P systems.[52]
In P2P networks, clients both provide and use resources. This means that, unlike client–server systems, the content-serving capacity of peer-to-peer networks can actually increase as more users begin to access the content (especially with protocols such as BitTorrent that require users to share; see a performance measurement study[53]). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor.[54][55]
Peer-to-peer file sharing networks such as Gnutella, G2, and the eDonkey network have been useful in popularizing peer-to-peer technologies. These advancements have paved the way for peer-to-peer content delivery networks and services, including distributed caching systems like Correli Caches, to enhance performance.[56] Peer-to-peer networks have also facilitated software publication and distribution, enabling efficient sharing of Linux distributions and various games through file-sharing networks.
Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts with copyright law.[57] Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd.[58] In the latter case, the Court unanimously held that defendant peer-to-peer file sharing companies Grokster and Streamcast could be sued for inducing copyright infringement.
TheP2PTVandPDTPprotocols are used in various peer-to-peer applications. Someproprietarymultimedia applications leverage a peer-to-peer network in conjunction with streaming servers to stream audio and video to their clients.Peercastingis employed for multicasting streams. Additionally, a project calledLionShare, undertaken byPennsylvania State University, MIT, andSimon Fraser University, aims to facilitate file sharing among educational institutions globally. Another notable program,Osiris, enables users to create anonymous and autonomous web portals that are distributed via a peer-to-peer network.
Dat is a distributed version-controlled publishing platform. I2P is an overlay network used to browse the Internet anonymously. Unlike the related I2P, the Tor network is not itself peer-to-peer[dubious–discuss]; however, it can enable peer-to-peer applications to be built on top of it via onion services. The InterPlanetary File System (IPFS) is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia, with nodes in the IPFS network forming a distributed file system. Jami is a peer-to-peer chat and SIP app. JXTA is a peer-to-peer protocol designed for the Java platform. Netsukuku is a wireless community network designed to be independent from the Internet. Open Garden is a connection-sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth.
Resilio Sync is a directory-syncing app. Research includes projects such as the Chord project, the PAST storage utility, P-Grid, and the CoopNet content distribution system. Secure Scuttlebutt is a peer-to-peer gossip protocol capable of supporting many different types of applications, primarily social networking. Syncthing is also a directory-syncing app. Tradepal and M-commerce applications are designed to power real-time marketplaces. The U.S. Department of Defense is conducting research on P2P networks as part of its modern network warfare strategy.[59] In May 2003, Anthony Tether, then director of DARPA, testified that the United States military uses P2P networks. WebTorrent is a P2P streaming torrent client in JavaScript for use in web browsers, as well as in the WebTorrent Desktop standalone version that bridges WebTorrent and BitTorrent serverless networks. Microsoft, in Windows 10, uses a proprietary peer-to-peer technology called "Delivery Optimization" to deploy operating system updates using end-users' PCs, either on the local network or from other PCs; according to Microsoft's Channel 9, this led to a 30–50% reduction in Internet bandwidth usage.[60] Artisoft's LANtastic was built as a peer-to-peer operating system where machines can function as both servers and workstations simultaneously. Hotline Communications' Hotline Client was built with decentralized servers and tracker software dedicated to any type of file and continues to operate today. Cryptocurrencies are peer-to-peer-based digital currencies that use blockchains.
Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources. But in current practice, P2P networks often contain large numbers of users who utilize resources shared by other nodes, but who do not share anything themselves (often referred to as the "freeloader problem").
Freeloading can have a profound impact on the network and in some cases can cause the community to collapse.[61] In these types of networks "users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance".[62] Studying the social attributes of P2P networks is challenging due to high population turnover, asymmetry of interest, and zero-cost identity.[62] A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources.[63][45]
Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today's P2P systems should be seen both as a goal and a means for self-organized virtual communities to be built and fostered.[64]Ongoing research efforts for designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction.
Some peer-to-peer networks (e.g.Freenet) place a heavy emphasis onprivacyandanonymity—that is, ensuring that the contents of communications are hidden from eavesdroppers, and that the identities/locations of the participants are concealed.Public key cryptographycan be used to provideencryption,data validation, authorization, and authentication for data/messages.Onion routingand othermix networkprotocols (e.g. Tarzan) can be used to provide anonymity.[65]
Perpetrators oflive streaming sexual abuseand othercybercrimeshave used peer-to-peer platforms to carry out activities with anonymity.[66]
Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer over its involvement in the sharing of copyrighted material. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over issues surrounding copyright law.[57] Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd.[58] In both cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material.
To establish criminal liability for copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willingly for the purpose of personal financial gain or commercial advantage.[67] Fair use exceptions allow limited use of copyrighted material to be downloaded without acquiring permission from the rights holders; these uses are typically news reporting, research, and scholarly work. Controversies have developed over the concern of illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or what users are connected to the network at a given time. The trustworthiness of sources is a potential security threat in peer-to-peer systems.[68]
A study ordered by theEuropean Unionfound that illegal downloadingmaylead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites. Pains were taken to remove effects of false and misremembered responses.[69][70][71]
Peer-to-peer applications present one of the core issues in the network neutrality controversy. Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high-bandwidth usage.[72] Compared to Web browsing, e-mail, or many other uses of the Internet, where data is transferred only in short intervals and in relatively small quantities, P2P file-sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007, Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such as BitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic.
Critics point out that P2P networking has legitimate legal uses, and that this is another way that large providers are trying to control use and content on the Internet and direct people towards a client–server-based application architecture. The client–server model creates financial barriers to entry for small publishers and individuals, and can be less efficient for sharing large files. As a reaction to this bandwidth throttling, several P2P applications started implementing protocol obfuscation, such as the BitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random.[73] The ISPs' response to the high bandwidth usage is P2P caching, where an ISP stores the parts of files most accessed by P2P clients in order to reduce its upstream Internet traffic.
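The core obfuscation idea, hiding deterministic byte sequences, can be illustrated with a toy keystream XOR. This is not the actual BitTorrent Message Stream Encryption scheme (which negotiates a key between the peers before applying a stream cipher); it only shows why fixed-pattern matching on packet contents stops working:

```python
# A toy illustration of protocol obfuscation: XOR the payload with a keyed
# pseudorandom keystream so that fixed protocol headers no longer appear on
# the wire in recognizable form.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def obfuscate(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

handshake = b"\x13BitTorrent protocol"          # the telltale plaintext header
wire = obfuscate(b"shared-secret", handshake)   # looks like random bytes
print(wire.hex())
print(obfuscate(b"shared-secret", wire) == handshake)  # XOR is its own inverse -> True
```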
Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work."[74]If the research cannot be reproduced, then the opportunity for further research is hindered. "Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments."[74]
Popular simulators that were widely used in the past are NS2, OMNeT++, SimPy, NetLogo, PlanetLab, ProtoPeer, QTM, PeerSim, ONE, P2PStrmSim, PlanetSim, GNUSim, and Bharambe.[75]
Work has also been done using the open-source ns-2 network simulator; for example, one research issue related to free-rider detection and punishment has been explored using ns-2.[76]
|
https://en.wikipedia.org/wiki/Peer-to-peer#Security_and_trust
|
Reputation launderingoccurs when a person or an organization conceals unethical, corrupt, or criminal behavior or other forms of controversy by performing highly visible positive actions with the intent to improve their reputation and obscure their history.
Reputation laundering can include gestures such as donating to charities, sponsoring sports teams, or joining prominent associations.
One of the first uses of the phrase "reputation laundering" was in 1996, in the bookThe United Nations and Transnational Organized Crime, which defined it as "the process of acquiring respectability in a new environment".[1][2]
An early use of the phrase in mass media was in 2010, in aGuardianarticle headlined "PR firms make London world capital of reputation laundering", a report which focused on the use ofpublic relations(PR) firms by heads of state (includingSaudi Arabia,Rwanda,Kazakhstan, andSri Lanka) to obscure human rights abuses and corruption.[3]
The phrase was in common use by 2016 when it was used byTransparency Internationalin their report "Paradise Lost: Ending the UK's role as a safe haven for corrupt individuals, their allies and assets". In that report, they defined reputation laundering as "the process of concealing the corrupt actions, past or present, of an individual, government or corporate entity, and presenting their character and behaviour in a positive light."[4]
The phrase "reputation laundering" is aplayon the older phrase "money laundering".
Reputation laundering includes activities such as hiring public relations firms, involvement in professional sports, and charitable giving, described in the sections below.[4][5]
Reputation laundering activities are sometimes delegated to professional public relations (PR) firms. Techniques employed by PR firms on behalf of purportedly corrupt or criminal clients include fake social media accounts, blogs by fake personalities, and partisan op-eds.[6]
The British public relations firmBell Pottingeris noted for using PR techniques for reputation laundering, supporting clients such asAlexander Lukashenko,Bahrain, and thePinochet Foundation.[7]
Public relations firmsHavas,Publicis, andQorviswere hired by Saudi Arabia to perform reputation laundering after9/11and theAssassination of Jamal Khashoggi.[5]
Involvement in professional sports, by sponsorship or ownership, is a prominent activity used for reputation laundering. Examples include the creation ofFormula 1car races in Qatar and Saudi Arabia,[8]ownership ofChelsea F.C.byRoman Abramovich,[9]and ownership of theNewcastle United football clubby Saudi Arabian investors.[10]
Reputation laundering often involves charitable donations from the persons attempting to improve their reputation. One study found that Russian oligarchs had donated between $372 million and $435 million to charitable institutions in the United States.[11]Charities and non-profits that received funds, according to a database compiled by David Szakonyi and Casey Michel includeMIT,Brandeis University,Mayo Clinic,John F. Kennedy Center for the Performing Arts,New York University,Brookings Institution,Harvard University, and theNew York Museum of Modern Art.[11][12]
The Sackler family is particularly notable for charitable donations that aim to repair its reputation, which was heavily damaged by the family's role in the opioid crisis. Since 2009, the family has donated over £170m to art institutions in the United Kingdom.[13] The family's philanthropy has been characterized as "reputation laundering" from profits acquired from the selling of opiates.[14][15] The Sackler family name, as used in institutions to which the family has donated, saw increased scrutiny in the late 2010s over the family's association with OxyContin. David Crow, writing in the Financial Times, described the family name as "tainted" (cf. Tainted donors).[16][17] In March 2019, the National Portrait Gallery and the Tate galleries announced that they would not accept further donations from the family. This came after the American photographer Nan Goldin threatened to withdraw a planned retrospective of her work from the National Portrait Gallery if the gallery accepted a £1 million donation from a Sackler fund.[18][19] In June 2019, NYU Langone Medical Center announced it would no longer accept donations from the Sacklers, and has since changed the name of the Sackler Institute of Graduate Biomedical Sciences to the Vilcek Institute of Graduate Biomedical Sciences.[20] Later in 2019, the American Museum of Natural History, the Solomon R. Guggenheim Museum, and the Metropolitan Museum of Art in New York each announced they would not accept future donations from any Sacklers who were involved in Purdue Pharma.[21] In 2022, the British Museum announced that it would rename the Raymond and Beverly Sackler Rooms and the Raymond and Beverly Sackler Wing as part of "development of the new masterplan", and that it "made this decision together through collaborative discussions" with the Sackler Foundation.[22]

Prior to the collapse of his cryptocurrency exchange, FTX, entrepreneur Sam Bankman-Fried did not have known wrongdoing to launder, but is alleged to have used (among other means) "a value system of utilitarian idealism ... not orientated toward money"[23]—for example, promising to give away 99 percent of his fortune[24]—which led to investors letting "down their due diligence guard."[23]
Alex Shephard, writing in the New Republic, asserts that generalMark Milley—chairman of theJoint Chiefs of Staff—engaged in self-beneficial reputation laundering when he acted as a primary source for news media, providing media with inside accounts of events in the Trump White House.[25]Milley's intention, according to some analysts, was to prompt major media outlets to rehabilitate his reputation (tainted by the association with theTrump administration) and in exchange, Milley provided insider information to the media.[26]
Arwa Mahdawi, columnist for The Guardian, characterizedRudy Giuliani's appearance onThe Masked Singeras reputation laundering.[27]
The United Kingdom government produced a report in 2020 analyzing the activities of Russian oligarchs in the United Kingdom.[28] The report states that the oligarchs had been "extending patronage and building influence across a wide sphere of the British establishment" and had employed public relations firms that were "willing beneficiaries, contributing to a 'reputation laundering' process".[29]
A notable example of Russian oligarchs participating in reputation laundering isViatcheslav Moshe Kantor, who donated £9 million toKing Edward VII's Hospital, a facility used by the UKroyal familyand patronized by the queen. The donation came under scrutiny after Kantor was placed under sanctions during the2022 invasion of Ukraine, and the hospital removed Kantor's name from a wing of the hospital.[30]
According to the National Endowment for Democracy, kleptocrats from the former Portuguese colony of Angola used Portugal as a base for reputation laundering. In particular, Isabel dos Santos, the billionaire daughter of Angola's president, was alleged to engage in reputation laundering. Examples of reputation laundering by Angolan elites included participation in high-profile social events and the promotion of philanthropic endeavors.[5]
According to the National Endowment for Democracy, aristocrats of the United Arab Emirates engaged in reputation laundering when they established a partnership with France's Louvre museum and created the Louvre Abu Dhabi.[5]
The public relations firm Qorvis was hired by Saudi Arabia to improve its image in the wake of the September 11 attacks, paying the company $14.7 million between March and September 2002.[31] Qorvis engaged in a PR frenzy that publicized the 9/11 Commission finding that there was "no evidence that the Saudi government as an institution or senior Saudi officials individually funded Al Qaeda", while omitting the report's conclusion that "Saudi Arabia has been a problematic ally in combating Islamic extremism".[32][33] Qorvis's Petruzzello told The Washington Post that the work was not about "lobbying" but "educating" the public and policy makers.[34]
Qorvis also had a lead role in shaping media coverage of the widely criticized Saudi-led attack on Yemen in 2015. This included the creation of the website operationrenewalofhope.com and helping Saudi officials gain access to US media.[35] One example of the latter is a Newsweek article in which the Saudi foreign minister claims that, far from "supporting violent extremism", his country has actually shown "leadership in combating terrorism".[36]
Qorvis has also been employed by Saudi Arabia to repair its image and reputation followingthe Kingdom's assassination of journalist Jamal Khashoggi, receiving $18.8 million from October 2018 through July 2019[37]and signing three additional contracts with the Kingdom in spring 2019.[38]
|
https://en.wikipedia.org/wiki/Reputation_laundering
|
The reputation marketing field has evolved from the marriage of the fields reputation management and brand marketing, and involves a brand's reputation being vetted online in real time by consumers leaving online reviews and citing experiences on social networking sites. With the popularity of social media in the new millennium, reputation vetting has moved from word of mouth to digital platforms, forcing businesses to take active measures to stay competitive and profitable.
A study done by Nielsen in 2012 suggests that 70% of consumers trust online reviews (15% more than in 2008), second only to personal recommendations.[1] This gives credibility to the social proof theory, most famously studied by Muzafer Sherif and highlighted as one of the six principles of persuasion by Robert Cialdini. The increasing number of review websites such as Yelp and ConsumerAffairs attracted the attention of Harvard Business School, which conducted a study of online reviews and their effects on restaurants. The study found that a one-star increase in Yelp rating leads to a 5–7% increase in restaurant revenue, with a major impact on local restaurants and a lesser impact on big chains.[2] A similar study conducted at UC Berkeley reports that a half-star improvement on a five-star rating could make it 30–49% more likely that a restaurant will sell out its evening seats.[3]
Reputation marketing is often associated with reputation management and is seen as a means of handling negative reviews. However, reputation marketing differs in that it also seeks to manage positive feedback as a way to attract new customers.[4] Reputation marketing takes a proactive approach toward a brand's presence.[5] Reputation marketing is not a new strategy: the Better Business Bureau has been around since 1912 and is one of the most notable and well-known consumer review organizations.[6] With the surplus of social media review sites available to the average consumer, businesses are forced to closely monitor their reputations and find new and creative ways to use social media to stay competitive in today's economy.[4]
Online reviews have a tremendous influence on consumers' purchases since they can read evaluations and opinions of the items they are considering.Amazonwas the first company to invite consumers to post reviews on the internet[7]and many others have since done the same. The average customer finds social media more trustworthy than brand-generated marketing making social media more effective than television commercials, advertising signs, and internet banners at drawing potential consumers; however, reviews by people the consumer does not know are only 2% as effective.[8]
[Chart omitted: the most viewed review websites of 2017, according to Alexa.[9]]
A business's online reputation can have a critical impact on its success or failure, with more than 3 out of every 4 people preferring positively reviewed businesses over negatively reviewed ones. The impact of negative reviews may even affect a business's ability to secure financial assistance, as banks and other financial institutions check a company's online ratings as part of the application process.[10] In today's public, highly social environment, a business can plausibly be affected by what people say online about it, its owner, or its employees.[11]
Reputation marketing and building a good online reputation are critically important. However, they are not stand-alone growth strategies. Reputation marketing yields the most positive returns when coupled with other online and offline marketing efforts, since the effectiveness of these efforts is increased by a good reputation. The popularity of smartphones has made it almost essential for businesses to be mobile-friendly, with click-to-call, click-to-map, and instant review options readily available.[12] Although the food industry seems to be most impacted by online reviews, experts predict that doctors, contractors, surgeons, accountants, and many other local business owners will see more and more online reviews due to changes in search engines.[13]
The benefits of online platforms on the economy are supported by predictions of economic theory, with most consumers preferring convenience, buyer options, and free access to information.[14] Product ratings and reviews are an important factor in how consumers choose products and services; product ratings (usually in stars) attract customers, while personal reviews have the greater impact on the actual buying decision.[citation needed] Spending increases more than 30% with companies that have positive reviews, while companies with negative reviews may face a substantial drop in consumer traffic.[15] Research conducted in the United Kingdom by Barclays, examining how greater responsiveness of businesses to the growing volume of customer feedback could improve business performance, suggests that economic output could grow by an additional 0.07% between 2016 and 2026. This effect could increase the economic output of the United Kingdom by £555 million ($747 million) per year over the average growth rate by 2026.[16]
|
https://en.wikipedia.org/wiki/Reputation_marketing
|
Asmear campaign, also referred to as asmear tacticor simply asmear, is an effort to damage or call into question someone'sreputation, by propounding negativepropaganda.[1]It makes use ofdiscrediting tactics. It can be applied to individuals or groups. Common targets are public officials,politicians,heads of state,political candidates,activists, celebrities (especially those who are involved in politics), and ex-spouses. The term also applies in other contexts, such as the workplace.[2]The termsmear campaignbecame popular around the year 1936.[3]
A smear campaign is an intentional, premeditated effort to undermine an individual's or group's reputation, credibility, andcharacter.[4]Likenegative campaigning, most often smear campaigns target government officials, politicians, political candidates, and other public figures.[5]However, public relations campaigns might also employ smear tactics in the course of managing an individual or institutional brand to target competitors and potential threats.[6]Discrediting tactics are used to discourage people from believing in the figure or supporting their cause, such as the use ofdamaging quotations.
Smear tactics differ from normal discourse or debate in that they do not bear upon the issues or arguments in question. A smear is a simple attempt to malign a group or an individual with the aim of undermining their credibility.
Smears often consist ofad hominemattacks in the form of unverifiable rumors anddistortions,half-truths, or even outrightlies; smear campaigns are often propagated bygossip magazines. Even when the facts behind a smear campaign are demonstrated to lack proper foundation, the tactic is often effective because the target's reputation is tarnished before the truth is known.
Smear campaigns can also be used as a campaign tactic associated with tabloid journalism, a type of journalism that presents little well-researched news and instead uses eye-catching headlines, scandal-mongering, and sensationalism. For example, during Gary Hart's 1988 presidential campaign (see below), the New York Post reported on its front page, in big, black block letters: "GARY: I'M NO WOMANIZER."[7][8]
Smears are also effective in diverting attention away from the matter in question and onto a specific individual or group. The target of the smear typically must focus on correcting thefalse informationrather than on the original issue.
Deflection has been described as awrap-up smear: "You make up something. Then you have the press write about it. And then you say, everybody is writing about this charge".[9]
In the U.S. judicial system, discrediting tactics (calledwitness impeachment) are the approved method for attacking the credibility of any witness in court, including aplaintiffordefendant. In cases with significantmass mediaattention or high-stakes outcomes, those tactics often take place in public as well.
Logically, an argument is held in discredit if the underlying premise is found "so severely in error that there is cause to remove the argument from the proceedings because of its prejudicial context and application...". Mistrial proceedings in civil and criminal courts do not always require that an argument brought by the defense or prosecution be discredited; however, appellate courts must consider the context and may discredit testimony as perjurious or prejudicial, even if the statement is technically true.
Smear tactics are commonly used to undermine effective arguments or critiques.
During the 1856 presidential election,John C. Frémontwas the target of a smear campaign alleging that he was aCatholic, among other accusations. The campaign was designed to undermine support for Fremont from those who weresuspicious of Catholics.[10]
Ralph Naderwas the victim of a smear campaign during the 1960s, when he was campaigning for car safety. In order to smear Nader and deflect public attention from his campaign,General Motorsengaged private investigators to search for damaging or embarrassing incidents from his past. In early March 1966, several media outlets, includingThe New RepublicandThe New York Times, reported that GM had tried to discredit Nader, hiring private detectives totap his phonesand investigate his past and hiring prostitutes to trap him in compromising situations.[11][12]Nader sued the company forinvasion of privacyand settled the case for $284,000. Nader's lawsuit against GM was ultimately decided by theNew York Court of Appeals, whose opinion in the case expandedtort lawto cover "overzealous surveillance."[13]Nader used the proceeds from the lawsuit to start the pro-consumer Center for Study of Responsive Law.
Gary Hart was the target of a smear campaign during the 1988 US presidential campaign. The New York Post once reported on its front page, in big, black block letters: "GARY: I'M NO WOMANIZER."[7][8]
In 2011, China launched a smear campaign againstApple, including TV and radio advertisements and articles in state-run papers. The campaign failed to turn the Chinese public against the company and its products.[14]
Chris Bryant, a British parliamentarian, accused Russia in 2012 of orchestrating a smear campaign against him because of his criticism ofVladimir Putin.[15]In 2017 he alleged that other British officials are vulnerable to Russian smear campaigns.[16][17]
In 2024, The New York Times reported on an alleged smear campaign conducted against actress Blake Lively after she accused Justin Baldoni of misconduct.[18] The campaign allegedly pushed negative stories about Lively and used social media to boost those stories. In January 2025, Baldoni filed suit in the federal District Court for the Southern District of New York against Lively, her husband Ryan Reynolds, and her publicist, seeking $400 million in damages and alleging civil extortion, defamation, and a slew of contract-related claims.[19]
In January 2007, it was revealed that an anonymous website that attacked critics ofOverstock.com, including media figures and private citizens on message boards, was operated by an official of Overstock.com.[20][21]
In 2023, The New Yorker reported that Mohamed bin Zayed was paying millions of euros to a Swiss firm, Alp Services, to orchestrate a smear campaign defaming the Emirates' targets, including Qatar and the Muslim Brotherhood. As part of this 'dark PR' effort, Alp posted false and defamatory Wikipedia entries about the targets, and the Emirates also paid the Swiss firm to publish propaganda articles against them. Multiple meetings took place between the Alp Services head Mario Brero and an Emirati official, Matar Humaid al-Neyadi; however, Alp's bills were sent directly to MbZ. The defamation campaign also targeted an American, Hazim Nada, and his firm, Lord Energy, because his father Youssef Nada had joined the Muslim Brotherhood as a teenager.[22]
|
https://en.wikipedia.org/wiki/Smear_campaign
|
Asock puppet,sock puppet account, or simplysockis a false online identity used for deceptive purposes.[1]The term originally referred to ahand puppet made from a sock. Sock puppets include online identities created to praise, defend, or support a person or organization,[2]to manipulate public opinion,[3]or to circumvent restrictions such as viewing a social media account that a user is blocked from. Sock puppets are unwelcome in many online communities and forums.
The practice of writing pseudonymous self-reviews began before the Internet. WritersWalt WhitmanandAnthony Burgesswrote pseudonymous reviews of their own books,[4]as didBenjamin Franklin.[5]
TheOxford English Dictionarydefines the term without reference to the internet, as "a person whose actions are controlled by another; a minion" with a 2000 citation fromU.S. News & World Report.[6]
Wikipediahas had a long history of problems with sockpuppetry. On October 21, 2013, theWikimedia Foundation(WMF) condemned paid advocacy sockpuppeting on Wikipedia and, two days later on October 23, specifically bannedWiki-PR editing of Wikipedia.[7]In August and September 2015, the WMF uncovered another group of sockpuppets known asOrangemoody.[8]
One reason for sockpuppeting is to circumvent a block, ban, or other form of sanction imposed on the person's original account.[9]
Sockpuppets may be created during an online poll to increase the puppeteer's votes. A related usage is the creation of multiple identities, each supporting the puppeteer's views in an argument, attempting to position the puppeteer as representing majority opinion and sideline opposition voices. In the abstract theory ofsocial networksandreputation systems, this is known as aSybil attack.[10]
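A few lines of code show why free identities can swing an online poll. The numbers below are invented; the point is only that vote weight tracks accounts, not people:

```python
# A minimal sketch of a Sybil attack on a poll: identities are free, so one
# puppeteer's influence scales with the accounts they create.
honest_votes = {"option_A": 40, "option_B": 35}

sybil_accounts = [f"sock_{i}" for i in range(30)]   # one person, 30 identities
for _ in sybil_accounts:
    honest_votes["option_B"] += 1

print(honest_votes)   # {'option_A': 40, 'option_B': 65}: minority view now "wins"
# Defenses (rate limits, identity verification, reputation) all try to make
# each additional identity cost something, which is the crux of Sybil resistance.
```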
A sockpuppet-like use of deceptive fake identities is used instealth marketing. The stealth marketer creates one or more pseudonymous accounts, each claiming to be a different enthusiastic supporter of the sponsor's product, book or ideology.[11]
Astrawman sockpuppet(sometimes abbreviated asstrawpuppet) is afalse flagpseudonym created to make a particular point of view look foolish or unwholesome in order to generate negative sentiment against it. Strawman sockpuppets typically behave in an unintelligent, uninformed, orbigotedmanner, advancing "straw man" arguments that their puppeteers can easily refute. The intended effect is to discredit more rational arguments made for the same position.[12]
Such sockpuppets behave in a similar manner toInternet trolls. A particular case is theconcern troll, a false flag pseudonym created by a user whose actual point of view is opposed to that of the sockpuppet. The concern troll posts in web forums devoted to its declared point of view and attempts to sway the group's actions or opinions while claiming to share their goals, but with professed "concerns". The goal is to sowfear, uncertainty and doubt(FUD) within the group.[citation needed]
Some sources have used the termmeatpuppetas a synonym for sock puppet,[13][14][15]thoughmeatpuppetis more commonly accepted[by whom?]to be an account that is run by a person other than the puppeteer, yet used to accomplish the same goals as a typical sock puppet.[citation needed]
A number of techniques have been developed to determine whether accounts are sockpuppets, including comparing the IP addresses of suspected sockpuppets and comparative analysis of their writing styles.[16] Using GeoIP, it is possible to look up the IP addresses and locate them.[17]
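Writing-style comparison can be sketched with a simple stylometric measure, such as cosine similarity over character trigram counts. The example texts and any threshold one might apply are invented; real investigations combine many signals rather than relying on a single score:

```python
# A minimal stylometric sketch: compare two accounts' posts by cosine
# similarity of character trigram frequencies.
from collections import Counter
from math import sqrt

def trigrams(text: str) -> Counter:
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

post_1 = "Frankly, the so-called experts have no idea what they're talking about."
post_2 = "Frankly, these so-called reviewers have no idea what they're discussing."

score = cosine(trigrams(post_1), trigrams(post_2))
print(f"{score:.2f}")   # high similarity is a hint, never proof, of a single author
```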
In 2006, Missouri resident Lori Drew created a MySpace account purporting to be operated by a fictitious 16-year-old boy named Josh Evans. He began an online relationship with Megan Meier, a 13-year-old girl who had allegedly been in conflict with Drew's daughter. After "Josh Evans" ended the relationship with Meier, the latter died by suicide.[18][19]
In 2008, Thomas O'Brien,United States Attorneyfor theCentral District of California, charged Drew, then 49, with four felony counts: one count of conspiracy to violate theComputer Fraud and Abuse Act(CFAA), which prohibits "accessing a computer without authorization viainterstate commerce", and three counts of violation of the CFAA, alleging she violated MySpace's terms of service by misrepresenting herself. O'Brien justified his prosecution of the case because MySpace's servers were located in his jurisdiction. The jury convicted Drew of three misdemeanor counts, dismissing one on the grounds prosecutors had failed to demonstrate Drew inflicted emotional distress on Meier.[20][21]
During sentencing arguments, prosecutors argued for the maximum sentence for the statute: three years in prison and a fine of $300,000. Drew's lawyers argued her use of a false identity did not constitute unauthorized access to MySpace, citingPeople v. Donell, a 1973breach of contractdispute, in which a court of appeals ruled "fraudulently induced consent is consent nonetheless."[22]JudgeGeorge H. Wudismissed the charges before sentencing.[23]
In 2010, 50-year-old lawyer Raphael Golb was convicted on 30 criminal charges, includingidentity theft, criminal impersonation, and aggravated harassment, for using multiple sockpuppet accounts to attack and impersonate historians he perceived as rivals of his father,Norman Golb.[24]Golb defended his actions as "satirical hoaxes" protected by free-speech rights. He was disbarred and sentenced to six months in prison, but the sentence was reduced to probation on appeal.[25]
In 2014, a Florida state circuit court held that sock puppetry istortious interferencewith business relations and awarded injunctive relief against it during the pendency of litigation. The court found that "the act of falsifying multiple identities" is conduct that should be enjoined. It explained that the conduct was wrongful "not because the statements are false or true, but because the conduct of making up names of persons who do not exist to post fake comments by fake people to support Defendants' position tortiously interferes with Plaintiffs' business" and such "conduct is inherently unfair."[26]
The court, therefore, ordered the defendants to "remove or cause to be removed all postings creating the false impression that more [than one] person are commenting on the program th[an] actually exist." The court also found, however, that the comments of the defendants "which do not create a false impression of fake patients or fake employees, or fake persons connected to program (those posted under their respective names) are protected by The Constitution of the United States of America, First Amendment."[26]
In 2007, the CEO ofWhole Foods,John Mackey, was discovered to have posted as "Rahodeb" on theYahoo!Finance Message Board, extolling his own company and predicting a dire future for its rival,Wild Oats Markets, while concealing his relationship to both companies. Whole Foods argued that none of Mackey's actions broke the law.[27][28]
During the 2007 trial ofConrad Black, chief executive ofHollinger International, prosecutors alleged that he had posted messages on a Yahoo! Finance chat room using the name "nspector", attackingshort sellersand blaming them for his company's stock performance. Prosecutors provided evidence of these postings inBlack's criminal trial, where he was convicted of mail fraud and obstruction. The postings were raised at multiple points in the trial.[27]
Anamazon.comcomputer glitch in 2004 revealed the names of many authors who had written pseudonymous reviews of their books.John Rechy, who wrote the best-selling novelCity of Night(1963), was among the authors unmasked in this way, and was shown to have written numerous five-star reviews of his own work.[4]In 2010, historianOrlando Figeswas found to have written Amazon reviews under the names "orlando-birkbeck" and "historian", praising his own books and criticizing those of historiansRachel PolonskyandRobert Service. The two sued Figes and won monetary damages.[29][30]
During a panel discussion at a British crime writers' festival in 2012, author Stephen Leather admitted using pseudonyms to praise his own books, claiming that "everyone does it". He spoke of building a "network of characters", some operated by his friends, who discussed his books and had conversations with him directly.[31] The same year, after being pressured on Twitter by the spy novelist Jeremy Duns, who had detected possible indications online, UK crime fiction writer R.J. Ellory admitted having used a pseudonymous account to write a positive review of each of his own novels, as well as negative reviews of two other authors.[32][33]
David Manningwas a fictitiousfilm critic, created by a marketing executive working forSony Corporationto give consistently good reviews for releases from Sony subsidiaryColumbia Pictures, which could then be quoted in promotional material.[34]
American reporterMichael Hiltzikwas temporarily suspended from posting to his blog, "The Golden State", on theLos Angeles Timeswebsite after he admitted "posting there, as well as on other sites, under false names." He used the pseudonyms to attack conservatives such asHugh Hewittand L.A. prosecutor Patrick Frey—who eventually exposed him.[35][36]Hiltzik's blog at theLA Timeswas the newspaper's first blog. While suspended from blogging, Hiltzik continued to write regularly for the newspaper.
Lee Siegel, a writer forThe New Republicmagazine, was suspended for defending his articles and blog comments under the username "Sprezzatura". In one such comment, "Sprezzatura" defended Siegel's bad reviews ofJon Stewart: "Siegel is brave, brilliant and wittier than Stewart will ever be."[37][38]
In late November 2020,TYT Networkreported an example of awhite maleRepublican PartyDonald Trumpvoter having a sockpuppetTwitteraccount presented as that of a blackgayman, criticizingJoe Bidenand praising Trump while systematically emphasizing his race and sexual orientation. In October 2020, aClemson Universitysocial media researcher identified "more than two dozen of Twitter accounts claiming to be black Trump supporters who gained hundreds of thousands of likes and retweets in a span of just a few days, sparking major doubts about their identities," many using photos of black men from news reports or stock images "including one in which the text 'black man photo' was still watermarked on the image".[39]
As an example ofstate-sponsored Internet sockpuppetry, in 2011, a US company calledNtrepidwas awarded a $2.76 million contract fromU.S. Central Commandfor "online persona management" operations[40]to create "fake online personas to influence net conversations and spread U.S. propaganda" in Arabic, Persian, Urdu and Pashto[40]as part ofOperation Earnest Voice.
On September 11, 2014, a number of sockpuppet accounts reported an explosion at a chemical plant in Louisiana. The reports came on a range of media, including Twitter and YouTube, but U.S. authorities claimed the entire event to be a hoax. The information was determined by many to have originated with a Russian government-sponsored sockpuppet management office in Saint Petersburg, called theInternet Research Agency.[41]Russia was again implicated by the U.S. intelligence community in 2016 for hiring trolls in the2016 United States presidential election.[42]
TheInstitute of Economic Affairsclaimed in a 2012 paper that the United Kingdom government and the European Union fund charities that campaign and lobby for causes the government supports. In one example, 73% of responses to a government consultation were the direct result of campaigns by alleged "sockpuppet" organizations.[43]
|
https://en.wikipedia.org/wiki/Sock_puppet_account
|
In public relations and politics, spin is a form of propaganda, achieved through knowingly providing a biased interpretation of an event. While traditional public relations and advertising may manage their presentation of facts, "spin" often implies the use of disingenuous, deceptive, and manipulative tactics.[1]
Because of the frequent association between spin andpress conferences(especiallygovernmentpress conferences), the room in which these conferences take place is sometimes described as a "spin room".[2]Public relationsadvisors, pollsters andmedia consultantswho develop deceptive or misleading messages may be referred to as "spin doctors" or "spinmeisters".
A standard tactic used in "spinning" is to reframe or modify the perception of an issue or event to reduce any negative impact it might have on public opinion. For example, a company whose top-selling product is found to have a significant safety problem may "reframe" the issue by criticizing the safety of its main competitor's products or by highlighting the risk associated with the entire product category. This might be done using a "catchy"sloganorsound bitethat can help to persuade the public of the company's biasedpoint of view. This tactic could enable the company to refocus the public's attention away from the negative aspects of its product.
Spinning is typically a service provided by paid media advisors and media consultants. The largest and most powerful companies may have in-house employees and sophisticated units with expertise in spinning issues. While spin is often considered to be a private-sector tactic, in the 1990s and 2000s some politicians and political staff were accused of using deceptive "spin" tactics to manipulate or deceive the public. Spin may include "burying" potentially negative new information by releasing it at the end of the workday on the last day before a long weekend; selectivelycherry-pickingquotes from previous speeches made by their employer or an opposing politician to give the impression that they advocate a certain position; or purposelyleakingmisinformationabout an opposing politician or candidate that casts them in a negative light.[3]
Edward Bernays has been called the "Father of Public Relations". Bernays helped tobacco and alcohol companies make consumption of their products more socially acceptable, and he was proud of his work as a propagandist.[4] Throughout the 1990s, the use of spin by politicians and parties accelerated, especially in the United Kingdom; the emergence of 24-hour news increased the pressure on journalists to provide nonstop content, which was further intensified by the competitive nature of British broadcasters and newspapers, and content quality declined under the techniques that 24-hour news outlets and political parties adopted to handle the increased demand.[5] This led journalists to rely more heavily on the public relations industry as a source for stories and on advertising revenue as a profit source, making them more susceptible to spin.[6]
Spin in the United Kingdom began to break down with the high-profile resignations of the architects of spin within the New Labour government, with Charlie Whelan resigning as Gordon Brown's spokesman in 1999 and Alastair Campbell resigning as Tony Blair's Press Secretary in 2003.[3][7] As information technology has advanced since the end of the 20th century, commentators like Joe Trippi have advanced the theory that modern Internet activism spells the end for political spin, in that the Internet may reduce the effectiveness of spin by providing immediate counterpoints.[8]
Spin doctors can either command media attention or remain anonymous. Examples from the UK include Jamie Shea, during his time as NATO's press secretary throughout the Kosovo War, Charlie Whelan, and Alastair Campbell.[6]
Campbell, previously a journalist before becoming Tony Blair's Press Secretary, was the driving force behind a government that was able to place the message it wanted in the media. He played a key role in important decisions, with advisors viewing him as a 'Deputy Prime Minister' inseparable from Blair.[9] Campbell describes how, during a meeting in July 1995, he was able to spin Rupert Murdoch into positively reporting an upcoming Blair speech, gathering the support of The Sun and The Times, popular British newspapers.[10] Campbell later acknowledged that his and the government's spinning had contributed to the electorate's growing distrust of politicians, and he asserted that spin must cease.[11]
"Spin doctors" such as Shea praised and respected Campbell's work. In 1999, during the beginning of NATO's intervention in Kosovo, Shea's media strategy was non-existent before the arrival of Campbell and his team. Campbell taught Shea how to organise his team to deliver what he wanted to be in the media, which led to Shea being appreciated for his work by PresidentBill Clinton.[9]
Some spin techniques include:
For years, businesses have used fake or misleading customer testimonials, editing or "spinning" customer feedback to reflect a much more satisfied experience than was actually the case. In 2009, the Federal Trade Commission updated its rules to include measures prohibiting this type of "spinning" and has since been enforcing them.[14]
The extent of the impact of "spin doctors" is contested, though their presence is still recognized in the political environment. The 1997 general election in the United Kingdom saw a landslide victory for New Labour, with a 10.3% swing from Conservative to Labour, helped by newspapers such as The Sun, towards which Alastair Campbell focused his spinning tactics because he greatly valued its support.[15] The famous newspaper headline 'The Sun Backs Blair' was a key turning point in the campaign, giving New Labour confidence and hope of increased electoral support.[16] The change in political alignment had an impact on the electorate: a study by Ladd and Lenz found that the share of readers voting Labour rose by 19.4% among readers of newspapers that switched allegiance, compared with only 10.8% among those who did not read switching newspapers.[17]
|
https://en.wikipedia.org/wiki/Spin_(propaganda)
|
The Streisand effect is an unintended consequence of attempts to hide, remove, or censor information, where the effort instead increases public awareness of the information.
The term was coined by Mike Masnick after Barbra Streisand attempted in 2003 to suppress the publication of a photograph showing her clifftop residence in Malibu, taken to document coastal erosion in California, inadvertently drawing far greater attention to the previously obscure photograph.[1][2][3]
Attempts to suppress information are often made through cease-and-desist letters, but instead of being suppressed, the information sometimes receives extensive publicity, as well as the creation of media such as videos and spoof songs, which can be mirrored on the Internet or distributed on file-sharing networks.[4][5] In addition, seeking or obtaining an injunction to prohibit something from being published or to remove something that is already published can lead to increased publicity of the published work.
The Streisand effect is an example of psychological reactance, wherein once people are aware that some information is being kept from them, they are significantly more motivated to acquire and spread it.[6]
The Streisand effect has been observed in relation to the right to be forgotten, the right in some jurisdictions to have private information about a person removed from internet searches and other directories under some circumstances, as a litigant attempting to remove information from search engines risks the litigation itself being reported as valid, current news.[7][8][9][10][11]
The phenomenon is well known in Chinese culture, expressed by the chengyu "wishing to cover, more conspicuous" (欲蓋彌彰, pinyin: Yù gài mí zhāng).[12]
In 2003, the American singer and actress Barbra Streisand sued the photographer Kenneth Adelman and Pictopia.com for US$50 million for violation of privacy.[13][14][15] The lawsuit sought to remove "Image 3850", an aerial photograph in which Streisand's mansion was visible, from the publicly available California Coastal Records Project of 12,000 California coastline photographs. As the project's goal was to document coastal erosion to influence government policymakers, privacy concerns of homeowners were deemed to be of minor or no importance.[4][16][17][18][19] The lawsuit was dismissed and Streisand was ordered to pay Adelman's $177,000 in attorney fees.[13][20][21][22][23] "Image 3850" had been downloaded only six times prior to Streisand's lawsuit, two of those being by Streisand's attorneys;[24] public awareness of the case led to more than 420,000 people visiting the site over the following month.[25]
Two years later, Mike Masnick of Techdirt named the effect after the Streisand incident when writing about Marco Beach Ocean Resort's takedown notice to urinal.net (a site dedicated to photographs of urinals) over its use of the resort's name.[26][27]
How long is it going to take before lawyers realize that the simple act of trying to repress something they don't like online is likely to make it so that something that most people would never, ever see (like a photo of a urinal in some random beach resort) is now seen by many more people? Let's call it the Streisand Effect.
In her 2023 autobiography My Name Is Barbra, Streisand, citing security problems with intruders, wrote:[29]
My issue was never with the photo ... it was only about the use of my name attached to the photo. I felt I was standing up for a principle, but in retrospect, it was a mistake. I also assumed that my lawyer had done exactly as I wished and simply asked to take my name off the photo.
According to Vanity Fair, "she... didn't want her name to be publicized with [the photo], for security reasons."[30] Since the controversy, Streisand has published numerous detailed photos of the property on social media and in her 2010 book, My Passion For Design.[13]
The French intelligence agency DCRI's attempt to delete the French Wikipedia article about the military radio station of Pierre-sur-Haute[31] resulted in the restored article temporarily becoming the most-viewed page on the French Wikipedia.[32]
In October 2020, the New York Post published emails from a laptop owned by Hunter Biden, the son of then Democratic presidential nominee Joe Biden, detailing an alleged corruption scheme.[33] After internal discussion that debated whether the story may have originated from Russian misinformation and propaganda, Twitter blocked the story from its platform and locked the accounts of those who shared a link to the article, including the New York Post's own Twitter account and that of White House Press Secretary Kayleigh McEnany, among others.[34] Researchers at MIT cited the increase from 5,500 shares every 15 minutes to about 10,000 shares shortly after Twitter censored the story as evidence of the Streisand effect nearly doubling the attention the story received.[35] Twitter removed the ban the following day.
In April 2007, a group of companies that used Advanced Access Content System (AACS) encryption issued cease-and-desist letters demanding that the system's 128-bit (16-byte) numerical key (represented in hexadecimal as 09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0) be removed from several high-profile websites, including Digg. With the numerical key and some software, it was possible to decrypt the video content on HD DVDs. This led to the key's proliferation across other sites and chat rooms in various formats, with one commentator describing it as having become "the most famous number on the Internet".[36] Within a month, the key had been reprinted on over 280,000 pages, printed on T-shirts and tattoos, published as a book, and appeared on YouTube in a song played over 800,000 times.[37]
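The "128-bit (16-byte)" description follows directly from the hexadecimal form: each pair of hex digits encodes one byte, so the 32 hex digits above make 16 bytes, or 128 bits. A minimal Python sketch of this arithmetic (purely illustrative; it performs no decryption and is not part of any AACS software):

    # The published processing key, as quoted above.
    key_hex = "09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0"
    key_bytes = bytes.fromhex(key_hex)    # bytes.fromhex() ignores ASCII spaces
    assert len(key_bytes) == 16           # 16 bytes
    assert len(key_bytes) * 8 == 128      # = 128 bits
    print(key_bytes.hex(" ").upper())     # round-trips to the published form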
In September 2009, the multinational oil company Trafigura obtained a super-injunction in a British court to prevent The Guardian newspaper from reporting on an internal Trafigura investigation into the 2006 Ivory Coast toxic waste dump scandal. A super-injunction prevents reporting on even the existence of the injunction. Using parliamentary privilege, Labour MP Paul Farrelly referred to the super-injunction in a parliamentary question, and on October 12, 2009, The Guardian reported that it had been gagged from reporting on the parliamentary question, in violation of the Bill of Rights 1689.[38][39][40] Blogger Richard Wilson correctly identified the blocked question as referring to the Trafigura waste dump scandal, after which The Spectator suggested the same. Not long after, Trafigura began trending on Twitter, helped along by Stephen Fry's retweeting the story to his followers.[41] Twitter users soon tracked down all details of the case, and by October 16, the super-injunction had been lifted and the report published.[42]
On 11 March 2025, the book Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism by Sarah Wynn-Williams was published. It details the author's experiences working at Facebook (now Meta) and explores the company's internal culture, decision-making processes, and role in reshaping global events. Meta responded by seeking relief from an emergency international arbitral tribunal, which enjoined Wynn-Williams "from making orally, in writing, or otherwise any disparaging, critical or otherwise detrimental comments to any person or entity concerning [Meta], its officers, directors, or employees".[43][44] Macmillan, the UK publisher, later issued a statement saying that it would ignore the ruling.[43] The book reached number one on the New York Times bestseller list by 20 March 2025.[45] Meta described the book as "a mix of out-of-date and previously reported claims about the company and false accusations about [its] executives".[45]
In January 2008, the Church of Scientology's attempts to get Internet websites to delete a video of Tom Cruise speaking about Scientology resulted in the creation of the protest movement Project Chanology.[46][47][48]
On December 5, 2008, the Internet Watch Foundation (IWF) added the English Wikipedia article about the 1976 Scorpions album Virgin Killer to a child pornography blacklist, considering the album's cover art "a potentially illegal indecent image of a child under the age of 18".[46] The article quickly became one of the most popular pages on the site,[49] and the publicity surrounding the IWF action resulted in the image being spread across other sites.[50]
The IWF was later reported on the BBC News website to have said "IWF's overriding objective is to minimise the availability of indecent images of children on the Internet, however, on this occasion our efforts have had the opposite effect".[51] This effect was also noted by the IWF in its statement about the removal of the URL from the blacklist.[52][53]
In May 2011, Premier League footballer Ryan Giggs sued Twitter after a user revealed that Giggs was the subject of an anonymous privacy injunction (informally referred to as a "super-injunction")[54] that prevented the publication of details regarding an alleged affair with model and former Big Brother contestant Imogen Thomas.
A blogger for the Forbes website observed that the British media, which were banned from breaking the terms of the injunction, had mocked the footballer for not understanding the effect.[55] Dan Sabbagh of The Guardian subsequently posted a graph detailing, without naming the player, the number of references to the player's name against time, showing a large spike following the news that the player was seeking legal action.[56]
In 2013, a BuzzFeed article showcasing photos from the Super Bowl contained several photos of Beyoncé making unflattering poses and faces, resulting in her publicist contacting BuzzFeed via email and requesting the removal of the images.[57] In response to the email, BuzzFeed republished the images, which subsequently became much more well known across the internet.[58]
In December 2022, Twitter CEO Elon Musk banned the Twitter account @elonjet, a bot that reported his private jet's movements based on public-domain flight data,[59] citing concerns about his family's safety.[60] The ban drew further media coverage and public attention to Musk's comments on allowing free speech across the Twitter platform.[61][62] Musk received further criticism after banning several journalists who had referred to the "ElonJet" account or been critical of Musk in the past.[63]
In November 2024, Canadian rapper Drake filed a lawsuit against Universal Music Group over the popular Kendrick Lamar song "Not Like Us", a diss track aimed at Drake. Music industry insiders have described the lawsuit as having a Streisand effect: in its wake, the song's sales increased by 440% and it surged back up several charts.[64]
|
https://en.wikipedia.org/wiki/Streisand_effect
|
Defamation is a communication that injures a third party's reputation and causes a legally redressable injury. The precise legal definition of defamation varies from country to country. It is not necessarily restricted to making assertions that are falsifiable, and can extend to concepts that are more abstract than reputation, like dignity and honour.
In the English-speaking world, the law of defamation traditionally distinguishes between libel (written, printed, posted online, published in mass media) and slander (oral speech). It is treated as a civil wrong (tort, delict), as a criminal offence, or both.[1][2][3][4]
Defamation and related laws can encompass a variety of acts, from general defamation and insult, as applicable to every citizen, to specialized provisions covering specific entities and social structures.[5]
Defamation law has a long history stretching back to classical antiquity. While defamation has been recognized as an actionable wrong in various forms across historical legal systems and in various moral and religious philosophies, defamation law in contemporary legal systems can primarily be traced back to Roman and early English law.
Roman law was aimed at giving sufficient scope for the discussion of a man's character, while it protected him from needless insult and pain. The remedy for verbal defamation was long confined to a civil action for a monetary penalty, which was estimated according to the significance of the case, and which, although punitive in its character, doubtless included practically the element of compensation. But a new remedy was introduced with the extension of the criminal law, under which many kinds of defamation were punished with great severity. At the same time increased importance attached to the publication of defamatory books and writings, the libri or libelli famosi, from which is derived the modern use of the word libel; and under the later emperors the latter term came to be specially applied to anonymous accusations or pasquils, the dissemination of which was regarded as particularly dangerous, and visited with very severe punishment, whether the matters contained in them were true or false.
The Praetorian Edict, codified circa AD 130, declared that an action could be brought for shouting at someone contrary to good morals: "qui, adversus bonos mores convicium cui fecisse cuiusve opera factum esse dicitur, quo adversus bonos mores convicium fieret, in eum iudicium dabo."[6] In this case, the offence was constituted by the unnecessary act of shouting. According to Ulpian, not all shouting was actionable. Drawing on the argument of Labeo, he asserted that the offence consisted in shouting contrary to the morals of the city ("adversus bonos mores huius civitatis") something apt to bring in disrepute or contempt ("quae... ad infamiam vel invidiam alicuius spectaret") the person exposed thereto.[7] Any act apt to bring another person into disrepute gave rise to an actio iniuriarum.[8] In such a case the truth of the statements was no justification for the public and insulting manner in which they had been made, but, even in public matters, the accused had the opportunity to justify his actions by openly stating what he considered necessary for public safety to be denounced by the libel and proving his assertions to be true.[9] The second head included defamatory statements made in private, and in this case the offence lay in the content of the imputation, not in the manner of its publication. The truth was therefore a sufficient defence, for no man had a right to demand legal protection for a false reputation.
In Anglo-Saxon England, whose legal tradition is the predecessor of contemporary common law jurisdictions, slander was punished by cutting out the tongue.[10] Historically, while defamation of a commoner in England was known as libel or slander, the defamation of a member of the English aristocracy was called scandalum magnatum, literally "the scandal of magnates".[11]
Following the Second World War and with the rise of contemporary international human rights law, the right to a legal remedy for defamation was included in Article 17 of the United Nations International Covenant on Civil and Political Rights (ICCPR), which provides that no one shall be subjected to unlawful attacks on their honour and reputation, and that everyone has the right to the protection of the law against such attacks.
This implies a right to legal protection against defamation; however, this right co-exists with the right to freedom of opinion and expression under Article 19 of the ICCPR as well as Article 19 of the Universal Declaration of Human Rights.[12] Article 19 of the ICCPR expressly provides that the right to freedom of opinion and expression may be limited so far as it is necessary "for respect of the rights or reputations of others".[12] Consequently, international human rights law provides that while individuals should have the right to a legal remedy for defamation, this right must be balanced with the equally protected right to freedom of opinion and expression. In general, this means ensuring that domestic defamation law adequately balances individuals' right to protect their reputation with freedom of expression and of the press.[13]
In most of Europe, Article 10 of the European Convention on Human Rights permits restrictions on freedom of speech when necessary to protect the reputation or rights of others.[14] Additionally, restrictions of freedom of expression and other rights guaranteed by international human rights laws (including the European Convention on Human Rights (ECHR)) and by the constitutions of a variety of countries are subject to some variation of the three-part test recognised by the United Nations Human Rights Committee, which requires that limitations be: 1) "provided by law that is clear and accessible to everyone", 2) "proven to be necessary and legitimate to protect the rights or reputations of others", and 3) "proportionate and the least restrictive to achieve the purported aim".[15] This test is analogous to the Oakes Test applied domestically by the Supreme Court of Canada in assessing whether limitations on constitutional rights are "demonstrably justifiable in a free and democratic society" under Section 1 of the Canadian Charter of Rights and Freedoms, the "necessary in a democratic society" test applied by the European Court of Human Rights in assessing limitations on rights under the ECHR, Section 36 of the post-Apartheid Constitution of South Africa,[16] and Section 24 of the 2010 Constitution of Kenya.[17] Nevertheless, the worldwide use of criminal[18] and civil defamation to censor, intimidate or silence critics has been increasing in recent years.[19]
In 2011, the United Nations Human Rights Committee published its General Comment No. 34 (CCPR/C/GC/34) regarding Article 19 of the ICCPR.[20]
Paragraph 47 states:
Defamation laws must be crafted with care to ensure that they comply with paragraph 3 [of Article 19 of the ICCPR], and that they do not serve, in practice, to stifle freedom of expression. All such laws, in particular penal defamation laws, should include such defences as the defence of truth and they should not be applied with regard to those forms of expression that are not, of their nature, subject to verification. At least with regard to comments about public figures, consideration should be given to avoiding penalizing or otherwise rendering unlawful untrue statements that have been published in error but without malice. In any event, a public interest in the subject matter of the criticism should be recognized as a defence. Care should be taken by States parties to avoid excessively punitive measures and penalties. Where relevant, States parties should place reasonable limits on the requirement for a defendant to reimburse the expenses of the successful party. States parties should consider the decriminalization of defamation and, in any case, the application of the criminal law should only be countenanced in the most serious of cases and imprisonment is never an appropriate penalty. It is impermissible for a State party to indict a person for criminal defamation but then not to proceed to trial expeditiously – such a practice has a chilling effect that may unduly restrict the exercise of freedom of expression of the person concerned and others.
While each legal tradition approaches defamation differently, it is typically regarded as a tort[a] for which the offended party can take civil action. The remedies available to successful plaintiffs in defamation cases vary between jurisdictions, ranging from damages to court orders requiring the defendant to retract the offending statement or to publish a correction or an apology.
Modern defamation law in common law jurisdictions is historically derived from English defamation law. English law allows actions for libel to be brought in the High Court for any published statements alleged to defame a named or identifiable individual or individuals (under English law companies are legal persons, and allowed to bring suit for defamation[22][23][24]) in a manner that causes them loss in their trade or profession, or causes a reasonable person to think worse of them.
In contemporary common law jurisdictions, to constitute defamation, a claim must generally be false and must have been made to someone other than the person defamed.[25] Some common law jurisdictions distinguish between spoken defamation, called slander, and defamation in other media such as printed words or images, called libel.[26] The fundamental distinction between libel and slander lies solely in the form in which the defamatory matter is published. If the offending material is published in some fleeting form, such as spoken words or sounds, sign language, gestures or the like, then it is slander. In contrast, libel encompasses defamation by written or printed words, pictures, or in any form other than spoken words or gestures.[27][b] The law of libel originated in the 17th century in England. With the growth of publication came the growth of libel and development of the tort of libel.[28] The highest award in an American defamation case, at US$222.7 million, was rendered in 1997 against Dow Jones in favour of MMAR Group Inc.;[29] however, the verdict was dismissed in 1999 amid allegations that MMAR failed to disclose audiotapes made by its employees.[30]
In common law jurisdictions, civil lawsuits alleging defamation have frequently been used by both private businesses and governments to suppress and censor criticism. A notable example of such lawsuits being used to suppress political criticism of a government is the use of defamation claims by politicians in Singapore's ruling People's Action Party to harass and suppress opposition leaders such as J. B. Jeyaretnam.[31][32][33][34][35] Over the first few decades of the twenty-first century, the phenomenon of strategic lawsuits against public participation has gained prominence in many common law jurisdictions outside Singapore as activists, journalists, critics of corporations, political leaders, and public figures are increasingly targeted with vexatious defamation litigation.[36] As a result, tort reform measures have been enacted in various jurisdictions; the California Code of Civil Procedure and Ontario's Protection of Public Participation Act do so by enabling defendants to make a special motion to strike or dismiss, during which discovery is suspended and which, if successful, terminates the lawsuit and allows the party to recover its legal costs from the plaintiff.[37][38]
There are a variety of defences to defamation claims in common law jurisdictions.[39] The two most fundamental defences arise from the doctrine in common law jurisdictions that only a false statement of fact (as opposed to opinion) can be defamatory. This doctrine gives rise to two separate but related defences: opinion and truth. Statements of opinion cannot be regarded as defamatory as they are inherently non-falsifiable.[c] Where a statement has been shown to be one of fact rather than opinion, the most common defence in common law jurisdictions is that of truth. Proving the truth of an allegedly defamatory statement is always a valid defence.[41] Where a statement is partially true, certain jurisdictions in the Commonwealth have provided by statute that the defence "shall not fail by reason only that the truth of every charge is not proved if the words not proved to be true do not materially injure the claimant's reputation having regard to the truth of the remaining charges".[42] Similarly, the American doctrine of substantial truth provides that a statement is not defamatory if it has "slight inaccuracies of expression" but is otherwise true.[43] Since a statement can only be defamatory if it harms another person's reputation, another defence tied to the ability of a statement to be defamatory is to demonstrate that, regardless of whether the statement is true or is a statement of fact, it does not actually harm someone's reputation.
It is also necessary in these cases to show that there is a well-founded public interest in the specific information being widely known, and this may be the case even for public figures. Public interest is generally not "what the public is interested in", but rather "what is in the interest of the public".[44][45]
A variety of other defences are recognised in one or more common law jurisdictions.[46][47]
Media liability or defamation insurance is often purchased by publishers and journalists to cover potential damage awards from libel lawsuits.[50][51][52] Roughly three-quarters of all money spent on claims by liability insurers goes to lawyers, and only one-quarter goes to settlements or judgments, according to one estimate from Michelle Worrall Tilton of Media Risk Consultants.[50] Some advise buying worldwide coverage that offers a defence against cases regardless of where in the world they are filed, since a complainant can look for a more favorable jurisdiction in which to file their claim.[50] Investigative journalism usually requires higher insurance premiums, with some plans not covering investigative work at all.[51]
Many common law jurisdictions recognise that some categories of statements are considered to be defamatory per se, such that people making a defamation claim for these statements do not need to prove that the statement was defamatory.[53] In an action for defamation per se, the law recognises that certain false statements are so damaging that they create a presumption of injury to the plaintiff's reputation, allowing a defamation case to proceed to verdict with no actual proof of damages. Although laws vary by state, and not all jurisdictions recognise defamation per se, there are four general categories of false statement that typically support a per se action:[54] accusing the plaintiff of a crime; alleging that the plaintiff has a loathsome disease; imputing serious sexual misconduct; or alleging conduct incompatible with the plaintiff's trade, business, or profession.
If the plaintiff proves that such a statement was made and was false, to recover damages the plaintiff need only prove that someone had made the statement to any third party. No proof of special damages is required. However, to recover full compensation a plaintiff should be prepared to prove actual damages.[54] As with any defamation case, truth remains an absolute defence to defamation per se. This means that even if the statement would be considered defamatory per se if false, if the defendant establishes that it is in fact true, an action for defamation per se cannot survive.[55] The conception of what type of allegation may support an action for defamation per se can evolve with public policy. For example, in May 2012 an appeals court in New York, citing changes in public policy with regard to homosexuality, ruled that describing someone as gay is not defamation.[56]
While defamation torts are broadly similar across common law jurisdictions, differences have arisen as a result of diverging case law, statutes and other legislative action, and constitutional concerns[d] specific to individual jurisdictions.
Some jurisdictions have a separate tort or delict of injury, intentional infliction of emotional distress, involving the making of a statement, even if truthful, intended to harm the claimant out of malice; some have a separate tort or delict of "invasion of privacy" in which the making of a true statement may give rise to liability; but neither of these comes under the general heading of "defamation". The tort of harassment created by Singapore's Protection from Harassment Act 2014 is an example of a tort of this type being created by statute.[42] There is also, in almost all jurisdictions, a tort or delict of "misrepresentation", involving the making of a statement that is untrue even though not defamatory. Thus a surveyor who states a house is free from risk of flooding has not defamed anyone, but may still be liable to someone who purchases the house relying on this statement. Other increasingly common claims similar to defamation in U.S. law are claims that a famous trademark has been diluted through tarnishment (see generally trademark dilution), "intentional interference with contract", and "negligent misrepresentation". In America, for example, the unique tort of false light protects plaintiffs against statements which are not technically false but are misleading.[57] Libel and slander both require publication.[58]
Although laws vary by state, a defamation action in America typically requires that the plaintiff prove that the defendant made a false statement of fact about the plaintiff, communicated it to a third party, acted with fault amounting to at least negligence, and thereby caused harm to the plaintiff.
Additionally, American courts apply special rules in the case of statements made in the press concerning public figures, which can be used as a defence. While a plaintiff alleging defamation in an American court must usually prove that the statement caused harm and was made without adequate research into its truthfulness, where the plaintiff is a celebrity or public official they must additionally prove that the statement was made with actual malice (i.e. the intent to do harm or with reckless disregard for the truth).[59][60] A series of court rulings led by New York Times Co. v. Sullivan, 376 U.S. 254 (1964) established that for a public official (or other legitimate public figure) to win a libel case in an American court, the statement must have been published knowing it to be false or with reckless disregard to its truth (i.e. actual malice).[61] The Associated Press estimates that 95% of libel cases involving news stories do not arise from high-profile news stories, but "run of the mill" local stories like news coverage of local criminal investigations or trials, or business profiles.[62] An early example of libel is the case of John Peter Zenger in 1735. Zenger was hired to publish the New York Weekly Journal. When he printed another man's article criticising William Cosby, the royal governor of Colonial New York, Zenger was accused of seditious libel.[28] The verdict was returned as not guilty on the charge of seditious libel, because it was proven that all the statements Zenger had published about Cosby had been true, so there was not an issue of defamation. Another example of libel is the case of New York Times Co. v. Sullivan (1964). The Supreme Court of the United States overruled a state court in Alabama that had found The New York Times guilty of libel for printing an advertisement that criticised Alabama officials for mistreating student civil rights activists. Even though some of what The Times printed was false, the court ruled in its favour, saying that libel of a public official requires proof of actual malice, which was defined as a "knowing or reckless disregard for the truth".[63]
Many jurisdictions within the Commonwealth (e.g. Singapore,[64] Ontario,[65] and the United Kingdom[66]) have enacted legislation reforming their defamation laws.
Libel law in England and Wales was further overhauled by the Defamation Act 2013.
Defamation in Indian tort law largely resembles that of England and Wales. Indian courts have endorsed[67] the defences of absolute[68] and qualified privilege,[69] fair comment,[70] and justification.[71] While statutory law in the United Kingdom provides that, if the defendant is only successful in proving the truth of some of the several charges against him, the defence of justification might still be available if the charges not proved do not materially injure the reputation,[72] there is no corresponding provision in India, though it is likely that Indian courts would treat this principle as persuasive precedent.[73] Recently, incidents of defamation in relation to public figures have attracted public attention.[74]
The origins of U.S. defamation law pre-date the American Revolution.[e] Though the First Amendment of the American Constitution was designed to protect freedom of the press, it was primarily envisioned to prevent censorship by the state rather than defamation suits; thus, for most of American history, the Supreme Court did not interpret the First Amendment as applying to libel cases involving media defendants. This left libel laws, based upon the traditional common law of defamation inherited from the English legal system, mixed across the states. The 1964 case New York Times Co. v. Sullivan dramatically altered the nature of libel law in the country by elevating the fault element for public officials to actual malice – that is, public figures could win a libel suit only if they could demonstrate the publisher's "knowledge that the information was false" or that the information was published "with reckless disregard of whether it was false or not".[76] Later the Supreme Court held that statements that are so ridiculous as to be clearly untrue are protected from libel claims,[77] as are statements of opinion relating to matters of public concern that do not contain a provably false factual connotation.[78] Subsequent state and federal cases have addressed defamation law and the Internet.[79]
American defamation law is much less plaintiff-friendly than its counterparts in European and Commonwealth countries. A comprehensive discussion of what is and is not libel or slander under American law is difficult, as the definition differs between different states and is further affected by federal law.[80] Some states codify what constitutes slander and libel together, merging the concepts into a single defamation law.[54]
New Zealand received English law with the signing of the Treaty of Waitangi in February 1840. The current Act is the Defamation Act 1992, which came into force on 1 February 1993 and repealed the Defamation Act 1954.[81] New Zealand law allows for the following remedies in an action for defamation: compensatory damages; an injunction to stop further publication; a correction or a retraction; and in certain cases, punitive damages. Section 28 of the Act allows for punitive damages only when there is a flagrant disregard of the rights of the person defamed. As the law assumes that an individual suffers loss if a statement is defamatory, there is no need to prove that specific damage or loss has occurred. However, Section 6 of the Act allows a defamation action brought by a corporate body to proceed only when the body corporate alleges and proves that the publication of the defamation has caused or is likely to cause pecuniary loss to that body corporate.
As is the case for most Commonwealth jurisdictions, Canada follows English law on defamation issues (except in Quebec, where the private law is derived from French civil law). In common law provinces and territories, defamation covers any communication that tends to lower the esteem of the subject in the minds of ordinary members of the public.[82] Probably true statements are not excluded, nor are political opinions. Intent is always presumed, and it is not necessary to prove that the defendant intended to defame. In Hill v. Church of Scientology of Toronto (1995), the Supreme Court of Canada rejected the actual malice test adopted in the US case New York Times Co. v. Sullivan. Once a claim has been made, the defendant may avail themselves of a defence of justification (the truth), fair comment, responsible communication,[83] or privilege. Publishers of defamatory comments may also use the defence of innocent dissemination where they had no knowledge of the nature of the statement, it was not brought to their attention, and they were not negligent.[84][85]
Common law jurisdictions vary as to whether they permit corporate plaintiffs in defamation actions. Under contemporary Australian law, private corporations are denied the right to sue for defamation, with an exception for small businesses (corporations with fewer than 10 employees and no subsidiaries); this rule was introduced by the state of New South Wales in 2003 and then adopted nationwide in 2006.[86] By contrast, Canadian law grants private corporations substantially the same right to sue for defamation as individuals possess.[86] Since 2013, English law has charted a middle course, allowing private corporations to sue for defamation but requiring them to prove that the defamation caused both serious harm and serious financial loss, which individual plaintiffs are not required to demonstrate.[86]
Defamation in jurisdictions applying Roman Dutch law (i.e. most of Southern Africa,[f] Indonesia, Suriname, and the Dutch Caribbean) gives rise to a claim by way of "actio iniuriarum". For liability under the actio iniuriarum, the general elements of delict must be present, but specific rules have been developed for each element. Causation, for example, is seldom in issue, and is assumed to be present. The elements of liability under the actio iniuriarum are harm, in the form of the infringement of a personality interest; wrongful (unlawful) conduct; and intention (animus iniuriandi).
Under the actio iniuriarum, harm consists in the infringement of a personality right, either "corpus", "dignitas", or "fama". Dignitas is a generic term meaning 'worthiness, dignity, self-respect', and comprises related concerns like mental tranquillity and privacy. Because it is such a wide concept, its infringement must be serious. Not every insult is humiliating; one must prove contumelia. This includes insult (iniuria in the narrow sense), adultery, loss of consortium, alienation of affection, breach of promise (but only in a humiliating or degrading manner), et cetera. "Fama" is a generic term referring to reputation, and the actio iniuriarum pertaining to it encompasses defamation more broadly. Beyond simply covering actions that fall within the broader concept of defamation, the "actio iniuriarum" relating to infringements of a person's corpus provides civil remedies for assaults, acts of a sexual or indecent nature, and wrongful arrest and detention.
In Scots law, which is closely related to Roman Dutch law, the remedy for defamation is similarly the actio iniuriarum, and the most common defence is "veritas" (i.e. proving the truth of the otherwise defamatory statement). Defamation falls within the realm of non-patrimonial (i.e. dignitary) interests. The Scots law pertaining to the protection of non-patrimonial interests is said to be 'a thing of shreds and patches'.[87] This notwithstanding, there is 'little historical basis in Scots law for the kind of structural difficulties that have restricted English law' in the development of mechanisms to protect so-called 'rights of personality'.[88] The actio iniuriarum heritage of Scots law gives the courts scope to recognise, and afford reparation in, cases in which no patrimonial (or 'quasi-patrimonial') 'loss' has occurred, but a recognised dignitary interest has nonetheless been invaded through the wrongful conduct of the defender. For such reparation to be offered, however, the non-patrimonial interest must be deliberately affronted: negligent interference with a non-patrimonial interest will not be sufficient to generate liability.[89] An actio iniuriarum requires that the conduct of the defender be 'contumelious'[90]; that is, it must show such hubristic disregard of the pursuer's recognised personality interest that an intention to affront (animus iniuriandi) might be imputed.[91]
In addition to tort law, many jurisdictions treat defamation as a criminal offence and provide for penalties as such. Article 19, a British free expression advocacy group, has published global maps[92] charting the existence of criminal defamation law across the globe, as well as showing countries that have special protections for political leaders or functionaries of the state.[93]
There can be regional statutes that differ from the national norm. For example, in the United States, criminal defamation is generally limited to the living. However, seven states (Idaho, Kansas, Louisiana, Nevada, North Dakota, Oklahoma, Utah) have criminal statutes regarding defamation of the dead.[94]
The Organization for Security and Co-operation in Europe (OSCE) has also published a detailed database on criminal and civil defamation provisions in 55 countries, including all European countries, all member countries of the Commonwealth of Independent States, the United States, and Canada.[4]
Questions of group libel have been appearing in common law for hundreds of years. One of the earliest known cases of a defendant being tried for defamation of a group was the case of R v Orme and Nutt (1700). In this case, the jury found that the defendant was guilty of libelling several subjects, though they did not specifically identify who these subjects were. A report of the case told that the jury believed that "where a writing ... inveighs against mankind in general, or against a particular order of men, as for instance, men of the gown, this is no libel, but it must descend to particulars and individuals to make it libel."[95] This jury believed that only individuals who believed they were specifically defamed had a claim to a libel case. Since the jury was unable to identify the exact people who were being defamed, there was no basis for finding that the statements were libel.
Another early English group libel case which has been frequently cited is King v. Osborne (1732). In this case, the defendant was on trial "for printing a libel reflecting upon the Portuguese Jews". The printing in question claimed that Jews who had arrived in London from Portugal burned a Jewish woman to death when she had a child with a Christian man, and that this act was common. Following Osborne's anti-Semitic publication, several Jews were attacked. Initially, the judge seemed to believe the court could do nothing since no individual was singled out by Osborne's writings. However, the court concluded that "since the publication implied the act was one Jews frequently did, the whole community of Jews was defamed."[96] Though various reports of this case give differing accounts of the crime, this report clearly shows a ruling based on group libel. Since laws restricting libel were accepted at this time because of libel's tendency to lead to a breach of the peace, group libel laws were justified because they showed potential for an equal or perhaps greater risk of violence.[97] For this reason, group libel cases are criminal even though most libel cases are civil torts.
In a variety of common law jurisdictions, criminal laws prohibiting protests at funerals, sedition, false statements in connection with elections, and the use of profanity in public are also often used in contexts similar to criminal libel actions. The boundaries of a court's power to hold individuals in "contempt of court" for what amounts to alleged defamatory statements about judges or the court process by attorneys or other people involved in court cases is also not well established in many common law countries.
While defamation torts are less controversial, as they ostensibly involve plaintiffs seeking to protect their right to dignity and their reputation, criminal defamation is more controversial as it involves the state expressly seeking to restrict freedom of expression. Human rights organisations, and other organisations such as the Council of Europe and the Organization for Security and Co-operation in Europe, have campaigned against strict defamation laws that criminalise defamation.[98][99] The freedom of expression advocacy group Article 19 opposes criminal defamation, arguing that civil defamation laws providing defences for statements on matters of public interest are better compliant with international human rights law.[13] The European Court of Human Rights has placed restrictions on criminal libel laws because of the freedom of expression provisions of the European Convention on Human Rights. One notable case was Lingens v. Austria (1986).
According to the Criminal Code of Albania, defamation is a crime. Slandering in the knowledge of falsity is subject to fines from 40,000 ALL (c. $350) to one million ALL (c. $8,350).[105] If the slandering occurs in public or damages multiple people, the fine is 40,000 ALL to three million ALL (c. $25,100).[106] In addition, defamation of authorities, public officials or foreign representatives (Articles 227, 239 to 241) are separate crimes with maximum penalties varying from one to three years of imprisonment.[107][108]
In Argentina, the crimes of calumny and injury are covered in the chapter "Crimes Against Honor" (Articles 109 to 117-bis) of the Penal Code. Calumny is defined as "the false imputation to a determined person of a concrete crime that leads to a lawsuit" (Article 109). However, expressions referring to subjects of public interest or that are not assertive do not constitute calumny. The penalty is a fine from 3,000 to 30,000 pesos. Whoever intentionally dishonours or discredits a determined person is punished with a fine from 1,500 to 20,000 pesos (Article 110).
Whoever publishes or reproduces, by any means, calumnies and injuries made by others will be punished as if responsible for the calumnies and injuries whenever their content is not correctly attributed to the corresponding source. Exceptions are expressions referring to subjects of public interest or that are not assertive (see Article 113). When calumny or injury is committed through the press, a possible extra penalty is the publication of the judicial decision at the expense of the guilty party (Article 114). Whoever passes on information about a person that is included in a personal database and that one knows to be false is punished with six months to three years in prison. When someone is harmed as a result, penalties are increased by half (Article 117 bis, §§ 2nd and 3rd).[109]
Defamation law in Australia developed primarily out of the English law of defamation and its cases, though now there are differences introduced by statute and by the implied constitutional limitation on governmental powers to limit speech of a political nature established in Lange v Australian Broadcasting Corporation (1997).[110]
In 2006, uniform defamation laws came into effect across Australia.[111] In addition to fixing the problematic inconsistencies in law between individual states and territories, the laws made a number of changes to the common law position.
The 2006 reforms also established across all Australian states the availability of truth as an unqualified defence; previously a number of states only allowed a defence of truth with the condition that a public interest or benefit existed. The defendant, however, still needs to prove that the defamatory imputations are substantially true.[115]
The law as it currently stands in Australia was summarised in the 2015 case of Duffy v Google by Justice Blue in the Supreme Court of South Australia:[116]
The tort can be divided up into the following ingredients:
Defences available to defamation defendants include absolute privilege, qualified privilege, justification (truth), honest opinion, publication of public documents, fair report of proceedings of public concern, and triviality.[46]
On 10 December 2002, the High Court of Australia delivered judgment in the Internet defamation case of Dow Jones v Gutnick.[117] The judgment established that internet-published foreign publications that defamed an Australian in their Australian reputation could be held accountable under Australian defamation law. The case gained worldwide attention and is often said, inaccurately, to be the first of its kind. A similar case that predates Dow Jones v Gutnick is Berezovsky v Michaels in England.[118]
Australia's first Twitter defamation case to go to trial is believed to be Mickle v Farley. The defendant, former Orange High School student Andrew Farley, was ordered to pay $105,000 to a teacher for writing defamatory remarks about her on the social media platform.[119]
A more recent case in defamation law was Hockey v Fairfax Media Publications Pty Limited [2015], heard in the Federal Court of Australia. The judgment was significant in demonstrating that tweets of even as few as three words can be defamatory, as was held in this case.[120]
In Austria, the crime of defamation is foreseen by Article 111 of the Criminal Code. Related criminal offences include "slander and assault" (Article 115), which occurs "if a person insults, mocks, mistreats or threatens with ill-treatment another one in public", and "malicious falsehood" (Article 297), defined as a false accusation that exposes someone to the risk of prosecution.[121]
In Azerbaijan, the crime of defamation (Article 147) may result in a fine of up to "500 times the amount of minimum salaries", public work for up to 240 hours, correctional work for up to one year, or imprisonment of up to six months. Penalties are aggravated to up to three years of prison if the victim is falsely accused of having committed a crime "of grave or very grave nature" (Article 147.2). The crime of insult (Article 148) can lead to a fine of up to 1,000 times the minimum wage, or to the same penalties as defamation for public work, correctional work or imprisonment.[122][123]
According to the OSCE report on defamation laws, "Azerbaijan intends to remove articles on defamation and insult from criminal legislation and preserve them in the Civil Code".[124]
In Belgium, crimes against honour are foreseen in Chapter V of the Belgian Penal Code, Articles 443 to 453-bis. Someone is guilty of calumny "when law admits proof of the alleged fact" and of defamation "when law does not admit this evidence" (Article 443). The penalty is eight days to one year of imprisonment, plus a fine (Article 444). In addition, the crime of "calumnious denunciation" (Article 445) is punished with 15 days to six months in prison, plus a fine. In any of the crimes covered by Chapter V of the Penal Code, the minimum penalty may be doubled (Article 453-bis) "when one of the motivations of the crime is hatred, contempt or hostility of a person due to his or her alleged race, colour of the skin, ancestry, national origin or ethnicity, nationality, gender, sexual orientation, marital status, place of birth, age, patrimony, philosophical or religious belief, present or future health condition, disability, native language, political belief, physical or genetic characteristic, or social origin".[125][126]
In Brazil, defamation is a crime, prosecuted either as "defamation" (three months to a year in prison, plus a fine; Article 139 of the Penal Code), "calumny" (six months to two years in prison, plus a fine; Article 138) or "injury" (one to six months in prison, or a fine; Article 140), with aggravated penalties when the crime is practiced in public (Article 141, item III) or against a state employee because of his regular duties. Incitation to hatred and violence is also foreseen in the Penal Code (incitation to a crime, Article 286). Moreover, in situations like bullying or moral constraint, defamation acts are also covered by the crimes of "illegal constraint" (Article 146 of the Penal Code) and "arbitrary exercise of discretion" (Article 345), defined as breaking the law as a vigilante.[127]
In Bulgaria, defamation is formally a criminal offence, but the penalty of imprisonment was abolished in 1999. Articles 146 (insult), 147 (criminal defamation) and 148 (public insult) of the Criminal Code prescribe a penalty of a fine.[128]
In Quebec, defamation was originally grounded in the law inherited from France and is presently established by Chapter III, Title 2 of Book One of the Civil Code of Quebec, which provides that "every person has a right to the respect of his reputation and privacy".[129]
To establish civil liability for defamation, the plaintiff must establish, on a balance of probabilities, the existence of an injury (damage), a wrongful act (fault), and a causal connection (link of causality) between the two. A person who has made defamatory remarks will not necessarily be civilly liable for them. The plaintiff must further demonstrate that the person who made the remarks committed a wrongful act. Defamation in Quebec is governed by a reasonableness standard, as opposed to strict liability; a defendant who made a false statement would not be held liable if it was reasonable to believe the statement was true.[130]
The Criminal Code of Canada specifies defamatory libel, including the publication of a libel known to be false, as a criminal offence.
The criminal portion of the law has been rarely applied, but it has been observed that, when treated as an indictable offence, it often appears to arise from statements made against an agent of the Crown, such as a police officer, a corrections officer, or a Crown attorney.[134] In the most recent case, in 2012, an Ottawa restaurant owner was convicted of ongoing online harassment of a customer who had complained about the quality of food and service in her restaurant.[135]
According to the OSCE official report on defamation laws issued in 2005, 57 persons in Canada were accused of defamation, libel and insult, among which 23 were convicted – 9 to prison sentences, 19 to probation and one to a fine. The average period in prison was 270 days, and the maximum sentence was four years of imprisonment.[136]
The rise of the internet as a medium for publication and the expression of ideas, including the emergence of social media platforms transcending national boundaries, has proven challenging to reconcile with traditional notions of defamation law. Questions of jurisdiction and conflicting limitation periods in trans-border online defamation cases, liability for hyperlinks to defamatory content, filing lawsuits against anonymous parties, and the liability of internet service providers and intermediaries make online defamation a uniquely complicated area of law.[137]
In 2011, the Supreme Court of Canada held that a person who posts hyperlinks on a website which lead to another site with defamatory content is not publishing that defamatory material for the purposes of libel and defamation law.[138][139]
In Chile, the crimes of calumny and slanderous allegation (injurias) are covered by Articles 412 to 431 of the Penal Code. Calumny is defined as "the false imputation of a determined crime and that can lead to a public prosecution" (Article 412). If the calumny is written and with publicity, the penalty is "lower imprisonment" in its medium degree plus a fine of 11 to 20 "vital wages" when it refers to a crime, or "lower imprisonment" in its minimum degree plus a fine of six to ten "vital wages" when it refers to a misdemeanor (Article 413). If it is not written or with publicity, the penalty is "lower imprisonment" in its minimum degree plus a fine of six to fifteen "vital wages" when it is about a crime, or plus a fine of six to ten "vital wages" when it is about a misdemeanor (Article 414).[140][141]
According to Article 25 of the Penal Code, "lower imprisonment" is defined as a prison term between 61 days and five years. According to Article 30, the penalty of "lower imprisonment" in its medium or minimum degrees carries with it also the suspension of the exercise of a public position during the prison term.[142]
Article 416 definesinjuriaas "all expression said or action performed that dishonors, discredits or causes contempt". Article 417 defines broadlyinjurias graves(grave slander), including the imputation of a crime or misdemeanor that cannot lead to public prosecution, and the imputation of a vice or lack of morality, which are capable of harming considerably the reputation, credit or interests of the offended person. "Grave slander" in written form or with publicity are punished with "lower imprisonment" in its minimum to medium degrees plus a fine of eleven to twenty "vital wages". Calumny or slander of a deceased person (Article 424) can be prosecuted by the spouse, children, grandchildren, parents, grandparents, siblings andheirsof the offended person. Finally, according to Article 425, in the case of calumnies and slander published in foreign newspapers, are considered liable all those who from Chilean territory sent articles or gave orders for publication abroad, or contributed to the introduction of such newspapers in Chile with the intention of propagating the calumny and slander.[143]
Based on text from Wikisource:[better source needed] Civil Code of the People's Republic of China, "Book Four" ("Personality Rights").
"Chapter I" ("General rules"):
"Chapter V" ("Rights to Reputation and Rights to Honor"):
Article 246 of the Criminal Law of the People's Republic of China (中华人民共和国刑法) makes serious defamation punishable by fixed-term imprisonment of not more than three years or criminal detention. The offence is prosecuted only upon complaint, except where it seriously harms public order or the interests of the state.[144]
In Croatia, the crime of insult prescribes a penalty of up to three months in prison, or a fine of "up to 100 daily incomes" (Criminal Code, Article 199). If the crime is committed in public, penalties are aggravated to up to six months of imprisonment, or a fine of "up to 150 daily incomes" (Article 199–2). Moreover, the crime of defamation occurs when someone affirms or disseminates false facts about another person that can damage his reputation. The maximum penalty is one year in prison, or a fine of up to 150 daily incomes (Article 200–1). If the crime is committed in public, the prison term can reach one year (Article 200–2). On the other hand, according to Article 203, there is an exemption from the application of the aforementioned articles (insult and defamation) when the specific context is that of a scientific work, literary work, work of art, public information conducted by a politician or a government official, journalistic work, or the defence of a right or the protection of justifiable interests, in all cases provided that the conduct was not aimed at damaging someone's reputation.[145]
According to the Czech Criminal Code, Article 184, defamation is a crime. Penalties may reach a maximum prison term of one year (Article 184–1) or, if the crime is committed through the press, film, radio, TV, a publicly accessible computer network, or by "similarly effective" methods, the offender faces up to two years in prison or a prohibition from exercising a specific activity.[146] However, only the most severe cases will be subject to criminal prosecution. The less severe cases can be resolved by an action for apology, damages or injunctions.
In Denmark, libel is a crime, as defined by Article 267 of the Danish Criminal Code, with a penalty of up to six months in prison or a fine, with proceedings initiated by the victim. In addition, Article 266-b prescribes a maximum prison term of two years in the case of public defamation aimed at a group of persons because of their race, colour, national or ethnic origin, religion or "sexual inclination".[147][148]
In Finland, defamation is a crime, according to theCriminal Code(Chapter 24, Sections 9 and 10), punishable with a fine, or, if aggravated, with up to two years' imprisonment or a fine. Defamation is defined as spreading a false report or insinuation apt to cause harm to a person, or otherwise disparaging someone. Defamation of the deceased may also constitute an offence if apt to cause harm to surviving loved ones. In addition, there is a crime called "dissemination of information violating personal privacy" (Chapter 24, Section 8), which consists in disseminating information, even accurate, in a way that is apt to harm someone's right to privacy. Information that may be relevant with regard to a person's conduct in public office, in business, or in a comparable position, or of information otherwise relevant to a matter of public interest, is not covered by this prohibition.[149][150]Finnish criminal law has no provisions penalizing the defamation of corporate entities, only of natural persons.
While defamation law in most jurisdictions centres on the protection of individuals' dignity or reputation, defamation law in France is particularly rooted in protecting the privacy of individuals.[151] While the broader scope of the rights protected makes defamation cases easier to prove in France than, for example, in England, awards in defamation cases are significantly lower, and it is common for courts to award symbolic damages as low as €1.[151] Controversially, damages in defamation cases brought by public officials are higher than those brought by ordinary citizens, which has a chilling effect on criticism of public policy.[152] While the only statutory defence available under French defamation law is to demonstrate the truth of the defamatory statement in question, a defence that is unavailable in cases involving an individual's personal life, French courts have recognised three additional exceptions:[153]
Defamation is defined as "the allegation or [the] imputation of a fact that damages the honor or reputation of the person or body to which the fact is imputed". A defamatory allegation is considered an insult if it does not include any facts or if the claimed facts cannot be verified.
In German law, there is no distinction between libel and slander. As of 2006, German defamation lawsuits are increasing.[154] The relevant offences of Germany's Criminal Code are §90 (denigration of the Federal President), §90a (denigration of the [federal] State and its symbols), §90b (unconstitutional denigration of the organs of the Constitution), §185 ("insult"), §186 (defamation of character), §187 (defamation with deliberate untruths), §188 (political defamation with increased penalties for offending against paras 186 and 187), §189 (denigration of a deceased person), and §192 ("insult" with true statements). Other sections relevant to prosecution of these offences are §190 (criminal conviction as proof of truth), §193 (no defamation in the pursuit of rightful interests), §194 (application for a criminal prosecution under these paragraphs), §199 (mutual insult allowed to be left unpunished), and §200 (method of proclamation).
In Greece, the maximum prison term for defamation, libel or insult was five years, while the maximum fine was €15,000.[155]
The crime of insult (Article 361, § 1, of the Penal Code) may have led to up to one year of imprisonment or a fine, while unprovoked insult (Article 361-A, § 1) was punished with at least three months in prison. In addition, defamation may have resulted in up to two months in prison or a fine, while aggravated defamation could have led to at least three months of prison, plus a possible fine (Article 363) and deprivation of the offender's civil rights. Finally, disparaging the memory of a deceased person was punished with imprisonment of up to six months (Penal Code, Article 365).[156]
In India, a defamation case can be filed under either criminal law or civil law, or both.[157]
According to the Constitution of India,[158] the fundamental right to free speech (Article 19) is subject to "reasonable restrictions":
19. Protection of certain rights regarding freedom of speech, etc.
Accordingly, for the purpose of criminal defamation, "reasonable restrictions" are defined in Section 499 of the Indian Penal Code, 1860 (Section 499 of the Indian Penal Code has now been replaced by Section 356 of the Bharatiya Nyaya Sanhita).[100] This section defines defamation and provides ten valid exceptions when a statement is not considered to be defamation. It says that defamation takes place when someone "by words either spoken or intended to be read, or by signs or by visible representations, makes or publishes any imputation concerning any person intending to harm, or knowing or having reason to believe that such imputation will harm, the reputation of such person". The punishment is simple imprisonment for up to two years, or a fine, or both (Section 500).
Some other offences related to false allegations: false statements regarding elections (Section 171G), false information (Section 182), false claims in court (Section 209), false criminal charges (Section 211).
Some other offences related to insults: against public servants in judicial proceedings (Section 228), against religion or religious beliefs (Section 295A), against religious feelings (Section 298), against breach of peace (Section 504), against modesty of women (Section 509).
According to the Indian Code of Criminal Procedure, 1973,[159] defamation is prosecuted only upon a complaint (within six months from the act) (Section 199), and is a bailable, non-cognisable and compoundable offence (see: The First Schedule, Classification of Offences).
According to the (revised) Defamation Act 2009,[101] the last criminal offences (Sections 36–37, blasphemy) seem to have been repealed. The statute of limitations is one year from the time of first publication (may be extended to two years by the courts) (Section 38).
The 2009 Act repeals the Defamation Act 1961, which had, together with the underlying principles of the common law of tort, governed Irish defamation law for almost half a century. The 2009 Act represents significant changes in Irish law, as many believe that it previously attached insufficient importance to the media's freedom of expression and weighed too heavily in support of the individual's right to a good name.[160]
According to the Defamation Prohibition Law[full citation needed] (1965), defamation can constitute either a civil or a criminal offence.
As a civil offence, defamation is considered a tort case, and the court may award compensation of up to NIS 50,000 to the person targeted by the defamation, without the plaintiff having to prove material damage.
As a criminal offence, defamation is punishable by a year of imprisonment. In order to constitute a felony, defamation must be intentional and target at least two persons.
In Italy, there used to be different crimes against honor. The crime of injury (Article 594 of the Penal Code) referred to the act of offending someone's honor in their presence and was punishable with up to six months in prison or a fine of up to €516. The crime of defamation (Article 595, Penal Code) refers to any other situation involving offending one's reputation before many persons, and is punishable with up to a year in prison or a fine of up to €1,032, doubled to up to two years in prison or a fine of €2,065 if the offence consists in the attribution of a determined fact. When the offence happens by means of the press or by any other means of publicity, or in a public demonstration, the penalty is imprisonment from six months to three years, or a fine of at least €516. Both were crimes a querela di parte; that is, the victim had the right to choose, at any moment, to stop the criminal prosecution by withdrawing the querela (a formal complaint), or even to pursue the matter only with a civil action, with no querela and therefore no criminal prosecution at all. Since 15 January 2016, injury is no longer a crime but a tort, while defamation is still considered a crime as before.[161]
Article 31 of the Penal Code establishes that crimes committed with abuse of power or with abuse of a profession or art, or with the violation of a duty inherent to that profession or art, lead to the additional penalty of a temporary ban on the exercise of that profession or art. Therefore, journalists convicted of libel may be banned from exercising their profession.[162][163] Deliberately false accusations of defamation, as with any other crime, lead to the crime of calunnia ("calumny", Article 368, Penal Code), which, under the Italian legal system, is defined as the crime of falsely accusing, before the authorities, a person of a crime they did not commit. As to the trial, judgment on the legality of the evidence fades into its relevance.[164]
The Constitution of Japan[165] reads:
Article 21. Freedom of assembly and association as well as speech, press and all other forms of expression are guaranteed. No censorship shall be maintained, nor shall the secrecy of any means of communication be violated.
Under Article 723 of the Japanese Civil Code, a court is empowered to order a tortfeasor in a defamation case to "take suitable measures for the restoration of the [plaintiff's] reputation either in lieu of or together with compensation for damages".[166] An example of a civil defamation case in Japan can be found at Japan civil court finds against ZNTIR President Yositoki (Mitsuo) Hataya and Yoshiaki.
The Penal Code of Japan[102] (a government translation, though still not an official text) seems to prescribe these related offences:
Article 92, "Damage to a Foreign National Flag". Seems relevant to the extent that the wording "...defiles the national flag or other national emblem of a foreign state for the purpose of insulting the foreign state" can be construed to include more abstract defiling; translations of the Japanese term (汚損,[167] oson) include 'defacing'.
Article 172, "False Accusations". That is, false criminal charges (as in complaint, indictment, or information).
Article 188, "Desecrating Places of Worship; Interference with Religious Service". The Japanese term (不敬,[168] fukei) seems to include any act of 'disrespect' or 'blasphemy' – a standard term – as long as it is performed in a place of worship.
Articles 230 and 230-2, "Defamation" (名誉毀損, meiyokison). The general defamation provision. The truth of the allegations is not a factor in determining guilt, but there is a "Special Provision for Matters Concerning Public Interest", under which proving the allegations is allowed as a defence. See also Article 232: "prosecuted only upon complaint".
Article 231, "Insults" (侮辱, bujoku). The general insult provision. See also Article 232: "prosecuted only upon complaint".
Article 233, "Damage to Credibility; Obstruction of Business". A special provision for damaging the reputation of, or 'confidence' (信用,[169] shinnyou) in, the business of another.
For a sample penal defamation case, see President of the Yukan Wakayama Jiji v. States, Vol. 23 No. 7 Minshu 1966 (A) 2472, 975 (Supreme Court of Japan, 25 June 1969) – also on Wikisource. The defence alleged, among other things, a violation of Article 21 of the Constitution. The court found that none of the defence's grounds for appeal amounted to lawful grounds for a final appeal. Nevertheless, the court examined the case ex officio, and found procedural illegalities in the lower courts' judgments (regarding the exclusion of evidence from testimony, as hearsay). As a result, the court quashed the conviction on appeal, and remanded the case to a lower court for further proceedings.
In Malaysia, defamation is both a tort and a criminal offence, meant to protect the reputation and good name of a person. The principal statutes relied upon are the Defamation Act 1957 (Revised 1983) and the Penal Code. Following the practice of other common law jurisdictions like the United Kingdom, Singapore, and India, Malaysia relies on case law. In fact, the Defamation Act 1957 is similar to the English Defamation Act 1952. The Malaysian Penal Code is in pari materia with the Indian and Singaporean Penal Codes.
In Mexico, the crimes of calumny, defamation and slanderous allegation (injurias) have been abolished in the Federal Penal Code as well as in fifteen states. These crimes remain in the penal codes of seventeen states, where the average penalty ranges from 1.1 years in jail (for those convicted of slanderous allegation) to 3.8 years (for those convicted of calumny).[170]
In the Netherlands, defamation is mostly dealt with by lodging a civil complaint at the District Court. Article 167 of Book 6 of the Civil Code holds: "When someone is liable towards another person under this Section because of an incorrect or, by its incompleteness, misleading publication of information of factual nature, the court may, upon a right of action (legal claim) of this other person, order the tortfeasor to publish a correction in a way to be set by court." If the court grants an injunction, the defendant is usually ordered to delete the publication or to publish a rectification statement.
In Norway, defamation was a crime punishable with imprisonment of up to six months or a fine (Penal Code, Chapter 23, § 246). When the offence was likely to harm one's "good name" and reputation, or exposed the victim to hatred, contempt or loss of confidence, the maximum prison term went up to one year, and if the defamation happened in print, in broadcasting or through an especially aggravating circumstance, imprisonment could reach two years (§ 247). When the offender acted "against his better judgment", he was liable to a maximum prison term of three years (§ 248). According to § 251, defamation lawsuits had to be initiated by the offended person, unless the defamatory act was directed at an indefinite group or a large number of persons, in which case it could also be prosecuted by public authorities.[171][172]
Under the new Penal Code, adopted by the Parliament in 2005, defamation ceased to exist as a crime. Instead, any person who believes he or she has been subjected to defamation must press a civil lawsuit. The new Penal Code took effect on 1 October 2015.
According to the Revised Penal Code of the Philippines ("Title Thirteen", "Crimes Against Honor"):[103]
ARTICLE 353. Definition of Libel. – A libel is a public and malicious imputation of a crime, or of a vice or defect, real or imaginary, or any act, omission, condition, status, or circumstance tending to cause the dishonor, discredit, or contempt of a natural or juridical person, or to blacken the memory of one who is dead.
Related articles:
In January 2012, The Manila Times published an article on a criminal defamation case. A broadcaster was jailed for more than two years, following conviction on libel charges, by the Regional Trial Court of Davao. The radio broadcast dramatized a newspaper report regarding former speaker Prospero Nograles, who subsequently filed a complaint. Questioned were the conviction's compatibility with freedom of expression, and the trial in absentia. The United Nations Human Rights Committee recalled its General Comment No. 34, and ordered the Philippine government to provide a remedy, including compensation for time served in prison, and to prevent similar violations in the future.[173]
In 2012, the Philippines enacted Republic Act 10175, titled Cybercrime Prevention Act of 2012. Essentially, this Act provides that libel is criminally punishable and describes it as: "Libel – the unlawful or prohibited act as defined in Article 355 of the Revised Penal Code, as amended, committed through a computer system or any other similar means which may be devised in the future." Professor Harry Roque of the University of the Philippines has written that under this law, electronic libel is punished with imprisonment from six years and one day to up to twelve years.[174][175][176] As of 30 September 2012, five petitions claiming the law to be unconstitutional had been filed with the Philippine Supreme Court, one by Senator Teofisto Guingona III. The petitions all claim that the law infringes on freedom of expression, due process, equal protection and privacy of communication.[177]
In Poland, defamation is a crime that consists of accusing someone of conduct that may degrade him in public opinion or expose him "to the loss of confidence necessary for a given position, occupation or type of activity". Penalties include a fine, limitation of liberty, and imprisonment for up to a year (Article 212.1 of the Criminal Code). The penalty is more severe when the offence happens through the media (Article 212.2).[178] When the insult is public and aims at offending a group of people or an individual because of his or their nationality, ethnicity, race, religion or lack of religion, the maximum prison term is three years.[179]
In Portugal, the defamation crimes are: "defamation" (Article 180 of the Penal Code; up to six months in prison, or a fine of up to 240 days), "injuries" (Article 181; up to three months in prison, or a fine of up to 120 days), and "offence to the memory of a deceased person" (Article 185; up to six months in prison or a fine of up to 240 days). Penalties are aggravated in cases with publicity (Article 183; up to two years in prison or at least 120 days of fine) and when the victim is an authority (Article 184; all other penalties aggravated by an extra half). There is also the additional penalty of "public knowledge of the court decision" (costs paid by the defamer) (Article 189 of the Penal Code), as well as the crime of "incitement of a crime" (Article 297; up to three years in prison, or a fine).[180][181]
Since 2014, defamation has no longer been criminalized in the country.[182]
In Saudi Arabia, defamation of the state, or of a past or present ruler, is punishable under terrorism legislation.[183] In a 2015 case, a Saudi writer was arrested for defaming a former ruler of the country. Reportedly, under a [2014] counterterrorism law, "actions that 'threaten Saudi Arabia's unity, disturb public order, or defame the reputation of the state or the king' are considered acts of terrorism. The law decrees that a suspect can be held incommunicado for 90 days without the presence of their lawyer during the initial questioning."[184]
In Singapore, Division 2 of Part 3 of the Protection from Harassment Act 2014 provides for individuals who have been affected by false statements online to seek a variety of court orders under the tort of harassment that are not available under the pre-internet tort of defamation:[42]
This is distinct from, and does not affect, plaintiffs' right of action under the common law torts of libel and slander as modified by the Defamation Act 1957.[64] The Protection from Harassment Act 2014, which provides for criminal penalties in addition to civil remedies, is specifically designed to address a narrower scope of conduct in order to avoid outlawing an overly broad range of speech, and is confined to addressing speech that causes "harassment, alarm, or distress".[42]
In South Korea, both true and false statements can be considered defamation.[185] The penalties increase for false statements. It is also possible for a person to be criminally defamed when they are no longer alive.[186]
Criminal defamation occurs when a public statement damages the subject's reputation, unless the statement was true and presented solely for the public interest.[186] In addition to criminal law, which allows for imprisonment (up to seven years in case the allegations are false) and monetary fines, one can also sue for damages with civil actions. Generally, criminal actions precede civil ones, with South Korean police acting as judicial investigators.[citation needed]
In October 2008, the Korea JoongAng Daily published an article on online attacks against celebrities, and their potential connection to suicides in the country. Before the death of Choi Jin-sil, there were rumours online of a significant loan to the actor Ahn Jae-hwan, who had killed himself earlier due to debts. U;Nee hanged herself, unable to deal with remarks about her physical appearance and surgery. Jeong Da-bin committed suicide while suffering from depression, later linked to personal attacks about her appearance. Na Hoon-a was falsely rumoured to have been castrated by the yakuza. Byun Jung-soo was falsely reported to have died in a car accident. A professor of information and social studies from Soongsil University warned how rumours around celebrities can impact their lives in unexpected and serious ways.[187]
In January 2009, according to an article in The Korea Times, a Seoul court approved the arrest of the online financial commentator Minerva, for spreading false information. According to the decision, Minerva's online comments affected national credibility negatively. Lawmakers from the ruling Grand National Party proposed a bill that would allow imprisonment of up to three years for online defamation, and would authorize the police to investigate cyber defamation cases without a prior complaint.[188]
In September 2015, according to an article in the Hankook Ilbo, submitted complaints for insults during online games were increasing. Complainants aimed for settlement money, wasting the investigative capacity of police departments. One person could end up suing 50 others, or more. This led to the emergence of settlement-money hunters, who provoked others into insulting them and then demanded compensation. According to statistics from the Cyber Security Bureau of the National Police Agency, the number of cyber defamation and insult reports was 5,712 in 2010, 8,880 in 2014, and at least 8,488 in 2015. More than half of the complaints for cyber insults were game-related (the article mentions League of Legends specifically). Most of the accused were teenagers. Parents often paid settlement fees, ranging from 300,000 to 2,000,000 South Korean won (US$300–2,000 as of 2015), to save their children from getting criminal records.[189]
In the former Soviet Union, defamatory insults could "only constitute a criminal offence, not a civil wrong".[190]
In Spain, the crime of calumny (Article 205 of the Penal Code) consists of accusing someone of a crime while knowing the falsity of the accusation, or with a reckless contempt for truth. Penalties for cases with publicity are imprisonment from six months to two years or a fine of twelve to twenty-four months-fine; for other cases, only a fine of six to twelve months-fine (Article 206). Additionally, the crime of injury (Article 208 of the Penal Code) consists of hurting someone's dignity, depreciating his reputation or injuring his self-esteem, and is only applicable if the offence, by its nature, effects and circumstances, is considered serious by the general public. Injury carries a penalty of a fine of three to seven months-fine, or of six to fourteen months-fine when it is serious and committed with publicity. According to Article 216, an additional penalty for calumny or injury may be imposed by the judge: publication of the judicial decision (in a newspaper) at the expense of the defamer.[191][192]
In Sweden, denigration (ärekränkning) is criminalised by Chapter 5 of the Criminal Code. Article 1 regulates defamation (förtal), which consists of pointing someone out as a criminal or as "having a reprehensible way of living", or of providing information about them "intended to cause exposure to the disrespect of others". The penalty is a fine.[193] It is generally not a requirement that the statements are untrue; it is enough if the statements are meant to be vilifying.[194][195]
Article 2 regulates gross defamation (grovt förtal) and carries a penalty of up to two years in prison or a fine. In judging whether the crime is gross, the court should consider whether the information, because of its content or the scope of its dissemination, is calculated to produce "serious damage"[193] – for example, if it can be established that the defendant knowingly conveyed untruths.[194] Article 4 makes it a crime to defame a deceased person according to Article 1 or 2.[193] Most obviously, this paragraph is meant to make it illegal to defame someone's parents as a way to bypass the law.[194]
Article 3 regulates other insulting behaviour (förolämpning), not characterised under Article 1 or 2, and is punishable with a fine or, if gross, with up to six months in prison or a fine.[193] While an act of defamation involves a third person, this is not a requirement for insulting behaviour.[194]
Under exemptions in the Freedom of the Press Act, Chapter 7, both criminal and civil lawsuits may be brought to court under the laws on denigration.[196]
In Switzerland, the crime of wilful defamation is punished with a maximum term of three years in prison, or with a fine of at least thirty daily penalty units, according to Article 174-2 of the Swiss Criminal Code. Wilful defamation occurs when the offender knows the falsity of his or her allegations and intentionally seeks to ruin the victim's reputation (see Articles 174-1 and 174-2).[197][198]
On the other hand, ordinary defamation is punished only with a maximum monetary penalty of 180 daily penalty units (Article 173–1).[199] When it comes to a deceased or absent person, the statute of limitations is 30 years (after the death).[200]
According to the Civil Code of the Republic of China:[201]
"Part I General Principles", "Chapter II Persons", "Section I Natural Persons":
"Part II Obligations", "Chapter I General Provisions"
"Section 1 – Sources of Obligations", "Sub-section 5 Torts":
"Section 3 – Effects Of Obligations", "Sub-section 1 Performance":
The Criminal Code of the Republic of China (中華民國刑法),[104] under "Chapter 27 Offenses Against Reputation and Credit", lists these articles:
Other related articles:
In July 2000, the Justices of the Judicial Yuan (司法院大法官) – the Constitutional Court of Taiwan – delivered J.Y. Interpretation No. 509 ("The Defamation Case"). They upheld the constitutionality of Article 310 of the Criminal Code. In the Constitution,[202] Article 11 establishes freedom of speech, and Article 23 allows restrictions on freedoms and rights in order to prevent infringing on the freedoms and rights of others. The court found that Article 310 ¶¶ 1–2 were necessary and proportional to protect reputation, privacy, and the public interest. It seemed to extend the defence of truth in ¶ 3 to providing evidence that a perpetrator had reasonable grounds to believe the allegations were true (even if they could not ultimately be proven). Regarding criminal punishments versus civil remedies, it noted that if the law allowed anyone to avoid a penalty for defamation by offering monetary compensation, it would be tantamount to issuing them a licence to defame.[203]
In January 2022, an editorial in the Taipei Times (written by a law student from National Chengchi University) argued against Articles 309 and 310. Its position was to abolish prison sentences in practice, on the way to full decriminalization. It argued that insulting language should be tackled via education, and not in the courts (with the exception of hate speech). According to the article, 180 prosecutors urged the Legislative Yuan to decriminalize defamation, or at least to limit it to private prosecutions (in order to reserve public resources for major crimes, rather than private disputes and quarrels irrelevant to the public interest).[204]
In June 2023, the Constitutional Court delivered its judgment in the Case on the Criminalization of Defamation II. The court dismissed all the complaints and upheld the constitutionality of the disputed provisions. It emphasized that excluding the application of the substantial truth doctrine to defamatory speech concerning private matters of no public concern is proportionate in protecting the victim's reputation and privacy. The court reaffirmed J.Y. Interpretation No. 509 and further supplemented its decision. It elaborated on the offender's duty to check the validity of defamatory statements regarding public matters, and dictated that the offender shall not be punished if there are objective and reasonable grounds for the offender to believe the defamatory statement is true. The court ruled that untrue defamatory statements concerning public matters shall not be punished unless they are issued with actual malice; this includes situations where the offender knowingly, or with gross negligence, issued the defamatory statement. As to the burden of proof for actual malice, the court ruled that it shall rest on the prosecutor or the accuser. To prevent fake news from eroding the marketplace of ideas, the court pointed out that the media (including mass media, social media, and self-media) shall be more thorough than the general public in fact-checking.[205]
The Civil and Commercial Code of Thailand provides that:
A person who, contrary to the truth, asserts or circulates as a fact that which is injurious to the reputation or the credit of another or his earnings or prosperity in any other manner, shall compensate the other for any damage arising therefrom, even if he does not know of its untruth, provided he ought to know it.
A person who makes a communication the untruth of which is unknown to him, does not thereby render himself liable to make compensation, if he or the receiver of the communication has a rightful interest in it.
The Court, when giving judgment as to the liability for wrongful act and the amount of compensation, shall not be bound by the provisions of the criminal law concerning liability to punishment or by the conviction or non-conviction of the wrongdoer for a criminal offence.[206]
In practice, defamation law in Thailand has been found by the Office of the United Nations High Commissioner for Human Rights to facilitate hostile and vexatious litigation by business interests seeking to suppress criticism.[207]
The Thai Criminal Code provides that:
Section 326. Defamation
Whoever, imputes anything to the other person before a third person in a manner likely to impair the reputation of such other person or to expose such other person to be hated or scorned, is said to commit defamation, and shall be punished with imprisonment not exceeding one year or fined not exceeding twenty thousand Baht, or both.
Section 327. Defamation to the Family
Whoever imputes anything to a deceased person before a third person, where that imputation is likely to impair the reputation of the father, mother, spouse or child of the deceased or to expose that person to be hated or scorned, is said to commit defamation, and shall be punished as prescribed by Section 326.[208]
Criminal defamation charges in Thailand under Section 326 of the Criminal Code are frequently used to censor journalists and activists critical of human rights conditions for workers in the country.[207]
The United Kingdom abolished criminal libel on 12 January 2010 by section 73 of the Coroners and Justice Act 2009.[209] There were only a few instances of the criminal libel law being applied. Notably, the Italian anarchist Errico Malatesta was convicted of criminal libel for denouncing the Italian state agent Ennio Belelli in 1912.
Under English common law, proving the truth of the allegation was originally a valid defence only in civil libel cases. Criminal libel was construed as an offence against the public at large based on the tendency of the libel to provoke a breach of the peace, rather than being a crime based upon the actual defamation per se; its veracity was therefore considered irrelevant. Section 6 of the Libel Act 1843 allowed the proven truth of the allegation to be used as a valid defence in criminal libel cases, but only if the defendant also demonstrated that publication was for the "public benefit".[210]
Fewer than half of U.S. states have criminal defamation laws, but the applicability of those laws is limited by the First Amendment to the U.S. Constitution, and the laws are rarely enforced.[211] There are no criminal defamation or insult laws at the federal level. On the state level, 23 states and two territories have criminal defamation laws on the books: Alabama, Florida, Idaho, Illinois, Kansas, Kentucky, Louisiana, Massachusetts, Michigan, Minnesota, Mississippi, Montana, Nevada, New Hampshire, New Mexico, North Carolina, North Dakota, Oklahoma, South Carolina, Texas, Utah, Virginia, Wisconsin, Puerto Rico and the Virgin Islands. In addition, Iowa criminalizes defamation through case law without statutorily defining it as a crime.
Noonan v. Staples[212] is sometimes cited as precedent that truth is not always a defence to libel in the U.S., but the case is actually not valid precedent on that issue because Staples did not argue First Amendment protection, which is one theory for truth as a complete defence, for its statements.[213] The court assumed in this case that the Massachusetts law was constitutional under the First Amendment without its being argued by the parties.
In response to the expansion of other jurisdictions' attempts to enforce judgments in cases of trans-border defamation, and to a rise in domestic strategic lawsuits against public participation (SLAPPs) following the rise of the internet, the federal and many state governments have adopted statutes limiting the enforceability of offshore defamation judgments and expediting the dismissal of defamation claims. American writers and publishers are shielded from the enforcement of offshore libel judgments not compliant under the SPEECH Act, which was passed by the 111th United States Congress and signed into law by President Barack Obama in 2010.[214] It is based on New York State's 2008 Libel Terrorism Protection Act (also known as "Rachel's Law", after Rachel Ehrenfeld, who initiated the state and federal laws).[215] Both the New York state law and the federal law were passed unanimously.
In March 2016, a civil action for defamation led to the imposition of a four-year prison sentence on a newspaper publisher.[216]
In 2024, Sabah al-Alwani, a judge and member of the Supreme Judicial Council, was the subject of an online defamation campaign.[217]
As of 2012, defamation, slander, insult and lese-majesty laws existed across the world. According to ARTICLE 19, 174 countries retained criminal penalties for defamation, with full decriminalization in 21 countries. The Organization for Security and Co-operation in Europe (OSCE) had an ongoing decriminalization campaign. UNESCO also provided technical assistance to governments on revising legislation, to align with international standards and best practices.[218]
The use of civil defamation increased, often in lieu of criminal cases, resulting in disproportionate fines and damages, particularly against media and journalists critical of governments. Libel tourism enabled powerful individuals to limit critical and dissenting voices by shopping around the world for the jurisdictions most likely to approve their defamation suits.[218]
As of 2011, 47% of countries had laws against blasphemy, apostasy or defamation of religion. According to the Pew Research Center, 32 countries had laws or policies prohibiting blasphemy, and 87 had defamation of religion laws.[218]
The legal liability of internet intermediaries gained increasing importance. Private companies could be held responsible for user-generated content that was made accessible through their servers or services, if it was deemed illegal or harmful. Due to uncertain takedown procedures and the lack of legal resources, intermediaries were sometimes excessively compliant with takedown notices, often outside the legal system and with little recourse for the affected content producer. Intermediaries were at times held criminally liable for content posted by a user, when others perceived that it violated privacy or defamation laws. Such cases indicated an emerging trend of preventive censorship, where companies conducted their own monitoring and filtering to avoid possible repercussions. This contributed to a process of privatized censorship, where some governments may rely on private-sector companies to regulate online content, outside of electoral accountability and without due process.[218]
Debate around defamation of religions, and how this impacts the right to free expression, continued to be an issue at a global level. In 2006, UNESCO's executive board adopted a decision on "Respect for freedom of expression and respect for sacred beliefs and values and religious and cultural symbols". In 2011, the United Nations Human Rights Council made further calls for strengthening religious tolerance and preventing hate speech. Similar resolutions were made in 2012 and 2013. In 2013, 87 governments agreed on the Rabat Plan of Action, for the prohibition of incitement to hatred.[218]
By 2013, at least 19% of the region had decriminalized defamation. In 2010, the African Commission on Human and Peoples' Rights adopted a resolution calling on African Union (AU) member countries to repeal criminal defamation or insult laws. In 2012, the Pan-African Parliament passed a resolution encouraging AU heads of state to sign the Declaration of Table Mountain, calling for the abolition of insult and criminal defamation laws; it was signed by two countries. Such laws frequently led to the arrest and imprisonment of journalists across the continent. In most cases – criminal or civil – the burden of proof continued to be on the defendant, and it was rare to have public interest recognized as a defence. Members of government continued to initiate most such cases. There was a trend towards using civil defamation in lieu of criminal defamation, but with demands for extremely high damages and the potential to bankrupt media outlets – although the courts often dismissed such cases. According to an analysis by the Pew Research Center's Forum on Religion and Public Life, laws against defamation of religion remained on the books in 13 countries (27%), four countries had laws penalizing apostasy, and two had anti-blasphemy laws.[219]
All Arab States retained criminal penalties for defamation. Truth was rarely a defence to defamation and libel charges. In 2012, Algeria and Tunisia partially decriminalized defamation by eliminating prison terms. According to an analysis by the Pew Research Center's Forum on Religion and Public Life, sixteen countries (84%) had laws penalizing blasphemy, apostasy and/or defamation of religion. Lese-majesty laws existed in some parts of the region. There were vaguely worded concepts and terms, interpreted narrowly by the judiciary. The number of bloggers imprisoned was rising. Among some Gulf States in particular, citizen journalists and social media users reporting on political matters were arrested. The charges were defamation or insult, typically with respect to heads of state. There was a trend towards trying journalists and bloggers in military courts, particularly during and following the Arab Spring, although this was not limited to countries where such uprisings occurred.[220]
The majority of countries (86%) had laws imposing criminal penalties for defamation. Six countries decriminalized defamation. Both criminal and civil defamation charges against journalists and media organizations continued. Other legal trends included using charges of terrorism, blasphemy, inciting subversion of state power, acting against the state, and conducting activities to overthrow the state. In 2011, the Pew Research Center's Forum on Religion and Public Life found that anti-blasphemy laws existed in eight countries (18%), while 15 (34%) had laws against defamation of religion.[221]
Four countries in Central and Eastern Europe fully decriminalized defamation. An additional four abolished prison sentences for defamation convictions, although the offence remained in the criminal code.[222]
At the same time, an emerging trend was the use of fines and sanctions. Civil defamation cases were increasingly used, as evidenced by the number of civil lawsuits and disproportionate fines against journalists and media critical of governments. In at least four countries, defamation laws were used by public officials, including heads of state, to restrict critical media across all platforms. Media and civil society increased pressure on authorities to stop granting public officials a higher degree of protection against defamation in the media.[222]
Blasphemy was not a widespread phenomenon in Central and Eastern Europe, where only one[clarification needed] country still had such a provision. According to the Pew Research Center's Forum on Religion & Public Life, 17 countries had laws penalizing religious hate speech.[222]
The Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights (IACHR) of the Organization of American States (OAS) recommended repealing or amending laws that criminalize desacato, defamation, slander, and libel. Some countries proposed reforming the IACHR, which could have weakened the office of the special rapporteur, but the proposal was not adopted by the OAS General Assembly.[223]
Seven countries, three of them in the Caribbean, fully or partially decriminalized defamation. Another trend was the abolition of desacato laws, which refer specifically to defamation of public officials. The OAS Special Rapporteur expressed concern over the use of terrorism or treason offences against those who criticize governments.[223]
Defamation, copyright, and political issues were identified as the principal motives for content removal.[223]
Defamation was a criminal offence in the vast majority of countries, occasionally leading to imprisonment or elevated fines. Criminal penalties for defamation remained, but there was a trend towards their repeal. Between 2007 and 2012, 23 of the 27 countries in Western Europe and North America imposed criminal penalties for various exercises of expression (including criminal libel, defamation, slander, insult, and lese-majesty laws – but excluding incitement to violence).[224]
Two countries decriminalized defamation in 2009, followed by another in 2010. In another country, there was no criminal libel at the federal level, although a minority of its states still had criminal defamation laws. In general, criminal penalties for libel were imposed rarely, with two notable exceptions.[224]
According to the Pew Research Center's Forum on Religion & Public Life, eight countries had blasphemy legislation, though these laws were used infrequently.[224]
The range of defences available to those accused of invasion of privacy or defamation expanded, with growing recognition of the public-interest value of journalism. In at least 21 countries, defences to charges of defamation included truth and public interest. This included countries that had at least one truth or public interest defence to criminal or civil defamation (including countries where the defence of truth was qualified or limited – for example, to statements of fact as opposed to opinions, or to libel as opposed to insult).[224]
Civil defamation continued, particularly regarding content related to the rich and powerful, including public officials and celebrities. There were a high number of claims, prohibitive legal costs, and disproportionate damages. This prompted a campaign against what was seen by some as plaintiff-friendly libel laws in the United Kingdom, which led to a reform of the country's defamation law, resulting in the Defamation Act 2013.[224]
Due to legal protection of speech, and practical and jurisdictional limits on the effectiveness of controls, censorship was increasingly carried out by private bodies. Privatized censorship by internet intermediaries involved: (i) a widening range of content considered harmful and justified to block or filter; (ii) inadequate due process and judicial oversight of decisions to exclude content or to conduct surveillance; and (iii) a lack of transparency regarding blocking and filtering processes (including the relationship between the state and private bodies, in the setting of filters and the exchange of personal data).[224]
As of 2017, at least 130 UNESCO member states retained criminal defamation laws. In 2017, the OSCE Representative on Freedom of the Media issued a report[3] on criminal defamation and anti-blasphemy laws among its member states, which found that defamation was criminalized in nearly three-quarters (42) of the 57 OSCE participating states. Many of the laws pertaining to defamation included specific provisions with harsher punishments for speech or publications critical of heads of state, public officials, state bodies, and the state itself. The report noted that blasphemy and religious insult laws existed in around one third of OSCE participating states; many of these combined blasphemy and/or religious insult with elements of hate speech legislation. A number of countries continued to include harsh punishments for blasphemy and religious insult.[225]
Countries in every region extended criminal defamation legislation to online content. Cybercrime and anti-terrorism laws were passed throughout the world; bloggers appeared before courts, with some serving time in prison. Technological advancements strengthened governments' abilities to monitor online content.[225]
Between 2012 and 2017, four AU member states decriminalized defamation. Other national courts defended criminal defamation's place in their constitution [sic]. Regional courts pressured countries to decriminalize defamation. The ECOWAS Court of Justice, which had jurisdiction over cases pertaining to human rights violations since 2005, set a precedent with two rulings in favour of cases challenging the criminalization of defamation.[226]
In the landmark case of Lohé Issa Konaté v. the Republic of Burkina Faso, the African Court on Human and Peoples' Rights overturned the conviction of a journalist, characterizing it as a violation of the African Charter on Human and Peoples' Rights, the International Covenant on Civil and Political Rights, and the treaty of the Economic Community of West African States (ECOWAS). The journalist had been subjected to censorship, excessive fines, and a lengthy imprisonment for defamation. Following this legally binding decision, the country in question proceeded to amend its laws and pay the journalist compensation.[225][226]
In 2016, the Constitutional Court of Zimbabwe declared the country's criminal defamation laws unconstitutional. In 2017, the High Court of Kenya declared Section 194 (criminal defamation) of the Penal Code unconstitutional.[226]
Civil society and press freedom organizations lobbied for changes to the penal codes in their respective countries – sometimes successfully. However, even in countries where libel or defamation was explicitly decriminalized, there were often other laws whose broad provisions allowed governments to imprison journalists for a wide range of reasons (cybercrime, anti-terrorism, incitement to violence, national security).[226]
The majority of countries had defamation laws that were used to charge and imprison journalists. Media outlets were suspended after publishing reports critical of the government or other political elites.[226]
Libel, defamation, and slander laws, as well as emergency laws and anti-terrorism laws, were frequently used as tools of government control over media. Emergency laws often superseded the general law. Defamation laws tended to favour those who could afford costly legal expenses.[227]
Google transparency reports[228] showed that several governments in the Arab region made requests to remove content (such as YouTube videos), based on allegations of insulting religion and defaming powerful figures.[227]
Journalists were predominantly jailed under anti-state laws, with charges ranging from spreading chaos, promoting terrorism, and inciting dissidence, to incitement against the ruling government. Charges for publishing or spreading false news were the next most frequent. Charges under other defamation or religious insult laws were laid against journalists in several cases.[227]
Most countries in South, Southeast, and East Asia had civil and/or criminal defamation laws. Various cases indicated that such laws were used by political interests and powerful elites (individuals and corporations). Cases of online defamation were on the rise.[229]
One recently enacted defamation law received condemnation, including from the United Nations. The law allowed journalists to be jailed if they were found questioning Sharia law or the affairs of the state. From 2014, criminal defamation laws were challenged, both in South and East Asian countries.[229]
Since 2014, the use of criminal defamation and insult laws increased. New legal obligations were imposed on ISPs to monitor content, as a matter of national security – particularly in the Commonwealth of Independent States (CIS) sub-region.[230]
Since 2012, more countries in the South-East Europe sub-region decriminalized defamation. Of the eight countries,[clarification needed] three repealed all general provisions on criminal defamation and insult, four[clarification needed] retained criminal defamation offences but without the possibility of imprisonment, and one[clarification needed] retained imprisonment as a possibility. Defamation of public officials, state bodies, or state institutions was criminalized in one[clarification needed] country. Other forms of criminal offences existed in some countries: insulting public officials, harming the reputation and honour of the head of state, and insulting or defaming the state.[230]
Civil laws to protect the reputation of individuals or their privacy were increasingly used. There was an increase in the number of cases where politicians turned to the courts, seeking relief for reputational injuries. Civil defamation lawsuits by politicians limited press freedom in at least one country of the CIS sub-region.[230]
There were attempts to pass legislation allowing content removal based on different claims, including defamation and hate speech. Draft bills were proposed, criminalizing online publication of content deemed hate speech, and allowing the executive to order takedowns of such content. Several states tried to pass legislation creating special criminal offences for online content that could damage the reputation and/or honour of a person. As of 2017, none of these bills had been approved.[231]
Public officials throughout the region initiated criminal proceedings against internet users, predominantly against those opposing the ruling party. Claims were based on defamation laws, including charges against memes parodying political personalities.[231]
Antigua and Barbuda (in 2015), Jamaica (in 2013), and Grenada (in 2012) abolished criminal libel. Trinidad and Tobago partially repealed criminal libel in 2014. The Dominican Republic removed prison sentences for defamation of government bodies and public officials.[231]
New cybercrime laws were passed in two Caribbean countries. In 2017, one country passed an anti-hate law that was criticized for stifling political debate.[231]
Legal developments varied across the region. While criminal defamation and insult laws were repealed in some countries, stronger defamation laws were produced or reintroduced in other countries.[232]
In common law countries, criminal defamation laws mostly fell into disuse. In contrast, most civil law countries in Western Europe retained criminal defamation laws. In several Western European countries, defamation was sanctioned more harshly if it involved a public official. In some instances, heads of state were provided more protection for their reputation, and punishments were more severe. Some governments strengthened criminal defamation laws to counter online hate speech or cyberbullying.[232]
The European Court of Human Rights had limited influence on legal reforms following the court's standards, under which (suspended) prison sentences for defamation were considered a violation of Article 10 of the European Convention on Human Rights. Other high courts had a mixed record when evaluating criminal defamation and freedom of expression.[232]
According to the 2017 OSCE report,[3] criminal defamation laws were in place in at least 21 of the 27 countries in Western Europe and North America. At least 13 states retained statutes penalizing blasphemy or religious insult.[232]
As of 2022, at least 160 countries had criminal defamation laws on the books, down from 166 in 2015. At least 57 laws and regulations across 44 countries were adopted or amended since 2016, containing vague language or disproportionate punishments, threatening online freedom of expression and press freedom.[233]
According to reports provided by Meta, Google, and Twitter, the number of content removal requests received by those platforms from court orders, law enforcement, and executive branches of governments worldwide doubled in the last five years – to a total of approximately 117,000 requests in 2020. Of these companies, only Google published data on the rationale for content removal requests made by governments; that data showed "defamation" and "privacy and security" as the leading justifications.[233]
Christianreligious texts(such as theEpistle of James– full text onWikisource),catechisms(like the one commissioned by theCouncil of Trent– see "The Eighth Commandment" from itsRoman Catechism), andpreachers(likeJean-Baptiste Massillon– see hissermontitled "On evil-speaking"), have argued against expressions (true and false) that can offend others.
Theologian and catechist Joseph Deharbe, in his interpretation of the Eighth Commandment, gives practical advice to the faithful: the commandment above all forbids giving false evidence in court. It is never lawful to tell a lie. Forbidden in general are lies, hypocrisy, detraction, calumny, slander, false suspicion, rash judgment; anything that can injure the honour or character of another. There are two exceptions: for the good of the guilty, or when necessary to prevent a greater evil; and then only with charitable intentions and without exaggeration.
TheCatholic Encyclopediahas entries for two related concepts, detraction[234]and slander.[235]
Defamation and calumny seem to be used as synonyms for slander.
Detraction is the mortal sin of damaging another's good name by revealing their faults or crimes (honestly believed real by the detractor). It is contrasted with calumny, where the assertions are knowingly false.
The degree of sinfulness depends on the harm done, based on three things:
A relatively small defect alleged against a person of eminent station (a bishop is given as an example) might be a mortal sin, while an offence of considerable magnitude (drunkenness is given as an example), attributed to a member of a social class in which such things frequently happen (a sailor is given as an example), might constitute only a venial sin.
If the victim has been publiclysentenced, or their misdeeds are alreadynotorious, it islawfulto refer to them – unless the accused havereformed, or their deeds have been forgotten. But this does not apply to particular communities (acollegeormonasteryare given as examples), where it would beunlawfulto publish the fact outside said community. But even if the sin is not public, it may be revealed for thecommon good, or for the benefit of the narrator, listener, or culprit.
The damage from failing to reveal another's sin must be balanced against the evil of defamation [sic]. No more than necessary should be exposed, andfraternal correctionis preferable.Journalistsare allowed to criticize public officials.Historiansmust be able to document the causes and connections of events, and strengthen public conscience.
Those who abet the principal's defamation are also guilty. Detractors (or their heirs) must provide restitution: they must restore the victim's fame and pay them damages. According to the text, allegations cannot be taken back, reparation methods proposed by theologians are unsatisfactory, and the only way is finding the right occasion for a favourable characterization of the defamed.
Slander is defined as attributing fault to another when the slanderer knows the accused is innocent. It combines damaging another's reputation and lying.
According to the text, theologians say that the act of lying might not be grievous in itself, but advise mentioning it inconfessionto determine reparation methods. The important act is injuring a reputation (hencemoralistsdo not consider slander distinct from detraction). The method of injury is negligible.
In a somewhat contradictory opinion, it is stated that there are circumstances where misdeeds can be lawfully exposed, but a lie is intrinsicallyeviland can never be justified.
Slander violates commutative justice, so the perpetrator must makerestitution.Atonementseems achievable byretractingthe false statement, which undoes the injury (even if this requires exposing the perpetrator as a liar). Compensation for the victim's losses may also be required.
In a 2018 academic paper,[236]the author (a law student from theInternational Islamic University of Malaysia) argued for harmonization betweenMalaysian lawsandIslamic legal principles.Article 3of the Constitution declaresIslamas thestate religion.Article 10provides for freedom of speech, with expressly permitted restrictions for defamation-related offences.
First, definitions of defamation from Malaysian and Islamic law are listed. According to the paper, definitions byMuslim scholarscan include: mislead, accuse of adultery, and embarrass or discredit the dignity or honour of another. In theQuran, many more concepts might be included. The author concludes that Islamic definitions are better for classifying defamatory actions.
Second, freedom of speech is compared with teachings ofMuhammad. Mentioned among others are:fragmentation of society,divine retributionby theangelsin theafterlife, secrecy, loyalty, and treachery; dignity and honour are again mentioned. The author concludes that freedom of speech should be practised for the sake of justice, and can be lifted if it causes discomfort or unhealthy relationships in society.
Third, Malaysian laws related to defamation are enumerated. According to the author, there were cases with exorbitantmonetary awards, interference by third parties, and selective actions againstpolitical opposition; having a negative impact on society.
Fourth, the proposal of harmonization is discussed. The author proposes amending Malaysian laws to conform with Islamic legal principles, under the supervision of a specific department. Mentioned are: Islamiccustomary law(Adat), secondary sources of Islamic law (such asUrf), and "other laws" practised by people in various countries; provided that they are in line with Islamic divine law (Maqasid). The author concludes that in the Malaysian context, this proposed harmonization would be justified by Article 3 of the Constitution (with a passing reference to the "supremacy of the Constitution", apparently guaranteed in Article 4).
Finally, the author enumerates proposed steps to bring about thislegal reform. Defamation would include:
There would be three types of punishment for defamation:
Other proposed measures include:right of reply, order ofretraction, mediation via anombudsman, empowering theHuman Rights Commission of Malaysia, finding ways for people to express their views and opinions, education.
The Jewish Encyclopedia has two articles on the topic: calumny[237] and slander.[238]
The two terms seem to be conflated. It is not clear which, if any, corresponds to harmful but true speech, and which to harmful and false speech. Combined with Wikipedia's entry on lashon hara (the terms are spelled somewhat differently), it might be deduced that:
The Wikipedia article on lashon hara equates it to detraction, and classifies slander, defamation, and calumny as the same – and equal to hotzaat shem ra.
It is described as a sin, based on both the Bible ("gossip") and rabbinic literature (leshon hara, "the evil tongue"): intentionally false accusations and also injurious gossip, both forbidden in the Torah. Of the Ten Commandments, the relevant one (in Judaism's numbering) is the ninth: Thou shalt not bear false witness against thy neighbour.
According to the article, the slanderous [sic] tongue ruins the slanderer, the listener, and the maligned. Thedivine presencewill be denied toliars,hypocrites,scoffers, and slanderers. Slander is morally equated toidolatry,adultery, andmurder.
According to the authors, somerabbissawquinsy,leprosy(related toMiriamspeaking ill of Moses),stoning, as deserved punishments. And theMidrashattributes hardships of various figures (such asJoseph,Moses,Elijah,Isaiah) to sins of the tongue.
As for legal remedies, the article refers to ethical and religious sanctions from the Bible and the Talmud, arguing that the law cannot repair subtle damage to reputation, with two exceptions: bringing an evil name upon one's wife (punished with a fine and by disallowing divorce), and perjury, which would result in the perpetrator receiving the same punishment that their false testimony would have brought upon the falsely accused.
The authors conclude that calumny was met withrighteous indignationand penal severity inJewish thought, and this was in accordance with the ethical principle of treating the honour of others as one's own.
Defined as "false and malicious defamation" (circular definition) of another's reputation and character, disgracing them in theircommunity. Here, it is distinguished fromleshon haraby being deliberately false. Punishments includefinesanddamages.
According to the authors, theLaw of Mosesprescribedflagellationand monetary compensation for ahusbandwho, without reasonable cause, questioned thevirginityof his newly married wife; and divorce was disallowed (similarly with calumny). The article notes that after the destruction of theTemple in Jerusalem, these laws prescribing fines andcapital punishmentceased.
Rabbinicalenactmentsagainst slander are described as very stringent. Abusive language might have been exempt from anylegal liability, unless it was considered slander (against both the living and the deceased). Fines andexcommunicationwere a possibility. Butfastingandapologyalso seemed to be acceptableatonements.
|
https://en.wikipedia.org/wiki/Libel
|
Any-angle path planning algorithms are pathfinding algorithms that search for a Euclidean shortest path between two points on a grid map while allowing the turns in the path to have any angle. The result is a path that cuts directly through open areas and has relatively few turns.[1] More traditional pathfinding algorithms such as A* either perform poorly or produce jagged, indirect paths.
Real-world and many game maps have open areas that are most efficiently traversed in a direct way. Traditional algorithms are ill-equipped to solve these problems:
An any-angle path planning algorithm aims to produce optimal or near-optimal solutions while taking less time than the basic visibility graph approach. Fast any-angle algorithms take roughly the same time as a grid-based solution to compute.
So far, five main any-angle path planning algorithms that are based on the heuristic search algorithmA*[3]have been developed, all of which propagate information along grid edges:
There are also A*-based algorithms distinct from the above family:
In addition, for search in high-dimensional search spaces, such as when the configuration space of the system involves many degrees of freedom that need to be considered (see Motion planning), and/or momentum needs to be considered (which can effectively double the number of dimensions of the search space; this larger space including momentum is known as the phase space), variants of the rapidly-exploring random tree (RRT)[23] have been developed that (almost surely) converge to the optimal path by finding shorter and shorter paths:
Any-angle path planning algorithms are useful for robot navigation and real-time strategy games where more optimal paths are desirable. Hybrid A*, for example, was used as an entry to a DARPA challenge.[21] The steering-aware properties of some of these algorithms also translate to autonomous cars.
|
https://en.wikipedia.org/wiki/Any-angle_path_planning
|
Breadth-first search(BFS) is analgorithmfor searching atreedata structure for a node that satisfies a given property. It starts at thetree rootand explores all nodes at the presentdepthprior to moving on to the nodes at the next depth level. Extra memory, usually aqueue, is needed to keep track of the child nodes that were encountered but not yet explored.
For example, in achess endgame, achess enginemay build thegame treefrom the current position by applying all possible moves and use breadth-first search to find a win position for White. Implicit trees (such as game trees or other problem-solving trees) may be of infinite size; breadth-first search is guaranteed to find a solution node[1]if one exists.
In contrast, (plain)depth-first search(DFS), which explores the node branch as far as possible before backtracking and expanding other nodes,[2]may get lost in an infinite branch and never make it to the solution node.Iterative deepening depth-first searchavoids the latter drawback at the price of exploring the tree's top parts over and over again. On the other hand, both depth-first algorithms typically require far less extra memory than breadth-first search.[3]
Breadth-first search can be generalized to bothundirected graphsanddirected graphswith a given start node (sometimes referred to as a 'search key').[4]Instate space searchinartificial intelligence, repeated searches of vertices are often allowed, while in theoretical analysis of algorithms based on breadth-first search, precautions are typically taken to prevent repetitions.
BFS and its application in findingconnected componentsof graphs were invented in 1945 byKonrad Zuse, in his (rejected) Ph.D. thesis on thePlankalkülprogramming language, but this was not published until 1972.[5]It was reinvented in 1959 byEdward F. Moore, who used it to find the shortest path out of a maze,[6][7]and later developed by C. Y. Lee into awire routingalgorithm (published in 1961).[8]
Input: A graphGand a starting vertexrootofG
Output: Goal state. Theparentlinks trace the shortest path back toroot[9]
This non-recursive implementation is similar to the non-recursive implementation ofdepth-first search, but differs from it in two ways:
IfGis atree, replacing the queue of this breadth-first search algorithm with a stack will yield a depth-first search algorithm. For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one.[10]
TheQqueue contains the frontier along which the algorithm is currently searching.
Nodes can be labelled as explored by storing them in a set, or by an attribute on each node, depending on the implementation.
Note that the wordnodeis usually interchangeable with the wordvertex.
Theparentattribute of each node is useful for accessing the nodes in a shortest path, for example by backtracking from the destination node up to the starting node, once the BFS has been run, and the predecessors nodes have been set.
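The pseudocode referenced above did not survive extraction into this text. As a stand-in, the following is a minimal Python sketch of the procedure just described, assuming the graph is given as an adjacency mapping from each vertex to a list of its neighbors; the parent mapping doubles as the set of explored nodes.

from collections import deque

def bfs(graph, root, goal):
    """Breadth-first search over an adjacency mapping.

    Returns the parent links needed to trace a shortest path
    back to root, or None if goal is unreachable."""
    parent = {root: None}          # also serves as the 'explored' set
    queue = deque([root])          # Q holds the current search frontier
    while queue:
        node = queue.popleft()     # expand the shallowest unexplored node
        if node == goal:
            return parent
        for neighbor in graph[node]:
            if neighbor not in parent:   # label as explored exactly once
                parent[neighbor] = node
                queue.append(neighbor)
    return None

To recover a shortest path once bfs returns, follow the parent links from the goal back to the root and reverse the resulting list.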
Breadth-first search produces a so-called breadth-first tree, as can be seen in the following example.
The following is an example of the breadth-first tree obtained by running a BFS onGermancities starting fromFrankfurt:
The time complexity can be expressed as O(|V| + |E|), since every vertex and every edge will be explored in the worst case; |V| is the number of vertices and |E| is the number of edges in the graph.
Note that O(|E|) may vary between O(1) and O(|V|^2), depending on how sparse the input graph is.[11]
When the number of vertices in the graph is known ahead of time, and additional data structures are used to determine which vertices have already been added to the queue, the space complexity can be expressed as O(|V|), where |V| is the number of vertices. This is in addition to the space required for the graph itself, which may vary depending on the graph representation used by an implementation of the algorithm.
When working with graphs that are too large to store explicitly (or infinite), it is more practical to describe the complexity of breadth-first search in different terms: to find the nodes that are at distance d from the start node (measured in number of edge traversals), BFS takes O(b^(d+1)) time and memory, where b is the "branching factor" of the graph (the average out-degree).[12]: 81
In the analysis of algorithms, the input to breadth-first search is assumed to be a finite graph, represented as anadjacency list,adjacency matrix, or similar representation. However, in the application of graph traversal methods inartificial intelligencethe input may be animplicit representationof an infinite graph. In this context, a search method is described as being complete if it is guaranteed to find a goal state if one exists. Breadth-first search is complete, but depth-first search is not. When applied to infinite graphs represented implicitly, breadth-first search will eventually find the goal state, but depth first search may get lost in parts of the graph that have no goal state and never return.[13]
An enumeration of the vertices of a graph is said to be a BFS ordering if it is a possible output of the application of BFS to this graph.
Let G = (V, E) be a graph with n vertices. Recall that N(v) is the set of neighbors of v.
Let σ = (v_1, …, v_m) be a list of distinct elements of V. For v ∈ V \ {v_1, …, v_m}, let ν_σ(v) be the least i such that v_i is a neighbor of v, if such an i exists, and ∞ otherwise.
Let σ = (v_1, …, v_n) be an enumeration of the vertices of V.
The enumeration σ is said to be a BFS ordering (with source v_1) if, for all 1 < i ≤ n, v_i is the vertex w ∈ V \ {v_1, …, v_(i−1)} such that ν_(v_1, …, v_(i−1))(w) is minimal. Equivalently, σ is a BFS ordering if, for all 1 ≤ i < j < k ≤ n with v_i ∈ N(v_k) \ N(v_j), there exists a neighbor v_m of v_j such that m < i.
Breadth-first search can be used to solve many problems in graph theory, for example:
|
https://en.wikipedia.org/wiki/Breadth-first_search
|
Depth-first search(DFS) is analgorithmfor traversing or searchingtreeorgraphdata structures. The algorithm starts at theroot node(selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. Extra memory, usually astack, is needed to keep track of the nodes discovered so far along a specified branch which helps in backtracking of the graph.
A version of depth-first search was investigated in the 19th century by French mathematicianCharles Pierre Trémaux[1]as a strategy forsolving mazes.[2][3]
The time and space analysis of DFS differs according to its application area. In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time O(|V| + |E|),[4] where |V| is the number of vertices and |E| the number of edges. This is linear in the size of the graph. In these applications it also uses space O(|V|) in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. Thus, in this setting, the time and space bounds are the same as for breadth-first search and the choice of which of these two algorithms to use depends less on their complexity and more on the different properties of the vertex orderings the two algorithms produce.
For applications of DFS in relation to specific domains, such as searching for solutions inartificial intelligenceor web-crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer fromnon-termination). In such cases, search is only performed to alimited depth; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices. When search is performed to a limited depth, the time is still linear in terms of the number of expanded vertices and edges (although this number is not the same as the size of the entire graph because some vertices may be searched more than once and others not at all) but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result, is much smaller than the space needed for searching to the same depth using breadth-first search. For such applications, DFS also lends itself much better toheuristicmethods for choosing a likely-looking branch. When an appropriate depth limit is not known a priori,iterative deepening depth-first searchapplies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence mode of analysis, with abranching factorgreater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known due to the geometric growth of the number of nodes per level.
DFS may also be used to collect asampleof graph nodes. However, incomplete DFS, similarly to incompleteBFS, isbiasedtowards nodes of highdegree.
For the following graph:
a depth-first search starting at the node A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The edges traversed in this search form aTrémaux tree, a structure with important applications ingraph theory.
Performing the same search without remembering previously visited nodes results in visiting the nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G.
Iterative deepeningis one technique to avoid this infinite loop and would reach all nodes.
The result of a depth-first search of a graph can be conveniently described in terms of aspanning treeof the vertices reached during the search. Based on this spanning tree, the edges of the original graph can be divided into three classes:forward edges, which point from a node of the tree to one of its descendants,back edges, which point from a node to one of its ancestors, andcross edges, which do neither. Sometimestree edges, edges which belong to the spanning tree itself, are classified separately from forward edges. If the original graph is undirected then all of its edges are tree edges or back edges.
It is also possible to use depth-first search to linearly order the vertices of a graph or tree. There are four possible ways of doing this:
Forbinary treesthere is additionallyin-orderingandreverse in-ordering.
For example, when searching the directed graph below beginning at node A, the sequence of traversals is either A B D B A C A or A C D C A B A (choosing to first visit B or C from A is up to the algorithm). Note that repeat visits in the form of backtracking to a node, to check if it has still unvisited neighbors, are included here (even if it is found to have none). Thus the possible preorderings are A B D C and A C D B, while the possible postorderings are D B C A and D C B A, and the possible reverse postorderings are A C B D and A B C D.
Reverse postordering produces atopological sortingof anydirected acyclic graph. This ordering is also useful incontrol-flow analysisas it often represents a natural linearization of the control flows. The graph above might represent the flow of control in the code fragment below, and it is natural to consider this code in the order A B C D or A C B D but not natural to use the order A B D C or A C D B.
A recursive implementation of DFS:[5]
A non-recursive implementation of DFS with worst-case space complexity O(|E|), with the possibility of duplicate vertices on the stack:[6]
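Neither referenced implementation is reproduced in this text; the following Python sketch gives both variants under the same adjacency-mapping assumption as in the breadth-first search section. Consistent with the remark below, the iterative version visits the last listed neighbor first and may hold duplicate vertices on its stack.

def dfs_recursive(graph, vertex, visited=None):
    """Recursive DFS; visits the first listed neighbor first."""
    if visited is None:
        visited = set()
    visited.add(vertex)
    for neighbor in graph[vertex]:
        if neighbor not in visited:
            dfs_recursive(graph, neighbor, visited)
    return visited

def dfs_iterative(graph, root):
    """Iterative DFS; duplicate vertices may sit on the stack,
    so the stack can grow to O(|E|) in the worst case."""
    visited = set()
    stack = [root]
    while stack:
        vertex = stack.pop()              # last pushed neighbor is visited first
        if vertex not in visited:
            visited.add(vertex)
            stack.extend(graph[vertex])   # pushes neighbors in list order
    return visited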
These two variations of DFS visit the neighbors of each vertex in the opposite order from each other: the first neighbor ofvvisited by the recursive variation is the first one in the list of adjacent edges, while in the iterative variation the first visited neighbor is the last one in the list of adjacent edges. The recursive implementation will visit the nodes from the example graph in the following order: A, B, D, F, E, C, G. The non-recursive implementation will visit the nodes as: A, E, F, B, D, C, G.
The non-recursive implementation is similar tobreadth-first searchbut differs from it in two ways:
IfGis atree, replacing the queue of the breadth-first search algorithm with a stack will yield a depth-first search algorithm. For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one.[7]
Another possible implementation of iterative depth-first search uses a stack ofiteratorsof the list of neighbors of a node, instead of a stack of nodes. This yields the same traversal as recursive DFS.[8]
Algorithms that use depth-first search as a building block include:
The computational complexity of DFS was investigated by John Reif. More precisely, given a graph G, let O = (v_1, …, v_n) be the ordering computed by the standard recursive DFS algorithm. This ordering is called the lexicographic depth-first search ordering. John Reif considered the complexity of computing the lexicographic depth-first search ordering, given a graph and a source. A decision version of the problem (testing whether some vertex u occurs before some vertex v in this order) is P-complete,[12] meaning that it is "a nightmare for parallel processing".[13]: 189
A depth-first search ordering (not necessarily the lexicographic one), can be computed by a randomized parallel algorithm in the complexity classRNC.[14]As of 1997, it remained unknown whether a depth-first traversal could be constructed by a deterministic parallel algorithm, in the complexity classNC.[15]
|
https://en.wikipedia.org/wiki/Depth-first_search
|
Dijkstra's algorithm(/ˈdaɪkstrəz/DYKE-strəz) is analgorithmfor finding theshortest pathsbetweennodesin a weightedgraph, which may represent, for example, aroad network. It was conceived bycomputer scientistEdsger W. Dijkstrain 1956 and published three years later.[4][5][6]
Dijkstra's algorithm finds the shortest path from a given source node to every other node.[7]: 196–206It can be used to find the shortest path to a specific destination node, by terminating the algorithm after determining the shortest path to the destination node. For example, if the nodes of the graph represent cities, and the costs of edges represent the distances between pairs of cities connected by a direct road, then Dijkstra's algorithm can be used to find the shortest route between one city and all other cities. A common application of shortest path algorithms is networkrouting protocols, most notablyIS-IS(Intermediate System to Intermediate System) andOSPF(Open Shortest Path First). It is also employed as asubroutinein algorithms such asJohnson's algorithm.
The algorithm uses a min-priority queue data structure for selecting the shortest paths known so far. Before more advanced priority queue structures were discovered, Dijkstra's original algorithm ran in Θ(|V|^2) time, where |V| is the number of nodes.[8][9] Fredman & Tarjan 1984 proposed a Fibonacci heap priority queue to optimize the running time complexity to Θ(|E| + |V| log |V|). This is asymptotically the fastest known single-source shortest-path algorithm for arbitrary directed graphs with unbounded non-negative weights. However, specialized cases (such as bounded/integer weights, directed acyclic graphs, etc.) can be improved further. If preprocessing is allowed, algorithms such as contraction hierarchies can be up to seven orders of magnitude faster.
Dijkstra's algorithm is commonly used on graphs where the edge weights are positive integers or real numbers. It can be generalized to any graph where the edge weights arepartially ordered, provided the subsequent labels (a subsequent label is produced when traversing an edge) aremonotonicallynon-decreasing.[10][11]
In many fields, particularlyartificial intelligence, Dijkstra's algorithm or a variant offers auniform cost searchand is formulated as an instance of the more general idea ofbest-first search.[12]
What is the shortest way to travel fromRotterdamtoGroningen, in general: from given city to given city.It is the algorithm for the shortest path, which I designed in about twenty minutes. One morning I was shopping inAmsterdamwith my young fiancée, and tired, we sat down on the café terrace to drink a cup of coffee and I was just thinking about whether I could do this, and I then designed the algorithm for the shortest path. As I said, it was a twenty-minute invention. In fact, it was published in '59, three years later. The publication is still readable, it is, in fact, quite nice. One of the reasons that it is so nice was that I designed it without pencil and paper. I learned later that one of the advantages of designing without pencil and paper is that you are almost forced to avoid all avoidable complexities. Eventually, that algorithm became to my great amazement, one of the cornerstones of my fame.
Dijkstra thought about the shortest path problem while working as a programmer at theMathematical Center in Amsterdamin 1956. He wanted to demonstrate the capabilities of the new ARMAC computer.[13]His objective was to choose a problem and a computer solution that non-computing people could understand. He designed the shortest path algorithm and later implemented it for ARMAC for a slightly simplified transportation map of 64 cities in the Netherlands (he limited it to 64, so that 6 bits would be sufficient to encode the city number).[5]A year later, he came across another problem advanced by hardware engineers working on the institute's next computer: minimize the amount of wire needed to connect the pins on the machine's back panel. As a solution, he re-discoveredPrim's minimal spanning tree algorithm(known earlier toJarník, and also rediscovered byPrim).[14][15]Dijkstra published the algorithm in 1959, two years after Prim and 29 years after Jarník.[16][17]
The algorithm requires a starting node, and computes the shortest distance from that starting node to each other node. Dijkstra's algorithm starts with infinite distances and tries to improve them step by step:
The shortest path between twointersectionson a city map can be found by this algorithm using pencil and paper. Every intersection is listed on a separate line: one is the starting point and is labeled (given a distance of) 0. Every other intersection is initially labeled with a distance of infinity. This is done to note that no path to these intersections has yet been established. At each iteration one intersection becomes the current intersection. For the first iteration, this is the starting point.
From the current intersection, the distance to everyneighbor(directly-connected) intersection is assessed by summing the label (value) of the current intersection and the distance to the neighbor and thenrelabelingthe neighbor with the lesser of that sum and the neighbor's existing label. I.e., the neighbor is relabeled if the path to it through the current intersection is shorter than previously assessed paths. If so, mark the road to the neighbor with an arrow pointing to it, and erase any other arrow that points to it. After the distances to each of the current intersection's neighbors have been assessed, the current intersection is marked as visited. The unvisited intersection with the smallest label becomes the current intersection and the process repeats until all nodes with labels less than the destination's label have been visited.
Once no unvisited nodes remain with a label smaller than the destination's label, the remaining arrows show the shortest path.
In the following pseudocode, dist is an array that contains the current distances from the source to other vertices, i.e. dist[u] is the current distance from the source to the vertex u. The prev array contains pointers to previous-hop nodes on the shortest path from source to the given vertex (equivalently, it is the next-hop on the path from the given vertex to the source). The code u ← vertex in Q with min dist[u] searches for the vertex u in the vertex set Q that has the least dist[u] value. Graph.Edges(u, v) returns the length of the edge joining (i.e. the distance between) the two neighbor-nodes u and v. The variable alt on line 14 is the length of the path from the source node to the neighbor node v if it were to go through u. If this path is shorter than the current shortest path recorded for v, then the distance of v is updated to alt.[7]
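The pseudocode block itself is missing here; the sketch below is a minimal Python rendering of that description, with a hypothetical mapping length[u][v] standing in for Graph.Edges(u, v) and a plain linear scan implementing u ← vertex in Q with min dist[u].

import math

def dijkstra(graph, length, source):
    """graph: vertex -> iterable of neighbors; length[u][v]: edge length.

    Returns dist (shortest distances) and prev (previous-hop links)."""
    dist = {v: math.inf for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    Q = set(graph)                         # unvisited vertex set

    while Q:
        u = min(Q, key=lambda v: dist[v])  # vertex in Q with min dist[u]
        Q.remove(u)                        # ('line 10': stop here if u == target)
        for v in graph[u]:
            if v in Q:
                alt = dist[u] + length[u][v]   # 'alt' as on line 14
                if alt < dist[v]:              # shorter path to v found via u
                    dist[v] = alt
                    prev[v] = u
    return dist, prev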
To find the shortest path between vertices source and target, the search terminates after line 10 if u = target. The shortest path from source to target can be obtained by reverse iteration:
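The reverse-iteration pseudocode is likewise missing; the following is a sketch consistent with the description, using the prev mapping produced above.

def reconstruct_path(prev, source, target):
    """Walk prev links backwards from target, building sequence S."""
    S = []
    u = target
    if prev.get(u) is not None or u == source:   # proceed only if reachable
        while u is not None:
            S.insert(0, u)                       # insert u at the front of S
            u = prev.get(u)                      # step to the previous hop
    return S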
Now sequence S is the list of vertices constituting one of the shortest paths from source to target, or the empty sequence if no path exists.
A more general problem is to find all the shortest paths between source and target (there might be several of the same length). Then instead of storing only a single node in each entry of prev[], all nodes satisfying the relaxation condition can be stored. For example, if both r and source connect to target and they lie on different shortest paths through target (because the edge cost is the same in both cases), then both r and source are added to prev[target]. When the algorithm completes, the prev[] data structure describes a graph that is a subset of the original graph with some edges removed. Its key property is that if the algorithm was run with some starting node, then every path from that node to any other node in the new graph is a shortest path between those nodes in the original graph, and all paths of that length from the original graph are present in the new graph. Then to actually find all these shortest paths between two given nodes, a path finding algorithm on the new graph, such as depth-first search, would work.
A min-priority queue is an abstract data type that provides 3 basic operations: add_with_priority(), decrease_priority() and extract_min(). As mentioned earlier, using such a data structure can lead to faster computing times than using a basic queue. Notably, Fibonacci heap[19] or Brodal queue offer optimal implementations for those 3 operations. As the algorithm is slightly different in appearance, it is mentioned here, in pseudocode as well:
Instead of filling the priority queue with all nodes in the initialization phase, it is possible to initialize it to contain only source; then, inside the if alt < dist[v] block, the decrease_priority() becomes an add_with_priority() operation.[7]: 198
Yet another alternative is to add nodes unconditionally to the priority queue and to instead check after extraction (u ← Q.extract_min()) that it isn't revisiting, or that no shorter connection was found yet in the if alt < dist[v] block. This can be done by additionally extracting the associated priority p from the queue and only processing further if p == dist[u] inside the while Q is not empty loop.[20]
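A sketch combining both alternatives just described, using Python's heapq module (which provides no decrease_priority(), making the lazy-deletion check necessary): the queue starts with only the source, nodes are pushed unconditionally, and stale entries are skipped by comparing the extracted priority p with dist[u].

import heapq, math

def dijkstra_lazy(graph, length, source):
    """Priority-queue Dijkstra without decrease-key: nodes are pushed
    unconditionally and stale entries are discarded after extraction."""
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]                 # initialized with only the source
    while pq:
        p, u = heapq.heappop(pq)       # extract_min()
        if p != dist.get(u, math.inf):
            continue                   # stale entry: a shorter path was found
        for v in graph[u]:
            alt = p + length[u][v]
            if alt < dist.get(v, math.inf):
                dist[v] = alt
                prev[v] = u
                heapq.heappush(pq, (alt, v))   # add_with_priority()
    return dist, prev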
These alternatives can use entirely array-based priority queues without decrease-key functionality, which have been found to achieve even faster computing times in practice. However, the difference in performance was found to be narrower for denser graphs.[21]
To prove thecorrectnessof Dijkstra's algorithm,mathematical inductioncan be used on the number of visited nodes.[22]
Invariant hypothesis: For each visited node v, dist[v] is the shortest distance from source to v, and for each unvisited node u, dist[u] is the shortest distance from source to u when traveling via visited nodes only, or infinity if no such path exists. (Note: we do not assume dist[u] is the actual shortest distance for unvisited nodes, while dist[v] is the actual shortest distance.)
The base case is when there is just one visited node,source. Its distance is defined to be zero, which is the shortest distance, since negative weights are not allowed. Hence, the hypothesis holds.
Assuming that the hypothesis holds for k visited nodes, to show it holds for k+1 nodes, let u be the next visited node, i.e. the node with minimum dist[u]. The claim is that dist[u] is the shortest distance from source to u.
The proof is by contradiction. If a shorter path were available, then this shorter path either contains another unvisited node or not.
For all other visited nodes v, dist[v] is already known to be the shortest distance from source, because of the inductive hypothesis, and these values are unchanged.
After processing u, it is still true that for each unvisited node w, dist[w] is the shortest distance from source to w using visited nodes only. Any shorter path that did not use u would already have been found, and if a shorter path used u it would have been updated when processing u.
After all nodes are visited, the shortest path fromsourceto any nodevconsists only of visited nodes. Therefore,dist[v]is the shortest distance.
Bounds of the running time of Dijkstra's algorithm on a graph with edges E and vertices V can be expressed as a function of the number of edges, denoted |E|, and the number of vertices, denoted |V|, using big-O notation. The complexity bound depends mainly on the data structure used to represent the set Q. In the following, upper bounds can be simplified because |E| is O(|V|^2) for any simple graph, but that simplification disregards the fact that in some problems, other upper bounds on |E| may hold.
For any data structure for the vertex set Q, the running time is Θ(|E| · T_dk + |V| · T_em),[2]
where T_dk and T_em are the complexities of the decrease-key and extract-minimum operations in Q, respectively.
The simplest version of Dijkstra's algorithm stores the vertex set Q as a linked list or array, and edges as an adjacency list or matrix. In this case, extract-minimum is simply a linear search through all vertices in Q, so the running time is Θ(|E| + |V|^2) = Θ(|V|^2).
For sparse graphs, that is, graphs with far fewer than |V|^2 edges, Dijkstra's algorithm can be implemented more efficiently by storing the graph in the form of adjacency lists and using a self-balancing binary search tree, binary heap, pairing heap, Fibonacci heap or a priority heap as a priority queue to implement extracting minimum efficiently. To perform decrease-key steps in a binary heap efficiently, it is necessary to use an auxiliary data structure that maps each vertex to its position in the heap, and to update this structure as the priority queue Q changes. With a self-balancing binary search tree or binary heap, the algorithm requires Θ((|E| + |V|) log |V|) time in the worst case; for connected graphs this time bound can be simplified to Θ(|E| log |V|). The Fibonacci heap improves this to Θ(|E| + |V| log |V|).
When using binary heaps, the average case time complexity is lower than the worst-case: assuming edge costs are drawn independently from a common probability distribution, the expected number of decrease-key operations is bounded by Θ(|V| log(|E|/|V|)), giving a total running time of O(|E| + |V| log(|E|/|V|) log |V|).[7]: 199–200
In common presentations of Dijkstra's algorithm, initially all nodes are entered into the priority queue. This is, however, not necessary: the algorithm can start with a priority queue that contains only one item, and insert new items as they are discovered (instead of doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise insert it).[7]: 198This variant has the same worst-case bounds as the common variant, but maintains a smaller priority queue in practice, speeding up queue operations.[12]
Moreover, not inserting all nodes in a graph makes it possible to extend the algorithm to find the shortest path from a single source to the closest of a set of target nodes on infinite graphs or those too large to represent in memory. The resulting algorithm is calleduniform-cost search(UCS) in the artificial intelligence literature[12][23][24]and can be expressed in pseudocode as
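The pseudocode is not reproduced here; the following is a hedged Python sketch of uniform-cost search under the assumption that a hypothetical neighbors(u) lazily yields (vertex, edge-cost) pairs, so the graph never needs to be materialized, and that the goal test is applied on expansion rather than on generation.

import heapq

def uniform_cost_search(start, is_goal, neighbors):
    """Expands nodes in order of path cost; returns (cost, node) of the
    cheapest goal, or None. Testing the goal on expansion, not when a
    node is first generated, is what guarantees optimality."""
    frontier = [(0, start)]
    best = {start: 0}
    while frontier:
        cost, u = heapq.heappop(frontier)
        if cost > best.get(u, float("inf")):
            continue                        # stale entry
        if is_goal(u):
            return cost, u
        for v, w in neighbors(u):           # neighbors generated on demand
            alt = cost + w
            if alt < best.get(v, float("inf")):
                best[v] = alt
                heapq.heappush(frontier, (alt, v))
    return None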
Its complexity can be expressed in an alternative way for very large graphs: when C* is the length of the shortest path from the start node to any node satisfying the "goal" predicate, each edge has cost at least ε, and the number of neighbors per node is bounded by b, then the algorithm's worst-case time and space complexity are both in O(b^(1+⌊C*/ε⌋)).[23]
Further optimizations for the single-target case include bidirectional variants, goal-directed variants such as the A* algorithm (see § Related problems and algorithms), graph pruning to determine which nodes are likely to form the middle segment of shortest paths (reach-based routing), and hierarchical decompositions of the input graph that reduce s–t routing to connecting s and t to their respective "transit nodes" followed by shortest-path computation between these transit nodes using a "highway".[25] Combinations of such techniques may be needed for optimal practical performance on specific problems.[26]
As well as simply computing distances and paths, Dijkstra's algorithm can be used to sort vertices by their distances from a given starting vertex.
In 2023, Haeupler, Rozhoň, Tětek, Hladík, andTarjan(one of the inventors of the 1984 heap), proved that, for this sorting problem on a positively-weighted directed graph, a version of Dijkstra's algorithm with a special heap data structure has a runtime and number of comparisons that is within a constant factor of optimal amongcomparison-basedalgorithms for the same sorting problem on the same graph and starting vertex but with variable edge weights. To achieve this, they use a comparison-based heap whose cost of returning/removing the minimum element from the heap is logarithmic in the number of elements inserted after it rather than in the number of elements in the heap.[27][28]
When arc weights are small integers (bounded by a parameter C), specialized queues can be used for increased speed. The first algorithm of this type was Dial's algorithm[29] for graphs with positive integer edge weights, which uses a bucket queue to obtain a running time O(|E| + |V|C). The use of a Van Emde Boas tree as the priority queue brings the complexity to O(|E| + |V| log C / log log |V|C).[30] Another interesting variant based on a combination of a new radix heap and the well-known Fibonacci heap runs in time O(|E| + |V|√(log C)).[30] Finally, the best algorithms in this special case run in O(|E| log log |V|)[31] time and O(|E| + |V| min{(log |V|)^(1/3+ε), (log C)^(1/4+ε)}) time.[32]
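A minimal sketch of Dial's bucket-queue idea under the stated assumptions (positive integer weights bounded by C; graph and length as in the earlier sketches); since every shortest distance is at most C·(|V| − 1), that many buckets suffice.

def dial(graph, length, source, C):
    """Dial's algorithm sketch: Dijkstra with a bucket queue, for
    positive integer edge weights bounded by C; runs in O(|E| + |V|C)."""
    INF = float("inf")
    maxdist = C * (len(graph) - 1)     # no shortest path can be longer
    buckets = [[] for _ in range(maxdist + 1)]
    dist = {v: INF for v in graph}
    dist[source] = 0
    buckets[0].append(source)
    for d in range(maxdist + 1):       # scan buckets in increasing distance
        for u in buckets[d]:
            if d != dist[u]:
                continue               # stale entry, vertex already settled
            for v in graph[u]:
                alt = d + length[u][v]
                if alt < dist[v] and alt <= maxdist:
                    dist[v] = alt      # tentative distance improves
                    buckets[alt].append(v)
    return dist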
Dijkstra's original algorithm can be extended with modifications. For example, sometimes it is desirable to present solutions which are less than mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal solution is first calculated. A single edge appearing in the optimal solution is removed from the graph, and the optimum solution to this new graph is calculated. Each edge of the original solution is suppressed in turn and a new shortest-path calculated. The secondary solutions are then ranked and presented after the first optimal solution.
Dijkstra's algorithm is usually the working principle behindlink-state routing protocols.OSPFandIS-ISare the most common.
Unlike Dijkstra's algorithm, theBellman–Ford algorithmcan be used on graphs with negative edge weights, as long as the graph contains nonegative cyclereachable from the source vertexs. The presence of such cycles means that no shortest path can be found, since the label becomes lower each time the cycle is traversed. (This statement assumes that a "path" is allowed to repeat vertices. Ingraph theorythat is normally not allowed. Intheoretical computer scienceit often is allowed.) It is possible to adapt Dijkstra's algorithm to handle negative weights by combining it with the Bellman-Ford algorithm (to remove negative edges and detect negative cycles):Johnson's algorithm.
TheA* algorithmis a generalization of Dijkstra's algorithm that reduces the size of the subgraph that must be explored, if additional information is available that provides a lower bound on the distance to the target.
The process that underlies Dijkstra's algorithm is similar to thegreedyprocess used inPrim's algorithm. Prim's purpose is to find aminimum spanning treethat connects all nodes in the graph; Dijkstra is concerned with only two nodes. Prim's does not evaluate the total weight of the path from the starting node, only the individual edges.
Breadth-first searchcan be viewed as a special-case of Dijkstra's algorithm on unweighted graphs, where the priority queue degenerates into aFIFOqueue.
Thefast marching methodcan be viewed as a continuous version of Dijkstra's algorithm which computes the geodesic distance on a triangle mesh.
From adynamic programmingpoint of view, Dijkstra's algorithm is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by theReachingmethod.[33][34][35]
In fact, Dijkstra's explanation of the logic behind the algorithm:[36]
Problem 2.Find the path of minimum total length between two given nodesPandQ.
We use the fact that, ifRis a node on the minimal path fromPtoQ, knowledge of the latter implies the knowledge of the minimal path fromPtoR.
is a paraphrasing ofBellman'sPrinciple of Optimalityin the context of the shortest path problem.
|
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
|
This is alist of operator splitting topics.
|
https://en.wikipedia.org/wiki/List_of_operator_splitting_topics
|
In mathematics, especially linear algebra, an M-matrix is a matrix whose off-diagonal entries are less than or equal to zero (i.e., it is a Z-matrix) and whose eigenvalues have nonnegative real parts. The set of non-singular M-matrices is a subset of the class of P-matrices, and also of the class of inverse-positive matrices (i.e. matrices with inverses belonging to the class of positive matrices).[1] The name M-matrix was seemingly originally chosen by Alexander Ostrowski in reference to Hermann Minkowski, who proved that if a Z-matrix has all of its row sums positive, then the determinant of that matrix is positive.[2]
An M-matrix is commonly defined as follows:
Definition: Let A be an n × n real Z-matrix. That is, A = (a_ij) where a_ij ≤ 0 for all i ≠ j, 1 ≤ i, j ≤ n. Then matrix A is also an M-matrix if it can be expressed in the form A = sI − B, where B = (b_ij) with b_ij ≥ 0 for all 1 ≤ i, j ≤ n, where s is at least as large as the maximum of the moduli of the eigenvalues of B, and I is an identity matrix.
For the non-singularity of A, according to the Perron–Frobenius theorem, it must be the case that s > ρ(B). Also, for a non-singular M-matrix, the diagonal elements a_ii of A must be positive. Here we will further characterize only the class of non-singular M-matrices.
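A small numerical illustration of this characterization (a sketch using NumPy; the choice s = max_i a_ii is the smallest s making B = sI − A nonnegative for a Z-matrix, and the tolerance handling is deliberately simplistic):

import numpy as np

def is_nonsingular_m_matrix(A, tol=1e-12):
    """Check the definition: A = sI - B with B >= 0 and s > rho(B)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Z-matrix test: off-diagonal entries must be <= 0
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):
        return False
    s = np.max(np.diag(A))            # smallest s giving B = sI - A >= 0
    B = s * np.eye(n) - A             # nonnegative by construction here
    rho = np.max(np.abs(np.linalg.eigvals(B)))
    return s > rho + tol              # s > rho(B): nonsingular M-matrix

# Example: the 1-D discrete Laplacian is a nonsingular M-matrix
A = np.array([[ 2, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  2]])
print(is_nonsingular_m_matrix(A))     # True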
Many statements that are equivalent to this definition of non-singular M-matrices are known, and any one of these statements can serve as a starting definition of a non-singular M-matrix.[3] For example, Plemmons lists 40 such equivalences.[4] These characterizations have been categorized by Plemmons in terms of their relations to the properties of: (1) positivity of principal minors, (2) inverse-positivity and splittings, (3) stability, and (4) semipositivity and diagonal dominance. It makes sense to categorize the properties in this way because the statements within a particular group are related to each other even when matrix A is an arbitrary matrix, and not necessarily a Z-matrix. Here we mention a few characterizations from each category.
Below, ≥ denotes the element-wise order (not the usual positive semidefinite order on matrices). That is, for any real matrices A, B of size m × n, we write A ≥ B (or A > B) if a_ij ≥ b_ij (or a_ij > b_ij) for all i, j.
Let A be an n × n real Z-matrix; then the following statements are equivalent to A being a non-singular M-matrix:
Positivity of principal minors
Inverse-positivity and splittings
Stability
Semipositivity and diagonal dominance
The primary contributions to M-matrix theory have mainly come from mathematicians and economists. M-matrices are used in mathematics to establish bounds on eigenvalues and convergence criteria for iterative methods for the solution of large sparse systems of linear equations. M-matrices arise naturally in some discretizations of differential operators, such as the Laplacian, and as such are well-studied in scientific computing. M-matrices also occur in the study of solutions to the linear complementarity problem. Linear complementarity problems arise in linear and quadratic programming, computational mechanics, and in the problem of finding the equilibrium point of a bimatrix game. Lastly, M-matrices occur in the study of finite Markov chains in probability theory and operations research, for example in queuing theory. Meanwhile, economists have studied M-matrices in connection with gross substitutability, stability of a general equilibrium, and Leontief's input–output analysis in economic systems. The condition of positivity of all principal minors is also known as the Hawkins–Simon condition in the economic literature.[5] In engineering, M-matrices also occur in problems of Lyapunov stability and feedback control in control theory, and are related to Hurwitz matrices. In computational biology, M-matrices occur in the study of population dynamics.
|
https://en.wikipedia.org/wiki/M-matrix
|
In mathematics, particularly matrix theory, a Stieltjes matrix, named after Thomas Joannes Stieltjes, is a real symmetric positive definite matrix with nonpositive off-diagonal entries. A Stieltjes matrix is necessarily an M-matrix. Every n × n Stieltjes matrix is invertible, and its inverse is a nonsingular symmetric nonnegative matrix, though the converse of this statement is not true in general for n > 2.
From the above definition, a Stieltjes matrix is a symmetric invertibleZ-matrixwhose eigenvalues have positive real parts. As it is a Z-matrix, its off-diagonal entries are less than or equal to zero.
|
https://en.wikipedia.org/wiki/Stieltjes_matrix
|
Semantic integrationis the process of interrelating information from diverse sources, for example calendars and to do lists, email archives, presence information (physical, psychological, and social), documents of all sorts, contacts (includingsocial graphs), search results, and advertising and marketing relevance derived from them. In this regard,semanticsfocuses on the organization of and action uponinformationby acting as an intermediary between heterogeneous data sources, which may conflict not only by structure but also context or value.
In enterprise application integration (EAI), semantic integration can facilitate or even automate the communication between computer systems using metadata publishing. Metadata publishing potentially offers the ability to automatically link ontologies. One approach to (semi-)automated ontology mapping requires the definition of a semantic distance or its inverse, semantic similarity, and appropriate rules. Other approaches include so-called lexical methods, as well as methodologies that rely on exploiting the structures of the ontologies. For explicitly stating similarity/equality, there exist special properties or relationships in most ontology languages. OWL, for example, has "owl:equivalentClass", "owl:equivalentProperty" and "owl:sameAs".
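As an illustration of stating such equivalences, here is a small sketch using the rdflib Python library; the ontology IRIs and term names are made up for the example.

from rdflib import Graph, Namespace
from rdflib.namespace import OWL

g = Graph()
a = Namespace("http://example.org/ontologyA#")
b = Namespace("http://example.org/ontologyB#")

# Declare that two classes from different ontologies are equivalent,
# and that two properties describe the same relation.
g.add((a.Person, OWL.equivalentClass, b.Human))
g.add((a.worksAt, OWL.equivalentProperty, b.employedBy))

# Declare that two individuals denote the same real-world entity.
g.add((a.alice, OWL.sameAs, b.a_smith))

print(g.serialize(format="turtle"))   # rdflib 6+ returns a string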
Eventually system designs may see the advent of composable architectures where published semantic-based interfaces are joined together to enable new and meaningful capabilities[citation needed]. These could predominately be described by means of design-time declarative specifications, that could ultimately be rendered and executed at run-time[citation needed].
Semantic integration can also be used to facilitate design-time activities of interface design and mapping. In this model, semantics are only explicitly applied to design and the run-time systems work at thesyntaxlevel[citation needed]. This "early semantic binding" approach can improve overall system performance while retaining the benefits of semantic driven design[citation needed].
From the industry use case, it has been observed that the semantic mappings were performed only within the scope of the ontology class or thedatatypeproperty. These identified semantic integrations are (1) integration of ontology class instances into another ontology class without any constraint, (2) integration of selected instances in one ontology class into another ontology class by the range constraint of the property value and (3) integration of ontology class instances into another ontology class with the value transformation of the instance property. Each of them requires a particular mapping relationship, which is respectively: (1) equivalent or subsumption mapping relationship, (2) conditional mapping relationship that constraints the value of property (data range) and (3) transformation mapping relationship that transforms the value of property (unit transformation). Each identified mapping relationship can be defined as either (1) direct mapping type, (2) data range mapping type or (3) unit transformation mapping type.
In the case of integrating a supplemental data source, a semantic query over the integrated ontology, such as the following SPARQL query:
SELECT ?medication
WHERE {
  ?diagnosis a example:Diagnosis .
  ?diagnosis example:name "TB of vertebra" .
  ?medication example:canTreat ?diagnosis .
}
corresponds to a query over the underlying relational source, such as the following SQL:
SELECT DRUG.medID
FROM DIAGNOSIS, DRUG, DRUG_DIAGNOSIS
WHERE DIAGNOSIS.diagnosisID = DRUG_DIAGNOSIS.diagnosisID
  AND DRUG.medID = DRUG_DIAGNOSIS.medID
  AND DIAGNOSIS.name = 'TB of vertebra'
ThePacific Symposium on Biocomputinghas been a venue for the popularization of the ontology mapping task in the biomedical domain, and a number of papers on the subject can be found in its proceedings.
|
https://en.wikipedia.org/wiki/Semantic_Integration
|
A SPARQL Query Results XML document (also sometimes called a SPARQL Results Document) is a file format that stores data (values, URIs, and text) in XML.
This document is generally the default response of an RDF database to a SPARQL query.
|
https://en.wikipedia.org/wiki/SPARQL_Query_Results_XML_Format
|
SPARQL Syntax Expressions(alternatively,SPARQL S-Expressions) is aparse tree(a.k.a. concrete syntax) for representingSPARQLAlgebraexpressions.
They have been used to apply theBERTlanguage model to create SPARQL queries fromnatural languagequestions.[1]
|
https://en.wikipedia.org/wiki/SPARQL_Syntax_Expressions
|
Net neutrality, sometimes referred to as network neutrality, is the principle that Internet service providers (ISPs) must treat all Internet communications equally, offering users and online content providers consistent transfer rates regardless of content, website, platform, application, type of equipment, source address, destination address, or method of communication (i.e., without price discrimination).[4][5] Net neutrality was advocated for in the 1990s by the presidential administration of Bill Clinton in the United States. Clinton signed the Telecommunications Act of 1996, an amendment to the Communications Act of 1934.[6][7][better source needed] In 2025, an American court ruled that Internet companies should not be regulated like utilities, which weakened net neutrality regulation and put the decision in the hands of the United States Congress and state legislatures.[8]
Supporters of net neutrality argue that it prevents ISPs from filtering Internet content without a court order, fosters freedom of speech and democratic participation, promotes competition and innovation, prevents dubious services, maintains the end-to-end principle, and that users would not tolerate slow-loading websites. Opponents argue that it reduces investment, deters competition, increases taxes, imposes unnecessary regulations, prevents the Internet from being accessible to lower-income individuals, and prevents Internet traffic from being allocated to the most needed users; they also argue that large ISPs already have a performance advantage over smaller providers and that there is already significant competition among ISPs with few competitive issues.
The term was coined byColumbia Universitymedia law professorTim Wuin 2003 as an extension of the longstanding concept of acommon carrierwhich was used to describe the role oftelephone systems.[9][10][11][12]
Net neutrality regulations may be referred to ascommon carrierregulations.[13][14]Net neutrality does not block all abilities that ISPs have to impact their customers' services. Opt-in and opt-out services exist on the end user side, and filtering can be done locally, as in the filtering of sensitive material for minors.[15]
Research suggests that a combination ofpolicy instrumentscan help realize the range of valued political and economic objectives central to the network neutrality debate.[16]Combined with public opinion, this has led some governments to regulate broadband Internet services as apublic utility, similar to the way electricity, gas, and the water supply are regulated, along with limiting providers and regulating the options those providers can offer.[17]
Proponents of net neutrality, who include computer science experts, consumer advocates, human rights organizations, and Internet content providers, assert that net neutrality helps to provide freedom of information exchange, promotes competition and innovation for Internet services, and upholds the standardization of Internet data transmission that has been essential to its growth.[citation needed] Opponents of net neutrality, who include ISPs, computer hardware manufacturers, economists, technologists, and telecommunications equipment manufacturers, argue that net neutrality requirements would reduce their incentive to build out the Internet and reduce competition in the marketplace, and may raise their operating costs, which they would have to pass along to their users.[citation needed]
Network neutrality is the principle that all Internet traffic should be treated equally.[18] According to Columbia Law School professor Tim Wu, a public information network will be most useful when this is the case.[19]
Internet traffic consists of various types of digital data sent over the Internet between all kinds of devices (e.g., data center servers, personal computers, mobile devices, video game consoles, etc.), using hundreds of different transfer technologies. The data includes email messages; HTML, JSON, and all related web browser MIME content types; text, word processing, spreadsheet, database, and other academic, business, or personal documents in any conceivable format; audio and video files; streaming media content; and countless other formal, proprietary, or ad hoc schematic formats—all transmitted via myriad transfer protocols.
Indeed, while the focus is often on the type of digital content being transferred, network neutrality includes the idea that if all such types are to be treated equally, then it follows that any ostensibly arbitrary choice of protocol—that is, the technical details of the actual communications transaction itself—must be as well. For example, the same digital video file could be accessed by viewing it live while the data is being received (HLS), interacting with its playback from a remote server (DASH), by receiving it in an email message (SMTP), or by downloading it from a website (HTTP), an FTP server, or via BitTorrent, among other means. Although all of these use the Internet for transport, and the content received locally is ultimately identical, the interim data traffic is dramatically different depending on which transfer method is used. To proponents of net neutrality, this suggests that prioritizing any one transfer protocol over another is generally unprincipled, or that doing so penalizes the free choices of some users.
In sum, net neutrality is the principle that an ISP be required to provide access to all sites, content, and applications at the same speed, under the same conditions, without blocking or giving preference to any content. Under net neutrality, whether a user connects to Netflix, Wikipedia, YouTube, or a family blog, their ISP must treat them all the same.[20] Without net neutrality, an ISP can influence the quality that each experience offers to end users, which suggests a regime of pay-to-play, where content providers can be charged to improve the exposure of their own products versus those of their competitors.[21]
Under an open Internet system, the full resources of the Internet and the means to operate on it should be easily accessible to all individuals, companies, and organizations.[22] Applicable concepts include: net neutrality, open standards, transparency, lack of Internet censorship, and low barriers to entry. The concept of the open Internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some observers as closely related to open-source software, a type of software program whose maker allows users access to the code that runs the program, so that users can improve the software or fix bugs.[23] Proponents of net neutrality see neutrality as an important component of an open Internet, wherein policies such as equal treatment of data and open web standards allow those using the Internet to easily communicate and conduct business and activities without interference from a third party.[24]
In contrast, a closed Internet refers to the opposite situation, wherein established persons, corporations, or governments favor certain uses, restrict access to necessary web standards, artificially degrade some services, or explicitly filter out content. Some countries, such as Thailand, block certain websites or types of sites, and monitor and/or censor Internet use using Internet police, a specialized type of law enforcement, or secret police.[25] Other countries, such as Russia,[26] China,[27] and North Korea,[28] use tactics similar to Thailand's to control the variety of Internet media within their respective countries. In comparison to the United States or Canada, for example, these countries have far more restrictive Internet service providers. This approach is reminiscent of a closed platform system.[29] These systems all serve to hinder access to a wide variety of Internet services, in stark contrast to the idea of an open Internet system.
The term dumb pipe was coined in the early 1990s by analogy with the water pipes of a city water supply system. In theory, these pipes provide a steady and reliable supply of water to every household without discrimination; in other words, they connect the user with the source without any intelligence in between. Similarly, a dumb network is a network with little or no control or management of its use patterns.[30]
Experts in the high-technology field often compare the dumb pipe concept with smart pipes and debate which is best applied to a given portion of Internet policy. These conversations usually treat the two concepts as analogous to the open and closed Internet, respectively. Accordingly, models have been proposed that outline four layers of the Internet in terms of the dumb pipe theory.[31]
The end-to-end principle of network design was first laid out in the 1981 paper End-to-end arguments in system design by Jerome H. Saltzer, David P. Reed, and David D. Clark.[32] The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resources being controlled. According to the end-to-end principle, protocol features are only justified in the lower layers of a system if they are a performance optimization; hence, TCP retransmission for reliability is still justified, but efforts to improve TCP reliability should stop after peak performance has been reached.
They argued that, in addition to any processing in the intermediate systems, reliable systems tend to require processing in the end-points to operate correctly. They pointed out that most features in the lowest level of a communications system impose costs for all higher-layer clients, even if those clients do not need the features, and are redundant if the clients have to re-implement the features on an end-to-end basis. This leads to the model of a minimal dumb network with smart terminals, a completely different model from the previous paradigm of the smart network with dumb terminals. Because the end-to-end principle is one of the central design principles of the Internet, and because the practical means for implementing data discrimination violate the end-to-end principle, the principle often enters discussions about net neutrality. The end-to-end principle is closely related to, and sometimes seen as a direct precursor of, the principle of net neutrality.[33]
Traffic shaping is the control of computer network traffic to optimize or guarantee performance, improve latency (i.e., decrease Internet response times), or increase usable bandwidth by delaying packets that meet certain criteria.[34] In practice, traffic shaping is often accomplished by throttling certain types of data, such as streaming video or P2P file sharing. More specifically, traffic shaping is any action on a set of packets (often called a stream or a flow) that imposes additional delay on those packets such that they conform to some predetermined constraint (a contract or traffic profile).[35] Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as the generic cell rate algorithm.
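As a rough illustration of how a shaper enforces such a traffic profile, the following sketch implements a simple token bucket in Python. The rate and burst values are hypothetical, and real shapers typically queue packets in the kernel rather than sleeping:

import time

class TokenBucket:
    """Minimal token-bucket shaper: packets that conform to the
    configured rate pass immediately; the rest are delayed."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s      # sustained rate (the "contract")
        self.capacity = burst_bytes       # maximum burst size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def send(self, packet_len):
        # Refill tokens for the elapsed time, up to the burst cap.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len > self.tokens:
            # Non-conforming packet: delay until enough tokens accrue.
            time.sleep((packet_len - self.tokens) / self.rate)
            self.tokens = 0
        else:
            self.tokens -= packet_len

shaper = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)  # ~1 Mbit/s
for pkt in [1500] * 100:    # a burst of full-size Ethernet frames
    shaper.send(pkt)        # later packets are delayed to fit the profile

The first few packets pass using the accumulated burst allowance; subsequent packets are spaced out so the long-run rate matches the contract, which is exactly the "additional delay" described above.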
If the core of a network has more bandwidth than is permitted to enter at the edges, then good quality of service (QoS) can be obtained without policing or throttling. For example, telephone networks employ admission control to limit user demand on the network core by refusing to create a circuit for the requested connection. During a natural disaster, for example, most users will get a circuit busy signal if they try to make a call, as the phone company prioritizes emergency calls. Over-provisioning is a form of statistical multiplexing that makes liberal estimates of peak user demand. Over-provisioning is used in private networks such as WebEx and the Internet2 Abilene Network, an American university network. David Isenberg believes that continued over-provisioning will always provide more capacity for less expense than QoS and deep packet inspection technologies.[36][37]
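A minimal sketch of the admission-control idea follows; the capacity and the emergency reservation are hypothetical parameters, not any carrier's actual policy:

class AdmissionControl:
    """Sketch of circuit admission control: refuse new connections
    once the core is full, rather than degrading existing ones."""

    def __init__(self, capacity, reserved_for_emergency=5):
        self.capacity = capacity
        self.reserved = reserved_for_emergency  # circuits held back for emergency calls
        self.active = 0

    def request_circuit(self, emergency=False):
        limit = self.capacity if emergency else self.capacity - self.reserved
        if self.active >= limit:
            return "circuit busy"    # the caller must retry later
        self.active += 1
        return "connected"

The key design choice is that a refused call fails cleanly ("circuit busy") instead of degrading the quality of calls already in progress, which is the opposite of how best-effort packet networks behave under load.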
Device neutrality is the principle that, to ensure freedom of choice and freedom of communication for users of network-connected devices, it is not sufficient that network operators do not interfere with their choices and activities; users must be free to use applications of their choice and hence remove the applications they do not want. Device vendors can establish policies for managing applications, but they, too, must be applied neutrally.[citation needed]
An unsuccessful bill to enforce network and device neutrality was introduced in Italy in 2015 by Stefano Quintarelli.[38] The law gained formal support at the European Commission[39] from BEUC, the European Consumer Organisation, the Electronic Frontier Foundation, and the Hermes Center for Transparency and Digital Human Rights.[citation needed] A similar law was enacted in South Korea.[40] Similar principles were proposed in China.[41] The French telecoms regulator ARCEP has called for the introduction of device neutrality in Europe.[42]
The principle has been incorporated in the EU's Digital Markets Act (Articles 6.3 and 6.4).[43][non-primary source needed]
ISPs can choose a balance between a base subscription tariff (a monthly bundle) and pay-per-use (metered billing by the megabyte). The ISP sets an upper monthly threshold on data usage so that it can provide all customers an equitable share of capacity and a fair-use guarantee. This is generally not considered an intrusion but rather allows for commercial positioning among ISPs.[citation needed]
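To make the arithmetic of such a hybrid tariff concrete, here is a small sketch; all prices, bundle sizes, and the cap are invented for illustration:

def monthly_bill(used_mb, bundle_mb=10_000, bundle_price=30.0,
                 per_mb=0.01, cap_mb=50_000):
    """Hypothetical hybrid tariff: a flat monthly bundle plus per-MB
    metering beyond it, with a hard fair-use threshold (cap_mb)
    above which usage is cut off or throttled rather than billed."""
    billable = min(used_mb, cap_mb)
    overage = max(0, billable - bundle_mb)
    return bundle_price + overage * per_mb

print(monthly_bill(8_000))    # within the bundle: 30.00
print(monthly_bill(12_500))   # 2,500 MB of metered overage: 55.00

The bundle/metering split lets the ISP position itself commercially (bigger bundle, higher flat price, or cheaper overage) without treating any particular content differently.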
Some networks, like public Wi-Fi, can take traffic away from conventional fixed or mobile network providers. This can significantly change the end-to-end behavior (performance, tariffs).[citation needed]
Discrimination by protocol is the favoring or blocking of information based on aspects of the communications protocol that the computers are using.[44] In the US, a complaint was filed with the Federal Communications Commission against the cable provider Comcast, alleging that it had illegally inhibited users of its high-speed Internet service from using the popular file-sharing software BitTorrent.[45] Comcast admitted no wrongdoing[46] in its proposed settlement of up to US$16 per affected customer in December 2009.[47] However, a U.S. appeals court ruled in April 2010 that the FCC had exceeded its authority when it sanctioned Comcast in 2008. FCC spokeswoman Jen Howard responded, "The court in no way disagreed with the importance of preserving a free and open Internet, nor did it close the door to other methods for achieving this important end."[48] Despite the ruling in favor of Comcast, a study by Measurement Lab in October 2011 verified that Comcast had virtually stopped its BitTorrent throttling practices.[49][50]
During the 1990s, creating a non-neutral Internet was technically infeasible.[51] Deep packet inspection, originally developed to filter harmful malware, became commercially available when the Internet security company NetScreen Technologies released network firewalls with this capability in 2003. Deep packet inspection helped make real-time discrimination between different kinds of data possible,[52] and is often used for Internet censorship.
One criticism regarding discrimination is that the system set up by ISPs for this purpose is capable of not only discriminating but also scrutinizing the full packet content of communications. For instance, deep packet inspection technology installs intelligence within the lower layers of the network to discover and identify the source, type, and destination of packets, revealing information about packets traveling in the physical infrastructure so that it can dictate the quality of transport such packets will receive.[53] This is seen as an architecture of surveillance, one that can be shared with intelligence agencies, copyrighted content owners, and civil litigants, exposing the users' secrets in the process.[54]
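The core mechanism can be sketched as signature matching on packet payloads. The toy classifier below checks a few well-known signatures (the BitTorrent handshake, a plaintext HTTP request line, a TLS handshake record); production DPI engines use far larger signature sets plus port and flow heuristics:

def classify(payload: bytes) -> str:
    """Toy deep-packet-inspection classifier based on payload
    signatures. Illustrative only; not any vendor's engine."""
    # BitTorrent handshake: length byte 19 then "BitTorrent protocol".
    if len(payload) >= 20 and payload[0] == 19 \
            and payload[1:20] == b"BitTorrent protocol":
        return "bittorrent"
    # Plaintext HTTP request line.
    if payload[:4] in (b"GET ", b"POST", b"HEAD"):
        return "http"
    # TLS handshake record (content type 0x16, version 3.1 or 3.3).
    if payload[:3] in (b"\x16\x03\x01", b"\x16\x03\x03"):
        return "tls"
    return "unknown"

print(classify(bytes([19]) + b"BitTorrent protocol" + b"\x00" * 8))  # bittorrent
print(classify(b"GET /index.html HTTP/1.1\r\n"))                     # http

Once a flow is labeled this way, the network can throttle, block, or prioritize it, which is precisely the capability at issue in the Comcast/BitTorrent dispute described above.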
In a practice called zero-rating, companies will not invoice data use related to certain IP addresses, favoring the use of those services. Examples include Facebook Zero,[55] Wikipedia Zero, and Google Free Zone. These zero-rating practices are especially common in the developing world.[56] Aside from zero-rating, ISPs also use certain strategies to reduce the cost of pricing plans, such as sponsored data. Under a sponsored data plan, a third party (typically a content provider) pays the carrier for the data its content consumes, so that the usage is not billed to the subscriber. This is generally used as a way for ISPs to remove out-of-pocket costs from subscribers.[57]
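Mechanically, zero-rating is just an exemption in the usage-accounting path. The sketch below illustrates the idea; the IP ranges are reserved documentation addresses standing in for a sponsor's servers, not real services:

import ipaddress

# Hypothetical zero-rated destinations (stand-ins for a sponsor's CDN).
ZERO_RATED = [ipaddress.ip_network("198.51.100.0/24")]

def bill_packet(dst_ip: str, nbytes: int, usage: dict) -> None:
    """Count traffic against the subscriber's data cap unless the
    destination is zero-rated (paid for by a third-party sponsor)."""
    dst = ipaddress.ip_address(dst_ip)
    if any(dst in net for net in ZERO_RATED):
        usage["sponsored"] = usage.get("sponsored", 0) + nbytes  # billed to the sponsor
    else:
        usage["metered"] = usage.get("metered", 0) + nbytes      # counts against the cap

usage = {}
bill_packet("198.51.100.7", 5000, usage)   # zero-rated traffic
bill_packet("203.0.113.9", 5000, usage)    # ordinary metered traffic
print(usage)                               # {'sponsored': 5000, 'metered': 5000}

Because the exemption is keyed to destination addresses, it inherently favors the listed services over unlisted competitors, which is why zero-rating is debated as a net neutrality issue.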
Sometimes ISPs will charge some companies, but not others, for the traffic they cause on the ISP's network. French telecom operator Orange, complaining that traffic from YouTube and other Google sites consists of roughly 50% of total traffic on the Orange network, made a deal with Google in which it charges Google for the traffic incurred on the Orange network.[58] Some also thought that Orange's rival ISP Free throttled YouTube traffic. However, an investigation by the French telecommunications regulator revealed that the network was simply congested during peak hours.[59]
Proponents of net neutrality argue that without new regulations, Internet service providers would be able to profit from and favor their own private networks, and that ISPs would be able to pick and choose to whom they offer greater bandwidth. If one website or company can afford to pay more, the ISP will favor it, which especially stifles up-and-coming businesses.
ISPs are able to encourage the use of specific services by using private networks to discriminate what data is counted against bandwidth caps. For example, Comcast struck a deal with Microsoft that allowed users to stream television through the Xfinity app on their Xbox 360s without it affecting their bandwidth limit. However, using other television streaming apps, such as Netflix, HBO Go, and Hulu, counted towards the limit. Comcast denied that this infringed on net neutrality principles since "it runs its Xfinity for Xbox service on its own, private Internet protocol network."[60] In 2009, when AT&T was bundling the iPhone 3G with its 3G network service, the company placed restrictions on which iPhone applications could run on its network.[61]
According to net neutrality proponents, this ability of ISPs to favor certain content producers would ultimately lead to fragmentation, where some ISPs would carry certain content that is not necessarily present in the networks offered by other ISPs. The danger behind fragmentation, as viewed by proponents of net neutrality, is that there could be multiple Internets, where some ISPs offer exclusive Internet applications or services, or make it more difficult to gain access to Internet content that may be more easily viewable through other Internet service providers. An example of a fragmented service would be television, where some cable providers offer exclusive media from certain content providers.[62]
However, in theory, allowing ISPs to favor certain content and private networks could improve Internet services overall, since they would be able to recognize packets of information that are more time-sensitive and prioritize them over packets that are less sensitive to latency. The issue, as explained by Robin S. Lee and Tim Wu, is that there are too many ISPs and Internet content providers around the world to reach an agreement on how to standardize that prioritization. A proposed solution would be to allow all online content to be accessed and transferred freely, while simultaneously offering a fast lane for preferred traffic that does not discriminate among content providers.[62]
There is disagreement about whether peering is a net neutrality issue.[63] In the first quarter of 2014, the streaming website Netflix reached an arrangement with ISP Comcast to improve the quality of its service to Netflix clients.[64] This arrangement was made in response to increasingly slow connection speeds through Comcast over the course of 2013, during which average speeds dropped more than 25% from their year-earlier values to an all-time low. After the deal was struck in January 2014, the Netflix speed index recorded a 66% increase in connection speed. Netflix agreed to a similar deal with Verizon in 2014, after Verizon DSL customers' connection speed dropped to less than 1 Mbit/s early in the year. Netflix spoke out against this deal with a controversial message, displayed through the Netflix client, to all Verizon customers experiencing low connection speeds.[65] This sparked a dispute between the two companies that led to Verizon's obtaining a cease and desist order on 5 June 2014, forcing Netflix to stop displaying the message.
Pro-net-neutrality arguments have also noted that regulations are necessary because research has shown low tolerance for slow-loading content providers. In a 2009 study by Forrester Research, online shoppers expected the web pages they visited to load content instantly.[66] When a page fails to load at the expected speed, many of them simply click away. A study found that even a one-second delay could lead to "11% fewer page views, a 16% decrease in customer satisfaction, and 7% loss in conversions."[67] This delay can pose a severe problem for small innovators who have created new technology: if a website is slow by default, the general public will lose interest and favor a website that runs faster. This helps large corporations maintain power, because they have the means to fund faster Internet speeds.[68] Smaller competitors, by contrast, have fewer financial resources, making it harder for them to succeed online.[69]
Legal enforcement of net neutrality principles takes a variety of forms, from provisions that outlaw anti-competitive blocking and throttling of Internet services, all the way to legal enforcement that prevents companies from subsidizing Internet use on particular sites.[70] Contrary to popular rhetoric and statements by various individuals involved in the ongoing academic debate, research suggests that a single policy instrument (such as a no-blocking policy or a quality-of-service tiering policy) cannot achieve the range of valued political and economic objectives central to the debate.[16] As Bauer and Obar suggest, "safeguarding multiple goals requires a combination of instruments that will likely involve government and nongovernment measures. Furthermore, promoting [rights and] goals such as the freedom of speech, political participation, investment, and innovation calls for complementary policies."[71]
Net neutrality is administered on a national or regional basis, though much of the world's focus has been on the conflict over net neutrality in the United States. Net neutrality has been a topic in the US since the early 1990s, as the country was one of the world leaders in providing online services, though it faces the same problems as the rest of the world.
In 2019, the Save the Internet Act, intended to "guarantee broadband internet users equal access to online content", was passed by the US House of Representatives[72] but not by the US Senate. Finding an appropriate solution by creating more regulations for ISPs has been a major work in progress. Net neutrality rules were repealed in the US in 2017 during the Trump administration, and subsequent appeals upheld the repeal,[73] until the FCC voted to reinstate the rules in 2024.[74] On 2 January 2025, however, a US appeals court ruled that the Federal Communications Commission did not have the legal authority to reinstate the landmark net neutrality rules.[75]
Governments of countries that comment on net neutrality usually support the concept.[76]
Net neutrality in the United States has been a point of conflict between network users and service providers since the 1990s. Much of the conflict arises from how Internet services are classified by the Federal Communications Commission (FCC) under the authority of the Communications Act of 1934. The FCC would have significant ability to regulate ISPs if Internet services were treated as a Title II "common carrier service"; under Title I "information services", ISPs would be mostly unrestricted by the FCC. In 2009, the United States Congress passed the American Recovery and Reinvestment Act of 2009, which granted a stimulus of $2.88 billion for extending broadband services into certain areas of the United States. It was intended to make the Internet more accessible to under-served areas, and aspects of net neutrality and open access were written into the grant. However, the bill never set any significant precedents for net neutrality or influenced future legislation on the subject.[77]

Until 2017, the FCC had generally been favorable towards net neutrality, treating ISPs under Title II common carrier rules. With the onset of the presidency of Donald Trump in 2017, and the appointment of Ajit Pai, an opponent of net neutrality, as chairman of the FCC, the FCC reversed many previous net neutrality rulings and reclassified Internet services as Title I information services.[78] The FCC's decisions have been the subject of several ongoing legal challenges, both by states supporting net neutrality and by ISPs challenging it. The United States Congress has attempted to pass legislation supporting net neutrality but has failed to gain sufficient support. In 2018, a bill cleared the U.S. Senate, with Republicans Lisa Murkowski, John Kennedy, and Susan Collins joining all 49 Democrats, but the House majority denied the bill a hearing.[79]

Individual states have been trying to pass legislation making net neutrality a requirement within their borders, overriding the FCC's decision. California successfully passed its own net neutrality act, which the United States Department of Justice challenged in court.[80] On 8 February 2021, the Justice Department withdrew its challenge to California's net neutrality law. Federal Communications Commission Acting Chairwoman Jessica Rosenworcel voiced support for an open Internet and restoring net neutrality.[81] Vermont, Colorado, and Washington, among other states, have also enacted net neutrality.[82]
On 19 October 2023, the FCC voted 3–2 to approve a Notice of Proposed Rulemaking (NPRM) that seeks comments on a plan to restore net neutrality rules and regulation of Internet service providers.[83]On 25 April 2024, the FCC voted 3–2 to reinstate net neutrality in the United States by reclassifying the Internet under Title II.[84][85]However, legal challenges immediately filed by ISPs resulted in an appeals court issuing an order that stays the net neutrality rules until the court makes a final ruling, while issuing the opinion that the ISPs will likely prevail over the FCC on the merits.[86]
On 2 January 2025, net neutrality rules, which disallow broadband providers from selectively interfering with Internet speeds depending on the accessed resource, were struck down by the US Court of Appeals for the Sixth Circuit in MCP No. 185.[87][88]
A three-judge panel of the US Court of Appeals for the Sixth Circuit ruled that federal law requires broadband to be classified as an "information service" and not the more heavily regulated "telecommunications service" the FCC said it was when it adopted the rules in April 2024. The FCC lacked the authority to impose its rules on the broadband providers, the court said.[89]
According to Bloomberg News, the Sixth Circuit's ruling is "one of the highest-profile examples" so far of an appeals court exercising the expanded authority following Loper Bright Enterprises v. Raimondo, which overturned a doctrine that had supported agency interpretations of ambiguous laws. The court also rejected a similar FCC classification for mobile broadband providers.[89]
Net neutrality in Canada is a debated issue in that nation, but not to the degree of partisanship seen in other nations such as the United States, in part because of Canada's federal regulatory structure and pre-existing supportive laws that were enacted decades before the debate arose.[90] Canadian ISPs generally provide Internet service in a neutral manner. Notable exceptions have included Bell Canada's throttling of certain protocols and Telus's censorship of a specific website supporting striking union members.[91] In Bell Canada's case, the net neutrality debate gained prominence when it was revealed that the company was throttling traffic by limiting people's ability to view Canada's Next Great Prime Minister, which eventually led the Canadian Association of Internet Providers (CAIP) to demand that the Canadian Radio-television and Telecommunications Commission (CRTC) take action to prevent the throttling of third-party traffic.[92] On 22 October 2009, the CRTC issued a ruling on Internet traffic management that favored adopting guidelines suggested by interest groups such as OpenMedia.ca and the Open Internet Coalition. However, the guidelines require citizens to file formal complaints proving that their Internet traffic is being throttled, and as a result, some ISPs continue to throttle the Internet traffic of their users.[92]
In 2018, the Indian Government unanimously approved new regulations supporting net neutrality. The regulations are considered to be the "world's strongest" net neutrality rules, guaranteeing a free and open Internet for nearly half a billion people,[93] and are expected to help the culture of startups and innovation. The only exceptions to the rules are new and emerging services like autonomous driving and tele-medicine, which may require prioritized Internet lanes and faster-than-normal speeds.[94]
Net neutrality in China is not enforced, and ISPs in China play important roles in regulating the content that is available domestically on the Internet. Several ISPs filter and block content at the national level, preventing domestic Internet users from accessing certain sites or services and foreign Internet users from gaining access to domestic web content. This filtering technology is referred to as the Great Firewall (GFW).[95]
An article published by Cambridge University Press examined the political environment surrounding net neutrality in China, observing that Chinese ISPs have become a means for the country to control and restrict information rather than providers of neutral Internet content.[96]
Net neutrality in the Philippines is not enforced. Mobile Internet providers like Globe Telecom and Smart Communications commonly offer data package promos tied to specific applications, games, or websites like Facebook, Instagram, and TikTok.[97][98][99]
In the mid-2010s, Philippine telcos came under fire from the Department of Justice for throttling the bandwidth of subscribers of unlimited data plans if the subscribers exceeded arbitrary data caps imposed by the telcos under a supposed "fair use policy" on their "unlimited" plans.[100] Certain adult sites like Pornhub, Redtube, and XTube have also been blocked by some Philippine ISPs at the request of the Philippine National Police to the National Telecommunications Commission, even without the court orders required by the Supreme Court of the Philippines.[101]
Proponents of net neutrality regulations include consumer advocates, human rights organizations such as Article 19,[102] online companies, and some technology companies.[103] Net neutrality tends to be supported by those on the political left and opposed by those on the political right.[104]
Many major Internet application companies are advocates of neutrality, such as eBay,[105] Amazon,[105] Netflix,[106] Reddit,[106] Microsoft,[107] Twitter,[citation needed] Etsy,[108] IAC Inc.,[107] Yahoo!,[109] Vonage,[109] and Cogent Communications.[110] In September 2014, an online protest known as Internet Slowdown Day took place to advocate for the equal treatment of Internet traffic. Notable participants included Netflix and Reddit.[106]
Consumer Reports,[111] the Open Society Foundations,[112] and several civil rights groups, such as the ACLU, the Electronic Frontier Foundation, Free Press, SaveTheInternet, and Fight for the Future, support net neutrality.[113][106]
Individuals who support net neutrality include World Wide Web inventor Tim Berners-Lee,[114] Vinton Cerf,[115][116] Lawrence Lessig,[117] Robert W. McChesney,[118] Steve Wozniak, Susan P. Crawford, Marvin Ammori, Ben Scott, David Reed,[119] and former U.S. President Barack Obama.[120][121] On 10 November 2014, Obama recommended that the FCC reclassify broadband Internet service as a telecommunications service to preserve net neutrality.[122][123][124] On 31 January 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 and section 706 of the Telecommunications Act of 1996[125] to the Internet in a vote expected on 26 February 2015.[126][127][128][129][130]
Supporters of net neutrality in the United States want to designate cable companies as common carriers, which would require them to allow ISPs free access to cable lines, the same model used for dial-up Internet. They want to ensure that cable companies cannot screen, interrupt, or filter Internet content without a court order.[131] Common carrier status would give the FCC the power to enforce net neutrality rules.[132] SaveTheInternet.com accuses cable and telecommunications companies of wanting the role of gatekeepers, able to control which websites load quickly, load slowly, or do not load at all. According to SaveTheInternet.com, these companies want to charge content providers who require guaranteed speedy data delivery – to create advantages for their own search engines, Internet phone services, and streaming video services – and to slow or block access to those of competitors.[133] Vinton Cerf, a co-inventor of the Internet Protocol and current vice president of Google, argues that the Internet was designed without any authorities controlling access to new content or new services.[134] He concludes that the principles responsible for making the Internet such a success would be fundamentally undermined were broadband carriers given the ability to affect what people see and do online.[115] Cerf has also written about the importance of looking at problems like net neutrality through a combination of the Internet's layered system and the multistakeholder model that governs it.[135] He shows how challenges can arise that implicate net neutrality in certain infrastructure-based cases, such as when ISPs enter into exclusive arrangements with large building owners, leaving residents unable to exercise any choice in broadband provider.[136]
Proponents of net neutrality argue that a neutral net will foster free speech and lead to further democratic participation on the Internet. Former Senator Al Franken of Minnesota feared that without new regulations, the major Internet service providers would use their position of power to stifle people's rights, calling net neutrality the "First Amendment issue of our time."[137] The past two decades have seen an ongoing battle to ensure that all people and websites have equal access to an unrestricted platform regardless of their ability to pay; proponents of net neutrality wish to prevent the need to pay for speech and the further centralization of media power.[138] Lawrence Lessig and Robert W. McChesney argue that net neutrality ensures that the Internet remains a free and open technology, fostering democratic communication. They go on to argue that the monopolization of the Internet would stifle the diversity of independent news sources and the generation of innovative and novel web content.[117]
Proponents of net neutrality invoke the human psychological process of adaptation: when people get used to something better, they do not want to go back to something worse. In the context of the Internet, proponents argue that a user who gets used to the "fast lane" on the Internet would find the slow lane intolerable in comparison, greatly disadvantaging any provider who is unable to pay for the fast lane. Video providers Netflix[140] and Vimeo,[141] in their comments to the FCC in favor of net neutrality, use the research[139] of S.S. Krishnan and Ramesh Sitaraman that provides the first quantitative evidence of adaptation to speed among online video users. Their research studied the patience level of millions of Internet video users who waited for a slow-loading video to start playing. Users who had faster Internet connectivity, such as fiber-to-the-home, demonstrated less patience and abandoned their videos sooner than similar users with slower Internet connectivity. The results demonstrate how users can get used to faster Internet connectivity, leading to higher expectations of Internet speed and lower tolerance for any delay. Author Nicholas Carr[142] and other social commentators[143][144] have written about this habituation phenomenon, stating that a faster flow of information on the Internet can make people less patient.
Net neutrality advocates argue that allowing cable companies the right to demand a toll to guarantee quality or premium delivery would create an exploitative business model based on the ISPs' position as gatekeepers.[145] Advocates warn that by charging websites for access, network owners may be able to block competitor websites and services, as well as refuse access to those unable to pay.[117] According to Tim Wu, cable companies plan to reserve bandwidth for their own television services and charge companies a toll for priority service.[146] Proponents of net neutrality argue that allowing for preferential treatment of Internet traffic, or tiered service, would put newer online companies at a disadvantage and slow innovation in online services.[103] Tim Wu argues that, without network neutrality, the Internet will undergo a transformation from a market ruled by innovation to one ruled by deal-making.[146] SaveTheInternet.com argues that net neutrality puts everyone on equal terms, which helps drive innovation. They claim it is a preservation of the way the Internet has always operated, where the quality of websites and services determined whether they succeeded or failed, rather than deals with ISPs.[133]

Lawrence Lessig and Robert W. McChesney argue that eliminating net neutrality would lead to the Internet resembling the world of cable TV, so that access to and distribution of content would be managed by a handful of massive, near-monopolistic companies, even though there are multiple service providers in each region. These companies would then control what is seen as well as how much it costs to see it. Speedy and secure Internet use for such industries as healthcare, finance, retailing, and gambling could be subject to large fees charged by these companies. They further explain that a majority of the great innovators in the history of the Internet started with little capital in their garages, inspired by great ideas. This was possible because the protections of net neutrality ensured limited control by owners of the networks, maximal competition in this space, and access to the network for innovators from outside. Internet content was guaranteed a free and highly competitive space by the existence of net neutrality.[117] For example, in 2005, YouTube was a small startup company; due to the absence of Internet fast lanes, it was able to grow larger than Google Video. Tom Wheeler and Senators Ronald Lee Wyden (D-Ore.) and Al Franken (D-Minn.) wrote, "Internet service providers treated YouTube's videos the same as they did Google's, and Google couldn't pay the ISPs [Internet service providers] to gain an unfair advantage, like a fast lane into consumers' homes. Well, it turned out that people liked YouTube a lot more than Google Video, so YouTube thrived."[147]
The lack of competition among internet providers has been cited as a major reason to support net neutrality.[108]The loss of net neutrality in 2017 in the U.S. increased the calls for public broadband.[148]
Net neutrality advocates have sponsored legislation claiming that authorizing incumbent network providers to override transport and application layer separation on the Internet would signal the decline of fundamental Internet standards and international consensus authority. Further, the legislation asserts that bit-shaping the transport of application data will undermine the transport layer's designed flexibility.[149]
Some advocates say network neutrality is needed to maintain the end-to-end principle. According to Lawrence Lessig and Robert W. McChesney, all content must be treated the same and must move at the same speed for net neutrality to be true. They say that it is this simple but brilliant end-to-end aspect that has allowed the Internet to act as a powerful force for economic and social good.[117] Under this principle, a neutral network is a dumb network, merely passing packets regardless of the applications they support. This point of view was expressed by David S. Isenberg in his paper, The Rise of the Stupid Network. He states that the vision of an intelligent network is being replaced by a new network philosophy and architecture in which the network is designed for always-on use, not intermittence and scarcity. Rather than intelligence being designed into the network itself, the intelligence would be pushed out to the end-user devices, and the network would be designed simply to deliver bits without fancy network routing or smart number translation. The data would be in control, telling the network where it should be sent. End-user devices would then be allowed to behave flexibly, as bits would essentially be free and there would be no assumption that the data is of a single data rate or data type.[150]
Contrary to this idea, the research paper titled End-to-end arguments in system design by Saltzer, Reed, and Clark argues that network intelligence does not relieve end systems of the requirement to check inbound data for errors and to rate-limit the sender; nor does the paper argue for the wholesale removal of intelligence from the network core.[151]
Opponents of net neutrality regulations include ISPs, broadband and telecommunications companies, computer hardware manufacturers, economists, and notable technologists.
Many of the major hardware and telecommunications companies specifically oppose the reclassification of broadband as a common carrier under Title II. Corporate opponents of this measure include Comcast, AT&T, Verizon, IBM, Intel, Cisco, Nokia, Qualcomm, Broadcom, Juniper, D-Link, Wintel, Alcatel-Lucent, Corning, Panasonic, Ericsson, Oracle, Akamai, and others.[152][153][154][155] The US Telecom and Broadband Association, which represents a diverse array of small and large broadband providers, is also an opponent.[156][157] A 2006 campaign against net neutrality was funded by AT&T, and its members included BellSouth, Alcatel, Cingular, and Citizens Against Government Waste.[158][159][160][161][162]
Nobel Memorial Prize-winning economist Gary Becker's paper, "Net Neutrality and Consumer Welfare", published by the Journal of Competition Law & Economics, argues that claims by net neutrality proponents "do not provide a compelling rationale for regulation" because there is "significant and growing competition" among broadband access providers.[163][164] Google chairman Eric Schmidt stated that, while Google holds that similar data types should not be discriminated against, it is okay to discriminate across different data types—a position that both Google and Verizon generally agree on, according to Schmidt.[165][166] According to The Wall Street Journal, when President Barack Obama announced his support for strong net neutrality rules late in 2014, Schmidt told a top White House official the president was making a mistake. Google strongly advocated net-neutrality-like rules prior to 2010, but its support for the rules has since diminished; the company, however, says it remains "committed" to net neutrality.[166][167]
Individuals who oppose net neutrality rules include Bob Kahn,[168][169] Marc Andreessen,[170] Scott McNealy,[171] Peter Thiel and Max Levchin,[163][172] David Farber,[173] David Clark,[174][175] Louis Pouzin,[176] MIT Media Lab co-founder Nicholas Negroponte,[177] Rajeev Suri,[178] Jeff Pulver,[179][better source needed] Mark Cuban,[180] Robert Pepper,[181] and former FCC chairman Ajit Pai.
Nobel Prize laureate economists who opposed net neutrality rules include Princeton economist Angus Deaton, Chicago economist Richard Thaler, MIT economist Bengt Holmström, and the late Chicago economist Gary Becker.[182][183] Others include MIT economists David Autor, Amy Finkelstein, and Richard Schmalensee; Stanford economists Raj Chetty, Darrell Duffie, Caroline Hoxby, and Kenneth Judd; Harvard economist Alberto Alesina; Berkeley economists Alan Auerbach and Emmanuel Saez; and Yale economists William Nordhaus, Joseph Altonji, and Pinelopi Goldberg.[182]
Some civil rights groups, such as the National Urban League, Jesse Jackson's Rainbow/PUSH, and the League of United Latin American Citizens, also opposed Title II net neutrality regulations,[184] citing concerns over stifling investment in underserved areas.[185][186]
The Wikimedia Foundation, which runs Wikipedia, told The Washington Post in 2014 that it had a "complicated relationship" with net neutrality.[187] The organization partnered with telecommunications companies to provide free access to Wikipedia for people in developing countries, under a program called Wikipedia Zero, without requiring mobile data to access information. The concept is known as zero-rating. Said Wikimedia Foundation officer Gayle Karen Young, "Partnering with telecom companies in the near term, it blurs the net neutrality line in those areas. It fulfills our overall mission, though, which is providing free knowledge."[188]
Farber has written and spoken strongly in favor of continued research and development on core Internet protocols. He joined academic colleagues Michael Katz, Christopher Yoo, and Gerald Faulhaber in an op-ed for The Washington Post critical of network neutrality, stating that while the Internet is in need of remodeling, congressional action aimed at protecting the best parts of the current Internet could interfere with efforts to build a replacement.[189]
According to a letter to FCC commissioners and key congressional leaders sent by 60 major ISP technology suppliers, including IBM, Intel, Qualcomm, and Cisco, Title II regulation of the Internet "means that instead of billions of broadband investment driving other sectors of the economy forward, any reduction in this spending will stifle growth across the entire economy. This is not idle speculation or fear mongering... Title II is going to lead to a slowdown, if not a hold, in broadband build out, because if you don't know that you can recover on your investment, you won't make it."[152][190][191][192] According to the Wall Street Journal, in one of Google's few lobbying sessions with FCC officials, the company urged the agency to craft rules that encourage investment in broadband Internet networks—a position that mirrors the argument made by opponents of strong net neutrality rules, such as AT&T and Comcast.[166] Opponents of net neutrality argue that prioritization of bandwidth is necessary for future innovation on the Internet.[154] Telecommunications providers such as telephone and cable companies, and some technology companies that supply networking gear, argue that telecom providers should have the ability to provide preferential treatment in the form of tiered services, for example by giving online companies willing to pay the ability to transfer their data packets faster than other Internet traffic.[193] The added income from such services could be used to pay for the building of increased broadband access for more consumers.[103]
Opponents say that net neutrality would make it more difficult for ISPs and other network operators to recoup their investments in broadband networks.[194] John Thorne, senior vice president and deputy general counsel of Verizon, a broadband and telecommunications company, has argued that ISPs will have no incentive to make large investments to develop advanced fibre-optic networks if they are prohibited from charging higher preferred-access fees to companies that wish to take advantage of the expanded capabilities of such networks. Thorne and other ISPs have accused Google and Skype of freeloading or free-riding for using a network of lines and cables the phone company spent billions of dollars to build.[154][195][196] Marc Andreessen states that "a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you're a large telco right now, you spend on the order of $20 billion a year on capex [capital expenditure]. You need to know how you're going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you're not ever going to get a return on continued network investment – which means you'll stop investing in the network. And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we're getting today."[197]
Proponents of net neutrality regulations say network operators have continued to under-invest in infrastructure.[198] However, according to Copenhagen Economics, U.S. investment in telecom infrastructure is 50 percent higher than in the European Union. As a share of GDP, United States broadband investment trails only the UK and South Korea, and only slightly, while exceeding Japan, Canada, Italy, Germany, and France by a sizable margin.[199] On broadband speed, Akamai reported that the US trails only South Korea and Japan among its major trading partners, and trails only Japan in the G-7 in both average peak connection speed and the percentage of the population connecting at 10 Mbit/s or higher, but is substantially ahead of most of its other major trading partners.[199]
The White House reported in June 2013 that U.S. connection speeds are "the fastest compared to other countries with either a similar population or land mass."[200] Akamai's report on "The State of the Internet" in the 2nd quarter of 2014 says "a total of 39 states saw 4K readiness rate more than double over the past year." In other words, as ZDNet reports, those states saw a major increase in the availability of the 15 Mbit/s speed needed for 4K video.[201] According to the Progressive Policy Institute and ITU data, the United States has the most affordable entry-level prices for fixed broadband in the OECD.[199][202]
In Indonesia, a very high number of Internet connections are subject to exclusive deals between the ISP and the building owner. Representatives of Google, Inc. claim that changing this dynamic could unlock much more consumer choice and higher speeds.[136] Former FCC Commissioner Ajit Pai and the Federal Election Commission's Lee Goodman also wrote in a Politico piece in February 2015, "Compare Europe, which has long had utility-style regulations, with the United States, which has embraced a light-touch regulatory model. Broadband speeds in the United States, both wired and wireless, are significantly faster than those in Europe. Broadband investment in the United States is several multiples that of Europe. And broadband's reach is much wider in the United States, despite its much lower population density."[203]
VOIP pioneer Jeff Pulver states that the uncertainty of the FCC imposing Title II, which experts said would create regulatory restrictions on using the Internet to transmit a voice call, was the "single greatest impediment to innovation" for a decade.[204] According to Pulver, investors in the companies he helped found, like Vonage, held back investment because they feared the FCC could use Title II to prevent VOIP startups from bypassing telephone networks.[204]
A 2010 paper on net neutrality by Nobel Prize-winning economist Gary Becker and his colleagues stated that "there is significant and growing competition among broadband access providers and that few significant competitive problems have been observed to date, suggesting that there is no compelling competitive rationale for such regulation."[164] Becker and fellow economists Dennis Carlton and Hal Sider found that "Between mid-2002 and mid-2008, the number of high-speed broadband access lines in the United States grew from 16 million to nearly 133 million, and the number of residential broadband lines grew from 14 million to nearly 80 million. Internet traffic roughly tripled between 2007 and 2009. At the same time, prices for broadband Internet access services have fallen sharply."[164] The PPI reports that the profit margins of U.S. broadband providers are generally one-sixth to one-eighth those of companies that use broadband (such as Apple or Google), contradicting the idea of monopolistic price-gouging by providers.[199]
When FCC chairman Tom Wheeler redefined broadband from 4 Mbit/s to 25 Mbit/s (3.125 MB/s) or greater in January 2015, FCC commissioners Ajit Pai and Mike O'Rielly believed the redefinition was intended to set the stage for the agency to settle the net neutrality fight with new regulations. The commissioners argued that the stricter speed guidelines painted the broadband industry as less competitive, justifying the FCC's moves toward Title II net neutrality regulations.[205]
A report by the Progressive Policy Institute in June 2014 argues that nearly every American can choose from at least 2–4 broadband Internet service providers, despite claims that there are only a "small number" of broadband providers.[199] Citing research from the FCC, the Institute wrote that 90 percent of American households have access to at least one wired and one wireless broadband provider at speeds of at least 4 Mbit/s (500 kbyte/s) downstream and 1 Mbit/s (125 kbyte/s) upstream, and that nearly 88 percent of Americans can choose from at least two wired providers of broadband disregarding speed (typically choosing between a cable and a telco offering). Further, three of the four national wireless companies report that they offer 4G LTE to 250–300 million Americans, with the fourth (T-Mobile) at 209 million and counting.[199] Similarly, the FCC reported in June 2008 that 99.8% of ZIP codes in the United States had two or more providers of high-speed Internet lines available, and 94.6% of ZIP codes had four or more providers, as reported by University of Chicago economists Gary Becker, Dennis Carlton, and Hal Sider in a 2010 paper.[164]
FCC commissioner Ajit Pai states that the FCC completely brushes aside the concerns of smaller competitors, who will be subject to various taxes such as state property taxes and general receipts taxes, and that as a result the ruling does nothing to create more competition within the market.[206] According to Pai, the FCC's ruling to impose Title II regulations is opposed by the country's smallest private competitors and many municipal broadband providers.[207] In his dissent, Pai noted that 142 wireless ISPs (WISPs) said that the FCC's new "regulatory intrusion into our businesses ... would likely force us to raise prices, delay deployment expansion, or both." He also noted that 24 of the country's smallest ISPs, each with fewer than 1,000 residential broadband customers, wrote to the FCC stating that Title II "will badly strain our limited resources" because they "have no in-house attorneys and no budget line items for outside counsel." Further, another 43 municipal broadband providers told the FCC that Title II "will trigger consequences beyond the Commission's control and risk serious harm to our ability to fund and deploy broadband without bringing any concrete benefit for consumers or edge providers that the market is not already proving today without the aid of any additional regulation."[153]
According to a Wired magazine article by TechFreedom's Berin Szoka, Matthew Starr, and Jon Henke, local governments and public utilities impose the most significant barriers to entry for more cable broadband competition: "While popular arguments focus on supposed 'monopolists' such as big cable companies, it's government that's really to blame." The authors state that local governments and their public utilities charge ISPs far more than their actual costs and have the final say on whether an ISP can build a network. Public officials determine what requirements an ISP must meet to get approval for access to publicly owned rights of way (which let them place their wires), thus reducing the number of potential competitors who can profitably deploy Internet services—such as AT&T's U-verse, Google Fiber, and Verizon FiOS. Kickbacks may include municipal requirements for ISPs such as building out service where it is not demanded, donating equipment, and delivering free broadband to government buildings.[208]
The authors of a research article in MIS Quarterly state that their findings subvert some expectations of how ISPs and content providers (CPs) act with respect to net neutrality laws. The paper shows that even when an ISP is under restrictions, it still has the opportunity and the incentive to act as a gatekeeper over CPs by enforcing priority delivery of content.[209]
Those in favor of forms of non-neutral tiered Internet access argue that the Internet is already not a level playing field: large companies achieve a performance advantage over smaller competitors by providing more and better-quality servers and buying high-bandwidth services. Should scrapping net neutrality regulations precipitate a price drop for lower levels of access, or for access to only certain protocols, such a change would make Internet usage more adaptable to the needs of those individuals and corporations who specifically seek differentiated tiers of service. Network expert[210] Richard Bennett has written, "A richly funded Web site, which delivers data faster than its competitors to the front porches of the Internet service providers, wants it delivered the rest of the way on an equal basis. This system, which Google calls broadband neutrality, actually preserves a more fundamental inequality."[211]
FCC commissioner Ajit Pai, who opposed the 2015 Title II reclassification of ISPs, says that the ruling allows new fees and taxes on broadband by subjecting providers to telephone-style taxes under the Universal Service Fund. Net neutrality proponent Free Press writes that "the average potential increase in taxes and fees per household would be far less" than the estimate given by net neutrality opponents, and that if there were additional taxes, the figure might be around US$4 billion; under favorable circumstances, "the increase would be exactly zero."[212] Meanwhile, the Progressive Policy Institute claims that Title II could trigger taxes and fees of up to $11 billion a year.[213] The financial website NerdWallet did its own assessment and settled on a possible US$6.25 billion tax impact, estimating that the average American household might see its tax bill increase US$67 annually.[213]
FCC spokesperson Kim Hart said that the ruling "does not raise taxes or fees. Period."[213]
According to PayPal founder and Facebook investor Peter Thiel in 2011, "Net neutrality has not been necessary to date. I don't see any reason why it's suddenly become important, when the Internet has functioned quite well for the past 15 years without it. ... Government attempts to regulate technology have been extraordinarily counterproductive in the past."[163] Max Levchin, the other co-founder of PayPal, echoed similar statements, telling CNBC, "The Internet is not broken, and it got here without government regulation and probably in part because of lack of government regulation."[214]
FCC Commissioner Ajit Pai, one of the two commissioners who opposed the net neutrality proposal, criticized the FCC's ruling on Internet neutrality, stating that the perceived threats from ISPs to deceive consumers, degrade content, or disfavor the content that they dislike are non-existent: "The evidence of these continuing threats? There is none; it's all anecdote, hypothesis, and hysteria. A small ISP in North Carolina allegedly blocked VoIP calls a decade ago. Comcast capped BitTorrent traffic to ease upload congestion eight years ago. Apple introduced Facetime over Wi-Fi first, cellular networks later. Examples this picayune and stale aren't enough to tell a coherent story about net neutrality. The bogeyman never had it so easy."[153] As one commentator put it, "FCC chairman Pai wants to switch ISP rules from proactive restrictions to after-the-fact litigation, which means a lot more leeway for ISPs that don't particularly want to be treated as impartial utilities connecting people to the internet" (Atherton, 2017).[21] FCC Commissioner Michael O'Rielly, the other opposing commissioner, also claims that the ruling is a solution to a hypothetical problem: "Even after enduring three weeks of spin, it is hard for me to believe that the Commission is establishing an entire Title II/net neutrality regime to protect against hypothetical harms. There is not a shred of evidence that any aspect of this structure is necessary. The D.C. Circuit called the prior, scaled-down version a 'prophylactic' approach. I call it guilt by imagination."[citation needed] In a Chicago Tribune article, FCC Commissioner Pai and Joshua Wright of the Federal Trade Commission argue that "the Internet isn't broken, and we don't need the president's plan to 'fix' it. Quite the opposite. The Internet is an unparalleled success story. It is a free, open and thriving platform."[215]
Opponents argue that net neutrality regulations prevent service providers from providing more affordable Internet access to those who cannot afford it.[185] Under net neutrality rules, ISPs would be unable to use a practice known as zero-rating to provide Internet access for free or at a reduced cost to the poor.[216][185] For example, low-income users who cannot afford bandwidth-hogging Internet services such as video streams could be exempted from paying through subsidies or advertising.[185] However, under the rules, ISPs would not be able to discriminate among traffic, thus forcing low-income users to pay for high-bandwidth usage like other users.[216]
The Wikimedia Foundation, which runs Wikipedia, created Wikipedia Zero to provide Wikipedia free of charge on mobile phones to low-income users, especially those in developing countries. However, the practice violates net neutrality rules, under which traffic must be treated equally regardless of the users' ability to pay.[185][217] In 2014, Chile banned the practice of Internet service providers giving users free access to websites like Wikipedia and Facebook, saying the practice violates net neutrality rules.[218] In 2016, India banned the Free Basics application run by Internet.org, which provides users in less developed countries with free access to a variety of websites like Wikipedia, BBC, Dictionary.com, health sites, Facebook, ESPN, and weather reports—ruling that the initiative violated net neutrality.[219]
Net neutrality rules would prevent traffic from being allocated to the users who need it most, according to David Farber.[189] Because net neutrality regulations prevent discrimination of traffic, networks would have to treat critical traffic the same as non-critical traffic. According to Farber, "When traffic surges beyond the ability of the network to carry it, something is going to be delayed. When choosing what gets delayed, allowing a network to favor traffic from, say, a patient's heart monitor over traffic delivering a music download makes sense. It also makes sense to allow network operators to restrict harmful traffic, such as viruses, worms, and spam."[189]
Tim Wu, though a proponent of network neutrality, claims that the current Internet is not neutral, as its implementation of best effort generally favors file transfer and other non-time-sensitive traffic over real-time communications.[220] Generally, a network that blocks some nodes or services would be expected to be less useful to its customers than one that does not. Therefore, for a network to remain significantly non-neutral requires either that the customers not be concerned about the particular non-neutralities or that the customers not have any meaningful choice of providers; otherwise, they would presumably switch to another provider with fewer restrictions.[citation needed]
While the network neutrality debate continues, network providers often enter into peering arrangements among themselves. These agreements often stipulate how certain information flows should be treated. In addition, network providers often implement policies such as blocking port 25 to prevent insecure systems from serving as spam relays, or blocking other ports commonly used by decentralized music search applications implementing peer-to-peer networking models. They also present terms of service that often include rules about the use of certain applications as part of their contracts with users.[citation needed] Most consumer Internet providers implement policies like these. The MIT Mantid Port Blocking Measurement Project is a measurement effort to characterize Internet port blocking and potentially discriminatory practices. However, the effect of peering arrangements among network providers is only local to the peers that enter into the arrangements and cannot affect traffic flow outside their scope.[citation needed]
Jon Peha from Carnegie Mellon University believes it is important to create policies that protect users from harmful traffic discrimination while allowing beneficial discrimination. Peha discusses the technologies that enable traffic discrimination, examples of different types of discrimination, and the potential impacts of regulation.[221] Google chairman Eric Schmidt aligns Google's views on data discrimination with Verizon's: "I want to be clear what we mean by Net neutrality: What we mean is if you have one data type like video, you don't discriminate against one person's video in favor of another. But it's okay to discriminate across different types. So you could prioritize voice over video. And there is general agreement with Verizon and Google on that issue."[165] Echoing similar comments by Schmidt, Google's Chief Internet Evangelist and "father of the Internet", Vint Cerf, says that "it's entirely possible that some applications need far more latency, like games. Other applications need broadband streaming capability in order to deliver real-time video. Others don't really care as long as they can get the bits there, like e-mail or file transfers and things like that. But it should not be the case that the supplier of the access to the network mediates this on a competitive basis, but you may still have different kinds of service depending on what the requirements are for the different applications."[222]
Content caching is the process by which frequently accessed content is temporarily stored at strategic network positions (e.g., on servers close to end users[223]) to achieve several performance objectives. For example, caching is commonly used by ISPs to reduce network congestion, resulting in a superior quality of experience (QoE) as perceived by end users.
Since the storage available in cache servers is limited, caching involves selecting the contents worth storing. Several cache algorithms have been designed to perform this selection, which, in general, leads to storing the most popular contents. Cached contents are retrieved at a higher QoE (e.g., lower latency), and caching can therefore be considered a form of traffic differentiation.[221] However, caching is not generally viewed as a form of discriminatory traffic differentiation. For example, the technical writer Adam Marcus states that "accessing content from edge servers may be a bit faster for users, but nobody is being discriminated against and most content on the Internet is not latency-sensitive".[223] In line with this statement, caching is not regulated by legal frameworks favorable to net neutrality, such as the Open Internet Order issued by the FCC in 2015. Indeed, the legitimacy of caching has never been put in doubt by opponents of net neutrality. On the contrary, the complexity of caching operations (e.g., extensive information processing) has been cited by the FCC as one of the technical reasons why ISPs should not be considered common carriers, which legitimates the abrogation of net neutrality rules.[224] Under a net neutrality regime, prioritization of one class of traffic over another is allowed only if several requirements are met (e.g., objectively different QoS requirements).[225] When it comes to caching, however, a selection among contents of the same class has to be performed (e.g., the set of videos worth storing in cache servers). In the spirit of general deregulation with regard to caching, no rule specifies how this selection can be carried out in a non-discriminatory way. Nevertheless, the scientific literature considers caching a potentially discriminatory process and provides possible guidelines to address it.[226] For example, non-discriminatory caching might be performed based on the popularity of contents, with the aim of guaranteeing the same QoE to all users, or, alternatively, to achieve some common welfare objective.[226]
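The popularity-based criterion mentioned above can be sketched concisely. The following Python fragment is an illustrative sketch, not an algorithm from the cited literature; the item names and capacity are invented. It simply keeps the most requested items, an LFU-style selection:

    from collections import Counter

    def select_cache_contents(request_log, capacity):
        """Keep the `capacity` most requested items -- a popularity-based
        (LFU-style) selection criterion."""
        popularity = Counter(request_log)          # item -> request count
        return {item for item, _ in popularity.most_common(capacity)}

    # With room for 2 items, the two most requested videos are cached.
    log = ["video_a", "video_b", "video_a", "video_c", "video_a", "video_b"]
    print(select_cache_contents(log, capacity=2))  # {'video_a', 'video_b'}

Even a selection rule this simple is a policy choice: items just below the popularity cutoff receive worse QoE, which is exactly the potential for discrimination the literature discusses.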
As far as content delivery networks (CDNs) are concerned, the relationship between caching and net neutrality is even more complex. CDNs are employed to allow scalable and highly efficient content delivery rather than to grant access to the Internet. Consequently, unlike ISPs, CDNs are entitled to charge content providers for caching their content. Therefore, although this may be regarded as a form of paid traffic prioritization, CDNs are not subject to net neutrality regulations and are rarely included in the debate. Despite this, some argue that the Internet ecosystem has changed to such an extent that all the players involved in content delivery can distort competition and should therefore also be included in the discussion around net neutrality.[226] Among those, the analyst Dan Rayburn has suggested that "the Open Internet Order enacted by the FCC in 2015 was myopically focussed on ISPs".[227]
Internet routers forward packets according to the various peering and transport agreements that exist between network operators. Many networks using Internet protocols now employ quality of service (QoS), and network service providers frequently enter into service-level agreements with each other embracing some sort of QoS. There is no single, uniform method of interconnecting networks using IP, and not all networks that use IP are part of the Internet. IPTV networks, for example, are isolated from the Internet and are therefore not covered by network neutrality agreements. The IP datagram includes a 3-bit-wide Precedence field and a larger DiffServ Code Point (DSCP) field that are used to request a level of service, consistent with the notion that protocols in a layered architecture offer service through service access points. These fields are sometimes ignored, especially if they request a level of service outside the originating network's contract with the receiving network. They are commonly used in private networks, especially those including Wi-Fi networks, where priority is enforced. While there are several ways of communicating service levels across Internet connections, such as SIP, RSVP, IEEE 802.11e, and MPLS, the most common scheme combines SIP and DSCP. Router manufacturers now sell routers with logic enabling them to route traffic for various classes of service at wire speed.[citation needed]
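An application can request one of these service levels by writing a DSCP value into the headers of the packets it sends. A minimal Python sketch follows (assuming a host that exposes the IP_TOS socket option, as Linux does; the destination address is from a reserved documentation range, so the probe goes nowhere in practice):

    import socket

    DSCP_EF = 46               # "Expedited Forwarding", a low-loss, low-latency class
    tos = DSCP_EF << 2         # DSCP occupies the upper 6 bits of the old ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    # Datagrams from this socket now carry the EF code point; routers along
    # the path may honor, remap, or ignore it, as described above.
    sock.sendto(b"probe", ("192.0.2.10", 5004))

As the paragraph above notes, the marking is only a request: whether it is honored depends on the agreements between the networks the packet traverses.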
Quality of service is sometimes measured with tools that test a user's connection quality, such as the Network Diagnostic Tool (NDT) and services like speedtest.net. These tools are known to be used by National Regulatory Authorities (NRAs) as a way of detecting net neutrality violations. However, there are very few examples of such measurements being used in any significant way by NRAs, or in network policy for that matter. Often these tools go unused not because they fail to record the results they are meant to record, but because the measurements are inflexible and difficult to exploit for any significant purpose. According to Ioannis Koukoutsidis, the problems with the current tools used to measure QoS stem from the lack of a standard detection methodology, the need to detect the various methods by which an ISP might violate net neutrality, and the inability to test an average measurement for a specific population of users.[228]
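The underlying idea of such tools can be sketched in a few lines: repeatedly measure connection quality toward different services and look for systematic gaps. The Python fragment below is purely illustrative; the hosts are IANA example domains standing in for two services of the same class, and the threshold is arbitrary:

    import socket
    import statistics
    import time

    def connect_latency_ms(host, port=443, samples=5):
        """Median TCP connect time to `host`, in milliseconds."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                times.append((time.perf_counter() - start) * 1000)
        return statistics.median(times)

    a = connect_latency_ms("example.com")
    b = connect_latency_ms("example.org")
    if abs(a - b) > 50:  # arbitrary 50 ms threshold, for illustration only
        print(f"latency gap of {abs(a - b):.0f} ms -- worth a closer look")

A gap of this kind can just as easily reflect routing distance or server load as deliberate discrimination, which is precisely why Koukoutsidis calls for a standard detection methodology.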
With the emergence of multimedia, VoIP, IPTV, and other applications that benefit from low latency, various attempts have arisen to address the inability of some private networks to limit latency, including the proposition of offering tiered service levels that would shape Internet transmissions at the network layer based on application type. These efforts are ongoing and are starting to yield results as wholesale Internet transport providers begin to amend service agreements to include service levels.[229]
Advocates of net neutrality have proposed several methods of implementing a net-neutral Internet that includes a notion of quality of service.
There are also some discrepancies in how wireless networks affect the implementation of net neutrality policy, some of which are noted in the studies of Christopher Yoo. In one research article, he claimed that "...bad handoffs, local congestion, and the physics of wave propagation make wireless broadband networks significantly less reliable than fixed broadband networks."[232]
Broadband Internet access has most often been sold to users based on Excess Information Rate, or maximum available bandwidth. If ISPs can provide varying levels of service to websites at various prices, this may be a way to manage the costs of unused capacity by selling surplus bandwidth (or to "leverage price discrimination to recoup costs of 'consumer surplus'"). However, purchasers of connectivity on the basis of Committed Information Rate, or guaranteed bandwidth capacity, must be able to expect the capacity they purchase to be available to meet their communications requirements. Various studies have sought to provide network providers with the necessary formulas for adequately pricing such a tiered service for their customer base. But while network neutrality is primarily focused on protocol-based provisioning, most of the pricing models are based on bandwidth restrictions.[233]
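A toy calculation makes the CIR/EIR distinction concrete. All figures below are invented for illustration and are not drawn from the cited studies:

    # Hypothetical two-tier pricing on a shared 1 Gb/s link.
    # CIR customers get guaranteed capacity; EIR customers share the rest.
    LINK_CAPACITY_MBPS = 1000

    cir_customers = 4          # each guaranteed 100 Mb/s, at a premium price
    cir_rate, cir_price = 100, 80.0
    eir_customers = 40         # best-effort; share whatever capacity remains
    eir_price = 10.0

    guaranteed = cir_customers * cir_rate
    best_effort_pool = LINK_CAPACITY_MBPS - guaranteed
    revenue = cir_customers * cir_price + eir_customers * eir_price

    print(f"best-effort pool: {best_effort_pool} Mb/s "
          f"({best_effort_pool / eir_customers:.0f} Mb/s per EIR customer if all are active)")
    print(f"monthly revenue: ${revenue:.2f}")

In this sketch the provider sells 400 Mb/s as guaranteed capacity and monetizes the remaining 600 Mb/s as shared best-effort service, which is the "selling surplus bandwidth" idea described above.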
Many economists have analyzed net neutrality to compare various hypothetical pricing models. For instance, economics professors Michael L. Katz and Benjamin E. Hermalin of the University of California, Berkeley co-published a 2007 paper titled "The Economics of Product-Line Restrictions with an Application to the Network Neutrality Debate". In it, they compared the single-service economic equilibrium to the multi-service economic equilibria under net neutrality.[234]
On 12 July 2017, an event called the Day of Action was held to advocate for net neutrality in the United States in response to Ajit Pai's plans to remove government policies that upheld net neutrality. Several websites participated in the event, including Amazon, Netflix, Google, and other equally well-known sites. The gathering was called "the largest online protest in history." Websites chose many different ways to convey their message. The founder of the web, Tim Berners-Lee, published a video defending the FCC's rules. Reddit displayed a pop-up message that loads slowly to illustrate the effect of removing net neutrality. Other websites put up less obvious notifications: Amazon posted a hard-to-notice link, and Google published a policy blog post rather than a more prominent message.[235]
A poll conducted by Mozilla showed strong support for net neutrality across US political parties. Of the approximately 1,000 responses received, 76% of Americans, 81% of Democrats, and 73% of Republicans supported net neutrality.[236] The poll also showed that 78% of Americans do not think that Trump's government can be trusted to protect access to the Internet. Net neutrality supporters had also left many comments on the FCC website opposing plans to remove net neutrality, especially after a segment by John Oliver on the topic aired on his show Last Week Tonight.[237] He urged his viewers to comment on the FCC's website, and the resulting flood of comments crashed the site, with the media coverage of the incident inadvertently helping the issue reach greater audiences.[238] In response, however, Ajit Pai singled out one particular comment that specifically supported removal of net neutrality policies.
At the end of August 2017, the FCC released more than 13,000 pages of net neutrality complaints filed by consumers, one day before the deadline for the public to comment on Ajit Pai's proposal to remove net neutrality. It has been suggested that the FCC ignored evidence against its proposal in order to remove the protections faster, and it has also been noted that the FCC nowhere described any attempt to resolve the complaints. Regardless, Ajit Pai's proposal drew more than 22 million comments, though a large proportion were spam. There were, however, 1.5 million personalized comments, 98.5% of them protesting Ajit Pai's plan.[239]
As of January 2018,[needs update] fifty senators had endorsed a legislative measure to override the Federal Communications Commission's decision to deregulate the broadband industry. Congressional Review Act paperwork was filed on 9 May 2018, which allowed the Senate to vote on the permanence of the new net neutrality rules proposed by the Federal Communications Commission.[240] The vote passed, and a resolution was approved to attempt to undo the FCC's new rules on net neutrality; however, officials doubted there was enough time to completely repeal the rules before the Open Internet Order officially expired on 11 June 2018.[241] A September 2018 report from Northeastern University and the University of Massachusetts Amherst found that U.S. telecom companies were indeed slowing Internet traffic to and from certain popular apps.[242] In March 2019, congressional supporters of net neutrality introduced the Save the Internet Act in both the House and the Senate, which if passed would reverse the FCC's 2017 repeal of net neutrality protections.[243]
A digital divide is the difference between those who have access to the Internet and digital technologies and those who do not, often compared between urban and rural areas.[244] In the U.S., city government technology leaders warned in 2017 that the FCC's repeal of net neutrality would widen the digital divide, negatively affect small businesses, and reduce job opportunities for middle-class and low-income citizens. The FCC reports on its website that only 65 percent of Americans in rural areas have access to high-speed Internet, compared with 97 percent in urban areas.[245][246] Public Knowledge has stated that the repeal will have a larger impact on those living in rural areas without Internet access.[247] In developing countries like India, which lack reliable electricity and Internet connections, only 9 percent of those living in rural areas have Internet access, compared with 64 percent of those in urban areas.[248]
|
https://en.wikipedia.org/wiki/Network_neutrality
|
Shape Expressions (ShEx)[2] is a data modelling language for validating and describing Resource Description Framework (RDF) data.
It was proposed at the 2012 RDF Validation Workshop[3] as a high-level, concise language for RDF validation.
Shapes can be defined in a human-friendly compact syntax called ShExC, or using any RDF serialization format such as JSON-LD or Turtle.
ShEx expressions can be used both to describe RDF and to automatically check the conformance of RDF data.
The syntax of ShEx is similar to Turtle and SPARQL, while its semantics are inspired by regular-expression languages like RelaxNG.
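For example, the following ShExC snippet (using the schema.org vocabulary) defines a Person shape:

    PREFIX schema: <http://schema.org/>
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

    <Person> {
      schema:name  xsd:string ;     # exactly one string-valued name
      schema:knows @<Person> *      # zero or more links to other Persons
    }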
The previous example declares that nodes conforming to shape Person must have one property schema:name with a string value and zero or more properties schema:knows whose values must conform with shape Person.
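Such conformance checks can be run programmatically, for instance with the third-party PyShEx library. The Python sketch below is written from memory of PyShEx's documented interface, so the exact names and signatures should be treated as assumptions, and the data is invented:

    from pyshex import ShExEvaluator  # assumed import path; see the PyShEx docs

    SHEX = """
    PREFIX schema: <http://schema.org/>
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
    <http://example.org/Person> {
      schema:name xsd:string ;
      schema:knows @<http://example.org/Person> *
    }
    """

    RDF = """
    PREFIX schema: <http://schema.org/>
    PREFIX ex: <http://example.org/>
    ex:alice schema:name "Alice" ; schema:knows ex:bob .
    ex:bob   schema:name "Bob" .
    """

    # Ask whether ex:alice conforms to the Person shape.
    for r in ShExEvaluator(rdf=RDF, schema=SHEX,
                           focus="http://example.org/alice",
                           start="http://example.org/Person").evaluate():
        print(r.focus, "conforms" if r.result else f"fails: {r.reason}")

Here ex:alice should conform: she has exactly one schema:name and her single schema:knows value, ex:bob, itself conforms to the Person shape.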
|
https://en.wikipedia.org/wiki/ShEx
|