Car dependency is a phenomenon in urban planning wherein existing and planned infrastructure prioritizes the use of automobiles over other modes of transportation, such as public transport, bicycles, and walking. Car dependency has been blamed for producing a more polluting transport system than one in which all transportation modes are treated more equally.[1] Car infrastructure is often paid for by governments from general taxes rather than gasoline taxes, or is mandated by governments.[2] For instance, many cities have minimum parking requirements for new housing, which in practice requires developers to "subsidize" drivers.[3] In some places, bicycles and rickshaws are banned from using road space. The road lobby plays an important role in maintaining car dependency, arguing that car infrastructure is good for economic growth.[1] In many modern cities, automobiles are convenient and sometimes necessary to move easily.[4][5] When it comes to automobile use, there is a spiraling effect in which traffic congestion produces the 'demand' for more and bigger roads and the removal of 'impediments' to traffic flow, such as pedestrians, signalized crossings, traffic lights, cyclists, and various forms of street-based public transit, such as trams. These measures make automobile use more advantageous at the expense of other modes of transport, inducing greater traffic volumes. Additionally, the urban design of cities adjusts to the needs of automobiles in terms of movement and space. Buildings are replaced by parking lots. Open-air shopping streets are replaced by enclosed shopping malls. Walk-in banks and fast-food stores are replaced by drive-in versions of themselves that are inconveniently located for pedestrians. Town centers with a mixture of commercial, retail, and entertainment functions are replaced by single-function business parks, 'category-killer' retail boxes, and 'multiplex' entertainment complexes, each surrounded by large tracts of parking. These kinds of environments require automobiles to access them, thus inducing even more traffic onto the increased road space. This results in congestion, and the cycle above continues. Roads get ever bigger, consuming ever greater tracts of land previously used for housing, manufacturing, and other socially and economically useful purposes. Public transit becomes less viable and socially stigmatized, eventually becoming a minority form of transportation. People's choices and freedoms to live functional lives without the use of the car are greatly reduced. Such cities are automobile-dependent. Automobile dependency is seen primarily as an issue of environmental sustainability, due to the consumption of non-renewable resources and the production of greenhouse gases responsible for global warming. It is also an issue of social and cultural sustainability. Like gated communities, the private automobile produces physical separation between people and reduces the opportunities for unstructured social encounters that are a significant aspect of social capital formation and maintenance in urban environments. As automobile use rose drastically in the 1910s, American road administrators favored building roads to accommodate traffic.[6] Administrators and engineers in the interwar period spent their resources making small adjustments to accommodate traffic, such as widening lanes and adding parking spaces, as opposed to larger projects that would change the built environment altogether.[6] American cities began to tear out tram systems in the 1920s.
Car dependency itself took shape around the Second World War, when urban infrastructure began to be built exclusively around the car.[7] The resultant economic and built-environment restructuring allowed wide adoption of automobile use. In the United States, the expansive manufacturing infrastructure, the increase in consumerism, and the establishment of the Interstate Highway System set forth the conditions for car dependence in communities. In 1956, the Highway Trust Fund[8] was established in America, reinvesting gasoline taxes back into car-based infrastructure. In 1916 the first zoning ordinance was introduced in New York City, the 1916 Zoning Resolution. Zoning was created as a means of organizing specific land uses in a city so as to avoid potentially harmful adjacencies, such as heavy manufacturing next to residential districts, which were common in large urban areas in the 19th and early 20th centuries. Zoning codes also determine the permitted residential building types and densities in specific areas of a city, by defining such things as whether single-family homes and multi-family residential buildings are allowed as of right in certain areas. The overall effect of zoning over the last century has been to create areas with similar land use patterns in cities that had previously been a mix of heterogeneous residential and business uses. The problem is particularly severe just outside of cities, in suburban areas around the periphery where strict zoning codes almost exclusively allow for single-family detached housing.[9] Strict zoning codes that heavily segregate residential and commercial land uses contribute to car dependency by making it nearly impossible to access all of one's given needs, such as housing, work, school and recreation, without the use of a car. One key solution to the spatial problems caused by zoning would be a robust public transportation network. There is also currently a movement to amend older zoning ordinances to create more mixed-use zones that combine residential and commercial land uses within the same building or within walking distance, creating the so-called 15-minute city. Parking minimums are also a part of modern zoning codes, and contribute to car dependency through a process known as induced demand. Parking minimums require a certain number of parking spots based on the land use of a building and are often designed in zoning codes to represent the maximum possible need at any given time.[10] This has resulted in American cities having nearly eight parking spaces for every car, creating cities almost fully dedicated to parking, from free on-street parking to parking lots up to three times the size of the businesses they serve.[10] This abundance of parking has eroded competition from other forms of transportation, so that driving becomes the de facto choice for many people even when alternatives exist. The design of city roads can contribute significantly to the perceived and actual need to use a car over other modes of transportation in daily life. In the urban context, car dependence is induced by design factors that operate in two complementary directions: first, design that makes driving easier, and second, design that makes all other forms of transportation more difficult. Frequently these two forces overlap in a compounding effect, inducing more car dependence in areas that would otherwise have the potential for a more heterogeneous mix of transportation options.
These factors include the width of roads, which makes driving faster and therefore 'easier' while creating a less safe environment for pedestrians and cyclists who share the same road. The prevalence of on-street parking on most residential and commercial streets also makes driving easier while taking away street space that could be used for protected bike lanes, dedicated bus lanes, or other forms of public transportation. According to the Handbook on estimation of external costs in the transport sector,[11] produced by Delft University and the main reference in the European Union for assessing the externalities of cars, driving a car generates several main categories of external cost. Other negative externalities may include increased cost of building infrastructure, inefficient use of space and energy, pollution and per capita fatality.[12][13] There are a number of planning and design approaches to redressing automobile dependency,[14] known variously as New Urbanism, transit-oriented development, and smart growth. Most of these approaches focus on the physical urban design, urban density and land use zoning of cities. Paul Mees argued that investment in good public transit, centralized management by the public sector and appropriate policy priorities are more significant than issues of urban form and density. Removal of minimum parking requirements from building codes can alleviate the problems generated by car dependency, since minimum parking requirements occupy valuable space that could otherwise be used for housing. However, removing minimum parking requirements will require implementation of additional policies to manage the resulting increase in alternative parking methods.[15] There are, of course, many who argue against a number of the details within any of the complex arguments related to this topic, particularly the relationships between urban density and transit viability, or the nature of viable alternatives to automobiles that provide the same degree of flexibility and speed. There is also research into the future of automobility itself in terms of shared usage, size reduction, road-space management and more sustainable fuel sources. Car-sharing is one example of a solution to automobile dependency. Research has shown that in the United States, services like Zipcar have reduced demand by about 500,000 cars.[16] In the developing world, companies like eHi,[17] Carrot,[18][19] Zazcar[20] and Zoom have replicated or modified Zipcar's business model to improve urban transportation, providing a broader audience with greater access to the benefits of a car and "last-mile" connectivity between public transportation and an individual's destination. Car sharing also reduces private vehicle ownership. Whether smart growth does or can reduce the problems of automobile dependency associated with urban sprawl has been fiercely contested for several decades. An influential 1989 study by Peter Newman and Jeff Kenworthy compared 32 cities across North America, Australia, Europe and Asia.[21] The study has been criticised for its methodology,[22] but its main finding, that denser cities, particularly in Asia, have lower car use than sprawling cities, particularly in North America, has been largely accepted, although the relationship is clearer at the extremes across continents than within countries, where conditions are more similar.
Within cities, studies from many countries (mainly in the developed world) have shown that denser urban areas with a greater mixture of land uses and better public transport tend to have lower car use than less dense suburban and exurban residential areas. This usually holds true even after controlling for socio-economic factors such as differences in household composition and income.[23] This does not necessarily imply that suburban sprawl causes high car use, however. One confounding factor, which has been the subject of many studies, is residential self-selection:[24] people who prefer to drive tend to move towards low-density suburbs, whereas people who prefer to walk, cycle or use transit tend to move towards higher-density urban areas that are better served by public transport. Some studies have found that, when self-selection is controlled for, the built environment has no significant effect on travel behaviour.[25] More recent studies using more sophisticated methodologies have generally rejected these findings: density, land use and public transport accessibility can influence travel behaviour, although social and economic factors, particularly household income, usually exert a stronger influence.[26] Reviewing the evidence on urban intensification, smart growth and their effects on automobile use, Melia et al. (2011)[27] found support for the arguments of both supporters and opponents of smart growth. Planning policies that increase population densities in urban areas do tend to reduce car use, but the effect is weak, so doubling the population density of a particular area will not halve the frequency or distance of car use. These findings led them to propose the paradox of intensification. At the citywide level, it may be possible, through a range of positive measures, to counteract the increases in traffic and congestion that would otherwise result from increasing population densities:[28] Freiburg im Breisgau in Germany is one example of a city which has been more successful in reducing automobile dependency and constraining increases in traffic despite substantial increases in population density.[29] This study also reviewed evidence on the local effects of building at higher densities. At the level of the neighbourhood or individual development, positive measures (like improvements to public transport) will usually be insufficient to counteract the traffic effect of increasing population density. This leaves policy-makers with four choices.
https://en.wikipedia.org/wiki/Car_dependence
A bicycle, also called a pedal cycle, bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven, single-track vehicle with two wheels attached to a frame, one behind the other. A bicycle rider is called a cyclist, or bicyclist. Bicycles were introduced in the 19th century in Europe. By the early 21st century there were more than 1 billion bicycles.[1][2] There are many more bicycles than cars.[3][4][5] Bicycles are the principal means of transport in many regions. They also provide a popular form of recreation, and have been adapted for use as children's toys. Bicycles are used for fitness, military and police applications, courier services, bicycle racing, and artistic cycling. The basic shape and configuration of a typical upright or "safety" bicycle has changed little since the first chain-driven model was developed around 1885.[6][7][8] However, many details have been improved, especially since the advent of modern materials and computer-aided design. These have allowed for a proliferation of specialized designs for many types of cycling. In the 21st century, electric bicycles have become popular. The bicycle's invention has had an enormous effect on society, both in terms of culture and of advancing modern industrial methods. Several components that played a key role in the development of the automobile were initially invented for use in the bicycle, including ball bearings, pneumatic tires, chain-driven sprockets, and tension-spoked wheels.[9] The word bicycle first appeared in English print in The Daily News in 1868, to describe "Bysicles and trysicles" on the "Champs Elysées and Bois de Boulogne".[10] The word was first used in 1847 in a French publication to describe an unidentified two-wheeled vehicle, possibly a carriage.[10] The design of the bicycle was an advance on the velocipede, although the words were used with some degree of overlap for a time.[10][11] Other words for bicycle include "bike",[12] "pushbike",[13] "pedal cycle",[14] or "cycle".[15] In Unicode, the code point for "bicycle" is 0x1F6B2. The entity &#x1F6B2; in HTML produces 🚲.[16] Although bike and cycle are used interchangeably to refer mostly to two types of two-wheelers, the terms still vary across the world. In India, for example, a cycle[17] refers only to a two-wheeler using pedal power, whereas the term bike is used, instead of motorcycle/motorbike, to describe a two-wheeler with an internal combustion engine or electric motor as its source of motive power. The "dandy horse",[18] also called Draisienne or Laufmaschine ("running machine"), was the first human means of transport to use only two wheels in tandem and was invented by the German Baron Karl von Drais. It is regarded as the first bicycle, and von Drais is seen as the "father of the bicycle",[19][20][21][22] but it did not have pedals.[23][24][25][26] Von Drais introduced it to the public in Mannheim in 1817 and in Paris in 1818.[27][28] Its rider sat astride a wooden frame supported by two in-line wheels and pushed the vehicle along with his or her feet while steering the front wheel.[27] The first mechanically propelled, two-wheeled vehicle may have been built by Kirkpatrick MacMillan, a Scottish blacksmith, in 1839, although the claim is often disputed.[29]
He is also associated with the first recorded instance of a cycling traffic offense, when a Glasgow newspaper in 1842 reported an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a little girl in Glasgow and was fined five shillings (equivalent to £30 in 2023).[30] In the early 1860s, Frenchmen Pierre Michaux and Pierre Lallement took bicycle design in a new direction by adding a mechanical crank drive with pedals on an enlarged front wheel (the velocipede). This was the first bicycle design to enter mass production. Another French inventor named Douglas Grasso had a failed prototype of Pierre Lallement's bicycle several years earlier. Several inventions followed using rear-wheel drive, the best known being the rod-driven velocipede by Scotsman Thomas McCall in 1869. In that same year, bicycle wheels with wire spokes were patented by Eugène Meyer of Paris.[31] The French vélocipède, made of iron and wood, developed into the "penny-farthing" (historically known as an "ordinary bicycle", a retronym, since there was then no other kind).[32] It featured a tubular steel frame on which were mounted wire-spoked wheels with solid rubber tires. These bicycles were difficult to ride due to their high seat and poor weight distribution. In 1868 Rowley Turner, a sales agent of the Coventry Sewing Machine Company (which soon became the Coventry Machinists Company), brought a Michaux cycle to Coventry, England. His uncle, Josiah Turner, and business partner James Starley used this as a basis for the 'Coventry Model' in what became Britain's first cycle factory.[33] The dwarf ordinary addressed some of these faults by reducing the front wheel diameter and setting the seat further back. This, in turn, required gearing—effected in a variety of ways—to efficiently use pedal power. Having to both pedal and steer via the front wheel remained a problem. Englishman J.K. Starley (nephew of James Starley), J.H. Lawson, and Shergold solved this problem by introducing the chain drive (originated by the unsuccessful "bicyclette" of Englishman Henry Lawson),[34] connecting the frame-mounted cranks to the rear wheel. These models were known as safety bicycles, dwarf safeties, or upright bicycles for their lower seat height and better weight distribution, although without pneumatic tires the ride of the smaller-wheeled bicycle would be much rougher than that of the larger-wheeled variety. Starley's 1885 Rover, manufactured in Coventry,[35] is usually described as the first recognizably modern bicycle.[36] Soon the seat tube was added, which created the modern bike's double-triangle diamond frame. Further innovations increased comfort and ushered in a second bicycle craze, the 1890s Golden Age of Bicycles. In 1888, Scotsman John Boyd Dunlop introduced the first practical pneumatic tire, which soon became universal. Willie Hume demonstrated the supremacy of Dunlop's tyres in 1889, winning the tyre's first-ever races in Ireland and then England.[37][38] Soon after, the rear freewheel was developed, enabling the rider to coast. This refinement led to the 1890s invention[39] of coaster brakes. Dérailleur gears and hand-operated Bowden cable-pull brakes were also developed during these years, but were only slowly adopted by casual riders. The Svea Velocipede, with a vertical pedal arrangement and locking hubs, was introduced in 1892 by the Swedish engineers Fredrik Ljungström and Birger Ljungström. It attracted attention at the World Fair and was produced in a few thousand units. In the 1870s many cycling clubs flourished. They were popular in a time when there were no cars on the market and the principal mode of transportation was horse-drawn vehicles. Among the earliest clubs was The Bicycle Touring Club, which has operated since 1878.
By the turn of the century, cycling clubs flourished on both sides of the Atlantic, and touring and racing became widely popular. The Raleigh Bicycle Company was founded in Nottingham, England in 1888. It became the biggest bicycle manufacturing company in the world, making over two million bikes per year.[40] Bicycles and horse buggies were the two mainstays of private transportation just prior to the automobile, and the grading of smooth roads in the late 19th century was stimulated by the widespread advertising, production, and use of these devices.[8] More than 1 billion bicycles have been manufactured worldwide as of the early 21st century.[1][2] Bicycles are the most common vehicle of any kind in the world, and the most numerous model of any kind of vehicle, whether human-powered or motor vehicle, is the Chinese Flying Pigeon, with numbers exceeding 500 million.[1] The next most numerous vehicle, the Honda Super Cub motorcycle, has more than 100 million units made,[41] while the most produced car, the Toyota Corolla, has reached 44 million and counting.[3][4][5][42] Bicycles are used for transportation, bicycle commuting, and utility cycling.[43] They are also used professionally by mail carriers, paramedics, police, messengers, and general delivery services. Military uses of bicycles include communications, reconnaissance, troop movement, supply of provisions, and patrol, such as in bicycle infantries.[44] They are also used for recreational purposes, including bicycle touring, mountain biking, physical fitness, and play. Bicycle sports include racing, BMX racing, track racing, criterium, roller racing, sportives and time trials. Major multi-stage professional events are the Giro d'Italia, the Tour de France, the Vuelta a España, the Tour de Pologne, and the Volta a Portugal. They are also used for entertainment and pleasure in other ways, such as in organised mass rides, artistic cycling and freestyle BMX. The bicycle has undergone continual adaptation and improvement since its inception. These innovations have continued with the advent of modern materials and computer-aided design, allowing for a proliferation of specialized bicycle types, improved bicycle safety, and riding comfort.[45] Bicycles can be categorized in many different ways: by function, by number of riders, by general construction, by gearing or by means of propulsion. The more common types include utility bicycles, mountain bicycles, racing bicycles, touring bicycles, hybrid bicycles, cruiser bicycles, and BMX bikes. Less common are tandems, low riders, tall bikes, fixed gear, folding models, amphibious bicycles, cargo bikes, recumbents and electric bicycles. Unicycles, tricycles and quadracycles are not strictly bicycles, as they have respectively one, three and four wheels, but are often referred to informally as "bikes" or "cycles". A bicycle stays upright while moving forward by being steered so as to keep its center of mass over the wheels.[46] This steering is usually provided by the rider, but under certain conditions may be provided by the bicycle itself.[47] The combined center of mass of a bicycle and its rider must lean into a turn to successfully navigate it.
This lean is induced by a method known as countersteering, which can be performed by the rider turning the handlebars directly with the hands[48] or indirectly by leaning the bicycle.[49] Short-wheelbase or tall bicycles, when braking, can generate enough stopping force at the front wheel to flip longitudinally.[50] The act of purposefully using this force to lift the rear wheel and balance on the front without tipping over is a trick known as a stoppie, endo, or front wheelie. The bicycle is extraordinarily efficient in both biological and mechanical terms. The bicycle is the most efficient human-powered means of transportation in terms of energy a person must expend to travel a given distance.[51] From a mechanical viewpoint, up to 99% of the energy delivered by the rider into the pedals is transmitted to the wheels, although the use of gearing mechanisms may reduce this by 10–15%.[52][53] In terms of the ratio of cargo weight a bicycle can carry to total weight, it is also an efficient means of cargo transportation. A human traveling on a bicycle at low to medium speeds of around 16–24 km/h (10–15 mph) uses only the power required to walk. Air drag, which is proportional to the square of speed, requires dramatically higher power outputs as speeds increase. If the rider is sitting upright, the rider's body creates about 75% of the total drag of the bicycle/rider combination. Drag can be reduced by seating the rider in a more aerodynamically streamlined position. Drag can also be reduced by covering the bicycle with an aerodynamic fairing. The fastest recorded unpaced speed on a flat surface is 144.18 km/h (89.59 mph).[54] In addition, the carbon dioxide generated in the production and transportation of the food required by the bicyclist, per mile traveled, is less than 1⁄10 that generated by energy-efficient motorcars.[55]
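The relationship between speed and the power a rider must supply can be made concrete with a simple model. The sketch below is illustrative only: the drag area, rolling-resistance coefficient and combined mass are assumed round numbers, not values from the text, and wind and gradient are ignored.

```python
# Illustrative estimate of the pedaling power needed to hold a speed on
# flat ground in still air. All coefficients below are assumptions chosen
# for the example, not measured values.
RHO = 1.225   # air density, kg/m^3
CDA = 0.5     # drag area Cd*A for an upright rider, m^2 (assumed)
CRR = 0.005   # rolling resistance coefficient (assumed)
MASS = 85.0   # rider plus bicycle, kg (assumed)
G = 9.81      # gravitational acceleration, m/s^2

def power_required(speed_kmh: float) -> float:
    """Power in watts to maintain the given speed."""
    v = speed_kmh / 3.6                 # km/h to m/s
    p_drag = 0.5 * RHO * CDA * v ** 3   # drag force ~ v^2, so drag power ~ v^3
    p_roll = CRR * MASS * G * v         # rolling resistance power ~ v
    return p_drag + p_roll

for kmh in (16, 24, 32, 40):
    print(f"{kmh} km/h -> {power_required(kmh):.0f} W")
```

Under these assumptions, going from 16 km/h to 32 km/h raises the required power more than fivefold, with most of the extra effort spent against air drag, which is why riding position and fairings matter so much at higher speeds.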
The great majority of modern bicycles have a frame with upright seating that looks much like the first chain-driven bike.[6][7][8] These upright bicycles almost always feature the diamond frame, a truss consisting of two triangles: the front triangle and the rear triangle. The front triangle consists of the head tube, top tube, down tube, and seat tube. The head tube contains the headset, the set of bearings that allows the fork to turn smoothly for steering and balance. The top tube connects the head tube to the seat tube at the top, and the down tube connects the head tube to the bottom bracket. The rear triangle consists of the seat tube and paired chain stays and seat stays. The chain stays run parallel to the chain, connecting the bottom bracket to the rear dropout, where the axle for the rear wheel is held. The seat stays connect the top of the seat tube (at or near the same point as the top tube) to the rear fork ends. Historically, women's bicycle frames had a top tube that connected in the middle of the seat tube instead of the top, resulting in a lower standover height at the expense of compromised structural integrity, since this places a strong bending load in the seat tube, and bicycle frame members are typically weak in bending. This design, referred to as a step-through frame or an open frame, allows the rider to mount and dismount in a dignified way while wearing a skirt or dress. While some women's bicycles continue to use this frame style, there is also a variation, the mixte, which splits the top tube laterally into two thinner top tubes that bypass the seat tube on each side and connect to the rear fork ends. The ease of stepping through is also appreciated by those with limited flexibility or other joint problems. Because of its persistent image as a "women's" bicycle, step-through frames are not common for larger frames. Step-throughs were popular partly for practical reasons and partly because of the social mores of the day. For most of the history of bicycles' popularity women have worn long skirts, and the lower frame accommodated these better than the top tube. Furthermore, it was considered "unladylike" for women to open their legs to mount and dismount—in more conservative times women who rode bicycles at all were vilified as immoral or immodest. These practices were akin to the older practice of riding horses sidesaddle.[56] Another style is the recumbent bicycle. These are inherently more aerodynamic than upright versions, as the rider may lean back onto a support and operate pedals that are on about the same level as the seat. The world's fastest bicycle is a recumbent bicycle, but this type was banned from competition in 1934 by the Union Cycliste Internationale.[57] Historically, materials used in bicycles have followed a similar pattern as in aircraft, the goal being high strength and low weight. Since the late 1930s alloy steels have been used for frame and fork tubes in higher quality machines. By the 1980s aluminum welding techniques had improved to the point that aluminum tube could safely be used in place of steel. Since then aluminum alloy frames and other components have become popular due to their light weight, and most mid-range bikes are now principally aluminum alloy of some kind. More expensive bikes use carbon fibre due to its significantly lighter weight and profiling ability, allowing designers to make a bike both stiff and compliant by manipulating the lay-up. Virtually all professional racing bicycles now use carbon fibre frames, as they have the best strength-to-weight ratio. A typical modern carbon fiber frame can weigh less than 1 kilogram (2.2 lb). Other exotic frame materials include titanium and advanced alloys. Bamboo, a natural composite material with high strength-to-weight ratio and stiffness,[58] has been used for bicycles since 1894.[59] Recent versions use bamboo for the primary frame with glued metal connections and parts, priced as exotic models.[59][60][61] The drivetrain begins with pedals which rotate the cranks, which are held in axis by the bottom bracket. Most bicycles use a chain to transmit power to the rear wheel. A very small number of bicycles use a shaft drive or special belts to transmit power. Hydraulic bicycle transmissions have been built, but they are currently inefficient and complex. Since cyclists' legs are most efficient over a narrow range of pedaling speeds, or cadence, a variable gear ratio helps a cyclist maintain an optimum pedalling speed while covering varied terrain. Some, mainly utility, bicycles use hub gears with between 3 and 14 ratios, but most use the generally more efficient dérailleur system, by which the chain is moved between different cogs called chainrings and sprockets to select a ratio. A dérailleur system normally has two dérailleurs, or mechs, one at the front to select the chainring and another at the back to select the sprocket. Most bikes have two or three chainrings, and from 5 to 12 sprockets on the back, with the number of theoretical gears calculated by multiplying front by back. In reality, many gears overlap or require the chain to run diagonally, so the number of usable gears is fewer.
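As a rough illustration of how the theoretical gear count and the overlaps mentioned above work, the short sketch below enumerates the ratios of a hypothetical 2×11 drivetrain; the tooth counts are assumed for the example and are not taken from the text.

```python
# Enumerate gear ratios for a hypothetical drivetrain to show why the
# "front x back" count overstates the number of usable gears.
# Tooth counts below are illustrative assumptions.
chainrings = [34, 50]                                       # front rings
sprockets = [11, 12, 13, 14, 16, 18, 20, 22, 25, 28, 32]    # rear cassette

ratios = sorted(
    (front / rear, front, rear)
    for front in chainrings
    for rear in sprockets
)

print(f"theoretical gears: {len(chainrings) * len(sprockets)}")
for ratio, front, rear in ratios:
    print(f"{front:>2}x{rear:<2}  ratio {ratio:.2f}")
```

Printing the sorted list shows several combinations with nearly identical ratios (for instance, 34x22 and 50x32 both come out near 1.55), which is the overlap referred to above; combinations that run the chain diagonally across the full width of the cassette are also normally avoided, so the usable count is smaller still.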
An alternative to chain drive is to use a synchronous belt. These are toothed and work much the same as a chain; popular with commuters and long-distance cyclists, they require little maintenance. They cannot be shifted across a cassette of sprockets, and are used either as single speed or with a hub gear. Different gears and ranges of gears are appropriate for different people and styles of cycling. Multi-speed bicycles allow gear selection to suit the circumstances: a cyclist could use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. In a lower gear every turn of the pedals leads to fewer rotations of the rear wheel. This allows the energy required to move the same distance to be distributed over more pedal turns, reducing fatigue when riding uphill, with a heavy load, or against strong winds. A higher gear allows a cyclist to make fewer pedal turns to maintain a given speed, but with more effort per turn of the pedals. With a chain drive transmission, a chainring attached to a crank drives the chain, which in turn rotates the rear wheel via the rear sprocket(s) (cassette or freewheel). There are four gearing options: a two-speed hub gear integrated with the chain ring, up to 3 chain rings, up to 12 sprockets, or a hub gear built into the rear wheel (3-speed to 14-speed). The most common options are either a rear hub or multiple chain rings combined with multiple sprockets (other combinations of options are possible but less common). The handlebars connect to the stem that connects to the fork that connects to the front wheel, and the whole assembly connects to the bike and rotates about the steering axis via the headset bearings. Three styles of handlebar are common. Upright handlebars, the norm in Europe and elsewhere until the 1970s, curve gently back toward the rider, offering a natural grip and comfortable upright position. Drop handlebars "drop" as they curve forward and down, offering the cyclist the best braking power from a more aerodynamic "crouched" position, as well as more upright positions in which the hands grip the brake lever mounts, the forward curves, or the upper flat sections for increasingly upright postures. Mountain bikes generally feature a 'straight handlebar' or 'riser bar' with varying degrees of backward sweep and upward rise, as well as wider widths which can provide better handling due to increased leverage against the wheel. Saddles also vary with rider preference, from the cushioned ones favored by short-distance riders to narrower saddles which allow more room for leg swings. Comfort depends on riding position. With comfort bikes and hybrids, cyclists sit high over the seat, their weight directed down onto the saddle, such that a wider and more cushioned saddle is preferable. For racing bikes where the rider is bent over, weight is more evenly distributed between the handlebars and saddle, the hips are flexed, and a narrower and harder saddle is more efficient. Differing saddle designs exist for male and female cyclists, accommodating the genders' differing anatomies and sit-bone width measurements, although bikes typically are sold with saddles most appropriate for men. Suspension seat posts and seat springs provide comfort by absorbing shock, but can add to the overall weight of the bicycle. A recumbent bicycle has a reclined chair-like seat that some riders find more comfortable than a saddle, especially riders who suffer from certain types of seat, back, neck, shoulder, or wrist pain.
Recumbent bicycles may have either under-seat or over-seat steering. Bicycle brakes may be rim brakes, in which friction pads are compressed against the wheel rims; hub brakes, where the mechanism is contained within the wheel hub; or disc brakes, where pads act on a rotor attached to the hub. Most road bicycles use rim brakes, but some use disc brakes.[63] Disc brakes are more common for mountain bikes, tandems and recumbent bicycles than on other types of bicycles, due to their increased power, coupled with an increased weight and complexity.[64] With hand-operated brakes, force is applied to brake levers mounted on the handlebars and transmitted via Bowden cables or hydraulic lines to the friction pads, which apply pressure to the braking surface, causing friction which slows the bicycle down. A rear hub brake may be either hand-operated or pedal-actuated, as in the back-pedal coaster brakes which were popular in North America until the 1960s. Track bicycles do not have brakes, because all riders ride in the same direction around a track which does not necessitate sharp deceleration. Track riders are still able to slow down because all track bicycles are fixed-gear, meaning that there is no freewheel. Without a freewheel, coasting is impossible, so when the rear wheel is moving, the cranks are moving. To slow down, the rider applies resistance to the pedals, acting as a braking system which can be as effective as a conventional rear wheel brake, but not as effective as a front wheel brake.[65] Bicycle suspension refers to the system or systems used to suspend the rider and all or part of the bicycle. This serves two purposes: to keep the wheels in continuous contact with the ground, improving control, and to isolate the rider and luggage from jarring due to rough surfaces, improving comfort. Bicycle suspensions are used primarily on mountain bicycles, but are also common on hybrid bicycles, as they can help deal with problematic vibration from poor surfaces. Suspension is especially important on recumbent bicycles, since while an upright bicycle rider can stand on the pedals to achieve some of the benefits of suspension, a recumbent rider cannot. Basic mountain bicycles and hybrids usually have front suspension only, whilst more sophisticated ones also have rear suspension. Road bicycles tend to have no suspension. The wheel axle fits into fork ends in the frame and fork. A pair of wheels may be called a wheelset, especially in the context of ready-built "off the shelf", performance-oriented wheels. Tires vary enormously depending on their intended purpose. Road bicycles use tires 18 to 25 millimeters wide, most often completely smooth, or slick, and inflated to high pressure to roll fast on smooth surfaces. Off-road tires are usually between 38 and 64 mm (1.5 and 2.5 in) wide, and have treads for gripping in muddy conditions or metal studs for ice. Groupset generally refers to all of the components that make up a bicycle excluding the bicycle frame, fork, stem, wheels, tires, and rider contact points, such as the saddle and handlebars. Some components, which are often optional accessories on sports bicycles, are standard features on utility bicycles to enhance their usefulness, comfort, safety and visibility. Fenders with spoilers (mudflaps) protect the cyclist and moving parts from spray when riding through wet areas. In some countries (e.g. Germany, the UK), fenders are called mudguards.
Chainguards protect clothes from oil on the chain while preventing clothing from being caught between the chain and crankset teeth. Kickstands keep bicycles upright when parked, and bike locks deter theft. Front-mounted baskets, front or rear luggage carriers or racks, and panniers mounted above either or both wheels can be used to carry equipment or cargo. Pegs can be fastened to one or both of the wheel hubs either to help the rider perform certain tricks, or to provide a place for extra riders to stand or rest. Parents sometimes add rear-mounted child seats, an auxiliary saddle fitted to the crossbar, or both to transport children. Bicycles can also be fitted with a hitch to tow a trailer for carrying cargo, a child, or both. Toe-clips and toestraps and clipless pedals help keep the foot locked in the proper pedal position and enable cyclists to pull and push the pedals. Technical accessories include cyclocomputers for measuring speed, distance, heart rate, GPS data etc. Other accessories include lights, reflectors, mirrors, racks, trailers, bags, water bottles and cages, and bells.[66] Bicycle lights, reflectors, and helmets are required by law in some geographic regions depending on the legal code. It is more common to see bicycles with bottle generators, dynamos, lights, fenders, racks and bells in Europe. Bicyclists also have specialized form-fitting and high-visibility clothing. Children's bicycles may be outfitted with cosmetic enhancements such as bike horns, streamers, and spoke beads.[67] Training wheels are sometimes used when learning to ride, but a dedicated balance bike teaches independent riding more effectively.[68][69] Bicycle helmets can reduce injury in the event of a collision or accident, and a suitable helmet is legally required of riders in many jurisdictions.[70][71] Helmets may be classified as an accessory[66] or as an item of clothing.[72] Bike trainers are used to enable cyclists to cycle while the bike remains stationary. They are frequently used to warm up before races or indoors when riding conditions are unfavorable.[73] A number of formal and industry standards exist for bicycle components to help make spare parts exchangeable and to maintain a minimum product safety. The International Organization for Standardization (ISO) has a special technical committee for cycles, TC149, with the scope of "Standardization in the field of cycles, their components and accessories with particular reference to terminology, testing methods and requirements for performance and safety, and interchangeability". The European Committee for Standardization (CEN) also has a specific Technical Committee, TC333, that defines European standards for cycles. Its mandate states that EN cycle standards shall harmonize with ISO standards. Some CEN cycle standards were developed before ISO published their standards, leading to strong European influences in this area. European cycle standards tend to describe minimum safety requirements, while ISO standards have historically harmonized parts geometry.[note 1] Like all devices with mechanical moving parts, bicycles require a certain amount of regular maintenance and replacement of worn parts. A bicycle is relatively simple compared with a car, so some cyclists choose to do at least part of the maintenance themselves. Some components are easy to handle using relatively simple tools, while other components may require specialist manufacturer-dependent tools.
Many bicycle components are available at several different price/quality points; manufacturers generally try to keep all components on any particular bike at about the same quality level, though at the very cheap end of the market there may be some skimping on less obvious components (e.g. the bottom bracket). The most basic maintenance item is keeping the tires correctly inflated; this can make a noticeable difference in how the bike feels to ride. Bicycle tires usually have a marking on the sidewall indicating the pressure appropriate for that tire. Bicycles use much higher pressures than cars: car tires are normally in the range of 30 to 40 pounds per square inch (210 to 280 kPa), whereas bicycle tires are normally in the range of 60 to 100 pounds per square inch (410 to 690 kPa). Another basic maintenance item is regular lubrication of the chain and of the pivot points for derailleurs and brake components. Most of the bearings on a modern bike are sealed and grease-filled and require little or no attention; such bearings will usually last for 10,000 miles (16,000 km) or more. The crank bearings require periodic maintenance, which involves removing, cleaning and repacking with the correct grease. The chain and the brake blocks are the components which wear out most quickly, so these need to be checked from time to time, typically every 500 miles (800 km) or so. Most local bike shops will do such checks for free. When a chain becomes badly worn it will also wear out the rear cogs/cassette and eventually the chainring(s), so replacing a chain when it is only moderately worn will prolong the life of other components. Over the longer term, tires do wear out, after 2,000 to 5,000 miles (3,200 to 8,000 km); a rash of punctures is often the most visible sign of a worn tire. Very few bicycle components can actually be repaired; replacement of the failing component is the normal practice. The most common roadside problem is a puncture of the tire's inner tube. A patch kit may be employed to fix the puncture, or the tube can be replaced, though the latter solution comes at a greater cost and waste of material.[75] Some brands of tires are much more puncture-resistant than others, often incorporating one or more layers of Kevlar; the downside of such tires is that they may be heavier and/or more difficult to fit and remove. There are specialized bicycle tools for use both in the shop and at the roadside. Many cyclists carry tool kits. These may include a tire patch kit (which, in turn, may contain any combination of a hand pump or CO2 pump, tire levers, spare tubes, self-adhesive patches or tube-patching material, an adhesive, a piece of sandpaper or a metal grater for roughening the tube surface to be patched, and sometimes even a block of French chalk), wrenches, hex keys, screwdrivers, and a chain tool. Special, thin wrenches are often required for maintaining various screw-fastened parts, specifically the frequently lubricated ball-bearing "cones".[76][77] There are also cycling-specific multi-tools that combine many of these implements into a single compact device. More specialized bicycle components may require more complex tools, including proprietary tools specific to a given manufacturer.
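The psi and kPa figures quoted in the maintenance discussion above are related by the standard conversion 1 psi ≈ 6.895 kPa; the snippet below simply recomputes the kPa equivalents of the quoted pressure ranges as a worked check.

```python
# Recompute the kPa equivalents of the pressure ranges quoted above,
# using the standard conversion factor 1 psi = 6.894757 kPa (rounded).
PSI_TO_KPA = 6.895

ranges_psi = {
    "car tire": (30, 40),        # from the text
    "bicycle road tire": (60, 100),
}

for name, (low, high) in ranges_psi.items():
    print(f"{name}: {low}-{high} psi = "
          f"{low * PSI_TO_KPA:.0f}-{high * PSI_TO_KPA:.0f} kPa")
```

The results (about 207-276 kPa and 414-690 kPa) agree with the rounded metric figures given in the text.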
The bicycle has had a considerable effect on human society, in both the cultural and industrial realms.[78] Around the turn of the 20th century, bicycles reduced crowding in inner-city tenements by allowing workers to commute from more spacious dwellings in the suburbs. They also reduced dependence on horses. Bicycles allowed people to travel for leisure into the country, since bicycles were three times as energy efficient as walking and three to four times as fast. In built-up cities around the world, urban planning uses cycling infrastructure like bikeways to reduce traffic congestion and air pollution.[79] A number of cities around the world have implemented schemes known as bicycle sharing systems or community bicycle programs.[80][81] The first of these was the White Bicycle plan in Amsterdam in 1965. It was followed by yellow bicycles in La Rochelle and green bicycles in Cambridge. These initiatives complement public transport systems and offer an alternative to motorized traffic to help reduce congestion and pollution.[82] In Europe, especially in the Netherlands and parts of Germany and Denmark, bicycle commuting is common. In Copenhagen, a cyclists' organization runs a Cycling Embassy that promotes biking for commuting and sightseeing. The United Kingdom has a tax break scheme (IR 176) that allows employees to buy a new bicycle tax-free to use for commuting.[83] In the Netherlands all train stations offer free bicycle parking, or a more secure parking place for a small fee, with the larger stations also offering bicycle repair shops. Cycling is so popular that the parking capacity may be exceeded; in some places, such as Delft, the capacity is usually exceeded.[84] In Trondheim in Norway, the Trampe bicycle lift has been developed to encourage cyclists by giving assistance on a steep hill. Buses in many cities have bicycle carriers mounted on the front. There are towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case of Ílhavo, in Portugal. In cities where bicycles are not integrated into the public transportation system, commuters often use bicycles as elements of a mixed-mode commute, where the bike is used to travel to and from train stations or other forms of rapid transit. Some students who commute several miles drive a car from home to a campus parking lot, then ride a bicycle to class. Folding bicycles are useful in these scenarios, as they are less cumbersome when carried aboard. Los Angeles removed a small amount of seating on some trains to make more room for bicycles and wheelchairs.[85] Some US companies, notably in the tech sector, are developing both innovative cycle designs and cycle-friendliness in the workplace. Foursquare, whose CEO Dennis Crowley "pedaled to pitch meetings ... [when he] was raising money from venture capitalists" on a two-wheeler, chose a new location for its New York headquarters "based on where biking would be easy". Parking in the office was also integral to HQ planning. Mitchell Moss, who runs the Rudin Center for Transportation Policy & Management at New York University, said in 2012: "Biking has become the mode of choice for the educated high tech worker".[86] Bicycles offer an important mode of transport in many developing countries. Until recently, bicycles have been a staple of everyday life throughout Asian countries. They are the most frequently used method of transport for commuting to work, school, shopping, and life in general. In Europe, bicycles are commonly used.[87] They also offer a degree of exercise to keep individuals healthy.[88] Bicycles are also celebrated in the visual arts. An example of this is the Bicycle Film Festival, a film festival hosted all around the world.
Bicycle poverty reduction is the concept that access to bicycles and the transportation infrastructure to support them can dramatically reduce poverty.[89][90][91][92] This has been demonstrated in various pilot projects in South Asia and Africa.[93][94][95] Experiments done in Africa (Uganda and Tanzania) and Sri Lanka on hundreds of households have shown that a bicycle can increase the income of a poor family by as much as 35%.[93][96][97] The safety bicycle gave women unprecedented mobility, contributing to their emancipation in Western nations. As bicycles became safer and cheaper, more women had access to the personal freedom that bicycles embodied, and so the bicycle came to symbolize the New Woman of the late 19th century, especially in Britain and the United States.[7][99] The bicycle craze in the 1890s also led to a movement for so-called rational dress, which helped liberate women from corsets and ankle-length skirts and other restrictive garments, substituting the then-shocking bloomers.[7] The bicycle was recognized by 19th-century feminists and suffragists as a "freedom machine" for women. American Susan B. Anthony said in a New York World interview on 2 February 1896: "I think it has done more to emancipate woman than any one thing in the world. I rejoice every time I see a woman ride by on a wheel. It gives her a feeling of self-reliance and independence the moment she takes her seat; and away she goes, the picture of untrammelled womanhood."[100]: 859 In 1895 Frances Willard, the tightly laced president of the Woman's Christian Temperance Union, wrote A Wheel Within a Wheel: How I Learned to Ride the Bicycle, with Some Reflections by the Way, a 75-page illustrated memoir praising "Gladys", her bicycle, for its "gladdening effect" on her health and political optimism.[98] Willard used a cycling metaphor to urge other suffragists to action.[98] In 1985, Georgena Terry started the first women-specific bicycle company. Her designs featured frame geometry and wheel sizes chosen to better fit women, with shorter top tubes and more suitable reach.[101] Bicycle manufacturing proved to be a training ground for other industries and led to the development of advanced metalworking techniques, both for the frames themselves and for special components such as ball bearings, washers, and sprockets. These techniques later enabled skilled metalworkers and mechanics to develop the components used in early automobiles and aircraft. Wilbur and Orville Wright, a pair of businessmen, ran the Wright Cycle Company, which designed, manufactured and sold their bicycles during the bike boom of the 1890s.[102]
Bicycle makers also served to teach the industrial models later adopted, including mechanization and mass production (later copied and adopted by Ford and General Motors),[103][104][105] vertical integration[104] (also later copied and adopted by Ford), aggressive advertising[106] (as much as 10% of all advertising in U.S. periodicals in 1898 was by bicycle makers),[107] and lobbying for better roads (which had the side benefit of acting as advertising, and of improving sales by providing more places to ride),[105] all first practiced by Pope.[105] In addition, bicycle makers adopted the annual model change[103][108] (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful.[109] Early bicycles were an example of conspicuous consumption, being adopted by the fashionable elites.[110][111][112][103][113][114][115][116] In addition, by serving as a platform for accessories, which could ultimately cost more than the bicycle itself, it paved the way for the likes of the Barbie doll.[103][117][118] Bicycles helped create, or enhance, new kinds of businesses, such as bicycle messengers,[119] traveling seamstresses,[120] riding academies,[121] and racing rinks.[122][121] Their board tracks were later adapted to early motorcycle and automobile racing. There were a variety of new inventions, such as spoke tighteners,[123] and specialized lights,[118][123] socks and shoes,[124] and even cameras, such as the Eastman Company's Poco.[125] Probably the best known and most widely used of these inventions, adopted well beyond cycling, is Charles Bennett's Bike Web, which came to be called the jock strap.[126] They also presaged a move away from public transit[127] that would explode with the introduction of the automobile. J. K. Starley's company became the Rover Cycle Company Ltd. in the late 1890s, and then was renamed the Rover Company when it started making cars. Morris Motors Limited (in Oxford) and Škoda also began in the bicycle business, as did the Wright brothers.[128] Alistair Craig, whose company eventually emerged to become the engine manufacturers Ailsa Craig, also started from manufacturing bicycles, in Glasgow in March 1885. In general, U.S. and European cycle manufacturers used to assemble cycles from their own frames and components made by other companies, although very large companies (such as Raleigh) used to make almost every part of a bicycle (including bottom brackets, axles, etc.). In recent years, those bicycle makers have greatly changed their methods of production. Now, almost none of them produce their own frames. Many newer or smaller companies only design and market their products; the actual production is done by Asian companies. For example, some 60% of the world's bicycles are now being made in China. Despite this shift in production, as nations such as China and India become more wealthy, their own use of bicycles has declined due to the increasing affordability of cars and motorcycles.[129] One of the major reasons for the proliferation of Chinese-made bicycles in foreign markets is the lower cost of labor in China.[130] In line with the European financial crisis of that time, in 2011 the number of bicycle sales in Italy (1.75 million) passed the number of new car sales.[131] One of the profound economic implications of bicycle use is that it liberates the user from motor fuel consumption (Ballantine, 1972). The bicycle is an inexpensive, fast, healthy and environmentally friendly mode of transport. Ivan Illich stated that bicycle use extended the usable physical environment for people, while alternatives such as cars and motorways degraded and confined people's environment and mobility.[132] Currently, two billion bicycles are in use around the world. Children, students, professionals, laborers, civil servants and seniors are pedaling around their communities.
They all experience the freedom and the natural opportunity for exercise that the bicycle easily provides. The bicycle also has the lowest carbon intensity of travel.[133] The global bicycle market was worth $61 billion in 2011.[134] As of 2009, 130 million bicycles were sold every year globally, and 66% of them were made in China.[135] Early in its development, as with automobiles, there were restrictions on the operation of bicycles. Along with advertising, and to gain free publicity, Albert A. Pope litigated on behalf of cyclists.[105] The 1968 Vienna Convention on Road Traffic of the United Nations considers a bicycle to be a vehicle, and a person controlling a bicycle (whether actually riding or not) is considered an operator or driver.[137][138] The traffic codes of many countries reflect these definitions and demand that a bicycle satisfy certain legal requirements before it can be used on public roads. In many jurisdictions, it is an offense to use a bicycle that is not in a roadworthy condition.[139][140] In some countries, bicycles must have functioning front and rear lights when ridden after dark.[141][142] Some countries require child and/or adult cyclists to wear helmets, as this may protect riders from head trauma. Countries which require adult cyclists to wear helmets include Spain, New Zealand and Australia. Mandatory helmet wearing is one of the most controversial topics in the cycling world, with proponents arguing that it reduces head injuries and thus is an acceptable requirement, while opponents argue that by making cycling seem more dangerous and cumbersome, it reduces cyclist numbers on the streets, creating an overall negative health effect (fewer people cycling for their own health, and the remaining cyclists being more exposed through a reversed safety in numbers effect).[143] Bicycles are popular targets for theft, due to their value and ease of resale.[144] The number of bicycles stolen annually is difficult to quantify, as a large number of these crimes are not reported.[145] Around 50% of participants in a Montreal survey published in the International Journal of Sustainable Transportation reported having experienced a bicycle theft in their lifetime as active cyclists.[146] Most bicycles have serial numbers that can be recorded to verify identity in case of theft.[147]
https://en.wikipedia.org/wiki/Bicycle
Public transport (also known as public transportation, public transit, mass transit, or simply transit) is a system of transport for passengers by group travel systems available for use by the general public, unlike private transport; it is typically managed on a schedule, operated on established routes, and may charge a posted fee for each trip.[1][2][3] There is no rigid definition of which kinds of transport are included, and air travel is often not thought of when discussing public transport—dictionaries use wording like "buses, trains, etc."[4] Examples of public transport include city buses, trolleybuses, trams (or light rail) and passenger trains, rapid transit (metro/subway/underground, etc.) and ferries. Public transport between cities is dominated by airlines, coaches, and intercity rail. High-speed rail networks are being developed in many parts of the world. Most public transport systems run along fixed routes with set embarkation/disembarkation points to a prearranged timetable, with the most frequent services running to a headway (e.g., "every 15 minutes" as opposed to being scheduled for a specific time of the day). However, most public transport trips include other modes of travel, such as passengers walking or catching bus services to access train stations.[5] Share taxis offer on-demand services in many parts of the world, which may compete with fixed public transport lines, or complement them by bringing passengers to interchanges. Paratransit is sometimes used in areas of low demand and for people who need a door-to-door service.[6] Urban public transit differs distinctly among Asia, North America, and Europe. In Japan, profit-driven, privately owned and publicly traded mass transit and real estate conglomerates predominantly operate public transit systems.[7][8] In North America, municipal transit authorities most commonly run mass transit operations. In Europe, both state-owned and private companies predominantly operate mass transit systems. For geographical, historical and economic reasons, differences exist internationally regarding the use and extent of public transport. The International Association of Public Transport (UITP) is the international network for public transport authorities and operators, policy decision-makers, scientific institutes and the public transport supply and service industry. It has over 1,900 members from more than 100 countries around the globe. In recent years, some high-wealth cities have seen a decline in public transport usage. A number of sources attribute this trend to the rise in popularity of remote work and ride-sharing services, and to car loans being relatively cheap in many countries. Major cities such as Toronto, Paris, Chicago, and London have seen this decline and have attempted to intervene by cutting fares and encouraging new modes of transportation, such as e-scooters and e-bikes.[9] Because of the reduced emissions and other environmental impacts of using public transportation over private transportation, many experts have pointed to increased investment in public transit as an important climate change mitigation tactic.[10] Conveyances designed for public hire are as old as the first ferry service. The earliest public transport was water transport.[11] Ferries appear in Greek mythology writings.
The mystical ferrymanCharonhad to be paid and would only then take passengers toHades.[12] Some historical forms of public transport include thestagecoachestraveling a fixed route betweencoaching inns, and thehorse-drawn boatcarrying paying passengers, which was a feature of Europeancanalsfrom the 17th century onwards. The canal itself as a form of infrastructure dates back to antiquity. Inancient Egyptcanals were used forfreight transportationto bypass theAswancataract. The Chinese also built canals for water transportation as far back as theWarring States period,[13]which began in the 5th century BCE. Whether or not those canals were used for for-hire public transport remains unknown; theGrand Canalin China (begun in 486 BCE) served primarily thegrain trade. Thebus, the first organized public transit system within a city, appears to have originated inParisin 1662,[14]although the service in question,Carrosses à cinq sols(English: five-sol coaches), which had been developed by mathematician and philosopherBlaise Pascal, lasted only fifteen years until 1677.[15]Buses are known to have operated inNantesin 1826. The public bus transport system was introduced toLondonin July 1829.[16] The first passenger horse-drawn railway service opened in 1806, running along theSwansea and Mumbles Railway.[17] In 1825,George Stephensonbuilt theLocomotion No 1for theStockton and Darlington RailwayinnortheastEngland, the first public steam railway in the world. The world's first steam-poweredunderground railwayopened in London in 1863.[18] The first successful electricstreetcarwas built for 11 miles of track for the Union Passenger Railway in Richmond, Virginia, in 1888. Electric streetcars could carry heavier passenger loads than their predecessors, which reduced fares and stimulated greater transit use. Two years after the Richmond success, over thirty-two thousand electric streetcars were operating in America. Electric streetcars also paved the way for the firstsubwaysystem in America. Before electric streetcars, steam-powered subways were considered; however, most people believed that riders would avoid the smoke-filled subway tunnels of the steam engines. In 1894, Boston built the first subway in the United States, an electric streetcar line in a 1.5-mile tunnel under Tremont Street's retail district. Other cities quickly followed, constructing thousands of miles of subway in the following decades.[19] In March 2020, Luxembourg abolished fares for trains, trams and buses and became the first country in the world to make all public transport free.[20] TheEncyclopædia Britannicaspecifies that public transportation is within urban areas, but does not limit its discussion of the topic to urban areas.[21] Seven criteria are commonly used to assess the usability and overall appeal of different types of public transport: speed, comfort, safety, cost, proximity, timeliness and directness (a simple illustrative scoring sketch appears at the end of this passage).[22]Speed is calculated from total journey time including transfers. Proximity means how far passengers must walk or otherwise travel before they can begin the public transport leg of their journey and how close it leaves them to their desired destination. Timeliness is how long they must wait for the vehicle. Directness records how far a journey using public transport deviates from a passenger's ideal route. In selecting between competingmodes of transport, many individuals are strongly motivated bydirect cost(travel fare/ ticket price to them) andconvenience, as well as being informed byhabit.
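The minimal, hypothetical sketch below shows one way the seven criteria above could be combined into a single comparable score per mode. The criterion names come from the passage; the weights and per-mode scores are invented purely for illustration and do not reflect any real system.

```python
# Minimal sketch: combining the seven usability criteria named above
# (speed, comfort, safety, cost, proximity, timeliness, directness)
# into a single weighted score per mode. All weights and per-mode
# scores are invented for illustration.

CRITERIA = ["speed", "comfort", "safety", "cost", "proximity", "timeliness", "directness"]

def usability_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-criterion scores (each on a 0-10 scale)."""
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total_weight

bus = dict(speed=5, comfort=6, safety=8, cost=8, proximity=7, timeliness=6, directness=5)
metro = dict(speed=8, comfort=6, safety=8, cost=7, proximity=6, timeliness=8, directness=7)
weights = dict(speed=3, comfort=1, safety=2, cost=3, proximity=2, timeliness=2, directness=1)

print("bus:", round(usability_score(bus, weights), 2))
print("metro:", round(usability_score(metro, weights), 2))
```

In practice a planner or passenger would weight the criteria quite differently; the point is only that the criteria can be combined into a comparable score for each mode.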
The same individual may accept the lost time and statisticallyhigher risk of accidentinprivate transport, together with the initial, running and parking costs.Loss of control, spatial constriction,overcrowding, high speeds/accelerations, height and otherphobiasmay discourage use of public transport. Actual travel time on public transport becomes a lesser consideration whenpredictableand when travel itself is reasonablycomfortable(seats, toilets, services), and can thus be scheduled and used pleasurably, productively or for (overnight) rest. Chauffeured movement is enjoyed by many people when it is relaxing, safe, but not too monotonous. Waiting, interchanging, stops and holdups, for example due to traffic or for security, are discomforting.Jet lagis a human constraint discouraging frequent rapid long-distance east–west commuting, favoring modern telecommunications and VR technologies. An airline provides scheduled service with aircraft between airports. Air travel has high speeds, but incurs large waiting times before and after travel, and is therefore often only feasible over longer distances or in areas where a lack of surface infrastructure makes other modes of transport impossible. Bush airlines work more like demand-responsive bus services: an aircraft waits for passengers and takes off when it is full. Bus servicesusebuseson conventional roads to carry numerous passengers on shorter journeys. Buses operate with low capacity compared with trams or trains, and can operate on conventional roads, with relatively inexpensive bus stops to serve passengers. Therefore, buses are commonly used in smaller cities, towns, and rural areas, and forshuttleservices supplementing other means of transit in large cities.Midibuseshave an even lower capacity, whiledouble decker busesandarticulated buseshave a slightly larger capacity. Intercity bus services usecoaches(long-distance buses) for suburb-to-CBD or longer-distance transportation. The vehicles are normally equipped with more comfortable seating, a separate luggage compartment, video and possibly also a toilet. They have higher standards than city buses, but a limited stopping pattern. Trolleybusesareelectrically powered busesthat receive power from anoverhead power lineby way of a set of trolley poles.Online Electric Vehiclesare buses that run on a conventional battery, but arerechargedfrequently at certain points via underground wires.[23] Certain types of buses, styled after old-style streetcars, are also called trackless trolleys, but are built on the same platforms as a typicaldiesel,CNG, orhybridbus; these are more often used for tourist rides than commuting and tend to be privately owned. Electric busescan store the needed electrical energy on board, or be fed mains electricity continuously from an external source such as overhead lines. The majority of buses using on-board energy storage are battery electric buses, where the electric motor obtains energy from an onboard battery pack. Bus rapid transit(BRT) is a term used for buses operating on a dedicated right-of-way, much like light rail, resulting in higher capacity and operating speed compared with regular buses. AGuided busis steered by external means, usually on a dedicated track or roll way that excludes other traffic, permitting the maintenance of schedules even during rush hours. Passenger railtransport is the conveyance of passengers by means of wheeled vehicles specially designed to run on railways.
Trains allow high capacity at most distance scales, but requiretrack,signalling, infrastructure andstationsto be built and maintained, resulting in high upfront costs. Passenger rail is used on long distances even crossing national borders, within regions and in various ways inurban environments. Inter-city railconsists of long-haul passenger services that connect multiple urban areas. They have few stops and aim at high average speeds, typically making only one or a few stops per city. These services may also be international. High-speed railis passenger rail operating significantly faster than conventional rail—typically defined as at least 200 kilometres per hour (120 mph). The most predominant systems have been built in Europe and East Asia, and compared with air travel, offer long-distance rail journeys as quick as air services, have lower prices to compete more effectively and use electricity instead of combustion.[24] Urban rail transitis an all-encompassing term for various types of local rail systems, such astrams,light rail,rapid transit,people movers,commuter rail,monorail,suspension railwaysandfuniculars. Commuter railis part of an urban area's public transport. It provides faster services to outersuburbsand neighboringsatellite cities. Trains stop attrain stationsthat are located to serve a smaller suburban or town center. The stations are often combined withshuttle busorpark and ridesystems. Frequency may be up to several times per hour, and commuter rail systems may either be part of the national railway or operated by local transit agencies. Common forms of commuter rail employ eitherdiesel electriclocomotives, orelectric multiple unittrains. Some commuter train lines share a railway withfreight trains.[25] AMetro rapid transit(MRT) railway system (also called a metro, underground, heavy rail, or subway) operates in an urban area with high capacity and frequency, andgrade separationfrom other traffic.[26][27]Heavy rail is a high-capacity form of rail transit, with 4 to 10 units forming a train, and can be the most expensive form of transit to build. Some modern heavy rail systems are driverless, which allows for higher frequencies and lower maintenance costs.[25] Systems are able to transport large numbers of people quickly over short distances with little land use. Variations of rapid transit includepeople movers, small-scalelight metroand the commuter rail hybridS-Bahn. More than 160 cities have rapid transit systems, totalling more than 8,000 km (4,971 mi) of track and 7,000 stations. Twenty-five cities have systems under construction. AMedium-capacity rail system(MCS), also including light metro, is rapid transit with lower capacity than typical heavy-rail rapid transit. MCS trains usually have 1 to 4 cars. Most medium-capacity rail systems are automated or use light-rail type vehicles. AnAutomated guideway transit(AGT) system is a type of fixed guideway transit infrastructure with a riding or suspension track that supports and physically guides one or more driverless vehicles along its length. Light rail transit(LRT) is a term coined in 1972 for systems that use mainly tram technology. Light rail mostly has dedicated rights-of-way, fewer sections shared with other traffic and usually step-free access. A light rail line generally operates at higher speeds than a tram line. Light rail lines are, thus, essentially modernizedinterurbans.
Unlike trams, light rail trains are often longer and have one to four cars per train.[25]In some cases, trams are also considered part of the light rail family. Trams(also known as streetcars or trolleys) are railborne vehicles that originally ran in city streets, though over the decades more and more dedicated track has come into use. They have higher capacity than buses, but must follow dedicated infrastructure with rails and wires either above or below the track, limiting their flexibility. In the United States, trams were commonly used prior to the 1930s, before being superseded by the bus. In modern public transport systems, they have been reintroduced in the form of the light rail.[25] ARubber-tyred tramis a development of theguided busin which a vehicle is guided by a fixed rail in the road surface and draws current from overhead electric wires (either viapantographortrolley pole). ATranslohris a rubber-tyred tramway system, originally developed by Lohr Industrie of France and now run by a consortium of Alstom Transport and Fonds stratégique d'investissement (FSI) as newTL. TheAutonomous Rail Rapid Transit(ART) is a lidar (light detection and ranging)guided busandbi-articulated bussystem for urban passenger transport. It resembles both arubber-tyred tramand aBus rapid transitsystem.[28] Somewhere between light and heavy rail in terms ofcarbon footprint,[citation needed]monorail systems usually use overhead tracks, similar to anelevated railwayabove other traffic. The systems are either mounted directly on the track supports or put in an overhead design with the train suspended. Monorailsystems are used throughout the world (especially in Europe and eastAsia, particularlyJapan), but apart from public transit installations in Las Vegas and Seattle, most North American monorails are either short shuttle services or privately owned services (with 150,000 daily riders, theDisney monorail systemsare a successful example).[29] Personal rapid transit(PRT) is an automated cab service that runs on rails or aguideway. This is an uncommon mode of transportation (excludingelevators) due to the complexity of automation. A fully implemented system might provide most of the convenience of individual automobiles with the efficiency of public transit. The crucial innovation is that the automated vehicles carry just a few passengers, turn off the guideway to pick up passengers (permitting other PRT vehicles to continue at full speed), and drop them off at the location of their choice (rather than at a stop). Conventional transit simulations show that PRT might attract many auto users in problematic medium-density urban areas. A number of experimental systems are in progress. One might compare personal rapid transit to the more labor-intensivetaxiorparatransitmodes of transportation, or to the (by now automated)elevatorscommon in many publicly accessible areas. AnAutomated people mover(APM) is a term for grade-separated rail systems that use smaller and shorter vehicles.[25]These systems are generally used only in a small area such as a theme park or an airport. Cable-propelled transit(CPT) is a transit technology that moves people in motor-less, engine-less vehicles that are propelled by a steel cable.[30]There are two sub-groups of CPT—gondola liftsandcable cars (railway). Gondola lifts are supported and propelled from above by cables, whereas cable cars are supported and propelled from below by cables.
While historically associated with usage inski resorts, gondola lifts are now finding increased use in many urban areas—built specifically for the purpose of mass transit.[31]Many, if not all, of these systems are implemented and fully integrated within existing public transportation networks. Examples includeMetrocable (Medellín),Metrocable (Caracas),Mi TeleféricoinLa Paz,Portland Aerial Tram,Roosevelt Island Tramwayin New York City, and theLondon Cable Car. Afunicularis a type ofcable railwaysystem that connects points along a railway track laid on a steep slope. The system is characterized by two counterbalanced carriages (also called cars or trains) permanently attached to opposite ends of a haulage cable, which is looped over a pulley at the upper end of the track.[32] Aferryis a boat used to carry (orferry) passengers, and sometimes their vehicles, across a body of water. Afoot-passengerferry with many stops is sometimes called awater bus. Ferries form a part of the public transport systems of many waterside cities and islands, allowing direct transit between points at a capital cost much lower than bridges or tunnels, though at a lower speed. Ship connections of much larger distances (such as over long distances in water bodies like theMediterranean Sea) may also be called ferry services. A report published by the UK National Infrastructure Commission in 2018 states that "cycling is mass transit and must be treated as such."Cycling infrastructureis normally provided without charge to users because it is cheaper to operate than mechanised transit systems that use sophisticated equipment and do not usehuman power.[33] Many cities around the world have introduced electric bikes and scooters to their public transport infrastructure. For example, in the Netherlands many individuals use e-bikes to replace their car commutes. In major American cities, start-up companies such as Uber and Lyft have implemented e-scooters as a way for people to take short trips around the city.[34] All public transport runs on infrastructure, either on roads, rail, airways or seaways. The infrastructure can be shared with other modes, freight and private transport, or it can be dedicated to public transport. The latter is especially valuable in cases where there are capacity problems for private transport. Investments in infrastructure are expensive and make up a substantial part of the total costs in systems that are new or expanding. Once built, the infrastructure will require operating and maintenance costs, adding to the total cost of public transport. Sometimes governments subsidize infrastructure by providing it free of charge, just as is common with roads for automobiles. Interchanges are locations where passengers can switch from one public transport route to another. This may be between vehicles of the same mode (like a bus interchange), or e.g. between bus and train. It can be between local and intercity transport (such as at acentral stationor airport). Timetables(or 'schedules' inNorth American English) are provided by the transport operator to allow users to plan their journeys. They are often supplemented bymapsand fare schemes to help travelers coordinate their travel. Onlinepublic transport route plannershelp make planning easier.Mobile appsare available for many transit systems; they provide timetables and other service information and, in some cases, allow ticket purchase and journey planning across timetables and fare zones.
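As an illustration of the kind of estimate a route planner like those described above must make, the sketch below computes the expected door-to-door time for a trip with one interchange, assuming the passenger arrives at random so that the expected wait for a service is half its headway. All of the times and headways are invented values used only for illustration.

```python
# Minimal journey-planner sketch: door-to-door time for a trip with one
# interchange, assuming the passenger arrives at random so the expected
# wait for a service is half its headway. All times are invented.

def expected_wait(headway_min: float) -> float:
    """Expected wait, in minutes, for a random arrival on a service with the given headway."""
    return headway_min / 2.0

def door_to_door(walk_to_stop, leg1_ride, transfer_walk, leg2_ride, walk_from_stop,
                 leg1_headway, leg2_headway):
    """Total expected journey time in minutes, including waits at both boardings."""
    return (walk_to_stop + expected_wait(leg1_headway) + leg1_ride
            + transfer_walk + expected_wait(leg2_headway) + leg2_ride
            + walk_from_stop)

# Example: a bus every 15 minutes to a station, then a train every 10 minutes.
total = door_to_door(walk_to_stop=5, leg1_ride=12, transfer_walk=4, leg2_ride=20,
                     walk_from_stop=6, leg1_headway=15, leg2_headway=10)
print(f"Expected door-to-door time: {total:.1f} minutes")
```

Real planners work from actual timetables rather than average headways, but the half-headway approximation is a common shorthand for frequent, clock-face services.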
Services are often arranged to operate at regular intervals throughout the day or part of the day (known asclock-face scheduling). Often, more frequent services or even extra routes are operated during the morning and eveningrush hours. Coordination between services at interchange points is important to reduce the total travel time for passengers. This can be done by coordinating shuttle services with main routes, or by creating a fixed time (for instance twice per hour) when all bus and rail routes meet at a station and exchange passengers. There is often a potential conflict between this objective and optimising the utilisation of vehicles and drivers. The main sources of financing are ticket revenue, government subsidies and advertising. The percentage of revenue from passenger charges is known as thefarebox recovery ratio.[35]A limited amount of income may come fromland developmentand rental income from stores and vendors, parking fees, and leasing tunnels and rights-of-way to carryfiber opticcommunication lines. Most—but not all—public transport requires the purchase of aticketto generaterevenuefor the operators. Tickets may be bought either in advance, or at the time of the journey, or the carrier may allow both methods. Passengers may be issued with a paper ticket, a metal or plastictoken, or a magnetic or electronic card (smart card,contactless smart card). Sometimes a ticket has to be validated, e.g. a paper ticket has to be stamped, or anelectronic tickethas to be checked in. Tickets may be valid for a single (or return) trip, or valid within a certain area for a period of time (seetransit pass). Thefareis based on the travel class and either the distance traveled orzone pricing. The tickets may have to be shown or checked automatically at the station platform or when boarding, or during the ride by aconductor. Operators may choose to check every rider's ticket, allowing tickets to be sold at the time of the ride. Alternatively, aproof-of-paymentsystem allows riders to enter the vehicles without showing the ticket, but riders may or may not have their tickets checked by aticket controller; if the rider fails to show proof of payment, the operator may fine the rider. Multi-use tickets allow travel more than once. In addition to return tickets, this includes period cards allowing travel within a certain area (for instance month cards), or tickets for a specified number of trips or number of days that can be chosen within a longer period of time (called acarnetticket). Passes aimed at tourists, allowing free or discounted entry at many tourist attractions, typically includezero-fare public transportwithin the city. Period tickets may be for a particular route (in both directions), or for awhole network. Afree travel passallowing free and unlimited travel within a system is sometimes granted to particular social sectors, for example students, elderly, children, employees (job ticket) and the physically or mentallydisabled. Zero-fare public transportservices are funded in full by means other than collecting a fare from passengers, normally through heavysubsidyor commercialsponsorshipby businesses. Several mid-size European cities and many smaller towns around the world have converted their entire bus networks to zero-fare. Three capital cities in Europe have free public transport:Tallinn,Luxembourg, and, as of 2025,Belgrade. Local zero-fare shuttles or inner-city loops are far more common than city-wide systems.
There are also zero-fare airport circulators and university transportation systems. Governments frequently opt to subsidize public transport for social, environmental or economic reasons. Common motivations include the desire to provide transport to people who are unable to use an automobile[36]and to reduce congestion, land use and automobile emissions.[36] Subsidies may take the form of direct payments for financially unprofitable services, but support may also include indirect subsidies. For example, the government may allow free or reduced-cost use of state-owned infrastructure such as railways and roads, to stimulate public transport's economic competitiveness over private transport, which normally also has free infrastructure (subsidized through such things as gas taxes). Other subsidies include tax advantages (for instanceaviation fuelis typically not taxed), bailouts of companies that are likely to collapse (often applied to airlines) and reduction of competition through licensing schemes (often applied to taxis and airlines). Private transport is normally subsidized indirectly through free roads and infrastructure,[37]as well as incentives to build car factories[38]and, on occasion, directly via bailouts of automakers.[39][40]Public transport subsidies may also be funded through new or increased tolls for drivers, such as theSan Francisco Bay Arearaising tolls on numerous bridges and proposing more hikes to fund theBay Area Rapid Transitsystem.[41] Land development schemes may be initiated, where operators are given the rights to use land near stations, depots, or tracks for property development. For instance, in Hong Kong,MTR Corporation LimitedandKCR Corporationgenerate additional profits from land development to partially cover the cost of the construction of the urban rail system.[42] Some supporters of mass transit believe that use of taxpayer capital to fund mass transit will ultimately save taxpayer money in other ways, and therefore, state-funded mass transit is a benefit to the taxpayer.
Some research has supported this position,[43]but the measurement of benefits and costs is a complex and controversial issue.[44]A lack of mass transit results in more traffic, pollution,[45][46][47]and road construction[48]to accommodate more vehicles, all costly to taxpayers;[49]providing mass transit will therefore alleviate these costs.[50] A study found that support for public transport spending is much higher amongconservativeswho have high levels of trust in government officials than among those who do not.[51] Relative to other forms of transportation, public transit is safe (with a low crash risk) and secure (with low rates ofcrime).[52]The injury and death rate for public transit is roughly one-tenth that of automobile travel.[52]A 2014 study noted that "residents of transit-oriented communities have about one-fifth the per capita crash casualty rate as in automobile-oriented communities" and that "Transit also tends to have lower overall crime rates than automobile travel, and transit improvements can help reduce overall crime risk by improving surveillance and economic opportunities for at-risk populations."[52] Although relatively safe and secure, public perceptions that transit systems are dangerous endure.[52]A 2014 study stated that "Various factors contribute to the under-appreciation of transit safety benefits, including the nature of transit travel, dramatic news coverage of transit crashes and crimes, transit agency messages that unintentionally emphasize risks without providing information on its overall safety, and biased traffic safety analysis."[52] Some systems attract vagrants who use the stations or trains as sleeping shelters, though most operators have practices that discourage this.[53] Public transport is a means of independent transport (other than walking or bicycling) for individuals such as children too young to drive, the elderly without access to cars, those who do not hold a driver's license, and the infirm, such as wheelchair users. Kneeling buses and low-floor boarding on buses and light rail have also enabled greater access for people with limited mobility. In recent decades low-floor access has been incorporated into modern designs for vehicles. In economically deprived areas, public transport increases individual accessibility to transport where private means are unaffordable. Although there is continuing debate as to the true efficiency of different modes of transportation, mass transit is generally regarded as significantly moreenergy efficientthan other forms of travel. A 2002 study by theBrookings Institutionand theAmerican Enterprise Institutefound that public transportation in the U.S. uses approximately half the fuel required by cars, SUVs and light trucks.
In addition, the study noted that "private vehicles emit about 95 percent more carbon monoxide, 92 percent morevolatile organic compoundsand about twice as much carbon dioxide and nitrogen oxide than public vehicles for every passenger mile traveled".[55] Studies have shown that there is a strong inverse correlation betweenurban population densityandenergy consumption per capita, and that public transport could facilitate increased urban population densities, and thus reduce travel distances and fossil fuel consumption.[56] Supporters of thegreen movementusually advocate public transportation, because it offers decreased airbornepollutioncompared to automobiles transporting a single individual.[57]A study conducted in Milan, Italy, in 2004 during and after a transportation strike serves to illustrate the impact that mass transportation has on the environment. Air samples were taken between 2 and 9 January, and then tested for methane, carbon monoxide, non-methane hydrocarbons (NMHCs), and other gases identified as harmful to the environment. The results showed 2 January having the lowest concentrations, as a result of decreased activity in the city during the holiday season, and 9 January the highest NMHC concentrations, because of increased vehicular activity in the city due to a public transportation strike.[58] Based on the benefits of public transport, the green movement has affected public policy. For example, the state of New Jersey releasedGetting to Work: Reconnecting Jobs with Transit.[59]This initiative attempts to relocate new jobs into areas with higher public transportation accessibility. The initiative cites the use of public transportation as being a means of reducing traffic congestion, providing an economic boost to the areas of job relocation, and most importantly, contributing to a green environment by reducingcarbon dioxide(CO2) emissions. Using public transportation can result in a reduction of an individual's carbon footprint. A single person's 20-mile (32 km) round trip by car, if replaced with public transportation, can result in a net CO2 emissions reduction of 4,800 pounds (2,200 kg) per year.[60]Using public transportation saves CO2 emissions in more ways than the travel itself, as public transportation can help to alleviate traffic congestion as well as promote more efficient land use. When all three of these effects are considered, it is estimated that 37 million metric tons of CO2 will be saved annually.[60]Another study claims that using public transit instead of private vehicles in the U.S. in 2005 would have reduced CO2 emissions by 3.9 million metric tons and that the resulting traffic congestion reduction accounts for an additional 3.0 million metric tons of CO2 saved.[61]This is a total savings of about 6.9 million metric tons per year given the 2005 values. In order to compare the energy impact of public transportation with that of private transportation, the amount of energy used per passenger-distance must be calculated; normalizing energy expenditure per passenger makes the data directly comparable. Here, the units are kWh per 100 p-km (read as person-kilometres or passenger-kilometres); a minimal worked sketch of this normalization follows below. In terms of energy consumption, public transportation is better than individual transport in a personal vehicle.[62]In England, bus and rail are popular methods of public transportation, especially in London.
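The sketch below illustrates the per-passenger normalization just described: it converts a vehicle's energy use per vehicle-kilometre and its average occupancy into kWh per 100 passenger-kilometres. The vehicle figures are assumed, illustrative values, not the measured figures cited elsewhere in this article.

```python
# Minimal sketch of the normalisation described above: convert a vehicle's
# energy use per vehicle-kilometre and its average occupancy into
# kWh per 100 passenger-kilometres. The input figures are assumed,
# illustrative values only.

def kwh_per_100_pkm(kwh_per_vehicle_km: float, average_occupancy: float) -> float:
    """Energy per 100 passenger-km for a vehicle carrying the given average occupancy."""
    return 100.0 * kwh_per_vehicle_km / average_occupancy

examples = {
    "diesel bus":                 kwh_per_100_pkm(kwh_per_vehicle_km=4.0,  average_occupancy=12),
    "electric train":             kwh_per_100_pkm(kwh_per_vehicle_km=30.0, average_occupancy=200),
    "petrol car (single occupant)": kwh_per_100_pkm(kwh_per_vehicle_km=0.7, average_occupancy=1.0),
}

for mode, value in examples.items():
    print(f"{mode}: {value:.0f} kWh per 100 p-km")
```

Under these assumptions the ordering matches the cited data: well-loaded trains and buses use far less energy per passenger-kilometre than a lightly occupied car.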
Rail provides rapid movement into and out of the city of London while buses help to provide transport within the city itself. As of 2006–2007, the total energy cost of London's trains was 15 kWh per 100 p-km, about 5 times better than a personal car.[63] For buses in London, it was 32 kWh per 100 p-km, roughly 2.5 times lower than that of a personal car.[63]This includes lighting, depots, inefficiencies due to capacity (i.e., the train or bus may not be operating at full capacity at all times), and other inefficiencies. Efficiencies of transport in Japan in 1999 were 68 kWh per 100 p-km for a personal car, 19 kWh per 100 p-km for a bus, 6 kWh per 100 p-km for rail, 51 kWh per 100 p-km for air, and 57 kWh per 100 p-km for sea.[63]These numbers from either country can be used in energy comparison calculations orlife-cycle assessmentcalculations. Public transportation also provides an arena to test environmentally friendly fuel alternatives, such as hydrogen-powered vehicles. Using lighter materials to build public transportation vehicles with the same or better performance would further increase their environmental friendliness while maintaining or improving current standards. Informing the public about the positive environmental effects of using public transportation, in addition to pointing out the potential economic benefit, is an important first step towards making a difference. In the 2023 study titled "Subways and CO₂ Emissions: A Global Analysis with Satellite Data," researchers found that subway systems significantly reduceCO₂ emissions, by approximately 50% in the cities they serve, contributing to an 11% global reduction. The study also explores potential expansion in 1,214 urban areas lacking subways, suggesting a potential emission cut of up to 77%. Economically, subways are viable in 794 cities under optimistic financial conditions (SCC at US$150/ton and SIC at US$140 million/km), but this figure drops to 294 cities with more pessimistic assumptions. Despite high costs—about US$200 million per kilometer for construction—subways offer substantial co-benefits, such as reduced traffic congestion and improved public health, making them a strategic investment forurban sustainabilityandclimate mitigation.[64][65] Dense areas with mixed land uses promote daily public transport use, while urban sprawl is associated with sporadic public transport use. A recent European multi-city survey found that dense urban environments, reliable and affordable public transport services, and limits on motorized vehicles in high-density areas of cities help promote public transport use.[66] Urban space is a precious commodity and public transport utilises it more efficiently than a car-dominated society, allowing cities to be built more compactly than if they were dependent on automobile transport.[67]Ifpublic transport planningis at the core ofurban planning, it will also force cities to be built more compactly to create efficient feeds into the stations and stops of transport.[5][68]This will at the same time allow the creation of centers around the hubs, serving passengers' daily commercial needs and public services. This approach significantly reducesurban sprawl. Public land planning for public transportation can be difficult, but it is state and regional organizations that are responsible for planning and improving public transportation roads and routes.
With public land prices booming, there must be a plan to use the land most efficiently for public transportation in order to create better transportation systems. Inefficient land use and poor planning lead to a decrease in accessibility to jobs, education, and health care.[69] A consequence for wider society and civic life is that public transport breaks down social and cultural barriers between people in public life. An important social role played by public transport is to ensure that all members of society are able to travel without walking or cycling, not just those with a driving license and access to an automobile—this includes groups such as the young, the old, the poor, those with medical conditions, and people banned from driving.Automobile dependencyis a name given by policy makers to places where those without access to a private vehicle do not have access to independent mobility.[71]This dependency contributes to thetransport divide. A 2018 study published in theJournal of Environmental Economics and Managementconcluded that expanded access to public transit has no meaningful impact on automobile volume in the long term.[72] In addition, public transportation gives its users the opportunity to meet other people, as no attention is diverted from interacting with fellow travelers by the task of driving. Public transport thereby becomes a setting for social encounters across all boundaries of social, ethnic and other types of affiliation. TheCOVID-19pandemic had a substantial effect on public transport systems, infrastructures and revenues in various cities across the world.[73]In theUnited States, the pandemic reduced public transport usage as social distancing, remote work, and unemployment took hold. It caused a 79% drop in public transport ridership at the beginning of 2020. This trend continued throughout the year, with ridership down 65% compared to previous years.[74]Similarly inLondon, at the beginning of 2020, ridership in theLondon Undergroundandbusesdeclined by 95% and 85% respectively.[75]A 55% drop in public transport ridership as compared to 2019 was reported inCairo, Egyptafter a period of mandatory shutdown. To reduce COVID spread through cash contact, inNairobi, Kenya, cashless payment systems were enforced by the National Transport and Safety Authority (NTSA). Public transport was halted for three months in 2020 in Kampala,Uganda, with people resorting to walking or cycling. Post-quarantine, after public transport infrastructure was renovated, services such as minibus taxis were assigned specific routes. The situation was difficult in cities where people are heavily dependent on the public transport system. InKigali, Rwandasocial distancing requirements led to fifty percent occupancy restrictions, but as the pandemic situation improved, the occupancy limit was increased to meet demand.Addis Ababa, Ethiopiaalso had inadequate bus services relative to demand and longer wait times due to social distancing restrictions and planned to deploy more buses. Both Addis Ababa and Kampala aim to improve walking and cycling infrastructures in the future as means of commuting complementary to buses.[76]
https://en.wikipedia.org/wiki/Public_transport
Thecreator economy, also known ascreator marketingor theinfluencer economy, is a software-driven economy that is built aroundcreatorswho produce and distribute content, products, or services directly to their audience, leveraging social media platforms and AI tools.[1]These creators - who may includesocial media influencers,YouTubers, bloggers, artists, podcasters, and even independent professionals - generate revenue from their creations through a variety ofmonetizationstrategies, includingadvertising,sponsorships,product sales,crowdfunding, andsubscription-based services.[2]According to Goldman Sachs Research, the ongoing growth of the creator economy will likely benefit companies that possess a combination of factors, including a large global user base, access to substantial capital, robust AI-powered recommendation engines, versatile monetization tools, comprehensive data analytics, and integrated e-commerce options.[3]Examples of creator economy software platforms includeYouTube,TikTok,Instagram,Facebook,Twitch,Spotify,Substack,OnlyFansandPatreon.[4][5][6][7][8] Stanford University's Paul Saffo suggested that the creator economy first came into being in 1997 as the "new economy". Early creators in that economy worked with animations and illustrations, but at the time there was no available marketplace infrastructure to enable them to generate revenue.[citation needed] The term "creator"was coined by YouTube in 2011 to be used instead of "YouTube star", an expression that at the time could only apply to famous individuals on the platform. The term has since become omnipresent and is used to describe anyone creating any form of online content.[9] A number of platforms such asTikTok,Snapchat, YouTube, andFacebookhave set up funds with which to pay creators.[10][11][12][13][14] The large majority of content creators derive no monetary gain from their creations, with most of the benefits accruing to the platforms, which can make significant revenue from their uploads.[15]As few as 0.1% of creators are able to earn a living through their channels.[16]
https://en.wikipedia.org/wiki/Creator_economy
Cultural technology(English) is a term that arose from postmodern interpretations of how ideas are used by cultures to frame meaning and the interpretation of concepts; and thus how technologies of thought and culture shape identity and thinking about the self. The term was first used by Australian writer, therapeutic theorist, and social worker Michael White in his lectures[1]in 1991. Karl Tomm, a noted Canadian social worker, traces the use of the term to earlier lectures by Michael White in his foreword toNarrative Means to Therapeutic Ends[2](1990). Giorgio Agamben discusses how the French philosopher Michel Foucault might have used the term apparatus (French: "dispositif") in a synonymous way to describe the collection of ideas, practices, and meaning that determine how people, bodies, and institutions enact power/knowledge[3]or how power/knowledge enact people, bodies, and institutions. In a separate sense, cultural technology (Korean:문화기술;Hanja:文化技術;RR:munhwagisul) is a system used by South Korean talent agencies to promoteK-popculture throughout the world as part of theKorean Wave. The system was developed byLee Soo-man, founder of talent agency and record companySM Entertainment.[4][5] During a speech at theStanford Graduate School of Businessin 2011, Lee said he coined the term "cultural technology" as a system about fourteen years prior, when S.M. Entertainment decided to promote itsK-popartists to all ofAsia.[5]In the late 1990s, Lee and his colleagues created a manual on cultural technology, which specified the steps needed to popularize K-pop artists outside South Korea. The term "cultural technology," apart from Lee's systemized definition, can be traced back to the lectures[1]of Michael White, an Australian social worker, educator, and therapeutic theorist and his worksNarrative Means to Therapeutic Ends[6](1990) andMaps of Narrative Practice[7](2007). "The manual, which all S.M. employees are instructed to learn, explains when to bring in foreign composers, producers, and choreographers; what chord progressions to use in what country; the precise color of eyeshadow a performer should wear in a particular country; the exact hand gestures he or she should make; and the camera angles to be used in the videos (a three-hundred-and-sixty-degree group shot to open the video, followed by a montage of individual closeups)," according toThe New Yorker.[5] The cultural technology system originally employed by SM Entertainment since the 1990s existed in four stages: Casting, Training, Producing, and Marketing/Managing.[8]Each of these four stages was curated to help spread the Hallyu wave through the development of its artists, and all four are present in the strategies of many other South Korean talent agencies when creating, debuting, and marketing groups. While the majority of K-pop idols are from South Korea, some are from Japan, China, or Thailand. Many of Korea's entertainment companies host worldwide auditions, such as SM's Global Auditions, Big Hit's Hit It auditions, and YG's Next Generation. Scouting and streetcasting are also common, with members like BTS's Jin recruited for their looks or other surface reasons.[9]Sometimes, casting agents go to dance schools to recruit the top dancers to be trained further at the entertainment company.[10] Idols train extensively before debut. They receive training in dance, vocal performance, presentation, and other areas that will benefit them in the industry.
Oftentimes, this training lasts for years at a time, and trainees awaiting debut are said to be in the proverbial "dungeon".[5] Before debut, idols and groups attempt to gain fans through pre-debut activities. SM Entertainment has a system in place called SM Rookies, which is a pre-debut team that hosts concerts and releases videos that strengthen the fanbase of the group even before their first single is released.[11][unreliable source?]Other forms of pre-debut activities include featuring in other, more seasoned idols' videos—likeNu'estinOrange Caramel's,ExoinGirls' Generation-TTS'Twinkle, orBTSinJo Kwon's. One particular method of pre-debut training is coupled with casting in production shows, likeSixteenandProduce 101, in which members for a final group are selected and trained.[12][unreliable source?] The production of music is integral to cultural technology. For cultural technology, production of music helps create differentiated content to set trends in the K-pop world—trends not only in music but also in costume, choreography, and music videos. SM in particular focuses heavily on global expansion.[8]Some companies also outsource production to more internationally famed parties, likeCube Entertainment's partnership with Skrillex for4minute'sAct. 7.[13][unreliable source?] In the marketing and management stage, talent agencies seek to broaden their reach. Often, idols have the potential to become actors and actresses in dramas, or hosts/permanent members of variety shows likeKim Hee-chulinKnowing Bros. This so-called omnidirectional marketing spans lifestyle and seeks to reach many aspects of living, such as music, TV, drama, entertainment, sports, and fashion.[8] This is also where older groups find new life, likeSuper Junior. Companies do not become complacent but experiment constantly to develop the best marketing and management systems.[8] Marketing also aspires to branch out to international audiences, sometimes via the implementation of variety shows. Despite being primarily in Korean, these variety shows are accessible to all due to the simple, easily understood nature of the shows—game-oriented shows like Run BTS! or consistently subtitled shows likeWeekly Idolare popular for showing the fun-loving side of idols. In February 2016, SM hosted a press conference discussing the future of SM and its cultural technology. Lee Soo-man announced the implementation of New Culture Technology, an SM-specific system. While SM's cultural technology in the past relied on local, Korean artists likeRainandBoA, the updated model tries to embed more and more foreign singers from strategic markets into larger girl or boy bands. These imported singers are then used to promote their acts back in their respective home countries.[4] New Culture Technology comprises five projects—SM Station, EDM, Digital Platforms,Rookies Entertainment, and MCN—and one experimental group,NCT. It is a convergence and expansion of SM's four core cultural technology stages and deals heavily with interaction and the desire to innovate through communication. SM announced its intention to release a new song every week for 52 weeks. Through this constant output of music, it intends to stray from conventional forms of music and to remain active in both the digital music market and the physical album market by freely and continuously releasing music.
Additionally, SM Station will feature collaborations between artists, producers, composers, and company brands outside the SM label.[8] The name SM Station is derived both from the radio station and from the metaphorical train station.[8] Neo Culture Technology (NCT) introduced the idea of interactivity: SM sought to connect target markets, customers and artists in order to leadK-popculture.[14] NCT (Neo Culture Technology) is the new artist group formed by SM that embodies the concepts of cultural technology. With the seemingly limitless combinations and groups, SM aspires to make the whole world a stage for NCT.[15] As of 2023, there are six NCT units: NCT U,NCT 127,NCT Dream,WayV,NCT DoJaeJung, andNCT Wish.[16] As of October 2023, the group consists of 25 members: Johnny,Taeyong,Yuta, Kun,Doyoung,Ten,Jaehyun, Winwin, Jungwoo,Mark, Xiaojun, Hendery, Renjun,Jeno,Haechan,Jaemin, Yangyang,Chenle, Jisung, Sion, Riku, Yushi, Daeyoung, Ryo, and Sakuya. ScreaM Records was launched by SM Entertainment as an EDM label in 2016 as part of "SM TOWN: New Culture Technology". ScreaM Records is made for "performances made to be enjoyed". It collaborates with well-known EDM DJs from inside and outside Korea. ScreaM Records first launched the collaborative song "Wave" with E-Mart's home electronics store, Electro Mart.[17][unreliable source?] "Our goal is to provide opportunities to producers who have yet to be discovered and produce world-famous DJs from the Asian scene," a ScreaM Records representative said.[18] According to Lee, there are three stages necessary to popularize Korean culture outside South Korea: exporting the product, collaborating with international companies to expand the product's presence abroad, and finally creating a joint venture with international companies.[19]As part of their joint ventures with international companies, South Korean talent agencies may hire foreign composers, producers, and choreographers to ensure K-pop songs feel "local" to foreign countries.[4] Despite Lee's claim that he coined the term "cultural technology," earlier examples of its use as a term can be traced back to Australian social worker Michael White as early as 1991 and perhaps even further back to French philosopher Michel Foucault (1977).[20]South Korean computer scientist Kwangyun Wohn said he coined the term "culture technology" in 1994.[21]Cultural technology has also been one of six technology initiatives of the South Korean government since 2001.[citation needed]With regard to cultural technology, theKorean Waveis considered one of the most successful outcomes of government support of exporting Korean entertainment products.[citation needed]
https://en.wikipedia.org/wiki/Cultural_technology
Hypein marketing is a strategy of using extreme publicity. Hype as a modern marketing strategy is closely associated withsocial media.[citation needed] Marketing through hype often usesartificial scarcityto induce demand. Consumers of hyped products often participate as a form ofconspicuous consumptionto signify characteristics about themselves.[1] Hype allows brands to promote their image above the actual quality of the product.Streetwearbrands have collaborated with luxury fashion to justify charging premium prices for their goods.[2]As an example, fashion labelVetementsused social media channels to promote a limited-edition hoodie which sold 500 units in hours, recording sales of €445,000.[3] When hype marketing is used to drive demand for limited-edition goods, consumers sometimes attempt to resell those goods on secondary markets for a profit (comparable toticket scalping). The resale market is a $24 billion industry.[4] Luxury brandsmay release products as a collaboration withready-made garmentbrands as a way to build hype.[5]Collaborations have been used by some luxury brands to circumventfast fashionbrands copying their designs.[6] NYUProfessor Adam Alter says that for an established brand to create a scarcity frenzy, it needs to release limited numbers of different products, frequently.[7] Hype is often built viaPop-up retail.Comme des Garçonswas one of the first to use this strategy; leasing a vacant shop on a short-term basis solved the storage problems of releasing product for quick sale.[8] The term 'hypebeast' has been coined to describe consumers vulnerable to hype marketing. The origins of the term come from the Hong Kong-based companyHypebeast. The behaviours of the hypebeast exemplify hype marketing: the purchase of popular goods they cannot afford in order to impress others.[9]Hype also manifests itself in queues, with brands often retailing hyped products through pop-up stores.[10][11] Many luxury brands release hyped products via their online shop. This has led to the creation of companies that allow consumers to usebotsto guarantee or improve their chances of purchasing a limited-edition product.[12]
https://en.wikipedia.org/wiki/Hype_(marketing)
Influencer marketing(also known asinfluence marketing) is a form ofsocial media marketinginvolvingendorsementsandproduct placementfrominfluencers, individuals and organizations who have a purportedexpert level of knowledgeorsocial influencein their field.[1]Influencers are people (or something) with the power to affect the buying habits or quantifiable actions of others by uploading some form of original—oftensponsored—content tosocial mediaplatforms likeInstagram,YouTube,Snapchat,TikTokor other online channels.[2]Influencer marketing is when abrandenrolls influencers who have an established credibility and audience on social media platforms to discuss or mention the brand in a social media post.[3] Influencer content may be framed astestimonialadvertising, according to theFederal Trade Commissionin the United States.[4]The FTC started enforcing this on a large scale in 2016, sending letters to several companies and influencers who had failed to disclose sponsored content. Many Instagram influencers started using #ad in response and feared that this would affect their income. However, fans increased their engagement after disclosure, being happy that they were landing such deals. This success led to some creators creating their own product lines in 2017.[5]Some influencers fake sponsored content to gain credibility and promote themselves.[6]Backlash to sponsored content became more prominent in mid-2018, leading many influencers to focus instead on authenticity.[7] Influencer marketing began with early celebrity endorsements and has spread rapidly since the rise of popular social media platforms like Instagram, TikTok, and YouTube. Influencers have become important figures in fashion and beauty, with influential voices and opinions among consumers. The legacy of influencer marketing highlights its power in shaping consumer behavior, with concerns about authenticity and transparency continuing to grow. Influencer marketing has become a strategy that shapesconsumer behaviorand purchasing decisions through videos and posts, particularly on social media platforms like Instagram and TikTok. Influencers' ability to create personalized and interactive content shared directly with their audiences enhancesbrand engagementand overall purchase intention. Social media influencers significantly affect consumers' purchasing decisions by conveying trust, authenticity, and overall credibility.[8]This leads viewers to trust influencers' opinions and act on their recommendations. Additionally, influencers who show expertise, interact with followers, and demonstrate reliability contribute to higher consumer trust, making influencer marketing more persuasive than traditional advertising in today's digital world. As a result, more brands are leaning toward influencer marketing to showcase their products, since it has been seen to bring in more revenue. Recent research highlights that factors such as influencer attractiveness and quality of content play a major role in how strongly influencers shape consumer behavior.[9] Consumers often take into consideration what influencers have to say about a product, using their recommendations rather than traditional advertisements, since influencers convey a sense of reliability and authenticity. Research on digital marketing content shows that influencers foster consumer engagement by being authentic.
This positively affects consumer behavior, leading to higher trust and satisfaction, which in turn influences purchasing decisions.[10]The impact of influencer credibility is especially strong for impulsive buying: when consumers trust influencers, they are more likely to make impulsive purchases.[11] The influence of social media influencers on consumer attitudes is very significant. Many studies have shown that an influencer's language styles, like humble-bragging, can alter consumers' attitudes toward luxury brands. This affects consumers' idealization of what is practical and what they "need to have." When influencers engage in self-promoting bragging, it can increase viewers' feelings of envy while decreasing trustworthiness. Conversely, self-deprecating bragging has the opposite effect on consumers. These shifts in emotions, like envy and trust, are shown to have a direct impact on consumer attitudes, especially toward luxury and high-end products.[12]Further research shows that influencer attributes like credibility and attractiveness play a major role in shaping consumer attitudes toward a brand; these findings show that consumers' attitudes toward influencers directly affect their purchase intentions.[13]Ameta-analysisalso found that influencer authenticity significantly increased customer engagement, which ultimately strengthens purchase intentions and positive attitudes toward brands.[14] Influencer marketing can also help shape consumer perceptions of brand personality. Through their own lifestyle choices, influencers can contribute to a brand's overall identity. This also shapes how consumers view the brand and prompts them to consider how they themselves fit it, because the influencer does. As a result, consumers are more inclined to view favorably brands that collaborate with influencers they trust or admire. This can lead to increasedbrand loyaltyand long-term engagement with both the brand and the associated influencer, shaping consumers' attitudes toward both. Some studies indicate that consumer attitudes are influenced by an influencer's perceived relationship with the audience: influencers whom viewers perceive as friends see greater consumer loyalty and more positive attitudes.[15] Most discussions ofsocial influencefocus on social persuasion and compliance.[16]In the context of influencer marketing, influence is less about advocating for a point of view or product than about loose interactions between parties in a community (often with the aim of encouraging purchasing or behavior). Although influence is often equated with advocacy, it may also be negative.[17] Examining consumer engagement throughself-determination theory, in which influencers foster engagement by aligning with consumer motivations and intentions, offers a better understanding of influence, since it shows how influencers cater to what their audiences want to see.[18]Some research highlights that negative influence may grow when viewers see influencer marketing as inauthentic, especially in influencer-directed campaigns that lack the trust built by brand loyalty.
This highlights the importance of authenticity in influencer marketing and of overall consumer trust in brands and influencers.[19]A review further argues that authenticity, credibility, and trust are central to effective influencer marketing, highlighting that influencers who appear inauthentic risk damaging both brand reputation and consumer relationships.[20] Thetwo-step flow of communicationmodel was introduced inThe People's Choice(Paul Lazarsfeld,Bernard Berelson, andHazel Gaudet's 1940 study of voters' decision-making processes), and developed inPersonal Influence(Lazarsfeld,Elihu Katz1955)[21]andThe Effects of Mass Communication(Joseph Klapper, 1960).[22] Influencer marketing can also be understood throughsocial comparison theory. AspsychologistChae reports, influencers serve as a comparison tool. Consumers may compare influencers' lifestyles with their own imperfections. Meanwhile, followers may view influencers as people with perfect lifestyles, interests, and style of dress.[23]As such, the promoted products may serve as a shortcut towards a complete lifestyle. Chae's study finds that women with low self-esteem compare themselves to influencers. As such, they elevate the status of influencers above themselves. When using an influencer, a brand may use consumer insecurities to its benefit. For this reason, influencer marketing may lead to misleading advertising.[24] In addition to this, the perception of friendship with influencers can significantly enhance consumers' loyalty. Studies have also shown that consumers are more likely to developparasocial relationshipswhen they perceive influencers as similar to themselves.[25] A significant portion ofGen Z Americansconsider being an influencer a "reputable career choice".[26] A social media influencer,[27][28][29]or simplyinfluencer[30][31][32](also known as an online influencer[33][34][35]), is an individual who builds a grassroots online presence through engaging content such as photos, videos, and updates.
Influencers use direct audience interaction to establish authenticity, expertise, and appeal, and they stand apart from traditional celebrities by growing their platforms through social media rather than through pre-existing fame.[36][37] The modern referent of the term is commonly a paid role in which a business entity pays for an influencer's influence-for-hire activity on social media to promote its products and services, known as influencer marketing.[38] Types of influencers include fashion influencers, travel influencers, and virtual influencers, as well as content creators[39][40][41] and streamers.[42][43][44] Online activity plays a central role in offline decision-making, allowing consumers to research products.[49] Social media has created new opportunities for marketers to expand their strategies beyond traditional mass-media channels.[50] Many use influencers to increase the reach of their marketing messages.[51][52] Online influencers who curate personal brands have become marketing assets because of their relationships with their followers.[50][53] Social media influencers establish themselves as opinion leaders with their followers and may have persuasive strengths such as attractiveness, likeability, niche expertise, and perceived good taste.[53][50][54] The interactive and personal nature of social media allows parasocial relationships to form between influencers and their followers, which affects purchase behavior.[50][54][55] Additionally, influencer marketing on social media reaches consumers who use ad-blockers.[53] Critics of an online-intensive approach say that by researching exclusively online, consumers can overlook input from other influential individuals.[56] Early-2000s research suggested that 80 to 92 percent of influential consumer exchanges occurred face-to-face through word-of-mouth (WOM), compared with seven to 10 percent in an online environment.[57][58][51][59] Scholars and marketers distinguish WOM from electronic word-of-mouth (eWOM).[60] Given their impact, especially among younger people, influencers have also been enlisted by governments. Countries such as Egypt and the United Arab Emirates have used influencers to project a positive image and distract from criticism of their human rights records.[61] In Dubai, many such influencers work to promote the city's tourism, either by acquiring an expensive license or through agencies. Emirati authorities tightly manage influencers to ensure that the country is depicted in a positive light, and restrict them from speaking about religion or politics, or against the regime.[62] A report in October 2022 revealed that some influencers promoting Dubai engaged in prostitution, using their high profile to find clients and charge higher rates. Although prostitution is illegal in Dubai, increasing numbers engage in the practice due to the rise in the number of ultra-rich expatriates in the city, including Russian oligarchs who moved to the emirate to escape US sanctions.[63] Marketers use influencer marketing to establish credibility in a market, to generate social conversations about brands, and to drive online or in-store sales. They leverage the credibility influencers have built over time to promote a variety of products or services. Success in influencer marketing is measured through earned media value, impressions,[64] and cost per action.[56] Globally, 86% of brands planned to use influencer marketing in 2024.[26] A social media influencer's personal brand and relationship with marketers are important concepts.
As social learning theory suggests, influencers serve as informed consumers, and authenticity matters. When credible influencers are well matched with the product, consumers will consider the promoted recommendations.[65][66] A study found that respondents see influencers as a neutral voice pitching a product; compared with CEO spokespeople, influencers are seen as more approachable and trustworthy. Consumers are more likely to respond to influencers with whom they share certain characteristics and beliefs.[67][66] A 2015 article reported that, globally, 77% of shoppers would or might take action based on what family, friends, and online reviews endorse, showing that word-of-mouth marketing and digital media have changed the impact and reach of endorsements.[68] In the United States, the Federal Trade Commission (FTC) treats influencer marketing as a form of paid endorsement. It is governed by the rules for native advertising, which include compliance with established truth-in-advertising standards and disclosure by endorsers (influencers), and is known as the Endorsement Guides.[69][4] The FTC compiled an easy-to-read guide on disclosure for influencers, specifying rules and tips on how to make good disclosures on social media. The guidelines include reminders to disclose sponsored products in easily visible places so the disclosure is hard to miss, to use easy-to-understand language, and to give honest reviews of sponsored products.[70][4][71] In 2017, the FTC sent more than 90 educational letters to celebrity and athlete influencers reminding them of the obligation to clearly disclose business relationships when sponsoring and promoting products.[72] The same year, in response to YouTubers Trevor Martin and Thomas Cassell deceptively endorsing an online gambling site they owned, the FTC took three separate actions to catch influencers' attention. Through law enforcement, warning letters, and updates to the Endorsement Guides, the FTC gave influencers who had questions about endorsements, or who were involved in misleading endorsements and disclosures, clear procedures for how to follow the law.[73] Media-regulating bodies in other countries, such as Australia, followed the FTC in creating influencer-marketing guidelines.[74] The United Kingdom's Competition and Markets Authority and Advertising Standards Authority adopted similar rules and guidance for influencers to follow.[71] The UK's Financial Conduct Authority has also warned "finfluencers" (influencers in the financial realm) of legal consequences for failing to include the kind of risk warnings required for financial and investment products.[75] Facebook and Instagram have a set of branded-content policies for influencer marketing and endorsements. Branded content may only be posted through Instagram and Facebook, and the business relationship between the influencer and the endorsing business must be tagged when branded content is promoted. The branded-content tool provided in the business layout of Facebook and Instagram is to be used whenever products or endorsements are promoted.[76][77] As of August 2020, YouTube has updated its branded-content policies. YouTube and Google's ad policies require influencers to check a box titled "paid promotion" when publishing sponsored videos, and provide instructions on how to set it up. The policies require disclosure messages that indicate to viewers that the content is promoted.[78] All criteria used to determine the veracity of an influencer account can be fabricated.
Third-party sites and apps sell services to individual accounts that falsely boost follower counts, likes, and comments.[79] An analysis of over 7,000 influencers in the UK indicated that about half of their followers have up to 20,000 "low-quality" followers of their own, consisting of internet bots and other suspicious accounts. Over four in ten engagements with this group of influencers are considered "non-authentic".[80] A study of UK influencers that examined almost 700,000 posts from the first half of 2018 found that 12 percent of UK influencers had bought fake followers.[80] Another study found that twenty-four percent of influencers had abnormal growth patterns, indicating that they had manipulated their likes or followers.[81] A rough heuristic for flagging such patterns is sketched below. Influencer fraud (including fake followers) was estimated to cost businesses up to $1.3 billion, about 15 percent of global influencer-marketing spending; research in 2019 accounted only for the calculable cost of fake followers.[82] Virtual influencers are virtual characters, intentionally designed by 3D artists to look like real people in real situations.[83] Although most of these characters can be easily identified as computer graphics, some are very realistic and can fool users.[84] The characters are usually presented as models, singers, or other celebrities; their creators write their biographies, conduct interviews on their behalf, and act as the characters themselves.[83] Lil Miquela was a realistic virtual influencer who prompted curiosity and speculation until it emerged that she had been created by advertisers.[85] A study published in 2022 indicates that over half of Chileans have never purchased products recommended by influencers.[86]
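As referenced above, manipulated accounts are typically inferred from engagement quality and follower-growth anomalies. The sketch below is a minimal, hypothetical heuristic along those lines; the thresholds, field names, and flagging rule are assumptions made for illustration and are not taken from the cited studies.

```python
# Illustrative sketch only: a naive heuristic for flagging accounts whose follower
# growth or engagement looks manipulated. All thresholds and names are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class AccountSnapshot:
    followers: int   # follower count on a given day
    likes: int       # likes received that day
    comments: int    # comments received that day

def engagement_rate(snapshot: AccountSnapshot) -> float:
    """Engagements per follower for one day."""
    if snapshot.followers == 0:
        return 0.0
    return (snapshot.likes + snapshot.comments) / snapshot.followers

def looks_suspicious(history: List[AccountSnapshot],
                     max_daily_growth: float = 0.20,
                     min_engagement: float = 0.005) -> bool:
    """Flag an account if followers ever jump by more than max_daily_growth in a
    single day, or if average engagement per follower is implausibly low."""
    for prev, curr in zip(history, history[1:]):
        if prev.followers > 0:
            growth = (curr.followers - prev.followers) / prev.followers
            if growth > max_daily_growth:
                return True  # sudden spike consistent with purchased followers
    avg_engagement = sum(engagement_rate(s) for s in history) / len(history)
    return avg_engagement < min_engagement

# Example: a 60% overnight jump in followers gets flagged.
history = [AccountSnapshot(10_000, 300, 40), AccountSnapshot(16_000, 310, 42)]
print(looks_suspicious(history))  # True
```

In practice, fraud-detection vendors combine many more signals (audience geography, comment text, posting cadence), but the basic idea of comparing growth and engagement against plausible baselines is the same.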
https://en.wikipedia.org/wiki/Influencer_marketing
Ashill, also called aplantor astooge, is a person who publicly helps or gives credibility to a person or organization without disclosing that they have a close relationship with said person or organization, or have been paid to do so. Shills can carry out their operations in the areas of media, journalism, marketing, politics, sports,confidence games,cryptocurrency, or other business areas. A shill may also act to discredit opponents or critics of the person or organization in which they have a vested interest.[1][2] In most uses,shillrefers to someone who purposely gives onlookers, participants or "marks" the impression of an enthusiastic customer independent of the seller, marketer or con artist, for whom they are secretly working. The person or group in league with the shill relies oncrowd psychologyto encourage other onlookers or audience members to do business with the seller or accept the ideas they are promoting. Shills may be employed by salespeople and professional marketing campaigns.Plantandstoogemore commonly refer to a person who is secretly in league with another person or outside organization while pretending to be neutral or part of the organization in which they are planted, such as a magician's audience, a political party, or an intelligence organization (seedouble agent).[citation needed] Shilling is illegal in many circumstances and under many jurisdictions[3]because of the potential forfraudand damage. However, if a shill does not place uninformed parties at a risk of loss, the shill's actions may be legal. For example, a person planted in an audience to laugh and applaud when desired (seeclaque), or to participate in on-stage activities as a "random member of the audience", is a legal type of shill.[4] The origin of the term "shill" is uncertain; it may be an abbreviation of "shillaber". The word originally denoted a carnival worker who pretended to be a member of the audience in an attempt to elicit interest in an attraction. Some sources trace the usage back to 1914,[5][6]or as far back as 1911.[7]American humoristBenjamin Penhallow Shillaber(1814–1890), who often wrote under the guise of his fictional character Mrs. Ruth Partington, the American version ofMrs. Malaprop, is a possible source. In online discussion media, shills make posts expressing opinions that further interests of an organization in which they have a vested interest, such as a commercialvendororspecial interest group, while posing as unrelated innocent parties. For example, an employee of a company that produces a specific product might praise the product anonymously in a discussion forum or group in order to generate interest in that product, service, or group.Web sitescan also be set up for the same purpose. In addition, some shills usesock puppetry, where one person poses as multiple users.[citation needed] In somejurisdictionsand circumstances, this type of activity is illegal. The plastic surgery company Lifestyle Lift ordered their employees to post fake positive reviews on websites. As a result, they were sued, and ordered to pay $300,000 in damages by the New York Attorney General's office.[8] Reputable organizations may prohibit their employees and other interested parties (contractors, agents, etc.) 
from participating in public forums or discussion groups in which aconflict of interestmight arise, or will at least insist that their employees and agents refrain from participating in any way that might create a conflict of interest.[citation needed] Both the illegal and legalgamblingindustries often use shills to make winning at games appear more likely than it actually is. For example, illegalthree-card monteandshell-gamepeddlers are notorious employers of shills. These shills also often aid in cheating, disrupting the game if themarkis likely to win. In a legal casino, however, a shill is sometimes a gambler who plays using the casino's money in order to keep games (especially poker) going when there are not enough players. The title of one ofErle Stanley Gardner's mystery novels,Shills Can't Cash Chips, is derived from this type of shill. This is different from "proposition players" who are paid a salary by the casino for the same purpose, but bet with their own money.[citation needed] In marketing, shills are often employed to assume the air of satisfied customers and givetestimonialsto the merits of a given product. This type of shilling is illegal in some jurisdictions, but almost impossible to detect. It may be considered a form ofunjust enrichmentorunfair competition, as inCalifornia's Business & Professions Code§ 17200, which prohibits any "unfair or fraudulent business act or practice and unfair, deceptive, untrue or misleadingadvertising".[9] People who drive prices in favor of the seller or auctioneer with fake bids in an auction are calledshillsorpotted plantsand seek to provoke abiddingwar among other participants.[10][11][12]Often they are told by the seller precisely how high to bid, as the seller does not lose money if the item does not sell, paying only the auction fees. Shilling has a substantially higher rate of occurrence inonline auctions, where any user with multiple accounts can bid on their own items. One detailed example of this has been documented in onlineauto auctions.[10]The online auction siteeBayforbids shilling; its rules do not allow friends or employees of a person selling an item to bid on the item,[13]even though eBay has no means to detect if a bidder is related to a seller or is in fact the seller.[14] In his bookFake: Forgery, Lies, & eBay,Kenneth Waltondescribes how he and his accomplices placed shill bids on hundreds of eBay auctions over the course of a year. Walton and his associates were charged and convicted of fraud by federal authorities for their eBay shill bidding.[15] With the proliferation of live online auctions in recent years, shill bidding has become commonplace.[16]Some websites allow shill bidding by participating auctioneers. These auctioneers are able to see bids placed in real time and can then place counter bids to increase the amount. One Proxibid auctioneers' website states, "At the request of the auction company, this auction permits bids to be placed by the seller or on the seller's behalf, even if such bids are placed solely for the purpose of increasing the bid."[17]
https://en.wikipedia.org/wiki/Shill
Social media marketing is the use of social media platforms and websites to promote a product or service.[1] Although the terms e-marketing and digital marketing are still dominant in academia, social media marketing is becoming more popular among both practitioners and researchers.[2] Most social media platforms, such as Facebook, LinkedIn, Instagram, and Twitter, have built-in data analytics tools that enable companies to track the progress, success, and engagement of social media marketing campaigns. Companies address a range of stakeholders through social media marketing, including current and potential customers, current and potential employees, journalists, bloggers, and the general public. On a strategic level, social media marketing includes the management of a marketing campaign, governance, setting the scope (e.g., more active or passive use), and the establishment of a firm's desired social media "culture" and "tone". When using social media marketing, firms can allow customers and Internet users to post user-generated content (e.g., online comments, product reviews, etc.), also known as "earned media", rather than use marketer-prepared advertising copy. Social networking websites allow individuals, businesses, and other organizations to interact with one another and build relationships and communities online. When companies join these social channels, consumers can interact with them directly.[3] That interaction can be more personal to users than traditional methods of outbound marketing and advertising.[4] Social networking sites act as word of mouth, or more precisely, electronic word of mouth. The Internet's ability to reach billions across the globe has given online word of mouth a powerful voice and far reach. An influence network is defined by the ability to rapidly change the buying patterns, product or service acquisition, and activity of a growing number of consumers.[5] Social networking sites and blogs allow followers to "retweet" or "repost" comments made by others about a product being promoted, which occurs quite frequently on some social media sites.[6] By repeating the message, the user's connections are able to see it, so the message reaches more people. Because information about the product is being put out there and repeated, more traffic is brought to the product or company.[4] Social networking websites are based on building virtual communities that allow consumers to express their needs, wants, and values online. Social media marketing then connects these consumers and audiences to businesses that share the same needs, wants, and values. Through social networking sites, companies can keep in touch with individual followers. This personal interaction can instill a feeling of loyalty in followers and potential customers. Also, by choosing whom to follow on these sites, products can reach a very narrow target audience.[4] Social networking sites also include much information about what products and services prospective clients might be interested in. Through the use of new semantic analysis technologies, marketers can detect buying signals, such as content shared by people and questions posted online. An understanding of buying signals can help salespeople target relevant prospects and marketers run micro-targeted campaigns. A toy sketch of this kind of keyword-based signal detection is given below.
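As a rough illustration of the buying-signal idea mentioned above (not of any particular vendor's semantic-analysis technology), the sketch below scores posts against a hand-written list of purchase-intent phrases. The phrase list, weights, and threshold are invented for the example.

```python
# Toy buying-signal detector: scores a post against hand-picked purchase-intent phrases.
# Real semantic-analysis tools are far more sophisticated; this only illustrates the concept.

INTENT_PHRASES = {
    "any recommendations": 2,
    "where can i buy": 3,
    "worth the price": 2,
    "looking for a new": 2,
    "should i upgrade": 1,
}

def buying_signal_score(post: str) -> int:
    """Sum the weights of intent phrases found in the post (case-insensitive)."""
    text = post.lower()
    return sum(weight for phrase, weight in INTENT_PHRASES.items() if phrase in text)

def is_prospect(post: str, threshold: int = 2) -> bool:
    """Treat a post as a buying signal once its score reaches the threshold."""
    return buying_signal_score(post) >= threshold

posts = [
    "Looking for a new laptop for school, any recommendations?",
    "Beautiful sunset at the beach today.",
]
for p in posts:
    print(is_prospect(p), "-", p)  # True for the first post, False for the second
```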
In 2014, over 80% of business executives identified social media as an integral part of their business.[7] Business retailers have seen 133% increases in their revenues from social media marketing.[8] Examples of popular social networking websites over the years include Facebook, Instagram, Twitter, TikTok, Myspace, LinkedIn, Snapchat, and Threads. More than three billion people in the world are active on the Internet. Over the years, the Internet has continually gained more users, jumping from 738 million in 2000 to 5.3 billion in 2023.[9] Roughly 81% of the current population in the United States has some type of social media profile that they engage with frequently.[10] Mobile phone usage is beneficial for social media marketing because mobile web browsing lets individuals access social networking sites immediately. Mobile phones have altered the path-to-purchase process by allowing consumers to obtain pricing and product information in real time.[11] They have also allowed companies to constantly remind and update their followers. Many companies now put QR (Quick Response) codes on products so that individuals can access the company website or online services with their smartphones. Retailers use QR codes to facilitate consumer interaction with brands by linking the code to brand websites, promotions, product information, and other mobile-enabled content. In addition, the use of real-time bidding in the mobile advertising industry is high and rising because of its value for on-the-go web browsing. In 2012, Nexage, a provider of real-time bidding in mobile advertising, reported a 37% increase in revenue each month. Adfonic, another mobile advertisement publishing platform, reported an increase of 22 billion ad requests that same year.[12] Mobile devices have become increasingly popular, with 5.7 billion people using them worldwide.[13] This has changed the way consumers interact with media and has further implications for TV ratings, advertising, mobile commerce, and more. Mobile media consumption such as mobile audio streaming and mobile video is on the rise; in the United States, more than 100 million users are projected to access online video content via mobile devices. Mobile video revenue consists of pay-per-view downloads, advertising, and subscriptions. As of 2013, worldwide mobile phone Internet user penetration was 73.4%. Figures from 2017 suggested that more than 90% of Internet users would access online content through their phones.[14] There are both a passive approach and an active approach to using social media as a marketing tool: Social media can be a useful source of market information and a way to hear customer perspectives. Blogs, content communities, and forums are platforms where individuals share their reviews and recommendations of brands, products, and services. Businesses are able to tap and analyze the customer voices and feedback generated on social media for marketing purposes.[15] In this sense, social media is a relatively inexpensive source of market intelligence that marketers and managers can use to track and respond to consumer-identified problems and to detect market opportunities. For example, the Internet erupted with videos and pictures of the iPhone 6 "bend test", which showed that the coveted phone could be bent by hand pressure. The so-called "bendgate" controversy[16] created confusion among customers who had waited months for the launch of the latest rendition of the iPhone.
However, Apple promptly issued a statement saying that the problem was extremely rare and that the company had taken several steps to make the device's case stronger and more robust. Unlike traditional market research methods such as surveys, focus groups, and data mining, which are time-consuming and costly and can take weeks or even months to analyze, marketers can use social media to obtain "live" or "real-time" information about consumer behavior and viewpoints on a company's brand or products. This can be useful in the highly dynamic, competitive, and fast-paced global marketplace. Social media can be used not only as a public relations and direct marketing tool, but also as a communication channel targeting very specific audiences, with social media influencers and social media personalities serving as effective customer engagement tools.[15] This tactic is widely known as influencer marketing. Influencer marketing allows brands the opportunity to reach their target audience in a more genuine, authentic way via a select group of influencers advertising their product or service. In fact, brands were set to spend up to $15 billion on influencer marketing by 2022, per Business Insider Intelligence estimates based on Mediakix data.[17] Technologies predating social media, such as broadcast TV and newspapers, can also provide advertisers with a fairly targeted audience, given that an ad placed during a sports broadcast or in the sports section of a newspaper is likely to be seen by sports fans. However, social media websites can target niche markets even more precisely. Using digital tools such as Google AdSense, advertisers can target their ads to very specific demographics, such as people who are interested in social entrepreneurship, political activism associated with a particular political party, or video gaming. Google AdSense does this by looking for keywords in social media users' online posts and comments. It would be hard for a TV station or paper-based newspaper to provide ads this targeted (though not impossible, as can be seen with "special issue" sections on niche topics, which newspapers can use to sell targeted ads). Social networks are, in many cases, viewed as a great tool for avoiding costly market research. They are known for providing a short, fast, and direct way to reach an audience through a person who is widely known. For example, an athlete endorsed by a sporting goods company brings along a support base of millions of people who are interested in what the athlete does or how they play and who want to be part of that through the athlete's endorsement of the company. At one point, consumers had to visit stores to view products associated with famous athletes; now the latest apparel of a famous athlete, such as Cristiano Ronaldo, can be viewed online with the click of a button, advertised directly through his Twitter, Instagram, and Facebook accounts. Facebook and LinkedIn are leading social media platforms where users can hyper-target their ads. Hypertargeting uses not only public profile information but also information that users submit but hide from others.[18] There are several examples of firms initiating some form of online dialog with the public to foster relations with customers.
According to Constantinides, Lorenzo and Gómez Borja (2008), "Business executives like Jonathan Swartz, President and CEO of Sun Microsystems, Steve Jobs CEO of Apple Computers, and McDonald's Vice President Bob Langert post regularly in their CEO blogs, encouraging customers to interact and freely express their feelings, ideas, suggestions, or remarks about their postings, the company or its products".[15] Using customer influencers (for example, popular bloggers) can be a very efficient and cost-effective method of launching new products or services.[19] Social media content driven by algorithms has become an increasingly popular feature in recent years.[20] One social media platform that has used this strategy is TikTok. TikTok has become one of the fastest-growing applications to date and currently has around 1.5 billion users, consisting mainly of children and teenagers.[21] The algorithm used within the platform encourages creativity among TikTok users because of its wide range of effects and challenges that change from day to day.[21] Because of this, content creators big and small have an increased chance of going viral by appearing on TikTok's "For You" page. The "For You" page algorithm recommends videos to users based on their previous watches, likes, and shares.[21] This can be extremely beneficial for small businesses that use the platform for social media marketing: although they may start small, by following trends, using hashtags, and more, anyone can promote themselves on this application and attract new audiences from around the world. Moreover, algorithmically driven content on TikTok tends to draw a more positive response from users, as the target audience is largely young users who are more receptive to these increasingly popular marketing communications.[22] With this in mind, TikTok is filled with rich content, including images and videos, which can aid influencer marketing more than heavily text-based platforms that are less engaging for their audiences.[22] Engagement with the social web means that customers and stakeholders are active participants rather than passive viewers. Examples include consumer advocacy groups and groups that criticize companies (e.g., lobby groups or advocacy organizations). Social media use in a business or political context allows all consumers and citizens to express and share opinions about a company's products, services, and business practices, or a government's actions. Each customer, non-customer, or citizen participating online via social media becomes part of the marketing department (or a challenge to the marketing effort) as other customers read their positive or negative comments or reviews. Getting consumers, potential consumers, or citizens to engage online is fundamental to successful social media marketing.[23] With the advent of social media marketing, it has become increasingly important to gain customer interest in products and services, which can eventually be translated into buying behavior, or voting and donating behavior in a political context. New online marketing concepts of engagement and loyalty have emerged that aim to build customer participation and brand reputation.[24] Engagement in social media for the purpose of a social media strategy is divided into two parts. The first is proactive, regular posting of new online content.
This can be seen through digital photos, digital videos, text, and conversations. It is also represented through the sharing of content and information from others via web links. The second part is reactive: conversing with social media users by responding to those who reach out to the company's social media profiles through comments or messages.[25] Small businesses also use social networking sites as a promotional technique. Businesses can follow individuals' social networking site use in the local area and advertise specials and deals. These can be exclusive and take the form of "get a free drink with a copy of this tweet". This type of message encourages other locals to follow the business on the sites in order to obtain the promotional deal. In the process, the business gets seen and promotes itself (brand visibility). Small businesses also use social networking sites to develop their own market research on new products and services. By encouraging their customers to give feedback on new product ideas, businesses can gain valuable insights into whether a product may be accepted by their target market well enough to merit full production. In addition, customers will feel that the company has engaged them in the process of co-creation, the process in which the business uses customer feedback to create or modify a product or service that fills a need of the target market. Such feedback can take various forms, such as surveys, contests, and polls. Social networking sites such as LinkedIn also provide an opportunity for small businesses to find candidates to fill staff positions.[26] Review sites, such as Yelp, also help small businesses to build their reputation beyond just brand visibility. Positive customer peer reviews help influence new prospects to purchase goods and services more than company advertising does.[27] In early 2012, Nike introduced its Make It Count social media campaign. The campaign kicked off with YouTubers Casey Neistat and Max Joseph launching a YouTube video on April 9, 2012, in which they traveled 34,000 miles to visit 16 cities in 13 countries. They promoted the #makeitcount hashtag, which millions of consumers shared via Twitter and Instagram by uploading photos and messages.[28] The #MakeItCount YouTube video went viral, and Nike saw an 18% increase in profit in 2012, the year this product was released. Social media marketing offers a number of possible benefits. One of the main purposes of employing social media in marketing is as a communications tool that makes companies accessible to those interested in their products and visible to those who have no knowledge of their products.[33] These companies use social media to create buzz and to learn from and target customers. It is the only form of marketing that can reach consumers at each and every stage of the consumer decision journey.[34] Marketing through social media has other benefits as well. Of the top 10 factors that correlate with strong Google organic search performance, seven are social media dependent. This means that brands that are less active or inactive on social media tend to show up less in Google searches.[35] While platforms such as Twitter, Facebook, and Google+ have a larger number of monthly users, visual-media-sharing mobile platforms garner a higher interaction rate, have registered the fastest growth, and have changed the ways in which consumers engage with brand content.
Instagram has an interaction rate of 1.46% with an average of 130 million monthly users, as opposed to Twitter, which has a 0.03% interaction rate with an average of 210 million monthly users.[35] Unlike traditional media, which are often cost-prohibitive for many companies, a social media strategy does not require an astronomical budget.[36] To this end, companies make use of platforms such as Facebook, Twitter, YouTube, TikTok, and Instagram to reach audiences much wider than through traditional print, TV, or radio advertisements alone, at a fraction of the cost, as most social networking sites can be used at little or no cost (though some websites charge companies for premium services). This has changed the way companies approach interaction with customers, as a substantial percentage of consumer interactions are now carried out over online platforms with much higher visibility. Customers can now post reviews of products and services, rate customer service, and ask questions or voice concerns directly to companies through social media platforms. According to Measuring Success, over 80% of consumers use the web to research products and services.[37] Thus social media marketing is also used by businesses to build relationships of trust with consumers.[38] To this end, companies may also hire personnel specifically to handle these social media interactions, who usually report under the title of online community manager. Handling these interactions in a satisfactory manner can result in an increase in consumer trust. To this end, and to repair the public's perception of a company, three steps are taken to address consumer concerns: identifying the extent of the social chatter, engaging the influencers to help, and developing a proportional response.[39] Twitter allows companies to promote their products in short messages known as tweets, limited to 280 characters, which appear on followers' home timelines.[40] Twitter has also been used by companies to provide customer service.[41] Facebook pages are more detailed than Twitter accounts. They allow a product to provide videos, photos, longer descriptions, and testimonials, and followers can comment on the product pages for others to see. Facebook can link back to the product's Twitter page, as well as send out event reminders. As of May 2015, 93% of business marketers used Facebook to promote their brand.[42][unreliable source?] A study from 2011 attributed 84% of "engagement" (clicks and likes that link back) to Facebook advertising.[43] By 2014, Facebook had restricted the content published from business and brand pages. Adjustments to Facebook's algorithms reduced the audience for non-paying business pages (those with at least 500,000 "Likes") from 16% in 2012 down to 2% in February 2014.[44][45][46] LinkedIn, a professional business-related networking site, allows companies to create professional profiles for themselves and their business in order to network and meet others.[47] LinkedIn members can use "Company Pages", similar to Facebook pages, to create an area that allows business owners to promote their products or services and interact with their customers.[48] WhatsApp was founded by Jan Koum and Brian Acton. Acquired by Facebook in 2014, WhatsApp continues to operate as a separate app focused on building a messaging service that works fast and reliably anywhere in the world.
Started as an alternative to SMS, WhatsApp now supports sending and receiving a variety of media, including text, photos, videos, documents, and location, as well as voice calls. WhatsApp messages and calls are secured with end-to-end encryption, meaning that no third party, including WhatsApp, can read or listen to them. WhatsApp has a customer base of one billion people in over 180 countries.[49][50] It is used to send personalized promotional messages to individual customers. It has several advantages over SMS, including the ability to track how a broadcast message performs using the blue-tick option in WhatsApp and the ability to send messages to Do Not Disturb (DND) customers. WhatsApp is also used to send a series of bulk messages to targeted customers using the broadcast option. Companies have adopted it widely because it is a cost-effective promotional option and spreads a message quickly. As of 2019, WhatsApp still did not allow businesses to place ads in the app.[51] Yelp consists of a comprehensive online index of business profiles. Businesses are searchable by location, similar to Yellow Pages. The website is operational in seven different countries, including the United States and Canada. Business account holders are allowed to create, share, and edit business profiles. They may post information such as the business location, contact information, pictures, and service information. The website further allows individuals to write and post reviews about businesses and rate them on a five-point scale. Messaging and talk features are also available to general members of the website, serving to guide thoughts and opinions.[52] Launched on October 6, 2010, Instagram is a picture- and video-sharing platform where users can like, follow, comment, share, and interact with other users' content. Instagram is now owned by Meta, the parent company of fellow social media platform Facebook. With a growing base of 2 billion monthly active users, Instagram has become a great place for businesses to share content and "offers them a way to connect with users in a visual way."[53] A common first step in using Instagram for marketing is identifying the target audience, as Instagram is a platform primarily used by people aged 18 to 26. Businesses can then start building their brand on the app by posting content, growing an audience, analysing their performance, collaborating with celebrities, and building a community through their posts.[54] Snapchat, launched in September 2011, is a place where users send pictures and chats to friends and post content to a "story" that their added friends can see; it quickly became a popular way for younger people to communicate with one another. The app has 175 million monthly active users, and "Snapchat remains especially popular among Gen Z and Millennials, making it a prime channel for brands targeting younger demographics."[55] Brands can use marketing on Snapchat in numerous ways, including driving traffic, boosting engagement, and building brand awareness. Like many other platforms, Snapchat is free for brands to use and can be very beneficial for marketing.[56] Launched on February 14, 2005 by Chad Hurley, Steve Chen, and Jawed Karim, YouTube is a platform for users to share extended videos for people to view.
YouTube differs from other platforms in that its "audience has come onto that platform to specifically search for content they're interested in."[57] With 2.5 billion monthly active viewers, it has become one of the most widely used platforms. YouTube is a good way for brands to market because of its large audience reach, its capacity for building trust and credibility, and its ability to increase website traffic.[58] Released in 2016, TikTok became one of the most popular social media apps, with over 2 billion mobile downloads worldwide. Owned by the Chinese Internet company ByteDance, TikTok began as a way to share short videos with friends, family, and other users, but has now "evolved into a space of entertainment, creativity, marketing, social activism, and more."[59] TikTok is known for being addictive, as many users spend hours a day scrolling through the app. Creators now use TikTok as a way to build their platforms across different social media, but they rely on the app to start their journey. Since TikTok's algorithm is relatively easy to follow and understand, the app makes it easier for users to watch videos they are likely to enjoy through their "For You" page. Though TikTok is a fun app for sharing videos, "it is just as powerful as a marketing tool."[60] TikTok is a recommended tool for businesses that target younger audiences. Social bookmarking sites are used in social media promotion. Each of these sites is dedicated to the collection, curation, and organization of links to other websites that users deem to be of good quality. This process is "crowdsourced", allowing amateur social media network members to sort and prioritize links by relevance and general category. Due to the large user bases of these websites, any link from one of them to another, smaller website may cause a flash crowd, a sudden surge of interest in the target website. In addition to user-generated promotion, these sites also offer advertisements within individual user communities and categories.[61] Because ads can be placed in designated communities with a very specific target audience and demographic, they have far greater potential for traffic generation than ads selected simply through cookie and browser history.[62] Additionally, some of these websites have implemented measures to make ads more relevant to users by allowing users to vote on which ones will be shown on pages they frequent.[63] The ability to redirect large volumes of web traffic and target specific, relevant audiences makes social bookmarking sites a valuable asset for social media marketers. Platforms like LinkedIn create an environment for companies and clients to connect online.[64] Companies that recognize the need for information, originality, and accessibility employ blogs to make their products popular and unique, and ultimately to reach out to consumers who are active on social media.[65] Studies from 2009 show that consumers view coverage in the media or from bloggers as more neutral and credible than print advertisements, which are not thought of as free or independent.[66] Blogs allow a product or company to provide longer descriptions of products or services, can include testimonials, and can link to and from other social network and blog pages. Blogs can be updated frequently and are promotional techniques for keeping customers, and also for acquiring followers and subscribers who can then be directed to social network pages. Online communities can enable a business to reach the clients of other businesses using the platform.
To allow firms to measure their standing in the corporate world, sites enable employees to post evaluations of their companies.[64] Some businesses opt out of integrating social media platforms into their traditional marketing regimen. There are also specific corporate standards that apply when interacting online.[64] To maintain an advantage in the business-consumer relationship, businesses have to be aware of four key assets that consumers maintain: information, involvement, community, and control.[67] Blogging website Tumblr first launched ad products on May 29, 2012.[68] Rather than relying on simple banner ads, Tumblr requires advertisers to create a Tumblr blog so the content of those blogs can be featured on the site.[69] Within one year, four native ad formats had been created on web and mobile, and more than 100 brands were advertising on Tumblr, with 500 cumulative sponsored posts. These posts can be one or more of the following: images, photo sets, animated GIFs, video, audio, and text posts. To help users differentiate promoted posts from regular users' posts, promoted posts have a dollar symbol in the corner. On May 6, 2014, Tumblr announced customization and theming on mobile apps for brands to advertise.[71] To promote the 2013 film Monsters University, Disney/Pixar created a Tumblr account, MUGrumblr, saying that the account is maintained by a 'Monstropolis transplant' and 'self-diagnosed coffee addict' who is currently a sophomore at Monsters University.[72] A "student" from Monsters University uploaded memes, animated GIFs, and Instagram-like photos related to the movie. In 2014, Apple created a Tumblr page to promote the iPhone 5c, labeling it "Every color has a story" with the website name "ISee5c". Upon opening the website, the page is covered with different colors representing the iPhone 5c phone colors and case colors. When a colored section is clicked, a 15-second video plays a song and "showcases the dots featured on the rear of the iPhone 5c official cases and on the iOS 7 dynamic wallpapers",[73] concluding with words related to the video's theme. Social media marketing involves the use of social networks, consumers' online brand-related activities (COBRA) and electronic word of mouth (eWOM)[74][75] to advertise successfully online. Social networks such as Facebook and Twitter provide advertisers with information about the likes and dislikes of their consumers.[76] This technique is crucial, as it provides businesses with a "target audience".[76] With social networks, information relevant to users' likes is available to businesses, which then advertise accordingly. Activities such as uploading a picture of one's "new Converse sneakers to Facebook"[74] are an example of a COBRA.[74][75] Electronic recommendations and appraisals are a convenient way to have a product promoted via "consumer-to-consumer interactions".[74] An example of eWOM would be an online hotel review;[77] the hotel company can have two possible outcomes based on its service. Good service results in a positive review, which gives the hotel free advertising via social media; poor service, however, results in a negative consumer review, which can potentially harm the company's reputation.[78] Social networking sites such as Facebook, Instagram, Twitter, MySpace, etc. have all influenced the buzz of word-of-mouth marketing. In 1999, Misner said that word-of-mouth marketing is "the world's most effective, yet least understood marketing strategy" (Trusov, Bucklin, & Pauwels, 2009, p.
3).[79] Through the influence of opinion leaders, the increased online "buzz" of word-of-mouth marketing that a product, service, or company experiences is due to the rise in the use of social media and smartphones. Businesses and marketers have noticed that "a person's behaviour is influenced by many small groups" (Kotler, Burton, Deans, Brown, & Armstrong, 2013, p. 189). These small groups revolve around social networking accounts run by influential people (opinion leaders or "thought leaders") who have groups of followers. The types of groups (followers) are:[80] reference groups (people who know each other either face-to-face or who have an indirect influence on a person's attitude or behaviour); membership groups (groups that have a direct influence on a person's attitude or behaviour); and aspirational groups (groups an individual wishes to belong to). Marketers target influential people on social media, referred to as influencers, who are recognized as opinion leaders and opinion formers based on the credibility of their following. An influencer's role under a brand sponsorship is to send messages to their target audiences through posts in order to amplify the credibility of a product or brand. A social media post by an opinion leader can have a much greater impact (via the forwarding or "liking" of the post) than a social media post by a regular user. Influencers may help brands obtain more consumers by promoting their products in an honest and genuine way using personal sales methods, which is why brands consider collaborating with influencers a smart idea. The reason influencer marketing works so well, however, is that it uses real, shareable, and viral content to reach a large audience and provide a profitable return on investment. Marketers have come to understand that "consumers are more prone to believe in other individuals" whom they trust.[81] Opinion leaders (OLs) and opinion formers (OFs) can also send their own messages about products and services they choose.[82] The reason opinion leaders and formers have such a strong following base is that their opinion is valued or trusted.[83] They can review products and services for their followings, and those reviews can be positive or negative toward the brand. OLs and OFs are people who have social status and who, because of their personality, beliefs, values, etc., have the potential to influence other people.[80] They usually have a large number of followers, otherwise known as their reference, membership, or aspirational group.[80] When an OL or OF supports a brand's product by posting a photo, video, or written recommendation on a blog, their following may be influenced, and because they trust the OL/OF, there is a high chance of the brand selling more products or building a follower base. Having an OL/OF helps spread word-of-mouth talk among reference groups and/or membership groups, e.g. family, friends, work colleagues, etc.[84] The adjusted communication model shows the use of opinion leaders and opinion formers: the sender/source gives the message to many OLs/OFs, who pass the message on along with their personal opinion; the receivers (followers/groups) then form their own opinion and send their personal message to their group (friends, family, etc.).[85] Owned social media channels are an essential extension of businesses and brands in today's world. Brands must seek to create their brand image on each platform and cater to the type of consumer demographics on each respective platform.
In contrast with pre-Internet marketing, such as TV ads and newspaper ads, in which the marketer controlled all aspects of the ad, with social media, users are free to post comments right below an online ad or an online post by a company about its product. Companies are increasingly using their social media strategy as part of their traditional marketing effort using magazines, newspapers, radio advertisements, and television advertisements. Since media consumers in the 2010s often use multiple platforms at the same time (e.g., surfing the Internet on a tablet while watching a streaming TV show), marketing content needs to be consistent across all platforms, whether traditional or new media. Heath (2006) wrote about the extent of attention businesses should give to their social media sites: it is about finding a balance between posting frequently and not over-posting. Considerable attention must be paid to social media sites because people need updates to gain brand recognition. Therefore, a lot more content is needed, and this can often be unplanned content.[86] Planned content begins with the creative/marketing team generating ideas; once the ideas are complete, they are sent off for approval. There are two general ways of doing so. Planned content is often noticeable to customers and is unoriginal or lacks excitement, but it is also a safer option for avoiding unnecessary backlash from the public.[87] Both routes for planned content are time-consuming; the first route to approval takes 72 hours. Although the second route can be significantly shorter, it also carries more risk, particularly in the legal department. Unplanned content is an "in the moment" idea, "a spontaneous, tactical reaction".[88] The content could be trending and not have the time to take the planned-content route. Unplanned content is posted sporadically and is not calendar/date/time arranged (Deshpande, 2014).[89][90] Issues with unplanned content revolve around legal questions and whether the message being sent out represents the business/brand accordingly. If a company sends out a tweet or Facebook message too hurriedly, it may unintentionally use insensitive language or messaging that could alienate some consumers. For example, celebrity chef Paula Deen was criticized after she made a social media post commenting about HIV-AIDS and South Africa; her message was deemed offensive by many observers. The main difference between planned and unplanned content is the time taken to approve the content. Unplanned content must still be approved by marketing managers, but in a much more rapid manner, e.g. 1–2 hours or less. Teams may miss errors because of being hurried. When using unplanned content, Brito (2013) says, "be prepared to be reactive and respond to issues when they arise".[87] Brito (2013) writes about having a "crisis escalation plan", because "It will happen". The plan involves breaking down the issue into topics and classifying the issue into groups. Colour-coding the potential risk ("identify and flag potential risks") also helps to organise an issue. The problem can then be handled by the correct team and resolved more effectively, rather than being left to whoever is at hand to solve the situation.[87] Traditional advertising techniques include print and television advertising. The Internet has already overtaken television as the largest advertising market.[91] Websites often include banner or pop-up ads. Social networking sites don't always have ads.
In exchange, products have entire pages and are able to interact with users. Television commercials often end with a spokesperson asking viewers to check out the product website for more information. While briefly popular, print ads sometimes included QR codes. These QR codes can be scanned by cell phones and computers, sending viewers to the product website. Advertising is beginning to move viewers from traditional outlets to electronic ones.[92] While traditional media such as newspaper and television advertising are largely overshadowed by the rise of social media marketing, there is still a place for traditional marketing. Internet and social networking leaks are one of the issues facing traditional advertising. Video and print ads are often leaked to the world via the Internet earlier than they are scheduled to premiere. Social networking sites allow those leaks to go viral and be seen by many users more quickly. The time difference is also a problem facing traditional advertisers. When social events occur and are broadcast on television, there is often a time delay between airings on the east coast and west coast of the United States. Social networking sites have become a hub of comment and interaction concerning the event. This allows individuals watching the event on the west coast (time-delayed) to know the outcome before it airs. The 2011 Grammy Awards highlighted this problem: viewers on the west coast learned who won different awards based on comments made on social networking sites by individuals watching live on the east coast.[93] Due to the viral nature of the Internet, a mistake by a single employee has in some cases been shown to result in devastating consequences for organizations. The code of ethics that is affiliated with traditional marketing can also be applied to social media.[100] However, because social media is so personal and international, there is another list of complications and challenges that come along with being ethical online. A sensitive topic for social media professionals is the ethics of social media marketing practices, specifically the proper use of what is often very personal data.[101] With the invention of social media, the marketer no longer has to focus solely on the basic demographics and psychographics gathered from television and magazines, but can now see what consumers like to hear from advertisers, how they engage online, and what their needs and wants are.[102] The general concept of being ethical while marketing on social networking sites is to be honest about the intentions of the campaign, avoid false advertising, be aware of user privacy conditions (which means not using consumers' private information for gain), respect the dignity of persons in the shared online community, and claim responsibility for any mistakes or mishaps that result from the marketing campaign.[103] Most social network marketers use websites like Facebook and MySpace to try to drive traffic to another website.[104] While it is ethical to use social networking websites to spread a message to people who are genuinely interested, many people game the system with auto-friend-adding programs and spam messages and bulletins. Social networking websites are becoming wise to these practices, however, and are effectively weeding out and banning offenders. In addition, social media platforms have become extremely aware of their users and collect information about their viewers to connect with them in various ways. Social-networking website Facebook Inc.
is quietly working on a new advertising system that would let marketers target users with ads based on the massive amounts of information people reveal on the site about themselves.[105] Some individuals may view this feature as ethical, and others as unethical. Some people may react negatively because they believe it is an invasion of privacy. On the other hand, some individuals may enjoy the feature because their social network recognizes their interests and sends them particular advertisements pertaining to those interests. Consumers like to network with people who share their interests and desires.[106] Individuals who agree to make their social media profile public should be aware that advertisers have the ability to take information that interests them in order to send them information and advertisements to boost their sales. Managers invest in social media to foster relationships and interact with customers.[107] This is an ethical way for managers to send messages about their advertisements and products to their consumers. Since social media marketing first came into being, strategists and marketers have been getting smarter and more careful with the way they collect information and distribute advertisements. With the presence of data-collecting companies, there is no longer a need to target specific audiences. This can be seen as a large ethical gray area. For many users, this is a breach of privacy, but there are no laws that prevent these companies from using the information provided on their websites. Companies like Equifax, Inc., TransUnion Corp, and LexisNexis Group thrive on collecting and sharing the personal information of social media users.[108] In 2012, Facebook purchased information on 70 million households from a third-party company called Datalogix. Facebook later revealed that it purchased the information in order to create a more efficient advertising service.[109]
https://en.wikipedia.org/wiki/Social_media_marketing
Viral marketingis a business strategy that uses existing social networks to promote a product mainly on various social media platforms. Its name refers to how consumers spread information about a product with other people, much in the same way that avirusspreads from one person to another.[1]It can be delivered byword of mouth, or enhanced by the network effects of theInternetandmobile networks.[2] The concept is often misused or misunderstood,[3]as people apply it to any successful enough story without taking into account the word "viral".[4] Viral advertising is personal and, while coming from an identified sponsor, it does not mean businesses pay for its distribution.[5]Most of the well-known viral ads circulating online are ads paid by a sponsor company, launched either on their own platform (company web page orsocial mediaprofile) or on social media websites such as YouTube.[6]Consumers receive the page link from a social media network or copy the entire ad from a website and pass it along through e-mail or posting it on a blog, web page or social media profile. Viral marketing may take the form ofvideo clips, interactiveFlashgames,advergames,ebooks,brandable software,images,text messages,emailmessages, orweb pages. The most commonly utilized transmission vehicles for viral messages include pass-along based, incentive based, trendy based, and undercover based. However, the creative nature of viral marketing enables an "endless amount of potential forms and vehicles the messages can utilize for transmission", including mobile devices.[7] The ultimate goal of marketers interested in creating successful viral marketing programs is to createviral messagesthat appeal to individuals with highsocial networking potential(SNP) and that have a high probability of being presented and spread by these individuals and their competitors in their communications with others in a short period.[8] The term "viral marketing" has also been usedpejorativelyto refer tostealth marketingcampaigns—marketing strategies that advertise a product to people without them knowing they are being marketed to.[9] The emergence of "viral marketing", as an approach to advertisement, has been tied to the popularization of the notion that ideas spread like viruses. The field that developed around this notion, memetics, peaked in popularity in the 1990s.[10]As this then began to influence marketinggurus, it took on a life of its own in that new context. The brief career of Australian pop singerMarcus Montanais largely remembered as an early example of viral marketing. In early 1989, thousands of posters declaring "Marcus is Coming" were placed aroundSydney, generating discussion and interest within the media and the community about the meaning of the mysterious advertisements. 
The campaign successfully made Montana's musical debut a talking point, but his subsequent music career was a failure.[11]

The term viral strategy was first used in marketing in 1995, in a pre-digital marketing era, by a strategy team at Chiat/Day advertising in LA (now TBWA LA), led by Lorraine Ketch and Fred Sattler, for the launch of the first PlayStation for Sony Computer Entertainment.[citation needed] Born from a need to combat huge target cynicism, the insight was that people reject things pushed at them but seek out things that elude them.[citation needed] Chiat/Day created a 'stealth' campaign to go after influencers and opinion leaders, using street teams for the first time in brand marketing and layering an intricate omni-channel web of information and intrigue.[citation needed] Insiders picked up on it and spread the word.[citation needed] Within six months, PlayStation was number one in its category—Sony's most successful launch in history.[citation needed]

There is debate on the origin and the popularization of the specific term viral marketing.[citation needed] The term is found in PC User magazine in 1989 with a somewhat different meaning.[12][13] It was later used by Jeffrey Rayport in the 1996 Fast Company article "The Virus of Marketing",[14] and by Tim Draper and Steve Jurvetson of the venture capital firm Draper Fisher Jurvetson in 1997 to describe Hotmail's practice of appending advertising to outgoing mail from its users.[15]

Doug Rushkoff, a media critic, wrote about viral marketing on the Internet in 1996.[16] The assumption is that if such an advertisement reaches a "susceptible" user, that user becomes "infected" (i.e., accepts the idea) and shares the idea with others, "infecting" them, in the viral analogy's terms. As long as each infected user shares the idea with more than one susceptible user on average (i.e., the basic reproductive rate is greater than one—the standard in epidemiology for qualifying something as an epidemic), the number of infected users grows according to an exponential curve. Of course, the marketing campaign may be successful even if the message spreads more slowly, if this user-to-user sharing is sustained by other forms of marketing communications, such as public relations or advertising.[citation needed]

Bob Gerstley wrote about algorithms designed to identify people with high "social networking potential."[17] Gerstley employed SNP algorithms in quantitative marketing research. In 2004, the concept of the alpha user was coined to indicate that it had now become possible to identify the focal members of any viral campaign, the "hubs" who were most influential. Alpha users could be targeted for advertising purposes most accurately in mobile phone networks, due to their personal nature.[18]

In early 2013 the first ever Viral Summit was held in Las Vegas. The summit attempted to identify similar trends in viral marketing methods for various media.

Marketer Jonah Berger defines six key factors that drive virality,[19][20] organized in the acronym STEPPS: social currency, triggers, emotion, public, practical value, and stories. The goal of a viral marketing campaign is to widely disseminate marketing content through sharing and liking. Another important factor that drives virality is the propagativity of the content, referring to the ease with which consumers can redistribute it.[21] This includes the effort required to share the content, the network size and type of the chosen distribution medium, and the proximity of shareable content to its means of redistribution (i.e. a 'Share' button).
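The epidemic analogy described above can be made concrete with a simple branching-process calculation. The sketch below is illustrative only and is not drawn from the cited sources: it assumes a fixed number of seed recipients, an average sharing rate per recipient (standing in for the basic reproductive rate), and a set number of sharing cycles, and shows how cumulative reach compounds when that rate exceeds one and stalls when it does not.

```python
# Minimal branching-process sketch of viral spread (illustrative assumptions only).
# seeds:  users who receive the message directly from the marketer.
# k:      average number of new recipients each user passes the message to per cycle
#         (analogous to the basic reproductive rate; > 1 sustains exponential growth).
# cycles: number of sharing generations simulated.

def expected_reach(seeds: int, k: float, cycles: int) -> float:
    """Expected cumulative number of users reached after the given sharing cycles."""
    total = float(seeds)
    generation = float(seeds)
    for _ in range(cycles):
        generation *= k          # each generation spawns k new recipients on average
        total += generation
    return total

if __name__ == "__main__":
    # k > 1: growth compounds; k < 1: the campaign fizzles out.
    print(expected_reach(seeds=100, k=1.2, cycles=10))  # ~3,200 users reached
    print(expected_reach(seeds=100, k=0.8, cycles=10))  # ~460 users reached
```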
To form deeper connections with viewers and increase the chances of virality, many marketers use psychological principles. They argue that this approach is scientific and can foster an environment where the odds of gaining traction are much higher.[22]

People find psychological safety and can develop a sense of trust when more people interact with online content. For this reason, marketers work to develop media that resonates with viewers on a deeper, emotional level, as this approach frequently results in higher engagement. This level of interaction serves as a sign of approval, reducing the personal risk that is subconsciously linked to associating oneself with a company or brand's content.[23]

Professor Jonah Berger at the University of Pennsylvania's Wharton School of Business affirms that marketing campaigns that trigger psychological responses linked to strong emotions tend to perform better. In particular, Berger found that positive emotions like happiness, joy, and excitement have more successful share rates than their negative counterparts. This outcome results from the human instinct to respond more positively to content with activating emotions, increasing the desire to share content, which contributes to its virality.[24]

Viral marketing also utilizes the primitive feeling of frisson to increase view and share counts. This feeling of excitement is considered powerful because of its ability to cause a physical response. From increased heart rates to full-body chills, Professor Brent Coker at the University of Melbourne describes how this approach to marketing triggers a primitive response that immerses the viewer in the content on a deeper level.[25]

Researchers Juliana Fernandes from the University of Florida and Sigal Segev from Florida International University also found that people are more inclined to share emotional campaigns over those that are heavily informational. They claim that consumers do not often care to learn about a product's actual features and benefits. Instead, people prefer to be immersed in experience-based content that creates an emotional impact.[26] Companies and brands that treat their content in this manner can go viral more frequently than those that do not.

Social proof is another psychological phenomenon that impacts viral content. Experts in this field argue that it is a natural instinct to want to behave similarly to others because it results in positive validation. This phenomenon explains the human need to conform, so marketers focus on creating engaging content that encourages interactions and causes a snowball effect. Seeing others like, comment, and share subconsciously influences people to do the same.[23]

Social proof goes further by providing people with a form of social currency. When individuals interact with and share content, they become associated with the topics at hand. People naturally form perceptions of one another, and this pattern carries over to the digital world. As a result, many people tend to be vigilant about the viral marketing they engage with, since they want to be perceived positively. Companies and brands have the opportunity to develop social currency themselves by aligning with their target audiences and creating marketing campaigns that fit their interests or match their values.[22] The more the content aligns with a company's intended audience, the higher the chance that a campaign will go viral.
According to marketing professors Andreas Kaplan and Michael Haenlein, to make viral marketing work, three basic criteria must be met, i.e., giving the right message to the right messengers in the right environment.[27] Whereas Kaplan, Haenlein and others reduce the role of marketers to crafting the initial viral message and seeding it, futurist and sales and marketing analyst Marc Feldman, who conducted IMT Strategies' viral marketing study in 2001,[citation needed] carves out a different role for marketers, one which pushes the 'art' of viral marketing much closer to 'science'.[29]

To clarify and organize the information related to potential measures of viral campaigns, the key measurement possibilities should be considered in relation to the objectives formulated for the viral campaign. In this sense, some of the key cognitive outcomes of viral marketing activities can include measures such as the number of views, clicks, and hits for specific content, as well as the number of shares in social media, such as likes on Facebook or retweets on Twitter, which demonstrate that consumers processed the information received through the marketing message. Measures such as the number of reviews for a product or the number of members for a campaign web page quantify the number of individuals who have acknowledged the information provided by marketers. Besides statistics that are related to online traffic, surveys can assess the degree of product or brand knowledge, though this type of measurement is more complicated and requires more resources.[30][31]

Related to consumers' attitudes toward a brand or even toward the marketing communication, different online and social media statistics, including the number of likes and shares within a social network, can be used. The number of reviews for a certain brand or product and the quality assessed by users are indicators of attitudes. Classical measures of consumer attitude toward the brand can be gathered through surveys of consumers.

Behavioral measures are very important because changes in consumers' behavior and buying decisions are what marketers hope to see through viral campaigns. There are numerous indicators that can be used in this context as a function of marketers' objectives. Some of them include the best-known online and social media statistics such as the number and quality of shares, views, product reviews, and comments. Consumers' brand engagement can be measured through the K-factor, the number of followers, friends, registered users, and time spent on the website. Indicators that are more bottom-line oriented focus on consumers' actions after acknowledging the marketing content, including the number of requests for information, samples, or test-drives. Responses to actual call-to-action messages, including the conversion rate, are also important. Consumers' behavior is expected to contribute to the bottom line of the company, meaning an increase in sales, in both quantity and financial amount. However, when quantifying changes in sales, managers need to consider other factors that could potentially affect sales besides the viral marketing activities.
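Several of the engagement indicators listed above reduce to simple ratios computed from campaign counts. The sketch below is a minimal illustration under common working definitions rather than the cited studies' exact formulas: it treats the K-factor as the average number of invitations sent per user multiplied by the share of invitations that convert into new users, and derives share and conversion rates from hypothetical totals.

```python
# Illustrative campaign metrics under common working definitions
# (assumptions for this sketch, not taken from the cited studies).

def k_factor(invites_per_user: float, invite_conversion_rate: float) -> float:
    """K-factor: average invitations sent per user times the share of
    invitations that convert into new users; a value above 1 implies
    self-sustaining growth."""
    return invites_per_user * invite_conversion_rate

def rate(events: int, audience: int) -> float:
    """Generic ratio used for share rate, click-through rate, or conversion rate."""
    return events / audience if audience else 0.0

if __name__ == "__main__":
    # Hypothetical counts for a single campaign (illustrative only).
    print(k_factor(invites_per_user=3.0, invite_conversion_rate=0.25))  # 0.75 -> below the viral threshold
    print(rate(events=4_200, audience=120_000))  # share rate of 0.035 (3.5%)
    print(rate(events=950, audience=120_000))    # conversion rate of ~0.008 (0.8%)
```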
Besides positive effects on sales, the use of viral marketing is expected to bring significant reductions in marketing costs and expenses.[32][33]

Viral marketing often involves and utilizes:

Viral target marketing is based on three important principles:[34]

By applying these three important disciplines to an advertising model, a VMS company is able to match a client with their targeted customers at a cost-effective advantage.

The Internet makes it possible for a campaign to go viral very fast; it can, so to speak, make a brand famous overnight. However, the Internet and social media technologies themselves do not make a brand viral; they just enable people to share content with other people faster. Therefore, it is generally agreed that a campaign must typically follow a certain set of guidelines in order to potentially be successful:

Wilert Puriwat and Suchart Tripopsakul, who read over countless academic journals on viral marketing, gathered their knowledge to propose what they called the "7I's of effective word-of-mouth marketing campaigns."[37] These seven I's can be used to highlight where the success of a viral marketing campaign comes from. While Puriwat and Tripopsakul's publication outlines what makes an effective campaign, there are also forewarnings that negative word-of-mouth messages about a brand or product have more power over a consumer's purchasing decision.[38] With that being said, the 7I's are as follows:

Using these seven described aspects of viral marketing, the two ran a statistical test utilizing a survey of 286 people on their thoughts about recent viral marketing efforts.[37] The questions in the survey gauged, using Likert scale questions, whether each point from the 7I's was met in the campaign, and ended with questions on brand preference and brand recognition.[37] While many conclusions were drawn from the statistical analysis, the prominent ones were based around age groups and interaction results. Puriwat and Tripopsakul found that viral marketing has been shown to be more beneficial in targeting a younger demographic than an older audience. They also found that consumers who partook in any interaction with a brand's viral marketing campaign more often than not had a positive increase in their perception of that brand.[37]

The growth of social networks significantly contributed to the effectiveness of viral marketing.[40] As of 2009, two-thirds of the world's Internet population visits a social networking service or blog site at least every week.[41] Facebook alone has over 1 billion active users.[42] In 2009, time spent visiting social media sites began to exceed time spent emailing.[43] A 2010 study found that 52% of people who view news online forward it on through social networks, email, or posts.[44]

The introduction of social media has caused a change in how viral marketing is used and the speed at which information is spread and users interact.[45] This has prompted many companies to use social media as a way to market themselves and their products, with Elsamari Botha and Mignon Reyneke stating that viral messages are "playing an increasingly important role in influencing and shifting public opinion on corporate reputations, brands, and products as well as political parties and public personalities to name but a few."[45]

In business, it is indicated that people prefer interaction with humans to a logo.[46] Influencers build up a relationship between a brand and their customers.
Companies would be left behind if they neglected the trend of influencers in viral marketing, as over 60% of global brands used influencers in marketing in 2016.[47] Influencer types correlate with the level of customers' involvement in companies' marketing.[48] First, unintentional influencers:[49][48] because of brand satisfaction and low involvement, their action is simply to deliver a company's message to a potential user.[50] Second, users can become salesmen or promoters for a particular company when given incentives.[49][48] For example, ICQ offered its users benefits to promote a product to their friends. A recent trend in business is to offer incentives to individual users for re-posting an advertisement's message to their own profiles.

Marketers and agencies commonly consider celebrities to be good influencers for endorsement work. This conception is similar to celebrity marketing. Based on a survey, 69% of company marketing departments and 74% of agencies are currently working with celebrities in the UK. The types of celebrity used vary with their working environment. Traditional celebrities are considered singers, dancers, actors or models. These types of public figures continue to be the most commonly used by company marketers. The survey found that 4 in 10 companies had worked with these traditional celebrities in the prior year. However, people nowadays are spending more time on social media than on traditional media such as TV. The researchers also claim that customers are not firmly convinced that celebrities are effectively influential.[51][52]

Social media stars such as YouTuber Zoella or Instagrammer Aimee Song are followed by millions of people online. Online celebrities have connection and influence with their followers because they frequently and realistically converse and interact on the Internet through comments or likes.[53]

This trend has been captured by marketers who use it to explore new potential customers. Agencies are placing social media stars alongside singers and musicians at the top of the heap of celebrity types they have worked with, and more than 28% of company marketers had worked with a social media celebrity in the previous year.[52]

Using influencers in viral marketing provides companies several benefits. First, it enables companies to spend little time and budget on their marketing communication and brand awareness promotion.[54] For example, Alberto Zanot, during the 2006 FIFA Football World Cup, shared Zinedine Zidane's headbutt against Italy and engaged more than 1.5 million viewers within the very first hour. Second, it enhances the credibility of messages.[55][56][57][58][59] These trust-based relationships grab the audience's attention, create customer demand, increase sales and loyalty, or simply drive customers' attitudes and behavior.[57][58] In the case of Coke, Millennials changed their minds about the product, from a parents' drink to a beverage for teens.[60] It addressed Millennials' social needs through 'sharing a Coke' with their friends. This created a deep connection with Gen Y, dramatically increased sales (+11% compared with the previous year) and grew market share (+1.6%).[60]

There is no doubt that harnessing influencers can be a lucrative business for both companies and influencers.[61] The concept of an 'influencer' is no longer just an 'expert' but also anyone who delivers an influence on the credibility of a message (e.g.
blogger).[56] In 2014, BritMums, a network sharing families' daily lives, had 6,000 bloggers, averaging 11,300 views per month,[61][62] who became endorsers for particular brands such as Coca-Cola and Morrison. In another case, Aimee Song, who had over 3.6 million followers on Instagram, became one of Laura Mercier's social media influencers, gaining $500,000 monthly.[61]

The decision-making process seems to be hard for customers these days. Miller (1956) argued that people are constrained by the limits of short-term memory.[63] This links to difficulties in customers' decision-making process and the Paradox of Choice,[64] as they face various adverts and newspapers daily.[65] Influencers serve as a credible source for customers' decision-making process.[56][50] Nielsen reported that 80% of consumers appreciated a recommendation from their acquaintances,[60] as they have reason to trust friends who deliver messages without any benefit to themselves[60] and who help them reduce the perceived risks behind choices.[66][67]

The main risk coming from the company is that it targets the wrong influencer or segment. Once the content is online, the sender will not be able to control it anymore.[68] It is therefore vital to aim at a particular segment when releasing the message. This is what happened to the company BlendTech, which released videos showing that its blender could blend anything, and encouraged users to share videos. This mainly caught the attention of teenage boys who thought it funny to blend and destroy anything they could;[69] even though the videos went viral, they did not target potential buyers of the product. This is considered to be one of the major factors that affects the success of an online promotion; it is critical for organisations to target the right audience. Another risk with the Internet is that a company's video could end up going viral on the other side of the planet where its products are not even for sale.[70]

According to a paper by Duncan Watts and colleagues entitled "Everyone's an influencer",[71] the most common risk in viral marketing is that of the influencer not passing on the message, which can lead to the failure of the viral marketing campaign. A second risk is that the influencer modifies the content of the message. A third risk is that influencers pass on the wrong message. This can result from a misunderstanding or as a deliberate move.

Between 1996 and 1997, Hotmail was one of the first internet businesses to become extremely successful utilizing viral marketing techniques, by inserting the tagline "Get your free e-mail at Hotmail" at the bottom of every e-mail sent out by its users. Hotmail was able to sign up 12 million users in 18 months.[72] At the time, this was historically the fastest growth of any user-based media company.[73] By the time Hotmail reached 66 million users, the company was establishing 270,000 new accounts each day.[73]

On March 6, 2012, Dollar Shave Club launched their online video campaign. In the first 48 hours of the video debuting on YouTube, over 12,000 people signed up for the service. The video cost just $4,500 to make and as of November 2015 had received more than 21 million views. The video was considered one of the best viral marketing campaigns of 2012 and won "Best Out-of-Nowhere Video Campaign" at the 2012 AdAge Viral Video Awards.

During the 2013 Super Bowl, the Mercedes-Benz Superdome suffered a massive power outage. Oreo took advantage of the power outage and created a viral marketing campaign, incorporating a black and white image of an Oreo.
The image included text that stated, "You can still dunk in the dark," along with a caption that read, "No Power? No problem." Oreo's quick thinking and clever marketing created traction and generated thousands of tweets and retweets. The tactic Oreo used to bring attention to the brand is referred to as newsjacking, in which companies piggyback on breaking news with clever marketing to bring more customers to their brand.[74]

Spotify Wrapped is a viral marketing campaign by Spotify released annually since 2016 between November 29 and December 6, allowing users to view a compilation of data about their activity on the platform over the preceding year, and inviting them to share a colorful pictorial representation of it on social media. Other brands started releasing similar features, like Apple with Apple Music Replay. In 2021, 120 million users accessed Spotify Wrapped.[75]

In June 2023, McDonald's inadvertently took advantage of viral marketing with the rollout of Grimace's Birthday Meal and, more specifically, the Grimace Shake. During its release, a popular trend emerged where people would take videos of themselves drinking the Grimace Shake and then would be found in disturbing positions with purple goo (assumed to be from the shake) splattered across them.[76] McDonald's, while not responsible for the trend itself, did eventually go on to recognize it in a Twitter post that read (as Grimace): "meee pretending i don't see the grimace shake trendd".[77] While the Grimace's Birthday campaign was already a success for McDonald's, the trend boosted sales even higher and kept them high all the way until the end of the promotion on June 29.[78]

In autumn 2019, a real estate listing for a century-old home in Lansing, Michigan went viral when the listing agent, James Pyle, used the Ghostface character from the Scream movie in marketing photos that showcased the home on Realtor.com[79] and Zillow.[80][81] The listing went live on September 27, 2019, and quickly began trending on Facebook, garnering 300,000 views in 2 days, at which point a story on the unusual popularity of the listing appeared in a local newspaper. Pyle stated that he wanted to do something fun and novel for the Halloween season while keeping the photos professional, and hired photographer Bradley Johnson to take several pictures of him dressed as Ghostface raking leaves in the backyard, preparing to carve a pumpkin in the kitchen, standing on the front and back porches, and peeking out from behind curtains and doors.[82] The following day, the story was picked up by several radio stations, including K102.5 in Kalamazoo,[83] WCRZ in Burton,[84] WOMC[85] and ALT97[86] in Detroit, as well as the Metro Times newspaper in Detroit.[87] Following the increased attention on the Zillow listing, over the next few days the story appeared on major news networks.[88][89][90][91][92][93][94][95][96] Pyle stated that a normal listing typically received under 150 views, and that his goal was to get between 500 and 1,000 views of the home.[97] However, the Zillow listing ended up receiving over 20,000 views by October 1, one million views by October 2, and exceeded 1.2 million views by October 3. It was estimated that the combined views of the listings on both sites (Zillow and Realtor.com) exceeded 5 million in 5 days. The listing received a cash offer within 4 days, and the immense popularity resulted in the home becoming overbooked during the open house and subsequent viewings.
Due to the success of the listing, Pyle was scheduled to appear on "Good Morning America" on October 2, 2019. He was quoted as saying that he didn't think he would ever be able to duplicate the success of the listing, but he planned to try some additional variations for future listings.[98][99][100] The listing continued to be popular even after the house was off the market.[101] This approach was so successful that it became a recommended practice on Realtor.com.[102]
https://en.wikipedia.org/wiki/Viral_marketing
Isaac Asimov(/ˈæzɪmɒv/AZ-im-ov;[b][c]c.January 2, 1920[a]– April 6, 1992) was an American writer and professor ofbiochemistryatBoston University. During his lifetime, Asimov was considered one of the "Big Three"science fictionwriters, along withRobert A. HeinleinandArthur C. Clarke.[2]A prolific writer, he wrote or edited more than 500 books. He also wrote an estimated 90,000 letters andpostcards.[d]Best known for hishard science fiction, Asimov also wrotemysteriesandfantasy, as well aspopular scienceand othernon-fiction. Asimov's most famous work is theFoundationseries,[3]the first three books of which won the one-timeHugo Awardfor "Best All-Time Series" in 1966.[4]His other major series are theGalactic Empireseries and theRobotseries. TheGalactic Empirenovels are set in the much earlier history of the same fictional universe as theFoundationseries. Later, withFoundation and Earth(1986), he linked this distant future to theRobotseries, creating a unified "future history" for his works.[5]He also wrotemore than 380 short stories, including thesocial science fictionnovelette "Nightfall", which in 1964 was voted the best short science fiction story of all time by theScience Fiction Writers of America. Asimov wrote theLucky Starrseries ofjuvenilescience-fiction novels using the pen name Paul French.[6] Most of his popular science books explain concepts in a historical way, going as far back as possible to a time when the science in question was at its simplest stage. Examples includeGuide to Science, the three-volumeUnderstanding Physics, andAsimov's Chronology of Science and Discovery. He wrote on numerous other scientific and non-scientific topics, such aschemistry,astronomy,mathematics,history,biblical exegesis, andliterary criticism. He was the president of theAmerican Humanist Association.[7]Several entities have been named in his honor, including theasteroid(5020) Asimov,[8]acrateronMars,[9][10]aBrooklynelementary school,[11]Honda's humanoid robotASIMO,[12]andfour literary awards. There are three very simple English words: 'Has', 'him' and 'of'. Put them together like this—'has-him-of'—and say it in the ordinary fashion. Now leave out the two h's and say it again and you have Asimov. Asimov's family name derives from the first part ofозимый хлеб(ozímyj khleb), meaning 'winter grain' (specificallyrye) in which his great-great-great-grandfather dealt, with the Russian surname ending-ovadded.[14]Azimov is spelledАзимовin theCyrillic alphabet.[1]When the family arrived in the United States in 1923 and their name had to be spelled in theLatin alphabet, Asimov's father spelled it with an S, believing this letter to be pronounced like Z (as in German), and so it became Asimov.[1]This later inspired one of Asimov's short stories, "Spell My Name with an S".[15] Asimov refused early suggestions of using a more common name as a pseudonym, believing that its recognizability helped his career. After becoming famous, he often met readers who believed that "Isaac Asimov" was a distinctive pseudonym created by an author with a common name.[16] Asimov was born inPetrovichi,Russian SFSR,[17]on an unknown date between October 4, 1919, and January 2, 1920, inclusive. 
Asimov celebrated his birthday on January 2.[a] Asimov's parents wereRussian Jews, Anna Rachel (née Berman) and Judah Asimov, the son of a miller.[18]He was named Isaac after his mother's father, Isaac Berman.[19]Asimov wrote of his father, "My father, for all his education as anOrthodox Jew, was not Orthodox in his heart", noting that "he didn't recite themyriad prayers prescribed for every action, and he never made any attempt to teach them to me."[20] In 1921, Asimov and 16 other children in Petrovichi developeddouble pneumonia. Only Asimov survived.[21]He had two younger siblings: a sister, Marcia (born Manya;[22]June 17, 1922 – April 2, 2011),[23]and a brother,Stanley(July 25, 1929 – August 16, 1995), who would become vice-president ofNewsday.[24][25] Asimov's family travelled to the United States via Liverpool on theRMSBaltic, arriving on February 3, 1923[26]when he was three years old. His parents spokeYiddishand English to him; he never learnedRussian, his parents using it as a secret language "when they wanted to discuss something privately that my big ears were not to hear".[27][28]Growing up inBrooklyn,New York, Asimov taught himself to read at the age of five (and later taught his sister to read as well, enabling her to enter school in thesecond grade).[29]His mother got him intofirst gradea year early by claiming he was born on September 7, 1919.[30][31]In third grade he learned about the "error" and insisted on an official correction of the date to January 2.[32]He became anaturalizedU.S. citizen in 1928 at the age of eight.[33] After becoming established in the U.S., his parents owned a succession ofcandy storesin which everyone in the family was expected to work. The candy stores sold newspapers and magazines, which Asimov credited as a major influence in his lifelong love of the written word, as it presented him as a child with an unending supply of new reading material (including pulpscience fiction magazines)[34]that he could not have otherwise afforded. Asimov began reading science fiction at age nine, at the time that the genre was becoming more science-centered.[35]Asimov was also a frequent patron of theBrooklyn Public Libraryduring his formative years.[36] Asimov attended New York City public schools from age five, includingBoys High SchoolinBrooklyn.[37]Graduating at 15, he attended theCity College of New Yorkfor several days before accepting a scholarship atSeth Low Junior College. This was a branch ofColumbia UniversityinDowntown Brooklyndesigned to absorb some of the academically qualified Jewish andItalian-Americanstudents who applied to the more prestigiousColumbia Collegebut exceeded the unwritten ethnicadmission quotaswhich were common at the time. Originally azoologymajor, Asimov switched tochemistryafter his first semester because he disapproved of "dissecting an alley cat". After Seth Low Junior College closed in 1936, Asimov finished hisBachelor of Sciencedegree at Columbia's Morningside Heights campus (later theColumbia University School of General Studies)[38]in 1939. (In 1983, Dr. 
Robert Pollack (dean of Columbia College, 1982–1989) granted Asimov an honorary doctorate from Columbia College after requiring that Asimov place his foot in a bucket of water to pass the college's swimming requirement.[39]) After two rounds of rejections by medical schools, Asimov applied to the graduate program in chemistry at Columbia in 1939; initially he was rejected and then only accepted on a probationary basis.[40]He completed hisMaster of Artsdegree in chemistry in 1941 and earned aDoctor of Philosophydegree in chemistry in 1948.[e][45][46]During his chemistry studies, he also learned French and German.[47] From 1942 to 1945 duringWorld War II, between his masters and doctoral studies, Asimov worked as a civilian chemist at thePhiladelphia Navy Yard's Naval Air Experimental Station and lived in theWalnut Hillsection ofWest Philadelphia.[48][49]In September 1945, he was conscripted into the post-warU.S. Army; if he had not had his birth date corrected while at school, he would have been officially 26 years old and ineligible.[50]In 1946, a bureaucratic error caused his military allotment to be stopped, and he was removed from a task force days before it sailed to participate inOperation Crossroadsnuclear weapons tests atBikini Atoll.[51]He was promoted tocorporalon July 11 before receiving anhonorable dischargeon July 26, 1946.[52][f] After completing his doctorate and apostdoctoralyear withRobert Elderfield,[54]Asimov was offered the position ofassociate professorofbiochemistryat theBoston University School of Medicine. This was in large part due to his years-long correspondence withWilliam Boyd, a former associate professor of biochemistry at Boston University, who initially contacted Asimov to compliment him on his storyNightfall.[55]Upon receiving a promotion to professor ofimmunochemistry, Boyd reached out to Asimov, requesting him to be his replacement. The initial offer of professorship was withdrawn and Asimov was offered the position of instructor of biochemistry instead, which he accepted.[56]He began work in 1949 with a $5,000 salary[57](equivalent to $66,000 in 2024), maintaining this position for several years.[58]By 1952, however, he was making more money as a writer than from the university, and he eventually stopped doing research, confining his university role to lecturing students.[g]In 1955, he was promoted totenuredassociate professor. In December 1957, Asimov was dismissed from his teaching post, with effect from June 30, 1958, due to his lack of research. After a struggle over two years, he reached an agreement with the university that he would keep his title[60]and give the opening lecture each year for a biochemistry class.[61]On October 18, 1979, the university honored his writing by promoting him to full professor of biochemistry.[62]Asimov's personal papers from 1965 onward are archived at the university'sMugar Memorial Library, to which he donated them at the request of curator Howard Gotlieb.[63][64] In 1959, after a recommendation fromArthur Obermayer, Asimov's friend and a scientist on theU.S. missile defenseproject, Asimov was approached byDARPAto join Obermayer's team. 
Asimov declined on the grounds that his ability to write freely would be impaired should he receiveclassified information, but submitted a paper to DARPA titled "On Creativity"[65]containing ideas on how government-based science projects could encourage team members to think more creatively.[66] Asimov met his first wife, Gertrude Blugerman (May 16, 1917,Toronto, Canada[67]– October 17, 1990,Boston, U.S.[68]), on ablind dateon February 14, 1942, and married her on July 26.[69]The couple lived in an apartment inWest Philadelphiawhile Asimov was employed at the Philadelphia Navy Yard (where two of his co-workers wereL. Sprague de CampandRobert A. Heinlein). Gertrude returned to Brooklyn while he was in the Army, and they both lived there from July 1946 before moving toStuyvesant Town,Manhattan, in July 1948. They moved toBostonin May 1949, then to nearby suburbsSomervillein July 1949,Walthamin May 1951, and, finally,West Newtonin 1956.[70]They had two children, David (born 1951) and Robyn Joan (born 1955).[71]In 1970, they separated and Asimov moved back to New York, this time to theUpper West Sideof Manhattan where he lived for the rest of his life.[72]He began seeingJanet O. Jeppson, a psychiatrist and science-fiction writer, and married her on November 30, 1973,[73]two weeks after his divorce from Gertrude.[74] Asimov was aclaustrophile: he enjoyed small, enclosed spaces.[75][h]In the third volume of his autobiography, he recalls a childhood desire to own a magazine stand in aNew York City Subwaystation, within which he could enclose himself and listen to the rumble of passing trains while reading.[76] Asimov wasafraid of flying, doing so only twice: once in the course of his work at the Naval Air Experimental Station and once returning home fromOʻahuin 1946. Consequently, he seldom traveled great distances. This phobia influenced several of his fiction works, such as theWendell Urthmystery stories and theRobotnovels featuringElijah Baley. In his later years, Asimov found enjoyment traveling oncruise ships, beginning in 1972 when he viewed theApollo 17launch from acruise ship.[77]On several cruises, he was part of the entertainment program, giving science-themed talks aboard ships such as theQueen Elizabeth 2.[78]He sailed to England in June 1974 on theSSFrancefor a trip mostly devoted to lectures in London and Birmingham,[79]though he also found time to visitStonehenge[80]and Shakespeare's birthplace.[81] Asimov was ateetotaler.[83] He was an able public speaker and was regularly invited to give talks about science in his distinctNew York accent. He participated in manyscience fiction conventions, where he was friendly and approachable.[78]He patiently answered tens of thousands of questions and other mail with postcards and was pleased to give autographs. He was of medium height, 5 ft 9 in (1.75 m)[84]and stocky build. In his later years, he adopted a signature style of "mutton-chop"sideburns.[85][86]He took to wearingbolo tiesafter his wife Janet objected to his clip-on bow ties.[87]He never learned to swim or ride a bicycle, but did learn to drive a car after he moved to Boston. 
In his humor bookAsimov Laughs Again, he describes Boston driving as "anarchy on wheels".[88] Asimov's wide interests included his participation in later years in organizations devoted to thecomic operasofGilbert and Sullivan.[78]Many of his short stories mention or quote Gilbert and Sullivan.[89]He was a prominent member ofThe Baker Street Irregulars, the leadingSherlock Holmessociety,[78]for whom he wrote an essay arguing that Professor Moriarty's work "The Dynamics of An Asteroid" involved the willful destruction of an ancient, civilized planet. He was also a member of the male-only literary banqueting club theTrap Door Spiders, which served as the basis of his fictional group of mystery solvers, theBlack Widowers.[90]He later used his essay on Moriarty's work as the basis for a Black Widowers story, "The Ultimate Crime", which appeared inMore Tales of the Black Widowers.[91][92] In 1984, theAmerican Humanist Association(AHA) named him the Humanist of the Year. He was one of the signers of theHumanist Manifesto.[93]From 1985 until his death in 1992, he served as honorary president of the AHA, and was succeeded by his friend and fellow writerKurt Vonnegut. He was also a close friend ofStar TrekcreatorGene Roddenberry, and earned a screen credit as "special science consultant" onStar Trek: The Motion Picturefor his advice during production.[94] Asimov was a founding member of the Committee for the Scientific Investigation of Claims of the Paranormal, CSICOP (now theCommittee for Skeptical Inquiry)[95]and is listed in its Pantheon of Skeptics.[96]In a discussion withJames RandiatCSICon 2016regarding the founding of CSICOP,Kendrick Fraziersaid that Asimov was "a key figure in theSkeptical movementwho is less well known and appreciated today, but was very much in the public eye back then." He said that Asimov's being associated with CSICOP "gave it immense status and authority" in his eyes.[97]: 13:00 Asimov describedCarl Saganas one of only two people he ever met whose intellect surpassed his own. The other, he claimed, was thecomputer scientistandartificial intelligenceexpertMarvin Minsky.[98]Asimov was an on-and-off member and honorary vice president ofMensa International, albeit reluctantly;[99]he described some members of that organization as "brain-proud and aggressive about their IQs".[100][i] After his father died in 1969, Asimov annually contributed to a Judah Asimov Scholarship Fund atBrandeis University.[103] In 2006, he was named byCarnegie Corporation of New Yorkto the inaugural class of winners of theGreat Immigrants Award.[104] In 1977, Asimov had aheart attack. In December 1983, he hadtriple bypass surgeryat NYU Medical Center, during which he contractedHIVfrom ablood transfusion.[105]His HIV status was kept secret out of concern that theanti-AIDS prejudicemight extend to his family members.[106] He died in Manhattan on April 6, 1992, and was cremated.[107]The cause of death was reported as heart andkidney failure.[108][109][110]Ten years following Asimov's death, Janet and Robyn Asimov agreed that the HIV story should be made public; Janet revealed it in her edition of his autobiography,It's Been a Good Life.[105][110][106][111] [T]he only thing about myself that I consider to be severe enough to warrant psychoanalytic treatment is my compulsion to write ... That means that my idea of a pleasant time is to go up to my attic, sit at my electric typewriter (as I am doing right now), and bang away, watching the words take shape like magic before my eyes. 
Asimov's career can be divided into several periods. His early career, dominated by science fiction, began with short stories in 1939 and novels in 1950. This lasted until about 1958, all but ending after publication ofThe Naked Sun(1957). He began publishing nonfiction as co-author of a college-level textbook calledBiochemistry and Human Metabolism. Following the brief orbit of the first human-made satelliteSputnik Iby the USSR in 1957, he wrote more nonfiction, particularlypopular sciencebooks, and less science fiction. Over the next quarter-century, he wrote only four science fiction novels, and 120 nonfiction books. Starting in 1982, the second half of his science fiction career began with the publication ofFoundation's Edge. From then until his death, Asimov published several more sequels and prequels to his existing novels, tying them together in a way he had not originally anticipated, making a unified series. There are many inconsistencies in this unification, especially in his earlier stories.[113]DoubledayandHoughton Mifflinpublished about 60% of his work up to 1969, Asimov stating that "both represent a father image".[61] Asimov believed his most enduring contributions would be his "Three Laws of Robotics" and theFoundationseries.[114]TheOxford English Dictionarycredits his science fiction for introducing into the English language the words "robotics", "positronic" (an entirely fictional technology), and "psychohistory" (which is also used for adifferent studyon historical motivations). Asimov coined the term "robotics" without suspecting that it might be an original word; at the time, he believed it was simply the natural analogue of words such asmechanicsandhydraulics, but forrobots. Unlike his word "psychohistory", the word "robotics" continues in mainstream technical use with Asimov's original definition.Star Trek: The Next Generationfeaturedandroidswith "positronic brains" and the first-season episode "Datalore" called the positronic brain "Asimov's dream".[115] Asimov was so prolific and diverse in his writing that his books span all major categories of theDewey Decimal Classificationexcept for category 100,philosophyandpsychology.[116]However, he wrote several essays about psychology,[117]and forewords for the booksThe Humanist Way(1988) andIn Pursuit of Truth(1982),[118]which were classified in the 100s category, but none of his own books were classified in that category.[116] According toUNESCO'sIndex Translationum database, Asimov is the world's 24th-most-translated author.[119] No matter how various the subject matter I write on, I was a science-fiction writer first and it is as a science-fiction writer that I want to be identified. Asimov became a science fiction fan in 1929,[121]when he began reading thepulp magazinessold in his family's candy store.[122]At first his father forbade reading pulps until Asimov persuaded him that because thescience fiction magazineshad "Science" in the title, they must be educational.[123]At age 18 he joined theFuturiansscience fiction fan club, where he made friends who went on to become science fiction writers or editors.[124] Asimov began writing at the age of 11, imitatingThe Rover Boyswith eight chapters ofThe Greenville Chums at College. His father bought him a used typewriter at age 16.[61]His first published work was a humorous item on the birth of his brother for Boys High School's literary journal in 1934. 
In May 1937 he first thought of writing professionally, and began writing his first science fiction story, "Cosmic Corkscrew" (now lost), that year. On May 17, 1938, puzzled by a change in the schedule ofAstounding Science Fiction, Asimov visited its publisherStreet & Smith Publications. Inspired by the visit, he finished the story on June 19, 1938, and personally submitted it toAstoundingeditorJohn W. Campbelltwo days later. Campbell met with Asimov for more than an hour and promised to read the story himself. Two days later he received a detailed rejection letter.[121]This was the first of what became almost weekly meetings with the editor while Asimov lived in New York, until moving to Boston in 1949;[57]Campbell had a strong formative influence on Asimov and became a personal friend.[125] By the end of the month, Asimov completed a second story, "Stowaway". Campbell rejected it on July 22 but—in "the nicest possible letter you could imagine"—encouraged him to continue writing, promising that Asimov might sell his work after another year and a dozen stories of practice.[121]On October 21, 1938, he sold the third story he finished, "Marooned Off Vesta", toAmazing Stories, edited byRaymond A. Palmer, and it appeared in the March 1939 issue. Asimov was paid $64 (equivalent to $1,430 in 2024), or one cent a word.[61][126]Two more stories appeared that year, "The Weapon Too Dreadful to Use" in the MayAmazingand "Trends" in the JulyAstounding, the issue fans later selected as the start of theGolden Age of Science Fiction.[16]For 1940,ISFDBcatalogs seven stories in four different pulp magazines, including one inAstounding.[127]His earnings became enough to pay for his education, but not yet enough for him to become a full-time writer.[126] He later said that unlike other Golden Age writers Heinlein andA. E. van Vogt—also first published in 1939, and whose talent and stardom were immediately obvious—Asimov "(this is not false modesty) came up only gradually".[16]Through July 29, 1940, Asimov wrote 22 stories in 25 months, of which 13 were published; he wrote in 1972 that from that date he never wrote a science fiction story that was not published (except for two "special cases"[j]).[130]By 1941 Asimov was famous enough thatDonald Wollheimtold him that he purchased "The Secret Sense" for a new magazine only because of his name,[131]and the December 1940 issue ofAstonishing—featuring Asimov's name in bold—was the first magazine to basecover arton his work,[132]but Asimov later said that neither he nor anyone else—except perhaps Campbell—considered him better than an often published "third rater".[133] Based on a conversation with Campbell, Asimov wrote "Nightfall", his 32nd story, in March and April 1941, andAstoundingpublished it in September 1941. In 1968 theScience Fiction Writers of Americavoted "Nightfall" the best science fiction short story ever written.[108][133]InNightfall and Other StoriesAsimov wrote, "The writing of 'Nightfall' was a watershed in my professional career ... I was suddenly taken seriously and the world of science fiction became aware that I existed. 
As the years passed, in fact, it became evident that I had written a 'classic'."[134]"Nightfall" is an archetypal example ofsocial science fiction, a term he created to describe a new trend in the 1940s, led by authors including him and Heinlein, away fromgadgetsandspace operaand toward speculation about thehuman condition.[135] After writing "Victory Unintentional" in January and February 1942, Asimov did not write another story for a year. He expected to make chemistry his career, and was paid $2,600 annually at the Philadelphia Navy Yard, enough to marry his girlfriend; he did not expect to make much more from writing than the $1,788.50 he had earned from the 28 stories he had already sold over four years. Asimov left science fiction fandom and no longer read new magazines, and might have left the writing profession had not Heinlein and de Camp been his coworkers at the Navy Yard and previously sold stories continued to appear.[136] In 1942, Asimov published the first of hisFoundationstories—later collected in theFoundationtrilogy:Foundation(1951),Foundation and Empire(1952), andSecond Foundation(1953). The books describe the fall of a vastinterstellar empireand the establishment of its eventual successor. They feature his fictional science ofpsychohistory, whose theories could predict the future course of history according to dynamical laws regarding the statistical analysis of mass human actions.[137] Campbell raised his rate per word,Orson Wellespurchased rights to "Evidence", and anthologies reprinted his stories. By the end of the war Asimov was earning as a writer an amount equal to half of his Navy Yard salary, even after a raise, but Asimov still did not believe that writing could support him, his wife, and future children.[138][139] His"positronic" robot stories—many of which were collected inI, Robot(1950)—were begun at about the same time. They promulgated a set of rules ofethicsfor robots (seeThree Laws of Robotics) and intelligent machines that greatly influenced other writers and thinkers in their treatment of the subject. Asimov notes in his introduction to the short story collectionThe Complete Robot(1982) that he was largely inspired by the tendency of robots up to that time to fall consistently into aFrankensteinplot in which they destroyed their creators. TheRobotseries has led to film adaptations. With Asimov's collaboration, in about 1977,Harlan Ellisonwrote a screenplay ofI, Robotthat Asimov hoped would lead to "the first really adult, complex, worthwhilescience fiction filmever made". The screenplay has never been filmed and was eventually published in book form in 1994. The 2004 movieI, Robot, starringWill Smith, was based on an unrelated script byJeff VintartitledHardwired, with Asimov's ideas incorporated later after the rights to Asimov's title were acquired.[140](The title was not original to Asimov but had previously been used fora storybyEando Binder.) Also, one of Asimov's robot short stories, "The Bicentennial Man", was expanded into a novelThe Positronic Manby Asimov andRobert Silverberg, and this was adapted into the 1999 movieBicentennial Man, starringRobin Williams.[94] In 1966 theFoundationtrilogy won theHugo Awardfor the all-time best series of science fiction and fantasy novels,[141]and they along with theRobotseriesare his most famous science fiction. 
Besides movies, hisFoundationandRobotstories have inspired other derivative works of science fiction literature, many by well-known and established authors such asRoger MacBride Allen,Greg Bear,Gregory Benford,David Brin, andDonald Kingsbury. At least some of these appear to have been done with the blessing of, or at the request of, Asimov's widow,Janet Asimov.[142][143][144] In 1948, he also wrote a spoof chemistry article, "The Endochronic Properties of Resublimated Thiotimoline". At the time, Asimov was preparing his own doctoraldissertation, which would include an oral examination. Fearing a prejudicial reaction from his graduate school evaluation board atColumbia University, Asimov asked his editor that it be released under a pseudonym. When it nevertheless appeared under his own name, Asimov grew concerned that his doctoral examiners might think he wasn't taking science seriously. At the end of the examination, one evaluator turned to him, smiling, and said, "What can you tell us, Mr. Asimov, about the thermodynamic properties of the compound known as thiotimoline". Laughing hysterically with relief, Asimov had to be led out of the room. After a five-minute wait, he was summoned back into the room and congratulated as "Dr. Asimov".[145] Demand for science fiction greatly increased during the 1950s, making it possible for a genre author to write full-time.[146]In 1949, book publisherDoubleday's science fiction editor Walter I. Bradbury accepted Asimov's unpublished "Grow Old with Me" (40,000 words), but requested that it be extended to a full novel of 70,000 words. The book appeared under the Doubleday imprint in January 1950 with the title ofPebble in the Sky.[57]Doubleday published five more original science fiction novels by Asimov in the 1950s, along with the six juvenileLucky Starr novels, the latter under the pseudonym "Paul French".[147]Doubleday also published collections of Asimov's short stories, beginning withThe Martian Way and Other Storiesin 1955. The early 1950s also sawGnome Presspublish one collection of Asimov's positronic robot stories asI, Robotand hisFoundationstories and novelettes as the three books of theFoundation trilogy. More positronic robot stories were republished in book form asThe Rest of the Robots. Book publishers and the magazinesGalaxyandFantasy & Science Fictionended Asimov's dependence onAstounding. He later described the era as his "'mature' period". Asimov's "The Last Question" (1956), on the ability of humankind to cope with and potentially reverse the process ofentropy, was his personal favorite story.[148] In 1972, his stand-alone novelThe Gods Themselveswas published to general acclaim, winning Best Novel in theHugo,[149]Nebula,[149]andLocusAwards.[150] In December 1974, formerBeatlePaul McCartneyapproached Asimov and asked him to write the screenplay for a science-fiction movie musical. McCartney had a vague idea for the plot and a small scrap of dialogue, about a rock band whose members discover they are being impersonated by extraterrestrials. The band and their impostors would likely be played by McCartney's groupWings, then at the height of their career. Though not generally a fan of rock music, Asimov was intrigued by the idea and quickly produced a treatment outline of the story adhering to McCartney's overall idea but omitting McCartney's scrap of dialogue. 
McCartney rejected it, and the treatment now exists only in the Boston University archives.[151]
Asimov said in 1969 that he had "the happiest of all my associations with science fiction magazines" with Fantasy & Science Fiction; "I have no complaints about Astounding, Galaxy, or any of the rest, heaven knows, but F&SF has become something special to me".[152] Beginning in 1977, Asimov lent his name to Isaac Asimov's Science Fiction Magazine (now Asimov's Science Fiction) and wrote an editorial for each issue. There was also a short-lived Asimov's SF Adventure Magazine and a companion Asimov's Science Fiction Anthology reprint series, published as magazines (in the same manner as the stablemates Ellery Queen's Mystery Magazine's and Alfred Hitchcock's Mystery Magazine's "anthologies").[153]
Under pressure from fans to write another book in his Foundation series,[58] he did so with Foundation's Edge (1982) and Foundation and Earth (1986), and then went back to before the original trilogy with Prelude to Foundation (1988) and Forward the Foundation (1993), his last novel. He also helped Leonard Nimoy flesh out the premise of the science fiction comic Primortals (1995–1997).[154]
Just say I am one of the most versatile writers in the world, and the greatest popularizer of many subjects.
Asimov and two colleagues published a textbook in 1949, with two more editions by 1969.[61] During the late 1950s and 1960s, Asimov substantially decreased his fiction output (he published only four adult novels between 1957's The Naked Sun and 1982's Foundation's Edge, two of which were mysteries). He greatly increased his nonfiction production, writing mostly on science topics; the launch of Sputnik in 1957 engendered public concern over a "science gap".[155] Asimov explained in The Rest of the Robots that he had been unable to write substantial fiction since the summer of 1958, and observers understood him as saying that his fiction career had ended, or was permanently interrupted.[156] Asimov recalled in 1969 that "the United States went into a kind of tizzy, and so did I. I was overcome by the ardent desire to write popular science for an America that might be in great danger through its neglect of science, and a number of publishers got an equally ardent desire to publish popular science for the same reason".[157]
Fantasy and Science Fiction invited Asimov to continue his regular nonfiction column, begun in the now-folded bimonthly companion magazine Venture Science Fiction Magazine. The first of 399 monthly F&SF columns appeared in November 1958 and they continued until his terminal illness.[158][k] These columns, periodically collected into books by Doubleday,[61] gave Asimov a reputation as a "Great Explainer" of science; he described them as his only popular science writing in which he never had to assume complete ignorance of the subjects on the part of his readers.
The column was ostensibly dedicated to popular science but Asimov had complete editorial freedom, and wrote about contemporary social issues[citation needed]in essays such as "Thinking About Thinking"[159]and "Knock Plastic!".[160]In 1975 he wrote of these essays: "I get more pleasure out of them than out of any other writing assignment."[161] Asimov's first wide-ranging reference work,The Intelligent Man's Guide to Science(1960), was nominated for aNational Book Award, and in 1963 he won aHugo Award—his first—for his essays forF&SF.[162]The popularity of his science books and the income he derived from them allowed him to give up most academic responsibilities and become a full-timefreelance writer.[163]He encouraged other science fiction writers to write popular science, stating in 1967 that "the knowledgeable, skillful science writer is worth his weight in contracts", with "twice as much work as he can possibly handle".[164] The great variety of information covered in Asimov's writings promptedKurt Vonnegutto ask, "How does it feel to know everything?" Asimov replied that he only knew how it felt to have the 'reputation' of omniscience: "Uneasy".[165]Floyd C. Galesaid that "Asimov has a rare talent. He can make your mental mouth water over dry facts",[166]and "science fiction's loss has been science popularization's gain".[167]Asimov said that "Of all the writing I do, fiction, non-fiction, adult, or juvenile, theseF & SFarticles are by far the most fun".[168]He regretted, however, that he had less time for fiction—causing dissatisfied readers to send him letters of complaint—stating in 1969 that "In the last ten years, I've done a couple of novels, some collections, a dozen or so stories, but that'snothing".[157] In his essay "To Tell a Chemist" (1965), Asimov proposed a simpleshibbolethfor distinguishing chemists from non-chemists: ask the person to read the word "unionized". Chemists, he noted, will readun-ionized(electrically neutral), while non-chemists will readunion-ized(belonging to a trade union). Asimov coined the term "robotics" in his 1941 story "Liar!",[169]though he later remarked that he believed then that he was merely using an existing word, as he stated inGold("The Robot Chronicles"). While acknowledging the Oxford Dictionary reference, he incorrectly states that the word was first printed about one third of the way down the first column of page 100 in the March 1942 issue ofAstounding Science Fiction– the printing of his short story "Runaround".[170][171] In the same story, Asimov also coined the term "positronic" (the counterpart to "electronic" forpositrons).[172] Asimov coined the term "psychohistory" in hisFoundationstories to name a fictional branch of science which combineshistory,sociology, andmathematical statisticsto make general predictions about the future behavior of very large groups of people, such as theGalactic Empire. Asimov said later that he should have called it psychosociology. It was first introduced in the five short stories (1942–1944) which would later be collected as the 1951fix-upnovelFoundation.[173]Somewhat later, the term "psychohistory" was applied by others to research of the effects of psychology on history.[174][175] In addition to his interest in science, Asimov was interested in history. 
Starting in the 1960s, he wrote 14 popular history books, includingThe Greeks: A Great Adventure(1965),[176]The Roman Republic(1966),[177]The Roman Empire(1967),[178]The Egyptians(1967)[179]The Near East: 10,000 Years of History(1968),[180]andAsimov's Chronology of the World(1991).[181] He publishedAsimov's Guide to the Biblein two volumes—covering theOld Testamentin 1967 and theNew Testamentin 1969—and then combined them into one 1,300-page volume in 1981. Complete with maps and tables, the guide goes through the books of the Bible in order, explaining the history of each one and the political influences that affected it, as well as biographical information about the important characters. His interest in literature manifested itself in several annotations of literary works, includingAsimov's Guide to Shakespeare(1970),[l]Asimov's Annotated Don Juan(1972),Asimov's Annotated Paradise Lost(1974), andThe Annotated Gulliver's Travels(1980).[182] Asimov was also a noted mystery author and a frequent contributor toEllery Queen's Mystery Magazine. He began by writing science fiction mysteries such as his Wendell Urth stories, but soon moved on to writing "pure" mysteries. He published two full-length mystery novels, and wrote 66 stories about theBlack Widowers, a group of men who met monthly for dinner, conversation, and a puzzle. He got the idea for the Widowers from his own association in a stag group called the Trap Door Spiders, and all of the main characters (with the exception of the waiter, Henry, who he admitted resembled Wodehouse's Jeeves) were modeled after his closest friends.[183]A parody of the Black Widowers, "An Evening with the White Divorcés," was written by author, critic, and librarian Jon L. Breen.[184]Asimov joked, "all I can do ... is to wait until I catch him in a dark alley, someday."[185] Toward the end of his life, Asimov published a series of collections oflimericks, mostly written by himself, starting withLecherous Limericks, which appeared in 1975.Limericks: Too Gross, whose title displays Asimov's love ofpuns, contains 144 limericks by Asimov and an equal number byJohn Ciardi. He even created a slim volume ofSherlockianlimericks. Asimov featuredYiddishhumor inAzazel, The Two Centimeter Demon. The two main characters, both Jewish, talk over dinner, or lunch, or breakfast, about anecdotes of "George" and his friend Azazel. Asimov'sTreasury of Humoris both a working joke book and a treatise propounding his views onhumor theory. According to Asimov, the most essential element of humor is an abrupt change in point of view, one that suddenly shifts focus from the important to the trivial, or from the sublime to the ridiculous.[186][187] Particularly in his later years, Asimov to some extent cultivated an image of himself as an amiable lecher. In 1971, as a response to the popularity of sexual guidebooks such asThe Sensuous Woman(by "J") andThe Sensuous Man(by "M"), Asimov publishedThe Sensuous Dirty Old Manunder the byline "Dr. 'A'"[188](although his full name was printed on the paperback edition, first published 1972). However, by 2016, Asimov's habit of groping women was seen assexual harassmentand came under criticism, and was cited as an early example of inappropriate behavior that can occur at science fiction conventions.[189] Asimov publishedthree volumes of autobiography.In Memory Yet Green(1979)[190]andIn Joy Still Felt(1980)[191]cover his life up to 1978. The third volume,I. 
Asimov: A Memoir (1994),[192] covered his whole life (rather than following on from where the second volume left off). The epilogue was written by his widow Janet Asimov after his death. The book won a Hugo Award in 1995.[193] Janet Asimov edited It's Been a Good Life (2002),[194] a condensed version of his three autobiographies. He also published three volumes of retrospectives of his writing, Opus 100 (1969),[195] Opus 200 (1979),[196] and Opus 300 (1984).[197]
In 1987, the Asimovs co-wrote How to Enjoy Writing: A Book of Aid and Comfort. In it they offer advice on how to maintain a positive attitude and stay productive when dealing with discouragement, distractions, rejection, and thick-headed editors. The book includes many quotations, essays, anecdotes, and husband-wife dialogues about the ups and downs of being an author.[198][199]
Asimov and Star Trek creator Gene Roddenberry developed a unique relationship during Star Trek's initial launch in the late 1960s. Asimov wrote a critical essay on Star Trek's scientific accuracy for TV Guide magazine. Roddenberry retorted respectfully with a personal letter explaining the limitations of accuracy when writing a weekly series. Asimov corrected himself with a follow-up essay to TV Guide claiming that despite its inaccuracies, Star Trek was a fresh and intellectually challenging science fiction television show. The two remained friends to the point where Asimov even served as an advisor on a number of Star Trek projects.[200]
In 1973, Asimov published a proposal for calendar reform, called the World Season Calendar. It divides the year into four seasons (named A–D) of 13 weeks (91 days) each. This allows days to be named, e.g., "D-73" instead of December 1, December 1 being the 73rd day of the fourth quarter. An extra 'year day' is added for a total of 365 days.[201] (A brief conversion sketch appears a few paragraphs below.)
Asimov won more than a dozen annual awards for particular works of science fiction and a half-dozen lifetime awards.[202] He also received 14 honorary doctorate degrees from universities.[203]
I have an informal style, which means I tend to use short words and simple sentence structure, to say nothing of occasional colloquialisms. This grates on people who like things that are poetic, weighty, complex, and, above all, obscure. On the other hand, the informal style pleases people who enjoy the sensation of reading an essay without being aware that they are reading and of feeling that ideas are flowing from the writer's brain into their own without mental friction.
Asimov was his own secretary, typist, indexer, proofreader, and literary agent.[61] He composed his first drafts at the keyboard, typing 90 words per minute; he imagined an ending first, then a beginning, then "let everything in-between work itself out as I come to it". (Asimov used an outline only once, later describing it as "like trying to play the piano from inside a straitjacket".) After correcting a draft by hand, he retyped the document as the final copy, making only one revision to incorporate minor editor-requested changes; a word processor did not save him much time, Asimov said, because 95% of the first draft was unchanged.[148][234][235]
After disliking making multiple revisions of "Black Friar of the Flame", Asimov refused to make major, second, or non-editorial revisions ("like chewing used gum"), stating that "too large a revision, or too many revisions, indicate that the piece of writing is a failure. In the time it would take to salvage such a failure, I could write a new piece altogether and have infinitely more fun in the process".
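Returning briefly to the World Season Calendar described above: the scheme is simple enough to illustrate with a short sketch. The following Python snippet is an illustration only, not anything Asimov published. It converts a Gregorian date into the letter-and-number day names of the proposal; only the four 91-day seasons and the extra year day come from the description above, while the December 21 start of the calendar year is an assumption inferred from the "D-73 = December 1" example (which places the start of season D around September 20).

from datetime import date

SEASONS = "ABCD"

def world_season_name(gregorian: date) -> str:
    """Return a World Season Calendar name such as 'D-73' for a Gregorian date."""
    # Most recent December 21 on or before the given date (assumed start of the calendar year).
    start = date(gregorian.year, 12, 21)
    if gregorian < start:
        start = date(gregorian.year - 1, 12, 21)

    day_index = (gregorian - start).days      # 0-based day within the calendar year
    if day_index >= 4 * 91:
        # The 365th day falls outside the four 91-day seasons.
        return "Year Day"
    season = SEASONS[day_index // 91]         # 91 days per season: A, B, C, D
    return f"{season}-{day_index % 91 + 1}"

# Example from the text: December 1 is the 73rd day of season D.
print(world_season_name(date(2023, 12, 1)))   # D-73

# Note: leap days are ignored in this sketch; in a calendar year containing
# February 29, later dates would shift by one day.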
He submitted "failures" to another editor.[148][234] Asimov's fiction style is extremely unornamented. In 1980, science fiction scholarJames Gunnwrote ofI, Robot: Except for two stories—"Liar!" and "Evidence"—they are not stories in which character plays a significant part. Virtually all plot develops in conversation with little if any action. Nor is there a great deal of local color or description of any kind. The dialogue is, at best, functional and the style is, at best, transparent. ... . The robot stories and, as a matter of fact, almost all Asimov fiction—play themselves on a relatively bare stage.[236] Asimov addressed such criticism in 1989 at the beginning ofNemesis: I made up my mind long ago to follow one cardinal rule in all my writing—to be 'clear'. I have given up all thought of writing poetically or symbolically or experimentally, or in any of the other modes that might (if I were good enough) get me a Pulitzer prize. I would write merely clearly and in this way establish a warm relationship between myself and my readers, and the professional critics—Well, they can do whatever they wish.[237] Gunn cited examples of a more complex style, such as the climax of "Liar!". Sharply drawn characters occur at key junctures of his storylines:Susan Calvinin "Liar!" and "Evidence",Arkady DarellinSecond Foundation, Elijah Baley inThe Caves of Steel, andHari Seldonin theFoundationprequels. Other than books by Gunn and Joseph Patrouch, there is relatively little literary criticism on Asimov (particularly when compared to the sheer volume of his output). Cowart and Wymer'sDictionary of Literary Biography(1981) gives a possible reason: His words do not easily lend themselves to traditionalliterary criticismbecause he has the habit of centering his fiction on plot and clearly stating to his reader, in rather direct terms, what is happening in his stories and why it is happening. In fact, most of the dialogue in an Asimov story, and particularly in the Foundation trilogy, is devoted to such exposition. Stories that clearly state what they mean in unambiguous language are the most difficult for a scholar to deal with because there is little to be interpreted.[238] Gunn's and Patrouch's studies of Asimov both state that a clear, direct prose style is still a style. Gunn's 1982 book comments in detail on each of Asimov's novels. He does not praise all of Asimov's fiction (nor does Patrouch), but calls some passages inThe Caves of Steel"reminiscent ofProust". When discussing how that novel depicts night falling over futuristic New York City, Gunn says that Asimov's prose "need not be ashamed anywhere in literary society".[239] Although he prided himself on his unornamented prose style (for which he creditedClifford D. Simakas an early influence[16][240]), and said in 1973 that his style had not changed,[148]Asimov also enjoyed giving his longer stories complicatednarrative structures, often by arranging chapters in nonchronologicalways. Some readers have been put off by this, complaining that thenonlinearityis not worth the trouble and adversely affects the clarity of the story. For example, the first third ofThe Gods Themselvesbegins with Chapter 6, then backtracks to fill in earlier material.[241](John Campbell advised Asimov to begin his stories as late in the plot as possible. This advice helped Asimov create "Reason", one of the earlyRobotstories). 
Patrouch found that the interwoven and nested flashbacks ofThe Currents of Spacedid serious harm to that novel, to such an extent that only a "dyed-in-the-kyrt[242]Asimov fan" could enjoy it. In his later novelNemesisone group of characters lives in the "present" and another group starts in the "past", beginning 15 years earlier and gradually moving toward the time of the first group. Asimov once explained that his reluctance to write about aliens came from an incident early in his career whenAstounding's editorJohn Campbellrejected one of his science fiction stories because the alien characters were portrayed as superior to the humans. The nature of the rejection led him to believe that Campbell may have based his bias towards humans in stories on a real-world racial bias. Unwilling to write only weak alien races, and concerned that a confrontation would jeopardize his and Campbell's friendship, he decided he would not write about aliens at all.[243]Nevertheless, in response to these criticisms, he wroteThe Gods Themselves, which contains aliens and alien sex. The book won theNebula Award for Best Novelin 1972,[213]and theHugo Award for Best Novelin 1973.[213]Asimov said that of all his writings, he was most proud of the middle section ofThe Gods Themselves, the part that deals with those themes.[244] In theHugo Award–winning novelette "Gold", Asimov describes an author, based on himself, who has one of his books (The Gods Themselves) adapted into a "compu-drama", essentiallyphoto-realisticcomputer animation. The director criticizes the fictionalized Asimov ("Gregory Laborian") for having an extremely nonvisual style, making it difficult to adapt his work, and the author explains that he relies on ideas and dialogue rather than description to get his points across.[245] In the early days of science fiction some authors and critics felt that the romantic elements were inappropriate in science fiction stories, which were supposedly to be focused on science and technology. Isaac Asimov was a supporter of this point of view, expressed in his 1938-1939 letters toAstounding, where he described such elements as "mush" and "slop". To his dismay, these letters were met with a strong opposition.[246] Asimov attributed the lack of romance and sex in his fiction to the "early imprinting" from starting his writing career when he had never been on a date and "didn't know anything about girls".[126]He was sometimes criticized for the general absence of sex (and ofextraterrestrial life) in his science fiction. He claimed he wroteThe Gods Themselves(1972) to respond to these criticisms,[247]which often came fromNew Wave science fiction(and often British) writers. The second part (of three) of the novel is set on an alien world with three sexes, and the sexual behavior of these creatures is extensively depicted. There is a perennial question among readers as to whether the views contained in a story reflect the views of the author. The answer is, "Not necessarily—" And yet one ought to add another short phrase "—but usually." Asimov was anatheist, and ahumanist.[118]He did not oppose religious conviction in others, but he frequently railed againstsuperstitiousandpseudoscientificbeliefs that tried to pass themselves off as genuine science. 
During his childhood, his parents observed the traditions ofOrthodox Judaismless stringently than they had in Petrovichi; they did not force their beliefs upon young Isaac, and he grew up without strong religious influences, coming to believe that theTorahrepresentedHebrew mythologyin the same way that theIliadrecordedGreek mythology.[249]When he was 13, he chose not to have abar mitzvah.[250]As his booksTreasury of HumorandAsimov Laughs Againrecord, Asimov was willing to tell jokes involving God,Satan, theGarden of Eden,Jerusalem, and other religious topics, expressing the viewpoint that a good joke can do more to provoke thought than hours of philosophical discussion.[186][187] For a brief while, his father worked in the localsynagogueto enjoy the familiar surroundings and, as Isaac put it, "shine as a learned scholar"[251]versed in the sacred writings. This scholarship was a seed for his later authorship and publication ofAsimov's Guide to the Bible, an analysis of the historic foundations for the Old and New Testaments. For many years, Asimov called himself an atheist; he considered the term somewhat inadequate, as it described what he did not believe rather than what he did. Eventually, he described himself as a "humanist" and considered that term more practical. Asimov continued to identify himself as asecular Jew, as stated in his introduction toJack Dann's anthology of Jewish science fiction,Wandering Stars: "I attend no services and follow no ritual and have never undergone that curious puberty rite, the Bar Mitzvah. It doesn't matter. I am Jewish."[252] When asked in an interview in 1982 if he was an atheist, Asimov replied, I am an atheist, out and out. It took me a long time to say it. I've been an atheist for years and years, but somehow I felt it was intellectually unrespectable to say one was an atheist, because it assumed knowledge that one didn't have. Somehow it was better to say one was a humanist or an agnostic. I finally decided that I'm a creature of emotion as well as of reason. Emotionally I am an atheist. I don't have the evidence to prove that God doesn't exist, but I so strongly suspect he doesn't that I don't want to waste my time.[253] Likewise, he said about religious education: "I would not be satisfied to have my kids choose to be religious without trying to argue them out of it, just as I would not be satisfied to have them decide to smoke regularly or engage in any other practice I consider detrimental to mind or body."[254] In his last volume of autobiography, Asimov wrote, If I were not an atheist, I would believe in a God who would choose to save people on the basis of the totality of their lives and not the pattern of their words. I think he would prefer an honest and righteous atheist to a TV preacher whose every word is God, God, God, and whose every deed is foul, foul, foul.[255] The same memoir states his belief thatHellis "the drooling dream of asadist" crudely affixed to an all-merciful God; if even human governments were willing to curtail cruel and unusual punishments, wondered Asimov, why would punishment in the afterlife not be restricted to a limited term? Asimov rejected the idea that a human belief or action could merit infinite punishment. If an afterlife existed, he claimed, the longest and most severe punishment would be reserved for those who "slandered God by inventing Hell".[256] Asimov said about using religious motifs in his writing: I tend to ignore religion in my own stories altogether, except when I absolutely have to have it. 
... and, whenever I bring in a religious motif, that religion is bound to seem vaguely Christian because that is the only religion I know anything about, even though it is not mine. An unsympathetic reader might think that I am "burlesquing" Christianity, but I am not. Then too, it is impossible to write science fiction and really ignore religion.[257] Asimov became a staunch supporter of theDemocratic Partyduring theNew Deal, and thereafter remained a politicalliberal. He was a vocal opponent of theVietnam Warin the 1960s and in a television interview during the early 1970s he publicly endorsedGeorge McGovern.[258]He was unhappy about what he considered an "irrationalist" viewpoint taken by many radical political activists from the late 1960s and onwards. In his second volume of autobiography,In Joy Still Felt, Asimov recalled meeting the counterculture figureAbbie Hoffman. Asimov's impression was that the1960s' countercultureheroes had ridden an emotional wave which, in the end, left them stranded in a "no-man's land of the spirit" from which he wondered if they would ever return.[259] Asimov vehemently opposedRichard Nixon, considering him "a crook and a liar". He closely followedWatergate, and was pleased when the president was forced to resign. Asimov was dismayed over the pardon extended to Nixon by his successorGerald Ford: "I was not impressed by the argument that it has spared the nation an ordeal. To my way of thinking, the ordeal was necessary to make certain it would never happen again."[260] After Asimov's name appeared in the mid-1960s on a list of people theCommunist Party USA"considered amenable" to its goals, theFBIinvestigated him. Because of his academic background, the bureau briefly considered Asimov as a possible candidate for known Soviet spy ROBPROF, but found nothing suspicious in his life or background.[261] Asimov appeared to hold an equivocal attitude towardsIsrael. In his first autobiography, he indicates his support for the safety of Israel, though insisting that he was not aZionist.[262]In his third autobiography, Asimov stated his opposition to the creation of aJewish state, on the grounds that he was opposed to havingnation-statesin general, and supported the notion of a single humanity. Asimov especially worried about the safety of Israel given that it had been created among Muslim neighbors "who will never forgive, never forget and never go away", and said that Jews had merely created for themselves another "Jewish ghetto".[n] Asimov believed that "sciencefiction ... serve[s] the good of humanity".[164]He considered himself a feminist even beforewomen's liberationbecame a widespread movement; he argued that the issue ofwomen's rightswas closely connected to that of population control.[263]Furthermore, he believed thathomosexualitymust be considered a "moral right" on population grounds, as must all consenting adult sexual activity that does not lead to reproduction.[263]He issued many appeals forpopulation control, reflecting a perspective articulated by people fromThomas MalthusthroughPaul R. Ehrlich.[264] In a 1988 interview byBill Moyers, Asimov proposedcomputer-aided learning, where people would use computers to find information on subjects in which they were interested.[265]He thought this would make learning more interesting, since people would have the freedom to choose what to learn, and would help spread knowledge around the world. 
Also, theone-to-onemodel would let students learn at their own pace.[266]Asimov thought that people would live in space by 2019.[267] In 1983 Asimov wrote:[268] Computerization will undoubtedly continue onward inevitably... This means that a vast change in the nature of education must take place, and entire populations must be made "computer-literate" and must be taught to deal with a "high-tech" world. He continues on education: Education, which must be revolutionized in the new world, will be revolutionized by the very agency that requires the revolution — the computer. Schools will undoubtedly still exist, but a good schoolteacher can do no better than to inspire curiosity which an interested student can then satisfy at home at the console of his computer outlet. There will be an opportunity finally for every youngster, and indeed, every person, to learn what he or she wants to learn, in his or her own time, at his or her own speed, in his or her own way. Education will become fun because it will bubble up from within and not be forced in from without. Asimov would often fondle, kiss and pinch women at conventions and elsewhere without regard for their consent. According toAlec Nevala-Lee, author of an Asimov biography[269]and writer on the history of science fiction, he often defended himself by saying that far from showing objections, these women cooperated.[270]In a 1971 satirical piece,The Sensuous Dirty Old Man, Asimov wrote: "The question then is not whether or not a girl should be touched. The question is merely where, when, and how she should be touched."[270] According to Nevala-Lee, however, "many of these encounters were clearly nonconsensual."[270]He wrote that Asimov's behaviour, as a leading science-fiction author and personality, contributed to an undesirable atmosphere for women in the male-dominated science fiction community. In support of this, he quoted some of Asimov's contemporary fellow-authors such asJudith Merril,Harlan EllisonandFrederik Pohl, as well as editors such as Timothy Seldes.[270]Additional specific incidents were reported by other people includingEdward L. Ferman, long-time editor ofThe Magazine of Fantasy & Science Fiction, who wrote "...instead of shaking my date's hand, he shook herleft breast".[271] Asimov's defense of civil applications ofnuclear power, even after theThree Mile Islandnuclear power plant incident, damaged his relations with some of his fellow liberals. In a letter reprinted inYours, Isaac Asimov,[263]he states that although he would prefer living in "no danger whatsoever" to living near a nuclear reactor, he would still prefer a home near a nuclear power plant to a slum onLove Canalor near "aUnion Carbideplant producingmethyl isocyanate", the latter being a reference to theBhopal disaster.[263] In the closing years of his life, Asimov blamed the deterioration of the quality of life that he perceived in New York City on the shrinking tax base caused by themiddle-class flightto the suburbs, though he continued to support high taxes on the middle class to pay for social programs. 
His last nonfiction book,Our Angry Earth(1991, co-written with his long-time friend, science fiction authorFrederik Pohl), deals with elements of the environmental crisis such asoverpopulation,oil dependence,war,global warming, and the destruction of theozone layer.[272][273]In response to being presented byBill Moyerswith the question "What do you see happening to the idea of dignity to human species if this population growth continues at its present rate?", Asimov responded: It's going to destroy it all ... if you have 20 people in the apartment and two bathrooms, no matter how much every person believes in freedom of the bathroom, there is no such thing. You have to set up, you have to set up times for each person, you have to bang at the door, aren't you through yet, and so on. And in the same way, democracy cannot survive overpopulation. Human dignity cannot survive it. Convenience and decency cannot survive it. As you put more and more people onto the world, the value of life not only declines, but it disappears.[274] Asimov enjoyed the writings ofJ. R. R. Tolkien, and usedThe Lord of the Ringsas a plot point in aBlack Widowersstory, titledNothing like Murder.[275]In the essay "All or Nothing" (forThe Magazine of Fantasy and Science Fiction,Jan 1981), Asimov said that he admired Tolkien and that he had readThe Lord of the Ringsfive times. (The feelings were mutual, with Tolkien saying that he had enjoyed Asimov's science fiction.[276]This would make Asimov an exception to Tolkien's earlier claim[276]that he rarely found "any modern books" that were interesting to him.) He acknowledged other writers as superior to himself in talent, saying ofHarlan Ellison, "He is (in my opinion) one of the best writers in the world, far more skilled at the art than I am."[277]Asimov disapproved of theNew Wave's growing influence, stating in 1967 "I want science fiction. I think science fiction isn't really science fiction if it lacks science. And I think the better and truer the science, the better and truer the science fiction".[164] The feelings of friendship and respect between Asimov andArthur C. Clarkewere demonstrated by the so-called "Clarke–Asimov Treaty ofPark Avenue", negotiated as they shared a cab in New York. This stated that Asimov was required to insist that Clarke was the best science fiction writer in the world (reserving second-best for himself), while Clarke was required to insist that Asimov was the best science writer in the world (reserving second-best for himself). Thus, the dedication in Clarke's bookReport on Planet Three(1972) reads: "In accordance with the terms of the Clarke–Asimov treaty, the second-best science writer dedicates this book to the second-best science-fiction writer." In 1980, Asimov wrote a highly critical review ofGeorge Orwell's1984.[278]Though dismissive of his attacks, James Machell has stated that they "are easier to understand when you consider that Asimov viewed 1984 as dangerous literature. He opines that if communism were to spread across the globe, it would come in a completely different form to the one in 1984, and by looking to Orwell as an authority on totalitarianism, 'we will be defending ourselves against assaults from the wrong direction and we will lose'."[279] Asimov became a fan of mystery stories at the same time as science fiction. 
He preferred to read the former because "I read every [science fiction] story keenly aware that it might be worse than mine, in which case I had no patience with it, or that it might be better, in which case I felt miserable".[148] Asimov wrote, "I make no secret of the fact that in my mysteries I use Agatha Christie as my model. In my opinion, her mysteries are the best ever written, far better than the Sherlock Holmes stories, and Hercule Poirot is the best detective fiction has seen. Why should I not use as my model what I consider the best?"[280] He enjoyed Sherlock Holmes, but considered Arthur Conan Doyle to be "a slapdash and sloppy writer."[281]
Asimov also enjoyed humorous stories, particularly those of P. G. Wodehouse.[282]
In non-fiction writing, Asimov particularly admired the writing style of Martin Gardner, and tried to emulate it in his own science books. On meeting Gardner for the first time in 1965, Asimov told him this, to which Gardner answered that he had based his own style on Asimov's.[283]
Paul Krugman, winner of the Nobel Memorial Prize in Economic Sciences, stated that Asimov's concept of psychohistory inspired him to become an economist.[284]
John Jenkins, who has reviewed the vast majority of Asimov's written output, once observed, "It has been pointed out that most science fiction writers since the 1950s have been affected by Asimov, either modeling their style on his or deliberately avoiding anything like his style."[285] Along with such figures as Bertrand Russell and Karl Popper, Asimov left his mark as one of the most distinguished interdisciplinarians of the 20th century.[286] "Few individuals", writes James L. Christian, "understood better than Isaac Asimov what synoptic thinking is all about. His almost 500 books—which he wrote as a specialist, a knowledgeable authority, or just an excited layman—range over almost all conceivable subjects: the sciences, history, literature, religion, and of course, science fiction."[287]
In 2024, DARPA named one of its programs after Asimov, inspired by his Three Laws of Robotics. The program, Autonomy Standards and Ideals with Military Operational Values (ASIMOV), aims to develop benchmarks for objectively and quantitatively assessing the ethical challenges and readiness of autonomous systems used in military operations.[288]
Over a space of 40 years, I published an average of 1,000 words a day. Over the space of the second 20 years, I published an average of 1,700 words a day.
Depending on the counting convention used,[290] and including all titles, charts, and edited collections, there may be currently over 500 books in Asimov's bibliography—as well as his individual short stories, individual essays, and criticism. For his 100th, 200th, and 300th books (based on his personal count), Asimov published Opus 100 (1969), Opus 200 (1979), and Opus 300 (1984), celebrating his writing.[195][196][197] An extensive bibliography of Isaac Asimov's works has been compiled by Ed Seiler.[291] An analysis of his book-writing rate showed that he wrote faster as he wrote more.[292]
An online exhibit in West Virginia University Libraries' virtually complete Asimov Collection displays features, visuals, and descriptions of some of his more than 600 books, games, audio recordings, videos, and wall charts. Many first, rare, and autographed editions are in the Libraries' Rare Book Room. Book jackets and autographs are presented online along with descriptions and images of children's books, science fiction art, multimedia, and other materials in the collection.[293][294]
The Robot series was originally separate from the Foundation series. The Galactic Empire novels were published as independent stories, set earlier in the same future as Foundation. Later in life, Asimov synthesized the Robot series into a single coherent "history" that appeared in the extension of the Foundation series.[295] All of these books were published by Doubleday & Co, except the original Foundation trilogy, which was originally published by Gnome Press before being bought and republished by Doubleday.
https://en.wikipedia.org/wiki/Isaac_Asimov
TheFoundationseriesis ascience fictionbook series written by American authorIsaac Asimov. First published as a series ofshort storiesand novellas in 1942–1950, and subsequently in three books in 1951–1953, for nearly thirty years the series was widely known asThe Foundation Trilogy:Foundation(1951),Foundation and Empire(1952), andSecond Foundation(1953). It won the one-timeHugo Awardfor "Best All-Time Series" in 1966.[1][2]Asimov later added new volumes, with two sequels,Foundation's Edge(1982) andFoundation and Earth(1986), and two prequels,Prelude to Foundation(1988) andForward the Foundation(1993). The premise of the stories is that in the waning days of a futureGalactic Empire, the mathematicianHari Seldondevises the theory ofpsychohistory, a new and effectivemathematicsofsociology. Using statistical laws ofmass action, it can predict the future of large populations. Seldon foresees the imminent fall of the Empire, which encompasses the entireMilky Way, and adark agelasting 30,000 years before a second empire arises. Although the momentum of the Empire's fall is too great to stop, Seldon devises a plan by which "the onrushing mass of events must be deflected just a little" to eventually limit thisinterregnumto just one thousand years. The books describe some of the dramatic events of those years as they are shaped by the underlying political and social mechanics of Seldon's Plan. The original trilogy of novels collected a series of eight short stories and novellas published inAstounding Science-Fictionmagazine between May 1942 and January 1950. According to Asimov, the premise was based on ideas inEdward Gibbon'sHistory of the Decline and Fall of the Roman Empire, and was invented spontaneously on his way to meet with editorJohn W. Campbell, with whom he developed the concepts of the collapse of theGalactic Empire, the civilization-preserving Foundations, and psychohistory.[3]Asimov wrote these early stories in hisWest Philadelphiaapartment when he worked at thePhiladelphia Naval Yard.[4] The first four stories were collected, along with a new introductory story, and published byGnome Pressin 1951 asFoundation. The later stories were published in pairs by Gnome asFoundation and Empire(1952) andSecond Foundation(1953), resulting in the "Foundation Trilogy", as the series is still known.[5] In 1981, Asimov was persuaded by his publishers to write a fourth book, which becameFoundation's Edge(1982). Four years later, Asimov followed up with another sequel,Foundation and Earth(1986),[6]which was followed by the prequelsPrelude to Foundation(1988) andForward the Foundation(1993), published after his 1992 death. During the two-year lapse between writing the sequels and prequels, Asimov had tied in hisFoundationseries with his various other series, creating a single unified universe. The basic link is mentioned inFoundation's Edge: an obscure myth about a first wave of space settlements with robots and then a second without. The idea is the one developed inRobots of Dawn, which, in addition to showing the way that the second wave of settlements was to be allowed, illustrates the benefits and shortcomings of the first wave of settlements and their so-calledC/Fe(carbon/iron, signifying humans and robots together) culture. In this same book, the wordpsychohistoryis used to describe the nascent idea of Seldon's work. Some of the drawbacks to this style of colonization, also calledSpacerculture, are also exemplified by the events described all the way back in 1957'sThe Naked Sun. 
The link between the Robot and Foundation universes was tightened by letting the robotR. Daneel Olivaw– originally introduced inThe Caves of Steel– live on for tens of thousands of years and play a major role behind the scenes in both the Galactic Empire in its heyday and in the rise of the two Foundations to take its place. Called forth to stand trial on Trantor for allegations oftreason(for foreshadowing the decline of the Galactic Empire), Seldon explains that his science of psychohistory foresees many alternatives, all of which result in the Galactic Empire eventually falling. If humanity follows its current path, the Empire will fall and 30,000 years of turmoil will overcome humanity before a second empire arises. However, an alternative path allows for the intervening years to be only 1,000 if Seldon is allowed to collect the most intelligent minds and create a compendium of all human knowledge, entitled theEncyclopedia Galactica. The board is still wary, but allows Seldon to assemble whomever he needs, provided he and the "Encyclopedists" are exiled to a remote planet, Terminus. Seldon agrees to these terms – and secretly establishes a second foundation of which almost nothing is known, which he says is at the "opposite end" of the galaxy. After 50 years on Terminus, and with Seldon dead, the inhabitants find themselves in a crisis. With four powerful planets surrounding theirs, the Encyclopedists have no defenses but their own intelligence. A vault left by Seldon is due to automatically open and it reveals a recordedhologramof Seldon, who informs the Encyclopedists that their reason for being on Terminus is false; Seldon did not care whether or not an encyclopedia was created, only that the population was placed on Terminus and the events needed by his calculations were set in motion. In reality, the recording discloses, Terminus was set up to reduce the Dark Ages based on his calculations. It will develop by facing intermittent and extreme "crises" – known as "Seldon Crises" – which the laws governing psychohistory show will inevitably be overcome, simply because human nature will cause events to fall in particular ways which lead to the intended goal. The recording reveals that the present events are the first such crisis, reminds them that a second foundation was also formed at the "opposite end" of the galaxy, and then falls silent. The Mayor of Terminus City,Salvor Hardin, proposes to play the planets against each other. His plan is a success; the Foundation remains untouched, and he becomes its ruler. The minds of the Foundation continue to develop newer and greater technologies which are more compact and powerful than the Empire's equivalents. Using its scientific advantages, Terminus develops trade routes with nearby planets, eventually taking them over when its technology becomes a coveted commodity. The interplanetary traders become diplomats to other planets. One such trader,Hober Mallow, becomes powerful enough to challenge and win the office of Mayor and, by cutting off supplies to a nearby region, also succeeds in adding more planets to the Foundation's control. An ambitious general of the emperor of the galaxy perceives the Foundation to be a growing threat and orders an attack on it, using the Empire's mighty fleet of war vessels. The Emperor, initially supportive, becomes suspicious of his general's long-term motive for the attack and recalls the fleet despite being close to victory. 
In spite of its undoubted inferiority in purely military terms, the Foundation emerges as the victor. Seldon's hologram reappears in the vault on Terminus, and explains to the Foundation that this opening of the vault follows a conflict whose result was inevitable whatever might have been done – a weak Imperial navy could not have attacked them, while a strong navy would have shown itself by its successes to be a threat to the Emperor and been recalled. A century later, an unknown outsider calledthe Mulehas begun taking over planets at a rapid pace. The Foundation comes to realize, too late, that the Mule is unforeseen by Seldon's plan.Toran and Bayta Darell, accompanied byEbling Mis– the Foundation's greatest psychologist – and a court jester named Magnifico, who is familiar with the Mule, set out to Trantor to find the Second Foundation, hoping to bring an end to the Mule's reign. Mis studies furiously in the Great Library of Trantor to figure out the Second Foundation's location to seek its help. He is successful and also deduces that the Mule's success stems from his being a mutant who is able to change the emotions of others, a power he used to first instill fear in the inhabitants of his conquered planets, then to make his enemies devoutly loyal to him. Mis is murdered by Bayta Darell before he can reveal the location because she realized that Magnifico is the Mule and has been using his gifts to help Mis do his research, so that the Mule can subjugate the Second Foundation. The Mule ruefully acknowledges that his feelings for Bayta prevented him from tampering with her mind to block just such interference. He leaves Trantor to rule over his conquered planets while continuing his search. As the Mule comes closer to finding it, the mysterious Second Foundation emerges briefly out of hiding to face the threat. While the first Foundation has developed the physical sciences, the Second Foundation has been developing Seldon's mathematics and the Seldon Plan, along with their use of mental abilities. The Second Foundation launches an operation to deceive and eventually mind control the Mule, whom they return to rule over his kingdom peacefully for the rest of his life, without further thought of conquering the Second Foundation. As a result, the first Foundation learns something of the Second Foundation beyond the fact that it exists, and comes to have some understanding of its role. This means that their behavior will now be influenced by that knowledge, invalidating the mathematics of the Seldon Plan and placing the Plan itself at great risk. The First Foundation starts to resentfully consider the other a rival, and a small group secretly begins to develop equipment to detect and block the Second Foundation's mental influence. After many attempts to infer the Second Foundation's whereabouts from the few clues, the Foundation is led to believe the Second Foundation is located on Terminus (the "opposite end of the galaxy" for a galaxy with a circular shape). The Foundation uncovers and eliminates a group of 50 members of the Second Foundation, believing they have destroyed it. The 50 were volunteers who sacrificed themselves so that humanity's collective behavior would once again be predictable and follow the mathematics of the Seldon Plan. The Second Foundation is revealed to be on Trantor, the former Imperial homeworld. The clue "at Star's End" was not a physical clue but based on an old saying, "All roads lead to Trantor, and that is where all stars end". 
Believing that the Second Foundation exists (despite the common belief that it has been extinguished), young politicianGolan Trevizeis sent intoexileby the Mayor of the Foundation,Harla Branno, to uncover the Second Foundation;Trevizeis accompanied by a scholar namedJanov Pelorat. The reason for their belief is that, despite the unforeseeable impact of the Mule, the Seldon Plan still appears to be proceeding in accordance with the statements of Seldon's hologram, suggesting that the Second Foundation still exists and is secretly intervening to follow the plan. After a few conversations with Pelorat, Trevize comes to believe that a mythical planet calledEarthmay hold the secret to the location. No such planet exists in any database, yet myths and legends refer to it. Trevize believes that the planet is being kept hidden. Unknown to Trevize and Pelorat, Branno is tracking their ship so that if they find the Second Foundation, the first Foundation can take action. Stor Gendibal, a prominent member of the Second Foundation, discovers a simple local on Trantor who has had a very subtle alteration made to her mind, far more delicate than anything the Second Foundation can do. He concludes that a greater force of Mentalics, those with the ability to read and shape the minds of others, must be active in the Galaxy. Following the events on Terminus, Gendibal tries to follow Trevize, reasoning that by doing so, he may find out who has altered the mind of the Trantor native. Using the few scraps of reliable information within the myths, Trevize and Pelorat discover a planet called Gaia on which every organism and inanimate object on the planet shares a common mind. Branno and Gendibal, who have followed Trevize, also reach Gaia. Gaia reveals that it has engineered this situation because it wishes to do what is best for humanity but cannot be sure what is best. Trevize's purpose, faced with the leaders of the First and Second Foundations and Gaia, is to be trusted to make the best decision among the three main alternatives for the future of the human race, the First Foundation's path, based on mastery of the physical world and its traditional political organization (i.e., Empire); the Second Foundation's path, based on mentalics and probable rule by an elite using mind control; or Gaia's path of absorption of the entire Galaxy into one shared, harmonious living entity in which all beings and the galaxy would be a part. After Trevize makes his decision for Gaia's path, the intellect of Gaia adjusts Branno's and Gendibal's minds so that each believes he or she has succeeded in a significant task. (Branno believes she has negotiated a treaty tying Sayshell to the Foundation and Gendibal – now leader of the Second Foundation – believes that the Second Foundation is victorious and should continue as normal.) Trevize remains but is uncertain as to why he is "sure" that Gaia is the correct outcome for the future. Still uncertain about his decision, Trevize continues the search for Earth along with Pelorat and a local of Gaia, advanced in Mentalics, known as Blissenobiarella (usually referred to simply as Bliss). Eventually, Trevize finds three sets of coordinates which are very old. Adjusting them for time, he realizes that his ship's computer does not list any planet in the vicinity of the coordinates. When he visits the locations, he rediscovers the forgotten Spacer worlds of Aurora, Solaria, and finally Melpomenia. 
After searching and facing dilemmas on each planet, Trevize still has not discovered any answers. Aurora and Melpomenia are long deserted but Solaria contains a small population extremely advanced in the field of Mentalics. When the lives of the group are threatened, Bliss uses her abilities (and the shared intellect of Gaia) to destroy the Solarian who is about to kill them. This leaves behind a small child who will be put to death if left alone, so Bliss makes the decision to keep the child as they quickly escape the planet. Eventually, Trevize discovers Earth but it contains no satisfactory answers for him (it is also long-since deserted). It dawns on Trevize that the answer may not be on Earth but on Earth's satellite – the Moon. Upon approaching the planet, they are drawn inside the Moon's core, where they meet a robot namedR. Daneel Olivaw. Olivaw explains that he has been instrumental in guiding human history for thousands of years, having provided the impetus for Seldon to create psychohistory and also the creation of Gaia, but is now close to the end of his ability to maintain himself and will shortly cease to function. Despite replacing hispositronicbrain (which contains 20,000 years of memories), he is going to die shortly. He explains that no further robotic brain can be devised to replace his or which will let him continue assisting for the benefit of humanity. Some time can be won to ensure the long-term benefit of humanity by merging Olivaw's mind with the organic intellect of a human – in this case, the intellect of the child that the group rescued on Solaria. Once again, Trevize is put in the position of deciding if having Olivaw meld with the child's superior intellect would be in the best interests of the galaxy. The decision is left ambiguous (though likely a "yes") as it is implied that the melding of the minds may be to the child's benefit but that she may have sinister intentions. Prelude to Foundationopens on the planetTrantor, the empire's capital planet, the day afterHari Seldonhas given a speech at a mathematics conference. Several parties become aware of the content of his speech (that using mathematical formulas, it may be possible to predict the future course of human history). Seldon is hounded by the Emperor and various employed thugs who are working surreptitiously, which forces him into exile. Over the course of the book, Seldon andDors Venabili, a female companion and professor of history, are taken from location to location byChetter Humminwho, under the guise of a reporter, introduces them to various Trantorian walks of life in his attempts to keep Seldon hidden from the Emperor. Throughout their adventures all over Trantor, Seldon continually denies that psychohistory is a realistic science. Even if feasible, it may take several decades to develop. Hummin, however, is convinced that Seldon knows something, so he continuously presses him to work out a starting point to develop psychohistory. Eventually, after much traveling and introductions to various, diverse cultures on Trantor, Seldon realizes that using the entire known galaxy as a starting point is too overwhelming; he then decides to use Trantor as a model to work out the science, with a goal of later using the applied knowledge on the rest of the galaxy. Eight years after the events ofPrelude, Seldon has worked out the science of psychohistory and has applied it on a galactic scale. His notability and fame increase, and he is eventually promoted to First Minister to the Emperor. 
As the book progresses, Seldon loses those closest to him, including his wife, Dors Venabili, as his own health deteriorates into old age. Having worked his entire adult life to understand psychohistory, Seldon instructs his granddaughter, Wanda, to set up the Second Foundation. The early stories were inspired byEdward Gibbon'sThe History of the Decline and Fall of the Roman Empire. The plot of the series focuses on the growth and reach of the Foundation, against a backdrop of the "decline and fall of the Galactic Empire." The themes of Asimov's stories were also influenced by the political tendency inscience fiction fandom, associated with theFuturians, known asMichelism. The focus of the books is the trends through which a civilization might progress, specifically seeking to analyze their progress, using history as a precedent. Although many science fiction novels such asNineteen Eighty-FourorFahrenheit 451do this, their focus is on how current trends in society might come to fruition and they act as a moral allegory of the modern world. TheFoundationseries, on the other hand, looks at the trends in a wider scope, dealing with societal evolution and adaptation rather than the human and cultural qualities at one point in time. In this Asimov followed the model ofThucydides' workTheHistory of the Peloponnesian War, as heonce acknowledged. Asimov tried to end the series withSecond Foundation. However, because of the predicted thousand years until the rise of the next Empire (of which only a few hundred had elapsed), the series lacked a sense of closure. For decades, fans pressured him to write a sequel. In 1982, after a 30-year hiatus, Asimov gave in and wrote what was at the time a fourth volume:Foundation's Edge. This was followed shortly thereafter byFoundation and Earth. This novel, which takes place some 500 years after Seldon, ties up all the loose ends and ties all his Robot, Empire, and Foundation novels into a single story. He also opens a brand new line of thought in the last dozen pages regardingGalaxia, a galaxy inhabited by a singlecollective mind. This concept was never explored further. According to his widowJanet Asimov(in her biography of Isaac,It's Been a Good Life), he had no idea how to continue afterFoundation and Earth, so he started writing the prequels. In the spring of 1955, Asimov published afuture historyof humanity in the pages ofThrilling Wonder Storiesmagazine based upon his thought processes concerning the Foundation universe at that point in his life. According to the publication, "the scheme was not originally worked out as a consistent pattern and only includes about one-quarter of his total writings". Because of this, the dating in theFoundationseries is approximate and inconsistent.[8] Asimov estimates that hisFoundationseries takes place nearly 50,000 years into the future, with Hari Seldon born in 47,000 CE.[8]Around this time, the future emperor Cleon I is born in the imperial capital Trantor, 78 years before the Foundation Era (FE) and the events of the original Foundation trilogy. 
After Cleon inherits the crown, the mathematician Hari Seldon comes to Trantor from Helicon to deliver his theory of psychohistory that predicts the fall of the empire, which triggers the events ofPrelude to Foundation.[9]Forward the Foundationpicks up the story a few years later, with the emperor being assassinated and Seldon retiring from politics.[10] At the start of the Foundation Era, the events of the originalFoundationnovel(first published inAstounding Science Fictionas a series of short stories) take place, and the in-universe Foundation Era truly begins.[11]According to Asimov, he intended this to take place around the year 47000 CE, with the Empire in decay as it battles the rising Foundation, who emerges as the dominant power a few centuries later.[8]Thus begins the events of theFoundation and Empire, which include the unpredicted rise of the Mule, who defeats the Foundation thanks to his mutant abilities.[12]The events ofSecond Foundationchronicle the titular Second Foundation's search and defeat of the Mule, and their conflict with the remnants of the original Foundation, averting the Dark Age.[13]Asimov estimates that the Mule rises and falls somewhere around 47300 CE.[8] Foundation's Edgetakes place 500 years after the establishment of the Foundation, outside of the original trilogy of novels.[14][8]Foundation and Earthfollows immediately after, with humanity choosing and justifying a third path distinct from the opposing visions of the two Foundations.[15]According to Asimov, the Second Galactic Empire is established 48000 CE, 1000 years after the events of the first novel.[8] Asimov himself commented that his fiction's internal history was "actually made up ad hoc. My cross-references in the novels are thrown in as they occur to me and did not come from a systemized history. ... If some reader checks my stories carefully and finds that my dating is internally inconsistent, I can only say I'm not surprised."[8] A second Foundation trilogy of prequels was written after Asimov's death by three authors, authorized by the Asimov estate. These wereFoundation's Fear(1997) byGregory Benford,Foundation and Chaos(1998) byGreg Bear, andFoundation's Triumph(1999) byDavid Brin.[16] InLearned Optimism,[17]psychologistMartin Seligmanidentifies theFoundationseries as one of the most important influences in his professional life, because of the possibility of predictive sociology based on psychological principles. He also lays claim to the first successful prediction of a major historical (sociological) event, in the1988 US elections, and he specifically attributes this to a psychological principle.[18] In his 1996 bookTo Renew America,U. S. House SpeakerNewt Gingrichwrote that he was influenced by reading theFoundationtrilogy in high school.[19] Paul Krugman, winner of the 2008Nobel Memorial Prize in Economic Sciences, credits theFoundationseries with turning his mind to economics, as the closest existing science to psychohistory.[20][21] Stating that it "offers a useful summary of some of the dynamics of far-flung imperial Rome",Carl Saganin 1978 listed theFoundationseries as an example of how science fiction "can convey bits and pieces, hints and phrases, of knowledge unknown or inaccessible to the reader".[22]In the nonfiction PBS seriesCosmos: A Personal Voyage, Sagan referred to anEncyclopedia Galacticain the episodes "Encyclopaedia Galactica" and "Who Speaks for Earth". 
In 1966, theFoundationtrilogy beat several other science fiction and fantasy series to receive a specialHugo Awardfor "Best All-Time Series". The runners-up for the award were theBarsoom seriesbyEdgar Rice Burroughs, theFuture History seriesbyRobert A. Heinlein, theLensman seriesbyEdward E. SmithandThe Lord of the RingsbyJ. R. R. Tolkien.[23]The Foundation series was the only series so honored until the establishment of the "Best Series" category in 2017. Asimov himself wrote that he assumed the one-time award had been created to honorThe Lord of the Rings, and he was amazed when his work won.[24] The series has won three other Hugo Awards.Foundation's Edgewon Best Novel in 1983, and was a bestseller for almost a year. Retrospective Hugo Awards were given in 1996 and 2018 for, respectively, "The Mule" (the major part ofFoundation and Empire) for Best Novel (1946) and "Foundation" (the first story written for the series, and second chapter of the first novel) for Best Short Story (1943). Douglas Adams'The Hitchhiker's Guide to the Galaxymentions the encyclopedia by name, remarking that it is rather "dry", and consequently sells fewer copies than his own creation "The Guide".[44] Frank Herbertalso wroteDuneas a counterpoint toFoundation. Tim O'Reilly in his monograph on Herbert wrote that "Duneis clearly a commentary on theFoundationtrilogy. Herbert has taken a look at the same imaginative situation that provoked Asimov's classic—the decay of a galactic empire—and restated it in a way that draws on different assumptions and suggests radically different conclusions. The twist he has introduced intoDuneis that the Mule, not the Foundation, is his hero."[45] In 1995,Donald Kingsburywrote "Historical Crisis", which he later expanded into a novel,Psychohistorical Crisis. It takes place about 2,000 years afterFoundation, after the founding of the Second Galactic Empire. It is set in the same fictional universe as the Foundation series, in considerable detail, but with virtually allFoundation-specific names either changed (e.g., Kalgan becomes Lakgan), or avoided (psychohistory is created by an unnamed, but often-referenced Founder). The novel explores the ideas of psychohistory in a number of new directions, inspired by more recent developments in mathematics andcomputer science, as well as by new ideas in science fiction itself.[citation needed] In 1998, the novelSpectre(part of theShatnerverseseries) byWilliam ShatnerandJudith and Garfield Reeves-Stevensstates that theMirror Universedivergent path has been studied by theSeldon Psychohistory Institute.[citation needed] Theoboe-like holophonor inMatt Groening's animated television seriesFuturamais based directly upon theVisi-SonorwhichMagnificoplays inFoundation and Empire.[46][47] During the 2006–2007Marvel ComicsCivil Warcrossoverstoryline, inFantastic Four#542Mister Fantasticrevealed his own attempt to develop psychohistory, saying he was inspired after reading theFoundationseries.[citation needed] According to lead singerIan Gillan, the hard rock bandDeep Purple's songThe Muleis based on the Foundation character: "Yes, The Mule was inspired by Asimov. It's been a while but I'm sure you've made the right connection... Asimov was required reading in the 1960s."[48] An eight-partradio adaptationof the original trilogy, with sound design by theBBC Radiophonic Workshop, was broadcast onBBC Radio 4[49]in 1973—one of the first BBC radio drama serials to be made instereo. ABBC 7reruncommenced in July 2003. 
Adapted byPatrick Tull(episodes 1 to 4) and Mike Stott (episodes 5 to 8), the dramatisation was directed byDavid Cainand starred William Eedle as Hari Seldon, withGeoffrey Beeversas Gaal Dornick,Lee Montagueas Salvor Hardin,Julian Gloveras Hober Mallow,Dinsdale Landenas Bel Riose,Maurice Denhamas Ebling Mis andPrunella Scalesas Lady Callia. By 1998,New Line Cinemahad spent $1.5 million developing a film version of theFoundation Trilogy. The failure to develop a new franchise was partly a reason the studio signed on to produceThe Lord of the Ringsfilm trilogy.[50] On July 29, 2008, New Line Cinema co-foundersBob ShayeandMichael Lynnewere reported to have been signed on to produce an adaptation of the trilogy by their company Unique Pictures for Warner Brothers.[51]However,Columbia Pictures(Sony) successfully bid for the screen rights on January 15, 2009, and then contractedRoland Emmerichto direct and produce. Michael Wimer was named as co-producer.[52]Two years later, the studio hiredDante Harperto adapt the books. This project failed to materialize, andHBOacquired the rights when they became available in 2014.[53] In November 2014,TheWrapreported thatJonathan Nolanwas writing and producing a TV series based on theFoundation TrilogyforHBO.[53]Nolan confirmed his involvement at aPaley Centerevent on April 13, 2015.[54] In June 2017,Deadlinereported thatSkydance Mediawould produce a TV series.[55]In August 2018 it was announced thatApple TV+had commissioned a 10 episode straight-to-series order.[56]However, on April 18, 2019, Josh Friedman left the project as co-writer and co-showrunner. This was apparently planned, with either Friedman or screenwriterDavid Goyerleaving and the other staying.[57]On June 22, 2020, Apple CEOTim Cookannounced the series would be released in 2021.[58]On 13 March 2020, Apple suspended filming on their shows due to the COVID-19 outbreak;[59]filming resumed on October 6, 2020.[60] TheFoundationTV series was filmed at Troy Studios,Limerick, Ireland, and the budget was expected to be approximately $50 million.[61]The first episodes premiered on September 24, 2021.[62]Metacriticgave the first season a weighted average score of 63 out of 100 based on 22 reviews, indicating "generally favorable reviews".[63]Thesecond seasonwas released in 2023.
https://en.wikipedia.org/wiki/Foundation_(book_series)
Positivismis aphilosophical schoolthat holds that all genuine knowledge is eithertrue by definitionorpositive– meaninga posteriorifacts derived byreasonandlogicfromsensory experience.[1][2]Otherways of knowing, such asintuition,introspection, orreligious faith, are rejected orconsidered meaningless. Although the positivist approach has been a recurrent theme in the history of Western thought, modern positivism was first articulated in the early 19th century byAuguste Comte.[3][4]His school ofsociologicalpositivism holds that society, like the physical world, operates according toscientific laws.[5]After Comte, positivist schools arose inlogic,psychology,economics,historiography, and other fields of thought. Generally, positivists attempted to introduce scientific methods to their respective fields. Since the turn of the 20th century, positivism, although still popular, has declined under criticism within the social sciences byantipositivistsandcritical theorists, among others, for its allegedscientism,reductionism, overgeneralizations, and methodological limitations. Positivism also exerted an unusual influence onKardecism.[6][7][8] The English nounpositivismin this meaning was imported in the 19th century from the French wordpositivisme, derived frompositifin its philosophical sense of 'imposed on the mind by experience'. The corresponding adjective (Latin:positivus) has been used in a similar sense to discuss law (positive lawcompared tonatural law) since the time ofChaucer.[9] Kieran Eganargues that positivism can be traced to the philosophy side of whatPlatodescribed as the quarrel betweenphilosophyandpoetry, later reformulated byWilhelm Diltheyas a quarrel between thenatural sciences(German:Naturwissenschaften) and thehuman sciences(Geisteswissenschaften).[10][11][12] In the early nineteenth century, massive advances in the natural sciences encouraged philosophers to apply scientific methods to other fields. Thinkers such asHenri de Saint-Simon,Pierre-Simon LaplaceandAuguste Comtebelieved that thescientific method, the circular dependence of theory and observation, must replacemetaphysicsin thehistoryof thought.[13] Auguste Comte(1798–1857) first described the epistemological perspective of positivism inThe Course in Positive Philosophy, a series of texts published between 1830 and 1842. These texts were followed in 1844 byA General View of Positivism(published in French 1848, English in 1865). The first three volumes of theCoursedealt chiefly with the physical sciences already in existence (mathematics,astronomy,physics,chemistry,biology), whereas the latter two emphasized the inevitable coming ofsocial science. Observing the circular dependence of theory and observation in science, and classifying the sciences in this way, Comte may be regarded as the firstphilosopher of sciencein the modern sense of the term.[14][15]For him, the physical sciences had necessarily to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. HisView of Positivismtherefore set out to define the empirical goals of sociological method: The most important thing to determine was the natural order in which the sciences stand—not how they can be made to stand, but how they must stand, irrespective of the wishes of any one. ... This Comte accomplished by taking as the criterion of the position of each the degree of what he called "positivity," which is simply the degree to which the phenomena can be exactly determined. 
This, as may be readily seen, is also a measure of their relative complexity, since the exactness of a science is in inverse proportion to its complexity. The degree of exactness or positivity is, moreover, that to which it can be subjected to mathematical demonstration, and therefore mathematics, which is not itself a concrete science, is the general gauge by which the position of every science is to be determined. Generalizing thus, Comte found that there were five great groups of phenomena of equal classificatory value but of successively decreasing positivity. To these he gave the names astronomy, physics, chemistry, biology, and sociology. Comte offered anaccount of social evolution, proposing that society undergoes three phases in its quest for the truth according to a general "law of three stages". Comte intended to develop a secular-scientific ideology in the wake of Europeansecularisation. Comte's stages were (1) thetheological, (2) themetaphysical, and (3) thepositive.[17]The theological phase of man was based on whole-hearted belief in all things with reference toGod. God, Comte says, had reigned supreme over human existence pre-Enlightenment. Humanity's place in society was governed by its association with the divine presences and with the church. The theological phase deals with humankind's accepting the doctrines of the church (or place of worship) rather than relying on its rational powers to explore basic questions about existence. It dealt with the restrictions put in place by the religious organization at the time and the total acceptance of any "fact" adduced for society to believe.[18] Comte describes the metaphysical phase of humanity as the time since theEnlightenment, a time steeped in logicalrationalism, to the time right after theFrench Revolution. This second phase states that the universal rights of humanity are most important. The central idea is that humanity is invested with certain rights that must be respected. In this phase, democracies and dictators rose and fell in attempts to maintain the innate rights of humanity.[19] The final stage of the trilogy of Comte's universal law is the scientific, or positive, stage. The central idea of this phase is that individual rights are more important than the rule of any one person. Comte stated that the idea of humanity's ability to govern itself makes this stage inherently different from the rest. There is no higher power governing the masses and the intrigue of any one person can achieve anything based on that individual's free will. The third principle is most important in the positive stage.[20]Comte calls these three phases the universal rule in relation to society and its development. Neither the second nor the third phase can be reached without the completion and understanding of the preceding stage. All stages must be completed in progress.[21] Comte believed that the appreciation of the past and the ability to build on it towards the future was key in transitioning from the theological and metaphysical phases. The idea of progress was central to Comte's new science, sociology. Sociology would "lead to the historical consideration of every science" because "the history of one science, including pure political history, would make no sense unless it was attached to the study of the general progress of all of humanity".[22]As Comte would say: "from science comes prediction; from prediction comes action".[23]It is a philosophy of human intellectual development that culminated in science. 
The irony of this series of phases is that though Comte attempted to prove that human development has to go through these three stages, it seems that the positivist stage is far from becoming a realization. This is due to two truths: The positivist phase requires having a complete understanding of the universe and world around us and requires that society should never know if it is in this positivist phase.Anthony Giddensargues that since humanity constantly uses science to discover and research new things, humanity never progresses beyond the second metaphysical phase.[21] Comte's fame today owes in part toEmile Littré, who foundedThe Positivist Reviewin 1867. As an approach to thephilosophy of history, positivism was appropriated by historians such asHippolyte Taine. Many of Comte's writings were translated into English by theWhigwriter,Harriet Martineau, regarded by some as the first female sociologist. Debates continue to rage as to how much Comte appropriated from the work of his mentor, Saint-Simon.[24]He was nevertheless influential: Brazilian thinkers turned to Comte's ideas about training a scientific elite in order to flourish in the industrialization process.Brazil's nationalmotto,Ordem e Progresso("Order and Progress") was taken from the positivism motto, "Love as principle, order as the basis, progress as the goal", which was also influential inPoland.[citation needed] In later life, Comte developed a 'religion of humanity' for positivist societies in order to fulfil the cohesive function once held by traditional worship. In 1849, he proposed acalendar reformcalled the 'positivist calendar'. For close associateJohn Stuart Mill, it was possible to distinguish between a "good Comte" (the author of theCourse in Positive Philosophy) and a "bad Comte" (the author of the secular-religioussystem).[14]Thesystemwas unsuccessful but met with the publication ofDarwin'sOn the Origin of Speciesto influence the proliferation of varioussecular humanistorganizations in the 19th century, especially through the work of secularists such asGeorge HolyoakeandRichard Congreve. Although Comte's English followers, includingGeorge Eliotand Harriet Martineau, for the most part rejected the full gloomy panoply of his system, they liked the idea of a religion of humanity and his injunction to "vivre pour autrui" ("live for others", from which comes the word "altruism").[25] The early sociology ofHerbert Spencercame about broadly as a reaction to Comte; writing after various developments in evolutionary biology, Spencer attempted (in vain) to reformulate the discipline in what we might now describe associally Darwinisticterms.[citation needed] Within a few years, other scientific and philosophical thinkers began creating their own definitions for positivism. These includedÉmile Zola,Emile Hennequin,Wilhelm Scherer, andDimitri Pisarev.Fabien Magninwas the first working-class adherent to Comte's ideas, and became the leader of a movement known as "Proletarian Positivism". Comte appointed Magnin as his successor as president of the Positive Society in the event of Comte's death. Magnin filled this role from 1857 to 1880, when he resigned.[26]Magnin was in touch with the English positivistsRichard CongreveandEdward Spencer Beesly. 
He established theCercle des prolétaires positivistesin 1863 which was affiliated to theFirst International.Eugène Sémériewas a psychiatrist who was also involved in the Positivist movement, setting up a positivist club in Paris after the foundation of theFrench Third Republicin 1870. He wrote: "Positivism is not only a philosophical doctrine, it is also a political party which claims to reconcile order—the necessary basis for all social activity—with Progress, which is its goal."[27] The modern academic discipline of sociology began with the work ofÉmile Durkheim(1858–1917). While Durkheim rejected much of the details of Comte's philosophy, he retained and refined its method, maintaining that the social sciences are a logical continuation of the natural ones into the realm of human activity, and insisting that they may retain the same objectivity, rationalism, and approach to causality.[28]Durkheim set up the first European department of sociology at theUniversity of Bordeauxin 1895, publishing hisRules of the Sociological Method(1895).[29]In this text he argued: "[o]ur main goal is to extend scientific rationalism to human conduct... What has been called our positivism is but a consequence of this rationalism."[16] Durkheim's seminalmonograph,Suicide(1897), a case study of suicide rates amongstCatholicandProtestantpopulations, distinguished sociological analysis frompsychologyor philosophy.[30]By carefully examining suicide statistics in different police districts, he attempted to demonstrate that Catholic communities have a lower suicide rate than Protestants, something he attributed to social (as opposed to individual or psychological) causes. He developed the notion of objectivesui generis"social facts" to delineate a unique empirical object for the science of sociology to study.[28]Through such studies, he posited, sociology would be able to determine whether a given society is 'healthy' or 'pathological', and seek social reform to negate organic breakdown or "socialanomie". Durkheim described sociology as the "science ofinstitutions, their genesis and their functioning".[31] David Ashley and David M. Orenstein have alleged, in a textbook published byPearson Education, that accounts of Durkheim's positivism are possibly exaggerated and oversimplified; Comte was the only major sociological thinker to postulate that the social realm may be subject to scientific analysis in exactly the same way as natural science, whereas Durkheim saw a far greater need for a distinctly sociological scientific methodology. His lifework was fundamental in the establishment of practicalsocial researchas we know it today—techniques which continue beyond sociology and form the methodological basis of othersocial sciences, such aspolitical science, as well ofmarket researchand other fields.[32] Inhistoriography, historical or documentary positivism is the belief that historians should pursue theobjective truthof the past by allowinghistorical sourcesto "speak for themselves", without additional interpretation.[33][34]In the words of the French historianFustel de Coulanges, as a positivist, "It is not I who am speaking, but history itself". 
The heavy emphasis placed by historical positivists on documentary sources led to the development of methods ofsource criticism, which seek to expungebiasand uncover original sources in their pristine state.[33] The origin of the historical positivist school is particularly associated with the 19th-century German historianLeopold von Ranke, who argued that the historian should seek to describe historical truth "wie es eigentlich gewesen ist" ("as it actually was")—though subsequent historians of the concept, such asGeorg Iggers, have argued that its development owed more to Ranke's followers than Ranke himself.[35] Historical positivism was critiqued in the 20th century by historians and philosophers of history from various schools of thought, includingErnst KantorowiczinWeimar Germany—who argued that "positivism ... faces the danger of becomingRomanticwhen it maintains that it is possible to find theBlue Flowerof truth without preconceptions"—andRaymond AronandMichel Foucaultin postwar France, who both posited that interpretations are always ultimately multiple and there is no final objective truth to recover.[36][34][37]In his posthumously published 1946The Idea of History, the English historianR. G. Collingwoodcriticized historical positivism for conflating scientific facts with historical facts, which are alwaysinferredand cannot beconfirmed by repetition, and argued that its focus on the "collection of facts" had given historians "unprecedented mastery over small-scale problems", but "unprecedented weakness in dealing with large-scale problems".[38] Historicistarguments against positivist approaches in historiography include thathistorydiffers from sciences likephysicsandethologyinsubject matterandmethod;[39][40][41]that much of what history studies is nonquantifiable, and therefore to quantify is to lose in precision; and that experimental methods and mathematical models do not generally apply to history, so that it is not possible to formulate general (quasi-absolute) laws in history.[41] Inpsychologythe positivist movement was influential in the development ofoperationalism. The 1927 philosophy of science bookThe Logic of Modern Physicsin particular, which was originally intended for physicists, coined the termoperational definition, which went on to dominate psychological method for the whole century.[42] Ineconomics, practicing researchers tend to emulate the methodological assumptions of classical positivism, but only in ade factofashion: the majority of economists do not explicitly concern themselves with matters of epistemology.[43]Economic thinkerFriedrich Hayek(see "Law, Legislation and Liberty") rejected positivism in the social sciences as hopelessly limited in comparison to evolved and divided knowledge. For example, much (positivist) legislation falls short in contrast to pre-literate or incompletely defined common or evolved law. Injurisprudence, "legal positivism" essentially refers to the rejection ofnatural law; thus its common meaning with philosophical positivism is somewhat attenuated and in recent generations generally emphasizes the authority of human political structures as opposed to a "scientific" view of law. Logical positivism(later and more accurately called logical empiricism) is a school of philosophy that combinesempiricism, the idea that observational evidence is indispensable for knowledge of the world, with a version ofrationalism, the idea that our knowledge includes a component that is not derived from observation. 
Logical positivism grew from the discussions of a group called the "First Vienna Circle", which gathered at theCafé CentralbeforeWorld War I. After the warHans Hahn, a member of that early group, helped bringMoritz Schlickto Vienna. Schlick'sVienna Circle, along withHans Reichenbach'sBerlin Circle, propagated the new doctrines more widely in the 1920s and early 1930s. It wasOtto Neurath's advocacy that made the movement self-conscious and more widely known. A 1929 pamphlet written by Neurath, Hahn, andRudolf Carnapsummarized the doctrines of the Vienna Circle at that time. These included the opposition to allmetaphysics, especiallyontologyandsynthetica prioripropositions; the rejection of metaphysics not as wrong but as meaningless (i.e., not empirically verifiable); a criterion of meaning based onLudwig Wittgenstein's early work (which he himself later set out to refute); the idea that all knowledge should be codifiable in a single standard language of science; and above all the project of "rational reconstruction," in which ordinary-language concepts were gradually to be replaced by more precise equivalents in that standard language. However, the project is widely considered to have failed.[44][45] After moving to the United States, Carnap proposed a replacement for the earlier doctrines in hisLogical Syntax of Language. This change of direction, and the somewhat differing beliefs of Reichenbach and others, led to a consensus that the English name for the shared doctrinal platform, in its American exile from the late 1930s, should be "logical empiricism."[citation needed]While the logical positivist movement is now considered dead, it has continued to influence philosophical development.[46] Historically, positivism has been criticized for itsreductionism, i.e., for contending that all "processes are reducible to physiological, physical or chemical events," "social processes are reducible to relationships between and actions of individuals," and that "biological organisms are reducible to physical systems."[47] The consideration that laws in physics may not be absolute but relative, and, if so, this might be even more true of social sciences, was stated, in different terms, byG. B. 
Vicoin 1725.[40][48]Vico, in contrast to the positivist movement, asserted the superiority of the science of the human mind (the humanities, in other words), on the grounds that natural sciences tell us nothing about the inward aspects of things.[49] Wilhelm Diltheyfought strenuously against the assumption that only explanations derived from science are valid.[12]He reprised Vico's argument that scientific explanations do not reach the inner nature of phenomena[12]and it is humanisticknowledgethat gives us insight into thoughts, feelings and desires.[12]Dilthey was in part influenced by thehistorismofLeopold von Ranke(1795–1886).[12] The contesting views over positivism are reflected both in older debates (see thePositivism dispute) and current ones over the proper role of science in the public sphere.Public sociology—especially as described byMichael Burawoy—argues that sociologists should use empirical evidence to display the problems of society so they might be changed.[50] At the turn of the 20th century, the first wave of German sociologists formally introduced methodological antipositivism, proposing that research should concentrate on human culturalnorms,values,symbols, and social processes viewed from asubjectiveperspective.Max Weber, one such thinker, argued that while sociology may be loosely described as a 'science' because it is able to identify causal relationships (especially amongideal types), sociologists should seek relationships that are not as "ahistorical, invariant, or generalizable" as those pursued by natural scientists.[51][52]Weber regarded sociology as the study ofsocial action, using critical analysis andverstehentechniques. The sociologistsGeorg Simmel,Ferdinand Tönnies,George Herbert Mead, andCharles Cooleywere also influential in the development of sociological antipositivism, whilstneo-Kantianphilosophy,hermeneutics, andphenomenologyfacilitated the movement in general. In the mid-twentieth century, several important philosophers and philosophers of science began to critique the foundations of logical positivism. In his 1934 workThe Logic of Scientific Discovery,Karl Popperargued againstverificationism. A statement such as "all swans are white" cannot actually be empirically verified, because it is impossible to know empirically whether all swans have been observed. Instead, Popper argued that at best an observation canfalsifya statement (for example, observing a black swan would prove that not all swans are white).[53]Popper also held that scientific theories talk about how the world really is (not about phenomena or observations experienced by scientists), and critiqued the Vienna Circle in hisConjectures and Refutations.[54][55]W. V. O. QuineandPierre Duhemwent even further. TheDuhem–Quine thesisstates that it is impossible to experimentally test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses); thus, unambiguous scientific falsifications are also impossible.[56]Thomas Kuhn, in his 1962 bookThe Structure of Scientific Revolutions, put forward his theory of paradigm shifts. He argued that it is not simply individual theories but wholeworldviewsthat must occasionally shift in response to evidence.[57][53] Together, these ideas led to the development ofcritical rationalismandpostpositivism.[58]Postpositivism is not a rejection of thescientific method, but rather a reformation of positivism to meet these critiques. 
It reintroduces the basic assumptions of positivism: the possibility and desirability ofobjective truth, and the use of experimental methodology. Postpositivism of this type is described insocial scienceguides to research methods.[59]Postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed.[60]Postpositivists pursue objectivity by recognizing the possible effects of biases.[60][53][61]While positivists emphasizequantitativemethods, postpositivists consider bothquantitativeandqualitativemethods to be valid approaches.[61] In the early 1960s, thepositivism disputearose between the critical theorists (see below) and the critical rationalists over the correct solution to the value judgment dispute (Werturteilsstreit). While both sides accepted that sociology cannot avoid a value judgement that inevitably influences subsequent conclusions, the critical theorists accused the critical rationalists of being positivists; specifically, of asserting that empirical questions can be severed from their metaphysical heritage and refusing to ask questions that cannot be answered with scientific methods. This contributed to what Karl Popper termed the "Popper Legend", a misconception among critics and admirers of Popper that he was, or identified himself as, a positivist.[62] AlthoughKarl Marx's theory ofhistorical materialismdrew upon positivism, the Marxist tradition would also go on to influence the development of antipositivistcritical theory.[63]Critical theoristJürgen Habermascritiqued pureinstrumental rationality(in its relation to the cultural"rationalisation"of the modern West) as a form ofscientism, or science "asideology".[64]He argued that positivism may be espoused by "technocrats" who believe in the inevitability ofsocial progressthrough science and technology.[65][66]New movements, such ascritical realism, have emerged in order to reconcile postpositivist aims with various so-called 'postmodern' perspectives on the social acquisition of knowledge. Max Horkheimercriticized the classic formulation of positivism on two grounds. First, he claimed that it falsely represented human social action.[67]The first criticism argued that positivism systematically failed to appreciate the extent to which the so-called social facts it yielded did not exist 'out there', in the objective world, but were themselves a product of socially and historically mediated human consciousness.[67]Positivism ignored the role of the 'observer' in the constitution of social reality and thereby failed to consider the historical and social conditions affecting the representation of social ideas.[67]Positivism falsely represented the object of study byreifyingsocial reality as existing objectively and independently of the labour that actually produced those conditions.[67]Secondly, he argued, representation of social reality produced by positivism was inherently and artificially conservative, helping to support the status quo, rather than challenging it.[67]This character may also explain the popularity of positivism in certain political circles. Horkheimer argued, in contrast, that critical theory possessed a reflexive element lacking in the positivistic traditional theory.[67] Some scholars today hold the beliefs critiqued in Horkheimer's work, but since the time of his writing critiques of positivism, especially from philosophy of science, have led to the development ofpostpositivism. 
This philosophy greatly relaxes the epistemological commitments of logical positivism and no longer claims a separation between the knower and the known. Rather than dismissing the scientific project outright, postpositivists seek to transform and amend it, though the exact extent of their affinity for science varies vastly. For example, some postpositivists accept the critique that observation is always value-laden, but argue that the best values to adopt for sociological observation are those of science: skepticism, rigor, and modesty. Just as some critical theorists see their position as a moral commitment to egalitarian values, these postpositivists see their methods as driven by a moral commitment to these scientific values. Such scholars may see themselves as either positivists or antipositivists.[68] During the later twentieth century, positivism began to fall out of favor with scientists as well. Later in his career, German theoretical physicistWerner Heisenberg, Nobel laureate for his pioneering work inquantum mechanics, distanced himself from positivism: The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can any one conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies.[69] In the early 1970s, urbanists of the quantitative school likeDavid Harveystarted to question the positivist approach itself, saying that the arsenal of scientific theories and methods developed so far in their camp were "incapable of saying anything of depth and profundity" on the real problems of contemporary cities.[70] According to the Catholic Encyclopedia, Positivism has also come under fire on religious and philosophical grounds, whose proponents state that truth begins insense experience, but does not end there. Positivism fails to prove that there are not abstract ideas, laws, and principles, beyond particular observable facts and relationships and necessary principles, or that we cannot know them. Nor does it prove that material and corporeal things constitute the whole order of existing beings, and that our knowledge is limited to them. According to positivism, our abstract concepts or general ideas are mere collective representations of the experimental order—for example; the idea of "man" is a kind of blended image of all the men observed in our experience.[71]This runs contrary to aPlatonicorChristianideal, where an idea can be abstracted from any concrete determination, and may be applied identically to an indefinite number of objects of the same class.[citation needed]From the idea's perspective, Platonism is more precise. Defining an idea as a sum of collective images is imprecise and more or less confused, and becomes more so as the collection represented increases. An idea defined explicitly always remains clear. Other new movements, such ascritical realism, have emerged in opposition to positivism. Critical realism seeks to reconcile the overarching aims of social science with postmodern critiques.Experientialism, which arose with second generation cognitive science, asserts that knowledge begins and ends with experience itself.[72][73]In other words, it rejects the positivist assertion that a portion of human knowledge isa priori. 
Echoes of the "positivist" and "antipositivist" debate persist today, though this conflict is hard to define. Authors writing in different epistemological perspectives do not phrase their disagreements in the same terms and rarely actually speak directly to each other.[74]To complicate the issues further, few practising scholars explicitly state their epistemological commitments, and their epistemological position thus has to be guessed from other sources such as choice of methodology or theory. However, no perfect correspondence between these categories exists, and many scholars critiqued as "positivists" are actuallypostpositivists.[75]One scholar has described this debate in terms of the social construction of the "other", with each side defining the other by what it isnotrather than what itis, and then proceeding to attribute far greater homogeneity to their opponents than actually exists.[74]Thus, it is better to understand this not as a debate but as two different arguments: the "antipositivist" articulation of a socialmeta-theorywhich includes a philosophical critique ofscientism, and "positivist" development of a scientific research methodology for sociology with accompanying critiques of thereliabilityandvalidityof work that they see as violating such standards.Strategic positivismaims to bridge these two arguments. While most social scientists today are not explicit about their epistemological commitments, articles in top American sociology and political science journals generally follow a positivist logic of argument.[76][77]It can be thus argued that "natural science and social science [research articles] can therefore be regarded with a good deal of confidence as members of the same genre".[76][clarification needed] In contemporary social science, strong accounts of positivism have long since fallen out of favour. Practitioners of positivism today acknowledge in far greater detailobserver biasand structural limitations. Modern positivists generally eschew metaphysical concerns in favour of methodological debates concerning clarity,replicability,reliabilityandvalidity.[78]This positivism is generally equated with "quantitative research" and thus carries no explicit theoretical or philosophical commitments. The institutionalization of this kind of sociology is often credited toPaul Lazarsfeld,[28]who pioneered large-scale survey studies and developed statistical techniques for analyzing them. This approach lends itself to whatRobert K. Mertoncalledmiddle-range theory: abstract statements that generalize from segregated hypotheses and empirical regularities rather than starting with an abstract idea of a social whole.[79] In the original Comtean usage, the term "positivism" roughly meant the use of scientific methods to uncover the laws according to which both physical and human events occur, while "sociology" was the overarching science that would synthesize all such knowledge for the betterment of society. "Positivism is a way of understanding based on science"; people don't rely on the faith in God but instead on the science behind humanity. "Antipositivism" formally dates back to the start of the twentieth century, and is based on the belief that natural and human sciences are ontologically and epistemologically distinct. 
Neither of these terms is used any longer in this sense.[28]There are no fewer than twelve distinct epistemologies that are referred to as positivism.[80]Many of these approaches do not self-identify as "positivist", some because they themselves arose in opposition to older forms of positivism, and some because the label has over time become a term of abuse[28]by being mistakenly linked with a theoreticalempiricism. The extent of antipositivist criticism has also become broad, with many philosophies broadly rejecting the scientifically based social epistemology and other ones only seeking to amend it to reflect 20th century developments in the philosophy of science. However, positivism (understood as the use of scientific methods for studying society) remains the dominant approach to both the research and the theory construction in contemporary sociology, especially in the United States.[28] The majority of articles published in leading American sociology and political science journals today are positivist (at least to the extent of beingquantitativerather thanqualitative).[76][77]This popularity may be because research utilizing positivist quantitative methodologies holds a greater prestige[clarification needed]in the social sciences than qualitative work; quantitative work is easier to justify, as data can be manipulated to answer any question.[81][need quotation to verify]Such research is generally perceived as being more scientific and more trustworthy, and thus has a greater impact on policy and public opinion (though such judgments are frequently contested by scholars doing non-positivist work).[81][need quotation to verify] The key features of positivism as of the 1950s, as defined in the "received view",[82]are: Stephen Hawkingwas a recent high-profile advocate of positivism in the physical sciences. InThe Universe in a Nutshell(p. 31) he wrote: Any sound scientific theory, whether of time or of any other concept, should in my opinion be based on the most workable philosophy of science: the positivist approach put forward byKarl Popperand others. According to this way of thinking, a scientific theory is a mathematical model that describes and codifies the observations we make. A good theory will describe a large range of phenomena on the basis of a few simple postulates and will make definite predictions that can be tested. ... If one takes the positivist position, as I do, one cannot say what time actually is. All one can do is describe what has been found to be a very good mathematical model for time and say what predictions it makes.
https://en.wikipedia.org/wiki/Positivism
Statistics(fromGerman:Statistik,orig."description of astate, a country"[1]) is the discipline that concerns the collection, organization, analysis, interpretation, and presentation ofdata.[2]In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with astatistical populationor astatistical modelto be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design ofsurveysandexperiments.[3] Whencensusdata (comprising every member of the target population) cannot be collected,statisticianscollect data by developing specific experiment designs and surveysamples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. Anexperimental studyinvolves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, anobservational studydoes not involve experimental manipulation. Two main statistical methods are used indata analysis:descriptive statistics, which summarize data from a sample usingindexessuch as themeanorstandard deviation, andinferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation).[4]Descriptive statistics are most often concerned with two sets of properties of adistribution(sample or population):central tendency(orlocation) seeks to characterize the distribution's central or typical value, whiledispersion(orvariability) characterizes the extent to which members of the distribution depart from its center and each other. Inferences made usingmathematical statisticsemploy the framework ofprobability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the collection of data leading to atest of the relationshipbetween two statistical data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, analternativeto an idealizednull hypothesisof no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized:Type I errors(null hypothesis is rejected when it is in fact true, giving a "false positive") andType II errors(null hypothesis fails to be rejected when it is in fact false, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis.[4] Statistical measurement processes are also prone to error in regards to the data that they generate. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also occur. The presence ofmissing dataorcensoringmay result in biased estimates and specific techniques have been developed to address these problems. 
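To make the two descriptive summaries mentioned above concrete, the following is a minimal sketch, assuming only Python's standard library; the sample values are invented purely for illustration.

```python
# Minimal sketch: descriptive summaries of a small, invented sample.
import statistics

sample = [2.3, 1.9, 2.8, 2.5, 2.1, 3.0, 2.4]  # hypothetical observations

location = statistics.mean(sample)   # central tendency (location)
spread = statistics.stdev(sample)    # dispersion (sample standard deviation)

print(f"mean = {location:.3f}, sample standard deviation = {spread:.3f}")
```

Inferential statistics would go a step further and ask what such summaries imply about the larger population from which the sample was drawn.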
"Statistics is both the science of uncertainty and the technology of extracting information from data." - featured in the International Encyclopedia of Statistical Science.[5] Statistics is the discipline that deals withdata, facts and figures with which meaningful information is inferred. Data may represent a numerical value, in form of quantitative data, or a label, as with qualitative data. Data may be collected, presented and summarised, in one of two methods called descriptive statistics. Two elementary summaries of data, singularly called a statistic, are the mean and dispersion. Whereas inferential statistics interprets data from a population sample to induce statements and predictions about a population.[6][7][5] Statistics is regarded as a body of science[8]or a branch of mathematics.[9]It is based on probability, a branch of mathematics that studies random events. Statistics is considered the science of uncertainty. This arises from the ways to cope with measurement and sampling error as well as dealing with uncertanties in modelling. Although probability and statistics was once paired together as a single subject, they are conceptually distinct from one another. The former is based on deducing answers to specific situations from a general theory of probability, meanwhile statistics induces statements about a population based on a data set. Statistics serves to bridge the gap between probability and applied mathematical fields.[10][5][11] Some consider statistics to be a distinctmathematical sciencerather than a branch of mathematics. While many scientific investigations make use of data, statistics is generally concerned with the use of data in the context of uncertainty and decision-making in the face of uncertainty.[12][13]Statistics is indexed at 62, a subclass of probability theory and stochastic processes, in the Mathematics Subject Classification.[14]Mathematical statistics is covered in the range 276-280 of subclass QA (science > mathematics) in the Library of Congress Classification.[15] The word statistics ultimately comes from the Latin word Status, meaning "situation" or "condition" in society, which in late Latin adopted the meaning "state". Derived from this, political scientist Gottfried Achenwall, coined the German word statistik (a summary of how things stand). In 1770, the term entered the English language through German and referred to the study of political arrangements. The term gained its modern meaning in the 1790s in John Sinclair's works.[16][17]In modern German, the term statistik is synonymous with mathematical statistics. The term statistic, in singular form, is used to describe a function that returns its value of the same name.[18] When full census data cannot be collected, statisticians collect sample data by developing specificexperiment designsandsurvey samples. Statistics itself also provides tools for prediction and forecasting throughstatistical models. To use a sample as a guide to an entire population, it is important that it truly represents the overall population. Representativesamplingassures that inferences and conclusions can safely extend from the sample to the population as a whole. A major problem lies in determining the extent that the sample chosen is actually representative. Statistics offers methods to estimate and correct for any bias within the sample and data collection procedures. 
There are also methods of experimental design that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population. Sampling theory is part of themathematical disciplineofprobability theory. Probability is used inmathematical statisticsto study thesampling distributionsofsample statisticsand, more generally, the properties ofstatistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method. The difference in point of view between classic probability theory and sampling theory is, roughly, that probability theory starts from the given parameters of a total population todeduceprobabilities that pertain to samples. Statistical inference, however, moves in the opposite direction—inductively inferringfrom samples to the parameters of a larger or total population. A common goal for a statistical research project is to investigatecausality, and in particular to draw a conclusion on the effect of changes in the values of predictors orindependent variables on dependent variables. There are two major types of causal statistical studies:experimental studiesandobservational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable are observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additionalmeasurements with different levelsusing the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involveexperimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated. While the tools of data analysis work best on data fromrandomized studies, they are also applied to other kinds of data—likenatural experimentsandobservational studies[19]—for which a statistician would use a modified, more structured estimation method (e.g.,difference in differences estimationandinstrumental variables, among many others) that produceconsistent estimators. The basic steps of a statistical experiment are: Experiments on human behavior have special concerns. The famousHawthorne studyexamined changes to the working environment at the Hawthorne plant of theWestern Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of theassembly lineworkers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked if the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of acontrol groupandblindness. TheHawthorne effectrefers to finding that an outcome (in this case, worker productivity) changed due to observation itself. Those in the Hawthorne study became more productive not because the lighting was changed but because they were being observed.[20] An example of an observational study is one that explores the association between smoking and lung cancer. 
This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through acohort study, and then look for the number of cases of lung cancer in each group.[21]Acase-control studyis another type of observational study in which people with and without the outcome of interest (e.g. lung cancer) are invited to participate and their exposure histories are collected. Various attempts have been made to produce a taxonomy oflevels of measurement. The psychophysicistStanley Smith Stevensdefined nominal, ordinal, interval, and ratio scales. Nominal measurements do not have meaningful rank order among values, and permit any one-to-one (injective) transformation. Ordinal measurements have imprecise differences between consecutive values, but have a meaningful order to those values, and permit any order-preserving transformation. Interval measurements have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case withlongitudeandtemperaturemeasurements inCelsiusorFahrenheit), and permit any linear transformation. Ratio measurements have both a meaningful zero value and the distances between different measurements defined, and permit any rescaling transformation. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together ascategorical variables, whereas ratio and interval measurements are grouped together asquantitative variables, which can be eitherdiscreteorcontinuous, due to their numerical nature. Such distinctions can often be loosely correlated withdata typein computer science, in that dichotomous categorical variables may be represented with theBoolean data type, polytomous categorical variables with arbitrarily assignedintegersin theintegral data type, and continuous variables with thereal data typeinvolvingfloating-point arithmetic. But the mapping of computer science data types to statistical data types depends on which categorization of the latter is being implemented. Other categorizations have been proposed. For example, Mosteller and Tukey (1977)[22]distinguished grades, ranks, counted fractions, counts, amounts, and balances. Nelder (1990)[23]described continuous counts, continuous ratios, count ratios, and categorical modes of data. (See also: Chrisman (1998),[24]van den Berg (1991).[25]) The issue of whether or not it is appropriate to apply different kinds of statistical methods to data obtained from different kinds of measurement procedures is complicated by issues concerning the transformation of variables and the precise interpretation of research questions. "The relationship between the data and what they describe merely reflects the fact that certain kinds of statistical statements may have truth values which are not invariant under some transformations. Whether or not a transformation is sensible to contemplate depends on the question one is trying to answer."[26]: 82 Adescriptive statistic(in thecount nounsense) is asummary statisticthat quantitatively describes or summarizes features of a collection ofinformation,[27]whiledescriptive statisticsin themass nounsense is the process of using and analyzing those statistics. 
Descriptive statistics is distinguished from inferential statistics (or inductive statistics) in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent.[28] Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution.[29] Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics: descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population.[30] Consider independent identically distributed (IID) random variables with a given probability distribution: standard statistical inference and estimation theory defines a random sample as the random vector given by the column vector of these IID variables.[31] The population being examined is described by a probability distribution that may have unknown parameters. A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters. Consider now a function of the unknown parameter: an estimator is a statistic used to estimate such a function. Commonly used estimators include the sample mean, the unbiased sample variance and the sample covariance. A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter, is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value. Between two estimators of a given parameter, the one with lower mean squared error is said to be more efficient. Furthermore, an estimator is said to be unbiased if its expected value is equal to the true value of the unknown parameter being estimated, and asymptotically unbiased if its expected value converges in the limit to the true value of that parameter. Other desirable properties for estimators include UMVUE estimators, which have the lowest variance for all possible values of the parameter to be estimated (usually an easier property to verify than efficiency), and consistent estimators, which converge in probability to the true value of that parameter. This still leaves the question of how to obtain estimators in a given situation and carry out the computation; several methods have been proposed: the method of moments, the maximum likelihood method, the least squares method and the more recent method of estimating equations. Interpretation of statistical information can often involve the development of a null hypothesis, which is usually (but not necessarily) that no relationship exists among variables or that no change occurred over time.[32][33] The best illustration for a novice is the predicament encountered by a criminal trial. The null hypothesis, H0, asserts that the defendant is innocent, whereas the alternative hypothesis, H1, asserts that the defendant is guilty. The indictment comes because of suspicion of guilt. H0 (the status quo) stands in opposition to H1 and is maintained unless H1 is supported by evidence "beyond a reasonable doubt". However, "failure to reject H0" in this case does not imply innocence, but merely that the evidence was insufficient to convict.
So the jury does not necessarilyacceptH0butfails to rejectH0. While one can not "prove" a null hypothesis, one can test how close it is to being true with apower test, which tests fortype II errors. Whatstatisticianscall analternative hypothesisis simply a hypothesis that contradicts the null hypothesis. Working from anull hypothesis, two broad categories of error are recognized: Standard deviationrefers to the extent to which individual observations in a sample differ from a central value, such as the sample or population mean, whileStandard errorrefers to an estimate of difference between sample mean and population mean. Astatistical erroris the amount by which an observation differs from itsexpected value. Aresidualis the amount an observation differs from the value the estimator of the expected value assumes on a given sample (also called prediction). Mean squared erroris used for obtainingefficient estimators, a widely used class of estimators.Root mean square erroris simply the square root of mean squared error. Many statistical methods seek to minimize theresidual sum of squares, and these are called "methods of least squares" in contrast toLeast absolute deviations. The latter gives equal weight to small and big errors, while the former gives more weight to large errors. Residual sum of squares is alsodifferentiable, which provides a handy property for doingregression. Least squares applied tolinear regressionis calledordinary least squaresmethod and least squares applied tononlinear regressionis callednon-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed inpolynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve. Measurement processes that generate statistical data are also subject to error. Many of these errors are classified asrandom(noise) orsystematic(bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also be important. The presence ofmissing dataorcensoringmay result inbiased estimatesand specific techniques have been developed to address these problems.[34] Most studies only sample part of a population, so results do not fully represent the whole population. Any estimates obtained from the sample only approximate the population value.Confidence intervalsallow statisticians to express how closely the sample estimate matches the true value in the whole population. Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This doesnotimply that the probability that the true value is in the confidence interval is 95%. From thefrequentistperspective, such a claim does not even make sense, as the true value is not arandom variable. Either the true value is or is not within the given interval. 
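The repeated-sampling reading of a 95% confidence interval can be illustrated with a short simulation. The sketch below is a minimal example with arbitrarily chosen population parameters and sample size (it assumes a normal population and uses a fixed t critical value for ten observations); it is not a general-purpose implementation.

```python
# Illustrative simulation of confidence-interval coverage.
# Population parameters, sample size, and trial count are arbitrary choices.
import random
import statistics

def t_interval_95(data, t_crit=2.262):
    """95% CI for the mean using a fixed t critical value (df = 9 here)."""
    m = statistics.mean(data)
    se = statistics.stdev(data) / len(data) ** 0.5   # standard error of the mean
    return m - t_crit * se, m + t_crit * se

random.seed(0)
true_mean, true_sd, n, trials = 10.0, 2.0, 10, 10_000
covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    lo, hi = t_interval_95(sample)
    covered += lo <= true_mean <= hi

print(f"Empirical coverage: {covered / trials:.3f}")   # close to 0.95
```

Any single computed interval either covers the true mean of 10.0 or it does not; the 95% figure describes how often the procedure succeeds across repeated samples.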
However, it is true that, before any data are sampled and given a plan for how to construct the confidence interval, the probability is 95% that the yet-to-be-calculated interval will cover the true value: at this point, the limits of the interval are yet-to-be-observedrandom variables. One approach that does yield an interval that can be interpreted as having a given probability of containing the true value is to use acredible intervalfromBayesian statistics: this approach depends on a different way ofinterpreting what is meant by "probability", that is as aBayesian probability. In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds. Statistics rarely give a simple Yes/No type answer to the question under analysis. Interpretation often comes down to the level of statistical significance applied to the numbers and often refers to the probability of a value accurately rejecting the null hypothesis (sometimes referred to as thep-value). The standard approach[31]is to test a null hypothesis against an alternative hypothesis. Acritical regionis the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator does not belong to the critical region given that the alternative hypothesis is true. Thestatistical powerof a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false. Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably. Although in principle the acceptable level of statistical significance may be subject to debate, thesignificance levelis the largest p-value that allows the test to reject the null hypothesis. This test is logically equivalent to saying that the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as thetest statistic. Therefore, the smaller the significance level, the lower the probability of committing type I error. Some problems are usually associated with this framework (Seecriticism of hypothesis testing): Some well-known statisticaltestsand procedures are: An alternative paradigm to the popularfrequentistparadigm is to useBayes' theoremto update theprior probabilityof the hypotheses in consideration based on therelative likelihoodof the evidence gathered to obtain aposterior probability. Bayesian methods have been aided by the increase in available computing power to compute theposterior probabilityusing numerical approximation techniques likeMarkov Chain Monte Carlo. 
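As a minimal illustration of the Bayesian updating just described, the sketch below uses a conjugate Beta prior for an unknown binomial proportion, so the posterior has a closed form and no Markov Chain Monte Carlo is required; the prior parameters and observed counts are invented for illustration.

```python
# Minimal Bayesian update: Beta prior + binomial data -> Beta posterior.
# Prior parameters and observed counts are invented for illustration.

prior_a, prior_b = 2.0, 2.0      # Beta(2, 2) prior on an unknown proportion p
successes, failures = 14, 6      # observed data

post_a = prior_a + successes     # conjugacy: posterior is Beta(a + s, b + f)
post_b = prior_b + failures

post_mean = post_a / (post_a + post_b)
print(f"Posterior: Beta({post_a:.0f}, {post_b:.0f}), mean = {post_mean:.3f}")
```

For richer models, such as the hierarchical example that follows, the posterior rarely has a closed form, which is where numerical techniques like MCMC become necessary.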
For statistical modelling purposes, Bayesian models tend to be hierarchical. For example, one could model the video views of each Youtube channel as following a normal distribution N(μi, σi) with channel-dependent mean and variance, while modeling the channel means as themselves coming from a normal distribution representing the distribution of average video view counts per channel, and the variances as coming from another distribution. The concept of using a likelihood ratio can also be prominently seen in medical diagnostic testing. Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task. Mathematical statistics is the application of mathematics to statistics. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory.[1][7] All statistical analyses make use of at least some mathematics, and mathematical statistics can therefore be regarded as a fundamental component of general statistics.[8] Formal discussions on inference date back to the mathematicians and cryptographers of the Islamic Golden Age between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains one of the first uses of permutations and combinations, to list all possible Arabic words with and without vowels.[36] Al-Kindi's Manuscript on Deciphering Cryptographic Messages gave a detailed description of how to use frequency analysis to decipher encrypted messages, providing an early example of statistical inference for decoding. Ibn Adlan (1187–1268) later made an important contribution on the use of sample size in frequency analysis.[36] Although the term statistic was introduced by the Italian scholar Girolamo Ghilini in 1589 with reference to a collection of facts and information about a state, it was the German Gottfried Achenwall in 1749 who started using the term for a collection of quantitative information, in the modern sense of the science.[37][38] The earliest writing containing statistics in Europe dates back to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt.[39] Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and the natural and social sciences. The mathematical foundations of statistics developed from discussions concerning games of chance among mathematicians such as Gerolamo Cardano, Blaise Pascal, Pierre de Fermat, and Christiaan Huygens.
Although the idea of probability was already examined in ancient and medieval law and philosophy (such as the work ofJuan Caramuel),probability theoryas a mathematical discipline only took shape at the very end of the 17th century, particularly inJacob Bernoulli's posthumous workArs Conjectandi.[40]This was the first book where the realm of games of chance and the realm of the probable (which concerned opinion, evidence, and argument) were combined and submitted to mathematical analysis.[41]Themethod of least squareswas first described byAdrien-Marie Legendrein 1805, thoughCarl Friedrich Gausspresumably made use of it a decade earlier in 1795.[42] The modern field of statistics emerged in the late 19th and early 20th century in three stages.[43]The first wave, at the turn of the century, was led by the work ofFrancis GaltonandKarl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis, not just in science, but in industry and politics as well. Galton's contributions included introducing the concepts ofstandard deviation,correlation,regression analysisand the application of these methods to the study of the variety of human characteristics—height, weight and eyelash length among others.[44]Pearson developed thePearson product-moment correlation coefficient, defined as a product-moment,[45]themethod of momentsfor the fitting of distributions to samples and thePearson distribution, among many other things.[46]Galton and Pearson foundedBiometrikaas the first journal of mathematical statistics andbiostatistics(then calledbiometry), and the latter founded the world's first university statistics department atUniversity College London.[47] The second wave of the 1910s and 20s was initiated byWilliam Sealy Gosset, and reached its culmination in the insights ofRonald Fisher, who wrote the textbooks that were to define the academic discipline in universities around the world. Fisher's most important publications were his 1918 seminal paperThe Correlation between Relatives on the Supposition of Mendelian Inheritance(which was the first to use the statistical term,variance), his classic 1925 workStatistical Methods for Research Workersand his 1935The Design of Experiments,[48][49][50]where he developed rigorousdesign of experimentsmodels. He originated the concepts ofsufficiency,ancillary statistics,Fisher's linear discriminatorandFisher information.[51]He also coined the termnull hypothesisduring theLady tasting teaexperiment, which "is never proved or established, but is possibly disproved, in the course of experimentation".[52][53]In his 1930 bookThe Genetical Theory of Natural Selection, he applied statistics to variousbiologicalconcepts such asFisher's principle[54](whichA. W. F. Edwardscalled "probably the most celebrated argument inevolutionary biology") andFisherian runaway,[55][56][57][58][59][60]a concept insexual selectionabout a positive feedback runaway effect found inevolution. The final wave, which mainly saw the refinement and expansion of earlier developments, emerged from the collaborative work betweenEgon PearsonandJerzy Neymanin the 1930s. They introduced the concepts of "Type II" error,power of a testandconfidence intervals. 
Jerzy Neyman in 1934 showed that stratified random sampling was in general a better method of estimation than purposive (quota) sampling.[61] Today, statistical methods are applied in all fields that involve decision making, for making accurate inferences from a collated body of data and for making decisions in the face of uncertainty based on statistical methodology. The use of moderncomputershas expedited large-scale statistical computations and has also made possible new methods that are impractical to perform manually. Statistics continues to be an area of active research, for example on the problem of how to analyzebig data.[62] Applied statistics,sometimes referred to asStatistical science,[63]comprises descriptive statistics and the application of inferential statistics.[64][65]Theoretical statisticsconcerns the logical arguments underlying justification of approaches tostatistical inference, as well as encompassingmathematical statistics. Mathematical statistics includes not only the manipulation ofprobability distributionsnecessary for deriving results related to methods of estimation and inference, but also various aspects ofcomputational statisticsand thedesign of experiments. Statistical consultantscan help organizations and companies that do not have in-house expertise relevant to their particular questions. Machine learningmodels are statistical and probabilistic models that capture patterns in the data through use of computational algorithms. Statistics is applicable to a wide variety ofacademic disciplines, includingnaturalandsocial sciences, government, and business. Business statistics applies statistical methods ineconometrics,auditingand production and operations, including services improvement and marketing research.[66]A study of two journals in tropical biology found that the 12 most frequent statistical tests are:analysis of variance(ANOVA),chi-squared test,Student's t-test,linear regression,Pearson's correlation coefficient,Mann-Whitney U test,Kruskal-Wallis test,Shannon's diversity index,Tukey's range test,cluster analysis,Spearman's rank correlation coefficientandprincipal component analysis.[67] A typical statistics course covers descriptive statistics, probability, binomial andnormal distributions, test of hypotheses and confidence intervals,linear regression, and correlation.[68]Modern fundamental statistical courses for undergraduate students focus on correct test selection, results interpretation, and use offree statistics software.[67] The rapid and sustained increases in computing power starting from the second half of the 20th century have had a substantial impact on the practice of statistical science. Early statistical models were almost always from the class oflinear models, but powerful computers, coupled with suitable numericalalgorithms, caused an increased interest innonlinear models(such asneural networks) as well as the creation of new types, such asgeneralized linear modelsandmultilevel models. Increased computing power has also led to the growing popularity of computationally intensive methods based onresampling, such aspermutation testsand thebootstrap, while techniques such asGibbs samplinghave made use ofBayesian modelsmore feasible. The computer revolution has implications for the future of statistics with a new emphasis on "experimental" and "empirical" statistics. A large number of both general and special purposestatistical softwareare now available. 
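The bootstrap mentioned above is one of the simplest of these resampling methods to sketch: resample the observed data with replacement many times and examine the spread of the recomputed statistic. The example below is an illustrative, self-contained version with invented data, not a production implementation.

```python
# Minimal bootstrap sketch: approximate the sampling variability of the median.
# The data and number of resamples are invented for illustration.
import random
import statistics

random.seed(1)
data = [3.1, 4.7, 2.2, 5.9, 4.1, 3.8, 6.2, 2.9, 4.4, 5.1]

boot_medians = []
for _ in range(5000):
    resample = [random.choice(data) for _ in data]   # sample with replacement, same size
    boot_medians.append(statistics.median(resample))

boot_medians.sort()
lo = boot_medians[int(0.025 * len(boot_medians))]
hi = boot_medians[int(0.975 * len(boot_medians))]
print(f"Observed median: {statistics.median(data):.2f}")
print(f"Bootstrap 95% percentile interval: ({lo:.2f}, {hi:.2f})")
```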
Examples of available software capable of complex statistical computation include programs such asMathematica,SAS,SPSS, andR. In business, "statistics" is a widely usedmanagement-anddecision supporttool. It is particularly applied infinancial management,marketing management, andproduction,servicesandoperations management.[69][70]Statistics is also heavily used inmanagement accountingandauditing. The discipline ofManagement Scienceformalizes the use of statistics, and other mathematics, in business. (Econometricsis the application of statistical methods toeconomic datain order to give empirical content toeconomic relationships.) A typical "Business Statistics" course is intended forbusiness majors, and covers[71]descriptive statistics(collection, description, analysis, and summary of data), probability (typically thebinomialandnormal distributions), test of hypotheses and confidence intervals,linear regression, and correlation; (follow-on) courses may includeforecasting,time series,decision trees,multiple linear regression, and other topics frombusiness analyticsmore generally.Professional certification programs, such as theCFA, often include topics in statistics. Statistical techniques are used in a wide range of types of scientific and social research, including:biostatistics,computational biology,computational sociology,network biology,social science,sociologyandsocial research. Some fields of inquiry use applied statistics so extensively that they havespecialized terminology. These disciplines include: In addition, there are particular types of statistical analysis that have also developed their own specialised terminology and methodology: Statistics form a key basis tool in business and manufacturing as well. It is used to understand measurement systems variability, control processes (as instatistical process controlor SPC), for summarizing data, and to make data-driven decisions. Misuse of statisticscan produce subtle but serious errors in description and interpretation—subtle in the sense that even experienced professionals make such errors, and serious in the sense that they can lead to devastating decision errors. For instance, social policy, medical practice, and the reliability of structures like bridges all rely on the proper use of statistics. Even when statistical techniques are correctly applied, the results can be difficult to interpret for those lacking expertise. Thestatistical significanceof a trend in the data—which measures the extent to which a trend could be caused by random variation in the sample—may or may not agree with an intuitive sense of its significance. The set of basic statistical skills (and skepticism) that people need to deal with information in their everyday lives properly is referred to asstatistical literacy. There is a general perception that statistical knowledge is all-too-frequently intentionallymisusedby finding ways to interpret only the data that are favorable to the presenter.[72]A mistrust and misunderstanding of statistics is associated with the quotation, "There are three kinds of lies: lies, damned lies, and statistics". Misuse of statistics can be both inadvertent and intentional, and the bookHow to Lie with Statistics,[72]byDarrell Huff, outlines a range of considerations. In an attempt to shed light on the use and misuse of statistics, reviews of statistical techniques used in particular fields are conducted (e.g. 
Warne, Lazo, Ramos, and Ritter (2012)).[73] Ways to avoid misuse of statistics include using proper diagrams and avoidingbias.[74]Misuse can occur when conclusions areovergeneralizedand claimed to be representative of more than they really are, often by either deliberately or unconsciously overlooking sampling bias.[75]Bar graphs are arguably the easiest diagrams to use and understand, and they can be made either by hand or with simple computer programs.[74]Most people do not look for bias or errors, so they are not noticed. Thus, people may often believe that something is true even if it is not wellrepresented.[75]To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole.[76]According to Huff, "The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism."[77] To assist in the understanding of statistics Huff proposed a series of questions to be asked in each case:[72] The concept ofcorrelationis particularly noteworthy for the potential confusion it can cause. Statistical analysis of adata setoften reveals that two variables (properties) of the population under consideration tend to vary together, as if they were connected. For example, a study of annual income that also looks at age of death, might find that poor people tend to have shorter lives than affluent people. The two variables are said to be correlated; however, they may or may not be the cause of one another. The correlation phenomena could be caused by a third, previously unconsidered phenomenon, called a lurking variable orconfounding variable. For this reason, there is no way to immediately infer the existence of a causal relationship between the two variables.
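The role of a lurking variable can be made concrete with a small simulation: two variables that are both driven by a hidden third variable exhibit a strong correlation even though neither causes the other. The variable names, coefficients, and sample size below are arbitrary choices for illustration.

```python
# Illustrative simulation: correlation induced by a lurking (confounding) variable.
# Neither x nor y causes the other; both depend on the hidden variable z.
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(2)
z = [random.gauss(0, 1) for _ in range(2000)]        # lurking variable
x = [2.0 * zi + random.gauss(0, 1) for zi in z]      # x depends on z
y = [-1.5 * zi + random.gauss(0, 1) for zi in z]     # y depends on z

print(f"Correlation between x and y: {pearson(x, y):.2f}")   # strongly negative
```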
https://en.wikipedia.org/wiki/Statistics
Thomas Crombie Schelling(April 14, 1921 – December 13, 2016) was an Americaneconomistand professor offoreign policy,national security,nuclear strategy, andarms controlat theSchool of Public Policyat theUniversity of Maryland, College Park. He was also co-faculty at theNew England Complex Systems Institute. Schelling was awarded the 2005Nobel Memorial Prize in Economic Sciences(shared withRobert Aumann) for "having enhanced our understanding of conflict and cooperation throughgame theoryanalysis."[3] Schelling was born on April 14, 1921, inOakland, California.[3]He graduated fromSan Diego High School. He received hisbachelor's degreein economics from theUniversity of California, Berkeley, in 1944 and received hisPhDin economics fromHarvard Universityin 1951. Schelling served with theMarshall PlaninEurope, theWhite House, and theExecutive Office of the Presidentfrom 1948 to 1953.[4]He wrote most of his dissertation on national income behavior working at night while in Europe. He left government to join the economics faculty atYale University. In 1956, "he joined theRAND Corporationas an adjunct fellow, becoming a full-time researcher for a year after leaving Yale, and returning to adjunct status through 2002."[5]In 1958 Schelling was appointed professor of economics at Harvard. That same year, he "co-founded the Center for International Affairs, which was [later] renamed theWeatherhead Center for International Affairs."[6] In 1969, Schelling joined Harvard'sJohn F. Kennedy School of Government, where he was the Lucius N. Littauer Professor of Political Economy.[4]He was among the "founding fathers" of the "modern" Kennedy School, as he helped to shift the curriculum's emphasis away from administration and more toward leadership.[6] Between 1994 and 1999, he conducted research at theInternational Institute for Applied Systems Analysis(IIASA), inLaxenburg,Austria. In 1990, he left Harvard and joined theUniversity of Maryland School of Public Policyand the University of Maryland Department of Economics.[7]In 1991, he accepted the presidency of theAmerican Economic Association, an organization of which he was also a Distinguished Fellow.[8] In 1995, he accepted the presidency of the Eastern Economic Association.[9] Schelling was a contributing participant of theCopenhagen Consensus.[4][10] In 1977, Schelling received The Frank E. Seidman Distinguished Award in Political Economy. In 1993, he was awarded theAward for Behavior Research Relevant to the Prevention of Nuclear Warfrom theNational Academy of Sciences.[11] He received honorary doctorates fromErasmus University Rotterdamin 2003,Yale Universityin 2009, and RAND Graduate School of Public Analysis, as well as an honorary degree from theUniversity of Manchesterin 2010.[12][9][8] He was awarded the 2005Nobel Memorial Prize in Economic Sciences, along withRobert Aumann, for "having enhanced our understanding of conflict and cooperation throughgame-theoryanalysis."[3] In 2008 he was the Witten Lecturer at theWitten/Herdecke Universityas the awardee of the Witten Lectures in Economics and Philosophy.[13] Schelling was married to Corinne Tigay Saposs from 1947 to 1991, with whom he had four sons. Later in 1991 he married Alice M. Coleman, who brought two sons to the marriage; they became his stepsons.[14][15] Schelling died on December 13, 2016, inBethesda, Maryland, from complications following a hip fracture at the age of 95.[7] Schelling's family auctioned his Nobel award medal, fetching $187,000. 
They donated this money to theSouthern Poverty Law Center, an American 501 nonprofit legal advocacy organization specializing in civil rights and public interest litigation. Alice Schelling said her late husband had creditedSmoky the CowhorsebyWill James, the winner of theNewbery Medalin 1927, as the most influential book he had read.[16] The Strategy of Conflict, which Schelling published in 1960,[17]pioneered the study of bargaining andstrategic behaviorin what he refers to as "conflict behavior."[18]The Times Literary Supplementin 1995 ranked it as one of the 100 most influential books in the 50 years since 1945.[19]In this book Schelling introduced concepts such as the"focal point"and "credible commitment." In a 1961 review, International Relations scholarMorton Kaplandescribed the book as a "strikingly original contribution" and a "landmark in the literature."[20] Schelling's book comprised a series of scholarly journal articles that he had published over the period 1957–1960.[20] Schelling encourages in his work a strategic view toward conflict that is equally "rational" and "successful."[17]He believes that conflict cannot be based merely on one's intelligence but must also address the "advantages" associated with a course of action. He considers that the advantages that are gleaned should be firmly fixed in a value system that is both "explicit" and "consistent."[17] Also, conflict has a distinct meaning. In Schelling's approach, it is not enough to defeat an opponent, but one must also seize opportunities to co-operate of which there are usually many. He points out that it is only on the rarest of occasions, in what is known as "pure conflict," that the participants' interests are implacably opposed.[17]He uses the example of "a war of complete extermination" to illustrate this phenomenon.[17] Co-operation, if available, may take many forms and thus potentially involve everything from "deterrence, limited war, and disarmament" to "negotiation."[17]Indeed, it is through such actions that participants are left with less of a conflict and more of a "bargaining situation."[17]The bargaining itself is best thought of in terms of the other participant's actions, as any gains one might realize are highly dependent upon the "choices or decisions" of their opponent.[17] Communication between parties, though, is another matter entirely. Verbal or written communication is known as "explicit," and involves such activities as "offering concessions."[17]What happens, though, when this type of communication becomes impossible or improbable? This is when something called "tacit maneuvers" become important.[17]Think of this as action-based communication. Schelling uses the example of one's occupation or evacuation of strategic territory to illustrate this latter communication method. In an article celebrating Schelling's Nobel Memorial Prize for Economics,[21]Michael Kinsley,Washington Postop‑edcolumnist and one of Schelling's former students, anecdotally summarizes Schelling's reorientation of game theory thus: "[Y]ou're standing at the edge of a cliff, chained by the ankle to someone else. You'll be released, and one of you will get a large prize, as soon as the other gives in. How do you persuade the other guy to give in, when the only method at your disposal—threatening to push him off the cliff—would doom you both? Answer: You start dancing, closer and closer to the edge. 
That way, you don't have to convince him that you would do something totally irrational: plunge him and yourself off the cliff. You just have to convince him that you are prepared to take a higher risk than he is of accidentally falling off the cliff. If you can do that, you win." Schelling's theories about war were extended in Arms and Influence, published in 1966.[22][23] According to the publisher, the book "carries forward the analysis so brilliantly begun in his earlier The Strategy of Conflict (1960) and Strategy and Arms Control (with Morton Halperin, 1961), and makes a significant contribution to the growing literature on modern war and diplomacy." Chapter headings include The Diplomacy of Violence, The Diplomacy of Ultimate Survival and The Dynamics of Mutual Alarm. Within the work, Schelling discusses military capabilities and how they can be used as bargaining power. Instead of considering only the choices that are available on a surface level, one can think ahead to try to influence the other party to come to the desired conclusion. Specifically, Schelling mentions the actions taken by the U.S. during the Cuban and Berlin crises and how they functioned not only as preparation for war but also as signals. For example, Schelling points out that the bombing of North Vietnam "is as much coercive as tactical."[24] The bombing was intended not only to cripple the enemy's armies but also to bring Vietnam to the table for negotiations. Much of this writing was influenced largely by Schelling's personal interest in game theory and its application to nuclear armaments. Schelling's work influenced Robert Jervis.[25][26] In 1969 and 1971, Schelling published widely cited articles dealing with racial dynamics and what he termed "a general theory of tipping."[27] In those papers, he showed that a preference that one's neighbors be of the same color, or even a preference for a mixture "up to some limit," can lead to total segregation. He thus argued that motives, malicious or not, were indistinguishable as to explaining the phenomenon of complete local separation of distinct groups. He used coins on graph paper to demonstrate his theory by placing pennies and dimes in different patterns on the "board" and then moving them one by one if they were in an "unhappy" situation. Schelling's dynamics have been cited as a way of explaining variations that are found in what are regarded as meaningful differences – gender, age, race, ethnicity, language, sexual preference, and religion. A cycle of such change, once it has begun, may have a self-sustaining momentum. Schelling's 1978 book Micromotives and Macrobehavior expanded on and generalized those themes[28][29] and is often cited in the literature of agent-based computational economics. Schelling had been involved in the global warming debate since chairing a commission for President Jimmy Carter in 1980. He believed climate change posed a serious threat to developing nations, but that the threat to the United States was exaggerated. He wrote: Today, little of our gross domestic product is produced outdoors, and therefore, little is susceptible to climate. Agriculture and forestry are less than 3 percent of total output, and little else is much affected. Even if agricultural productivity declined by a third over the next half-century, the per capita GNP we might have achieved by 2050 we would still achieve in 2051.
Considering that agricultural productivity in most parts of the world continues to improve (and that many crops may benefit directly from enhancedphotosynthesisdue to increasedcarbon dioxide), it is not at all certain that the net impact on agriculture will be negative or much noticed in thedeveloped world.[30] Drawing on his experience with theMarshall PlanafterWorld War II, he argued that addressing global warming is abargaining problem: if the world were able to reduce emissions, poor countries would receive most of the benefits, but rich countries would bear most of the costs. Stanley Kubrickread an article Schelling wrote that included a description of thePeter GeorgenovelRed Alert, and conversations between Kubrick, Schelling, and George eventually led to the 1964 movieDr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.[31] Schelling is also cited for the first known use of the phrasecollateral damagein his May 1961 articleDispersal, Deterrence, and Damage.[32] In his bookChoice and Consequence,[33]he explored various topics such asnuclear terrorism,blackmail,daydreaming, andeuthanasia, from abehavioral economicspoint of view.
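Schelling's coins-on-graph-paper demonstration described above translates naturally into a small agent-based simulation. The sketch below is a deliberately simplified, illustrative variant (agents on a one-dimensional ring rather than a grid, with an arbitrary tolerance threshold and relocation rule), not a reconstruction of Schelling's own procedure.

```python
# Simplified 1-D Schelling-style segregation sketch (illustrative only).
# Agents of two types sit on a ring; an agent is "unhappy" if fewer than
# TOLERANCE of its nearest neighbours share its type, and unhappy agents
# relocate to a random position. All parameters are arbitrary choices.
import random

random.seed(3)
N, K, TOLERANCE, STEPS = 200, 4, 0.5, 20_000   # agents, neighbours per side, threshold, moves

agents = ["A"] * (N // 2) + ["B"] * (N // 2)
random.shuffle(agents)

def same_type_share(agents, i, k=K):
    """Fraction of the 2k nearest neighbours (on a ring) sharing agent i's type."""
    n = len(agents)
    neigh = [agents[(i + d) % n] for d in range(-k, k + 1) if d != 0]
    return sum(a == agents[i] for a in neigh) / len(neigh)

def average_similarity(agents):
    return sum(same_type_share(agents, i) for i in range(len(agents))) / len(agents)

print(f"Average same-type neighbour share before: {average_similarity(agents):.2f}")

for _ in range(STEPS):
    i = random.randrange(len(agents))
    if same_type_share(agents, i) < TOLERANCE:         # unhappy agent relocates
        agent = agents.pop(i)
        agents.insert(random.randrange(len(agents) + 1), agent)

print(f"Average same-type neighbour share after:  {average_similarity(agents):.2f}")
```

Even with agents willing to be in the local minority up to half of their neighbours, the ring typically ends up far more clustered than it began, which is the tipping dynamic described above.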
https://en.wikipedia.org/wiki/Thomas_Schelling
Peter Michael Blau(February 7, 1918 – March 12, 2002) was an Austrian and Americansociologistandtheorist. Born inVienna, Austria, he immigrated to the United States in 1939. He completed hisPhDdoctoral thesis withRobert K. MertonatColumbia Universityin 1952, laying an early theory for the dynamics of bureaucracy. The next year, he was offered a professorship at theUniversity of Chicago, where he taught from 1953 to 1970. He also taught as Pitt Professor at Cambridge University in Great Britain, as a senior fellow at King's College, and as a Distinguished Honorary professor at Tianjin Academy of Social Sciences which he helped to establish. In 1970 he returned to Columbia University, where he was awarded the lifetime position of professor emeritus. From 1988 to 2000 he taught as the Robert Broughton Distinguished Research Professor at University of North Carolina, Chapel Hill in the same department as his wife,Judith Blau, while continuing to commute to New York to meet with graduate students and colleagues. His sociological specialty was in organizational andsocial structures.He formulated theories relating to many aspects of social phenomena, includingupward mobility,occupational opportunity, andheterogeneity. From each of his theories, he deduced an hypothesis which he would test against large scale empirical research. He was one of the first sociological theorists to use high level statistics to develop sociology as a scientific discipline using macro-level empirical data to gird theory. He also produced theories on howpopulation structurescan influence human behavior. One of Blau's most important contributions to social theory is his work regardingexchange theory, which explains how small-scale social exchange directly relates to social structures at a societal level. He also was the first to map out the wide variety of social forces, dubbed "Blau space" byMiller McPherson. This idea was one of the first to take individuals and distribute them along a multidimensional space. Blau-space is still used as a guide by sociologists and has been expanded to include areas of sociology never specifically covered by Blau himself. In 1974 Blau served as the 65th president of theAmerican Sociological Association.[2] Peter Blau was born in 1918 inVienna, a few months before theAustro-Hungarian Empirecollapsed. He was raised in aJewishfamily asfascistpower within Europe grew and Hitler's influence within Austria became increasingly evident. At the age of seventeen, Blau was convicted of high treason for speaking out against government repression in articles he wrote for an underground newspaper of theSocial Democratic Worker's Partyand was subsequently incarcerated. Blau was given a ten-year sentence in the federal prison in Vienna.[3]He was then released shortly after his imprisonment when the ban on political activity was lifted due to the National Socialists' rise to power. When Nazi Germanyannexed Austria, Blau attempted to escape toCzechoslovakiaon March 13, 1938. Both Blau and his sister—who was sent to England—managed to escape. The rest of his family, however, decided to stay in Austria. Blau's original attempt to flee proved unsuccessful as he was captured by Nazi border patrol and was imprisoned for two months. During the two months he was detained, he was tortured, starved, and was forced to eat only lard.[4]Yet, he was once again released and made his way toPrague. When Hitleroccupied Czechoslovakia, he escaped again, returning illegally to Vienna to visit one more time with his parents. 
In the dark of night, Blau hid on a train to cross the border into France. There he turned himself in to the Allied forces, who had not yet reversed their policy of putting anyone with a German passport—even the Jews—into labor camps. He spent several weeks as a POW of France crushing grapes in Bordeaux. When the policy about Jews was reversed, he was able to continue his journey toLe Havre, France where he received a refugee scholarship toElmhurst CollegeinIllinoisthrough a group of missionaries studying at the theological seminary. Blau emigrated to America on the Degrasse ship and landed in New York on January 1, 1939. He later attended Elmhurst College, earning his degree insociologyin 1942, and becoming a United States citizen in 1943. Blau returned to Europe 1942 as a member of the United States Army, acting as an interrogator given his skills in the German language and was awarded the bronze star for his duties. It was during this time that Blau also received word that his family had been killed atAuschwitz. After receiving his bachelor's degree fromElmhurst College, Blau continued his education atColumbia University, where he received his PhD in 1952. One of Blau's most memorable and significant contributions to the field of sociology came in 1967. Working together withOtis Dudley Duncanand Andrea Tyree, he co-authoredThe American Occupational Structure, which provided a meaningful sociological contribution to the study of social stratification, and won the highly touted Sorokin Award from the American Sociological Association in 1968. Blau is also known for his contributions to sociological theory.Exchange and Power in Social Life(1964) was an important contribution to contemporary exchange theory, one of Blau's distinguished theoretical orientations. The aim of this work was "(to analyze) the processes that govern the associations among men as a prolegomenon of a theory of social structure".[5]In it, Blau makes the effort to take micro-level exchange theory and apply it to social structures at a macro-level. Blau was also very active in the study of structural theory. Blau's 1977 book,Inequality and Heterogeneity, presents "a macro sociological theory of social structure"[6]where the foundation of his theory "is a quantitative conception of social structure in terms of the distributions of people among social positions that affect their social relations".[6] Blau received notable distinctions for his achievements, which include: election to theNational Academy of Sciences, theAmerican Academy of Arts and Sciences, and theAmerican Philosophical Society.[7]He also served as the president of the American Sociological Association from 1973 to 1974. He died on March 12, 2002, ofacute respiratory distress syndrome. He was eighty-four years old. For Blau, sociological theories were produced through logical deduction. Blau began theoretical studies by making a broad statement or basic assumption regarding the social world, which was then proven by the logical predictions it produced.[8]Blau claimed these statements could not be validated or refuted based on one empirical test. 
Instead, it was a theory's "logical implications" that could be trusted, more so than any single empirical test.[8] Only if continued empirical tests contradicted the theory could the theory be modified, or dropped entirely if a new theory was proposed in its place.[8] Blau's trust in logic and his deductive approach to social theory align him closely with the philosophy of positivism and with traditional French sociologists such as Auguste Comte and Émile Durkheim. Blau also focused his sociological theory on both the micro and macro level, and often connected the two throughout his research.[9] The relationship between population structures and social interaction was another primary interest in Blau's work. Blau believed that population structure created guidelines for specific human behaviors, especially intergroup relations.[10] Blau created a number of theories explaining aspects of population structure that increase the chances of intergroup relations. Blau viewed social structure as being somewhat stable, but he did identify two phenomena that he believed contributed to structural change within a population: social mobility and conflict. Blau thought social mobility, which he described as "any movement within a population by an individual," was beneficial to intergroup relations within a population structure, and he theorized various scenarios involving social relations and mobility.[10] Blau also theorized explanations for structural causes of conflict, focusing on population distribution as a cause of conflict separate from individual or political issues.[11] According to Blau, structural conflict is linked to the inequality of status between groups, group size, social mobility between groups, and the probability of social contact between groups. Blau determined that prevention of conflict within a population structure can be achieved through "multi-group affiliations and intersection in complex societies".[11] Social exchange provides an explanation of the interactions and relationships Blau observed while researching. He believed that social exchange could reflect behavior oriented to socially mediated goals. Blau started from the premise that social interaction has value to people, and he explored the forms and sources of this value in order to understand collective outcomes, such as the distribution of power in a society.[12] People engage in many social interactions without thinking deeply about them, but Blau suggested that they do so for the same reason they engage in economic transactions: they need something from other people, and the exchange provides it. This in turn leads to further social exchange, in which people try to stay out of social debt because doing so gives them an advantage, as well as potential power. Although social exchange can be genuine, when the individual's goal is merely to stay out of debt or to get something in return, it amounts to self-interest. "The tendency to help others is frequently motivated by the expectation that doing so will bring social rewards."[13] Blau explained that social exchange between individuals stems either from inherent rewards (such as love or admiration) or from external rewards (money, etc.).[14] Blau noted the difference between social exchange and the purchasing of goods, stating that there is an emotional component within social exchange that is absent from everyday transactions.[15] Blau also studied the social exchange that occurs within relationships.
He believed that most thriving friendships occur when both participants are the same status level, allowing for an equal potential for exchange and benefit throughout the relationship. He also studied the social exchange of partners, and how these relationships come together in the first place. Blau explains how loving relationships come to existence through the exchange of certain favorable traits that would attract one person to another. Blau discusses how status, beauty, and wealth are some of the key characteristics that people search for in a partner, and that the most successful relationships occur when both partners have valuable attributes that they can benefit from.[15] Some of Blau's first major contributions to sociology were in the field of organizations. His first publication,Dynamics of Bureaucracy(1955), prompted a wave of post-Weberian organizational studies. Organizational research consisted in exploring to what extent the received image of the Weberian bureaucracy—an efficient, mechanical system of roles—held up under close scrutiny in the empirical study of social interaction within organizations. Blau, in his research and study, highlighted the ways in which the real life of the organization was structured along informal channels of interaction and socio-emotional exchange. Blau conducted a different approach in the main subject group for organizational sociology, focusing far more on white-collar workers rather than those of the blue-collar status, concentrating on the relationships between workers.[16]He also discussed how the incipient status systems formed were important to the continued functioning of these organizations as the formal status structure. Hence, much of Blau's work involving organizations centered on the interplay between formal structure, informal practices, and bureaucratic pressures and how these processes affect organizational change. Blau's second major contribution to organizational analysis revolved around the study of determinants of the "bureaucratic components" of organizations. He collected data on 53 Employment Security Agencies in the US and 1,201 local offices. The result of this research was Blau's (1970) general theory of differentiation in organizations. This piece had an immediate impact in the field of organizations and more importantly American sociology. Blau derived several generalizations, the most important which are (1) increasing size results in an increase in the number of distinct positions (differentiation) in an organization at a decreasing rate, and (2) as size increases the administrative component (personnel not directly engaged in production but in coordination) decreases.[17]This specific work, however, had a brief influence as organization sociology moved away from monothetic generalizations about determinants of intra-organizational structure and to the study of organizational environments. Probably one of the biggest contributions Blau gave to sociology was his work in macrostructural theory. For him, social structure consisted of the networks of social relations that organize patterns of interaction across different social positions. During this time, many people had different definitions for social structure. Blau's definition, however, set him apart from the rest. For Blau, social structure did not consist of natural persons, but instead social positions. This meant that the "parts" of social structure were classes of people such as men, women, rich and poor. 
Blau believed that the root of social structure can be found whenever an undifferentiated group begins to separate itself along some socially relevant distinction. In Blau's eyes, one could not speak of social structure without speaking of the differentiation of people. He believed that it is these social distinctions, along with social characteristics such as race, religion, age, and gender, that determine who interacts with whom. His theory gave a more structured account of "homophily", the observation that people are drawn to others like themselves. In his view this attraction is a product of structure: individuals may seem to be following their own interests, but those interests are structurally produced as well.[18] Blau coined the term "parameter of social structure" to refer to the socially relevant positions into which people can be classified. Something could not be considered a parameter if it did not actually affect the social relations of individuals "on the ground". In his 1974 Presidential Address, "Parameters of Social Structure", Blau discussed two categories of parameters: graduated and nominal. For Blau, modern societies are characterized by (1) a multiplicity of socially relevant positions and (2) the fact that these positions are connected, sometimes in contradictory ways, resulting in cross-cutting social circles. Two positions are contradictory if increased interaction along one leads to decreased interaction along the other. One of his most famous quotations is: "One cannot marry an eskimo, if no eskimo is around". The stated "goal" of Peter Blau's work was:[14] "An understanding of social structure on the basis of an analysis of the social processes that govern the relations between individuals and groups. The basic question ... is how social life becomes organized into increasingly complex structures of associations among men." Peter Blau played an important role in shaping the field of modern sociology and is one of the most influential post-war American sociologists. He is sometimes considered the last great "grand theorist" of twentieth-century American sociology. While Blau's work on the differentiation of organizations was short-lived in its influence, his style of research was not. He provided an exemplar of how to do research and how to build theory. He showed that general and valuable deductive theory was possible in sociology. Blau eventually paved the way for many young sociologists who then used similar styles of research and deductive theory. In addition, with the help of Otis Dudley Duncan, he introduced multiple regression and path analysis to the sociological audience. These two methods are currently the go-to methods of quantitative sociology. Blau's foundational theories continue to give momentum to developments in social science, and his ideas are still widely used.
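Blau's notion of heterogeneity along a nominal parameter is often made operational with what is now commonly called the Blau index: one minus the sum of squared group proportions, which equals the probability that two randomly chosen members of a population belong to different groups. The sketch below is an illustrative computation with invented group counts; the formula is the standard index attributed to Blau rather than a quotation from his texts.

```python
# Illustrative computation of a heterogeneity (Blau) index for a nominal parameter.
# Group labels and counts are invented for illustration.

def blau_index(counts):
    """1 - sum of squared proportions: chance that two randomly drawn members differ."""
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

homogeneous = {"group_a": 95, "group_b": 5}
mixed = {"group_a": 30, "group_b": 30, "group_c": 20, "group_d": 20}

print(f"Nearly homogeneous population: {blau_index(homogeneous):.2f}")   # close to 0
print(f"Mixed population:              {blau_index(mixed):.2f}")          # closer to 1
```

On this reading, higher heterogeneity raises the structural probability of intergroup contact regardless of individual preferences, in line with the macrostructural argument sketched above.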
https://en.wikipedia.org/wiki/Peter_Blau
Harrison Colyar White(March 21, 1930 – May 18, 2024) was an American sociologist who was the Giddings Professor of Sociology atColumbia University. White played an influential role in the “Harvard Revolution” insocial networks[1]and theNew York School of relational sociology.[2]He is credited with the development of a number of mathematical models of social structure includingvacancy chainsandblockmodels. He has been a leader of a revolution insociologythat is still in process, using models of social structure that are based onpatterns of relationsinstead of the attributes and attitudes of individuals.[3] Among social network researchers, White is widely respected. For instance, at the 1997International Network of Social Network Analysisconference, the organizer held a special “White Tie” event, dedicated to White.[4]Social network researcher Emmanuel Lazega refers to him as both “Copernicus and Galileo” because he invented both the vision and the tools. The most comprehensive documentation of his theories can be found in the bookIdentity and Control, first published in 1992. A major rewrite of the book appeared in June 2008. In 2011, White received the W.E.B. DuBois Career of Distinguished Scholarship Award from theAmerican Sociological Association, which honors "scholars who have shown outstanding commitment to the profession of sociology and whose cumulative work has contributed in important ways to the advancement of the discipline."[5]Before his retirement to live inTucson, Arizona, White was interested in sociolinguistics and business strategy as well as sociology. White was born on March 21, 1930, inWashington, D.C.He had three siblings and his father was a doctor in the US Navy. Although moving around to different Naval bases throughout his adolescence, he considered himself Southern, andNashville, TNto be his home. At the age of 15, he entered theMassachusetts Institute of Technology(MIT), receiving his undergraduate degree at 20 years of age; five years later, in 1955, he received a doctorate intheoretical physics, also from MIT withJohn C. Slateras his advisor.[6]His dissertation was titledA quantum-mechanical calculation of inter-atomic force constants in copper.[7]This was published in thePhysical Reviewas "Atomic Force Constants of Copper from Feynman's Theorem" (1958).[8]While at MIT he also took a course with the political scientistKarl Deutsch, who White credits with encouraging him to move toward the social sciences.[9] After receiving his PhD in theoretical physics, he received a Fellowship from the Ford Foundation to begin his second doctorate in sociology atPrinceton University. His dissertation advisor wasMarion J. Levy. White also worked withWilbert Moore,Fred Stephan, andFrank W. Notesteinwhile at Princeton.[10]His cohort was very small, with only four or five other graduate students includingDavid Matza, andStanley Udy. At the same time, he took up a position as an operations analyst at theOperations Research Office,Johns Hopkins Universityfrom 1955 to 1956.[11]During this period, he worked with Lee S. Christie onQueuing with Preemptive Priorities or with Breakdown, which was published in 1958.[12]Christie previously worked alongside mathematical psychologistR. Duncan Lucein the Small Group Laboratory at MIT while White was completing his first PhD in physics also at MIT. 
While continuing his studies at Princeton, White also spent a year as a fellow at theCenter for Advanced Study in the Behavioral Sciences,Stanford University, California where he metHarold Guetzkow. Guetzkow was a faculty member at the Carnegie Institute of Technology, known for his application of simulations to social behavior and long-time collaborator with many other pioneers in organization studies, includingHerbert A. Simon,James March, andRichard Cyert.[13]Upon meeting Simon through his mutual acquaintance with Guetzkow, White received an invitation to move from California to Pittsburgh to work as an assistant professor of Industrial Administration and Sociology at theGraduate School of Industrial Administration, Carnegie Institute of Technology (laterCarnegie-Mellon University), where he stayed for a couple of years, between 1957 and 1959. In an interview, he claimed to have fought with the dean,Leyland Bock, to have the word "sociology" included in his title. It was also during his time at the Stanford Center for Advanced Study that White met his first wife, Cynthia A. Johnson, who was a graduate ofRadcliffe College, where she had majored in art history. The couple's joint work on the French Impressionists,Canvases and Careers(1965) and “Institutional Changes in the French Painting World” (1964), originally grew out of a seminar on art in 1957 at the Center for Advanced Study led by Robert Wilson. White originally hoped to use sociometry to map the social structure of French art to predict shifts, but he had an epiphany that it was not social structure but institutional structure which explained the shift. It was also during these years that White, still a graduate student in sociology, wrote and published his first social scientific work, "Sleep: A Sociological Interpretation" inActa Sociologicain 1960, together withVilhelm Aubert, a Norwegian sociologist. This work was a phenomenological examination of sleep which attempted to "demonstrate that sleep was more than a straightforward biological activity... [but rather also] a social event".[14] For his dissertation, White carried out empirical research on a research and development department in a manufacturing firm, consisting of interviews and a 110-item questionnaire with managers. He specifically used sociometric questions, which he used to model the "social structure" of relationships between various departments and teams in the organization. In May 1960 he submitted as his doctoral dissertation, titledResearch and Development as a Pattern in Industrial Management: A Case Study in Institutionalisation and Uncertainty,[15]earning a PhD in sociology fromPrinceton University. His first publication based on his dissertation was ''Management conflict and sociometric structure'' in theAmerican Journal of Sociology.[16] In 1959James Colemanleft the University of Chicago to found a new department of social relations at Johns Hopkins University, this left a vacancy open for a mathematical sociologist like White. He moved to Chicago to start working as an associate professor at the Department of Sociology. At that time, highly influential sociologists, such asPeter Blau,Mayer Zald,Elihu Katz,Everett Hughes,Erving Goffmanwere there. 
Because Princeton required only one year in residence, White took the opportunity to hold positions at Johns Hopkins, Stanford, and Carnegie while still working on his dissertation; nevertheless, he credited Chicago as being his "real socialization in a way, into sociology."[17] It was here that White advised his first two graduate students, Joel H. Levine and Morris Friedell, both of whom went on to make contributions to social network analysis in sociology. While at the Center for Advanced Study, White began learning anthropology and became fascinated with kinship. During his stay at the University of Chicago, White was able to finish An Anatomy of Kinship, published in 1963 in the Prentice-Hall series in Mathematical Analysis of Social Behavior, with James Coleman and James March as chief editors. The book received significant attention from many mathematical sociologists of the time and contributed greatly to establishing White as a model builder.[18] In 1963, White left Chicago to become an associate professor of sociology in the Harvard Department of Social Relations, the department founded by Talcott Parsons and still heavily influenced by his structural-functionalist paradigm. As White had previously taught only graduate courses at Carnegie and Chicago, his first undergraduate course was An Introduction to Social Relations (see Influence) at Harvard, which became infamous among network analysts. Because he "thought existing textbooks were grotesquely unscientific,"[19] the syllabus of the class was noted for including few readings by sociologists and comparatively more readings by anthropologists, social psychologists, and historians.[20] White was also a vocal critic of what he called the "attributes and attitudes" approach of Parsonsian sociology, and came to be the leader of what has been variously known as the "Harvard Revolution," the "Harvard breakthrough," or the "Harvard renaissance" in social networks. He worked closely with the small-group researchers George C. Homans and Robert F. Bales, work that was largely compatible with his prior organizational research and his efforts to formalize network analysis. Overlapping White's early years, Charles Tilly, a graduate of the Harvard Department of Social Relations, was a visiting professor at Harvard and attended some of White's lectures; network thinking heavily influenced Tilly's work. White remained at Harvard until 1986. Following a divorce from his wife, Cynthia (with whom he had published several works), and wanting a change, White accepted an offer from the sociology department at the University of Arizona to serve as department chair.[21] He remained at Arizona for two years. In 1988, White joined Columbia University as a professor of sociology and became director of the Paul F. Lazarsfeld Center for the Social Sciences. This was at the early stages of what is perhaps the second major revolution in network analysis, the so-called "New York School of relational sociology." This invisible college included Columbia as well as the New School for Social Research and New York University. While the Harvard Revolution involved substantial advances in methods for measuring and modeling social structure, the New York School involved the merging of cultural sociology with network-structural sociology, two traditions that had previously been antagonistic. White stood at the heart of this, and his magnum opus Identity and Control was a testament to this new relational sociology.
In 1992, White received the named position of Giddings Professor of Sociology and was the chair of the department of sociology for various years until his retirement. He resided in Tucson, Arizona. A good summary of White's sociological contributions is provided by his former student and collaborator,Ronald Breiger: White addresses problems of social structure that cut across the range of the social sciences. Most notably, he has contributed (1) theories of role structures encompassing classificatory kinship systems of native Australian peoples and institutions of the contemporary West; (2) models based on equivalences of actors across networks of multiple types of social relation; (3) theorization of social mobility in systems of organizations; (4) a structural theory of social action that emphasizes control, agency, narrative, and identity; (5) a theory of artistic production; (6) a theory of economic production markets leading to the elaboration of a network ecology for market identities and new ways of accounting for profits, prices, and market shares; and (7) a theory of language use that emphasizes switching between social, cultural, and idiomatic domains within networks of discourse. His most explicit theoretical statement isIdentity and Control: A Structural Theory of Social Action(1992), although several of the major components of his theory of the mutual shaping of networks, institutions, and agency are also readily apparent inCareers and Creativity: Social Forces in the Arts(1993), written for a less-specialized audience.[22] More generally, White and his students sparked interest in looking at society as networks rather than as aggregates of individuals.[23] This view is still controversial. In sociology and organizational science, it is difficult to measure cause and effect in a systematic way. Because of that, it is common to use sampling techniques to discover some sort of average in a population. For instance, we are told almost daily how the average European or American feels about a topic. It allows social scientists and pundits to make inferences about cause and say “people are angry at the current administration because the economy is doing poorly.” This kind of generalization certainly makes sense, but it does not tell us anything about an individual. This leads to the idea of an idealized individual, something that is the bedrock of modern economics.[24]Most modern economic theories look at social formations, like organizations, as products of individuals all acting in their own best interest.[25] While this has proved to be useful in some cases, it does not account well for the knowledge that is required for the structures to sustain themselves. White and his students (and his students' students) have been developing models that incorporate the patterns of relationships into descriptions of social formations. This line of work includes: economic sociology, network sociology and structuralist sociology. White's most comprehensive work isIdentity and Control. The first edition came out in 1992 and the second edition appeared in June 2008. In this book, White discusses the social world, including “persons,” as emerging from patterns of relationships. He argues that it is a default human heuristic to organize the world in terms of attributes, but that this can often be a mistake. For instance, there are countless books on leadership that look for the attributes that make a good leader. 
However, no one is a leader without followers; the term describes a relationship one has with others. Without the relationships, there would be no leader. Likewise, an organization can be viewed as patterns of relationships; it would not "exist" if people did not honor and maintain specific relationships. White avoids attributing properties to things that emerge from patterns of relationships, something that goes against our natural instincts and requires some thought to process.[26] Identity and Control has seven chapters. The first six are about social formations that control us and about how our own judgment organizes our experience in ways that limit our actions. The final chapter is about "getting action" and how change is possible; one of the ways is by "proxy," empowering others. Harrison White also developed a perspective on market structure and competition in his 2002 book, Markets from Networks, based on the idea that markets are embedded in social networks. His approach is related to economic concepts such as uncertainty (as defined by Frank Knight), monopolistic competition (Edward Chamberlin), and signalling (Spence). This sociological perspective on markets has influenced both sociologists (see Joel M. Podolny) and economists (see Olivier Favereau). White's later work discussed linguistics. In Identity and Control he emphasized "switching" between network domains as a way to account for grammar without ignoring meaning, as much of standard linguistic theory does. He had a long-standing interest in organizations, and before he retired he worked on how strategy fits into the overall models of social construction he had developed. In addition to his own publications, White is widely credited with training several influential generations of network analysts in sociology, beginning with the early work of the 1960s and 1970s during the Harvard Revolution and continuing through the 1980s and 1990s at Columbia during the New York School of relational sociology. White's student and teaching assistant, Michael Schwartz, took notes in the spring of 1965, known as Notes on the Constituents of Social Structure, of White's undergraduate Introduction to Social Relations course (Soc Rel 10). These notes circulated among network analysis students and aficionados until they were finally published in 2008 in Sociologica. As the popular social science blog Orgtheory.net explains, "in contemporary American sociology, there are no set of student-taken notes that have had as much underground influence as those from Harrison White's introductory Soc Rel 10 seminar at Harvard."[27] The first generation of Harvard graduate students who trained with White during the 1960s went on to form a formidable cohort of network-analytically inclined sociologists. His first graduate student at Harvard was Edward Laumann, who went on to develop one of the most widely used methods of studying personal networks, known as ego-network surveys (developed with one of Laumann's students at the University of Chicago, Ronald Burt). Several of them went on to contribute to the "Toronto school" of structural analysis. Barry Wellman, for instance, contributed heavily to the cross-fertilization of network analysis and community studies, later contributing to the earliest studies of online communities. Another of White's earliest students at Harvard was Nancy Lee (now Nancy Howell), who used social network analysis in her groundbreaking study of how women seeking an abortion found willing doctors before Roe v. Wade.
She found that women located doctors through links of friends and acquaintances and were, on average, four degrees removed from the doctor. White also trained later additions to the Toronto school, Harriet Friedmann ('77) and Bonnie Erickson ('73). One of White's best-known graduate students was Mark Granovetter, who attended Harvard as a Ph.D. student from 1965 to 1970. Granovetter studied how people got jobs and discovered that they were more likely to get them through acquaintances than through friends. Recounting the development of his widely cited 1973 article, "The Strength of Weak Ties", Granovetter credits White's lectures, and specifically White's description of sociometric work by Anatol Rapoport and William Horvath, with giving him the idea. This, tied with earlier work by Stanley Milgram (who was also in the Harvard Department of Social Relations from 1963 to 1967, though not one of White's students), gave scientists a better sense of how the social world was organized: into many dense groups with "weak ties" between them. Granovetter's work provided the theoretical background for Malcolm Gladwell's The Tipping Point. This line of research is still actively being pursued by Duncan Watts, Albert-László Barabási, Mark Newman, Jon Kleinberg and others. White's research on "vacancy chains" was assisted by a number of graduate students, including Michael Schwartz and Ivan Chase. The outcome of this was the book Chains of Opportunity, which described a model of social mobility in which the roles and the people who filled them were independent. The idea of a person being partially created by their position in patterns of relationships became a recurring theme in his work. This provided a quantitative analysis of social roles, allowing scientists new ways to measure society that were not based on statistical aggregates. During the 1970s, White worked with his students Scott Boorman, Ronald Breiger, and François Lorrain on a series of articles that introduced a procedure called "blockmodeling" and the concept of "structural equivalence." The key idea behind these articles was identifying a "position" or "role" through similarities in individuals' social structure, rather than through characteristics intrinsic to the individuals or a priori definitions of group membership (a minimal illustrative sketch appears at the end of this article). At Columbia, White trained a new cohort of researchers who pushed network analysis beyond methodological rigor toward theoretical extension and the incorporation of previously neglected concepts, namely culture and language. Many of his students and mentees have had a strong impact in sociology. Other former students include Michael Schwartz and Ivan Chase, both professors at Stony Brook; Joel Levine, who founded Dartmouth College's Math/Social Science program; Edward Laumann, who pioneered survey-based egocentric network research and became a dean and provost at the University of Chicago; Kathleen Carley at Carnegie Mellon University; Ronald Breiger at the University of Arizona; Barry Wellman at the University of Toronto and then the NetLab Network; Peter Bearman at Columbia University; Bonnie Erickson (Toronto); Christopher Winship (Harvard University); Joel Levine (Dartmouth College); Nicholas Mullins (Virginia Tech, deceased); Margaret Theeman (Boulder); Brian Sherman (retired, Atlanta); Nancy Howell (retired, Toronto); David R.
Gibson (University of Notre Dame); Matthew Bothner (University of Chicago); Ann Mische (University of Notre Dame); Kyriakos Kontopoulos (Temple University); and Frédéric Godart (INSEAD).[28] White died at an assisted living facility in Tucson on May 18, 2024, at the age of 94.[29]
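The following is a minimal, illustrative sketch of the structural-equivalence idea behind blockmodeling, referenced above. The toy network and the use of exact equivalence (identical rows and columns of the adjacency matrix) are assumptions made for clarity; they are not a reconstruction of the original White-Boorman-Breiger procedures, which handled approximate equivalence across multiple relations.

```python
# Sketch of structural equivalence and a simple blockmodel.
# The toy network and the use of *exact* equivalence are illustrative
# assumptions; the original work used approximate equivalence on
# multiple relations.
from collections import defaultdict

import numpy as np

# Toy directed adjacency matrix: rows/columns are actors 0..5.
A = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
])

def structural_equivalence_classes(adj):
    """Group actors whose out-ties (row) and in-ties (column) are identical."""
    classes = defaultdict(list)
    for i in range(adj.shape[0]):
        signature = (tuple(adj[i, :]), tuple(adj[:, i]))
        classes[signature].append(i)
    return list(classes.values())

def blockmodel(adj, classes):
    """Collapse each equivalence class into a block; a block sends a tie to
    another block if any member-to-member tie exists (a 1-block)."""
    k = len(classes)
    image = np.zeros((k, k), dtype=int)
    for a, block_a in enumerate(classes):
        for b, block_b in enumerate(classes):
            if any(adj[i, j] for i in block_a for j in block_b):
                image[a, b] = 1
    return image

positions = structural_equivalence_classes(A)
print("positions:", positions)   # [[0, 1], [2, 3], [4, 5]]
print(blockmodel(A, positions))  # reduced image matrix between positions
```

In this toy example the six actors collapse into three "positions", and the reduced image matrix describes how positions, rather than individuals, relate to one another.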
https://en.wikipedia.org/wiki/Harrison_White
Nicolas Rashevsky (November 9, 1899 – January 16, 1972) was an American theoretical physicist who was one of the pioneers of mathematical biology, and is also considered the father of mathematical biophysics and theoretical biology.[1][2][3][4] He studied theoretical physics at the St. Vladimir Imperial University of Kyiv. He left Ukraine after the October Revolution, emigrating first to Turkey, then to Poland and France, and finally to the US in 1924.[citation needed] In the US he worked at first for the Westinghouse Research Labs in Pittsburgh, where he focused on theoretical physics modeling of cell division and the mathematics of cell fission. He was awarded a Rockefeller Fellowship in 1934 and went to the University of Chicago to take up an appointment as assistant professor in the department of physiology.[citation needed] In 1938, inspired by reading On Growth and Form (1917) by D'Arcy Wentworth Thompson, he made his first major contribution by publishing his first book on mathematical biophysics, and in 1939 he founded the first international journal of mathematical biology, The Bulletin of Mathematical Biophysics (BMB); these two essential contributions founded the field of mathematical biology, with the Bulletin of Mathematical Biology serving as the focus for contributing mathematical biologists over the last 70 years. During the late 1930s, Rashevsky's research group was producing papers that were difficult to publish in other journals at the time, so Rashevsky decided to found a new journal exclusively devoted to mathematical biophysics. In January 1939, he approached the editor of the journal Psychometrika, L. L. Thurstone, and formed an agreement that the new journal, the BMB, would be published as a supplement to its quarterly issues.[5] In 1938 he published one of the first books on mathematical biology and mathematical biophysics, entitled Mathematical Biophysics: Physico-Mathematical Foundations of Biology. This fundamental book was eventually published in three revised editions, the last revision appearing in two volumes in 1960. It was followed in 1940 by Advances and Applications of Mathematical Biology, and in 1947 by Mathematical Theory of Human Relations, an approach to a mathematical model of society.[citation needed] In the same year he established the world's first[citation needed] PhD program in mathematical biology at the University of Chicago. In the early 1930s, Rashevsky developed the first model of neural networks.[6][7][8] This was paraphrased in a Boolean context by his student Walter Pitts together with Warren McCulloch, in A Logical Calculus of the Ideas Immanent in Nervous Activity, published in Rashevsky's Bulletin of Mathematical Biophysics in 1943.[9] The Pitts-McCulloch article subsequently became extremely influential for research on artificial intelligence and artificial neural networks.[10] His later efforts focused on the topology of biological systems, the formulation of fundamental principles in biology, relational biology, and set-theoretic and propositional-logic formulations of the hierarchical organization of organisms and human societies. In the second half of the 1960s, he introduced the concept of "organismic sets", which provided a unified framework for physics, biology and sociology. This was subsequently developed by other authors as organismic supercategories and complex systems biology.
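The Boolean paraphrase by Pitts and McCulloch mentioned above can be illustrated with a short sketch of a threshold logic unit. The weights and thresholds below are invented for illustration, inhibition is simplified to a negative weight rather than the absolute inhibition of the original paper, and Rashevsky's own neural models were continuous rather than Boolean.

```python
# Sketch of a McCulloch-Pitts style threshold unit, the Boolean paraphrase
# of Rashevsky's continuous neural models. Weights and thresholds are
# illustrative assumptions, not values from the original papers.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Elementary logic gates realized as single threshold units.
AND = lambda x1, x2: mp_neuron([x1, x2], weights=[1, 1], threshold=2)
OR  = lambda x1, x2: mp_neuron([x1, x2], weights=[1, 1], threshold=1)
# Inhibition modelled here as a negative weight (a simplification).
NOT = lambda x: mp_neuron([x], weights=[-1], threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
    print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```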
Some of Rashevsky's most outstanding PhD students, who earned their doctorates under his supervision, were George Karreman, Herbert Daniel Landahl, Clyde Coombs, Robert Rosen and Anatol Rapoport. In 1948, Anatol Rapoport took over Rashevsky's course in mathematical biology so that Rashevsky could teach mathematical sociology instead.[citation needed] However, his more advanced ideas and abstract relational biology concepts initially found little support among practicing experimental or molecular biologists, although current developments in complex systems biology clearly follow in his footsteps.[citation needed] In 1954 the budget for his Committee on Mathematical Biology was drastically cut; however, this was at least in part politically rather than scientifically motivated. A subsequent University of Chicago administration, notably represented by the genetics Nobel laureate George Wells Beadle, reversed the previous position in the 1960s and quadrupled the financial support for the research activities of Rashevsky's Committee on Mathematical Biology ("Reminiscences of Nicolas Rashevsky" by Robert Rosen, written in late 1972). There was, however, a later falling out between the retiring Rashevsky and the University of Chicago president over the successor to the chair of the Committee on Mathematical Biology; Rashevsky strongly supported Dr. Herbert Landahl, his first PhD student to graduate in mathematical biophysics, whereas the president wished to appoint a certain US biostatistician. The result was Rashevsky's move to the University of Michigan in Ann Arbor, Michigan, and his taking ownership of the well-funded Bulletin of Mathematical Biophysics.[citation needed] In 1969 he also formed a non-profit organization, "Mathematical Biology, Incorporated", which was to be the precursor of the Society for Mathematical Biology, with the purpose of "dissemination of information regarding Mathematical Biology".[citation needed] In his later years, after 1968, he again became very active in relational biology and, in 1970, organized and chaired the first international "Symposium of Mathematical Biology" in Toledo, Ohio, with the help of his former PhD student, Dr. Anthony Bartholomay, who had become the chairman of the first Department of Mathematical Medicine at Ohio University. The meeting was sponsored by Mathematical Biology, Inc.[citation needed] Rashevsky was greatly influenced and inspired both by Herbert Spencer's book The Principles of Biology (1898) and by J. H. Woodger's 'axiomatic (Mendelian) genetics' to launch his own quest for biological principles and to formulate mathematically precise principles and axioms of biology. He then developed his own highly original approach to the fundamental question of What is Life? that another theoretical physicist, Erwin Schrödinger, had asked before him from the narrower viewpoint of quantum theory in biology.[citation needed] He wished to reach this 'holy grail' of theoretical and mathematical biology, but his heavy workload during the late 1960s, despite his related health problems, took its toll and ultimately prevented him from reaching his goal before his death in 1972.
Rashevsky's relational approach represents a radical departure from reductionistic approaches, and it greatly influenced the work of his student Robert Rosen.[citation needed] In 1917, Nicolas Rashevsky joined the White Russian Navy, and in 1920 he and his wife, Countess Emily, had to flee for their lives to Constantinople, where he taught at the American College. In 1921 they moved to Prague, where he taught both special and general relativity.[citation needed] From Prague, he moved in the 1930s to Paris, France, and then to New York, Pittsburgh and Chicago in the US. His life was dedicated to the science that he founded, mathematical biology, and his wife Emily was very supportive and appreciative of his scientific efforts, accompanying him at the scientific meetings that he either initiated or attended.[citation needed] He cut a tall, impressive figure, with a slight Eastern European accent but a clear voice and clear thought, up to the day in 1972 when he died of a heart attack caused by coronary heart disease. His generosity was very well known and is often acknowledged in print by former associates and visitors. As chief editor of the BMB he had a declared policy of helping authors to optimize the presentation of submitted papers, and he provided many valuable suggestions to submitting authors.[citation needed] His suggested detailed changes, additions and further developments were like a real 'gold mine' for the submitting authors. He managed to stay aloof from scientific 'politics' most of the time, even in very adverse circumstances such as those during the McCarthy era, when completely unfounded political accusations were made against one or two members of his close research group. Not unlike another American theoretical physicist, Robert Oppenheimer, he then had much to lose for his loyal support of the wrongly accused researcher in his group.[citation needed] This article incorporates material from Nicolas Rashevsky on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. The article also incorporates additional data from planetphysics.org; both external entries are original, contributed objects in the public domain.
https://en.wikipedia.org/wiki/Nicolas_Rashevsky
The Society for Mathematical Biology (SMB)is an international association co-founded in 1972 in the United States byGeorge Karreman,Herbert Daniel Landahland (initially chaired) byAnthony Bartholomayfor the furtherance of joint scientific activities betweenMathematicsandBiologyresearch communities.[1][2]The society publishes theBulletin of Mathematical Biology,[3][4]as well as the quarterly SMB newsletter.[5] The Society for Mathematical Biology emerged and grew from the earlier school ofmathematical biophysics, initiated and supported by the Founder ofMathematical Biology,Nicolas Rashevsky.[6][7]Thus, the roots of SMB go back to the publication in 1939 of the first international journal of mathematical biology, previously entitled "The Bulletin of Mathematical Biophysics"—which was founded by Nicolas Rashevsky, and which is currently published by SMB under the name of "Bulletin of Mathematical Biology".[8]Professor Rashevsky also founded in 1969 the non-profit organization "Mathematical Biology, Incorporated"—the precursor of SMB. Another notable member of theUniversity of Chicagoschool of mathematical biology wasAnatol Rapoportwhose major interests were in developing basic concepts in the related area ofmathematical sociology, who cofounded theSociety for General Systems Researchand became a president of the latter society in 1965. Herbert D. Landahl was initially also a member of Rashevsky's school of mathematical biology, and became the second president of SMB in the 1980s; both Herbert Landahl andRobert Rosenfrom Rashevsky's research group were focused on dynamical systems approaches tocomplex systems biology, with the latter researcher becoming in 1980 the president of theSociety for General Systems Research. The Society for Mathematical Biology is governed by its Officers and Board of Directors, elected by the membership. Current SMB President isJane Heffernan(York University), and Past-President serving as vice president isHeiko Enderling(Moffitt Cancer Center). SMB secretary is Jon Forde (Hobart and William Smith Colleges), and treasurer is Stanca Ciupe (Virginia Tech). The current Board of Directors is composed ofRuth Baker(University of Oxford), Padmini Rangamani (University of California San Diego), Amina Eladdadi (The College of St Rose), Peter Kim (The University of Sydney), Robyn Araujo (Queensland University of Technology), and Amber Smith (University of Tennessee Health Science Center). In addition to its research and news publications, the society supports education in:mathematical biology, mathematical biophysics,complex systems biologyandtheoretical biologythrough sponsorship of several topic-focused graduate and postdoctoral courses. To encourage and stimulate young researchers in this relatively new and rapidly developing field of mathematical biology, the society awards several prizes, as well as lists regularly new open international opportunities for researchers and students in this field.[9] The society publishes theBulletin of Mathematical Biology. The Bulletin was founded by Dr.Nicolas Rashevsky, who is generally recognized as the founder of the first organized group in mathematical biology in the world. The journal was originally published as the Bulletin of Mathematical Biophysics, and quickly became the classical journal in general mathematical biology and served as the principal natural publication outlet for the majority of mathematical biologists. Many classical papers have appeared in the Bulletin, and several of these are familiar to biologists. 
It has become an important avenue for the exchange and transmission of new ideas and approaches to biological problems and incorporates both the quantitative and qualitative aspects of mathematical models and characterizations of biological processes and systems.[10] Dr. Rashevsky remained the editor of the Bulletin until his death on January 16, 1972.
https://en.wikipedia.org/wiki/Society_for_Mathematical_Biology
Insocial network analysisandmathematical sociology,interpersonal tiesare defined as information-carrying connections between people. Interpersonal ties, generally, come in three varieties:strong,weakorabsent. Weak social ties, it is argued, are responsible for the majority of the embeddedness and structure ofsocial networksin society as well as the transmission of information through these networks. Specifically, more novel information flows to individuals through weak rather than strong ties. Because our close friends tend to move in the same circles that we do, the information they receive overlaps considerably with what we already know. Acquaintances, by contrast, know people that we do not, and thus receive more novel information.[1] Included in the definition ofabsent ties, according to the American sociologistMark Granovetter, are those relationships (or ties) without substantial significance, such as "nodding" relationships between people living on the same street, or the "tie", for example, to a frequent vendor one would buy from. Such relations with familiar strangers have also been calledinvisible tiessince they are hardly observable, and are often overlooked as a relevant type of ties.[2]They nevertheless support people's sense of familiarity and belonging.[3]Furthermore, the fact that two people may know each other by name does not necessarily qualify the existence of a weak tie. If their interaction is negligible the tie may beabsentorinvisible. The "strength" of an interpersonal tie is a linear combination of the amount of time, the emotional intensity, the intimacy (or mutual confiding), and the reciprocal services which characterize each tie.[4] One of the earliest writers to describe the nature of the ties between people was German scientist and philosopher,Johann Wolfgang von Goethe. In his classic 1809 novella,Elective Affinities, Goethe discussed the "marriage tie". The analogy shows how strong marriage unions are similar in character to particles ofquicksilver, which find unity through the process ofchemical affinity. In 1954, the Russian mathematical psychologistAnatol Rapoportcommented on the "well-known fact that the likely contacts of two individuals who are closely acquainted tend to be more overlapping than those of two arbitrarily selected individuals". This argument became one of the cornerstones ofsocial network theory. In 1973, stimulated by the work of Rapoport and Harvard theoristHarrison White, Mark Granovetter publishedThe Strength of Weak Ties.[5][4]This paper is now recognized as one of the most influential sociology papers ever written.[6] To obtain data for his doctoral thesis, Granovetter interviewed dozens of people to find out how social networks are used to land new jobs. Granovetter found that most jobs were found through "weak" acquaintances. This pattern reminded Granovetter of his freshman chemistry lesson that demonstrated how "weak"hydrogen bondshold together many water molecules, which are themselves composed of atoms held together by "strong"covalent bonds. In Granovetter's view, a similar combination of strong and weak bonds holds the members of society together.[6]This model became the basis of his first manuscript on the importance of weak social ties in human life, published in May 1973.[4]According toCurrent Contents, by 1986, the Weak Ties paper had become a citation classic, being one of the most cited papers in sociology. 
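A minimal sketch of the "linear combination" view of tie strength quoted above follows. Granovetter left the exact weighting unspecified, so the equal weights and 0-to-1 scaled components used here are purely illustrative assumptions.

```python
# Sketch of tie strength as a linear combination of the four components
# Granovetter names: time, emotional intensity, intimacy, reciprocal services.
# The equal weights and the 0-1 scaling are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Tie:
    time_spent: float           # share of time spent together, scaled 0..1
    emotional_intensity: float  # 0..1
    intimacy: float             # mutual confiding, 0..1
    reciprocal_services: float  # 0..1

def tie_strength(tie: Tie, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    components = (tie.time_spent, tie.emotional_intensity,
                  tie.intimacy, tie.reciprocal_services)
    return sum(w * c for w, c in zip(weights, components))

close_friend = Tie(0.6, 0.9, 0.8, 0.7)
acquaintance = Tie(0.05, 0.1, 0.0, 0.1)
print(tie_strength(close_friend))   # ~0.75 -> "strong"
print(tie_strength(acquaintance))   # ~0.06 -> "weak"
```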
In a related line of research, in 1969 the anthropologist Bruce Kapferer published "Norms and the Manipulation of Relationships in a Work Context" after doing field work in Africa. In the document, he postulated the existence of multiplex ties, characterized by multiple contexts in a relationship.[7][8] In telecommunications, a multiplexer is a device that allows a transmission medium to carry a number of separate signals. In social relations, by extrapolation, "multiplexity" is the overlap of roles, exchanges, or affiliations in a social relationship.[9] In 1970, Granovetter submitted his doctoral dissertation to Harvard University, entitled "Changing Jobs: Channels of Mobility Information in a Suburban Community".[10] The thesis of his dissertation illustrated the conception of weak ties. For his research, Granovetter crossed the Charles River to Newton, Massachusetts, where he surveyed 282 professional, technical, and managerial workers in total; 100 were personally interviewed regarding the type of tie between the job changer and the contact person who provided the necessary information. Tie strength was measured in terms of how often the respondent saw the contact person during the period of the job transition, using the following assignment: often (at least twice a week), occasionally (more than once a year but less than twice a week), and rarely (once a year or less). Of those who found jobs through personal contacts (N=54), 16.7% reported seeing their contact often, 55.6% occasionally, and 27.8% rarely.[10] When asked whether a friend had told them about their current job, the most frequent answer was "not a friend, an acquaintance". The conclusion from this study is that weak ties are an important resource in occupational mobility. Seen from a macro point of view, weak ties play a role in affecting social cohesion. In social network theory, social relationships are viewed in terms of nodes and ties. Nodes are the individual actors within the networks, and ties are the relationships between the actors. There can be many kinds of ties between the nodes. In its simplest form, a social network is a map of all of the relevant ties between the nodes being studied. The "weak tie hypothesis" argues, using a combination of probability and mathematics, as originally stated by Anatol Rapoport in 1957, that if A is linked to both B and C, then there is a greater-than-chance probability that B and C are linked to each other:[11] that is, if we consider any two randomly selected individuals, such as A and B, from the set S = {A, B, C, D, E, ...} of all persons with ties to either or both of them, then, if A is strongly tied to both B and C, the B–C tie is, according to these probability arguments, always present. The absence of the B–C tie in this situation would create what Granovetter calls the forbidden triad; in other words, the B–C tie, according to this logic, is always present, whether weak or strong, given the other two strong ties (a minimal sketch of detecting violations of this rule appears at the end of this article). In this direction, the "weak tie hypothesis" postulates that clumps or cliques of social structure will form, bound predominantly by "strong ties", and that "weak ties" will function as the crucial bridge between any two densely knit clumps of close friends.[12] It may follow that individuals with few bridging weak ties will be deprived of information from distant parts of the social system and will be confined to the provincial news and views of their close friends. However, having a large number of weak ties can mean that novel information is effectively "swamped" by a high volume of information, even crowding out strong ties.
The arrangement of links in a network may matter as well as the number of links. Further research is needed to examine the ways in which types of information, numbers of ties, quality of ties, and trust levels interact to affect the spreading of information.[5] According to David Krackhardt,[13] there are some problems with the Granovetter definition. The first refers to the fact that Granovetter's definition of the strength of a tie is a curvilinear prediction, and his question is "how do we know where we are on this theoretical curve?". The second refers to the effective character of strong ties. Krackhardt notes that there are subjective criteria in the definition of the strength of a tie, such as emotional intensity and intimacy. He argued that strong ties are very important in times of severe change and uncertainty: "People resist change and are uncomfortable with uncertainty. Strong ties constitute a base of trust that can reduce resistance and provide comfort in the face of uncertainty. Thus it will be argued that change is not facilitated by weak ties, but rather by a particular type of strong tie." He called this particular type of strong tie philos and defined a philos relationship as one that meets three necessary and sufficient conditions: interaction (the parties interact with one another), affection (they like one another), and time (they have a history of interaction extending over some period). The combination of these qualities predicts trust and predicts that strong ties will be the critical ones in generating trust and discouraging malfeasance. When it comes to major change, change that may threaten the status quo in terms of power and the standard routines of how decisions are made, trust is required. Thus, change is the product of philos. Starting in the late 1940s, Anatol Rapoport and others developed a probabilistic approach to the characterization of large social networks in which the nodes are persons and the links are acquaintanceship. During these years, formulas were derived that connected local parameters, such as closure of contacts and the supposed existence of the B–C tie, to the global network property of connectivity.[11] Moreover, acquaintanceship is (in most cases) a positive tie. However, there are also negative ties, such as animosity between persons. In considering the relationships among three parties, Fritz Heider initiated a balance theory of relations. In a larger network represented by a graph, the totality of relations is represented by a signed graph. This effort led to an important and non-obvious structure theorem for signed graphs,[14] published by Frank Harary in 1953. A signed graph is called balanced if the product of the signs of all relations in every cycle is positive, and unbalanced if the product is ever negative. The theorem says that if a network of interrelated positive and negative ties is balanced, then it consists of two subnetworks such that each has positive ties among its nodes and negative ties between nodes in the distinct subnetworks. In other words, "my friend's enemy is my enemy".[15] The imagery here is of a social system that splits into two cliques. There is, however, a special case where one of the two subnetworks may be empty, which might occur in very small networks. In these two developments, we have mathematical models bearing upon the analysis of structure. Other early influential developments in mathematical sociology pertained to process. For instance, in 1952 Herbert A. Simon produced a mathematical formalization of a published theory of social groups by constructing a model consisting of a deterministic system of differential equations.
A formal study of the system led to theorems about the dynamics and the implied equilibrium states of any group. In a footnote, Mark Granovetter defines what he considers absent ties: "Included in 'absent' are both the lack of any relationship and ties without substantial significance, such as a 'nodding' relationship between people living on the same street, or the 'tie' to the vendor from whom one customarily buys a morning newspaper. That two people 'know' each other by name need not move their relation out of this category if their interaction is negligible. In some contexts, however (disasters, for example), such 'negligible' ties might usefully be distinguished from non-existent ties. This is an ambiguity caused by substitution, for convenience of exposition, of discrete values for an underlying continuous variable."[4] The concept of the invisible tie was proposed to overcome the contradiction between the adjective "absent" and this definition, which suggests that such ties exist and might "usefully be distinguished" from the absence of ties.[2] From this perspective, the relationship between two familiar strangers, such as two people living on the same street, is not absent but invisible. Indeed, because such ties involve only limited interaction (as in the case of 'nodding relationships'), if any, they are hardly observable and are often overlooked as a relevant type of tie.[2] Absent or invisible ties nevertheless support people's sense of familiarity and belonging.[3] Adding any network-based means of communication, such as a new IRC channel, a social support group, or a Webboard, lays the groundwork for connectivity between formerly unconnected others. Similarly, laying an infrastructure, such as the Internet, intranets, wireless connectivity, grid computing, telephone lines, cellular service, or neighborhood networks, when combined with the devices that access them (phones, cellphones, computers, etc.), makes it possible for social networks to form. Such infrastructures make a connection available technically, even if not yet activated socially. These technical connections support latent social network ties, used here to indicate ties that are technically possible but not yet activated socially. They are only activated, i.e. converted from latent to weak, by some sort of social interaction between members, e.g. by telephoning someone, attending a group-wide meeting, reading and contributing to a Webboard, emailing others, etc. Given that such connectivity involves unrelated persons, the latent tie structure must be established by an authority beyond the persons concerned. Internet-based social support sites fit this profile: they are started by individuals with a particular interest in a subject, who may begin by posting information and providing the means for online discussion.[16] Granovetter's 1973 work proved to be crucial to the individualistic approach of social network theory, as seen in the number of references to it in other papers.[17] His argument asserts that acquaintances (weak ties)[4][12] are less likely than close friends and family (strong ties) to be socially involved with one another. By focusing on weak ties rather than on strong ties, Granovetter highlights the importance of acquaintances in social networks.
He argues that the only thing that can connect two social networks of strong ties is a weak tie: "... these clumps [strong-tie networks] would not, in fact, be connected to one another at all were it not for the existence of weak ties."[4]: 1363[12]: 202 It follows that in an all-covering social network, individuals with only a few weak links are at a disadvantage compared to individuals with multiple weak links, as they are disconnected from the other parts of the network. Another interesting observation Granovetter makes is that the increasing specialization of individuals creates the necessity for weak ties, since all the other specialist information and knowledge resides in large social networks consisting predominantly of weak ties.[4] Cross et al. (2001) confirm this by presenting six features which differentiate effective and ineffective knowledge-sharing relations: "1) knowing what the other person knows and thus when to turn to them; 2) being able to gain timely access to that person; 3) willingness of the person sought out to engage in the problem solving rather than dump information; 4) a degree of safety in the relationship that promoted learning and creativity; 5) the factors put by Geert Hofstede; and 6) individual characteristics, such as openness" (pp 5). This fits in nicely with Granovetter's argument that "Weak ties provide people with access to information and resources beyond those available in their own social circle; but strong ties have greater motivation to be of assistance and are typically more easily available."[12]: 209 This weak/strong ties paradox has been elaborated by myriad authors. The extent to which individuals are connected to others is called centrality. Sparrowe & Liden (1997) argue that the position of a person in a social network confers advantages such as organizational assimilation and job performance (Sparrowe et al., 2001); Burt (1992) expects it to result in promotions; Brass (1984) associates centrality with power, and Friedkin (1993) with influence in decision making. Other authors, such as Krackhardt and Porter (1986), contemplate the disadvantages of position in social networks, such as organizational exit (see also Sparrowe et al., 2001), and Wellman et al. (1988) introduce the use of social networks for emotional and material support. Blau and Fingerman, drawing on these and other studies, refer to weak ties as consequential strangers, positing that they provide some of the same benefits as intimates as well as many distinct and complementary functions.[18] In the early 1990s, the US social economist James D. Montgomery contributed to economic theories of network structures in the labour market. In 1991, Montgomery incorporated network structures into an adverse selection model to analyze the effects of social networks on labour market outcomes.[19] In 1992, Montgomery explored the role of "weak ties", which he defined as non-frequent and transitory social relations, in the labour market.[20][21] He demonstrated that weak ties are positively correlated with higher wages and higher aggregate employment rates.[citation needed]
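As a minimal sketch of the forbidden-triad idea referenced earlier in this article, the following code scans a small hypothetical network, labelled with strong and weak ties, for cases where one actor is strongly tied to two others who share no tie at all. The example network is invented for illustration.

```python
# Sketch: detect Granovetter's "forbidden triads" -- an actor A strongly tied
# to both B and C while no B-C tie (weak or strong) exists.
from itertools import combinations

# Hypothetical undirected network: pair of nodes -> "strong" | "weak"
ties = {
    frozenset({"A", "B"}): "strong",
    frozenset({"A", "C"}): "strong",
    frozenset({"C", "D"}): "weak",
    frozenset({"D", "E"}): "strong",
}

def neighbours(node, strength=None):
    """Return the nodes tied to `node`, optionally filtered by tie strength."""
    out = set()
    for pair, s in ties.items():
        if node in pair and (strength is None or s == strength):
            out |= pair - {node}
    return out

def forbidden_triads():
    nodes = set().union(*ties)
    found = []
    for a in nodes:
        for b, c in combinations(sorted(neighbours(a, "strong")), 2):
            if frozenset({b, c}) not in ties:   # B-C tie absent
                found.append((a, b, c))
    return found

print(forbidden_triads())   # [('A', 'B', 'C')] -- B and C "should" be linked
```

Under the weak-tie hypothesis such configurations should be rare in empirical data; counting them in a measured network is one simple way to check how closely the hypothesis holds.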
https://en.wikipedia.org/wiki/Interpersonal_ties
James Samuel Coleman (May 12, 1926 – March 25, 1995) was an American sociologist, theorist, and empirical researcher, based chiefly at the University of Chicago.[1][2] He served as president of the American Sociological Association in 1991–1992. He studied the sociology of education and public policy, and was one of the earliest users of the term social capital.[3] He may be considered one of the original neoconservatives in sociology.[4] His work Foundations of Social Theory (1990) influenced countless sociological theories, and his works The Adolescent Society (1961) and the "Coleman Report" (Equality of Educational Opportunity, 1966) were two of the most cited books in educational sociology. The landmark Coleman Report helped transform educational theory, reshaped national education policies, and influenced public and scholarly opinion regarding the role of schooling in determining equality and productivity in the United States.[3][5] The son of James and Maurine Coleman, he spent his early childhood in Bedford, Indiana, before moving to Louisville, Kentucky. After graduating in 1944, he enrolled in a small school in Virginia, but left to enlist in the US Navy during World War II. After he was discharged from the US Navy in 1946, he enrolled in Indiana University. He eventually transferred, and received his bachelor's degree in chemical engineering from Purdue University in 1949. He had initially intended to study chemistry but quickly became fascinated with sociology as he navigated his way through university life. He worked at Eastman Kodak until 1952.[6] He became interested in sociology and pursued his degree at Columbia University. During his time there, he spent two years as a research assistant with the Bureau of Applied Social Research and published a chapter in Mathematical Thinking in the Social Sciences, which was edited by Paul Lazarsfeld. He went on to receive his doctorate from Columbia University in 1955.[6] He is best known today for his work on the massive study that produced "Equality of Educational Opportunity" (EEO), or the Coleman Report; his intellectual appetite was prodigious.[7] In 1949 he married Lucille Richey, with whom he had three children, Thomas, John, and Stephen. Lucille and James divorced in 1973, and he later married his second wife, Zdzislawa Walaszek, with whom he had one son, Daniel Coleman. He died on March 25, 1995, at University Hospital in Chicago, Illinois, and was survived by his wife Zdzislawa Walaszek and his sons. Coleman achieved success with two studies on problem solving: Introduction to Mathematical Sociology (1964) and Mathematics of Collective Action (1973). He was a fellow at the Center for Advanced Study in the Behavioral Sciences and taught at the University of Chicago. In 1959, he moved to Johns Hopkins University, where he served as an associate professor and founded the sociology department. In 1965 he became involved in Project Camelot, an academic research project funded by the United States military through the Special Operations Research Office to train in counter-insurgency techniques.
He eventually became a professor of social relations, remaining until 1973, when he returned to Chicago to teach as University Professor of Sociology and Education at the University of Chicago.[6] During the mid-1960s and early 1970s, Coleman was elected a member of the American Academy of Arts and Sciences, the American Philosophical Society, and the United States National Academy of Sciences.[8][9][10] Proceeding on the assumption that the study of human society can become a true science, Coleman examined the contribution that various mathematical techniques might make to the systematic conceptual elaboration of social behavior, and argued that mathematics becomes useful in sociology only where the logical structure of social theory can be made explicit.[11] Upon his return, he became professor and senior study director at the National Opinion Research Center. In 1991, Coleman was elected the eighty-third president of the American Sociological Association.[12] In 2001, Coleman was named among the top 100 American intellectuals, as measured by academic citations, in Richard Posner's book Public Intellectuals: A Study of Decline.[13] Over his lifetime he published nearly 30 books and more than 300 articles and book chapters, which contributed to the understanding of education in the United States.[14] He was influenced by Ernest Nagel and Paul Lazarsfeld, both of whom interested Coleman in mathematical sociology, and Robert Merton, who introduced Coleman to Émile Durkheim and Max Weber.[6] Coleman is associated with adolescence, corporate action and rational choice. He shares common ground with the sociologists Peter Blau, Daniel Bell, and Seymour Martin Lipset, with whom Coleman first did research after obtaining his PhD.[15] Coleman is widely cited in the field of sociology of education. In the 1960s, during his time teaching at Johns Hopkins University, Coleman and several other scholars were commissioned by the National Center for Education Statistics[6] to write a report on educational equality in the US. It was one of the largest studies in history, with more than 650,000 students in the sample, and the result was a massive report of over 700 pages. The 1966 report, titled Equality of Educational Opportunity (otherwise known as the "Coleman Report"), fueled debates about "school effects" that are still relevant today.[16] The report is commonly presented as evidence that school funding has little effect on student achievement, a key finding of the report and of subsequent research.[17][18][5] It found that, in terms of physical facilities, formal curricula, and other measurable criteria, there was little difference between black and white schools. Also, a significant gap in achievement scores between black and white children already existed in the first grade, and despite the similar conditions of black and white schools, the gap became even wider by the end of elementary school. The only consistent variable explaining the differences in scores within each racial or ethnic group was the educational and economic attainment of the parents.[19] Therefore, student background and socioeconomic status were found to be more important in determining a student's educational outcomes. Specifically, the key factors were the attitudes toward education of parents and caregivers at home and of peers at school.
Differences in the quality of schools and teachers did have a small impact on student outcomes.[17][18][5] The study cost approximately 1.5 million dollars and remains one of the largest studies in history, involving roughly 600,000 students and 60,000 teachers in the research sample. The participants included black, Native American, Mexican American, poor white, Puerto Rican and Asian students. The study was a driving factor in the debate over "school effects", a debate that continues to this day. Among its major findings and controversies were that black students' dropout rates were twice as high as those of white students, and that poor home environments were a major influence on poor academic performance among minorities. Eric Hanushek criticized the focus on the statistical methodology and the estimation of the impacts of various factors on achievement, which took attention away from the achievement comparisons in the Coleman Report. The study tested students around the United States, and the differences in achievement by race and region were enormous: the average black twelfth-grade student in the rural South was achieving at the level of a seventh-grade white student in the urban Northeast. At the fiftieth anniversary of the report's publication, Hanushek assessed the closure of the black-white achievement gap. He found that achievement differences had narrowed, largely from improvements in the South, but that at the pace of the previous half-century it would take two and a half centuries to close the mathematics achievement gap.[20][21] In Foundations of Social Theory (1990), Coleman discusses his theory of social capital, the set of resources found in family relations and in a community's social organization.[3][22] Coleman believed that social capital is important for the development of a child or young person, and that functional communities are important as sources of social capital that can support families in terms of youth development.[3] He discusses three main types of capital: human, physical, and social.[23] Human capital is an individual's skills, knowledge, and experience, which determine their value in society.[24] Physical capital, being completely tangible and generally a private good, originates from the creation of tools to facilitate production. Together with social capital, these types of investment create the three main aspects of society's exchange of capital.[25] According to Coleman, social capital and human capital often go hand in hand: by having certain skill sets, experiences, and knowledge, an individual can gain social status and so receive more social capital.[22] Coleman himself was opposed to segregation. When he and his wife Lucille Richey brought their three children, John, Tom, and Steve, to a whites-only amusement park outside of Baltimore, they attempted to enter the park with a black family and, as anticipated, were arrested along with approximately 300 other demonstrators. Coleman was a pioneer in the construction of mathematical models in sociology with his book Introduction to Mathematical Sociology (1964).
His later treatise, Foundations of Social Theory (1990), made major contributions toward a more rigorous form of theorizing in sociology based on rational choice.[3][26][27] Coleman wrote more than thirty books and 300 articles.[3] He also created an educational corporation that developed and marketed "mental games" aimed at improving the abilities of disadvantaged students.[28] Coleman made it a practice to send his most controversial research findings "to his worst critics" prior to their publication, calling it "the best way to ensure validity."[citation needed] At the time of his death, he was engaged in a long-term study, High School and Beyond, which examined the lives and careers of 75,000 people who had been high school juniors and seniors in 1980.[29] In 1966, Coleman presented the findings of his research in a report to the U.S. Congress, in which he addressed how to reach a racial balance in public schools; his most controversial finding was that poor black children did better academically when integrated into middle-class schools. Coleman published lasting theories of education, which helped shape the field.[30][31] With his focus on the allocation of rights, one can understand the conflict between rights. Towards the end of his life, Coleman questioned how to make education systems more accountable, which caused educators to question their use and interpretation of standardized testing.[32] Coleman's publication of the "Coleman Report" included greatly influential findings that pioneered aspects of the desegregation of American public schools, and his theories of integration also contributed. He also raised the issue of narrowing the educational gap between those who had money and those who did not, arguing that a well-rounded student body can greatly benefit a student's educational experience.[3] As time passed, many of the people who were interested in and trusted his findings, including Coleman himself, became reluctant to follow some of his most striking conclusions. Coleman's later studies suggested that desegregation efforts via busing failed due to "white flight" from areas into which students were bused. Coleman devoted himself to teaching, with the intention of sharing his passion for sociology and continuing his legacy despite the controversy surrounding his research. As one of the pioneers of mathematical sociology, Coleman was frequently asked to review papers submitted to scholarly journals. Although he had little time on his hands as a well-known sociologist in the United States, he built a seminar on the mathematics of sociology to train more people with the capability and education necessary to broaden and strengthen the field.
https://en.wikipedia.org/wiki/James_Samuel_Coleman
James Douglas Montgomery (born April 13, 1963) is professor of sociology and economics at the University of Wisconsin–Madison. He received his Ph.D. in economics from the Massachusetts Institute of Technology. He has applied game-theoretic models and non-monotonic logic to the formal analysis and description of social theories and sociological phenomena. He was the recipient of the James Coleman Award (1999) for his paper "Toward a Role-Theoretic Conception of Embeddedness". The paper is a major contribution toward the formalization of social theories and the sociological interpretation of game theory, since it presents a repeated-game model in which the players are not individuals (as traditionally conceived in economic models) but assume social roles, such as a profit-maximizing "businessperson" and a nonstrategic "friend" (Montgomery, 1999). In the early 1990s, Montgomery contributed to economic theories of network structures in the labor market. In 1991, he incorporated network structures into an adverse selection model to analyze the effects of social networks on labor market outcomes.[1] In 1992, he explored the role of "weak ties", which he defined as non-frequent and transitory social relations, in the labor market.[2][3] He demonstrated that weak ties are positively related to higher wages and higher aggregate employment rates. He is currently[when?] working on integrating non-monotonic logic with social network analysis in the context of sociological theories.
https://en.wikipedia.org/wiki/James_D._Montgomery_(economist)
Thomas J. Fararo(February 11, 1933 - August 20, 2020) was Distinguished ServiceProfessor Emeritusat theUniversity of Pittsburgh. After earning aPh.D.insociologyatSyracuse Universityin 1963, he received a three-year postdoctoralfellowshipfor studies inpureandapplied mathematicsatStanford University(1964–1967). In 1967, he joined thefacultyof University of Pittsburgh; during 1972–1973, he was visiting professor at the University of York in England.[1] Fararo is listed inAmerican Men and Women of Science,Who's Who in America, andWho's Who in Frontier Science and Technology. In 1998, he received the Distinguished Career Award from the Mathematical Sociology section of theAmerican Sociological Association. In addition to over a dozen books, Fararo has published over two dozen book chapters, over one dozen articles in reference works, and over 50 journal articles. Some of his books are edited works that relate to his career-long interest in making mathematical ideas relevant to the development of sociological theory. Fararo has served on the editorial boards of theAmerican Journal of Sociology, theAmerican Sociological Review, theJournal of Mathematical Sociology,Social Networks,Sociological Forum, andSociological Theory. Fararo has been both an originator and an explicator of ideas and methods relating to the use of formal methods in sociological theory. In his original work, he has employed theories and methods relating to social networks in combination with a focus on social processes. This combination is illustrated by the theoretical method he has called E-state Structuralism (where E stands for Expectations) with work on this done with former student John Skvoretz. He often employed the axiomatic method in such work, as in the 2003 monograph with his student Kenji Kosaka that sets out a formal theory of how images of stratification are generated. In his expository work, he has attempted to move the field of sociology closer to a conception of theorizing that is more formal, as in his 1973 bookMathematical Sociologyand in various papers and edited books, including the 1984 volumeMathematical Ideas and Sociological Theory. One of his objectives has been to articulate a coherent vision of the core of sociological theory: its philosophy, its key theoretical problems, and its methods, especially those employing formal representation. This objective is represented in his 1989 book,The Meaning of General Theoretical Sociology: Tradition and Formalization. The general vision that informs Fararo's theoretical work is "the spirit of unification," a theme that is set out inSocial Action Systems: Foundation and Synthesis in Sociological Theory, a 2001 book that analyzes key theories from the standpoint of the aspiration of synthesis, moving toward more comprehensive theories of social life.
https://en.wikipedia.org/wiki/Thomas_Fararo
In economics, Beckstrom's law is a model or theorem formulated by Rod Beckstrom. It purports to answer "the decades-old question of 'how valuable is a network'", and states in summary that "The value of a network equals the net value added to each user's transactions conducted through that network, summed over all users." According to its creator, this law can be used to value any network, be it a social network, a computer network, a support group, or even the Internet as a whole.[1] The model values the network by looking from the edge of the network at all of the transactions conducted and the value added to each. One way to gauge the value the network adds to each transaction is to imagine the network being shut off and to ask what the additional transaction costs or losses would be. It can thus be compared to the value of a pizza delivery service offered to its customers: if the pizza delivery service shut down, the social value generated by its deliveries would decline, and people would either go hungry or go elsewhere. Similarly, a potluck derives its total enjoyment value from the net value produced by each participant's dish. The success of such a gathering hinges on increasing the number of independent guests and their pots, thereby maximizing the amount of "luck" any one guest would have in achieving a satisfactory meal. Assuming one pot per person, a potluck with a set maximum number of guests could produce only a relatively small amount of total potential group satisfaction. Beckstrom's Law differs from Metcalfe's law, Reed's law, and other concepts that propose that the value of a network is based purely on the size of the network (and, in Metcalfe's law, on one other variable). According to Rod Beckstrom, the most significant improvement of Beckstrom's Law over Metcalfe's Law is its applicability to current experiences on the Internet: Metcalfe's Law does not account for service degradation due to a high number of users or for bad actors who steal value from the network.[2] The net present value V of any network j to any individual i is equal to the sum of the net present value of the benefit of all transactions less the net present value of the costs of all transactions conducted on the network over any given period of time t, as shown in the following equation: V_{i,j} = \sum_{k} (B_{i,j,k} - C_{i,j,k}) / (1 + r_k)^{t_k}, where B_{i,j,k} is the benefit of transaction k to individual i on network j, C_{i,j,k} is the cost of that transaction, r_k is the discount rate of interest to the time of transaction k, and t_k is the elapsed time to transaction k. The value of the entire network is the sum of the value to all users, who are defined as all parties conducting transactions on that network. Beckstrom's Law also gives an indication of the community dynamics that affect the experience of the individual. If consumers use services that are funded by a community of people, every member of that community contributes to delivering the service. As more members join the community, they help fund the services through their contributions; however, these members also demand services for themselves, which can ultimately lead to delays and deteriorating quality of the community service. For example, a larger number of members of a golf club leads to more revenue for the club, but it also contributes to overcrowded courses and delays, which has a negative effect on the golfing experience. Beckstrom's Law provides a model that could identify the point at which the marginal effect of each new member's contribution is zero and beyond which adding an additional member makes everybody else worse off.[3]
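The summation above and the golf-club example can be made concrete with a small sketch. The numbers and the linear congestion term are hypothetical and are not Beckstrom's data; the only point is that summing each member's discounted benefit-minus-cost, while letting crowding erode the per-transaction benefit, produces a point at which adding one more member stops increasing the network's value.

```python
def member_value(transactions, discount_rate=0.05):
    """Net present value of one member's transactions: (benefit - cost) discounted by elapsed time."""
    return sum((b - c) / (1 + discount_rate) ** t for b, c, t in transactions)

def network_value(n_members, base_benefit=100.0, cost=20.0, congestion=0.5, years=3):
    """Beckstrom-style total value: sum of member values, with crowding reducing per-transaction benefit."""
    benefit = max(base_benefit - congestion * n_members, 0.0)
    per_member = member_value([(benefit, cost, t) for t in range(1, years + 1)])
    return n_members * per_member

# Locate the point at which the marginal value of one more member is no longer positive.
previous = network_value(1)
for n in range(2, 300):
    current = network_value(n)
    if current - previous <= 0:
        print(f"marginal value turns non-positive at member {n}")
        break
    previous = current
```

With these invented parameters the threshold falls at the 81st member; the specific number is meaningless, but the mechanism is the one described for the golf club, where each additional member eventually makes everybody else worse off.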
https://en.wikipedia.org/wiki/Beckstrom%27s_law
Reed's law is the assertion of David P. Reed that the utility of large networks, particularly social networks, can scale exponentially with the size of the network.[1] The reason for this is that the number of possible sub-groups of network participants is 2^N − N − 1, where N is the number of participants. This grows much more rapidly than either the number of participants, N, or the number of possible pairwise connections, N(N − 1)/2 (the quantity underlying Metcalfe's law), so that even if the utility of groups available to be joined is very small on a per-group basis, eventually the network effect of potential group membership can dominate the overall economics of the system. Given a set A of N people, there are 2^N possible subsets. This is not difficult to see, since we can form each possible subset by simply choosing, for each element of A, one of two possibilities: whether to include that element or not. However, this count includes the (one) empty set and the N singletons, which are not properly subgroups. So 2^N − N − 1 subsets remain, which is still exponential, like 2^N. Reed elaborated on the argument in "The Law of the Pack" (Harvard Business Review, February 2001, pp. 23–24). Reed's Law is often mentioned when explaining the competitive dynamics of internet platforms. Because the law states that a network becomes more valuable when people can easily form subgroups to collaborate, and that this value increases exponentially with the number of connections, a business platform that reaches a sufficient number of members can generate network effects that dominate the overall economics of the system.[2] Other analysts of network value functions, including Andrew Odlyzko, have argued that both Reed's Law and Metcalfe's Law[3] overstate network value because they fail to account for the restrictive impact of human cognitive limits on network formation. According to this argument, the research around Dunbar's number implies a limit on the number of inbound and outbound connections a human in a group-forming network can manage, so that the actual maximum-value structure is much sparser than the set-of-subsets measured by Reed's law or the complete graph measured by Metcalfe's law.
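The subgroup count, and the contrast with the other network-value laws mentioned here, can be checked with a short sketch; the brute-force enumeration is feasible only for small N and serves purely as a sanity check on the closed form.

```python
from itertools import combinations

def subgroups_by_enumeration(n):
    """Count every subset of an n-person set that has at least two members."""
    return sum(1 for size in range(2, n + 1) for _ in combinations(range(n), size))

def sarnoff(n):
    return n                     # value proportional to audience size
def metcalfe(n):
    return n * (n - 1) // 2      # number of possible pairwise connections
def reed(n):
    return 2 ** n - n - 1        # number of possible subgroups

# The closed form 2^N - N - 1 matches direct enumeration for small N.
for n in range(1, 11):
    assert reed(n) == subgroups_by_enumeration(n)

for n in (10, 20, 30):
    print(f"N={n:>2}  Sarnoff={sarnoff(n):>3}  Metcalfe={metcalfe(n):>4}  Reed={reed(n):,}")
```

Even at N = 30 the subgroup count is already above a billion, which is why critics such as Odlyzko argue that human cognitive limits, rather than raw combinatorics, bound the value that can actually be realized.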
https://en.wikipedia.org/wiki/Reed%27s_law
David Sarnoff(February 27, 1891 – December 12, 1971) was a Russian[4]and American businessman who played an important role in the American history ofradioandtelevision. He led theRadio Corporation of America(RCA) for most of his career in various capacities from shortly after its founding in 1919 until his retirement in 1970. He headed a conglomerate oftelecommunicationsandmediacompanies, including RCA andNBC, that became one of the largest in the world. Named a ReserveBrigadier Generalof theSignal Corpsin 1945, Sarnoff thereafter was widely known as "The General".[3] David Sarnoff was born to aJewishfamily inUzlyany, a small town inMinsk Governorate,Russian Empire[4](today part ofBelarus), the son of Abraham Sarnoff and Leah Privin. Abraham emigrated to theUnited Statesand raised funds to bring the family. Sarnoff spent much of his early childhood in acheder(oryeshiva) studying and memorizing theTorah. He emigrated with his mother and three brothers and one sister toNew York Cityin 1900, where he helped support his family by selling newspapers before and after his classes at theEducational Alliance. In 1906 his father became incapacitated bytuberculosis, and at age 15 Sarnoff went to work to support the family.[5]He had planned to pursue a full-time career in the newspaper business, but a chance encounter led to a position as an office boy at theCommercial Cable Company. When his superior refused him leave forRosh Hashanah, he joined theMarconi Wireless Telegraph Company of Americaon September 30, 1906, and started a career of over60 yearsinelectronic communications. Over the next13 years, Sarnoff rose from office boy to commercial manager of the company, learning about the technology and the business of electronic communications on the job and in libraries. He also served at Marconi stations on ships and posts onSiasconset,Nantucketand the New YorkWanamaker Department Store. In 1911, he installed and operated the wireless equipment on a ship hunting seals offNewfoundland and Labrador, and used the technology to relay the first remote medical diagnosis from the ship's doctor to a radio operator atBelle Islewith an infected tooth. The following year, he led two other operators at the Wanamaker station in an effort to confirm the fate of theTitanic.[1]Sarnoff later exaggerated his role as the sole hero who stayed by histelegraph keyfor three days to receive information on theTitanic's survivors.[5][6]Schwartzquestions whether Sarnoff, who was a manager of the telegraphers by the time of the disaster, was working the key although that brushes aside concerns about corporate hierarchy. The event began on a Sunday when the store would have been closed.[7] Over the next two years Sarnoff earned promotions to chief inspector and contracts manager for a company whose revenues swelled after Congress passed legislation mandating continuous staffing of commercial shipboard radio stations. That same year Marconi won a patent suit that gave it the coastal stations of theUnited Wireless Telegraph Company. Sarnoff also demonstrated the first use of radio on a railroad line, theLackawanna RailroadCompany's link betweenBinghamton, New York, andScranton, Pennsylvania; and permitted and observedEdwin Armstrong's demonstration of his regenerative receiver at the Marconi station atBelmar, New Jersey. Sarnoff usedH. J. Round's hydrogen arc transmitter to demonstrate the broadcast of music from the New York Wanamaker station. 
This demonstration and the AT&T demonstrations in 1915 of long-distance wireless telephony inspired the first of many memos to his superiors on applications of current and future radio technologies. Sometime late in 1915 or in 1916 he proposed to the company's president, Edward J. Nally, that the company develop a "radio music box" for the "amateur" market of radio enthusiasts.[6][8] Nally deferred action on the proposal because of the expanded volume of business during World War I. Throughout the war years, Sarnoff remained Marconi's Commercial Manager,[3] including oversight of the company's factory in Roselle Park, New Jersey. Unlike many who were involved with early radio communications, who often viewed radio as point-to-point, Sarnoff saw the potential of radio as point-to-mass: one person (the broadcaster) could speak to many (the listeners). When Owen D. Young of General Electric arranged the purchase of American Marconi and reorganized it as the Radio Corporation of America, a radio patent monopoly, Sarnoff realized his dream and revived his proposal in a lengthy memo on the company's business and prospects. His superiors again ignored him, but he contributed to the rising postwar radio boom by helping arrange for the broadcast of a heavyweight boxing match between Jack Dempsey and Georges Carpentier in July 1921. Up to 300,000 people listened to the broadcast of the fight, and demand for home radio equipment bloomed that winter.[9] By the spring of 1922, Sarnoff's prediction of popular demand for radio broadcasting had come to pass, and over the next few years he gained much stature and influence. In 1926, RCA purchased its first radio station (WEAF, New York) and launched the National Broadcasting Company (NBC), the first radio network in America. Four years later, Sarnoff became president of RCA. NBC had by that time split into two networks, the Red and the Blue. The Blue Network eventually became ABC Radio.[1] Sarnoff is often inaccurately referred to as the founder of both RCA and NBC, but he was in fact founder of only NBC.[5] Sarnoff was instrumental in building and establishing the AM broadcasting radio business that became the preeminent public radio standard for the majority of the 20th century. Sarnoff negotiated successful contracts to form Radio-Keith-Orpheum (RKO), a film production and distribution company.[5] Essential elements in that new company were RCA, the Film Booking Offices of America (FBO), and the Keith-Albee-Orpheum (KAO) theater chain.[10] When Sarnoff was put in charge of radio broadcasting at RCA, he soon recognized the potential for television, i.e., the combination of motion pictures with electronic transmission. Schemes for television had long been proposed (well before World War I) but with no practical outcome. Sarnoff was determined to lead his company in pioneering the medium and met with Westinghouse engineer Vladimir Zworykin in 1928. At the time Zworykin was attempting to develop an all-electronic television system at Westinghouse, but with little success. Zworykin had visited the laboratory of the inventor Philo T. Farnsworth, who had developed an Image Dissector, part of a system that could enable a working television. Zworykin was sufficiently impressed with Farnsworth's invention that he had his team at Westinghouse make several copies of the device for experimentation.[11] Zworykin pitched the concept to Sarnoff, claiming a viable television system could be realized in two years with a mere $100,000 investment.
Sarnoff opted to fund Zworykin's research, most likely well aware that Zworykin was underestimating the scope of his television effort. Seven years later, in late 1935, Zworykin's photograph appeared on the cover of the trade journal Electronics, holding an early RCA photomultiplier prototype. The photomultiplier, the subject of intensive research at RCA and in Leningrad, would become an essential component within sensitive television cameras. On April 24, 1936, RCA demonstrated to the press a working iconoscope camera tube and kinescope receiver display tube (an early cathode-ray tube), two key components of all-electronic television. The final cost of the enterprise was closer to $50 million. On the road to success they encountered a legal battle with Farnsworth, who had been granted patents in 1930 for his solution to broadcasting moving pictures. Despite Sarnoff's efforts to prove that he was the inventor of the television, he was ordered to pay Farnsworth $1,000,000 in royalties, a small price to settle the dispute over an invention that would profoundly revolutionize the world. However, this sum was never paid to Farnsworth. In 1929, Sarnoff engineered the purchase of the Victor Talking Machine Company, the nation's largest manufacturer of records and phonographs, merging radio-phonograph production at Victor's large manufacturing facility in Camden, New Jersey. The acquisition became known as the RCA Victor Division, later RCA Records. On January 3, 1930, Sarnoff became president of RCA, succeeding General James Harbord. On May 30, the company became involved in an antitrust case concerning the original radio patent pool. Sarnoff negotiated an outcome in which RCA was no longer partially owned by Westinghouse and General Electric, giving him final say in the company's affairs. Initially, the Great Depression of the early 1930s caused RCA to cut costs, but Zworykin's project was protected. After nine years of Zworykin's hard work, Sarnoff's determination, and legal battles with Farnsworth (in which Farnsworth was proved to be in the right), they had a commercial system ready to launch. Finally, in April 1939, regularly scheduled electronic television in America was initiated by RCA under the name of its broadcasting division at the time, the National Broadcasting Company (NBC). The first television broadcast aired was the dedication of the RCA pavilion at the 1939 New York World's Fair grounds, and it was introduced by Sarnoff himself. Later that month, on April 30, the opening day ceremonies at the World's Fair were telecast in the medium's first major production, featuring a speech by President Franklin D. Roosevelt, the first US president to appear on television. These telecasts were seen only in New York City and the immediate vicinity, since NBC television had only one station at the time, W2XBS Channel 1, now WNBC Channel 4. The broadcast was seen by an estimated 1,000 viewers on the roughly 200 television sets which existed in the New York City area at the time. The standard approved by the National Television System Committee (the NTSC) in 1941 differed from RCA's standard, but RCA quickly became the market leader in manufactured sets, and NBC became the first television network in the United States, connecting its New York City station to stations in Philadelphia and Schenectady for occasional programs in the early 1940s. According to the book "Global Communication Since 1844"[12] by Peter J.
Hugill, Sarnoff was part of a group of Russian Jewish scientists in the 1930s who wanted their research to advance military technology in view of a possible forthcoming war with Germany. The account, credited to British government scientist Brian Callick, is supported by other contemporary evidence.[13][14] The group, which also comprised Simeon Aisenstein, Vladimir Zworykin, and Isaac Shoenberg, knew each other well from Russia and saw possible military applications for their work on television. The group is said to have raised one million pounds sterling (about $5 million at the time) from US donors. The specific work took place at EMI-Marconi in the U.K. and resulted in Britain becoming significantly advanced in television development and able to launch a public service on 2 November 1936. The military applications helped the development of radio-location (later named radar). In addition, the design and quantity production of television equipment and sets allowed similar military technology (cathode-ray tubes, VHF transmission and reception, and wideband circuits) to be advanced. A former British defence minister, Lord Orr-Ewing, referred to the work in a 1979 BBC interview and stated "that's how we won the Battle of Britain". Meanwhile, a system developed by EMI based on Russian research and Zworykin's work was adopted in Britain, and the BBC had a regular television service from 1936 onwards. However, World War II put a halt to the dynamic early growth of television. During World War II, Sarnoff served on Eisenhower's communications staff, arranging expanded radio circuits for NBC to transmit news from the invasion of France in June 1944. In France, Sarnoff arranged for the restoration of the Radio France station in Paris that the Germans had destroyed and oversaw the construction of a radio transmitter powerful enough to reach all of the allied forces in Europe, called Radio Free Europe. In recognition of his achievements, Sarnoff was decorated with the Legion of Merit on October 11, 1944.[15] Thanks to his communications skills and wartime support, he received the Brigadier General's star in December 1945, and thereafter was known as "General Sarnoff."[16] The star, which he proudly and frequently wore, was buried with him. Sarnoff anticipated that post-war America would need an international radio voice explaining its policies and positions. In 1943, he tried to influence Secretary of State Cordell Hull to include radio broadcasting in post-war planning. In 1947, he lobbied Secretary of State George Marshall to expand the roles of Radio Free Europe and Voice of America. His concerns and proposed solutions were eventually seen as prescient.[17] After the war, monochrome TV production began in earnest. Color TV was the next major development, and NBC once again won the battle. CBS had its electro-mechanical color television system approved by the FCC on October 10, 1950, but Sarnoff filed an unsuccessful suit in the United States district court to suspend that ruling. Subsequently, he appealed to the Supreme Court, which eventually upheld the FCC decision. Sarnoff's tenacity and determination to win the "Color War" pushed his engineers to perfect an all-electronic color television system, one using a signal that could also be received on existing monochrome sets, and it was this system that ultimately prevailed.
CBS was now unable to take advantage of the color market: it lacked manufacturing capability and color programming, its system could not be seen on the millions of existing black-and-white receivers, and its color sets cost triple the price of monochrome sets. A few days after CBS had its color premiere on June 14, 1951, RCA demonstrated a fully functional all-electronic color TV system and became the leading manufacturer of color TV sets in the US. CBS-system color TV production was suspended in October 1951 for the duration of the Korean War. As more people bought monochrome sets, it became increasingly unlikely that CBS could achieve any success with its incompatible system. Few receivers were sold, and there were almost no color broadcasts, especially in prime time, when CBS could not run the risk of broadcasting a program which few could see. The NTSC was reformed and recommended a system virtually identical to RCA's in August 1952. On December 17, 1953, the FCC approved RCA's system as the new standard. In 1955, Sarnoff received The Hundred Year Association of New York's Gold Medal Award "in recognition of outstanding contributions to the City of New York." In 1959, Sarnoff was a member of the Rockefeller Brothers Fund panel to report on U.S. foreign policy. As a member of that panel and in a subsequent essay published in Life as part of its "The National Purpose" series, he was critical of the tentative stand being taken by the United States in fighting the political and psychological warfare being waged by Soviet-led international Communism against the West. He strongly advocated an aggressive, multi-faceted fight in the ideological and political realms with a determination to decisively win the Cold War.[18] Sarnoff retired in 1970, at the age of 79, and died the following year, aged 80. He is interred in a mausoleum featuring a stained-glass vacuum tube in Kensico Cemetery in Valhalla, New York. After his death, Sarnoff left behind an estate estimated to be worth over $1 million. The majority of the estate went to his widow, Lizette Hermant Sarnoff, who received $300,000, personal and household effects, and the Sarnoff home at 44 East 71st Street.[19] On July 4, 1917, Sarnoff married Lizette Hermant, the daughter of a French-Jewish immigrant family that had settled in the Bronx as neighbors of his own family.[20][3] The Museum of Broadcast Communications describes their 54-year marriage as the bedrock of his life.[5] Lizette was often the first person to hear her husband's new ideas as radio and television became integral to American home life.[3] The couple had three sons. The eldest, Robert W. Sarnoff (1918–1997),[21] succeeded his father at the helm of RCA in 1970.[22] Robert's third wife was the operatic soprano Anna Moffo.[21] Edward Sarnoff, the middle child, headed Fleet Services of New York.[23] Thomas W. Sarnoff, the youngest, was NBC's West Coast President.[24] Sarnoff was the maternal uncle of screenwriter Richard Baer.[25] Sarnoff was credited with sparking Baer's interest in television.[25] According to Baer's 2005 autobiography, Sarnoff called a vice president at NBC at 6 A.M. and ordered him to find Baer "a job by 9 o'clock" that same morning.[25] The NBC vice president complied with Sarnoff's request. David Sarnoff was initiated into Scottish Rite Freemasonry[26][27] in the Renovation Lodge No. 97, Albion, NY.[28][29] The David Sarnoff Library, a library and museum open to the public containing many historical items from David Sarnoff's life, was located in Princeton Junction, NJ.
The David Sarnoff Library now exists as a virtual museum online. When the Library was operating, the David Sarnoff Radio Club, composed of local amateur radio operators, used to meet there, as did the New Jersey Antique Radio Club and other community organizations. The exhibits are now on display in Roscoe L. West Hall at The College of New Jersey. In 1999, computer scientist David P. Reed coined Sarnoff's Law, which states that "the value of a network grows in proportion to the number of viewers."[34] Sarnoff's Law, Metcalfe's Law, and Reed's Law are frequently used in tandem in discussions of the value of networks.
https://en.wikipedia.org/wiki/Sarnoff%27s_law
Government by algorithm[1](also known asalgorithmic regulation,[2]regulation by algorithms,algorithmic governance,[3][4]algocratic governance,algorithmic legal orderoralgocracy[5]) is an alternative form ofgovernmentorsocial orderingwhere the usage of computeralgorithmsis applied to regulations, law enforcement, and generally any aspect of everyday life such as transportation or land registration.[6][7][8][9][10]The term "government by algorithm" has appeared in academic literature as an alternative for "algorithmic governance" in 2013.[11]A related term, algorithmic regulation, is defined as setting the standard, monitoring and modifying behaviour by means of computational algorithms – automation ofjudiciaryis in its scope.[12]In the context of blockchain, it is also known asblockchain governance.[13] Government by algorithm raises new challenges that are not captured in thee-governmentliterature and the practice of public administration.[14]Some sources equatecyberocracy, which is a hypotheticalform of governmentthat rules by the effective use of information,[15][16][17]with algorithmic governance, although algorithms are not the only means of processing information.[18][19]Nello Cristianiniand Teresa Scantamburlo argued that the combination of a human society and certain regulation algorithms (such as reputation-based scoring) forms asocial machine.[20] In 1962, the director of the Institute for Information Transmission Problems of theRussian Academy of Sciencesin Moscow (later Kharkevich Institute),[21]Alexander Kharkevich, published an article in the journal "Communist" about a computer network for processing information and control of the economy.[22][23]In fact, he proposed to make a network like the modern Internet for the needs of algorithmic governance (ProjectOGAS). This created a serious concern among CIA analysts.[24]In particular,Arthur M. Schlesinger Jr.warned that"by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employingself-teaching computers".[24] Between 1971 and 1973, theChileangovernment carried outProject Cybersynduring thepresidency of Salvador Allende. This project was aimed at constructing a distributeddecision support systemto improve the management of the national economy.[25][2]Elements of the project were used in 1972 to successfully overcome the traffic collapse caused by aCIA-sponsored strike of forty thousand truck drivers.[26] Also in the 1960s and 1970s,Herbert A. Simonchampionedexpert systemsas tools for rationalization and evaluation of administrative behavior.[27]The automation of rule-based processes was an ambition of tax agencies over many decades resulting in varying success.[28]Early work from this period includes Thorne McCarty's influential TAXMAN project[29]in the US and Ronald Stamper'sLEGOLproject[30]in the UK. 
In 1993, the computer scientistPaul Cockshottfrom theUniversity of Glasgowand the economist Allin Cottrell from theWake Forest Universitypublished the bookTowards a New Socialism, where they claim to demonstrate the possibility of a democraticallyplanned economybuilt on modern computer technology.[31]The Honourable JusticeMichael Kirbypublished a paper in 1998, where he expressed optimism that the then-available computer technologies such aslegal expert systemcould evolve to computer systems, which will strongly affect the practice of courts.[32]In 2006, attorneyLawrence Lessig, known for the slogan"Code is law", wrote: [T]he invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible[33] Since the 2000s, algorithms have been designed and used toautomatically analyze surveillance videos.[34] In his 2006 bookVirtual Migration,A. Aneeshdeveloped the concept of algocracy — information technologies constrain human participation in public decision making.[35][36]Aneesh differentiated algocratic systems from bureaucratic systems (legal-rational regulation) as well as market-based systems (price-based regulation).[37] In 2013, algorithmic regulation was coined byTim O'Reilly, founder and CEO of O'Reilly Media Inc.: Sometimes the "rules" aren't really even rules. Gordon Bruce, the former CIO of the city of Honolulu, explained to me that when he entered government from the private sector and tried to make changes, he was told, "That's against the law." His reply was "OK. Show me the law." "Well, it isn't really a law. It's a regulation." "OK. Show me the regulation." "Well, it isn't really a regulation. It's a policy that was put in place by Mr. Somebody twenty years ago." "Great. We can change that!" [...] Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws. [...] It's time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.[38] In 2017, Ukraine'sMinistry of Justiceran experimentalgovernment auctionsusingblockchaintechnology to ensure transparency and hinder corruption in governmental transactions.[39]"Government by Algorithm?" was the central theme introduced at Data for Policy 2017 conference held on 6–7 September 2017 in London.[40] Asmart cityis an urban area where collected surveillance data is used to improve various operations. 
Increases in computational power allow more automated decision making and the replacement of public agencies by algorithmic governance.[41] In particular, the combined use of artificial intelligence and blockchains for IoT may lead to the creation of sustainable smart city ecosystems.[42] Intelligent street lighting in Glasgow is an example of a successful government application of AI algorithms.[43] A study of smart city initiatives in the US shows that such initiatives require the public sector as the main organizer and coordinator, the private sector as the technology and infrastructure provider, and universities as expertise contributors.[44] In 2021, the cryptocurrency millionaire Jeffrey Berns proposed that local governments in Nevada be operated by tech firms.[45] Berns had bought 67,000 acres (271 km²) in Nevada's rural Storey County (population 4,104) for $170,000,000 (£121,000,000) in 2018 in order to develop a smart city with more than 36,000 residents that could generate an annual output of $4,600,000,000.[45] Cryptocurrency would be allowed for payments.[45] Blockchains, Inc.'s "Innovation Zone" was canceled in September 2021 after it failed to secure enough water[46] for the planned 36,000 residents through water imports from a site located 100 miles away in neighboring Washoe County.[47] A similar water pipeline proposed in 2007 was estimated to cost $100 million and would have taken about 10 years to develop.[47] With additional water rights purchased from the Tahoe Reno Industrial General Improvement District, the "Innovation Zone" would have acquired enough water for about 15,400 homes, meaning that it would have barely covered its planned 15,000 dwelling units, leaving nothing for the rest of the projected city and its 22 million square feet of industrial development.[47] In Saudi Arabia, the planners of The Line assert that it will be monitored by AI to improve life by using data and predictive modeling.[48] Tim O'Reilly suggested that data sources and reputation systems combined in algorithmic regulation can outperform traditional regulations.[38] For instance, once taxi drivers are rated by passengers, the quality of their services will improve automatically and "drivers who provide poor service are eliminated".[38] O'Reilly's suggestion is based on the control-theoretic concept of a feedback loop: improvements and deteriorations of reputation enforce the desired behavior.[20] The use of feedback loops for the management of social systems had already been suggested in management cybernetics by Stafford Beer.[50] These connections are explored by Nello Cristianini and Teresa Scantamburlo, who model the reputation-credit scoring system as an incentive given to citizens and computed by a social machine, so that rational agents would be motivated to increase their score by adapting their behaviour (a minimal sketch of such a loop appears at the end of this entry). Several ethical aspects of that technology are still being discussed.[20] China's Social Credit System was said to be a mass surveillance effort with a centralized numerical score for each citizen given for their actions, though newer reports say that this is a widespread misconception.[51][52][53] Smart contracts, cryptocurrencies, and decentralized autonomous organizations are mentioned as means of replacing traditional ways of governance.[54][55][10] Cryptocurrencies are currencies which are enabled by algorithms without a governmental central bank.[56] Central bank digital currency often employs similar technology, but is differentiated by the fact that it does use a central bank.
It is soon to be employed by major unions and governments such as the European Union and China.Smart contractsare self-executablecontracts, whose objectives are the reduction of need in trusted governmental intermediators, arbitrations and enforcement costs.[57][58]A decentralized autonomous organization is anorganizationrepresented by smart contracts that is transparent, controlled by shareholders and not influenced by a central government.[59][60][61]Smart contracts have been discussed for use in such applications as use in (temporary)employment contracts[62][63]and automatic transfership of funds and property (i.e.inheritance, upon registration of adeath certificate).[64][65][66][67]Some countries such as Georgia and Sweden have already launched blockchain programs focusing on property (land titlesandreal estateownership)[39][68][69][70]Ukraine is also looking at other areas too such asstate registers.[39] According to a study ofStanford University, 45% of the studied US federal agencies have experimented with AI and related machine learning (ML) tools up to 2020.[1]US federal agencies counted the number ofartificial intelligenceapplications, which are listed below.[1]53% of these applications were produced by in-house experts.[1]Commercial providers of residual applications includePalantir Technologies.[71] In 2012,NOPDstarted a collaboration with Palantir Technologies in the field ofpredictive policing.[72]Besides Palantir's Gotham software, other similar (numerical analysis software) used by police agencies (such as the NCRIC) includeSAS.[73] In the fight against money laundering,FinCENemploys the FinCEN Artificial Intelligence System (FAIS) since 1995.[74][75] National health administration entities and organisations such as AHIMA (American Health Information Management Association) holdmedical records. Medical records serve as the central repository for planning patient care and documenting communication among patient and health care provider and professionals contributing to the patient's care. In the EU, work is ongoing on aEuropean Health Data Spacewhich supports the use of health data.[76] USDepartment of Homeland Securityhas employed the software ATLAS, which run onAmazon Cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review byUSCISofficers regardingdenaturalization.[77][78]They were flagged due to potential fraud, public safety and national security issues. Some of the scanned data came fromTerrorist Screening DatabaseandNational Crime Information Center. TheNarxCareis a US software,[79]which combines data from the prescription registries of variousU.S. states[80][81]and usesmachine learningto generate various three-digit "risk scores" for prescriptions of medications and an overall "Overdose Risk Score", collectively referred to as Narx Scores,[82]in a process that potentially includesEMSand criminal justice data[79]as well as court records.[83] In Estonia, artificial intelligence is used in itse-governmentto make it more automated and seamless. A virtual assistant will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, unemployment). 
One example is the automated registering of babies when they are born.[84]Estonia'sX-Road systemwill also be rebuilt to include even more privacy control and accountability into the way the government uses citizen's data.[85] In Costa Rica, the possible digitalization of public procurement activities (i.e. tenders for public works) has been investigated. The paper discussing this possibility mentions that the use of ICT in procurement has several benefits such as increasing transparency, facilitating digital access to public tenders, reducing direct interaction between procurement officials and companies at moments of high integrity risk, increasing outreach and competition, and easier detection of irregularities.[86] Besides using e-tenders for regularpublic works(construction of buildings, roads), e-tenders can also be used forreforestationprojects and othercarbon sinkrestoration projects.[87]Carbon sinkrestoration projectsmaybe part of thenationally determined contributionsplans in order to reach the nationalParis agreement goals. Governmentprocurementaudit softwarecan also be used.[88][89]Audits are performed in some countries aftersubsidies have been received. Some government agencies provide track and trace systems for services they offer. An example istrack and tracefor applications done by citizens (i.e. driving license procurement).[90] Some government services useissue tracking systemsto keep track of ongoing issues.[91][92][93][94] Judges' decisions in Australia are supported by the"Split Up" softwarein cases of determining the percentage of a split after adivorce.[95]COMPASsoftware is used in the USA to assess the risk ofrecidivismin courts.[96][97]According to the statement of Beijing Internet Court, China is the first country to create an internet court or cyber court.[98][99][100]The Chinese AI judge is avirtual recreationof an actual female judge. She "will help the court's judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work".[98]Also,Estoniaplans to employ artificial intelligence to decide small-claim cases of less than €7,000.[101] Lawbotscan perform tasks that are typically done by paralegals or young associates at law firms. One such technology used by US law firms to assist in legal research is from ROSS Intelligence,[102]and others vary in sophistication and dependence on scriptedalgorithms.[103]Another legal technologychatbotapplication isDoNotPay. Due to the COVID-19 pandemic in 2020, in-person final exams were impossible for thousands of students.[104]The public high schoolWestminster Highemployed algorithms to assign grades. UK'sDepartment for Educationalso employed a statistical calculus to assign final grades inA-levels, due to the pandemic.[105] Besides use in grading, software systems like AI were used in preparation for college entrance exams.[106] AI teaching assistants are being developed and used for education (e.g. Georgia Tech's Jill Watson)[107][108]and there is also an ongoing debate on the possibility of teachers being entirely replaced by AI systems (e.g. 
in homeschooling).[109] In 2018, an activist named Michihito Matsuda ran for mayor in the Tama city area of Tokyo as a human proxy for an artificial intelligence program.[110] While election posters and campaign material used the term robot, and displayed stock images of a feminine android, the "AI mayor" was in fact a machine learning algorithm trained using Tama city datasets.[111] The project was backed by high-profile executives Tetsuzo Matsumoto of Softbank and Norio Murakami of Google.[112] Michihito Matsuda came third in the election, being defeated by Hiroyuki Abe.[113] Organisers claimed that the 'AI mayor' was programmed to analyze citizen petitions put forward to the city council in a more 'fair and balanced' way than human politicians.[114] In 2018, Cesar Hidalgo presented the idea of augmented democracy.[115] In an augmented democracy, legislation is carried out by digital twins of every single person. In 2019, the AI-powered messenger chatbot SAM participated in discussions on social media connected to an electoral race in New Zealand.[116] The creator of SAM, Nick Gerritsen, believed SAM would be advanced enough to run as a candidate by late 2020, when New Zealand had its next general election.[117] In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the 2022 Danish parliamentary election,[118] and was built by the artist collective Computer Lars.[119] Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate.[120] This chatbot engaged in critical discussions on politics with users from around the world.[121] In 2023, in the Japanese town of Manazuru, a mayoral candidate called "AI Mayer" hoped to become the first AI-powered officeholder in Japan in the November 2023 election. The candidacy was said to be supported by a group led by Michihito Matsuda.[122] In the 2024 United Kingdom general election, a businessman named Steve Endacott ran for the constituency of Brighton Pavilion as an AI avatar named "AI Steve",[123] saying that constituents could interact with AI Steve to shape policy. Endacott stated that he would only attend Parliament to vote based on policies which had garnered at least 50% support.[124] AI Steve placed last with 179 votes.[125] In February 2020, China launched a mobile app to deal with the coronavirus outbreak[127] called "close-contact-detector".[128] Users are asked to enter their name and ID number. The app is able to detect "close contact", and therefore a potential risk of infection, using surveillance data (e.g. public transport records, including trains and flights).[128] Every user can also check the status of three other users. To make this inquiry, users scan a Quick Response (QR) code on their smartphones using apps like Alipay or WeChat.[129] The close contact detector can be accessed via popular mobile apps including Alipay. If a potential risk is detected, the app not only recommends self-quarantine, it also alerts local health officials.[130] Alipay also has the Alipay Health Code, which is used to keep citizens safe. This system generates a QR code in one of three colors (green, yellow, or red) after users fill in a form on Alipay with personal details. A green code enables the holder to move around unrestricted. A yellow code requires the user to stay at home for seven days, and red means a two-week quarantine.
In some cities, such as Hangzhou, it has become nearly impossible to get around without showing one's Alipay code.[131] In Cannes, France, monitoring software has been used on footage shot by CCTV cameras, allowing authorities to monitor compliance with local social distancing and mask-wearing rules during the COVID-19 pandemic. The system does not store identifying data, but rather alerts city authorities and police where breaches of the distancing and mask-wearing rules are spotted (allowing fines to be issued where needed). The algorithms used by the monitoring software can be incorporated into existing surveillance systems in public spaces such as hospitals, stations, airports, and shopping centres.[132] Cellphone data is used to locate infected patients in South Korea, Taiwan, Singapore, and other countries.[133][134] In March 2020, the Israeli government enabled security agencies to track the mobile phone data of people suspected of having coronavirus. The measure was taken to enforce quarantine and protect those who might come into contact with infected citizens.[135] Also in March 2020, Deutsche Telekom shared private cellphone data with the federal government agency, the Robert Koch Institute, in order to research and prevent the spread of the virus.[136] Russia deployed facial recognition technology to detect quarantine breakers.[137] The Italian regional health commissioner Giulio Gallera said that "40% of people are continuing to move around anyway", as he had been informed by mobile phone operators.[138] In the USA, Europe, and the UK, Palantir Technologies has been engaged to provide COVID-19 tracking services.[139] Tsunamis can be detected by tsunami warning systems, which can make use of AI.[140][141] Flooding can also be detected using AI systems.[142] Wildfires can be predicted using AI systems.[143][144] Wildfire detection by AI systems is also possible (e.g. through satellite data, aerial imagery, and the GPS positions of personnel's phones) and can help in the evacuation of people during wildfires,[145] in investigating how householders responded to wildfires,[146] and in spotting wildfires in real time using computer vision.[147][148] Earthquake detection systems are now improving alongside the development of AI technology, measuring seismic data and implementing complex algorithms to improve detection and prediction rates.[149][150][151] Earthquake monitoring, phase picking, and seismic signal detection have developed through deep-learning AI algorithms, analysis, and computational models.[152] Locust breeding areas can be approximated using machine learning, which could help to stop locust swarms at an early phase.[153] Algorithmic regulation is supposed to be a system of governance in which more exact data, collected from citizens via their smart devices and computers, is used to organize human life as a collective more efficiently.[154][155] As Deloitte estimated in 2017, automation of US government work could save 96.7 million federal hours annually, with a potential savings of $3.3 billion; at the high end, this rises to 1.2 billion hours and potential annual savings of $41.1 billion.[156] There are potential risks associated with the use of algorithms in government.
According to the 2016 book Weapons of Math Destruction, algorithms and big data are suspected of increasing inequality due to opacity, scale, and damage.[159] There is also a serious concern that gaming by the regulated parties might occur: once more transparency is brought into decision making by algorithmic governance, regulated parties might try to manipulate the outcome in their own favor and even use adversarial machine learning.[1][20] According to Harari, the conflict between democracy and dictatorship is seen as a conflict of two different data-processing systems, and AI and algorithms may swing the advantage toward the latter by processing enormous amounts of information centrally.[160] In 2018, the Netherlands employed an algorithmic system, SyRI (Systeem Risico Indicatie), to detect citizens perceived as being at high risk of committing welfare fraud; it quietly flagged thousands of people to investigators.[161] This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).[162] The contributors to the 2019 documentary iHuman expressed apprehension about "infinitely stable dictatorships" created by government AI.[163] Due to public criticism, the Australian government announced the suspension of key functions of the Robodebt scheme in 2019, and a review of all debts raised using the programme.[164] In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm."[105] The protest was successful and the grades were withdrawn.[165] In 2020, the US government software ATLAS, which runs on Amazon Cloud, sparked an uproar from activists and Amazon's own employees.[166] In 2021, the Eticas Foundation launched a database of governmental algorithms called the Observatory of Algorithms with Social Impact (OASI).[167] An initial approach towards transparency included the open-sourcing of algorithms:[168] software code can be examined and improvements can be proposed through source-code-hosting facilities. A 2019 poll conducted by IE University's Center for the Governance of Change in Spain found that 25% of citizens from selected European countries were somewhat or totally in favor of letting an artificial intelligence make important decisions about how their country is run.[169] Researchers found some evidence that when citizens perceive their political leaders or security providers to be untrustworthy, disappointing, or immoral, they prefer to replace them with artificial agents, whom they consider to be more reliable.[170] The evidence is established by survey experiments on university students of all genders. A 2021 poll by IE University indicates that 51% of Europeans are in favor of reducing the number of national parliamentarians and reallocating these seats to an algorithm. This proposal has garnered substantial support in Spain (66%), Italy (59%), and Estonia (56%). Conversely, the citizens of Germany, the Netherlands, the United Kingdom, and Sweden largely oppose the idea.[171] The survey results exhibit significant generational differences: over 60% of Europeans aged 25-34 and 56% of those aged 34-44 support the measure, while a majority of respondents over the age of 55 are against it.
International perspectives also vary: 75% of Chinese respondents support the proposal, whereas 60% of Americans are opposed.[171] The 1970David Bowiesong "Saviour Machine" depicts an algocratic society run by the titular mechanism, which ended famine and war through "logic" but now threatens to cause an apocalypse due to its fear that its subjects have become excessively complacent.[172] The novelsDaemon(2006) andFreedom™(2010) byDaniel Suarezdescribe a fictional scenario of global algorithmic regulation.[173]Matthew De Abaitua'sIf Thenimagines an algorithm supposedly based on "fairness" recreating a premodern rural economy.[174]
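The reputation feedback loop described earlier in connection with O'Reilly's taxi-rating example can be sketched in a few lines of Python. This is a deliberately simplified illustration rather than any deployed system: each provider has a hidden service quality, passengers rate noisily, an exponentially weighted average turns ratings into a reputation score, and providers whose score drops below a threshold are removed, which is the "poor service is eliminated" branch of the loop.

```python
import random
from statistics import mean

def update_score(score, rating, weight=0.2):
    """Feedback step: an exponentially weighted average nudges reputation toward recent ratings."""
    return (1 - weight) * score + weight * rating

def run_market(n_providers=50, rounds=100, threshold=3.0, seed=0):
    """Toy reputation loop: hidden quality -> noisy ratings -> scores -> removal of low scorers."""
    random.seed(seed)
    providers = [{"quality": random.uniform(1.0, 5.0), "score": 4.0} for _ in range(n_providers)]
    for _ in range(rounds):
        for p in providers:
            rating = min(5.0, max(1.0, random.gauss(p["quality"], 0.5)))  # one noisy passenger rating
            p["score"] = update_score(p["score"], rating)
        providers = [p for p in providers if p["score"] >= threshold]     # poor performers drop out
    return providers

survivors = run_market()
print(f"{len(survivors)} providers remain; mean underlying quality of survivors: "
      f"{mean(p['quality'] for p in survivors):.2f}")
```

Average underlying quality rises as low scorers are removed, but a provider can also be eliminated by an unlucky run of noisy ratings; whether such incentives are fair is precisely the kind of ethical question Cristianini and Scantamburlo raise.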
https://en.wikipedia.org/wiki/Algocracy
The digerati (or digirati) are the elite of digitalization, social media, content marketing, the computer industry, and online communities. The word is a portmanteau, derived from "digital" and "literati", and reminiscent of the earlier coinage glitterati (glitter and literati). Famous computer scientists, tech magazine writers, digital consultants with many years of experience, and well-known bloggers are included among the digerati. The word is used in several related but different ways. The first mention of the word digerati on USENET occurred in 1992 and was made by Arthur Wang, referring to an article by George Gilder in Upside magazine. According to the March 1, 1992 "On Language" column by William Safire in The New York Times Magazine, the term was coined by The New York Times editor Tim Race in a January 1992 New York Times article.[1]
https://en.wikipedia.org/wiki/Digerati
The termdigital citizenis used with different meanings. According to the definition provided byKaren Mossberger, one of the authors ofDigital Citizenship: The Internet, Society, and Participation,[1]digital citizens are "those who use the internet regularly and effectively." In this sense, a digital citizen is a person usinginformation technology(IT) in order to engage in society, politics, and government. More recent elaborations of the concept define digital citizenship as the self-enactment of people’s role in society through the use of digital technologies, stressing the empowering and democratizing characteristics of the citizenship idea. These theories aim at taking into account the ever increasingdataficationof contemporary societies (as can be symbolically linked to theSnowden leaks), which radically called into question the meaning of “being (digital) citizens in a datafied society”,[2]also referred to as the “algorithmic society”,[3]which is characterised by the increasing datafication of social life and the pervasive presence of surveillance practices – seesurveillanceandsurveillance capitalism, the use ofartificial intelligence, andBig Data. Datafication presents crucial challenges for the very notion of citizenship, so thatdata collectioncan no longer be seen as an issue of privacy alone[2]so that: We cannot simply assume that being a citizen online already means something (whether it is the ability to participate or the ability to stay safe) and then look for those whose conduct conforms to this meaning[4] Instead, the idea of digital citizenship shall reflect the idea that we are no longer mere “users” of technologies since they shape our agency both as individuals and as citizens. Digital citizenship is the responsible and respectful use of technology to engage online, find reliable sources, and protect and promote human rights.[1][2][3][4]It teaches skills to communicate, collaborate, and act positively on any online platform.[2][3]It also teaches empathy, privacy protection, and security measures to prevent data breaches and identity theft. In the context of the algorithmic society, the question of digital citizenship "becomes one of the extents to which subjects are able to challenge, avoid or mediate their data double in this datafied society”.[2] These reflections put the emphasis on the idea of the digital space (orcyberspace) as a political space where the respect of fundamental rights of the individual shall be granted (with reference both to the traditional ones as well as to new specific rights of the internet [see “digital constitutionalism”]) and where the agency and the identity of the individuals as citizens is at stake. 
This idea of digital citizenship is thought to be not only active but also performative, in the sense that "in societies that are increasingly mediated through digital technologies, digital acts become important means through which citizens create, enact and perform their role in society."[2] In particular, for Isin and Ruppert this points towards an active meaning of (digital) citizenship based on the idea that we constitute ourselves as digital citizens by claiming rights on the internet, either by saying or by doing something.[4] People who characterize themselves as digital citizens often use IT extensively: creating blogs, using social networks, and participating in online journalism.[5] Although digital citizenship begins when any child, teen, or adult signs up for an email address, posts pictures online, uses e-commerce to buy merchandise online, and/or participates in any electronic function that is B2B or B2C, the process of becoming a digital citizen goes beyond simple internet activity. According to Thomas Humphrey Marshall, a British sociologist known for his work on social citizenship, a primary framework of citizenship comprises three different traditions: liberalism, republicanism, and ascriptive hierarchy. Within this framework, the digital citizen needs to exist in order to promote equal economic opportunities and increase political participation.[6] In this way, digital technology helps to lower the barriers to entry for participation as a citizen within a society. Digital citizens also have a comprehensive understanding of digital citizenship, which is appropriate and responsible behavior when using technology.[7] Since digital citizenship evaluates the quality of an individual's response to membership in a digital community, it often requires the participation of all community members, both those who are visible and those who are less visible.[8] A large part of being a responsible digital citizen encompasses digital literacy, etiquette, online safety, and an acknowledgement of the difference between private and public information.[9][10][11] The development of digital citizen participation can be divided into two main stages.[12] The first stage is information dissemination, which includes several subcategories of its own.[12] The second stage of digital citizen participation is citizen deliberation, which considers what type of participation and role citizens play when attempting to spark some sort of policy change. One of the primary advantages of participating in online debates through digital citizenship is that it incorporates social inclusion. According to a report on civic engagement, citizen-powered democracy can be initiated through information shared on the web, direct communication from the state toward the public, or social media tactics from both private and public companies.[13] In fact, it was found that the community-based nature of social media platforms allows individuals to feel more socially included and informed about political issues that their peers have also been found to engage with, otherwise known as a "second-order effect."[14] Understanding strategic marketing on social media would further explain social media customers' participation. Two types of opportunities arise as a result: the first is the ability to lower barriers, which can make exchanges much easier; in addition, people have the chance to participate in transformative disruption, giving those who historically have had lower political engagement a much easier and more convenient way to mobilize.
Nonetheless, there are several challenges facing the use of digital technologies in political participation. Both current and potential challenges can create significant risks for democratic processes. Not only is digital technology still seen as relatively ambiguous, it has also been seen to offer "less inclusivity in democratic life."[15] Demographic groups differ considerably in their use of technology, and thus one group could be more represented than another as a result of digital participation. Another primary challenge is the "filter bubble" effect: alongside a tremendous spread of false information, internet users can reinforce existing prejudices and help polarize disagreements in the public sphere. This can lead to misinformed voting and to decisions based on exposure rather than on knowledge.

Van Dijk, a communication technology director,[16] stated, "Computerized information campaigns and mass public information systems have to be designed and supported in such a way that they help to narrow the gap between the 'information rich' and 'information poor' otherwise the spontaneous development of ICT will widen it." Access to digital technology, and the knowledge needed to use it, must be equivalent in order for a fair system to be put into place. Alongside a lack of evidence that such technology can be proven safe for citizens, the OECD has identified five challenges for the online engagement of citizens.[17]

Highly developed states possess the capacity to link their respective governments with digital sites. Such sites publicize recent legislation and current and future policy objectives, lend agency to political candidates, and allow citizens to voice themselves politically. Likewise, the emergence of these sites has been linked to increased voting advocacy.

Lack of access to technology can be a serious obstacle to becoming a digital citizen, since many elementary procedures such as tax filing, birth registration, and the use of websites to support candidates in political campaigns (e-democracy) have become available solely via the internet. Furthermore, many cultural and commercial entities only publicize information on web pages. Non-digital citizens will not be able to retrieve this information, which may lead to social isolation or economic stagnation.[citation needed]

The gap between digital citizens and non-digital citizens is often referred to as the digital divide. In developing countries, digital citizens are fewer; they consist of the people who use technology to overcome local obstacles including development issues, corruption, and even military conflict.[18] Examples of such citizens include users of Ushahidi during the 2007 disputed Kenyan election and protesters in the Arab Spring movements who used media to document the repression of protests. Currently, the digital divide is a subject of academic debate: access to the internet has increased in these developing countries, but the place in which it is accessed (work, home, public library, etc.) has a significant effect on how much it will be used, and on whether it will be used in a manner related to citizenship at all.
Recent scholarship has correlated the desire to be technologically proficient with greater belief in computer access equity, and thus digital citizenship (Shelley et al.).[full citation needed]

On the other side of the divide, one example of a highly developed digital technology program in a wealthy state is the e-Residency of Estonia. This form of digital residency allows both citizens and non-citizens of the state to pursue business opportunities in a digital business environment.[19] The application is simple: applicants fill out a form with their passport details and photograph, along with the reason for applying. Following a successful application, the e-residency allows them to register a company, sign documents, make online banking declarations, and file medical prescriptions online, though they will be tracked through their financial footprints. The project aims to reach 10 million e-residents by 2025; as of April 2019, over 54,000 participants from over 162 countries had expressed interest, contributing millions of dollars to the country's economy and gaining online access to public services.[20] Other benefits include hassle-free administration, lower business costs, access to the European Union market, and a broad range of e-services.[21] Though the program is designed for entrepreneurs, Estonia hopes that its emphasis on transparency and resourcefulness will encourage others to implement similar policies domestically. In 2021, Estonia's neighbor Lithuania launched a similar e-Residency program.[22]

Nonetheless, Estonia's e-Residency system has been subject to criticism. Many have pointed out that tax treaties in applicants' own countries will play a major role in preventing the idea from spreading to more countries. Another risk is political: governments must sustain "funding and legislative priorities across different coalitions of power."[23] Most importantly, the threat of cyberattacks may disrupt the seemingly optimal idea of a platform for eIDs, as Estonia itself suffered a massive cyberattack by Russian hacktivists in 2007. Today, the protection of digital services and databases is essential to national security, and many countries remain hesitant to adopt a new system that would change the scale of politics for all their citizens.[citation needed]

Within developed countries, the digital divide is attributed to educational levels as well as economic differences. A study conducted by the United States National Telecommunications and Information Administration determined that the gaps in computer usage and internet access widened by 7.8% and 25%, respectively, between the most and the least educated, and it has been observed that those with college degrees or higher are 10 times more likely to have internet access at work than those with only a high school education.[24]

A digital divide often extends along specific racial lines as well. The difference in computer usage grew by 39.2% between White and Black households and by 42.6% between White and Hispanic households only three years ago.[when?] Race can also affect the number of computers at school, and, as expected, gaps between racial groups narrow at higher income levels while widening among households at lower economic levels. Racial disparities have been shown to exist irrespective of income. In a cultural study examining reasons for the divide other than income, computers were seen within the Hispanic community as a luxury rather than a necessity.
Participants collectively stated that computer activities isolated individuals and took valuable time away from family activities. In the African-American community, it was observed that there have historically been negative encounters with technological innovations; among Asian-Americans, education was emphasized, and a larger number of people embraced technological advances.[25]

An educational divide also arises from differences in the use of everyday technology. In a report analyzed by the ACT Center for Equity in Learning, "85% of respondents reported having access to anywhere from two to five devices at home. The remaining one percent of respondents reported having access to no devices at home."[26] Of the 14% of respondents with one device at home, many reported needing to share that device with other household members, facing challenges that are often overlooked. The data all suggest that wealthier families have access to more devices. In addition, among respondents who used only one device at home, 24% lived in rural areas, and over half reported that this one device was a smartphone, which can make completing schoolwork more difficult. The ACT recommended that underserved students be given access to more devices and higher-quality networks, and that educators ensure students can access as many electronic materials as possible through their phones so as not to place a burden on family data plans.[citation needed]

A recent survey revealed that teenagers and young adults spend more time on the internet than watching TV, which has raised a number of concerns about how internet use could affect cognitive abilities.[27] According to a study by Wartella et al., teens are concerned about how digital technologies may affect their health.[28] Digital youth can generally be viewed as the test market for the next generation's digital content and services. Sites such as Myspace and Facebook have come to the fore as places where youth participate and engage with others on the internet. However, due to the declining popularity of MySpace in particular, more young people are turning to websites such as Snapchat, Instagram, and YouTube.[29] It was reported that teenagers spend up to nine hours a day online, with the vast majority of that time spent on social media via mobile devices, which contributes to the ease of access and availability for young people.[30] Vast amounts of money are spent annually to research this demographic by hiring psychologists, sociologists, and anthropologists to discover habits, values, and fields of interest.[citation needed]

Particularly in the United States, "Social media use has become so pervasive in the lives of American teens that having a presence on a social network is almost synonymous with being online; 95% of all teens ages 12-17 are now online and 80% of those online teens are users of social media sites".[31][needs update] However, such trends appear to benefit mainly those wishing to market their businesses to youth. The critical time when young people develop their civic identities is between the ages of 15 and 22. During this time they develop three attributes (civic literacy, civic skills, and civic attachment) that constitute the civic engagement later reflected in the political actions of their adult lives.[citation needed]

For youth to fully participate and realize their presence on the internet, a quality level of reading comprehension is required.
"The average government web site, for example, requires an eleventh-grade level of reading comprehension, even though about half of the U.S. population reads at an eighth-grade level or lower".[32]So despite the internet being a place irrespective of certain factors such as race, religion, and class, education plays a large part in a person's capacity to present themselves online in a formal manner conducive towards their citizenry. Concurrently, education also affects people's motivation to participate online.[citation needed] Students should be encouraged to use technology with responsibility and ethical digital citizenship promoted. Education on harmful viruses and othermalwaremust be emphasized to protect resources. A student can be a successful digital citizen with the help of educators, parents, and school counselors.[33] These 5 competencies will assist and support teachers in teaching about digital citizenship:InclusiveI am open to hearing and respectfully recognizing multiple viewpoints and I engage with others online with respect and empathy.InformedI evaluate the accuracy, perspective, and validity of digital media and social posts.EngagedI use technology and digital channels for civic engagement, to solve problems and be a force for good in both physical and virtual communities.BalancedI make informed decisions about how to prioritize my time and activities online and off.AlertI am aware of my online actions, and know how to be safe and create safe spaces for others online.[34] InternationalOECDguidelines state that "personal data should be relevant to the purposes for which they are to be used, and to the extent necessary for those purposes should be accurate, complete, and kept up to date". Article 8 prevents subjects to certain exceptions. Meaning that certain things cannot be published online revealing race, ethnicity, religion, political stance, health, and sex life. in the United States, this is enforced generally by theFederal Trade Commission(FTC)- but very generally. For example, the FTC brought an action against Microsoft for failing to properly protect customers' personal information.[35]In addition, many have described the United States as being in acyberwarwith Russia, and several Americans have credited Russia to their country's downfall in transparency and declining trust in the government. With several foreign users posting anonymous information through social media in order to gather a following, it is difficult to understand whom to target and what affiliation or root cause they may have of performing a particular action aimed to sway public opinion.[36] The FTC does play a significant role in protecting the digital citizen. However, individuals' public records are increasingly useful to the government and highly sought after. This material can help the government detect a variety of crimes such as fraud, drug distribution rings, terrorist cells. it makes it easier to properly profile a suspected criminal and keep an eye on them. Although there are a variety of ways to gather information on an individual through credit card history, employment history, and more, the internet is becoming the most desirable information gatherer thanks to its façade of security and the amount of information that can be stored on the internet.Anonymityhas proven to be very rare online asISPscan keep track of an individual's activity online.[37] Digital citizenship is a term used to define the appropriate and responsible use of technology among users. 
Three principles were developed by Mike Ribble to teach digital users how to use technology responsibly and become digital citizens: respect, educate, and protect.[38] Each principle contains three of the nine elements of digital citizenship.[39] Within these three core principles, there are nine elements of digital citizenship to be considered.[39]

According to Mike Ribble, an author who has worked on the topic of digital citizenship for more than a decade, digital access is the first element that is prevalent in today's educational curriculum. He cited a widening gap between the impoverished and the wealthy, as 41% of African Americans and Hispanics use computers in the home, compared with 77% of white students. Other crucial digital elements include commerce, communication, literacy, and etiquette. He also emphasized that educators must understand that technology is important for all students, not only those who already have access to it, in order to decrease the existing digital divide.[10]

Furthermore, in research by Common Sense Media, approximately six out of ten American K-12 teachers used some type of digital citizenship curriculum, and seven out of ten taught some sort of competency skill using digital citizenship.[41] Topics these teachers focused on included hate speech, cyberbullying, and digital drama. A persistent problem is that over 35% of students were observed to lack the skills to critically evaluate information online, and these issues increased as grade levels rose. Online videos such as those found on YouTube and Netflix have been used by approximately 60% of K-12 teachers in classrooms, while educational tools such as Microsoft Office and Google G Suite have been used by around half of teachers. Social media was used the least, at around 13%, compared with other digital methods of education.[42] When analyzing social class differences between schools, it was found that teachers at Title I schools were more likely to use digital citizenship curricula than teachers in more affluent schools.

In the past two years,[when?] there has been a major shift from digital citizenship toward digital leadership, in order to make a greater impact on online interactions. Though digital citizens take a responsible approach and act ethically, digital leadership is more proactive, encompassing the "use of internet and social media to improve the lives, well-being, and circumstances of others" as part of one's daily life.[43] In February 2018, after the Valentine's Day shooting in Parkland, Florida, students became dynamic digital citizens, using social media and other web platforms to engage proactively on the issue and push back against cyberbullies and misinformation. Students from Marjory Stoneman Douglas High School specifically rallied against gun violence, live tweeting, texting, videoing, and recording the attack as it happened, using on-site digital tools not only to witness what was happening at the time but to allow the world to witness it as well.
This allowed the nation to see and react, and as a result, students built a web page and logo for their new movement.[44] They gave interviews to major media outlets and at rallies and protests, coordinated online a nationwide march on March 24, and challenged elected officials at meetings and town halls.[45] The idea behind this shift is for youth to express empathy beyond the self and to see that self in the digital company of others. Nonetheless, several critics note that just as empathy can be spread to a vast number of individuals, hatred can be spread as well. Though the United Nations and other groups have been establishing fronts against hate speech, there is no internationally accepted legal definition of hate speech, and more research needs to be done on its impact.[46]

Along with educational trends, there are overlapping goals of digital citizenship education. Altogether, these facets contribute to one another in the development of a healthy and effective education in digital technology and communication.[47]

Free and open curricula have been developed by different organizations for teaching digital citizenship skills in schools.
https://en.wikipedia.org/wiki/Digital_citizen
The digital divide is the unequal access to digital technology, including smartphones, tablets, laptops, and the internet.[1][2] The digital divide worsens inequality in access to information and resources. In the Information Age, people without access to the Internet and other technology are at a disadvantage, for they are unable or less able to connect with others, find and apply for jobs, shop, and learn.[1][3][4][5]

People who are homeless, living in poverty, elderly, or living in rural communities may have limited access to the Internet; in contrast, urban middle-class and upper-class people have easy access to it. Another divide is between producers and consumers of Internet content,[6][7] which could be a result of educational disparities.[8] While social media use varies across age groups, a 2010 US study reported no racial divide.[9]

The historical roots of the digital divide in America refer to the increasing gap that occurred during the early modern period between those who could and could not access the real-time forms of calculation, decision-making, and visualization offered via written and printed media.[10] Within this context, ethical discussions regarding the relationship between education and the free distribution of information were raised by thinkers such as Immanuel Kant, Jean Jacques Rousseau (1712–1778), and Mary Wollstonecraft. Rousseau advocated that governments should intervene to ensure that any society's economic benefits are fairly and meaningfully distributed. Amid the Industrial Revolution in Great Britain, Rousseau's idea helped to justify poor laws that created a safety net for those who were harmed by new forms of production. Later, as telegraph and postal systems evolved, many used Rousseau's ideas to argue for full access to those services, even if it meant subsidizing hard-to-serve citizens. Thus, "universal services"[11] referred to innovations in regulation and taxation that would allow phone services such as AT&T in the United States to serve hard-to-serve rural users. In 1996, as telecommunications companies merged with Internet companies, the Federal Communications Commission adopted the Telecommunications Services Act of 1996 to consider regulatory strategies and taxation policies to close the digital divide.

Though the term "digital divide" was coined among consumer groups that sought to tax and regulate information and communications technology (ICT) companies to close the divide, the topic soon moved onto a global stage. The focus was the World Trade Organization, which passed a Telecommunications Services Act that resisted regulation of ICT companies that would have required them to serve hard-to-serve individuals and communities. In 1999, to assuage anti-globalization forces, the WTO hosted the "Financial Solutions to Digital Divide" meeting in Seattle, US, co-organized by Craig Warren Smith of the Digital Divide Institute and Bill Gates Sr., the chairman of the Bill and Melinda Gates Foundation. It catalyzed a full-scale global movement to close the digital divide, which quickly spread to all sectors of the global economy.[12] In 2000, US president Bill Clinton mentioned the term in the State of the Union Address.

At the outset of the COVID-19 pandemic, governments worldwide issued stay-at-home orders that established lockdowns, quarantines, restrictions, and closures.
The resulting interruptions to schooling, public services, and business operations drove nearly half of the world's population to seek alternative ways to live while in isolation.[13] These methods included telemedicine, virtual classrooms, online shopping, technology-based social interactions, and remote work, all of which require high-speed or broadband internet access and digital technologies. A Pew Research Center study reports that 90% of Americans described the use of the Internet as "essential" during the pandemic.[14] The accelerated use of digital technologies creates a landscape in which the ability, or lack thereof, to access digital spaces becomes a crucial factor in everyday life.[15]

According to the Pew Research Center, 59% of children from lower-income families were likely to face digital obstacles in completing school assignments.[14] These obstacles included the use of a cellphone to complete homework, having to use public Wi-Fi because of unreliable internet service in the home, and lack of access to a computer in the home. This difficulty, termed the homework gap, affects more than 30% of K-12 students living below the poverty threshold and disproportionately affects American Indian/Alaska Native, Black, and Hispanic students.[16][17] These types of interruptions, or privilege gaps, in education exemplify the systemic marginalization of historically oppressed individuals in primary education. The pandemic exposed inequities that cause discrepancies in learning.[18]

A lack of "tech readiness", that is, confident and independent use of devices, was reported among the US elderly population, with more than 50% reporting inadequate knowledge of devices and more than one-third reporting a lack of confidence.[14][19] Moreover, according to a UN research paper, similar results can be found across various Asian countries, with those above the age of 74 reporting lower and more confused usage of digital devices.[20] This aspect of the digital divide affected the elderly during the pandemic, as healthcare providers increasingly relied upon telemedicine to manage chronic and acute health conditions.[21]

There are various definitions of the digital divide, all with slightly different emphases, as evidenced by related concepts like digital inclusion,[22] digital participation,[23] digital skills,[24] media literacy,[25] and digital accessibility.[26]

The infrastructure by which individuals, households, businesses, and communities connect to the Internet encompasses the physical media that people use to connect, such as desktop computers, laptops, basic mobile phones or smartphones, iPods or other MP3 players, gaming consoles such as Xbox or PlayStation, electronic book readers, and tablets such as iPads.[27]

Traditionally, the nature of the divide has been measured in terms of the existing numbers of subscriptions and digital devices.
Given the increasing number of such devices, some have concluded that the digital divide among individuals has increasingly been closing as the result of a natural and almost automatic process.[29][30] Others point to persistently lower levels of connectivity among women, racial and ethnic minorities, people with lower incomes, rural residents, and less educated people as evidence that addressing inequalities in access to and use of the medium will require much more than the passing of time.[31][32] Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita).[33][28]

Measured in kbit/s per capita, the digital divide is not monotonically decreasing but re-opens with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, and "the initial introduction of broadband DSL and cable modems during 2003–2004 increased levels of inequality".[33] During the mid-2000s, communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e. fixed and mobile broadband infrastructures, e.g. 5G and fiber-optic FTTH).[34] Measurement methodologies of the digital divide, and more specifically an Integrated Iterative Approach General Framework (Integrated Contextual Iterative Approach – ICI) and the digital divide modeling theory under the measurement model DDG (Digital Divide Gap), are used to analyze the gap existing between developed and developing countries, and the gap among the 27 member states of the European Union.[35][36] The Good Things Foundation, a UK non-profit organisation, collates data on the extent and impact of the digital divide in the UK[37] and lobbies the government to fix digital exclusion.[38]

Research from 2001 showed that the digital divide is more than just an access issue and cannot be alleviated merely by providing the necessary equipment. There are at least three factors at play: information accessibility, information utilization, and information receptiveness. Beyond accessibility, the digital divide consists of society's lack of knowledge about how to make use of information and communication tools once they exist within a community.[39] Information professionals can help bridge the gap by providing reference and information services that help individuals learn and use the technologies to which they do have access, regardless of the economic status of the individual seeking help.[40]

One can connect to the internet in a variety of locations, such as homes, offices, schools, libraries, public spaces, and Internet cafes.
Levels of connectivity often vary between rural, suburban, and urban areas.[41][42]

In 2017, the Wireless Broadband Alliance published the white paper The Urban Unconnected, which highlighted that in the eight countries with the world's highest GNP about 1.75 billion people had no internet connection, and one third of them lived in major urban centers. Delhi (5.3 million, 9% of the total population), São Paulo (4.3 million, 36%), New York (1.6 million, 19%), and Moscow (2.1 million, 17%) registered the highest percentages of citizens who had no internet access of any type.[43]

As of 2021, only about half of the world's population had access to the internet, leaving 3.7 billion people without it. A majority of those are in developing countries, and a large portion of them are women.[44] The governments of different countries also have different policies about privacy, data governance, speech freedoms, and many other factors. Government restrictions make it challenging for technology companies to provide services in certain countries. This disproportionately affects different regions of the world: Europe has the highest percentage of its population online, while Africa has the lowest. From 2010 to 2014, Europe went from 67% to 75%, and in the same time span Africa went from 10% to 19%.[45]

Network speeds play a large role in the quality of an internet connection. Large cities and towns may have better access to high-speed internet than rural areas, which may have limited or no service.[46] Households can be locked into a specific service provider, since it may be the only carrier that offers service in the area. This applies to regions with developed networks, like the United States, but also to developing countries, where very large areas have virtually no coverage.[47] In those areas there is little a consumer can do, since the issue is mainly one of infrastructure. Technologies that provide an internet connection through satellite, like Starlink, are becoming more common, but they are still not available in many regions.[48]

Based on location, a connection may be so slow as to be virtually unusable, solely because a network provider has limited infrastructure in the area. For example, downloading 5 GB of data might take about 8 minutes in Taiwan, while the same download might take 30 hours in Yemen.[49]
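To give a sense of the effective bandwidths implied by the 5 GB comparison above, here is a minimal back-of-the-envelope sketch. The conversion (file size divided by download time) is an illustration added here, not a calculation from the cited source, and it assumes decimal units (1 GB = 8,000 megabits).

```python
# Back-of-the-envelope conversion of the quoted download times into
# implied average connection speeds. Figures are approximations for
# illustration only.

FILE_SIZE_GB = 5
FILE_SIZE_MEGABITS = FILE_SIZE_GB * 8_000  # 40,000 Mb, using decimal units

download_times_seconds = {
    "Taiwan (about 8 minutes)": 8 * 60,
    "Yemen (about 30 hours)": 30 * 60 * 60,
}

for place, seconds in download_times_seconds.items():
    speed_mbps = FILE_SIZE_MEGABITS / seconds
    print(f"{place}: ~{speed_mbps:.1f} Mbit/s implied average speed")

# Prints roughly 83 Mbit/s for Taiwan versus roughly 0.4 Mbit/s for Yemen,
# a difference of more than two orders of magnitude.
```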
From 2020 to 2022, average download speeds in the EU climbed from 70 Mbps to more than 120 Mbps, owing mostly to the demand for digital services during the pandemic.[50] There is still a large rural-urban disparity in internet speeds, with metropolitan areas in France and Denmark reaching rates of more than 150 Mbps, while many rural areas in Greece, Croatia, and Cyprus have speeds of less than 60 Mbps.[50][51] The EU aspires to complete gigabit coverage by 2030; however, as of 2022, just over 60% of Europe has high-speed internet infrastructure, signalling the need for further improvements.[50][52]

Common Sense Media, a nonprofit group based in San Francisco, surveyed almost 1,400 parents and reported in 2011 that 47 percent of families with incomes of more than $75,000 had downloaded apps for their children, while only 14 percent of families earning less than $30,000 had done so.[53]

As of 2014, the digital divide was known to exist for a number of reasons. Obtaining access to ICTs and using them actively has been linked to demographic and socio-economic characteristics including income, education, race, gender, geographic location (urban or rural), age, skills, awareness, and political, cultural, and psychological attitudes.[54][55][56][57][58][59][60] Multiple regression analysis across countries has shown that income levels and educational attainment provide the most powerful explanatory variables for ICT access and usage.[61] Evidence was found that Caucasians are much more likely than non-Caucasians to own a computer and to have access to the Internet in their homes.[citation needed][62][63] As for geographic location, people living in urban centers have more access to, and show more usage of, computer services than those in rural areas.

In developing countries, a digital divide between women and men is apparent in tech usage, with men more likely to be competent tech users. Controlled statistical analysis has shown that income, education, and employment act as confounding variables, and that women with the same level of income, education, and employment actually embrace ICT more than men (see Women and ICT4D); this argues against any suggestion that women are "naturally" more technophobic or less tech-savvy.[64] However, each nation has its own set of causes for the digital divide. For example, the digital divide in Germany is unique because it is not largely due to differences in the quality of infrastructure.[65]

The correlation between income and internet use suggests that the digital divide persists at least in part due to income disparities.[66] Most commonly, a digital divide stems from poverty and the economic barriers that limit resources and prevent people from obtaining or otherwise using newer technologies. In research, while each explanation is examined, others must be controlled for to eliminate interaction effects or mediating variables,[54] but these explanations are meant to stand as general trends, not direct causes.

Measurements of the intensity of usage, such as incidence and frequency, vary by study. Some report usage as access to the Internet and ICTs, while others report usage as having previously connected to the Internet. Some studies focus on specific technologies, others on a combination (such as Infostate, proposed by Orbicom-UNESCO, the Digital Opportunity Index, or ITU's ICT Development Index).

During the mid-1990s, the United States Department of Commerce's National Telecommunications and Information Administration (NTIA) began publishing reports about the Internet and access to and usage of the resource. The first of three reports is titled "Falling Through the Net: A Survey of the 'Have Nots' in Rural and Urban America" (1995),[67] the second is "Falling Through the Net II: New Data on the Digital Divide" (1998),[68] and the final report is "Falling Through the Net: Defining the Digital Divide" (1999).[69] The NTIA's final report attempted to clearly define the term digital divide as "the divide between those with access to new technologies and those without".[69] Since the introduction of the NTIA reports, much of the early, relevant literature began to reference the NTIA's definition of the digital divide.
The digital divide is commonly defined as being between the "haves" and "have-nots".[69][67]

The U.S. Federal Communications Commission's (FCC) 2019 Broadband Deployment Report indicated that 21.3 million Americans do not have access to wired or wireless broadband internet.[70] As of 2020, BroadbandNow, an independent research company studying access to internet technologies, estimated that the actual number of Americans without high-speed internet is twice that figure.[71] According to a 2021 Pew Research Center report, smartphone ownership and internet use have increased for all Americans; however, a significant gap still exists between those with lower incomes and those with higher incomes:[72] U.S. households earning $100K or more are twice as likely to own multiple devices and have home internet service as those making $30K or more, and three times as likely as those earning less than $30K per year.[72] The same research indicated that 13% of the lowest-income households had no access to the internet or digital devices at home, compared to only 1% of the highest-income households.[72]

According to a Pew Research Center survey of U.S. adults conducted from January 25 to February 8, 2021, the digital lives of Americans with high and low incomes differ considerably, while the proportion of Americans who use home internet or cell phones remained constant between 2019 and 2021. A quarter of those with yearly earnings under $30,000 (24%) say they do not own a smartphone. Roughly four in ten low-income adults (43%) do not have home internet access or a computer. Furthermore, the majority of lower-income Americans do not own a tablet device.[72]

On the other hand, each of these technologies is nearly universal among people earning $100,000 or more per year. Americans with larger family incomes are also more likely to buy a variety of internet-connected products. Home Wi-Fi, a smartphone, a computer, and a tablet are used by around six out of ten households making $100,000 or more per year, compared to 23 percent of lower-income households.[72]

Although many groups in society are affected by a lack of access to computers or the Internet, communities of color are specifically observed to be negatively affected by the digital divide.[73] Pew research shows that as of 2021, home broadband rates are 81% for White households, 71% for Black households, and 65% for Hispanic households.[74] While 63% of adults find the lack of broadband to be a disadvantage, only 49% of White adults do.[73] Smartphone and tablet ownership remains consistent, with about 8 out of 10 Black, White, and Hispanic individuals reporting owning a smartphone and half owning a tablet.[73] A 2021 survey found that a quarter of Hispanics rely on their smartphone and do not have access to broadband.[73]

Inequities in access to information technologies are present among individuals living with a physical disability compared to those who are not. In 2011, according to the Pew Research Center, 54% of households with a person who had a disability had home Internet access, compared to 81% of households without.[75] The type of disability an individual has, such as quadriplegia or a disability of the hands, can prevent them from interacting with computer and smartphone screens.
However, there is also a lack of access to technology and home Internet access among those who have cognitive and auditory disabilities. There is a concern about whether the increased use of information technologies will increase equality by offering opportunities for individuals living with disabilities, or whether it will only add to present inequalities and lead to individuals living with disabilities being left behind in society.[76] Issues such as the perception of disabilities in society, national and regional government policy, corporate policy, mainstream computing technologies, and real-time online communication have been found to contribute to the impact of the digital divide on individuals with disabilities. In 2022, a survey of people in the UK with severe mental illness found that 42% lacked basic digital skills, such as changing passwords or connecting to Wi-Fi.[77][78]

People with disabilities are also the targets of online abuse. Online disability hate crimes increased by 33% across the UK between 2016–17 and 2017–18, according to a report published by Leonard Cheshire, a health and welfare charity.[79] Accounts of online hate abuse towards people with disabilities were shared during an incident in 2019, when model Katie Price's son was the target of online abuse attributed to his having a disability. In response, Price launched a campaign to ensure that Britain's MPs held accountable those who perpetuate online abuse towards people with disabilities.[80] Online abuse can discourage individuals with disabilities from engaging online, which could prevent them from learning information that could improve their lives. Many individuals living with disabilities face online abuse in the form of accusations of benefit fraud and of "faking" their disability for financial gain, which in some cases leads to unnecessary investigations.

Due to the rapidly declining price of connectivity and hardware, skills deficits have eclipsed barriers of access as the primary contributor to the gender digital divide. Studies show that women are less likely to know how to leverage devices and Internet access to their full potential, even when they do use digital technologies.[81] In rural India, for example, a study found that the majority of women who owned mobile phones only knew how to answer calls; they could not dial numbers or read messages without assistance from their husbands, due to a lack of literacy and numeracy skills.[82] A survey of 3,000 respondents across 25 countries found that adolescent boys with mobile phones used them for a wider range of activities, such as playing games and accessing financial services online, while adolescent girls in the same study tended to use only the basic functionalities of their phones, such as making calls and using the calculator.[83] Similar trends can be seen even in areas where Internet access is near-universal. A survey of women in nine cities around the world revealed that although 97% of women were using social media, only 48% of them were expanding their networks, and only 21% of Internet-connected women had searched online for information related to health, legal rights, or transport.[83] In some cities, less than one quarter of connected women had used the Internet to look for a job.[81]

Studies show that despite strong performance in computer and information literacy (CIL), girls do not have confidence in their ICT abilities.
According to the International Computer and Information Literacy Study (ICILS) assessment, girls' self-efficacy scores (their perceived, as opposed to actual, abilities) for advanced ICT tasks were lower than boys'.[84][81]

A paper published by J. Cooper of Princeton University points out that learning technology tends to be designed to be receptive to men rather than women. Overall, the study presents the problem of differing perspectives in society that result from gendered socialization patterns that treat computers as part of the male experience, since computers have traditionally been presented as toys for boys.[85] This divide persists as children grow older, and young girls are not encouraged as much to pursue degrees in IT and computer science. In 1990, women held 36% of computing jobs; by 2016, this number had fallen to 25%. This can be seen in the underrepresentation of women in IT hubs such as Silicon Valley.[86]

Algorithmic bias has also been shown in machine learning algorithms implemented by major companies.[clarification needed] In 2015, Amazon had to abandon a recruiting algorithm that showed a difference between the ratings candidates received for software developer and other technical jobs. It was revealed that Amazon's algorithm was biased against women and favored male résumés over female résumés. This was because Amazon's computer models were trained on patterns in résumés submitted over a 10-year period, the majority of which belonged to men, reflecting male dominance across the tech industry.[87]

The age gap contributes to the digital divide because people born before 1983 did not grow up with the internet. According to Marc Prensky, people in this age range are classified as "digital immigrants."[88] A digital immigrant is defined as "a person born or brought up before the widespread use of digital technology."[89] The internet became officially available for public use on January 1, 1983; anyone born before then has had to adapt to the new age of technology.[90] By contrast, people born after 1983 are considered "digital natives", defined as people born or brought up during the age of digital technology.[89]

Across the globe, there is a 10 percentage point difference in internet usage between people aged 15–24 and people aged 25 or older. According to the International Telecommunication Union (ITU), 75% of people aged 15–24 used the internet in 2022, compared to 65% of people aged 25 or older.[91] The largest generational divide occurs in Africa, where 55% of the younger age group use the internet compared to 36% of people aged 25 or older. The smallest divide occurs in the Commonwealth of Independent States, where 91% of the younger age group use the internet compared to 83% of people aged 25 or older.

In addition to being less connected to the internet, older generations are less likely to use financial technology, also known as fintech, meaning any way of managing money via digital devices.[92] Examples of fintech include digital payment apps such as Venmo and Apple Pay, tax services such as TurboTax, and applying for a mortgage digitally.
In data from the World Bank Findex, 40% of people younger than 40 used fintech, compared to less than 25% of people aged 60 or older.[93]

The divide between differing countries or regions of the world is referred to as the global digital divide, which examines the technological gap between developing and developed countries.[94] The divide within countries (such as the digital divide in the United States) may refer to inequalities between individuals, households, businesses, or geographic areas, usually at different socioeconomic levels or across other demographic categories. In contrast, the global digital divide describes disparities in access to computing and information resources, and in the opportunities derived from such access.[95] As the internet rapidly expands, it is difficult for developing countries to keep up with the constant changes. In 2014, only three countries (China, the US, and Japan) hosted 50% of the globally installed bandwidth potential.[28] This concentration is not new: historically, only ten countries have hosted 70–75% of global telecommunication capacity. The U.S. lost its global leadership in installed bandwidth in 2011, replaced by China, which hosted more than twice as much national bandwidth potential in 2014 (29% versus 13% of the global total).[28]

Some zero-rating programs such as Facebook Zero offer free or subsidized data access to certain websites. Critics object that this is an anti-competitive practice that undermines net neutrality and creates a "walled garden".[96] A 2015 study reported that 65% of Nigerians, 61% of Indonesians, and 58% of Indians agree with the statement that "Facebook is the Internet", compared with only 5% in the US.[97]

Once an individual is connected, Internet connectivity and ICTs can enhance his or her future social and cultural capital. Social capital is acquired through repeated interactions with other individuals or groups of individuals. Connecting to the Internet creates another set of means by which to achieve repeated interactions. ICTs and Internet connectivity enable repeated interactions through access to social networks, chat rooms, and gaming sites. Once an individual has access to connectivity, obtains the infrastructure by which to connect, and can understand and use the information that ICTs and connectivity provide, that individual is capable of becoming a "digital citizen."[54]

In the United States, research provided by Unguarded Availability Services notes a direct correlation between a company's access to technological advancements and its overall success in bolstering the economy.[98] The study, which includes over 2,000 IT executives and staff officers, indicates that 69 percent of employees feel they do not have access to sufficient technology to make their jobs easier, while 63 percent believe the lack of technological mechanisms hinders their ability to develop new work skills.[98] Additional analysis provides further evidence of how the digital divide affects economies all over the world.
A BEG report suggests that in countries like Sweden, Switzerland, and the U.K., digital connection among communities is easier, allowing their populations to obtain a much larger share of the economy via digital business.[99] In fact, in these places, populations hold shares approximately 2.5 percentage points higher.[99] During a meeting with the United Nations, a Bangladesh representative expressed his concern that poor and undeveloped countries would be left behind due to a lack of funds to bridge the digital gap.[100]

The digital divide impacts children's ability to learn and grow in low-income school districts. Without Internet access, students are unable to cultivate the technological skills needed to understand today's dynamic economy.[101] The need for the internet starts while children are in school, where it is necessary for matters such as school portal access, homework submission, and assignment research.[102] The Federal Communications Commission's Broadband Task Force created a report showing that about 70% of teachers give students homework that demands access to broadband.[103] Approximately 65% of young scholars use the Internet at home to complete assignments and to connect with teachers and other students via discussion boards and shared files.[103] A recent study indicates that approximately 50% of students say they are unable to finish their homework due to an inability to connect to the Internet or, in some cases, to find a computer.[103] Additionally, the Public Policy Institute of California reported in 2023 that 27% of the state's school children lack the broadband necessary to attend school remotely, and 16% have no internet connection at all.[104]

Relatedly, 42% of students say they received a lower grade because of this disadvantage.[103] According to research conducted by the Center for American Progress, "if the United States were able to close the educational achievement gaps between native-born white children and black and Hispanic children, the U.S. economy would be 5.8 percent—or nearly $2.3 trillion—larger in 2050".[105]

In a reverse of this idea, well-off families, especially tech-savvy parents in Silicon Valley, carefully limit their own children's screen time. The children of wealthy families attend play-based preschool programs that emphasize social interaction instead of time spent in front of computers or other digital devices, and their parents pay to send them to schools that limit screen time.[106] American families that cannot afford high-quality childcare options are more likely to use tablet computers filled with apps for children as a cheap replacement for a babysitter, and their government-run schools encourage screen time during school.
Students in school are also learning about the digital divide.[106]

To reduce the impact of the digital divide and increase digital literacy in young people at an early age, governments have begun to develop policy focused on embedding digital literacies in both student and educator programs, for instance in Initial Teacher Training programs in Scotland.[107] The National Framework for Digital Literacies in Initial Teacher Education was developed by representatives from Higher Education institutions that offer Initial Teacher Education (ITE) programs, in conjunction with the Scottish Council of Deans of Education (SCDE) and with the support of the Scottish Government.[107] This policy-driven approach aims to establish an academic grounding in the exploration of learning and teaching digital literacies and their impact on pedagogy, as well as to ensure that educators are equipped to teach in the rapidly evolving digital environment and to continue their own professional development.

Factors such as nationality, gender, and income contribute to the digital divide across the globe, and depending on these characteristics a person's access to the internet may be reduced. According to a study conducted by the ITU in 2022, Africa has the lowest share of its population on the internet, at 40%; the next lowest is the Asia-Pacific region, at 64%. Internet access remains a problem in Least Developed Countries and Landlocked Developing Countries, where 36% of people use the internet, compared to a 66% average around the world.[91]

Men generally have more access to the internet around the world; the global gender parity score is 0.92. A gender parity score is calculated as the percentage of women who use the internet divided by the percentage of men who use the internet; ideally, countries want gender parity scores between 0.98 and 1.02. The region with the least gender parity is Africa, with a score of 0.75, followed by the Arab States at 0.87. The Americas, the Commonwealth of Independent States, and Europe have the highest gender parity scores, all between 0.98 and 1. Gender parity scores are often affected by income: low-income regions have a score of 0.65, while upper-middle-income and high-income regions have a score of 0.99.[91]

The difference between economic classes remains a prevalent aspect of the digital divide. People on low incomes use the internet at a 26% rate, followed by lower-middle income at 56%, upper-middle income at 79%, and high income at 92%. The large difference between low-income and high-income individuals can be traced to the affordability of mobile products. Products are becoming more affordable as the years pass; according to the ITU, "the global median price of mobile-broadband services dropped from 1.9 percent to 1.5 percent of average gross national income (GNI) per capita." There is still plenty of work to be done, as there is a 66 percentage point gap in internet access between low-income and high-income individuals.[91]
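As a rough illustration of the gender parity calculation described above, the following sketch applies the ITU-style formula (share of women online divided by share of men online) and checks the result against the 0.98–1.02 band mentioned in the text. The regional percentages used here are hypothetical examples, not figures from the cited ITU report.

```python
# Illustrative sketch of the gender parity score for internet use:
# parity = (% of women using the internet) / (% of men using the internet).
# The example percentages below are made up for illustration.

def gender_parity(percent_women_online: float, percent_men_online: float) -> float:
    """Return the gender parity score for internet use."""
    return percent_women_online / percent_men_online

def parity_band(score: float, low: float = 0.98, high: float = 1.02) -> str:
    """Classify a score against the 0.98-1.02 band described in the text."""
    if low <= score <= high:
        return "within parity band"
    return "below parity band" if score < low else "above parity band"

examples = {
    "Hypothetical region A": (30.0, 40.0),   # 30% of women, 40% of men online
    "Hypothetical region B": (88.2, 90.0),   # nearly equal usage
}

for region, (women, men) in examples.items():
    score = gender_parity(women, men)
    print(f"{region}: parity = {score:.2f} ({parity_band(score)})")
```

Running the sketch prints a parity score of 0.75 for the first hypothetical region, the same magnitude of gap the text reports for Africa, and 0.98 for the second, which falls just inside the parity band.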
The Facebook divide,[108][109][110][111] a concept derived from the "digital divide", is the phenomenon concerning access to, use of, and the impact of Facebook on society. It was coined at the International Conference on Management Practices for the New Economy (ICMAPRANE-17) on February 10–11, 2017.[112] Additional concepts of Facebook Native and Facebook Immigrants were suggested at the conference. Facebook divide, Facebook native, Facebook immigrants, and Facebook left-behind are concepts for social and business management research. Facebook immigrants utilize Facebook to accumulate both bonding and bridging social capital. Together, Facebook natives, Facebook immigrants, and the Facebook left-behind produce a situation of Facebook inequality. In February 2018, the Facebook Divide Index was introduced at the ICMAPRANE conference in Noida, India, to illustrate the Facebook divide phenomenon.[113]

In 2000, the United Nations Volunteers (UNV) program launched its Online Volunteering service,[114] which uses ICT as a vehicle for and in support of volunteering. It constitutes an example of a volunteering initiative that effectively contributes to bridging the digital divide. ICT-enabled volunteering has a clear added value for development: if more people collaborate online with more development institutions and initiatives, this implies an increase in person-hours dedicated to development cooperation at essentially no additional cost. This is the most visible effect of online volunteering for human development.[115]

Since May 17, 2006, the United Nations has raised awareness of the divide through World Information Society Day.[116] In 2001, it set up the Information and Communications Technology (ICT) Task Force.[117] Later UN initiatives in this area are the World Summit on the Information Society, held since 2003, and the Internet Governance Forum, set up in 2006.

As of 2009, the borderline between ICT as a necessity good and ICT as a luxury good was roughly around US$10 per person per month, or US$120 per year,[61] meaning that people consider ICT expenditure of US$120 per year a basic necessity. Since more than 40% of the world's population lives on less than US$2 per day, and around 20% live on less than US$1 per day (less than US$365 per year), these income segments would have to spend one third of their income on ICT (120/365 = 33%). By comparison, the global average of ICT spending is a mere 3% of income.[61] Potential solutions include driving down the costs of ICT, which includes low-cost technologies and shared access through telecentres.[118][119]
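To restate the affordability arithmetic above in one place, the sketch below divides the US$120-per-year ICT expenditure threshold by the annual incomes quoted in the text. The percentages it prints are simple derived figures for illustration, not additional data from the cited source.

```python
# Simple restatement of the ICT affordability arithmetic described above.
# Dividing the annual ICT cost threshold (US$10/month) by annual income
# gives the share of income that ICT would consume at each income level.

ICT_COST_PER_YEAR = 120  # US$10 per person per month

incomes_per_year = {
    "Living on under US$1/day (~US$365/year)": 365,
    "Living on under US$2/day (~US$730/year)": 730,
}

for group, income in incomes_per_year.items():
    share = ICT_COST_PER_YEAR / income
    print(f"{group}: ICT would take ~{share:.0%} of income")

# Prints roughly 33% and 16%, compared with the global average ICT spend of
# about 3% of income quoted in the text.
```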
In 2010, an "online indigenous digital library as part of public library services" was created in Durban, South Africa, to narrow the digital divide by not only giving the people of the Durban area access to this digital resource, but also by incorporating community members into the process of creating it.[121] In 2002, the Gates Foundation started the Gates Library Initiative, which provides training assistance and guidance in libraries.[122] In Kenya, lack of funding, language barriers, and technology illiteracy contributed to an overall lack of computer skills and educational advancement. This slowly began to change when foreign investment began.[123][124] In the early 2000s, the Carnegie Foundation funded a revitalization project through the Kenya National Library Service. Those resources enabled public libraries to provide information and communication technologies to their patrons. In 2012, public libraries in the Busia and Kiberia communities introduced technology resources to supplement the curriculum for primary schools. By 2013, the program expanded into ten schools.[125] Even though individuals might be capable of accessing the Internet, many face barriers to entry, such as a lack of infrastructure or an inability to comprehend or filter the information that the Internet provides. Some individuals can connect, but they do not have the knowledge to use what information ICTs and Internet technologies provide them. This leads to a focus on capabilities and skills, as well as awareness, in moving from mere access to effective usage of ICT.[126] Community informatics (CI) focuses on issues of "use" rather than "access". CI is concerned with ensuring the opportunity not only for ICT access at the community level but also, according to Michael Gurstein, that the means for the "effective use" of ICTs for community betterment and empowerment are available.[127] Gurstein has also extended the discussion of the digital divide to include issues around access to and the use of "open data", and coined the term "data divide" to refer to this issue area.[128] Since gender, age, race, income, and educational digital divides have lessened compared to the past, some researchers suggest that the digital divide is shifting from a gap in access and connectivity to ICTs to a knowledge divide.[129] A knowledge divide concerning technology suggests that the gap has moved beyond access, and the resources needed to connect to ICTs, to the ability to interpret and understand the information presented once connected.[130] The second-level digital divide, also referred to as the production gap, describes the gap that separates the consumers of content on the Internet from the producers of content.[131] As the technological digital divide between those with access to the Internet and those without is decreasing, the meaning of the term digital divide is evolving.[129] Previously, digital divide research was focused on accessibility to the Internet and Internet consumption.
However, with an increasing share of the population gaining access to the Internet, researchers are examining how people use the Internet to create content and what impact socioeconomic factors are having on user behavior.[132] New applications have made it possible for anyone with a computer and an Internet connection to be a creator of content, yet the majority of user-generated content available widely on the Internet, like public blogs, is created by a small portion of the Internet-using population. Web 2.0 technologies like Facebook, YouTube, Twitter, and blogs enable users to participate online and create content without having to understand how the technology actually works, leading to an ever-increasing digital divide between those who have the skills and understanding to interact more fully with the technology and those who are passive consumers of it.[131] Some of the reasons for this production gap include material factors like the type of Internet connection one has and the frequency of access to the Internet. The more frequently a person has access to the Internet and the faster the connection, the more opportunities they have to gain the technology skills and the more time they have to be creative.[133] Other reasons include cultural factors often associated with class and socioeconomic status. Users of lower socioeconomic status are less likely to participate in content creation due to disadvantages in education and lack of the necessary free time for the work involved in blog or website creation and maintenance.[133] Additionally, there is evidence to support the existence of the second-level digital divide at the K-12 level based on how educators use technology for instruction.[134] Schools' economic factors have been found to explain variation in how teachers use technology to promote higher-order thinking skills.[134] This article incorporates text from a free content work, licensed under CC BY-SA 3.0 IGO. Text taken from I'd blush if I could: closing gender divides in digital skills through education, UNESCO / EQUALS Skills Coalition, UNESCO.
https://en.wikipedia.org/wiki/Digital_divide
Group decision-making (also known as collaborative decision-making or collective decision-making) is a situation faced when individuals collectively make a choice from the alternatives before them. The decision is then no longer attributable to any single individual who is a member of the group, because all the individuals and social group processes, such as social influence, contribute to the outcome. The decisions made by groups are often different from those made by individuals. In workplace settings, collaborative decision-making is one of the most successful models for generating buy-in from other stakeholders, building consensus, and encouraging creativity. According to the idea of synergy, decisions made collectively also tend to be more effective than decisions made by a single individual. In this vein, certain collaborative arrangements have the potential to generate better net performance outcomes than individuals acting on their own.[1] Under normal everyday conditions, collaborative or group decision-making would often be preferred and would generate more benefits than individual decision-making when there is time for proper deliberation, discussion, and dialogue.[2] This can be achieved through the use of committees, teams, groups, partnerships, or other collaborative social processes. However, in some cases, there can also be drawbacks to this method. In extreme emergencies or crisis situations, other forms of decision-making might be preferable, as emergency actions may need to be taken more quickly with less time for deliberation.[2] On the other hand, additional considerations must also be taken into account when evaluating the appropriateness of a decision-making framework. For example, group polarization can occur at times, leading some groups to make more extreme decisions than those of their individual members, in the direction of the individuals' inclinations.[3] There are also other examples where the decisions made by a group are flawed, such as the Bay of Pigs invasion, the incident on which the groupthink model of group decision-making is based.[4] Factors that impact other social group behaviours also affect group decisions. For example, groups high in cohesion, in combination with other antecedent conditions (e.g. ideological homogeneity and insulation from dissenting opinions), have been noted to have a negative effect on group decision-making and hence on group effectiveness.[4] Moreover, when individuals make decisions as part of a group, there is a tendency to exhibit a bias towards discussing shared information (i.e. shared information bias), as opposed to unshared information. The social identity approach suggests a more general account of group decision-making than the popular groupthink model, which offers a narrow look at situations where group and other decision-making is flawed. Social identity analysis suggests that the changes which occur during collective decision-making are part of rational psychological processes which build on the essence of the group in ways that are psychologically efficient, grounded in the social reality experienced by members of the group, and have the potential to have a positive impact on society.[5] Decision-making in groups is sometimes examined separately as process and outcome. Process refers to the group interactions. Some relevant ideas include coalitions among participants as well as influence and persuasion.
The use of politics is often judged negatively, but it is a useful way to approach problems when preferences among actors are in conflict, when dependencies exist that cannot be avoided, when there are no super-ordinate authorities, and when the technical or scientific merit of the options is ambiguous. In addition to the different processes involved in making decisions, group decision support systems (GDSSs) may have different decision rules. A decision rule is the GDSS protocol a group uses to choose among scenario planning alternatives. Plurality and dictatorship are less desirable as decision rules because they do not require the involvement of the broader group to determine a choice, and thus do not engender commitment to the course of action chosen. An absence of commitment from individuals in the group can be problematic during the implementation phase of a decision. There are no perfect decision-making rules: depending on how the rules are implemented in practice and the situation, all of these can lead to situations where either no decision is made, or where decisions made are inconsistent with one another over time. Sometimes, groups may have established and clearly defined standards for making decisions, such as bylaws and statutes. However, it is often the case that the decision-making process is less formal, and might even be implicitly accepted. Social decision schemes are the methods used by a group to combine individual responses to come up with a single group decision. There are a number of these schemes; the most common include delegation, averaging of individual inputs, plurality voting, and consensus. There are strengths and weaknesses to each of these social decision schemes. Delegation saves time and is a good method for less important decisions, but ignored members might react negatively. Averaging responses will cancel out extreme opinions, but the final decision might disappoint many members. Plurality is the most consistent scheme when superior decisions are being made, and it involves the least amount of effort.[6] Voting, however, may lead to members feeling alienated when they lose a close vote, or to internal politics, or to conformity to other opinions.[7] Consensus schemes involve members more deeply and tend to lead to high levels of commitment, but it might be difficult for the group to reach such decisions.[8] Groups have many advantages and disadvantages when making decisions. Groups, by definition, are composed of two or more people, and for this reason naturally have access to more information and have a greater capacity to process this information.[9] However, they also present a number of liabilities to decision-making, such as requiring more time to make choices and, as a consequence, sometimes rushing to a low-quality agreement in order to be timely. Some issues are also so simple that a group decision-making process leads to too many cooks in the kitchen: for such trivial issues, having a group make the decision is overkill and can lead to failure. Because groups offer both advantages and disadvantages in making decisions, Victor Vroom developed a normative model of decision-making[10] that suggests different decision-making methods should be selected depending on the situation. In this model, Vroom identified five different decision-making processes.[9] The idea of using computerized support systems is discussed by James Reason under the heading of intelligent decision support systems in his work on the topic of human error.
James Reason notes that events subsequent to the Three Mile Island accident have not inspired great confidence in the efficacy of some of these methods. In the Davis-Besse incident, for example, both independent safety parameter display systems were out of action before and during the event.[11] Decision-making software is essential for autonomous robots and for different forms of active decision support for industrial operators, designers and managers. Due to the large number of considerations involved in many decisions, computer-based decision support systems (DSS) have been developed to assist decision-makers in considering the implications of various courses of thinking. They can help reduce the risk of human errors. DSSs which try to realize some human-cognitive decision-making functions are called Intelligent Decision Support Systems (IDSS).[12] On the other hand, an active and intelligent DSS is an important tool for the design of complex engineering systems and the management of large technological and business projects.[13] With age, cognitive function and decision-making ability decline. Generally speaking, younger groups benefit more from the team decision effect; as age increases, the gap between the team's decision and the optimal choice widens. Past experience can influence future decisions: when a decision produces positive results, people are more likely to make decisions in similar ways in similar situations. On the other hand, people tend to avoid repeating the same mistakes, even though future decisions based on past experience are not necessarily the best decisions. Cognitive bias is a phenomenon in which people distort their perceptions, for personal or situational reasons, when they perceive themselves, others, or the external environment. In the decision-making process, cognitive bias influences people by making them over-dependent on, or more trusting of, expected observations and prior knowledge, while discarding information or observations that are considered uncertain, rather than weighing a broader range of factors.[14] Groups have greater informational and motivational resources, and therefore have the potential to outperform individuals. However, they do not always reach this potential. Groups often lack proper communication skills. On the sender side this means that group members may lack the skills needed to express themselves clearly. On the receiver side this means that miscommunication can result from information processing limitations and faulty listening habits of human beings. In cases where an individual controls the group, it may prevent others from contributing meaningfully.[15] It is also the case that groups sometimes use discussion to avoid rather than make a decision, resorting to a variety of avoidance tactics.[9] Groups are also said, all too often, to obey two fundamental "laws".[citation needed] Research using the hidden profiles task shows that lack of information sharing is a common problem in group decision making. This happens when certain members of the group have information that is not known by all of the members in the group. If the members were all to combine their information, they would be more likely to make an optimal decision. But if people do not share all of their information, the group may make a sub-optimal decision.
Stasser and Titus have shown that partial sharing of information can lead to a wrong decision.[16] Lu and Yuan found that groups were eight times more likely to answer a problem correctly when all of the group members had all of the information than when some information was known only by select group members.[17] Individuals in a group decision-making setting are often functioning under substantial cognitive demands. As a result, cognitive and motivational biases can often affect group decision-making adversely. According to Forsyth,[9] there are three categories of potential biases that a group can fall victim to when engaging in decision-making: the misuse, abuse, or inappropriate use of information; the overlooking of useful information; and over-reliance on heuristics that over-simplify complex decisions.
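As a purely illustrative sketch of the social decision schemes described earlier in this article, the following code contrasts plurality voting, averaging of individual estimates, and a simple consensus check. The member preferences and the unanimity threshold are invented for the example and are not drawn from any study cited above.

```python
# Hypothetical sketch of three social decision schemes: plurality, averaging, consensus.
# Preferences and the consensus threshold are invented examples.
from collections import Counter
from statistics import mean

def plurality(votes):
    """The option named most often wins (plurality/majority scheme)."""
    return Counter(votes).most_common(1)[0][0]

def averaging(estimates):
    """Average individual numeric judgments into one group estimate."""
    return mean(estimates)

def consensus(votes, threshold=1.0):
    """Accept an option only if at least `threshold` of members endorse it."""
    option, count = Counter(votes).most_common(1)[0]
    return option if count / len(votes) >= threshold else None

members_choices = ["A", "B", "A", "A", "C"]
members_estimates = [10, 12, 8, 30, 11]    # the extreme value is diluted by averaging

print(plurality(members_choices))    # 'A'
print(averaging(members_estimates))  # 14.2
print(consensus(members_choices))    # None -- unanimity not reached
```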
https://en.wikipedia.org/wiki/Group_decision-making
TheIndigo Era(orIndigo economies) is a concept publicized by businessmanMikhail Fridman, describing what he views as an emerging new era of economies and economics based on ideas, innovation, and creativity, replacing those based on the possession ofnatural resources. Fridman is the co-founder ofLetterOne, an international investment business,[1]and first publicized the idea in early 2016. The word "indigo" was initially chosen based on the termindigo children, which has been used to describe people with unusual and innovative abilities. Fridman describes the Indigo Era as a disruptive era driven by extraordinary levels of human creativity, where abnormally talented individuals and entities are able to realize new levels of human potential and economic achievement. It is "a new economic era where the main source of national wealth is no longerresource rentbut the socio-economic infrastructure that allows every person to realise his or her intellectual or creative potential."[2]But, according to Fridman – based on his observations of recent economic indicators, political and market volatility, and historical patterns – it is also an era that will generate winners and losers as lagging countries and groups fail to adapt quickly enough. In late 2016 LetterOne'sGlobal Perspectivesjournal published an Indigo Index, ranking 152 countries on their ability to compete and grow as economies move away from being powered by natural resources to being powered by ideas, creativity, and digital skills. In 2017 it launched the Indigo Prize, to award new concepts of economic measurement beyond mereGDPas countries in the 21st century transition into economies where innovation, creativity, and digital skills are economic drivers. The competition is intended to "stimulate debate about factors currently measured, given evolving economies, technology and skill bases, and what should now be taken into consideration in official economic statistics that measure the health, size and growth of a modern economy."[2] In an April 2016 article inRealClearPoliticsand reprinted in theJerusalem Post, Fridman wrote: We are entering a disruptive era driven by extraordinary levels of human creativity. A new generation of curious, strong-willed and talented individuals is unhindered by convention or the past. This new “Indigo” generation is now shaping tomorrow's economy and creating national wealth. I use the term Indigo because it has been used to refer to children with special or unusual abilities. 
This is an era where abnormally talented individuals and entities are now able to realize new levels of human potential and economic achievement.[3] LetterOne'sGlobal Perspectiveswebsite adds that the indigo symbolism "embodies a breaking of the norm, something that is highly reflective of the new era that we are entering into, one that lacks convention and is driven by innovation."[4] In a series of articles published in 2016, Fridman cites the recent extreme volatility in markets,[3]and worldwide political change and instability,[3]as signs of an emerging global shift.[5][6][7]He notes two frequently cited prominent indicators of an economic shift: the sharp decline in prices of natural resources including oil, and the slowing of China's economic growth despite this decline in the cost of natural resources.[3][5][6][8][7] He and other commentators also note the rise of populism and populist leaders and candidates, both right-wing and left-wing, as these changes occur.[3][9][8][5][6][7]Meanwhile, companies likeAppleandGoogle– digital and technological companies he calls "Indigo companies" – have replaced longterm traditional natural-resource or manufacturing companies such asExxonas the world's largest companies.[3][8][5][6] Fridman observes that throughout history innovations, alternatives, and technologies have always overcome any perceived shortages of any natural resource.[3][5][6][8]He therefore posits that the new "Indigo era", fueled by digital and technological resources, will be marked by a shift away from the struggle for natural resources and their perceived scarcity, to a reliance on ideas, innovation, and creativity and on supporting the intellectual and creative potential of each human being:[3]"The world has entered a new era where the source of a nation's wealth is no longer natural resources. 
Intellectual capacity has now replaced land, raw materials and trade routes as the biggest source of wealth."[8] According to Fridman, three interconnected factors are needed for successful Indigo companies and an Indigo economy.[3][5][6][7][10] He notes that most emerging economies have focused on building physical structures (roads, buildings, cities, physical infrastructure) rather than the complex legal, political, and social systems, institutions, and changes that will support an effective free and innovative intellectual-resource economy.[3][5][6][7] The freedoms, protections, and political and legal frameworks of developed Western countries rest on centuries-long histories, socio-political traditions, and mindsets, and therefore will be difficult to replicate quickly in emerging economies.[8][5][6][7] Fridman singles out India as an emerging country whose legal infrastructure and freedom are probably adequate to survive the Indigo shift.[8][7] Fridman considers the growth of Indigo economies to be a paradigm shift; he states that the pace at which technology is developing is creating worldwide tectonic shifts, and predicts huge global change over the next five to ten years.[8][5][6][3] He and other analysts predict that the growing economic gap between free, creative economies and groups, on the one hand, and repressive, authoritarian, totalitarian, or tradition-bound economies and groups, on the other, will widen and create resentment and hostilities – whether between nations or within them.[3][9][11][5][6][7] Those left behind may be either emerging countries, or the average person – as opposed to the intellectual elites – within developed Western countries.[7][9][5][6][11] Authoritarian leaders and authoritarianism often rise during periods of uncertainty, insecurity, and economic deprivation.[8][5][6][7] Fridman maintains, however, that in this ever-changing new economic era, the main source of wealth in a country or region will no longer be a natural resource, but a social infrastructure that will allow everyone to realise their intellectual and creative potential.[10][12][13][5][6] Therefore, he asserts that "The future Indigo economy is an economy of free people.
And this means that the world will become more and more free."[3][5][6][7] In November 2016LetterOnelaunched a journal,Global Perspectives, as a platform to explore "the new emerging economic era, the Indigo era, from different perspectives, including education, religion, politics, economics, history and business" and to examine "global issuesthrough the eyes of leading commentators and business people around the world".[14]The inaugural issue contained articles by Fridman,Dominic Barton,Michael Bloomberg,Stan Greenberg,Carl Bildt,Vince Cable, Ken Robinson,Brent Hoberman,Alex Klein,Deirdre McCloskey,Yuri Milner,Nick D'Aloisio,Lynda Gratton,Parag Khanna,Ian Goldin,George Freeman,Ian Bremmer, and others.[15][16][11] The November 2016 inaugural issue ofGlobal Perspectivesalso published an Indigo Index,[17][18]which rated 152 countries based on five key metrics for doing business as economies move away from being powered by natural resources to being powered by creativity and digital skills.[19]The five metrics are: creativity and innovation, economic diversity, digital economy, freedom, and stability and legal frameworks,[20][21]which were scored based on over 30 measures from published data sources such as theWorld Bank,UNESCO, theCIRI Human Rights Data Project, theCenter for International DevelopmentatHarvard University, and theGlobal Education Monitoring Report.[22][19][17][13][12]The index sought to measure a country's entrepreneurial ecosystem, and therefore its potential to adapt and develop.[22][19][21][23] Each country was given a combined overall Indigo Score, with 200 being the highest possible score.[24][25]The 10 top-ranked countries were Sweden, Switzerland, Finland, Denmark, the UK, the Netherlands, Norway, Germany, Ireland, and Japan.[26][27][25]The United States was 18th overall.[27] The report also included three key findings: Creativity and innovation was the biggest overall driver of high scores; this accentuated the importance of fostering entrepreneurialism and lifelong learning and of investing heavily in people.[10][26][17]Nordic countries scored particularly high on the Indigo Index, with three Nordic countries in the top four and four Nordic countries in the top ten; this was attributed to their high rankings both in creativity and innovation and in freedom.[10][26][23][21][17]And the lowest-scoring countries were beset with social and political problems, such aswar,political turmoil, andcorruption.[10][17][28] In July 2017, LetterOne'sGlobal Perspectivesjournal announced the Indigo Prize, to stimulate discussion towards finding a new way of measuring the economy in the 21st century that moves beyond the limitations of mere GDP measurements.[2][29][30][31][32] Entrants were asked to submit an essay of up to 5,000 words answering the question: How would you design a new economic measure for global economies that fully acknowledges not only social and economic factors but the impact of creativity, entrepreneurship and digital skills? 
How should your new measure be used to improve the way we measure GDP in official statistics?[33] Entries were due 15 September 2017, and were open worldwide to groups or individuals over the age of 16, with entries particularly encouraged from people at academic institutions, businesses, charities, think tanks, consultancies, or other organisations.[33][29] The award amount was announced as £100,000, with second- and third-place winners to receive £25,000 and £10,000.[33][29][34] A judging panel was appointed to evaluate the entries.[34][35][30] The winners of the inaugural Indigo Prize were announced on 25 October 2017. A joint first prize of £125,000, to be equally split, was awarded to two teams of writers: Diane Coyle and Benjamin Mitra-Kahn; and Jonathan Haskel, Carol Corrado, et al.[36][37][38] A third-place "Rising star" award of £10,000 was given to Alice Lassman.[36][37][38] Coyle was professor of economics at the University of Manchester, and Mitra-Kahn is chief economist at IP Australia.[37] They proposed radically replacing GDP with a dashboard measuring six key assets: physical assets, natural capital, human capital, intellectual property, social and institutional capital, and net financial capital.[38][37][39] Their essay stated that "GDP never pretended to be a measure of economic welfare", and proposed that the new measure should assess "the range of assets needed to maximise individuals' capabilities to lead the life they would like to lead";[37] this would include "financial and physical capital but also natural and intangible capital".[40] They asserted that the new statistics should focus on measuring changes in the stock of important assets, rather than flows of income, expenditure, and output.[41] Tracking the evolution of stocks of physical assets, financial assets and liabilities, natural capital, skill levels, and implicit state liabilities would better measure the sustainability of the economy.[41][42] Coyle and Mitra-Kahn also proposed interim improvements to GDP measurements – such as better measurement of intangibles, adjusting for the distribution of income, and removing unproductive financial activity – before scrapping it entirely.[43] Following her prize-winning essay, Coyle now leads the Six Capitals research project, funded by LetterOne, at the Bennett Institute for Public Policy at Cambridge University; the project was inaugurated in January 2019 and explores social and natural capital.[39][44] Haskel is professor of economics at Imperial College Business School.[45] Rather than abandoning GDP, he proposed refining, updating, and extending the existing GDP measure.[41][45][46] He proposed better measurement of services and intangibles, and direct measurement of the economic welfare being created by digital goods.[43] His essay focused on the fact that economies have dramatically changed structure since GDP was originally developed, with more knowledge production, more digital goods, more free things and free information, and more intangible assets such as intellectual property.[41][45] He also emphasized the importance of factoring in the environment, sustainability, and societal welfare, in addition to calculating the value of goods and services that are provided for free.[37][45][46][41][47] Lassman, the recipient of the "Rising star" award, was a 19-year-old geography student at Durham University.[37] Her entry proposed a "Global Integration and Individual Potential" index, which measures each nation on two levels: its value relative to other nations, and the individuals and their contributions within
each nation.[37][38][48][49]
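The published Indigo Index methodology is not reproduced in the sources summarized here, but the general idea of a composite country score can be sketched. The example below assumes equal weighting of the five named metric areas and sub-scores already normalized to a 0–1 range; both assumptions, like the figures, are illustrative and do not reflect LetterOne's actual procedure.

```python
# Illustrative sketch only: a composite country score out of 200 built from the five
# metric areas named in the article. Equal weighting and 0-1 normalized sub-scores
# are assumptions made for this example, not the published methodology.

METRICS = [
    "creativity_and_innovation",
    "economic_diversity",
    "digital_economy",
    "freedom",
    "stability_and_legal_frameworks",
]
MAX_SCORE = 200

def composite_score(subscores: dict) -> float:
    """Average the normalized metric sub-scores and scale to the 200-point ceiling."""
    values = [subscores[m] for m in METRICS]   # each expected in [0, 1]
    return sum(values) / len(values) * MAX_SCORE

example_country = {                            # invented sub-scores
    "creativity_and_innovation": 0.85,
    "economic_diversity": 0.70,
    "digital_economy": 0.80,
    "freedom": 0.90,
    "stability_and_legal_frameworks": 0.75,
}
print(round(composite_score(example_country), 1))   # 160.0
```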
https://en.wikipedia.org/wiki/Indigo_Era_(economics)
Information ecology is the application of ecological concepts for modeling the information society. It considers the dynamics and properties of the increasingly dense, complex and important digital informational environment. "Information ecology" is often used as a metaphor, viewing the information space as an ecosystem, the information ecosystem. Information ecology also makes a connection to the concept of collective intelligence and knowledge ecology (Pór 2000). Eddy et al. (2014) use information ecology for science-policy integration in ecosystem-based management (EBM). In The Wealth of Networks: How Social Production Transforms Markets and Freedom, a book published in 2006 and available under a Creative Commons license on its own wikispace,[1] Yochai Benkler provides an analytic framework for the emergence of the networked information economy that draws deeply on the language and perspectives of information ecology, together with observations and analyses of high-visibility examples of successful peer production processes, citing Wikipedia as a prime example. Bonnie Nardi and Vicki O'Day, in their book Information Ecologies: Using Technology with Heart (Nardi & O'Day 1999), apply the ecology metaphor to local environments, such as libraries and schools, in preference to the more common metaphors for technology as tool, text, or system. Nardi and O'Day's book represents the first specific treatment of information ecology by anthropologists. H.E. Kuchka[2] situates information within the socially distributed cognition of cultural systems. Casagrande and Peters[3] use information ecology for an anthropological critique of Southwest US water policy. Stepp (1999)[4] published a prospectus for the anthropological study of information ecology. Information ecology was also used as a book title by Thomas H. Davenport and Laurence Prusak (Davenport & Prusak 1997), with a focus on the organizational dimensions of information ecology. There was also an academic research project at DSTC called Information ecology, concerned with distributed information systems and online communities. Law schools represent another area where the phrase is gaining increasing acceptance, e.g. the NYU Law School conference Towards a Free Information Ecology[5] and a lecture series on information ecology at Duke University Law School's Center for the Study of the Public Domain. The field of library science has seen significant adoption of the term: librarians have been described by Nardi and O'Day as a "keystone species in information ecology",[6][7] and references to information ecology range from the Collaborative Digital Reference Service of the Library of Congress[8] to children's library database administrators in Russia. Eddy et al. (2014) use principles of information ecology to develop a framework for integrating scientific information into decision-making in ecosystem-based management (EBM). Using a metaphor of how a species adapts to environmental changes through information processing, they developed a three-tiered model that differentiates primary, secondary and tertiary levels of information processing, within both the technical and human domains.
https://en.wikipedia.org/wiki/Information_ecology
The knowledge divide is the gap between those who can find, create, manage, process, and disseminate information or knowledge, and those who are impaired in this process. According to a 2005 UNESCO World Report, the rise in the 21st century of a global information society has resulted in the emergence of knowledge as a valuable resource, increasingly determining who has access to power and profit.[1] The rapid dissemination of information on a potentially global scale as a result of new information media[2] and the globally uneven ability to assimilate knowledge and information have resulted in potentially expanding gaps in knowledge between individuals and nations.[3] The digital divide is an extension of the knowledge divide, dividing people who have access to the internet from those who do not.[citation needed] The knowledge divide also encompasses inequalities of knowledge among different identities, including but not limited to race, economic status, and gender. In the 21st century, the emergence of the knowledge society has become pervasive.[4] The world's economy and individual societies are transforming at a fast pace, and together with information and communication technologies (ICT), these new paradigms have the power to reshape the global economy.[5] In order to keep pace with innovations and to come up with new ideas, people need to produce and manage knowledge; this is why knowledge has become essential for all societies. While knowledge has become essential for all societies due to the growth of new technologies, the increase in mass-media information continues to widen the knowledge divide between people with different levels of education.[6] According to UNESCO and the World Bank,[7] knowledge gaps between nations may occur due to the varying degrees to which individual nations incorporate a number of key elements. A great difference has been noted between the North and the South (rich countries vs. poor countries): the development of knowledge depends on the spread of Internet and computer technology and also on the development of education in these countries, and a country that has attained a higher literacy level will have a higher level of knowledge. Indeed, UNESCO's report details many social issues of the knowledge divide related to globalization, and notes knowledge divides along several further dimensions. Scholars have made similar suggestions for closing or minimizing the knowledge divide between individuals, communities, and nations. Providing access to computers and other technologies that disseminate knowledge is not enough to bridge the digital divide; rather, importance must be put on developing digital literacy to bridge the gap.[28] Addressing the digital divide will not be enough to close the knowledge divide, as disseminating relevant knowledge also depends on training and cognitive skills.[29]
https://en.wikipedia.org/wiki/Knowledge_divide
Noocracy(/noʊˈɒkrəsi/,nousmeaning 'mind" or 'intellect', andkratosmeaning 'power' or 'authority') is a type of government where decisions are delegated to those deemedwisest. The idea is classically advanced, among others, byPlato,al-FarabiandConfucius. Platoin hisLawsconsidered such a city a "sophocracy," i.e. rule of thephilosopher kings, but some consider him a proponent of "noocracy" in the same vein.[1] In modern history, similar concepts were introduced byVladimir Vernadsky, who did not use this term, but the term "noosphere".Teilhard de Chardinis also notable. In turn,Mikhail Epsteindefined noocracy as "the thinking matter increases its mass in nature and geo- and biosphere grow into noosphere, the future of the humanity can be envisioned as noocracy—that is the power of the collective brain rather than separate individuals representing certain social groups or society as whole". In a more concrete sense, one might find pertinent the paradigmatic controversy surroundinggenetically modified foodororganisms(GMOs). Here the opinions of a likely under-informed public and those of experts are well-known to be starkly in conflict, potentially rendering it a textbook case for setting up such a polity. A council of those reasonably deemed more wise than most, would then, among other things, arguably be expected to not abide by such a strictprecautionary principle. Adhering toJason F. Brennan'staxonomy of the roughly equivalent concept he himself instead designates "epistocracy",[2]one may already discern some basic ways of rendering governance more wise a process: • “Restricted suffrage”: Give voting rights only to those who prove themselves sufficiently well informed to earn the right to cast a ballot. Test to determine the right to vote. Everyone would be eligible to take the exam, but only those who show mastery of the basic concepts of political science, economics, and sociology would earn permission to vote. To make the test fair, focus the questions on objective topics. To create an incentive, voters who pass the test could receive a $1,000 bonus. A citizen who failed the test but wanted to vote could pay a penalty of $2,000, similar to agas-guzzler tax. • “Plural voting”: Everyone gets a vote, but the better-educated and more-informed get more votes. This system, espoused by the philosopherJohn Stuart Mill, holds that political participation helps voters feel empowered. It also acknowledges that stupid voters make bad decisions. It favors those who can prove their competence. • “Enfranchisement lottery”[also known asSortition]: Before each election, hold a random drawing to grant voting rights. Winners would have to earn the right to vote, perhaps by participating in forums with other voters. The random nature of the lottery would ensure the electorate reflects the demographics of the larger population. • “Epistocratic veto”: Every citizen retains the right to vote, but an epistocratic branch of government could overrule democratic deliberations. Membership in this deliberative body would be open to any member of society, but qualifying would require passing difficult tests and undergoing criminal background checks. People with conflicts of interest would be disqualified. This council of expert overseers couldn’t create new legislation or regulation but could overrule decisions it deems misguided. The council could block the candidacies of unqualified candidates; this might create gridlock but would force voters to consider candidates carefully. 
• “Simulated oracle”: In this model, all citizens are asked simultaneously to vote on policies or candidates, to take a test of basic political knowledge, and to indicate their demographics. With these three sets of data, the government can estimate the public's “enlightened preferences,” for example, what a fully informed but demographically identical voting public would want. It then implements these enlightened preferences.[3] Proponents of noocratic theory cite evidence suggesting that voters in modern democracies are largely ignorant, misinformed and irrational.[4] Therefore, the one-person-one-vote mechanism proposed by democracy cannot be used to produce efficient policy outcomes, and the transfer of power to a smaller, informed and rational group would be more appropriate. The irrationality of voters inherent in democracies can be explained by two major behavioral and cognitive patterns. Firstly, most voters think that the marginal contribution of their vote will not make a difference to election outcomes; therefore, they do not find it useful to inform themselves on political matters.[4] In other words, due to the time and effort required to acquire new information, voters rationally prefer to remain ignorant. Moreover, it has been shown that most citizens process political information in deeply biased, partisan, motivated ways rather than in dispassionate, rational ways.[4] This psychological phenomenon causes voters to identify strongly with a certain political group, to seek out evidence that supports arguments aligning with their preferred ideological inclinations, and eventually to vote with a high level of bias. These irrational political behaviors prevent voters from making calculated choices and opting for the right policy proposals. On the other hand, many political experiments have shown that as voters become more informed, they tend to support better policies, demonstrating that the acquisition of information has a direct impact on rational voting.[4] According to noocrats, given the complex nature of political decisions, it is not reasonable to assume that a citizen would have the necessary knowledge to decide on the means to achieve their political aims. In general, political actions require a great deal of social scientific knowledge from various fields, such as economics, sociology, international relations, and public policy; however, an ordinary voter is hardly specialized enough in any of those fields to make the optimal decision. To address this issue, Christiano proposes a ruling system based on a division of political labor, in which citizens set the agenda for political discussions and determine the aims of the society, whereas legislators are in charge of deciding on the means to achieve these aims.[5] For noocrats, transferring the decision-making mechanism to a specifically trained, specialized and experienced body is expected to result in superior and more efficient policy outcomes. The recent economic success of some countries that have a noocratic ruling element of sorts provides a basis for this particular argument in favor of noocracy. For instance, Singapore has a political system that favors meritocracy; the path to government in Singapore is structured in such a way that only those with above-average skills are identified, through strict university-entrance exams, recruiting processes, and the like, and then rigorously trained to be able to devise the best solutions that benefit the entire society.
In the words of the country's founding father, Lee Kuan Yew, Singapore is a society based on effort and merit, not on wealth or privilege depending on birth.[6] In order to develop Singapore's technocratic system further, some thinkers, like Parag Khanna, have proposed that the country adopt a model of direct technocracy, soliciting citizen input on essential matters through online polls, referendums, and the like, and asking a committee of experts to analyze this data to determine the best course of action.[7] Noocracies, like technocracies, have been criticized for meritocratic failings, such as potentially upholding a more or less permanent ruling class. Others have highlighted more democratic ideals as better epistemic models of law and policy. Criticisms of noocracy come in multiple forms, two of which focus on the efficacy of noocracies and on their political viability. Criticisms of noocracy in all its forms – including technocracy, meritocracy, and epistocracy (the focus of Jason Brennan's oft-cited book) – range from support of direct democracy instead to proposed alterations to how representation in democracy is conceived. Political theorist Hélène Landemore, while arguing for representatives to effectively enact legislation important to the polity, criticizes conceptions of representation that aim especially to remove the people from the process of making decisions, and thereby nullify their political power.[8] Noocracy, especially as it is conceived in Jason Brennan's Against Democracy, aims specifically to separate the people from decision-making on the basis of the immensely superior knowledge of officials, who will presumably make better decisions than laypeople. Jason Brennan's epistocracy, specifically, is at odds with democracy and with certain criteria for democracies that theorists have proposed. Robert Dahl's Polyarchy sets out certain rules for democracies that govern many people and the rights that citizens must be granted. His demand that the government not discriminatorily heed the preferences of full members of the polity is abridged by Brennan's "restricted suffrage" and "plural voting" schemes of epistocracy.[9][4] In the eighth chapter of his book, Brennan posits a system of graduated voting power that gives people more votes based on established levels of education achieved, with the number of additional votes granted to a hypothetical citizen increasing at each level, from turning sixteen to completing high school, a bachelor's degree, a master's degree, and so forth.[4] Dahl wrote, however, that any democracy that rules over a large group of people must accept and validate "alternative sources of information."[9] The commonly cited argument is that granting the full powers of citizenship on the basis of a system like formal educational attainment does not account for the other ways that people can consume information, and still eschews consideration of the uneducated within a group. Noocracy also receives criticism for its claims to efficiency. Brennan writes that one of the many reasons that common people cannot be trusted to make decisions for the state is that reasoning is commonly motivated, and, therefore, people decide what policies to support based on their connection to those proposing and supporting the measures, not based on what is most effective.
He contrasts real people with the ultra-reasonable vulcan that he mentions throughout the book.[4] That vulcan reflects Plato's philosopher king and, in a more realistic sense, the academic elites whom Michael Young satirized in his essay The Rise of the Meritocracy.[10][11] Modern political theorists do not necessarily denounce a biased viewpoint in politics, however, though those biases are not treated in the way they are commonly considered. Professor Landemore draws on the existence of cognitive diversity to argue that any group of people representing great diversity in their approaches to problem-solving (cognition) is more likely to succeed than groups that do not.[12] She further illustrates her point with the example of a New Haven task force made up of private citizens from many careers, politicians, and police who needed to reduce crime on an unlit bridge; they all used different aspects of their experience to arrive at the solution, which was to install solar lamps on the bridge. That solution has proven effective, with not a single mugging reported there since the lamp installation as of November 2010.[12] Her argument lies mainly in the refutation of noocratic principles, for they do not utilize the increased problem-solving skill of a diverse pool when the political system becomes a debate among elites alone, rather than a debate involving the whole polity.[8] To some theorists, noocracy is built on a fantasy that will uphold current structures of elite power while remaining ineffective. Writing for the New Yorker, Caleb Crain notes that there is little to suggest that the vulcans Brennan exalts actually exist.[13] Crain mentions a study that appears in Brennan's book showing that even those who have proven they have superb skills in mathematics do not employ those skills if their use threatens an already-held political belief.
While Brennan utilized that study to demonstrate how deeply rooted political tribalism is in all people, Crain drew on it to question the very nature of an epistocratic body that is supposed to make policy with greater regard for knowledge and truth than the ordinary citizen can.[4][13] The only way to correct for that seems, to many, to be to widen the circle of deliberation (as discussed above), because policy decisions that were made with more input and approval from the people last longer and even garner the agreement of the experts.[14][12] To further illustrate that experts, too, are flawed, Crain enumerates some of the expert-endorsed political decisions that he has deemed failures in recent years: "invading Iraq, having a single European currency, grinding subprime mortgages into the sausage known as collateralized debt obligations."[13] Given the contention around the reasoning behind those political decisions, political theorist David Estlund posited what he considered to be one of the prime arguments against epistocracy – bias in choosing voters.[15] His fear was that the method by which voters, and the number of votes each voter receives, are chosen might be biased in a way that people had not been able to identify and could not, therefore, rectify.[15] Even the aspects of the modes of selecting voters that are known cause many theorists concern, as both Brennan and Crain note that the majority of poor black women would be excluded from the enfranchised polity and risk seeing their needs represented even less than they currently are.[13] Proponents of democracy attempt to show that noocracy is intrinsically unjust on two dimensions, citing its unfairness and its bad results. The former argument states that since people with different income levels and educational backgrounds have unequal access to information, the epistocratic legislative body will naturally be composed of citizens with higher economic status, and thus fail to equally represent the different demographics of the society. The latter argument concerns policy outcomes: since there will be demographic overrepresentation and underrepresentation in the noocratic body, the system will produce unjust outcomes, favoring the demographically advantaged group.[16] Brennan defends noocracy against these two criticisms, presenting a rationale for the system. In rejecting the unfairness argument put forward by democrats, Brennan argues that the voting electorate in modern democracies is also demographically disproportionate; empirical studies have demonstrated that voters coming from privileged backgrounds, such as white, middle-aged, higher-income men, tend to vote at a higher rate than other demographic groups.[16] Although de jure every group has the same right to vote under the one-person-one-vote assumption, de facto practices show that privileged people have more influence on election results. As a result, the representatives will not match the demographics of the society either, which makes democracy seem unjust in practice. With the right type of noocracy, the unfairness effect can actually be minimized; for instance, the enfranchisement lottery, in which a legislative electorate is selected at random by lottery and then incentivized to become competent to address political issues, illustrates a fair representation methodology thanks to its randomness.
To refute the latter claim, Brennan states that voters do not vote selfishly; in other words, the advantaged group does not attempt to undermine the interests of the minority group.[16] Therefore, he argues, the worry that noocratic bodies demographically skewed towards the advantaged group will make decisions in its favor fails. According to Brennan, noocracy can serve in a way that improves the welfare of the overall community, rather than that of certain individuals.
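Brennan's "simulated oracle," described earlier, amounts to a statistical estimate of what a fully informed but demographically identical electorate would prefer. The following is a minimal, hypothetical sketch of that idea only: the survey data, the 0–10 knowledge scale, and the linear extrapolation per demographic group are all invented for illustration and do not reflect any published procedure.

```python
# Hypothetical sketch of a "simulated oracle" estimate: extrapolate each demographic
# group's support for a policy to full knowledge, then weight by group size.
# Data, scale, and model are illustrative assumptions, not a published method.
from statistics import mean

# Each respondent: (demographic group, knowledge score 0-10, support for policy 0/1)
survey = [
    ("group_a", 2, 0), ("group_a", 5, 0), ("group_a", 8, 1), ("group_a", 9, 1),
    ("group_b", 1, 1), ("group_b", 4, 1), ("group_b", 7, 0), ("group_b", 10, 0),
]

def fit_line(xs, ys):
    """Ordinary least squares slope and intercept for y ~ x."""
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def enlightened_support(data, full_knowledge=10):
    """Predicted overall support if every respondent had full knowledge."""
    groups = {}
    for g, k, s in data:
        groups.setdefault(g, ([], []))
        groups[g][0].append(k)
        groups[g][1].append(s)
    estimate = 0.0
    for g, (ks, ss) in groups.items():
        slope, intercept = fit_line(ks, ss)
        predicted = min(1.0, max(0.0, slope * full_knowledge + intercept))
        estimate += predicted * (len(ks) / len(data))   # weight by group share
    return estimate

print(round(enlightened_support(survey), 2))   # 0.5 for this invented sample
```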
https://en.wikipedia.org/wiki/Noocracy
Social marketing intelligence is the method of extrapolating valuable information from social network interactions and data flows that can enable companies to launch new products and services into the market at greater speed and lower cost. This is an area of ongoing research; however, companies using social marketing intelligence have reported significant improvements in marketing campaigns.[citation needed] Through social marketing intelligence, companies can identify the people who are most influential within their communities. These are the most connected people within any given social network. These people, sometimes called alpha users or hubs as in small-world network theory, have considerable influence over the spread of information within their social network.[1] Alpha users are key elements of any social network, managing the connectivity of the core members of the community. Similar to how viruses spread in nature, there is an initial starting point to communications in social networks, and the originators of such communications are alpha users. They tend to be highly connected users with exceptional influence over the other thought-leaders of any social network. Before digital communications, it was only possible to isolate the most influential members of a community by interviewing every member and tracing their full communication patterns within the social network. Traditional fixed landline telephone and internet use did not give enough accuracy to pinpoint alpha users to a meaningful degree. With the advent of mobile phones, a personal digital communication channel became available to study. Early research by mathematicians at Xtract[1] in Finland produced models that suggested mobile networks could indeed track the full communication and isolate the alpha users. Since then, several companies including Xtract have launched commercial tools to detect alpha users, usually using mobile operator billing and telecoms traffic data. Engagement marketing campaigns attempt to use alpha users as spokespersons in marketing and advertising. The idea is that consumers will trust the opinion of a friend or known contact from a social network more than the random marketing and advertising messages of companies and brands. The desire is to achieve viral marketing effects by which the alpha users would spread the messages further. Alpha users were first briefly discussed in public in the book 3G Marketing in 2004.[2] The first industry article about alpha users was by Ahonen and Ahvenainen in Total Telecom in February 2005. The first telecoms conference where the alpha user concept was explained was the 3G Mobile World Congress in Tokyo in January 2005. The topic was part of the strategy keynote address at the 3GSM World Congress in Cannes in February 2005. The first book to discuss alpha users at length was Communities Dominate Brands in 2005.[3]
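Identifying alpha users amounts to finding the most connected nodes in a communication graph. The sketch below is purely illustrative: the call records are invented, and ranking by the number of distinct contacts is only a crude proxy; commercial tools built on operator billing and traffic data are far more sophisticated.

```python
# Hypothetical sketch: rank "alpha users" by how many distinct contacts they
# communicate with in a set of call records. Records are invented; real systems
# use far richer billing/traffic data than a simple degree count.
from collections import defaultdict

call_records = [
    ("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
    ("bob", "carol"), ("eve", "alice"), ("eve", "frank"),
]

def rank_alpha_users(records, top_n=3):
    """Return the top_n users with the most distinct communication partners."""
    contacts = defaultdict(set)
    for caller, callee in records:
        contacts[caller].add(callee)
        contacts[callee].add(caller)
    return sorted(contacts, key=lambda u: len(contacts[u]), reverse=True)[:top_n]

print(rank_alpha_users(call_records))   # 'alice' ranks first in this invented graph
```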
https://en.wikipedia.org/wiki/Social_marketing_intelligence#Alpha_users
Gig workers are independent contractors, online platform workers,[1] contract firm workers, on-demand workers,[2] and temporary workers.[3] Gig workers enter into formal agreements with on-demand companies to provide services to the company's clients.[4] In many countries, the legal classification of gig workers is still being debated, with companies classifying their workers as "independent contractors", while organized labor advocates have been lobbying for them to be classified as "employees", which would legally require companies to provide the full suite of employee benefits like time-and-a-half for overtime, paid sick time, employer-provided health care, bargaining rights, and unemployment insurance, among others. In 2020, voters in California approved 2020 California Proposition 22, which created a third worker classification whereby gig-worker drivers are classified as contractors but receive some benefits, such as a minimum wage, mileage reimbursement, and others. Gig has various meanings in English, but two modern ones: any paid job or role, especially for a musician or a performer, and any job, especially one that is temporary.[5] The earliest usage of the word gig in the sense of "any, usually temporary, paid job" is from a 1952 piece by Jack Kerouac about his gig as a part-time brakeman for the Southern Pacific Railroad.[6] In the 2000s, the digital transformation of the economy and industry developed rapidly due to the development of information and communication technologies such as the Internet and the popularization of smartphones. As a result, on-demand platforms based on digital technology have created jobs and forms of employment that are differentiated from existing offline transactions by their level of accessibility, convenience and price competitiveness.[7] Normally "work" describes a full-time job with set working hours, including benefits. But the definition of work began to change with changing economic conditions and continued technological advances, and the change in the economy created a new labor force characterized by independent and contractual labor.[8] Uberisation or uberization is a neologism describing the commercialization of an existing service industry by new participants using computing platforms, such as mobile applications, in order to aggregate transactions between clients and providers of a service, often bypassing the role of existing intermediaries as part of the so-called platform economy. This business model has different operating costs compared to a traditional business.[9] Uberization is derived from the company name "Uber". Uberization has also raised concerns over government regulations and taxation, insofar as the formalized application of the sharing economy has led to disputes over the extent to which the provider of services via an uberized platform should be held accountable to corporate regulations and tax obligations.[10] In 2018, 36% of US workers participated in the gig economy through either their primary or secondary jobs.[11] The number of people doing gig work in major economies is generally less than 10 percent of the economically active population; in Europe, for example, 9.7% of adults from 14 EU countries participated in the gig economy in 2017, according to one survey.
Meanwhile, the size of the gig workforce, which covers independent or non-conventional workers, is estimated at 20% to 30% of the economically active population in the United States and Europe.[7] A 2016 study by the McKinsey Global Institute concluded that, across America and England, there were a total of 162 million people who were involved in some type of independent work.[12] Moreover, their payment is linked to the gigs they perform, which could be deliveries, rentals or other services.[13] Because a lot of gig work can be done online, gig workers find themselves competing with one another in a 'planetary labour market'.[14] Many factors go into a desirable job, and the best employers focus on the aspects of work that are most attractive to today's increasingly competitive and fluid labor force.[11] Traditional workers have a long-term employer–employee relationship in which the worker is paid by the hour or year, earning a wage or salary. Outside of that arrangement, work tends to be temporary or project-based: workers are hired to complete a particular task or for a certain period of time.[15] Coordination of jobs through an on-demand company reduces entry and operating costs for providers and allows workers' participation to be more transitory in gig markets (i.e., they have greater flexibility around work hours).[4] Freelancers sell their skills to maximize their freedom, while full-time gig workers leverage digital service-on-demand platforms and job matching apps to level up their skills.[16] Another example of temporary workers is digital nomads. Digital nomads have a mobile lifestyle combining work and leisure, requiring a particular set of skills and equipment.[17] Gig work enables digital nomads by offering flexible, location-independent job opportunities that can be performed remotely, typically through digital platforms, allowing for a lifestyle of travel and work anywhere with internet connectivity. It is important to distinguish employment in the sharing economy from employment through zero-hour contracts, a term primarily used in the United Kingdom to refer to a contract in which an employer is not obliged to provide any minimum number of working hours to an employee. Employment in the gig economy entails receiving compensation for one key performance indicator, which, for example, is defined as parcels delivered or taxi rides completed. Another feature is that workers can opt to refuse an order. Although, under a zero-hour contract, employers likewise do not have to guarantee employment and employees can refuse to take an order, workers under such a contract are paid by the hour and not directly through business-related indicators as in the case of the gig economy.[18] Ghost work is a specific type of labor that is typically task-based and invisible to the end user.[19] Ghost workers work on discrete tasks for a company, but they do not have a relationship with the company beyond assignment of the task and the minimal training necessary. A key characteristic of ghost work is the completion of small tasks to assist in machine learning or automation.[20] Gig workers have high levels of flexibility, autonomy, task variety, and complexity.[21] The gig economy has also raised some concerns. First, these jobs generally confer few employer-provided benefits and workplace protections.
Second, technological developments occurring in the workplace have come to blur the legal definitions of the terms "employee" and "employer" in ways that were unimaginable when employment regulations in the United States like the Wagner Act of 1935 and the Fair Labor Standards Act of 1938 were written.[8] These working arrangements and the mechanisms of control that accompany them can result in low pay, social isolation, working unsocial and irregular hours, overwork, sleep deprivation and exhaustion.[22] According to a 2021 report by the World Health Organization and the International Labour Organization, the expansion of the gig economy can be seen as one significant factor in the increase in deaths among workers who work over 55 hours a week (relative to those who work 35–40 hours), which rose from 600,000 deaths in 2000 to 750,000 in 2016.[23] The report found that in 2016, 9% of the world's population worked more than 55 hours weekly, and this was more prevalent among men, as well as workers in the Western Pacific and South-East Asia regions.[23] Research has also suggested poor mental health outcomes amongst gig workers.[24] Legislatures have adopted regulations intended to protect gig economy workers, mainly by forcing employers to provide gig workers with benefits normally reserved for traditional employees. Critics of such regulations have asserted that these obligations have negative consequences, with employers almost inevitably reducing wages to compensate for the increased benefits or even terminating employment when they have no leeway to reduce wages.[25] There are several gender differences within gig work, from the number of women who are participating to the gender pay gap.[26] Globally, women's rates of participation in the gig economy differ by country. For example, in the United States, female gig workers make up 55% of the gig work population.[27] In India, 28% of the gig workforce consists of women.[28] The platform economy has been described as conferring a professional status that allows women to participate in paid work without disrupting social hierarchies and while managing household and childcare responsibilities. The advent of home service providers and beauticians within the gig economy has led to the formalizing and feminization of casual labor, dubbed "pink collar work".[29] In October 2021, India's first women-led gig workers' strike saw 100 women agitating outside the Gurugram, Haryana office of Urban Company, a platform that provides at-home services, protesting "low wages, high commissions and poor safety conditions".[29] This led to a lawsuit being filed by Urban Company against its workers for "instigating violence against the Company". The lawsuit stated that Urban Company was an aggregator connecting customers to independent workers and sought a permanent prohibitory injunction from the court against protests by the Urban Company workers.[30] The protest was eventually called off following the imposition of Section 144 of the Criminal Procedure Code in Gurugram. The gig economy is ostensibly less gender-segregated worldwide than the traditional labor market. However, women across the world continue to protest against gender gaps such as lower wages and working hours and the lack of flexibility. The COVID-19 pandemic highlighted the need for worker protections for women who work in the gig economy for supplemental income.[31] Gig work has witnessed a gendered division of labor similar to that which exists within traditional work.
The platform economy has particularly attracted female service providers due to the flexibility it offers. For example, 80% of women on DoorDash said that flexibility is the main reason they pursue gig work.[32] One reason for this is that many women need to balance work with familial responsibilities and are therefore more likely than men to participate in gig work for scheduling reasons.[33] For many women, platform-based food delivery work also provides an opportunity to monetize previously unpaid domestic skills like food shopping.[33] Platform-based work is also highly segregated by gender. Men in the gig economy typically perform traditionally male tasks, most notably transportation.[34] A study in Australia found that the most common task for male gig workers was driving, particularly for Uber.[34] Women, on the other hand, tend to perform traditionally female tasks like food shopping, care work, cleaning, and creative jobs like graphic design and writing tasks.[34] There has also been a recent rise in women joining the delivery economy.[33] Women now make up just under half of the delivery people on the Uber Eats platform, and DoorDash now reports that 58% of their delivery drivers are women.[32] Aside from the flexibility, women tend to prefer delivery work to ride-sharing work because of safety concerns about being a female driver in ride-sharing services. There have been various accounts of sexual harassment claims filed by female Uber drivers.[35] A 2019 safety report released by Uber reported 6,000 incidents of sexual assault in 2017–2018, experienced by both riders and drivers.[36] Despite the prevalence of harassment and assault, platforms do little to protect women from bias, harassment, and violence.[36] Some platforms have implemented preventative measures to protect both customers and workers.[36] Most notably, Uber now requires drivers to complete anti-sexual violence training, and its app now includes a 'panic button' feature that connects users to 911 dispatchers; however, these measures are widely believed to be insufficient.[36] For instance, women often face drunken and disorderly customers and are left to deal with potentially dangerous individuals on their own with little support from platforms, which provide minimal guidelines for how to respond in dangerous situations.[37] Gender stereotypes and customer bias also mean that customers are more likely to challenge women's decisions, making it difficult for female drivers to defend themselves and advocate for themselves in customer interactions.[37] The way many platforms are designed also pressures workers, particularly women, to sacrifice their safety in order to maintain their standing on the platform. Platforms like Uber assign work based on the ratings workers receive from customers. Low ratings can result in a worker receiving less work or being removed from the platform entirely, creating an environment where workers often tolerate some level of harassment to avoid a low rating that may jeopardize their earnings.[37] Assault and harassment also place undue financial burdens on female gig workers. Since gig workers are typically categorized as independent contractors, they are not extended the protections and benefits of traditional employees. For instance, independent contractors are not covered under the provisions of the United States' Fair Labor Standards Act (FLSA).
As a result, if a worker needs to take time off to recover from harassment or assault that they experienced while working on a platform, the financial burden of that recovery time falls entirely on the worker, which means that many women continue to work under conditions that feel unsafe in order to avoid a loss of income.[37] The literature on thegender pay gapin theplatform economyis mixed. But many studies show that women continue to earn less than men, even in platform-based economies.[38][39]The gender pay gap for platform-based work is also typically similar in magnitude to the pay gap observed for sectors outside the gig economy.[33]One analysis of Uber drivers in the United States found that on average, women earned about 7% less than their male counterparts.[40]On Amazon's platform,Mechanical Turk(MTurk), which allows companies to hire people to perform simple online tasks that are difficult to automate, women earned about 10.5% less per hour of work than men, largely because women tended to take breaks between tasks rather than working continuously through a series of tasks to accommodate caregiving responsibilities, particularly young children.[41] Many workers cite flexibility as a primary reason for choosing to engage in gig work, however that flexibility is subject to some limitations that may have gendered impacts. The primary limitation is that imposed by surge pricing. By tying pricing to demand, surge pricing incentivizes workers to be online during high-traffic or high-demand times.[42]Surge pricing times may conflict with non-work commitments like caregiving responsibilities, creating a trade-off between flexibility and higher earnings.[42] Measuring the size of the gig workforce is difficult because of the different definitions of what constitutes "gig work"; limitations in the methods used to collect data via household surveys versus information from business establishments; and differing legal definitions of workers under tax, workplace, and other public policies.[43] Gig work's appearance has been related to wide changes in the economy. Advances in globalization and technology put pressure on companies to respond quickly to market changes. Securing labor through nontraditional agreements such as gig work will enable companies to quickly adjust the size of their workforce. This can help companies increase their profits. From this point of view, the unconventional gig work is a fundamental component of today's economy, and it is unlikely to disappear anytime soon.[43] In their book,The Gig Economy, Woodcock and Graham outline four pathways worker-friendly futures for the gig economy: increased transparency, better regulation, stronger collective organisation of workers, and platforms run as cooperatives or public infrastructures.[44] When it comes to gig workers in Africa, there are significant variations across different countries. Sub-Saharan Africa comprises 13% of the world's workforce and over 85% of the employment in Africa is considered informal.[45] NITI Aayogdefines 'gig workers' as those engaged in work outside of the traditional employer-employee arrangement. In 2020–21, the gig economy was estimated to employ 7.7 million workers, with a projected workforce of 23.5 million by 2029–30. The industry is expected to produce a revenue of $455 billion by 2024.[46]47% of gig workers are employed in medium-skilled jobs, about 22% in high-skilled jobs, and about 31% in low-skilled jobs. 
93% of the Indian population is employed in theinformal economy, which is dependent on local linguistic, ethnic and regional dynamics and networks.[47]The technologization of informal labor with app-based work has obviated the need to navigate these local systems for work and payment. Rural-to-urban migrants form a majority of the gig workforce, which serves an intermediary work settlement and an alternative to unregulated contractors who place them at risk of trafficking and other forms of exploitation.[48]Class and caste identities that have historically been excluded from the formal labor market have utilized the gig economy as a means to escape discrimination.[49]However, the term "platform paternalism" has emerged to describe the perpetuation of caste and class hierarchies, trapping workers in jobs with very little security and no potential for long-term growth.[50]For instance, caste-oppressed women continue to dominate low-paying work, such as cleaning and washing in households.[51]BookMyBai, a platform service that helps people hire house-maids and caretakers, has provisions to request workers from specific geographic regions and religions. This has been criticized for perpetuatingcaste-based discrimination.[52] TheIndian Federation of App-based Transport Workersand the Telangana Gig and Platform Workers Union currently have 36,000 and 10,000 members respectively, including cab drivers, food and grocery delivery workers, and e-commerce delivery persons.[53]Some of the demands of these unions include security benefits, higher base fares and protection against exploitation by aggregator companies. In response, the Indian parliament passed new laws guaranteeing social security and occupational health and safety of gig workers in 2020. These laws are yet to be implemented.[54]In its 2021 report, NITI Aayog also recommended fiscal incentives including tax breaks or startup grants for companies with about one-third of their workforce as women and people with disabilities. Securing social protection coverage, improving national statistics on gig and platform work and policy options, and discussing insurance and tax-financed schemes for gig platforms have been delineated as key priorities for theG20 summit2023, held at Delhi, India.[55]On 24 July 2023, theRajasthanlegislative assembly approved a groundbreaking bill that provides social security benefits to gig workers, making it the first of its kind in India. The Rajasthan Platform Based Gig Workers (Registration and Welfare) Bill, 2023 aims to enlist all gig workers and aggregators operating in the state, ensuring they receive essential social security protections. Additionally, the bill establishes a mechanism for gig workers to voice and address their grievances.[56] Gig work is spreading around the side job and delivery business.Kakaohas hired drivers to build a system for proxy driving, and the people of delivery are meeting the surging demand for delivery through a near-field delivery called "Vamin Connect". There is a gig work platform for professional freelancers, not just work. The platform, which connects those who want skilled professionals and those with skills, offers ten kinds of services, including design, marketing, computer programming, translation, document writing and lessons. However, "gig worker" is not yet very welcome in Korea. 
This is because many "gig workers" have conflicts with existing services and expose the lack of social and legal preparation.[57] Gig work in Southeast Asia has been rapidly growing since 2010; based on World Bank estimates in 2019, the gig work population has seen a consistent 30% annual growth rate.[58] Although there is already a large informal sector in many Southeast Asian countries, the growing number of gig workers in Southeast Asia means that there is growing demand for labor regulations to protect workers against unfair labor practices.[59]The pandemic has highlighted this concern and shone light on the vulnerability of gig workers in Southeast Asia. In Indonesia, ojek drivers in particular were left with neither a social safety net nor health protection.[60] In Australia, the gig economy include services such as ride sharing, food delivery, and various types of personal services for a fee. It is against the law for an employer to claim a worker as an independent contractor when they are in fact an employee. Where this happens, the business could be liable for penalties under the Fair Work Act, and have to backpay the entitlements.[61] When it comes to platform workers in Europe, there are significant differences across countries. The UK has the highest incidence of platform work. Other countries with high relative values are Germany, the Netherlands, Spain, Portugal and Italy. By contrast, Finland, Sweden, France, Hungary and Slovakia show very low values compared to the rest. The typical European platform worker is a young male. A typical platform worker is likely to have a family and kids, and regardless of age, platform workers tend to have fewer years of labour market experience than the average worker. The majority of platform workers provide more than one type of service and are active on two or more platforms. While flexibility and autonomy are frequently mentioned motivations for platform workers, so too is the lack of alternatives.[62] One controversial issue, though not unique to Europe, is the employment status of platform workers. In most cases the providers of labour services via platforms are formally independent contractors rather than employees, however when asked about their current employment situation, 75.7% of platform workers claimed to be an employee (68.1%) or self-employed (7.6%). The labour market status of platform workers is unclear even to workers themselves, and it also reflects uncertainty surrounding this issue in policy and legal debates around Europe. While platform work can lower the entry barriers to the labour market and facilitate work participation through better matching procedures and easing the working conditions of specific groups, this type of work often relies on a workforce of independent contractors whose conditions of employment, representation and social protection are unclear and often unfavourable.[62] In most EU states, the rules governing contributions and entitlements of social protection schemes are still largely based on full-time open-ended contracts between a worker and a single employer. As a result workers with non-standard arrangements often do not have the same income and social security protection compared to workers with standard employer-employee contracts. 
Modern social protection systems should be adapted to a context of more irregular careers and frequent transitions; linking entitlements to individuals rather than jobs may contribute to this, while fostering mobility and mitigating the social cost of labour market adjustments.[62][63] In some jurisdictions, legal rulings have classified full-time freelancers working for a single main employer in the gig economy as workers and awarded them regular worker rights and protection. An example is the October 2016 ruling against Uber in the case of Uber BV v Aslam, which supported the claim of two Uber drivers to be classified as workers and to receive the related worker rights and benefits.[64] In 2019, the UK Supreme Court provided guidance on the correct way to categorize "gig economy" workers. The London-based company Pimlico Plumbers lost an appeal against the argument that one of its plumbers was a "worker", i.e. not an employee, but someone who enjoys some "employment" rights such as holiday pay and sickness pay.[65] The Employment Appeals Tribunal ruled that Hermes' couriers are "workers" with certain statutory benefits including minimum wage, rest periods and holiday pay.[66] In 2018, Uber lost a court case which claimed drivers are workers and therefore entitled to workers' rights, including the national minimum wage and paid holiday.[67] Another UK company involved in "worker status" legal cases is CitySprint.[68] On 19 February 2021, the Supreme Court ruled in favour of 25 Uber drivers having "worker status"; the publication Personnel Today suggests that this case establishes "once and for all that in the UK the self-employed app-based driver model is no longer viable".[69] Many "gig economy workers" have not been able to receive COVID-19 pandemic support funding.[70] The precarity of work associated with the growth of digital applications for the delivery of goods and services, a phenomenon popularly known as uberization, has occurred in several countries around the world but has gained particular strength in Brazil, a country affected by deindustrialization and dependence on the service sector.[71] Despite promises from Brazilian government authorities to create new laws to regulate the activity, the absence of a specific regulation covering this new form of relationship between Brazilian companies and gig workers has increased legal uncertainty and been a source of social conflicts.[71] In 2020, there was a national strike bringing together delivery workers coordinated by a set of social movements, such as the Entregadores Antifascistas, a collective organisation of gig workers, which mobilized a set of actions and drew the attention of Brazilian society to the problem.[72][73] According to IPEA, a government-led research agency, it was estimated that in October 2021, gig workers numbered 1.4 million people in Brazil.[74] In 2015, nearly one in ten Americans (8%) had earned money using digital platforms to take on a job or task. Meanwhile, nearly one in five Americans (18%) had earned money by selling something online, while 1% had rented out their properties on a home-sharing site. Adding up everyone who had performed at least one of these three activities, some 24% of American adults earned money in the "platform economy" in 2015.[75] In 2022, the U.S. Department of Labor released a proposal to revise the Department's guidance on how to determine who is an employee or independent contractor under the Fair Labor Standards Act (FLSA).
The proposed rule would make it easier for gig workers/independent contractors to gain full employee status.[76]Companies would be required to provide rights and benefits to gig workers/independent contractors equivalent to standard employees. These benefits include minimum wage, health insurance, social security contributions, and unemployment insurance. The rule would replace a previous one enacted under the Trump administration that made it more difficult for a gig worker/independent contractor to be classified as an employee.[77] Eligible workers of all ages participate in the gig economy. The highest percentage of Americans who report having earned money at least once via gig work found through an online platform are those between the ages of 18 and 29, at 30%. Participation drops to 18% for individuals between 30 and 49 years of age, and lower than that for individuals 50 and older.[78]The consulting firm McKinsey attributes the difference in participation by age in part to the low barrier of entry into gig work as young adults are still developing marketable skill sets for other lines of work.[79] The American Enterprise Institute (AEI) finds that despite the general decline in gig workforce participation with age, approximately 20% of retired Americans participate in the gig economy, primarily by performing services such as tutoring, rental hosting, caring for pets, and ride-hail driving. AEI states that the increase in gig work participation following retirement is due in part to fear of financial preparedness for retirement, given the increase in life expectancy or the effect of economic decline on the value of retirement accounts. However, AEI also cites boredom as a significant reason for participation, with 96% of gig workers over 65 claiming they feel more fulfilled in life when they maintain a job they enjoy.[80] Gig work participation also differs between races in the United States. More non-white Americans report having earned money in the gig economy – 30% of Hispanic adults, 20% of Black adults, and 19% of Asian adults - than their white counterparts, at 12%.[81]The differences in participation by race can be explained in part by individuals’ migrant status, as globally, a disproportionate number of migrants report earning money through gig work.[82]58% of gig workers surveyed said the extra income earned as either “essential” or “important” as opposed to “nice to have."[78]On Uber’s Q2 2022 earnings call, 70% of new Uber drivers cited increased cost of living as the primary motivator to join the company.[83] In 2021, more non-white gig workers expressed concern about their exposure to COVID-19 on the job, at 50%, than their white counterparts, at 38%. A similar difference between races was found among standard workers with respect to their employer’s lack of COVID-19 precautions.[81] In 2019, theCalifornia legislaturepassed a law(AB 5)requiring all companies to re-classify their gig-workers from "independent contractors" to "employees". (In the US, there are two mutually exclusive employee classifications; the following ballot initiative created a third in California.) 
In response to AB 5, app-based ride-sharing and delivery companiesUber,Lyft,DoorDash,Instacart, andPostmatescreated a ballot initiative (2020 California Proposition 22), which won with 60% of the vote and exempted them from providing the full suite of mandatedemployee benefits(time-and-a-half for overtime, paid sick time, employer-provided health care, bargaining rights, and unemployment insurance - among others) while instead giving drivers new protections of:
https://en.wikipedia.org/wiki/Uberisation
Six Degrees of Kevin BaconorBacon's Lawis aparlor gamewhere players challenge each other to choose an actor whom they connect to another actor via a film in which both actors appeared: this is repeated to try to find the shortest path that leads to prolific American actorKevin Bacon. It rests on the assumption that anyone involved in theHollywoodfilm industry can be linked through their film roles to Bacon within six steps. The game's name is a reference to "six degrees of separation", a concept that posits that any two people on Earth are six or fewer acquaintance links apart. In 2007, Bacon started acharitable organizationcalledSixDegrees.org. In 2020, Bacon started apodcastcalledThe Last Degree of Kevin Bacon.[1] In a January 1994 interview withPremieremagazine, Kevin Bacon mentioned while discussing the filmThe River Wildthat "he had worked with everybody in Hollywood or someone who's worked with them."[2]Following this, a lengthynewsgroupthread which was headed "Kevin Bacon is the Center of the Universe" appeared.[3]In 1994, threeAlbright Collegestudents - Craig Fass, Brian Turtle and Mike Ginelli - invented the game that became known as "Six Degrees of Kevin Bacon" after seeing two movies on television that featured Bacon back to back,FootlooseandThe Air Up There. During the latter film they began to speculate on how many movies Bacon had been in and the number of people with whom he had worked.[4][5] They wrote a letter to talk show hostJon Stewart, telling him that "Kevin Bacon was the center of the entertainment universe" and explaining the game.[6]They appeared onThe Jon Stewart ShowandThe Howard Stern Showwith Bacon to explain the game. Bacon admitted that he initially disliked the game because he believed it was ridiculing him, but he eventually came to enjoy it. The three inventors released a book,Six Degrees of Kevin Bacon(ISBN9780452278448), with an introduction written by Bacon.[6]A board game based on the concept was released by Endless Games.[7] In 1995Cartoon Networkreferenced the concept in a commercial, havingVelma(fromScooby-Doo) as the central figure in the 'Cartoon Network Universe'.[8]The commercial cites connections as arbitrary as fake appearances, sharing of clothes, or physical resemblance. The concept was also presented in an episode of the TV showMad About Youdated November 19, 1996, in which a character expressed the opinion that every actor is only three degrees of separation from Kevin Bacon. Bacon spoofed the concept himself in a cameo he performed for the independent filmWe Married Margo.[9]Playing himself in a 2003 episode ofWill and Grace, Bacon connects himself toVal KilmerthroughTom Cruiseand jokes "Hey, that was a short one!".[10]The headline ofThe Onion, a satirical newspaper, on October 30, 2002, was "Kevin Bacon Linked ToAl-Qaeda".[11]Bacon provides the voice-over commentary for theNY Skyrideattraction at theEmpire State BuildinginNew York City. 
At several points throughout the commentary, Bacon alludes to his connections to Hollywood stars via other actors with whom he has worked.[citation needed] In Scream 2, written by Kevin Williamson, a sorority sister played by Portia De Rossi refers to Six Degrees of Kevin Bacon.[12] Bacon himself later starred in The Following, also created and written by Williamson, and broadcast on Fox between 2013 and 2015.[13] The annual 31 Days of Oscar event on the Turner Classic Movies television channel sometimes includes a "360 Degrees of Oscar" strand where each film shown shares an actor with the previous one.[14] It has been used as recently as 2020. In 2009, Bacon narrated a National Geographic Channel show The Human Family Tree[15] – a program which describes the efforts of that organization's Genographic Project to establish the genetic interconnectedness of all humans. Bacon appeared in a commercial for the Visa check card that referenced the game. In the commercial, Bacon wants to write a check to buy a book, but the clerk asks for his ID, which he does not have. He leaves and returns with a group of people, then says to the clerk, "Okay, I was in a movie with an extra, Eunice, whose hairdresser, Wayne, attended Sunday school with Father O'Neill, who plays racquetball with Dr. Sanjay, who recently removed the appendix of Kim, who dumped you sophomore year. So you see, we're practically brothers."[16] In 2011, James Franco made reference to Six Degrees of Kevin Bacon while hosting the 83rd Academy Awards.[clarification needed] EE began a UK television advertising campaign in November 2012, based on the Six Degrees concept, where Bacon illustrates his connections and draws attention to how the EE 4G network allows similar connectivity.[17] In "Weird Al" Yankovic's song "Lame Claim to Fame", one of the lines is, "I know a guy who knows a guy who knows a guy who knows a guy who knows a guy who knows Kevin Bacon."[18] American rapper MC Zappa also makes reference to the game in his 2018 song "Level Up (The Ill Cypher)".[19] The most highly connected nodes of the Internet have been referred to as "the Kevin Bacons of the Web", inasmuch as they enable most users to navigate to most sites in 19 clicks or fewer.[20][21] The Bacon number of an actor is the number of degrees of separation they have from Kevin Bacon, as defined by the game. This is an application of the Erdős number concept to the Hollywood movie industry. The higher the Bacon number, the greater the separation from Kevin Bacon the actor is.[22] The computation of a Bacon number for actor X is a "shortest path" algorithm, applied to the co-stardom network: Kevin Bacon himself has a Bacon number of 0; actors who have appeared in a film with Bacon have a Bacon number of 1; and any other actor's Bacon number is one greater than the lowest Bacon number of any actor with whom they have appeared in a film. Because some people have both a finite Bacon and a finite Erdős number because of acting and publications, there are a rare few who have a finite Erdős–Bacon number, which is defined as the sum of a person's independent Erdős and Bacon numbers. Inspired by the game, the British photographer Andy Gotts tried to reach Kevin Bacon through photographic links instead of film links. Gotts wrote to 300 actors asking to take their pictures and received permission only from Joss Ackland. Ackland then suggested that Gotts photograph Greta Scacchi, with whom he had appeared in the film White Mischief. Gotts proceeded from there, asking each actor to refer him to one or more friends or colleagues. Eventually, Christian Slater referred him to Bacon. Gotts' photograph of Bacon completed the project, eight years after it began. Gotts published the photos in a book, Degrees (ISBN 0-9546843-6-2), with text by Alan Bates, Pierce Brosnan, and Bacon.[24]
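The shortest-path computation described above amounts to a breadth-first search over the co-stardom graph. A minimal sketch, assuming a tiny hand-built graph in which every name except Kevin Bacon is a placeholder rather than a real actor:

```python
from collections import deque

# Toy co-stardom graph: actor -> actors they share at least one film with.
costars = {
    "Kevin Bacon": {"Actor A", "Actor B"},
    "Actor A": {"Kevin Bacon", "Actor C"},
    "Actor B": {"Kevin Bacon"},
    "Actor C": {"Actor A", "Actor D"},
    "Actor D": {"Actor C"},
}

def bacon_number(actor, graph, source="Kevin Bacon"):
    """Breadth-first search: length of the shortest chain of co-appearances."""
    if actor == source:
        return 0
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        current, dist = queue.popleft()
        for nxt in graph.get(current, ()):
            if nxt not in seen:
                if nxt == actor:
                    return dist + 1
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")  # no chain of co-appearances exists

print(bacon_number("Actor D", costars))  # 3
```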
https://en.wikipedia.org/wiki/Bacon_number
Dunbar's numberis a suggested cognitive limit to the number of people with whom one can maintain stable social relationships—relationships in which an individual knows who each person is and how each person relates to every other person.[1][2]This number was first proposed in the 1990s byRobin Dunbar, a Britishanthropologistwho found a correlation between primate brain size and average social group size.[3]By using the averagehumanbrain size and extrapolating from the results of primates, he proposed that humans can comfortably maintain 150 stable relationships.[4]There is some evidence that brain structure predicts the number of friends one has, though causality remains to be seen.[5] Dunbar explained the principle informally as "the number of people you would not feel embarrassed about joining uninvited for a drink if you happened to bump into them in a bar."[6]Dunbar theorised that "this limit is a direct function of relativeneocortexsize, and that this, in turn, limits group size ... the limit imposed by neocortical processing capacity is simply on the number of individuals with whom a stable inter-personal relationship can be maintained". On the periphery, the number also includes past colleagues, such as high school friends, with whom a person would want to reacquaint themselves if they met again.[7]Proponents assert that numbers larger than this generally require more restrictive rules, laws, and enforced norms to maintain a stable, cohesive group. It has been proposed to lie between 100 and 250, with a commonly used value of 150.[8][9] Primatologistshave noted that, owing to their highly social nature, primates must maintain personal contact with the other members of their social group, usually throughsocial grooming. Such social groups function as protective cliques within the physical groups in which the primates live. The number of social group members a primate can track appears to be limited by the volume of the neocortex. This suggests that there is a species-specific index of the social group size, computable from the species' mean neocortical volume.[citation needed] In 1992,[1]Dunbar used the correlation observed for non-human primates to predict a social group size for humans. Using a regression equation on data for 38 primategenera, Dunbar predicted a human "mean group size" of 148 (casually rounded to 150), a result he considered exploratory because of the large error measure (a 95% confidence interval of 100 to 230).[1] Dunbar then compared this prediction with observable group sizes for humans. Beginning with the assumption that the current mean size of the human neocortex had developed about 250,000 years ago, during thePleistocene, Dunbar searched the anthropological and ethnographical literature for census-like group size information for various hunter–gatherer societies, the closest existing approximations to how anthropology reconstructs the Pleistocene societies. 
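The prediction itself comes from fitting a log-log regression of mean group size on neocortex ratio across primate genera and then extrapolating to the human value. The sketch below shows only the method: the primate figures are invented for illustration (they are not Dunbar's dataset), and the human neocortex ratio of roughly 4.1 is used here as an assumption.

```python
import numpy as np

# Illustrative data only (NOT Dunbar's 38-genera dataset): neocortex ratio
# and mean group size for a handful of made-up primate genera.
neocortex_ratio = np.array([1.2, 1.7, 2.1, 2.6, 3.0, 3.4])
group_size      = np.array([8.0, 14.0, 22.0, 35.0, 50.0, 70.0])

# Fit log(group size) = a + b * log(neocortex ratio), then extrapolate.
b, a = np.polyfit(np.log(neocortex_ratio), np.log(group_size), 1)
human_ratio = 4.1  # commonly cited human value, treated here as an assumption
predicted = np.exp(a + b * np.log(human_ratio))
print(round(predicted))  # a toy extrapolation; Dunbar's fit on real data gave ~148
```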
Dunbar noted that the groups fell into three categories—small, medium and large, equivalent tobands, cultural lineage groups and tribes—with respective size ranges of 30–50, 100–200 and 500–2500 members each.[citation needed] Dunbar's surveys of village and tribe sizes also appeared to approximate this predicted value, including 150 as the estimated size of aNeolithicfarming village; 150 as the splitting point ofHutteritesettlements; 200 as the upper bound on the number of academics in a discipline's sub-specialisation; 150 as the basic unit size of professional armies inRoman antiquityand in modern times since the 16th century, as well as notions of appropriatecompanysize.[citation needed] Dunbar has argued that 150 would be the mean group size only for communities with a very high incentive to remain together. For a group of this size to remain cohesive, Dunbar speculated that as much as 42% of the group's time would have to be devoted to social grooming. Correspondingly, only groups under intense survival pressure,[citation needed]such assubsistencevillages, nomadic tribes, and historical military groupings, have, on average, achieved the 150-member mark. Moreover, Dunbar noted that such groups are almost always physically close: "[...] we might expect the upper limit on group size to depend on the degree of social dispersal. In dispersed societies, individuals will meet less often and will thus be less familiar with each other, so group sizes should be smaller in consequence." Thus, the 150-member group would occur only because of absolute necessity—because of intense environmental and economic pressures. Dunbar, inGrooming, Gossip, and the Evolution of Language, proposes furthermore that language may have arisen as a "cheap" means of social grooming, allowing early humans to maintain social cohesion efficiently. Without language, Dunbar speculates, humans would have to expend nearly half their time on social grooming, which would have made productive, cooperative effort nearly impossible. Language may have allowed societies to remain cohesive, while reducing the need for physical and social intimacy.[6][10]This result is confirmed by the mathematical formulation of thesocial brainhypothesis, that showed that it is unlikely that increased brain size would have led to large groups without the kind of complex communication that only language allows.[11] Dunbar's number has become of interest inanthropology,evolutionary psychology,[12]statistics, and business management. For example, developers ofsocial softwareare interested in it, as they need to know the size of social networks their software needs to take into account; and in the modern military, operational psychologists seek such data to support or refute policies related to maintaining or improving unit cohesion and morale. 
A recent study has suggested that Dunbar's number is applicable to online social networks[13]and communication networks (mobile phone).[14]Participants of the European career-oriented online social networkXINGwho have about 157 contacts reported the highest level of job offer success, which also supports Dunbar's number of about 150.[15]Flight Centre, an Australian travel agency, applied Dunbar's number when reorganising the firm into “families” (stores), “villages” (clusters of stores) and “tribes” (aggregates of villages up to a maximum of 150 people).[16] There are discussions in articles and books, of the possible application of using Dunbar's number for analyzingdistributed, dynamicterrorist networks,cybercrimenetworks, or networks preaching criminal ideology.[17][18] AnthropologistH. Russell Bernard,Peter Killworthand associates have done a variety of field studies in the United States that came up with an estimated mean number of ties, 290, which is roughly double Dunbar's estimate. The Bernard–Killworth median of 231 is lower, because of an upward skew in the distribution, but still appreciably larger than Dunbar's estimate. The Bernard–Killworth estimate of the maximum likelihood of the size of a person's social network is based on a number of field studies using different methods in various populations. It is not an average of study averages but a repeated finding.[19][20][21]Nevertheless, the Bernard–Killworth number has not been popularized as widely as Dunbar's. A replication of Dunbar's analysis on updated complementary datasets using different comparative phylogenetic methods yielded wildly different numbers.Bayesianand generalized least-squares phylogenetic methods generated approximations of average group sizes between 69–109 and 16–42, respectively. However, enormous 95% confidence intervals (4–520 and 2–336, respectively) implied that specifying any one number is of limited value. The researchers drew the conclusion that a cognitive limit on human group size cannot be derived in this manner. 
The researchers also criticised the theory behind Dunbar's number because other primates' brains do not handle information exactly as human brains do, because primate sociality is primarily explained by other factors than the brain, such as what they eat and who their predators are, and because humans have a large variation in the size of their social networks.[22]Dunbar commented on the choice of data for this study, however, now stating that his number should not be calculated from data on primates oranthropoids, as in his original study, but onapes.[23]This would mean that his cognitive limit would be based on 16 pair-livinggibbonspecies, three solitaryorangutans, and only four group livinggreat apes(chimpanzees,bonobosand twogorillaspecies), which would not be sufficient for statistical analyses.[citation needed][dubious–discuss] Philip Liebermanargues that since band societies of approximately 30–50 people are bounded by nutritional limitations to what group sizes can be fed without at least rudimentary agriculture, big human brains consuming more nutrients than ape brains, group sizes of approximately 150 cannot have been selected for inPaleolithichumans.[24][dubious–discuss]Brains much smaller than human or even mammalian brains are also known to be able to support social relationships, includingsocial insectswith hierarchies where each individual "knows" its place (such as thepaper waspwith its societies of approximately 80 individuals[25]) and computer-simulated virtual autonomous agents with simple reaction programming emulating what is referred to in primatology as "ape politics".[26] Comparisons of primate species show that what appears to be a link between group size and brain size, and also what species do not fit such a correlation, is explainable by diet. Many primates that eat specialized diets that rely on scarce food have evolved small brains to conserve nutrients and are limited to living in small groups or even alone, and they lower average brain size for solitary or small group primates. Small-brained species of primate that are living in large groups are successfully predicted by diet theory to be the species that eat food that is abundant but not very nutritious. Along with the existence of complex deception in small-brained primates in large groups with the opportunity (both abundant food eaters in their natural environments and originally solitary species that adopted social lifestyles under artificial food abundances), this is cited as evidence against the model of social groups selecting for large brains and/or intelligence.[27]
https://en.wikipedia.org/wiki/Dunbar%27s_number
TheErdős number(Hungarian:[ˈɛrdøːʃ]) describes the "collaborative distance" between mathematicianPaul Erdősand another person, as measured by authorship ofmathematical papers. The same principle has been applied in other fields where a particular individual has collaborated with a large and broad number of peers. Paul Erdős (1913–1996) was an influential Hungarian mathematician who, in the latter part of his life, spent a great deal of time writing papers with a large number of colleagues — more than 500 — working on solutions to outstanding mathematical problems.[1]He published more papers during his lifetime (at least 1,525[2]) than any other mathematician in history.[1](Leonhard Eulerpublished more total pages of mathematics but fewer separate papers: about 800.)[3]Erdős spent most of his career with no permanent home or job. He traveled with everything he owned in two suitcases, and would visit mathematicians with whom he wanted to collaborate, often unexpectedly, and expect to stay with them.[4][5][6] The idea of the Erdős number was originally created by the mathematician's friends as a tribute to his enormous output. Later it gained prominence as a tool to study how mathematicians cooperate to find answers to unsolved problems. Several projects are devoted to studying connectivity among researchers, using the Erdős number as a proxy.[7]For example, Erdőscollaboration graphscan tell us how authors cluster, how the number of co-authors per paper evolves over time, or how new theories propagate.[8] Several studies have shown that leading mathematicians tend to have particularly low Erdős numbers,i.e., high proximity).[9]The median Erdős number ofFields Medalistsis 3. Only 7,097 (about 5% of mathematicians with a collaboration path) have an Erdős number of 2 or lower.[10]As time passes, the lowest Erdős number that can still be achieved will necessarily increase, as mathematicians with low Erdős numbers die and become unavailable for collaboration. Still, historical figures can have low Erdős numbers. For example, renowned Indian mathematicianSrinivasa Ramanujanhas an Erdős number of only 3 (throughG. H. Hardy, Erdős number 2), even though Paul Erdős was only 7 years old when Ramanujan died.[11] To be assigned an Erdős number, someone must be a coauthor of a research paper with another person who has a finite Erdős number. Paul Erdős himself is assigned an Erdős number of zero. A certain author's Erdős number is one greater than the lowest Erdős number of any of their collaborators; for example, an author who has coauthored a publication with Erdős would have an Erdős number of 1. TheAmerican Mathematical Societyprovides a free online tool to determine the collaboration distance between two mathematical authors listed in theMathematical Reviewscatalogue.[11] Erdős wrote around 1,500 mathematical articles in his lifetime, mostly co-written. He had 509 direct collaborators;[7]these are the people with Erdős number 1. The people who have collaborated with them (but not with Erdős himself) have an Erdős number of 2 (12,600 people as of 7 August 2020[12]), those who have collaborated with people who have an Erdős number of 2 (but not with Erdős or anyone with an Erdős number of 1) have an Erdős number of 3, and so forth. A person with no such coauthorship chain connecting to Erdős has an Erdős number ofinfinity(or anundefinedone). Since the death of Paul Erdős, the lowest Erdős number that a new researcher can obtain is 2. 
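The assignment rule above ("one greater than the lowest Erdős number of any collaborator") is equivalent to a single-source breadth-first search over the collaboration graph. A minimal sketch with placeholder author names:

```python
from collections import deque

# Toy co-authorship graph: author -> set of co-authors (placeholder names).
coauthors = {
    "Erdos": {"A", "B"},
    "A": {"Erdos", "C"},
    "B": {"Erdos", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
    "E": set(),          # no coauthorship chain to Erdos -> undefined/infinite
}

def erdos_numbers(graph, source="Erdos"):
    """Breadth-first search assigning each reachable author a number that is
    one more than the smallest number among their co-authors."""
    numbers = {source: 0}
    queue = deque([source])
    while queue:
        author = queue.popleft()
        for co in graph.get(author, ()):
            if co not in numbers:
                numbers[co] = numbers[author] + 1
                queue.append(co)
    return numbers  # authors missing from the result have no finite number

print(erdos_numbers(coauthors))  # e.g. {'Erdos': 0, 'A': 1, 'B': 1, 'C': 2, 'D': 2}
```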
There is room for ambiguity over what constitutes a link between two authors. The American Mathematical Society collaboration distance calculator uses data fromMathematical Reviews, which includes most mathematics journals but covers other subjects only in a limited way, and which also includes some non-research publications. The Erdős Number Project web site says: ... One drawback of the MR system is that it considers all jointly authored works as providing legitimate links, even articles such as obituaries, which are not really joint research. ...[13] It also says: ... Our criterion for inclusion of an edge between vertices u and v is some research collaboration between them resulting in a published work. Any number of additional co-authors is permitted,... but excludes non-research publications such as elementary textbooks, joint editorships, obituaries, and the like. The "Erdős number of the second kind" restricts assignment of Erdős numbers to papers with only two collaborators.[14] The Erdős number was most likely first defined in print by Casper Goffman, ananalystwhose own Erdős number is 2.[12]Goffman published his observations about Erdős' prolific collaboration in a 1969 article entitled "And what is your Erdős number?"[15]See also some comments in an obituary by Michael Golomb.[16] The median Erdős number among Fields Medalists is as low as 3.[10]Fields Medalists with Erdős number 2 includeAtle Selberg,Kunihiko Kodaira,Klaus Roth,Alan Baker,Enrico Bombieri,David Mumford,Charles Fefferman,William Thurston,Shing-Tung Yau,Jean Bourgain,Richard Borcherds,Manjul Bhargava,Jean-Pierre SerreandTerence Tao. There are no Fields Medalists with Erdős number 1;[17]however,Endre Szemerédiis anAbel PrizeLaureate with Erdős number 1.[9] While Erdős collaborated with hundreds of co-authors, there were some individuals with whom he co-authored dozens of papers. This is a list of the ten persons who most frequently co-authored with Erdős and their number of papers co-authored with Erdős,i.e., their number of collaborations.[18] As of 2022[update], all Fields Medalists have a finite Erdős number, with values that range between 2 and 6, and a median of 3. In contrast, the median Erdős number across all mathematicians (with a finite Erdős number) is 5, with an extreme value of 13.[19]The table below summarizes the Erdős number statistics forNobel prizelaureates in Physics, Chemistry, Medicine, and Economics.[20]The first column counts the number of laureates. The second column counts the number of winners with a finite Erdős number. The third column is the percentage of winners with a finite Erdős number. The remaining columns report the minimum, maximum, average, and median Erdős numbers among those laureates. Among the Nobel Prize laureates in Physics,Albert EinsteinandSheldon Glashowhave an Erdős number of 2. Nobel Laureates with an Erdős number of 3 includeEnrico Fermi,Otto Stern,Wolfgang Pauli,Max Born,Willis E. Lamb,Eugene Wigner,Richard P. Feynman,Hans A. Bethe,Murray Gell-Mann,Abdus Salam,Steven Weinberg,Norman F. Ramsey,Frank Wilczek,David Wineland, andGiorgio Parisi. Fields Medal-winning physicistEd Wittenhas an Erdős number of 3.[10] Computational biologistLior Pachterhas an Erdős number of 2.[21]Evolutionary biologistRichard Lenskihas an Erdős number of 3, having co-authored a publication with Lior Pachter and with mathematicianBernd Sturmfels, each of whom has an Erdős number of 2.[22] There are at least two winners of theNobel Prize in Economicswith an Erdős number of 2:Harry M. 
Markowitz(1990) andLeonid Kantorovich(1975). Other financial mathematicians with Erdős number of 2 includeDavid Donoho,Marc Yor,Henry McKean,Daniel Stroock, andJoseph Keller. Nobel Prize laureates in Economics with an Erdős number of 3 includeKenneth J. Arrow(1972),Milton Friedman(1976),Herbert A. Simon(1978),Gerard Debreu(1983),John Forbes Nash, Jr.(1994),James Mirrlees(1996),Daniel McFadden(2000),Daniel Kahneman(2002),Robert J. Aumann(2005),Leonid Hurwicz(2007),Roger Myerson(2007),Alvin E. Roth(2012), andLloyd S. Shapley(2012) andJean Tirole(2014).[23] Some investment firms have been founded by mathematicians with low Erdős numbers, among themJames B. AxofAxcom Technologies, andJames H. SimonsofRenaissance Technologies, both with an Erdős number of 3.[24][25] Since the more formal versions of philosophy share reasoning with the basics of mathematics, these fields overlap considerably, and Erdős numbers are available for many philosophers.[26]PhilosophersJohn P. BurgessandBrian Skyrmshave an Erdős number of 2.[12]Jon BarwiseandJoel David Hamkins, both with Erdős number 2, have also contributed extensively to philosophy, but are primarily described as mathematicians. JudgeRichard Posner, having coauthored withAlvin E. Roth, has an Erdős number of at most 4.Roberto Mangabeira Unger, a politician, philosopher, and legal theorist who teaches at Harvard Law School, has an Erdős number of at most 4, having coauthored withLee Smolin. Angela Merkel,Chancellor of Germanyfrom 2005 to 2021, has an Erdős number of at most 5.[17] Some fields of engineering, in particularcommunication theoryandcryptography, make direct use of the discrete mathematics championed by Erdős. It is therefore not surprising that practitioners in these fields have low Erdős numbers. For example,Robert McEliece, a professor ofelectrical engineeringatCaltech, had an Erdős number of 1, having collaborated with Erdős himself.[27]CryptographersRon Rivest,Adi Shamir, andLeonard Adleman, inventors of theRSAcryptosystem, all have Erdős number 2.[21] The Romanian mathematician and computational linguistSolomon Marcushad an Erdős number of 1 for a paper inActa Mathematica Hungaricathat he co-authored with Erdős in 1957.[28] Erdős numbers have been a part of thefolkloreof mathematicians throughout the world for many years. Among all working mathematicians at the turn of the millennium who have a finite Erdős number, the numbers range up to 15, the median is 5, and the mean is 4.65;[7]almost everyone with a finite Erdős number has a number less than 8. Due to the very high frequency of interdisciplinary collaboration in science today, very large numbers of non-mathematicians in many other fields of science also have finite Erdős numbers.[29]For example, political scientistSteven Bramshas an Erdős number of 2. In biomedical research, it is common for statisticians to be among the authors of publications, and many statisticians can be linked to Erdős viaPersi DiaconisorPaul Deheuvels, who have Erdős numbers of 1, orJohn Tukey, who has an Erdős number of 2. Similarly, the prominent geneticistEric Landerand the mathematicianDaniel Kleitmanhave collaborated on papers,[30][31]and since Kleitman has an Erdős number of 1,[32]a large fraction of the genetics and genomics community can be linked via Lander and his numerous collaborators. 
Similarly, collaboration withGustavus Simmonsopened the door forErdős numberswithin thecryptographicresearch community, and manylinguistshave finite Erdős numbers, many due to chains of collaboration with such notable scholars asNoam Chomsky(Erdős number 4),[33]William Labov(3),[34]Mark Liberman(3),[35]Geoffrey Pullum(3),[36]orIvan Sag(4).[37]There are also connections withartsfields.[38] According to Alex Lopez-Ortiz, all theFieldsandNevanlinna prizewinners during the three cycles in 1986 to 1994 have Erdős numbers of at most 9. Earlier mathematicians published fewer papers than modern ones, and more rarely published jointly written papers. The earliest person known to have a finite Erdős number is eitherAntoine Lavoisier(born 1743, Erdős number 13),Richard Dedekind(born 1831, Erdős number 7), orFerdinand Georg Frobenius(born 1849, Erdős number 3), depending on the standard of publication eligibility.[39] Martin Tompa[40]proposed adirected graphversion of the Erdős number problem, by orienting edges of the collaboration graph from the alphabetically earlier author to the alphabetically later author and defining themonotone Erdős numberof an author to be the length of alongest pathfrom Erdős to the author in this directed graph. He finds a path of this type of length 12. Also,Michael Barrsuggests "rational Erdős numbers", generalizing the idea that a person who has writtenpjoint papers with Erdős should be assigned Erdős number 1/p.[41]From the collaboration multigraph of the second kind (although he also has a way to deal with the case of the first kind)—with one edge between two mathematicians foreachjoint paper they have produced—form an electrical network with a one-ohm resistor on each edge. The total resistance between two nodes tells how "close" these two nodes are. It has been argued that "for an individual researcher, a measure such as Erdős number captures the structural properties of [the] network whereas theh-indexcaptures the citation impact of the publications," and that "One can be easily convinced that ranking in coauthorship networks should take into account both measures to generate a realistic and acceptable ranking."[42] In 2004 William Tozier, a mathematician with an Erdős number of 4 auctioned off a co-authorship oneBay, hence providing the buyer with an Erdős number of 5. The winning bid of $1031 was posted by a Spanish mathematician, who refused to pay and only placed the bid to stop what he considered a mockery.[43][44] A number of variations on the concept have been proposed to apply to other fields, notably theBacon number(as in the gameSix Degrees of Kevin Bacon), connecting actors to the actorKevin Baconby a chain of joint appearances in films. It was created in 1994, 25 years after Goffman's article on the Erdős number. A small number of people are connected to both Erdős and Bacon and thus have anErdős–Bacon number, which combines the two numbers by taking their sum. One example is the actress-mathematicianDanica McKellar, best known for playing Winnie Cooper on the TV seriesThe Wonder Years. Her Erdős number is 4,[45]and her Bacon number is 2.[46] Further extension is possible. For example, the "Erdős–Bacon–Sabbath number" is the sum of the Erdős–Bacon number and the collaborative distance to the bandBlack Sabbathin terms of singing in public. 
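Barr's resistance-network variant mentioned above can be computed directly from the graph Laplacian. A minimal numerical sketch, using an invented three-author multigraph in which each joint paper contributes a one-ohm resistor (so multiple joint papers add as parallel conductances):

```python
import numpy as np

# Invented co-authorship multigraph: (author_i, author_j, joint papers).
papers = [("Erdos", "X", 3), ("Erdos", "Y", 1), ("X", "Y", 2)]
authors = sorted({a for e in papers for a in e[:2]})
idx = {a: i for i, a in enumerate(authors)}

# Weighted Laplacian L = D - W, where the weight of an edge is its total
# conductance (number of parallel one-ohm resistors, i.e. joint papers).
n = len(authors)
L = np.zeros((n, n))
for a, b, k in papers:
    i, j = idx[a], idx[b]
    L[i, j] -= k
    L[j, i] -= k
    L[i, i] += k
    L[j, j] += k

def resistance(a, b):
    """Effective resistance between two authors via the Laplacian pseudoinverse."""
    Lp = np.linalg.pinv(L)
    e = np.zeros(n)
    e[idx[a]], e[idx[b]] = 1.0, -1.0
    return float(e @ Lp @ e)

print(resistance("Erdos", "X"))  # ~0.27 ohm: three joint papers plus an indirect path via Y
```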
PhysicistStephen Hawkinghad an Erdős–Bacon–Sabbath number of 8,[47]and actressNatalie Portmanhas one of 11 (her Erdős number is 5).[48] Inchess, theMorphy numberdescribes a player's connection toPaul Morphy, widely considered the greatest chess player of his time and an unofficialWorld Chess Champion.[49] Ingo, theShusakunumber describes a player's connection to Honinbo Shusaku, the strongest player of his time.[50][51] Invideo games, theRyunumber describes a video game character's connection to theStreet Fightercharacter Ryu.[52][53]
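All of the collaborative-distance measures described above reduce to computations on a collaboration (or co-appearance) graph: the ordinary Erdős number is a shortest-path length, while Barr's "rational" variant treats each joint paper as a one-ohm resistor and uses effective resistance as the measure of closeness. The following is a minimal sketch of both computations in Python; the author names and paper list are invented toy data rather than real collaborations, and networkx and numpy are assumed to be available.

```python
# Minimal sketch: ordinary Erdős numbers (shortest paths) and Barr's
# resistance-based closeness on a toy co-authorship multigraph.
# The paper list below is invented purely for illustration.
import networkx as nx
import numpy as np

papers = [
    ("Erdos", "A"), ("Erdos", "A"),   # author A wrote two papers with Erdős
    ("Erdos", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
]

G = nx.MultiGraph()
G.add_edges_from(papers)              # one edge per joint paper

# Ordinary Erdős number: length of the shortest co-authorship path.
print("Erdős numbers:", nx.single_source_shortest_path_length(G, "Erdos"))

# Barr's variant: each joint paper is a 1-ohm resistor, so parallel papers
# add as conductances.  Effective resistance comes from the pseudoinverse
# of the weighted graph Laplacian.
H = nx.Graph()
for u, v in papers:
    w = H[u][v]["weight"] + 1 if H.has_edge(u, v) else 1
    H.add_edge(u, v, weight=w)        # weight = number of joint papers

nodes = list(H.nodes())
L = nx.laplacian_matrix(H, nodelist=nodes, weight="weight").toarray()
Lp = np.linalg.pinv(L)

def resistance(a, b):
    i, j = nodes.index(a), nodes.index(b)
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

print("resistance closeness to Erdős:",
      {n: round(resistance("Erdos", n), 3) for n in nodes if n != "Erdos"})
```

In the toy multigraph, repeated co-authorships leave the shortest-path distance unchanged but lower the effective resistance, which is exactly the behaviour Barr's proposal is after.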
https://en.wikipedia.org/wiki/Erd%C5%91s_number
A person'sErdős–Bacon numberis the sum of theirErdős number—which measures the "collaborative distance" in authoring academic papers between that person and Hungarian mathematicianPaul Erdős—and theirBacon number—which represents the number of links, through roles in films, by which the person is separated from American actorKevin Bacon.[1][2]The lower the number, the closer a person is to Erdős and Bacon, which reflects asmall world phenomenonin academia and entertainment.[3] To have a defined Erdős–Bacon number, it is necessary to have both appeared in a film and co-authored an academic paper, although this in and of itself is not sufficient as one's co-authors must have a known chain leading toPaul Erdős, and one's film must have actors eventually leading toKevin Bacon. PhysicistNicholas Metropolishas an Erdős number of 2,[4]and also a Bacon number of 2 via theWoody AllenfilmHusbands and Wives,[5]giving him an Erdős–Bacon number of 4. Metropolis andRichard Feynmanboth worked on theManhattan ProjectatLos Alamos Laboratory. Via Metropolis, Feynman has an Erdős number of 3 and, from having appeared in the filmAnti-ClockalongsideTony Tang, Feynman also has a Bacon number of 3. Richard Feynman thus has an Erdős–Bacon number of 6.[4] Theoretical physicistStephen Hawkinghas an Erdős–Bacon number of 6: his Bacon number of 2 (via his appearance alongsideJohn CleeseinMonty Python Live (Mostly), who acted alongside Kevin Bacon inThe Big Picture) is lower than his Erdős number of 4.[6] LinguistNoam Chomskyhas an Erdős number of 4,[7]he also co-starred withDanny Gloverin the 2005 documentaryThe Peace!, giving him a Bacon number of 2[8]and combined Erdős–Bacon number of 6. ScientistCarl Saganhas an Erdős–Bacon number of 7, from a Bacon number of 3 and an Erdős number of 4.[9] Physicist and philosopher of scienceJames Owen Weatherallhas an Erdős-Bacon number of 7. His Erdős number is 4 and his Bacon number is 3, having appeared inThe Edge of All We KnowwithStephen Hawking. Canadian actorAlbert M. Chanhas Erdős–Bacon number of 4. He co-authored a peer-reviewed paper onorthogonal frequency-division multiplexing, giving him an Erdős number of 3.[10][11][12]Chan appeared alongside Kevin Bacon inPatriots Day, giving him a Bacon number of 1.[13] American actressDanica McKellar, who playedWinnie CooperinThe Wonder Years, has an Erdős–Bacon number of 6. While an undergraduate at theUniversity of California, Los Angeles, McKellar coauthored amathematicspaper[14]with Lincoln Chayes, who via his wifeJennifer Tour Chayes[15]has an Erdős number of 3, giving McKellar one of 4. Having worked withMargaret Easley, McKellar has a Bacon number of 2.[2] British actorColin Firthhas an Erdős–Bacon number of 6. Firth is credited as co-author of a neuroscience paper, "Political Orientations Are Correlated with Brain Structure in Young Adults",[16]after he suggested onBBC Radio 4that such a study could be done.[17]Another author of that paper,Geraint Rees, has an Erdős number of 4,[18]which gives Firth an Erdős number of 5. Firth's Bacon number of 1 is due to his appearance inWhere the Truth Lies.[19][20] Israeli-American actressNatalie Portmanhas an Erdős–Bacon number of 7.[21]She collaborated (using her birth name, Natalie Hershlag) with Abigail A. 
Baird,[22]who has a collaboration path[23][24][25]leading toJoseph Gillis, who has an Erdős number of 1, giving Portman an Erdős number of 5.[26]Portman appeared inA Powerful Noise Live(2009) withSarah Michelle Gellar, who appeared inThe Air I Breathe(2007) with Bacon, giving Portman a Bacon number of 2.[27] American actressKristen Stewarthas an Erdős–Bacon number of 7; she is credited as a co-author on anartificial intelligencepaper that was written after a technique was used for her short filmCome Swim, giving her an Erdős number of 5,[28][29]and she co-starred withMichael SheeninTwilight, who co-starred with Bacon inFrost/Nixon, giving her a Bacon number of 2.[30] Sergey Brinhas an Erdős number of 3 through papers withJeffrey UllmanandRonald Graham,[31]and he has two cameos in the 2013 comedyThe Internship,[32]leading to a Bacon number of 2 viaRose Byrne[33]and consequently an Erdős–Bacon number of 5. Bill Gateshas an Erdős number of 4[34]and in 1987 he participated in a shortmockumentarytitledCitizen SteveaboutSteven Spielberg, where he co-starred withWhoopi Goldberg, giving him a Bacon number of 2[35]and consequently an Erdős–Bacon number of 6.
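As the definition above suggests, an Erdős–Bacon number is simply the sum of two shortest-path distances taken in two different graphs: a co-authorship graph rooted at Erdős and a film co-appearance graph rooted at Bacon. A minimal sketch follows, assuming invented toy edge lists rather than real collaborations or filmographies.

```python
# Minimal sketch: an Erdős–Bacon number as the sum of two graph distances.
# The edge lists below are invented toy data, not real collaborations.
from collections import deque

def distances_from(source, edges):
    """Breadth-first search returning hop counts from `source`."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    dist, queue = {source: 0}, deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

paper_edges = [("Erdos", "X"), ("X", "Scientist")]   # co-authorships
film_edges = [("Bacon", "Y"), ("Y", "Scientist")]    # co-appearances

erdos = distances_from("Erdos", paper_edges)
bacon = distances_from("Bacon", film_edges)

# Defined only if the person is reachable in both graphs.
if "Scientist" in erdos and "Scientist" in bacon:
    print("Erdős–Bacon number:", erdos["Scientist"] + bacon["Scientist"])
```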
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Bacon_number
Apersonal networkis a set of human contacts known to an individual, with whom that individual would expect tointeractat intervals to support a given set of activities. In other words, a personal network is a group of caring, dedicated people who are committed to maintain a relationship with a person in order to support a given set of activities. Having a strong personal network requires being connected to a network of resources for mutual development and growth. Personal networks can be understood by: Personal networks are intended to be mutually beneficial, extending the concept of teamwork beyond the immediatepeer group. The term is usually encountered in theworkplace, though it could apply equally to other pursuits outside work. Personal networkingis the practice of developing and maintaining a personal network, which is usually undertaken over an extended period. The concept is related tobusiness networkingand is often encouraged by largeorganizations, in the hope of improvingproductivity, and so a number of tools exist to support the maintenance of networks. Many of these tools are IT-based, and useWeb 2.0technologies. In the second half of the twentieth century, U.S. advocates for workplace equity popularized the term and concept of networking as part of a largersocial capitallexicon—which also includes terms such asglass ceiling,role model,mentoring, andgatekeeper—serving to identify and address the problems barring non-dominant groups from professional success. Mainstream business literature subsequently adopted the terms and concepts, promoting them as pathways to success for all career climbers. In 1970 these terms were not in the general American vocabulary; by the mid-1990s they had become part of everyday speech.[2] Before the mid-twentieth century, what we call networking today was framed in the language of family and friendship. These close personal relationships provided a range of opportunities to preferred subsets of people, such as access to job opportunities, information, credit, and partnerships. Family networks andnepotismhave proven particularly strong throughout history. However, other common bonds—from ethnicity and religion to school ties and club memberships—can connect subsets of people as well. Of course people whom insiders consider undesirable have been barred from such networks, with important consequences. Those who tap into influential networks can be nurtured toward success. Those who are shut out from networks can lose hope of success. Numerous business heroes of the past—such asBenjamin Franklin,Andrew Carnegie,Henry Ford, andJohn D. Rockefeller—exploited networks to great effect.[2] The business networks that seemed natural and transparent to these white men were a closed book to women and minorities for much of American history. Drawing on work from the social sciences, these outsider groups had to identify and then harness the mechanisms behind networking's power. A prominent early example of this process was the formation of corporate caucuses by black men atXeroxstarting in 1969. Groups of black salesmen met regularly to share information about Xerox's culture and strategies for navigating it most effectively. Through confrontation and collaboration with a relatively accommodating upper management, the caucuses helped open opportunities for high-performing black employees.[2] The popular and business press began using the terms "network" and "networking" in the mid-1970s in the context of businesswomen consciously pursuing this strategy. 
Authors encouraged female workers to recognize and exploit the informal workplace systems that provided advancement. They urged women to identify mentors, use social contacts, and build peer and authority networks. The push for networking drew on ideas and relationships from the era'sfeministmovement, and dictionaries of the time explicitly linked business networking to women's efforts to succeed in the workplace.[2] Since the closing decades of the twentieth century, networking has become a pervasive term and concept in American society. People now invoke networking in relation to everything from business to child rearing to science. While ambitious careerists seek networks as an indispensable talisman, companies purposefully encourage networking among their employees to boost performance and gain competitive advantage. At the same time, Americans are forgetting the workplace activism that first illuminated the power of networking. Unfortunately, this loss of historical context can fuel a backlash against outsider groups who still seek to synthesize networks so they can access the same opportunities enjoyed by insiders.[2] Broadly speaking, all networks share a set of common characteristics. Namkee Park, Seungyoon Lee and Jang Hyun Kim examined the relations between personal network characteristics and Facebook use. According to their study, personal networks are investigated through several structural characteristics, which can be categorized into three major dimensions according to the level of analysis. Each of these characteristics represents unique aspects of individuals' network relationships.[11] Personal networks can be used for two main reasons: social and professional. In 2012,LinkedInalong with TNS conducted a survey of 6,000 social network users to understand the difference between personal social networks and personal professional networks, comparing the "Mindset Divide" between users of these two kinds of networks. Personal network management (PNM) is a crucial aspect ofpersonal information managementand can be understood as the practice of managing the links and connections for social and professional benefits. There are several ways to do this. Although it is easy to build a network, the real challenge is maintaining and leveraging the connections. Information fragmentation makes this even more challenging. Information fragmentation refers to the difficulty encountered in ensuring co-operation and keeping track of different personal information assets (e.g. Facebook, Twitter etc.).[14] According to Dan Schawbel, there is a lot of value in a contact management system. It "allows you to keep organized and aware of which contacts you haven't spoken to in a while, and who works at companies that you either want to collaborate with, or work for".[15]In many ways, a contact manager can incorporate new, innovative services to not only help users take a smarter approach to meeting new people but also transmit readily available information from social media profiles directly into that contact profile.
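The contact-management idea quoted above, staying aware of which contacts you have not spoken to in a while, can be illustrated with a few lines of code. The sketch below is hypothetical: the contact names, dates, and the 90-day threshold are invented and do not describe any particular product.

```python
# Minimal sketch of the contact-management idea quoted above: flag contacts
# you have not spoken to in a while.  All data here is invented.
from datetime import date, timedelta

contacts = {
    "Alice": {"company": "Acme Co", "last_contact": date(2024, 1, 10)},
    "Bob":   {"company": "Widget Inc", "last_contact": date(2024, 6, 2)},
}

def stale_contacts(contacts, today, max_gap=timedelta(days=90)):
    """Return contacts whose last interaction is older than `max_gap`."""
    return [name for name, info in contacts.items()
            if today - info["last_contact"] > max_gap]

print(stale_contacts(contacts, today=date(2024, 7, 1)))   # -> ['Alice']
```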
https://en.wikipedia.org/wiki/Personal_network
Richard Gilliamis a short story author and the editor of such theme anthologies asConfederacy of the Dead(1993),Phobias(1994) and theGrailsseries (1992–94). He has contributed fantasy short stories to numerous books and magazines, and his non-fiction includesJoltin' Joe DiMaggio(1999). Gilliam began as a sportswriter and worked as a publicist to CoachBear Bryantat theUniversity of Alabama. A veteran anthologist, he worked from his home inGreen Bay, Wisconsin. He devised the Movie Links online game in 1990, and it was played extensively onGEniefour years before the quite similarSix Degrees of Kevin Bacongame was promoted in 1994. Gilliam's game is much more difficult in that a player is required to find the smallest number of movies linking actors as diverse as, say,Gloria SwansonandChris Farleyrather than links to the same specific actor (a feat which can be memorized). His story "Caroline and Caleb" was a Best Novella nominee in the 1993 Bram Stoker Awards, given annually by theHorror Writers Association. His book "Grails: Quests, Visitations and Other Occurrences" was nominated for theWorld Fantasy Award—Anthology awardin 1993.
https://en.wikipedia.org/wiki/Richard_Gilliam
Social media measurement, also calledsocial media controlling, is themanagementpractice of evaluating successful social media communications of brands, companies, or other organizations.[1] Key performance indicatorsmay be measured by extracting information from social media channels,[2]such asblogs,wikis, micro-blogs such asTwitter,social networkingsites, video/photo sharing websites, andforums. It is also used by companies to gauge current trends in the industry.[3]The process first gathers data from different websites and then performs analysis based on different metrics like time spent on the page,click through rate, content share, comments, andtext analyticsto identify positive or negative emotions about the brand.[4][5]Some other social media metrics include share of voice, owned mentions, and earned mentions. The social media measurement process starts with defining a goal that needs to be achieved and defining the expected outcome of the process. The expected outcome varies with the goal and is usually measured by a variety of metrics. This is followed by defining possible social strategies to be used to achieve the goal. The next step is designing the strategies to be used and setting up configuration tools that ease the process of collecting the data. In the next step, strategies and tools are deployed in real time. This step involves conductingQuality Assurancetests of the methods deployed to collect the data. In the final step, the data collected from the system is analyzed and, if the need arises, the methodologies are refined at run time so that the result obtained is more closely aligned with the goal defined in the first step.[6] Acquiring data from social media requires exploring user participation and populations in order to retrieve and collect many kinds of data (e.g. comments, downloads, etc.).[7]There are several prevalent techniques to acquire data, such as Networktraffic analysis, Ad-hoc application, andCrawling.[8] Network Traffic Analysis- Network traffic analysis is the process of capturing network traffic and observing it closely to determine what is happening in the network. It is primarily done to improve the performance, security and other general management of the network.[9]However, because of concerns about potential invasions of privacy on the Internet, network traffic analysis is often restricted by governments. Furthermore, high-speed links are poorly suited to traffic analysis because the packet-sniffing mechanism can be overloaded at such data rates.[10] Ad-hoc Application- An ad-hoc application is a kind of application that provides services and games tosocial networkusers by building on the APIs offered by social network companies (such as the Facebook Developer Platform). The infrastructure of an ad-hoc application allows the user to interact with the interface layer instead of the application servers. The API provides a path for the application to access information after the user logs in.[8]Moreover, the size of the data set collected varies with the popularity of the social media platform, i.e. platforms with a large number of users yield more data than platforms with a smaller user base.[8]Scraping is a process in which the APIs collect online data from social media. The data collected from scraping is in raw format.
However, having access to these types of data is difficult because of its commercial value.[11] Crawling- Crawling is a process in which a web crawler indexes all the words on a web page, stores them, then follows all the hyperlinks on that page, and indexes and stores those pages in turn.[12]It is the most popular technique for data acquisition and is also well known for its ease of implementation in prevalent object-oriented programming languages (Java, Python, etc.). Most importantly, social network companies (YouTube, Flickr, Facebook, Instagram, etc.) are friendly to crawling techniques because they provide public APIs.[13] Monitoring social media allows researchers to find insights into abrand's overall visibility on social media, to measure the impact of campaigns, to identify opportunities for engagement, to assess competitor activity and share of voice, and to detect impending crises. It can also provide valuable information about emerging trends and whatconsumersand clients think about specific topics, brands or products.[14]This is the work of a cross-section of groups that includes market researchers,PRstaff,marketingteams, social-engagement and community staff, agencies, andsalesteams. Several different providers have developed tools to facilitate the monitoring of a variety of social media channels, from blogging to internet video to internet forums. This allowscompaniesto track what consumers say about their brands and actions. Companies can then react to these conversations and interact with consumers through social media platforms.[2] Apart from commercial applications, social media monitoring has become a pervasive technique applied by public organizations and governments. Monitoring is a tradition within thepublic sector, and social-media monitoring provides a real-time approach to detecting and responding to social developments. Governments have come to realize the need for strategies to cope with surprises from the rapid expansion of public issues. Sobkowicz[15]introduced a framework with three blocks: social-media opinion tracking, simulation and forecasting. Bekkers introduced the application of social media monitoring in the Netherlands.[16][need quotation to verify]Public organizations in the Netherlands (such as the Tax Agency and theEducation Ministry) have started to use social media monitoring to obtain better insights into the sentiments of target groups. On the one hand, social media monitoring techniques enable the public sector to provide timely and efficient answers to the public; on the other hand, public organizations also have to deal with concerns about ethical issues such astransparencyandprivacy. Social media management software (SMMS) is an application program or software that facilitates an organization's ability to successfully engage in social media across differentcommunication channels. SMMS is used to monitor inbound and outbound conversations, support customer interaction, audit or document social marketing initiatives and evaluate the usefulness of a social media presence.[17] It can be difficult to measure all social media conversations. Due to privacy settings and other issues, not all social media conversations can be found and reported by monitoring tools. However, whilst social media monitoring cannot give absolute figures, it can be extremely useful for identifying trends and for benchmarking, in addition to the uses mentioned above. These findings can, in turn, influence and shape future business decisions.
In order to access social media data (posts, Tweets, and meta-data) and to analyze and monitor social media, many companies use software technologies built for business. Mostsocial media networksallow users to add a location to their posts. The location can be classified as either 'at-the-location' or 'about-the-location'. "'At-the-location' services can be defined as services where location-based content is created at the geographic location. 'About-the-location' services can be defined as services which are referring to a particular location but the content is not necessarily created in this particular physical place."[18]The added information available from geotagged posts means that they can be displayed on a map. This means that a location can be used as the start of a social media search rather than a keyword or hashtag. This has major implications for disaster relief, event monitoring, and safety and security professionals, since a large portion of their job is related to tracking and monitoring specific locations. Various monitoring platforms use different technologies for social media monitoring and measurement. These technology providers may connect to theAPIsprovided by social platforms, which are created so that third-party developers can develop their own applications and services that access data. Facebook's Graph API is one such API that social media monitoring products connect to in order to pull data.[19]Some social media monitoring and analytics companies use calls to data providers each time an end-user develops a query. Others will also store and index social posts to offer historical data to their customers. Additional monitoring companies usecrawlersand spidering technology to find keyword references. (See also:Semantic analysis,Natural language processing.) Basic implementation involves curating data from social media on a large scale and analyzing the results to make sense out of it. Examples of these platforms includeHootsuite,Sprout Social, andGoogle Analytics.[20]
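As a rough illustration of the crawling and keyword-monitoring techniques mentioned above, the sketch below follows hyperlinks breadth-first and counts mentions of a brand keyword on each page it visits. It uses only the Python standard library; the seed URL and keyword are placeholders, and a real deployment would additionally respect robots.txt, rate limits, and each platform's API terms.

```python
# Minimal sketch of keyword-focused crawling: visit pages breadth-first,
# count brand mentions, and follow hyperlinks.  Seed URL and keyword are
# placeholders, not a real monitoring configuration.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkAndTextParser(HTMLParser):
    """Collects hyperlinks and visible text from a single HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
    def handle_data(self, data):
        self.text.append(data)

def crawl(seed_urls, keyword, max_pages=20):
    """Breadth-first crawl that counts keyword mentions per visited page."""
    queue, seen, mentions = deque(seed_urls), set(seed_urls), {}
    while queue and len(mentions) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except OSError:
            continue  # skip unreachable pages
        parser = LinkAndTextParser()
        parser.feed(html)
        mentions[url] = " ".join(parser.text).lower().count(keyword.lower())
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return mentions

if __name__ == "__main__":
    print(crawl(["https://example.com/"], keyword="brand"))
```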
https://en.wikipedia.org/wiki/Social_media_measurement
Thisbibliography of sociologyis a list of works, organized by subdiscipline, on the subject ofsociology. Some of the works are selected from general anthologies of sociology,[1][2][3][4][5]while other works are selected because they are notable enough to be mentioned in a general history of sociology or one of its subdisciplines.[i] Sociology studiessocietyusing various methods of empirical investigation to understand humansocial activity, from themicrolevel of individualagencyand interaction to themacrolevel of systems andsocial structure.[6][7][8] Economic sociology attempts to explaineconomicphenomena. While overlapping with the general study ofeconomicsat times, economic sociology chiefly concentrates on the roles of social relations and institutions.[25] Industrial sociologyis the sociology oftechnologicalchange,globalization, labor markets, work organization,managerialpractices andemployment relations.[35][36] Environmental sociologystudies the relationship between society and environment, particularly the social factors that cause environmental problems, the societal impacts of those problems, and efforts to solve the problems. Demographyis thestatistical studyofhumanpopulation. It encompasses the study of the size, structure and distribution of these populations, and spatial and/or temporal changes in them in response tobirth,migration,aginganddeath. Urban sociologyrefers to the study of social life and human interaction inmetropolitan areas. Sociology of knowledge refers to the study of the relationship between human thought and the social context within which it arises, as well as of the effects prevailing ideas have on societies. Traditionally, political sociology has been concerned with the ways in which social trends, dynamics, and structures of domination affect formal political processes, as well as exploring how various social forces work together to change political policies.[67]Now, it is also concerned with the formation of identity through social interaction, the politics of knowledge, and other aspects of social relations. The sociology of race and ethnic relations refers to the study ofsocial,political, andeconomicrelations betweenracesandethnicitiesat all levels of society, encompassing subjects such asracismandresidential segregation. The sociology of religion concerns the role ofreligioninsociety, including practices, historical backgrounds, developments, and universal themes.[75]There is particular emphasis on the recurring role of religion in all societies and throughout recorded history. Sociological theories are complextheoreticalandmethodologicalframeworks used to analyze and explain objects of social study, which ultimately facilitate the organization of sociological knowledge.[78] Conflict theories, originally influenced byMarxist thought, are perspectives that see societies as defined through conflicts that are produced by inequality.[79]: 34–6Conflict theory emphasizessocial conflict, as well aseconomic inequality,social inequality,oppression, andcrime. Rational choice theorymodels social behavior as the interaction of utility-maximizing individuals.
Social Exchange Theorymodels social interaction as a series of exchanges between actors who give one another rewards and penalties, which impact and guide future behavior.George Homans'version of exchange theory specifically argues thatbehavioriststimulus-response principles can explain the emergence of complex social structures. Making use ofnetwork theory,social network analysisis a structural approach to sociology that views norms and behaviors as embedded in chains of social relations. Sociocyberneticsis the application ofsystems theoryandcyberneticsto sociology. Structural functionalismis a broad perspective that interprets society as astructurewith interrelated parts. Symbolic interactionismargues that human behavior is guided by the meanings people construct together in social interaction.
https://en.wikipedia.org/wiki/Bibliography_of_sociology
Business networkingis the practice of building relationships with individuals and businesses for professional purposes.[1]It involves the strategic exchange of information and resources to create connections that can be mutually beneficial.[2]Business networking can be conducted in person, online, or through a combination of both. Through repeated interactions, companies create deeper connections. This encourages knowledge exchange, mutual adaptation, and a commitment of resources, which can be both financial and social, to one another.[1] Business networking helps individuals achieve effective networking which can result in career advancement, building mutually beneficial relationships and knowledge sharing. There are two main approaches of networking: in-person events like conferences and online platforms likeLinkedIn. Setting clear goals beforehand and following up with connections after the event are two methods used to maximize the value of the interactions. Business networking offers a variety of advantages for professionals at all stages of their careers. Some goals individuals can achieve through effective networking include career advancement opportunities, gaining access to valuable knowledge and expertise, and building mutually beneficial relationships.[3] Networks can be a powerful tool for identifying new job openings, particularly positions that are not advertised.[4]Connections can provide valuable recommendations and introductions to hiring managers. Networking allows individuals to showcase their skills and experience to potential employers. By building relationships with potentialclientsandpartnersat networking events, one can significantly increaseawareness of their brandor business. A strong network can act as areferralsource, bringing in new business opportunities through trusted recommendations.[5]These connections can become a source of valuable knowledge and expertise. Through conversations and potential mentorship opportunities with experienced professionals, valuable insights can be gained into industry trends and best practices.[1]Business networking fosters the development of mutually beneficial relationships. By connecting with like-minded professionals, organizations can build long-term, trusted bonds that offer support, advice, and collaboration opportunities.[4] Many businesses utilize networking as a key element in their marketing plans. It helps to develop trust between those involved and plays a big part in raising the profile of a company.Suppliersand businesses can be seen as networked businesses, and will tend to source the business and their suppliers through their existing relationships, as well as with the companies they work closely with.Penny Powerstates that networked businesses tend to be open, random, and supportive, whereas those relying onhierarchical, traditional managed approaches are closed, selective, and controlling.[6] Historically, there have been multiple forms of business networks, such as those among religious or ethnic groups, among small businesses, or between large companies and theirsubcontractors. 
Business networks have existed between firms as well as between individuals.Guilds, associations of merchants and craftspeople, were the main form of business network in North America and Western Europe prior to theIndustrial Revolution.[7][8]Beginning in the 1700s,chambers of commercebegan to be founded.[9]In the early twentieth century,service clubssuch as theRotary Club,Lions Club, andKiwanis Clubwere founded as social organizations for business networking.[10] In the second half of the twentieth century, networking was promoted to help business people to build theirsocial capital. Business networking by members of marginalized groups (e.g.,women,African Americans, etc.) has been encouraged to identify and address the challenges barring them from professional success.[11][12]Mainstream business literature subsequently adopted the terms and concepts, and promoted them as pathways to success for all career climbers.[citation needed] Before online business networking, in-person networking was the only option for business people. This was achieved through a number of techniques such astrade showmarketing andloyalty programs. Though these techniques have been proven to still be an effective source of making connections and growing a business, many companies now focus more ononline marketingdue to the ability to track every detail of acampaignand justify the expenditure involved in setting up a campaign.[13][better source needed] Business networking can be broadly categorized into two main approaches: in-person networking and online networking.[14] In-person networking allows organizations and entrepreneurs to connect with professionals face-to-face. Industry conferences and trade shows are a resource to meet potential clients, partners, and colleagues while also learning about current trends. Online networking provides another resource to connect with professionals virtually. Platforms like LinkedIn are designed specifically for professional networking, allowing individuals to build their network, share their expertise, and participate in industry discussions.[15]Online forums and communities focused on specific industries can be a valuable resource for connecting with like-minded individuals and asking questions about the industry. Industry-specific discussion boards offer another avenue for online networking, where individuals can showcase their knowledge, learn from others, and potentially find new collaborators or clients.[15] Successful business networking relies on a well-defined strategy implemented before, during, and after networking events. By planning a proactive approach, professionals can maximize the value gained from these interactions and connect to a network that promotes career growth and business development.[2]It is beneficial for organizations to establish clear goals for their interactions and to tailor the approach according to each connection one is trying to form. Researching event attendees beforehand, if possible, is beneficial.[16]Identifying individuals whose work aligns with interests or professional goals allows for unique conversation starters. 
This demonstrates genuine interest in connecting and establishing rapport which can in turn increase an organization's or individual's reputation.[1]Once the initial connection is made, following up after a networking event with a professional email that shows gratitude for the interaction and knowledge gained can continue the conversation and can begin the foundation for a strong business network connection.[4] One of the most significant advantages is the potential for increased career opportunities. A strong network can provide opportunities such as unadvertised job openings through connections who might be aware of potential fits within their companies.[3]These connections can also provide valuable recommendations and introductions to hiring managers or other individuals who are interested in forming business connections. Additionally, networking events and online platforms offer opportunities to show one's skills or an organization's capabilities and knowledge to a wider audience of potential employers, increasing overall visibility in the job market and business community.[4] Beyond career advancement, business networking builds brand awareness. By building strong relationships with potential clients and partners at networking events and through online interactions, organizations can significantly increase awareness of their brand or business.[1]These connections can then develop into paying customers or business connections that can help expand an organization. A strong network acts as a referral source, bringing in new business opportunities through trusted recommendations from one's network members. The larger the network an organization or individual has, the more access to knowledge and experience they have.[16]This type of access proves to be valuable when attempting to expand the business into unknown territory or beginning one's business career. Learning from those who have been in a certain industry for a long time can improve chances of recognition from other brands and businesses who would want to form connections and builds reputation for the organization or individual.[1] Networking can be an effective way for job-seekers to gain a competitive edge over others in thejob market. The skilled networker cultivates personal relationships with prospective employers and selection panelists in the hope that these personal affections will influence future hiring decisions. This form of networking has raised ethical concerns. The objection is that it constitutes an attempt to corrupt formal selection processes. The networker is accused of seeking non-meritocratic advantage over other candidates, advantage that is based on personal fondness rather than on any objective appraisal of which candidate is most qualified for the position.[17][18] While social media offers a powerful platform for business networking, it still has its downsides. Social media interactions can sometimes feel superficial or inauthentic. Building genuine relationships takes time and effort, which can be difficult in the fast-paced online world.[14]It can be difficult to maintain a balance of showcasing one's expertise and oversharing personal or confidential information and this type of oversharing can cause damage to an organization's reputation.[15]Social media can be overwhelming, since it is filled with a constant stream of content. This can make it difficult for an organization or an individual to stand out and put their name out online. 
Social media can also be cruel to new content or those who post controversial topics, as these can also damage an organization's reputation.
https://en.wikipedia.org/wiki/Business_networking
Acollective networkis a set of social groups linked, directly or indirectly, by some common bond. According to this social-science approach to studying social relationships, social phenomena are investigated through the properties of relations among groups, which also influence the internal relations among the individuals of each group within the set. A collective network may be defined as a set of social groups linked, directly or indirectly, by some common bond, shared group status, similar or shared group functions, or geographic or cultural connection; the intergroup links also reinforce the intragroup links, hence the group identity. In informal types of associations, such as the mobilisation of social movements, a collective network may be a set of groups whose individuals, though not necessarily knowing each other or sharing anything outside the organising criteria of the network, are psychologically bound to the network itself and are willing to maintain it indefinitely, tying the internal links among the persons in a group while forming new links with the persons in other groups of the collective network. Notably, the term collective network was first officially used in the public domain not in science but at a global meeting called by theZapatista Army of National Liberation(EZLN): on July 27, 1996, over 3,000 activists from more than 40 countries converged on Zapatista territory in rebellion in Chiapas, Mexico, to attend the “First Intercontinental Encuentro for Humanity and Against Neoliberalism”. At the end of the Encuentro (Meeting), the General Command of the EZLN issued the “Second Declaration of La Realidad (The Reality) for Humanity and Against Neoliberalism”, calling for the creation of a “collective network of all our particular struggles and resistances, an intercontinental network of resistance against neoliberalism, an intercontinental network of resistance for humanity.[1]” In science, the term collective network is related to the study ofcomplex systems. As all complex systems have many interconnected components, thescience of networksandnetwork theoryare important aspects of the study of complex systems, and hence of the collective network too. The idea of the collective network arises from that of thesocial networkand its analysis, that is,social network analysis(SNA). Cynthia F. Kurtz’s group (Snowden 2005) developed methods of carrying out SNA in which people were asked questions about groups (SNA for identities) and about abstract representations of behavior (SNA for abstractions). Whilst SNA is primarily concerned with connections among individuals, according to Cynthia F. Kurtz,collective network analysisinvolves the creation of ‘identity group constructs’ as abstract expressions of group-to-group interactions.[2] Since 2007, the campus-wide interdisciplinary research group CoCo at Binghamton University, in theU.S. stateofNew York, has studied the collective dynamics of various types of interacting agents as complex systems.
CoCo’s goals are (i) to advance our understanding of the collective dynamics of physical, biological, social, and engineered complex systems through scientific research; (ii) to promote interdisciplinary collaboration among faculty and students in different schools and departments; and (iii) to translate this understanding into products and processes which will improve the well-being of people at regional, state, national and global scales.[3] In 2011 Emerius, the Euro-Mediterranean Research Institute Upon Social Sciences, based in Rome, started the development of an experimental collective network namedYoospherawith the purpose of studying intra- and intergroup dynamics in order to reinforce thesense of communityin territorial groups along four main components: (i) the rational and affective perception of the affinities with other individuals both within a person’s main group and in other groups; (ii) the consciousness and acceptance of the dependence on the intra- and intergroup bonds; (iii) the voluntary commitment to maintain that dependence as long as it is valuable and useful for the person, his main group and the perceived macrogroup (theYoosphera); and (iv) the will not to be detrimental to other individuals, groups or macrogroups.[4] Emerius’s research on collective networks incorporates thesmall-world networkgraph, with nodes representing both individuals and their groups, and it also embeds the idea thatMalcolm Gladwellexpressed in his bookThe Tipping Point: How Little Things Can Make a Big Difference: whilst Gladwell considers that “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts,[5]” according to Emerius the success of any social epidemic is also strongly dependent on the involvement of special groups with a strong degree of intra- and intergroup cohesion. The social sciences also aim at developing new models to manage groups and their internal and external relations according to the limits and abilities of human nature, so as to increase the efficiency of the groups. This is the reason behind theYoosphera, the experimental collective network which is being continuously monitored and developed through a specific piece of software, also namedYoosphera, which reinforces the sense of community in territorial groups as mentioned above. It also nurtures the creation of small groups organised in concentric rings, since small groups are easier to manage according to the theories of ProfessorRobin Dunbar, in particularDunbar’s number. The first observations of theYoospheraexperiment suggest that it tends to improve the quality of the relationships between each individual and their environment through the organisation of small cooperative groups which back their own members and the closest groups in both material and psychological aspects, thus also creating emotional and affective links. To the function of socialisation, typical of social networks, collective networks add those of organisation and cohesion within and among groups, balancing the need to maximise the community’s potential with that of respecting the differing conditions of their members as regards culture, profession, family commitments, wealth and time, as well as taking into account the fluctuation of those conditions and accommodating them with the utmost flexibility.
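The small-world model referred to above combines high local clustering with short average path lengths. As a quick illustration (not drawn from the Emerius work itself), the standard Watts–Strogatz construction reproduces both properties; the parameter values in the sketch below are arbitrary.

```python
# Minimal sketch of a small-world network (Watts–Strogatz model): high
# clustering combined with short average path lengths.  Parameters are
# arbitrary and chosen only for illustration.
import networkx as nx

n, k, p = 200, 6, 0.1   # nodes, neighbours per node, rewiring probability
G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)

print("average clustering coefficient:", round(nx.average_clustering(G), 3))
print("average shortest path length:", round(nx.average_shortest_path_length(G), 2))
```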
Related to the concept of the collective network is that ofcollective network intelligence, orcolnetigence, which is close tocollective intelligencebut differs from it in thatcolnetigenceemerges from both intra- and intergroupcompetitive cooperation.
https://en.wikipedia.org/wiki/Collective_network
Network societyis the set of social, political, economic, and cultural changes brought about by the widespread use of networked digital information and communication technologies. The intellectual origins of the idea can be traced back to the work of early social theorists such asGeorg Simmelwho analyzed the effect of modernization and industrial capitalism on complex patterns of affiliation, organization, production and experience. The termnetwork societywas coined byJan van Dijkin his 1991 Dutch bookDe Netwerkmaatschappij(The Network Society) and byManuel CastellsinThe Rise of the Network Society(1996), the first part of his trilogyThe Information Age. In 1978James Martinused the related term 'The Wired Society' indicating a society that is connected by mass- and telecommunication networks.[1] Van Dijk defines the network society as a society in which a combination of social and media networks shapes its prime mode of organization and most important structures at all levels (individual, organizational and societal). He compares this type of society to a mass society that is shaped by groups, organizations and communities ('masses') organized in physical co-presence.[2] Manuel Castellsdefines thenetwork societyas a new social structure emerging from advances in information and communication technologies. It represents a shift from industrial production to aknowledge economy, where information flows across global networks. Key concepts include: The network society alters the experience of space and time, leading to flexible work arrangements, precarious employment, and global interconnectivity. These changes reinforce new class divisions and reshape relationships between individuals and institutions. Wellman studied the network society as a sociologist at theUniversity of Toronto. His first formal work was in 1973, "The Network City" with a more comprehensive theoretical statement in 1988. Since his 1979 "The Community Question", Wellman has argued that societies at any scale are best seen as networks (and "networks of networks") rather than as bounded groups in hierarchical structures.[3][4][5]More recently, Wellman has contributed to the theory of social network analysis with an emphasis on individualized networks, also known as "networked individualism".[6]In his studies, Wellman focuses on three main points of the network society: community, work and organizations. He states that with recent technological advances an individual's community can be socially and spatially diversified. Organizations can also benefit from the expansion of networks in that having ties with members of different organizations can help with specific issues.[citation needed] In 1978, Roxanne Hiltz and Murray Turoff'sThe Network Nationexplicitly built on Wellman's community analysis, taking the book's title from Craven and Wellman's "The Network City". The book argued that computer supported communication could transform society. It was remarkably prescient, as it was written well before the advent of theInternet. Turoff and Hiltz were the progenitors of an early computer supported communication system, calledEIES.[7] According to Castells, networks constitute the new social morphology of our societies.[8]When interviewed byHarry Kreislerfrom the University of California Berkeley, Castells said "...the definition, if you wish, in concrete terms of a network society is a society where the key social structures and activities are organized around electronically processed information networks. 
So it's not just about networks orsocial networks, because social networks have been very old forms of social organization. It's about social networks which process and manage information and are using micro-electronic based technologies."[9]The diffusion of a networking logic substantially modifies the operation and outcomes in processes of production, experience, power, and culture.[10]For Castells, networks have become the basic units of modern society. Van Dijk does not go that far; for him these units still are individuals, groups, organizations and communities, though they may increasingly be linked by networks.[11] The network society goes further than theinformation societythat is often proclaimed. Castells argues that it is not purely the technology that defines modern societies, but also cultural, economic and political factors that make up the network society. Influences such as religion, cultural upbringing, political organizations, and social status all shape the network society. Societies are shaped by these factors in many ways. These influences can either raise or hinder these societies. For van Dijk, information forms the substance of contemporary society, while networks shape the organizational forms and infrastructures of this society.[12] Thespace of flowsplays a central role in Castells' vision of the network society. It is a network of communications, defined by hubs where these networks crisscross. Élites incitiesare not attached to a particular locality but to the space of flows.[8] Castells puts great importance on the networks and argues that the real power is to be found within the networks rather than confined inglobal cities. This contrasts with other theorists who rank cities hierarchically.[citation needed] Van Dijk has defined the idea "network society" as a form of society increasingly organizing its relationships in media networks gradually replacing or complementing the social networks of face-to-face communication. Personal and social-network communication is supported by digital technology. This means that social and media networks are shaping the prime mode of organization and most important structures of modern society.[2] Van Dijk'sThe Network Societydescribes what the network society is and what it might be like in the future. The first conclusion of this book is that modern society is in a process of becoming a network society. This means that on the internet interpersonal, organizational, and mass communication come together. People become linked to one another and have access to information and communication with one another constantly. Using the internet brings the “whole world” into homes and work places. Also, when media like the internet becomes even more advanced it will gradually appear as “normal media” in the first decade of the 21st century as it becomes used by larger sections of the population and by vested interests in the economy, politics and culture. It asserts that paper means of communication will become out of date, with newspapers and letters becoming ancient forms for spreading information.[2] New mediaare “media which are both integrated and interactive and also use digital code at the turn of the 20th and 21st centuries.”[13] In western societies, the individual linked by networks is becoming the basic unit of the network society. In eastern societies, this might still be the group (family, community, work team) linked by networks. 
In the contemporary process of individualisation, the basic unit of the network society has become the individual who is linked by networks. This is caused by simultaneous scale extension (nationalisation and internationalisation) and scale reduction (smaller living and working environments)[14]Other kinds of communities arise. Daily living and working environments are getting smaller and more heterogenous, while the range of the division of labour, interpersonal communications and mass media extends. So, the scale of the network society is both extended and reduced as compared to the mass society. The scope of the network society is both global and local, sometimes indicated as “glocal”. The organization of its components (individuals, groups, organizations) is no longer tied to particular times and places. Aided by information and communication technology, these coordinates of existence can be transcended to create virtual times and places and to simultaneously act, perceive and think in global and local terms.[15] There is an explosion of horizontal networks of communication, quite independent from media business and governments, that allows the emergence of what can be called self-directed mass communication. It is mass communication because it is diffused throughout the Internet, so it potentially reaches the whole planet. It is self-directed because it is often initiated by individuals or groups by themselves bypassing the media system. The explosion of blogs, vlogs, podding, streaming and other forms of interactive, computer to computer communication set up a new system of global, horizontal communication Networks that, for the first time in history, allow people to communicate with each other without going through the channels set up by the institutions of society for socialized communication.[citation needed] What results from this evolution is that the culture of the network society is largely shaped by the messages exchanged in the composite electronic hypertext made by the technologically linked networks of different communication modes. In the network society, virtuality is the foundation of reality through the new forms of socialized communication. Society shapes technology according to the needs, values and interests of people who use the technology. Furthermore, information and communication technologies are particularly sensitive to the effects of social uses on technology itself. The history of the internet provides ample evidence that the users, particularly the first thousands of users, were, to a large extent, the producers of the technology. However, technology is a necessary, albeit not sufficient condition for the emergence of a new form of social organization based on networking, that is on the diffusion of networking in all realms of activity on the basis of digital communication networks.[16] The concepts described by Jan van Dijk, Barry Wellman, Hiltz and Turoff, and Manuel Castells are embodied in much digital technology. Social networking sites such asFacebookandTwitter, instant messaging and email are prime examples of the Network Society at work. These web services allow people all over the world to communicate through digital means without face-to-face contact. 
This demonstrates how these changing ideas of society affect the ways people communicate over time.[citation needed]The network society has no confinements and has reached a global scale.[8]The network society develops within modern society, allowing a great deal of information to be exchanged and helping to improve information and communication technologies.[17]Having this luxury of easier communication also has consequences. It allows globalization to take place, as more and more people join the online society and learn about different techniques via the world wide web. This benefits users who have access to the internet, who can stay connected at all times with any topic they want. Individuals without internet access may be affected because they are not directly connected to this society, although people can still find public spaces with internet-connected computers, which allows them to keep up with the ever-changing system. The network society is constantly changing "cultural production in a hyper-connected world."[18]Social structures revolve around the relationships of "production/consumption, power, and experience."[8]Together these create a culture that sustains itself by constantly taking in new information.[19]The earlier societal system was a mass media system that served as a more general source of information; the current system is more individualized and customized for users, making the internet more personal. This makes the messages sent into society more inclusive of their audience, ultimately allowing more sources to be included and improving communication.[13]The network society is seen as a global system that supports globalization. This benefits the people who have access to the internet and to this media, while the downside is that people without access do not share in this sense of the network society. These networks, now digitized, are more efficient at connecting people. Everything we know can now be put into a computer and processed. Users put messages online for others to read and learn from, allowing people to gain knowledge faster and more efficiently. The networked society allows people to connect to each other more quickly and to engage more actively. These networks move away from having a central theme, but still keep a focus on what they are there to accomplish.[20]
https://en.wikipedia.org/wiki/Network_society
Thesemiotics of social networkingdiscusses the images, symbols and signs used in systems that allow users to communicate and share experiences with each other. Examples of social networking systems includeFacebook,TwitterandInstagram. Semioticsis a discipline that studies images, symbols, signs and other similarly related objects in an effort to understand their use and meaning. Semioticstructuralismseeks the meaning of these objects within a social context.Post-structuralisttheories take tools from structuralist semiotics in combination with social interaction, creating social semiotics.[1]Social semioticsis "a branch of the field of semiotics which investigates human signifying practices in specific social and cultural circumstances and which tries to explain meaning-making as a social practice." "Social semiotics also examines semiotic practices, specific to a culture and community, for the making of various kinds of texts and meanings in various situational contexts and contexts of culturally meaningful activity".[2]Social semiotics is concerned with studying human interactions.[3] Social networking is the communication among people within a virtual social space.[4]This medium of communication allows insight into the significance of social semiotics. "Millions of people now interact through blogs, collaborate through wikis, play multiplayer games, publish podcasts and video, build relationships through social network sites and evaluate all the above forms of communication through feedback and ranking mechanisms".[5]In social semiotics, "unlike speech, writing necessitates some sort of technology in the form of person device interaction".[6]Social semiotics functions through the triad of communication orPeircean semioticsin the form of sign, object, interpretant[7](Chart 1) and "Human, Machine, Tag (Information)"[8](Chart 2). In Peircean semiotics (Chart 1), "A sign…[in the form of representamen] is something which stands to somebody for something in some respect or capacity. It addresses somebody, that is, creates in the mind of that person an equivalent sign, or perhaps a more developed sign. That sign which it creates I call the interpretant of the first sign. The sign stands for an object, not in all respects, but in reference to a sort of idea which I have something called the ground of the representamen".[1] This example of the triangle of Human, Machine, Tag is shown when looking at tagging photographs on Facebook (Chart 3).[9]The Human takes the photo on a camera and puts the digital file (information) on the Machine; the Machine is then navigated to Facebook, where the file is uploaded. The Human has the Machine Tag the photo with information (e.g., names, places, data) for other Humans to see. This process can then be continued (see Chart 2). "Collaborative tagging has been quickly gaining ground because of its ability to recruit the activity of web users into effectively organizing and sharing large amounts of information".[10]
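The Human–Machine–Tag triad and the photo-tagging walkthrough above can also be expressed as a simple data model. The sketch below is purely illustrative; the class names and fields are hypothetical and are not drawn from any real platform's API.

```python
# Illustrative data model for the Human–Machine–Tag triad described above.
# Names and fields are hypothetical, not any real platform's API.
from dataclasses import dataclass, field

@dataclass
class Tag:
    tagger: str                      # the Human who asks the Machine to tag
    information: dict                # e.g. names, places, other data

@dataclass
class Photo:
    file_name: str
    uploaded_by: str                 # the Human who put the file on the Machine
    tags: list = field(default_factory=list)

photo = Photo("beach.jpg", uploaded_by="Alice")
photo.tags.append(Tag(tagger="Alice", information={"name": "Bob", "place": "Lisbon"}))

# Other Humans can now read the tag information the Machine stores.
for tag in photo.tags:
    print(f"{tag.tagger} tagged {photo.file_name}: {tag.information}")
```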
https://en.wikipedia.org/wiki/Semiotics_of_social_networking
A scientific collaboration network is a social network whose nodes are scientists and whose links are co-authorships, co-authorship being one of the most well-documented forms of scientific collaboration.[1] It is an undirected, scale-free network in which the degree distribution follows a power law with an exponential cutoff: most authors are sparsely connected, while a few are intensively connected.[2] The network is assortative: hubs tend to link to other hubs, and low-degree nodes tend to link to low-degree nodes. This assortativity is not structural, meaning that it is not a consequence of the degree distribution but is generated by some process governing the network's evolution.[3] A detailed reconstruction of an actual collaboration network was made by Mark Newman. He analyzed collaboration networks built from several large databases in biology and medicine, physics, and computer science over a five-year window (1995-1999). The results showed that these networks form small worlds, in which randomly chosen pairs of scientists are typically separated by only a short path of intermediate acquaintances. They also suggest that the networks are highly clustered, i.e. two scientists are much more likely to have collaborated if they share a third collaborator than are two scientists chosen at random from the community.[4] Barabasi et al. studied collaboration networks in mathematics and neuroscience over an 8-year period (1991-1998) to understand the topological and dynamical laws governing complex networks. They viewed the collaboration network as a prototype of evolving networks, since it expands by the addition of new nodes (authors) and new links (co-authored papers). Their results indicated that the network is scale-free and that its evolution is governed by preferential attachment. The authors also concluded that most quantities used to characterize the network are time-dependent; for example, the average degree (the network's interconnectedness) increases over time. The study further showed that node separation decreases over time, although this trend is believed to be an artifact of the incomplete database and may be reversed in the full system.[5]
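As an illustration of the quantities discussed above, the following hedged sketch builds a toy co-authorship network and computes its clustering, degree assortativity and typical separation; the paper list is invented, and the networkx package is assumed to be available.

```python
# Sketch: build a co-authorship network from author lists and compute the
# quantities discussed above (clustering, assortativity, typical separation).
# The paper data here is invented for illustration; a real study would use
# a bibliographic database. Requires the networkx package.
import itertools
import networkx as nx

papers = [
    ["Newman", "Watts"],
    ["Newman", "Barabasi", "Albert"],
    ["Barabasi", "Albert"],
    ["Watts", "Strogatz"],
    ["Albert", "Strogatz", "Newman"],
]

G = nx.Graph()
for authors in papers:
    # every pair of co-authors on a paper gets an (undirected) link
    G.add_edges_from(itertools.combinations(authors, 2))

print("average clustering:", nx.average_clustering(G))
print("degree assortativity:", nx.degree_assortativity_coefficient(G))
print("mean shortest path:", nx.average_shortest_path_length(G))
```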
https://en.wikipedia.org/wiki/Scientific_collaboration_network
A social graph is a graph that represents social relations between entities. It is a model or representation of a social network. The social graph has been referred to as "the global mapping of everybody and how they're related".[1] The term was used as early as 1964, albeit in the context of isoglosses.[2] Leo Apostel used the term in the present sense in 1978.[3] The concept was originally called a sociogram. The term was popularized at the Facebook F8 conference on May 24, 2007, when it was used to explain how the newly introduced Facebook Platform would take advantage of the relationships between individuals to offer a richer online experience.[4] The definition has since been expanded to refer to a social graph of all Internet users. Since explaining the concept of the social graph, Mark Zuckerberg, one of the founders of Facebook, has often touted Facebook's goal of offering the website's social graph to other websites so that a user's relationships can be put to use on websites outside Facebook's control.[5] As of 2010, Facebook's social graph is the largest social network dataset in the world,[6] and it contains the largest number of defined relationships between the largest number of people among all websites because it is the most widely used social networking service in the world.[7] Facebook's social graph played a crucial role in the company's rapid growth by increasing user engagement, optimizing what each user sees in their feed, and enabling an extremely efficient advertising policy. With its social graph, Facebook created a huge network of its platform's users, which enabled it to grow exponentially.[8] One of the central features of Facebook is its feed – what each user sees in the app. Facebook's feed is largely assembled using its social graph. Instead of displaying random publications from random users, the graph allows the app to display personalized content based on each user's previous interactions. This individualized approach enhances the experience that the app offers, which increases users' engagement with the social media application. Likes, shares and comments also play a key role in the social graph's layout, reinforcing interactions and visibility between two users who enjoy the same kinds of content.[9] Facebook's social graph has been analyzed in multiple papers. In 2011, a study[10] confirmed the six degrees of separation phenomenon at the scale of the graph. Social graphs are typically stored using graph databases, which use graph query languages to manage and query relationships efficiently. For storing its social graph, Facebook relies on TAO (The Associations and Objects), a custom-built, distributed system optimized for fast read operations at massive scale.[11] Several issues have been raised regarding the existing implementation of the social graph owned by Facebook. For example, a social networking service is currently unaware of the relationships forged between individuals on a different service. This creates an online experience that is not seamless and is instead fragmented, owing to the lack of an openly available graph shared between services. In addition, existing services define relationships differently.
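As a rough illustration of how a social graph can be stored and queried, the sketch below keeps a toy friendship graph as adjacency lists and uses breadth-first search to measure "degrees of separation"; it is only a didactic sketch, not Facebook's TAO, and all names in it are hypothetical.

```python
# Toy in-memory social graph: adjacency lists plus a breadth-first search for
# "degrees of separation". This only sketches the underlying data model;
# production systems such as TAO are distributed stores, and all names here
# (FRIENDS, degrees_of_separation) are hypothetical.
from collections import deque

FRIENDS = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave"},
    "dave": {"bob", "carol", "erin"},
    "erin": {"dave"},
}

def degrees_of_separation(graph, start, target):
    """Return the length of the shortest friendship chain from start to target."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == target:
            return dist
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None  # not connected

print(degrees_of_separation(FRIENDS, "alice", "erin"))  # -> 3
```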
Concern has also focused on the fact that Facebook's social graph is owned by the company and is not shared with other services, giving it a major advantage over other services and preventing its users from taking their graph with them to another service when they wish to do so, such as when a user is dissatisfied with Facebook. Google attempted to offer a solution to this problem by creating the Social Graph API, released in January 2008,[12] which allowed websites to draw on publicly available information about a person to form a portable identity representing that person's online identity.[13] This did not, however, see the uptake Google had hoped for and was thus retired in 2012.[14] Facebook introduced its own Graph API at the 2010 f8 conference. Both companies monetise collected data sets through direct marketing and social commerce.[15] In December 2016, Microsoft acquired LinkedIn for $26.2 billion.[16] Finally, the massive use of social graphs has raised ethical questions and confidentiality problems. The Cambridge Analytica scandal in 2018[17] showed the world how third-party apps had used social graph data for political profiling, which sparked global outrage. Moreover, extreme personalization algorithms have had another problematic effect – the creation of filter bubbles and echo chambers, which reinforce users' existing beliefs and influence public debate.[18] These concerns led to the adoption of stricter data protection regulations, such as the California Consumer Privacy Act, forcing Facebook to change the way it uses data.[19] As of 2012, Twitter is the most popular micro-blogging service in the world. Unlike classical social networks (e.g., Facebook), the relation between Twitter users is unidirectional, which makes information propagation on Twitter much closer to how information propagates in real life. In 2012, Twitter's social graph consisted of 537 million Twitter accounts connected by 23.95 billion links.[20] Facebook's Graph API allows websites to draw information about more objects than simply people, including photos, events, and pages, and the relationships between them. This expands the social graph concept beyond relationships between individuals and applies it to virtual non-human objects between individuals as well.[21] The concept of the social graph can be extended to uses other than online social networks, and it finds applications in many fields where interconnected relationships exist. In sports, most commonly in team sports, interactions between players and teams can be studied to improve performance, such as the number of passes between two specific players in football or the proximity and distance between players in basketball.[22] Such interactions can be modeled as a social graph and used to optimize strategy. In statistical studies, social graphs can be used to map the spread of disease through a society.
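To illustrate the unidirectional structure mentioned above for Twitter, and the related idea of mapping spread over a graph, here is a hedged sketch of propagation over a toy directed follow graph; the accounts and edges are invented.

```python
# Sketch: propagation over a directed, Twitter-style follow graph. A post by
# an account reaches its followers, their followers, and so on - the same
# traversal idea used when social graphs model the spread of disease.
# The accounts and edges are invented.
FOLLOWERS = {                    # account -> accounts that follow it
    "newsbot": {"alice", "carol"},
    "alice": {"bob"},
    "bob": set(),
    "carol": {"dave"},
    "dave": set(),
}

def reach(source):
    """Return every account a post from `source` can cascade to via reshares."""
    seen, frontier = set(), {source}
    while frontier:
        nxt = set()
        for account in frontier:
            for follower in FOLLOWERS.get(account, set()):
                if follower not in seen and follower != source:
                    seen.add(follower)
                    nxt.add(follower)
        frontier = nxt
    return seen

print(reach("newsbot"))   # -> {'alice', 'bob', 'carol', 'dave'}
```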
https://en.wikipedia.org/wiki/Social_graph
In the field ofsociolinguistics,social networkdescribes the structure of a particularspeech community. Social networks are composed of a "web of ties" (Lesley Milroy) between individuals, and the structure of anetworkwill vary depending on the types of connections it is composed of. Social network theory (as used by sociolinguists) posits that social networks, and the interactions between members within the networks, are a driving force behind language change. The key participant in a social network is theanchor, or center individual. From this anchor, ties of varying strengths radiate outwards to other people with whom the anchor is directly linked. These people are represented bypoints. Participants in a network, regardless of their position, can also be referred to asactorsormembers. There are multiple ways to describe the structure of a social network. Among them aredensity, member closeness centrality, multiplexity,andorders. These metrics measure the different ways of connecting within of a network, and when used together they provide a complete picture of the structure of a particular network. A social network is defined as either "loose" or "tight" depending on how connected its members are with each other, as measured by factors like density and multiplexity.[1]This measure of tightness is essential to the study of socially motivated language change because the tightness of a social network correlates with lack of innovation in the population's speech habits. Conversely, a loose network is more likely to innovate linguistically. The density of a given social network is found by dividing the number of all existing links between the actors by the number of potential links within the same set of actors.[2]The higher the resulting number, the denser a network is. Dense networks are most likely to be found in small, stable communities with few external contacts and a high degree of social cohesion. Loose social networks, by contrast, are more liable to develop in larger, unstable communities that have many external contacts and exhibit a relative lack of social cohesion.[3] Member closeness centrality is the measurement of how close an individual actor is to all the other actors in the community. An actor with high closeness centrality is a central member, and thus has frequent interaction with other members of the network. A central member of a network tends to be under pressure to maintain the norms of that network, while a peripheral member of the network (one with a low closeness centrality score) does not face such pressure.[4]Therefore, central members of a given network are typically not the first members to adopt a linguistic innovation because they are socially motivated to speak according to pre-existing norms within the network.[5] Multiplexity is the number of separate social connections between any two actors. It has been defined as the "interaction of exchanges within and across relationships".[6]A single tie between individuals, such as a shared workplace, is a uniplex relationship. A tie between individuals is multiplexwhen those individuals interact in multiple social contexts. For instance, A is B's boss, and they have no relationship outside of work, so their relationship is uniplex. However, C is both B's coworker and neighbor, so the relationship between B and C is multiplex, since they interact with each other in a variety of social roles.[2] Orders are a way of defining the place of a speaker within a social network. 
Actors are classified into three different zones depending on the strength of their connection to a certain actor.[7]The closer an individual's connection to the central member is, the more powerful an individual will be within their network. Social network theories of language change look for correlation between a speaker's order and their use of prestigious or non-prestigious linguistic variants. Afirst order zoneis composed of all individuals that are directly linked to any given individual. The first order zone can also be referred to as the "interpersonal environment"[8]or "neighborhood". A first order member of a network is an actor who has a large number of direct connections to the center of the network. Asecond order zoneis a grouping of any individuals who are connected to at least one actor within the first order zone. However, actors in the second order zone are not directly connected to the central member of the network. A second order member has a loose or indirect connection to the network, and may only be connected to a certain network member. Athird order zoneis made up of newly observed individuals not directly connected to the first order zone.[9]Third order members may be connected to actors in the second order zone, but not the first. They are peripheral members of the network, and are often the actors with the lowest member closeness centrality, since they may not have frequent contact with other members of the network. Social networks are used in sociolinguistics to explain linguistic variation in terms of community norms, rather than broad categories like gender or race.[7]Instead of focusing on the social characteristics of speakers, social network analysis concentrates on the relationships between speakers, then considers linguistic change in the light of those relationships. In an effort to depart fromvariationist sociolinguistics,[10]the concept of the social network has been used to examine the links between the strength ofnetwork tiesand the use of a linguistic variant. This allows researchers to create an accurate picture of a community's language use without resorting to stereotypical classification. The concept of social networks is applicable at both the macro and micro levels. Social networks are at work in communities as large as nation-states or as small as an online dating service. They can also be applied to intimate social groups such as a friendship, family unit, or neighborhood. Because even the smallest of networks contains an enormous number of potential connections between actors, sociolinguists usually only study small networks so that the fieldwork is manageable. In fact, even when studying small networks, sociolinguists rely on the metrics outlined in the previous section, rather than mapping the network out, one connection at a time. One way of mapping the general structure of a network is to assign astrength scaleto each speaker. 
For example, in Lesley Milroy's study of social networks in Belfast, Northern Ireland, the researchers measured five social variables which together generated a strength scale for each member of the network. The allocation of a network strength score allows the network patterns of individuals to be measured and possible links with linguistic patterns to be tested.[11] In recent years, computer simulation and modeling have been used to study social networks on a larger scale, both with more participants and over a greater span of time.[12][13][14] Previous social network studies had to examine individual connections in great detail, and so had to limit the size of the networks involved so that the researcher could work personally with subjects. Linguists working in the field were also unable to accurately pinpoint the causes of linguistic change, because it tends to occur slowly over a long period of time, on a scale beyond the scope of a single research project. With the rise of advanced computer modeling techniques, sociolinguists have been able to study the linguistic behavior of large networks of individuals over long periods of time without the huge expenditure of time required to work individually with thousands of subjects. The pioneering study in this field was Fagyal et al. in 2011.[12] Because social network theory investigates the forces that impact individual behavior, rather than simply attributing linguistic difference to social class, a theory of language change based on social networks is able to explain linguistic behavior more deeply than variationist sociolinguistics. The two major findings of social network theory are that dense (highly interconnected) networks are resistant to change, and that most linguistic change is initiated by weak links—people who are not centrally connected to the network in question. Though most sociolinguists working on social networks agree on these findings, there has been extended debate about which actors in the network are the primary drivers of linguistic change. This debate has produced two theories: the strong-tie theory and the weak-tie theory. In Eckert's study of speech norms in Detroit high schools, she notes that suburban youth adopted the speech traits of urban youth (including a diphthongized and lowered [i]).[5] The study demonstrated that actors chose to imitate other (more prestigious) actors who embodied desirable social attributes, especially "toughness" as exemplified by urban students. This imitation of desirable qualities indicates that strongly connected agents lead change by spreading norms through network members. Labov's 1986 study of Philadelphia speech communities (a term used before "social networks" became widespread) demonstrated that the agents of linguistic change were the leaders of the speech communities. Actors with high levels of linguistic prestige led the use of these forms, and enforced them as norms within the community.
Members of this network then used the forms normalized within the network outside of the network, and continuous usage led to wide adoption of these speech norms.[5] Takeshi Sibata's 1960 study of elementary school children[25]provides strong support for the view that insiders, or leaders, in a social network facilitate language change. He interviewed several elementary school children, and taught them some invented words he created for the purpose of this study. After teaching the students these words, and telling them to teach the other students these words, he came back a week later to observe the results. A few children, those who were popular, friendly, cheerful, active in class activities, were the main people who spread those words. As the centers of their respective networks, these children functioned as strong-tie leaders of linguistic change. Labov's 1966 study ofAfrican American Vernacular Englishin SouthHarlem,[26]revealed that second-order actors in African American social networks were the initiators of linguistic change in their communities. Though these second-order actors, or "lames" were not held in high regard by the leaders of the speech network, they had connections to other networks, and were sources of new linguistic variables. This study served as the basis of theWeak Tie Theoryproposed by Milroy and Milroy. ThisMilroy and Milroystudy examined vernacular English as it was spoken in inner-city Belfast in the 1970s, in three working class communities in Belfast: those in the Ballymacarrett area, the Hammer area, and the Clonard area. Milroy took part in the life of each community as an acquaintance, or 'friend of a friend', investigating the correlation between the integration of individuals in the community and the way those individuals speak. Each individual studied was given a network strength score based on the person's knowledge of other people in the community, the workplace and at leisure activities to give a score of 1 to 5, with 5 being the highest network 'strength score'. Out of the five variables, one measured density, while the other four measured multiplexity. Each person's use of phonological variables, (ai), (a), (l), (th), (ʌ), (e), which were clearly indexical of the Belfast urban speech community, were then measured. The independent variables for this study were age, sex and location. These linguistic variables made up the dependent variable of the study, and were analyzed in relation to the network structure and background of each individual speaker. Deviation from the regional standard was determined by density and multiplicity of the social networks into which speakers are integrated. The researchers found that a high network strength score was correlated with the use of vernacular forms, and therefore that the use of vernacular variants was strongly influenced by the level of integration into a network. The conclusion of the study was that close-knit networks are important for dialect maintenance. This 1987 study, also conducted by Milroy, examined the variable [u], and its relationship to working class identity. The researchers found that actors with the weakest tie to this community identity were most likely to use the variable [u], possibly as a way to strengthen their ties to the network. In Ballymacarrett, one of the villages the researchers surveyed, unrounded [u] was most often used by young males and females, who had weak ties to the working class networks, but use the variables frequently to project an image of working-class toughness. 
These young people often interacted with members of other social networks, and thus spread the [u] realization through their own social networks, which resulted in the adoption of unrounded [u] in most of Belfast. These results provide support for the weak tie theory of language change, because it was the actors on the peripheries of social networks who were responsible for spreading linguistic change. One key study that employed computer simulations was Fagyal, Swarup, Escobar, Gasser, and Lakkarajud's work on the roles of group insiders (leaders) and outsiders (loners) in language change.[12]The researchers found that both first-order and second-order network members (also known as "leaders" and "loners") were both needed in order for changes to spread predictably within the network. In this study, the researchers simulated a social network of 900 participants, called nodes, which were connected into a network using a matrix algorithm. They then randomly assigned a linguistic variant to each node. On each cycle of the algorithm, every node interacted with another node, and the variant assigned to each node changed randomly depending on which variant the other node had. This cycle was repeated 40,000 times, and at the end of each cycle, the variant connected to each node was recorded. The results of the Fagyal et al. study indicated that "in a large, socially heterogenous population", one linguistic variant eventually became the community norm, though other variants were not entirely eliminated. However, when the researchers manipulated the network to remove either loners or leaders, the results changed: without loners, one variant rapidly caused the loss of all other variants; and without leaders, no single variant became the norm for a majority of speakers. These findings allowed the researchers to address the major debate in social network theory: whether it is leaders (or centers) or loners who are responsible for language change. In their findings, the presence of both leaders and loners was essential, though the two types of agents played different roles in the process of change. Rather than introducing entirely new forms, leaders accelerate the adoption of forms that already exist within the network. Conversely, the researchers describe the loners' role this way: "when loners are a part of a population structure that allows their influence to reach centrally-connected hubs, they can have a decisive impact on the linguistic system over time." Previously, researchers had posited that loners preserved old forms that had been neglected by the larger community. Fagyal et al. complicate this claim by suggesting that the role of loners in a network is to safeguard old features, then reintroduce them to the community. The researchers in Berg's 2006 study of digital social networks as linguistic social networks note the value of social networks as both linguistic corpuses and linguistic networks.[13] In Carmen Perez-Sabater's 2012 study of Facebook users,[27]she discusses the use of English by native and non-native speakers on university Facebook pages. The researchers categorize these posts as a model of "computer-mediated communication", a new communication style that combines features of writing and speech. Facebook posts generally have a degree of informality, whether the users are native or nonnative English speakers, but native English speakers often have a higher degree of informality. 
For example, non-native speakers cited in the study use separated letter-style greetings and salutations, indicating linguistic insecurity. The conclusions of the study were that "computer-mediated communication" does not always tend toward informality, and that online social networks pattern similarly to non-virtual social networks.
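Returning to the Fagyal et al. simulation summarized earlier, the following is a drastically simplified sketch of that kind of agent-based model; the random wiring, adoption probability and sizes are invented stand-ins rather than the study's actual matrix algorithm.

```python
# A drastically simplified sketch of the kind of agent-based simulation
# described for the Fagyal et al. study: nodes carry a linguistic variant and
# repeatedly copy variants from neighbours. The random-graph wiring, the
# adoption probability and the small sizes are stand-ins, not the study's
# actual matrix algorithm.
import random

random.seed(1)
N_NODES, N_CYCLES, VARIANTS = 90, 4000, ["a", "b", "c"]

# wire each node to a handful of random neighbours (undirected)
neighbours = {i: set() for i in range(N_NODES)}
for i in range(N_NODES):
    for j in random.sample(range(N_NODES), 3):
        if i != j:
            neighbours[i].add(j)
            neighbours[j].add(i)

variant = {i: random.choice(VARIANTS) for i in range(N_NODES)}

for _ in range(N_CYCLES):
    i = random.randrange(N_NODES)
    if neighbours[i]:
        j = random.choice(sorted(neighbours[i]))
        # with some probability, node i adopts its neighbour's variant
        if random.random() < 0.5:
            variant[i] = variant[j]

counts = {v: sum(1 for x in variant.values() if x == v) for v in VARIANTS}
print(counts)   # typically one variant comes to dominate without the others vanishing
```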
https://en.wikipedia.org/wiki/Social_network_(sociolinguistics)
Structural folding is the network property of a cohesive group whose membership overlaps with that of another cohesive group.[1] The idea reaches back to Georg Simmel's argument that individuality itself might be the product of a unique intersection of network circles.[2] It has been proposed that successful firms often cluster together in cohesive groups, as dense ties among the group members reduce transaction costs by providing a basis for trust and coordination amongst the firms.[3][4] Cohesive ties also enable the firms to implement projects beyond their individual capacity and cushion them against great uncertainty.[5] However, another logic suggests that business groups might choose to forgo high density within the group in favour of maintaining some weaker ties to firms outside the group. In this way firms can reduce redundant ties and form long-distance links to other firms that can provide them with novel information.[6] This logic rests on the assumption that the conservatizing strategy of in-group cohesion is maladaptive, as it risks locking businesses into their early successes and strategies, which in the absence of new information can easily become detrimental in a rapidly changing business environment. A third strategy would be to combine the benefits of the previous two. Such solutions have been termed "closure and brokerage" or "cohesion and connectivity", and both draw on the complementarity of these distinctive network features, which is especially valuable for entrepreneurship. Actors at structural folds are multiple insiders, benefiting from dense cohesive ties that provide familiarity with the operations of the members of their group. However, because they are part of more than one group, they also have access to diverse information. This combination of familiarity and diversity facilitates innovation and creative success through the recombination of resources.[7] Intercohesion is a distinctive network structure built from intersecting cohesive groups. It rests on the theoretical principle that cohesive group structures are not necessarily exclusive: network structures can in fact be both cohesive and overlapping.[7] This idea originates with Georg Simmel, who argued in one of his works that a person is often a member of more than one cohesive group at the same time, and that these multiple group memberships are part of both the individuation and the social integration of the person.[2] Intercohesion thus refers to mutually interpenetrating cohesive structures, while the resulting distinctive network position at the intersection is a structural fold.[7] Because actors at structural folds can be considered multiple insiders, who benefit both from dense cohesive ties that provide familiarity with the operations of the members of their group and from access to non-redundant information, they are believed to be in a better position for innovation and creative success. Based on the Schumpeterian understanding of the term, entrepreneurship is conceptualised as knowledge production through recombination, rather than just the importing of new ideas. Thus, actors at structural folds occupy a privileged position for successful innovation and creativity. On the one hand, they are part of a cohesive group that provides deeply familiar access to knowledge bases and productive resources, which are essential for generative recombination.
On the other hand, they also have the opportunity to interact across different groups and have access to a diverse set of sources, which is also considered key for the innovative recombination of existing resources.[7] In their study, Vedres and Stark (2010) indeed found that structural folding contributed to the higher performance of business groups. Moreover, entrepreneurship also has dynamic properties along the temporal dimension. As Schumpeter observed, entrepreneurship also brings what he called creative destruction. In network-analytical terms, structural folding may in fact destabilise groups,[7] as overlapping membership can disrupt group coordination and reciprocal trust. Empirically, it was found that in the case of Hungarian businesses, groups with more structural folds are more likely to break up, and when they do so they are likely to fragment into smaller groups. It has also been argued that structural folding contributes to creative success in the case of cultural products, especially when the overlapping groups are cognitively distant.[1] For creative innovation, teams need a diversity of stylistic elements to recombine. In cultural fields, where teams periodically assemble, dissolve and reassemble, the knowledge base of a team resides in its members' previous experience with various styles. Cognitively diverse groups held in tension by structural folds have both a greater repertoire of action and the ability to recontextualize knowledge. Thus, structural folding improves the likelihood of creative success and innovation by increasing the possibility of overriding things that are taken for granted and of thinking more reflexively. Cognitively distant but overlapping cohesive group structures are therefore productive because of the mixing, ambiguities and tensions they encounter.[1]
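As a rough illustration of the structural-fold idea, the sketch below flags actors who belong to more than one cohesive group, using maximal cliques as a crude stand-in for cohesive groups; the edge list is invented and the networkx package is assumed.

```python
# Sketch: flag actors at structural folds, i.e. members of more than one
# cohesive group. Maximal cliques are used here as a crude stand-in for
# cohesive groups; the edge list is invented. Requires networkx.
import networkx as nx

edges = [
    ("a", "b"), ("b", "c"), ("a", "c"),          # cohesive group 1
    ("c", "d"), ("d", "e"), ("c", "e"),          # cohesive group 2 (overlaps at c)
    ("e", "f"),
]
G = nx.Graph(edges)

cliques = [set(c) for c in nx.find_cliques(G) if len(c) >= 3]
membership = {n: sum(n in c for c in cliques) for n in G}
folds = [n for n, k in membership.items() if k > 1]
print(folds)   # -> ['c']: a "multiple insider" bridging two cohesive groups
```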
https://en.wikipedia.org/wiki/Structural_fold
Research networking (RN) is about using tools to identify, locate and use research and scholarly information about people and resources. Research networking tools (RN tools) serve as knowledge management systems for the research enterprise. RN tools connect institution-level/enterprise systems, national research networks, publicly available research data (e.g., grants and publications), and restricted/proprietary data by harvesting information from disparate sources into compiled profiles for faculty, investigators, scholars, clinicians, community partners and facilities. RN tools facilitate collaboration and team science to address research challenges through the rapid discovery and recommendation of researchers, expertise and resources.[1][2] RN tools differ from search engines like Google in that RN tools access information in databases and other data not limited to web pages. They also differ from social networking systems in that they represent a compendium of data ingested from authoritative and verifiable sources rather than predominantly individually posted information, making RN tools more reliable.[3] Yet RN tools have sufficient flexibility to allow for profile editing. RN tools provide resources to bolster human connections:[4] they can make non-intuitive matches, do not depend on serendipity, and do not have a propensity to return only previously identified collaborations or collaborators. RN tools generally have associated analytical capabilities that enable evaluation of collaboration and cross-disciplinary research/scholarly activity, especially over time. RN tools and research profiling systems can help researchers gain recognition. Active promotion of scholarship is an aspect of the publication cycle, and commercial and non-profit services help researchers increase visibility and recognition. Digital researcher services enhance the discoverability, shareability and citability of scholarship; according to Shanks and Arlitsch,[5] such services fall into three categories. Importantly, data harvested into RN tools can be repurposed, especially if available as Linked Open Data (RDF triples). RN tools thereby enhance research support activities by providing data for customized web pages, CV/biosketch generation, and data tables for grant proposals. The comparison tables cover: the types of data used in each RN tool and how these data are ingested, along with data export formats (e.g. XML, RDF, RIS, PDF) and various government assessment submission formats; whether a research networking tool is compatible with institutional enterprise systems (e.g. human resources databases), can be integrated with other external products or add-ons, and can be used for regional, national, international or federated connectivity; which user population is profiled for each tool, whether users can edit their own profile data, and the type of networking (active networking means that the user can enter connections to the network by entering colleagues' names, while passive networking means that the software infers network connections from a user's publication co-authors and builds a network from those names); the types of controlled vocabulary or thesauri used by the tools, the ontologies supported, and whether author disambiguation is performed by the software; and the types of bibliometrics provided in each tool. This page has been cited by "AAMC Technology Now Research Networking" (pdf).
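As a hedged illustration of re-exposing harvested profile data as Linked Open Data (RDF triples), here is a minimal sketch using the rdflib package; the URIs and properties are invented placeholders rather than any RN tool's real schema.

```python
# Minimal sketch of re-exposing a harvested profile as Linked Open Data
# (RDF triples) with the rdflib package. The URIs and properties below are
# invented placeholders, not the schema of any actual RN tool.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/profile/")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
researcher = URIRef("http://example.org/profile/researcher/42")
g.add((researcher, FOAF.name, Literal("A. Example")))
g.add((researcher, EX.affiliation, Literal("Example University")))
g.add((researcher, EX.authored, URIRef("https://doi.org/10.1000/example")))

for s, p, o in g:   # the triples could equally be serialized as Turtle or N-Triples
    print(s, p, o)
```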
https://en.wikipedia.org/wiki/Comparison_of_research_networking_tools_and_research_profiling_systems
Organizational network analysis (ONA) is a method for studying communication[1] and socio-technical networks within a formal organization. The technique creates statistical and graphical models of the people, tasks, groups, knowledge and resources of organizational systems. It is based on social network theory[2] and, more specifically, dynamic network analysis. ONA can be used in a variety of ways by managers, consultants, and executives. There are several tools that allow managers to visually depict their employee networks. Most of the tools are built specifically for researchers and academics who study network theory, but they are relatively inexpensive to use, as long as leaders are well versed in how to capture the information, feed it into the tool in the correct formats, and "read" and translate the network graphs into business decisions. Several recent studies have highlighted that 'psychological safety' is a marker of an innovative team. This was first studied and published by Google in its Project Aristotle work,[3] and has also been highlighted in the New York Times[4] and other research publications.[5] Amy Edmondson is the preeminent scholar and researcher in this field, having worked across various industries to identify the benefits and characteristics of psychological safety in teams. ONA is increasingly being used in this context to analyze the relationships developed within a given team and to understand how that team works as a unit to create psychological safety for its members. The technique is more thorough than traditional surveys. Engagement surveys and other such culture surveys have become a mainstay of the workplace. However, one of the largest complaints about such surveys is that once managers see the results, often the aggregated sentiments of their employees, they are unsure of the next steps and actions. Organizational network analysis, when combined with such engagement surveys, however, changes the way that leaders use and leverage these results. Because ONA allows managers to see the context behind the sentiments, they can actually understand how to correct or sustain these results. For example, if a company's engagement survey found that 30% of employees felt they were inadequately trained for their jobs, a manager might be inclined either to do nothing or to invest more in comprehensive training programs. However, doing an ONA alongside the survey might reveal that employees are unhappy with training because they have limited access to institutional knowledge at the company. Then, instead of a training program, managers might simply work on ensuring that their top knowledge hubs share their knowledge broadly, producing a longer-lasting, more sustainable improvement in the team's level of information and training.[6]
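As a toy illustration of how an ONA might surface the "knowledge hubs" mentioned above, the sketch below ranks employees by betweenness centrality in an invented communication network; the networkx package is assumed.

```python
# Sketch: locate likely "knowledge hubs" in an employee communication network
# by ranking centrality. The edge list is invented; a real ONA would build it
# from surveys, e-mail metadata or collaboration tools. Requires networkx.
import networkx as nx

comms = [
    ("ana", "ben"), ("ana", "carla"), ("ana", "dev"),
    ("ben", "carla"), ("dev", "eli"), ("eli", "fay"), ("dev", "fay"),
]
G = nx.Graph(comms)

betweenness = nx.betweenness_centrality(G)
hubs = sorted(betweenness, key=betweenness.get, reverse=True)[:2]
print(hubs)   # employees who broker the most information flow
```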
https://en.wikipedia.org/wiki/Organizational_network_analysis
Anonymous social media is a subcategory of social media wherein the main social function is to share and interact around content and information anonymously on mobile and web-based platforms.[1] Another key aspect of anonymous social media is that posted content or information is not connected with particular online identities or profiles.[2] Appearing very early on the web, mostly as anonymous-confession websites, this genre of social media has evolved into various types and formats of anonymous self-expression.[3] One of the earliest anonymous social media forums was 2channel, which was first introduced online on May 30, 1999, as a Japanese text board forum. With the way digital content is consumed and created continuously changing, the ongoing shift from web to mobile applications is also affecting anonymous social media.[4] This can be seen in anonymous blogging and in various other format-based content platforms, as nameless question-and-answer platforms such as Ask.fm introduced mobile versions of their services. The number of new networks joining the anonymous social sharing scene continues to grow rapidly.[citation needed] Across different forms of anonymous social media there are varying degrees of anonymity. Some applications, such as Librex, require users to sign up for an account, even though their profile is not linked to their posts. While these applications remain anonymous, some of them can sync with the user's contact list or location to develop context within the social community and help personalize the user's experience, as with Yik Yak or Secret.[5] Other sites, such as 4chan and 2channel, allow for a purer form of anonymity, as users are not required to create an account and posts default to the username "Anonymous".[6] While users can still be traced through their IP address, there are anonymizing services such as I2P and various proxy server services that conceal a user's identity online by routing traffic through different routers. Secret users must provide a phone number or email when signing up for the service, and their information is encrypted into their posts.[7] Stylometry poses a risk to the anonymity or pseudonymity of social media users, who may be identifiable by writing style; in turn, they may use adversarial stylometry to resist such identification.[8] Apps such as Formspring, Ask, Sarahah, Whisper, and Secret have elicited discussion around the rising popularity of anonymity apps, including debate and anticipation about this class of social sharing.[9] As more and more platforms join the league of anonymous social media, there is growing concern about the ethics and morals of anonymous social networking as cases of cyber-bullying and personal defamation occur.[10][11] Formspring, also known as spring.me, and Ask.fm have both been associated with teen suicides as a result of cyberbullying on the sites. Formspring has been associated with at least three teen suicides[12][13][14] and Ask.fm with at least five.[15][16] For instance, the app Secret was shut down because of escalating cyberbullying on the platform.[17] The app Yik Yak has also contributed to cyberbullying situations and, in turn, was blocked on some school networks.[18] Its privacy policy meant that users could not be identified without a subpoena, search warrant, or court order.[19] Another app, After School, also sparked controversy over a design that lets students post any content anonymously. Due to these multiple controversies,[20] the app has been removed from both the Apple and Google app stores.
As the number of people using these platforms multiplies, unintended uses of the apps have increased, prompting popular networks to add in-app warnings and to prohibit use by middle and high school students.[21] 70% of teens admit to making an effort to conceal their online behavior from their parents.[22] Even Snapchat has been linked to concerns about children's wellbeing on social media. The app is meant to be quick and simple, but in many ways it can be overwhelming: a person can post something and have it disappear in seconds, yet the posts are often inappropriate and harmful to another person, creating a never-ending cycle.[23] Some of these apps have also been criticized for causing chaos in American schools, such as lockdowns and evacuations.[24] In order to limit the havoc caused, anonymous apps are currently removing abusive and harmful posts.[25] Apps such as Yik Yak, Secret, and Whisper remove these posts by outsourcing the job of content supervision to overseas surveillance companies. These companies hire teams of individuals to inspect and remove any harmful or abusive posts. Furthermore, algorithms are also used to detect and remove abusive posts the individuals may have missed.[26] Another method, used by the anonymous app Cloaq to reduce the number of harmful and abusive posts, is to limit the number of users that can register during a certain period. Under this system, all content is still available to the public, but only registered users can post.[27] Other websites, such as YouTube, have gone on to create new policies regarding anonymity.[28] YouTube no longer allows anonymous comments on videos: users must have a Google account to like, dislike, comment or reply to comments on videos.[29] Once a signed-in user "likes" a video, it is added to that user's 'Liked video playlist'.[30] YouTube changed its "Liked video playlist" policy in December 2019, allowing a signed-in user to keep the playlist private.[30] Historically, these controversies and the rise of cyberbullying have been blamed on the anonymous aspect of many social media platforms,[31] but about half of US adult online harassment cases do not involve anonymity,[32] and researchers have found that if targeted harassment exists offline it will also be found online, because online harassment is a reflection of existing prejudices.[33][34] Anonymous social media can be used for political discussion in countries where political opinions opposed to the government are normally suppressed, and can allow persons of different genders to communicate freely in cultures where such communication is not generally accepted.[35][36] In the United States, the 2016 presidential election led to an increase in the use of anonymous social media websites to express political stances.[37] Moreover, anonymous social media can also provide authentic connection through completely anonymous communication. There have been cases where these anonymous platforms have saved individuals from life-threatening situations or spread news about a social cause.[24] Additionally, anonymous social websites also allow internet users to communicate while safeguarding personal information from criminal actors and from corporations that sell users' data.[36] A study in 2017 on the content posted to 4chan's /pol/ board found that the majority of the content was unique, including 70% of the 1 million images included in the studied data set.[38] Generating revenue from anonymous apps has been a topic of discussion for investors.
Since little information is collected about the users, it is difficult for anonymous apps to advertise to them.[25] However, some apps, such as Whisper, have found a method of overcoming this obstacle: they have developed a "keyword-based" approach, in which advertisements are shown to users depending on certain words they type.[39] The app Yik Yak has been able to capitalize on the features it provides.[40] Anonymous apps such as Chrends take the approach of using anonymity to provide freedom of speech.[41] The telephony app Burner has regularly been a top-grossing utilities app in the iOS and Android app stores thanks to its phone number generation technology.[42] Despite the success of some anonymous apps, there are also apps, such as Secret, that have yet to find a way to generate revenue.[43] The idea of an anonymous app has also drawn mixed opinions from investors. Some investors have invested large sums of money because they see the potential revenue these apps could generate, while others have stayed away from investing in these apps because they feel the apps bring more harm than good.[44] There are several ways for anonymous social media sites to generate revenue.[45] One source of revenue is implementing programs such as a premium membership or a gift-exchanging program.[46] Another is merchandising goods and selling specific usernames to users.[47] In addition, sites such as FMyLife have implemented a policy under which the anonymous site receives 50% of the profit from apps that make money off it.[48] In terms of advertisements, some anonymous sites have had trouble implementing or attracting them, for several reasons. Anonymous sites such as 4chan have received few advertising offers because of some of the content they generate.[49] Other anonymous sites, such as Reddit, have been cautious in implementing advertisements in order to maintain their user base.[46] Despite the lack of advertisements on certain anonymous sites, there are still anonymous sites, such as SocialNumber, that support the idea.[47]
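As a toy illustration of the "keyword-based" approach described above, the following hypothetical sketch matches a post to ad categories (and flags it for review) from the words it contains; the word lists are invented and real systems are far more elaborate.

```python
# Toy illustration of a keyword-based approach: a post is matched to ad
# categories (or flagged for review) from the words it contains. The word
# lists are invented; real systems are far more elaborate.
AD_KEYWORDS = {
    "travel": {"flight", "hotel", "beach"},
    "fitness": {"gym", "run", "protein"},
}
FLAG_KEYWORDS = {"threat", "harass"}

def classify(post: str):
    words = set(post.lower().split())
    categories = [cat for cat, kws in AD_KEYWORDS.items() if words & kws]
    flagged = bool(words & FLAG_KEYWORDS)
    return categories, flagged

print(classify("Booked a flight and a hotel for spring break"))
# -> (['travel'], False)
```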
https://en.wikipedia.org/wiki/Anonymous_social_media
A distributed social network (more recently referred to as a federated social network) is a network wherein all participating social networking services can communicate with each other through a unified communication protocol. Users who reside on a compatible service can interact with any user from any compatible service without having to log on to the origin's website. From a societal perspective, one may compare this concept to that of social media being a public utility. Federated social networks contrast with social network aggregation services, which are used to manage accounts and activities across multiple discrete social networks that cannot communicate with each other. A popular example of a federated social network is the fediverse, with more niche examples such as the IndieWeb complementing the network. Services that want to natively connect into a federated social network need to be interoperable with both the majority of content that the network produces (either by converting the content into the service's native format or by adding the ability to read the content in its intended presentation) and the common protocol that the services use. The protocols used for federated social networking are generally portable and independent of a service's architecture, so they can be easily adopted across various services without requiring a refactoring of a service's design to accommodate the network, although platforms that do incorporate support for a federated network typically do so to improve the user experience and make the network's effects clearer for users. A few social networking service providers have used the term more broadly to describe provider-specific services that can be installed across different websites, typically through added widgets or plugins. Through the add-ons, the social network functionality is redirected to the users' social networking service. The Electronic Frontier Foundation (EFF), a U.S. legal defense organization and advocacy group for civil liberties on the Internet, endorses the distributed social network model as one "that can plausibly return control and choice to the hands of the Internet user" and allow persons living under restrictive regimes to "conduct activism on social networking sites while also having a choice of services and providers that may be better equipped to protect their security and anonymity".[1] The World Wide Web Consortium (W3C), the main international standards organization for the World Wide Web, launched a new Social Activity in July 2014 to develop standards for social web application interoperability.[2] In 2013, the Open Mobile Alliance (OMA) released a candidate version of the Social Network Web enabler (SNeW), which was approved in 2016. Its specification is based mainly on the OStatus and OpenSocial specifications and is designed to meet GDPR recommendations. It is an attempt by the telco industry to establish an operator-led federation of social network services.[3] Both kinds of networks are decentralized; however, distribution goes further than federation. A federated network has multiple centers, whereas a distributed network has no center at all.[4] While early federated social networking projects traditionally developed a protocol along with their software to fit the needs of the desired architecture, modern projects use a protocol and network that already exist, accelerating adoption of their platform by allowing existing users of other services to migrate seamlessly to the new project.
Software developed for such networks is almost always free and open-source software, with the protocols in use being open standards that do not charge royalty fees for actions taken on the network. The open standards used to provide a complete network include OAuth for authenticating users and managing their sessions, the ActivityPub protocol for federating content between services, WebFinger for discovering profiles and content on the network, as well as various standards for metadata such as Microformats, Open Graph and others. While this combination of technologies is most associated with the concept of a federated social network and is universal among these networks, the federation protocol has been a major source of controversy regarding the ideal architecture for transmitting content. While ActivityPub (and its predecessors OStatus and ActivityPump) has been used by most services when implementing support for a federated social network, alternatives have been created over the years that attempt to fix perceived issues with the current stack of standards. The most successful of these alternatives has been the AT Protocol, an open standard created by Bluesky that was built to solve various portability, discovery and content-format issues that have arisen with the adoption of ActivityPub among a variety of social networking services. A more experimental protocol that has built its own networking stack is Nostr, which has been designed to be simple for implementors to build, as it has no dependencies on any existing standards. The protocol has gained some traction among newer SNSes, particularly within the cryptocurrency community. While many of these standards have been in use for both early and modern projects, some older projects typically used standards such as OStatus, XRDS, Portable Contacts, the Wave Federation Protocol, XMPP, OpenSocial, microformats like XFN and hCard, and Atom web feeds. Some of these standards were referred to as the Open Stack, due to their status as open standards.[5]
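As a hedged sketch of two of the building blocks named above, the code below performs a WebFinger lookup for a fediverse-style handle and constructs a minimal ActivityPub-style "Create" activity; the handle, URLs and actor IDs are placeholders, and the requests package is assumed.

```python
# Sketch of two building blocks named above: WebFinger discovery of a remote
# account, and a minimal ActivityPub-style "Create" activity. The handle,
# URLs and actor IDs are placeholders. Requires the requests package.
import json
import requests

def webfinger(handle: str) -> dict:
    """Resolve user@host via the standard /.well-known/webfinger endpoint."""
    user, host = handle.lstrip("@").split("@")
    resp = requests.get(
        f"https://{host}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{host}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()   # contains 'links', including the actor's ActivityPub URL

# webfinger("alice@example.social")  # would query the (placeholder) remote server

# A minimal ActivityPub-style activity wrapping a note (sketch only).
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}
print(json.dumps(activity, indent=2))
```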
https://en.wikipedia.org/wiki/Distributed_social_network
Men and women use social media in different ways and with different frequencies. In general, several researchers have found that women tend to use so-called social network services (SNSs) more than men, and primarily to socialize. Many studies have found that women are more likely to use either specific SNSs, such as Facebook[2][3] or MySpace,[4][5][6] or SNSs in general.[7] In 2015, 73% of online men and 80% of online women used social networking sites. The gender gap is less apparent on LinkedIn: in 2015, about 26% of online men and 25% of online women used the business- and employment-oriented networking site.[8] Researchers who have examined the gender of users of multiple SNSs have found contradictory results. Hargittai's groundbreaking 2007 study examining race, gender, and other differences between undergraduate college student users of SNSs found that women were not only more likely to have used SNSs than men but also more likely to have used many different services, including Facebook, MySpace, and Friendster; these differences persisted across several models and analyses. Although she surveyed students at only one institution – the University of Illinois at Chicago – Hargittai selected that institution intentionally as "an ideal location for studies of how different kinds of people use online sites and services."[9] In contrast, data collected by the Pew Internet & American Life Project found that men were more likely to have multiple SNS profiles. Although the sample sizes of the two surveys are comparable – 1,650 Internet users in the Pew survey[4] compared with 1,060 in Hargittai's survey[9] – the data from the Pew survey are newer and arguably more representative of the entire adult United States population.[10] Pinterest, Facebook, and Instagram attract more women.[8] Picture-sharing sites overall are very popular among women: Pinterest alone attracts three times as many female users as male users, although men's use of Pinterest has increased from 5% in 2012. Facebook attracts about 77% of women online, and Instagram is also more likely to attract women. Men are more likely to participate in online forums like Reddit, Digg or Slashdot; one in five men claims to be part of an online forum.[8] In general, women seem to use SNSs more to explicitly foster social connections. A study conducted by the Pew Research Center found that women were more avid users of social media;[8] in November 2010, the gap between men and women was as high as 15%.[8] Female participants in a multi-stage study conducted in 2007 to discover the motivations of Facebook users scored higher on scales for social connection and the posting of photographs.[3] Studies have also been conducted on the differences between females and males with regard to blogging. The Pew Research Center found that younger females are more likely to blog than males of their own age, and even than older males.[11] Similarly, in a study of blogs maintained on MySpace, women were found to be more likely not only to write blogs but also to write about family, romantic relationships, friendships, and health in those blogs.[12] A study of Swedish SNS users found that women were more likely to have expressions of friendship, specifically in the areas of (a) publishing photos of their friends, (b) specifically naming their best friends, and (c) writing poems to and about their friends. Women were also more likely to have expressions related to family relationships and romantic relationships.
One of the key findings of this research is that those men who did have expressions of romantic relationships in their profile had expressions just as strong as the women's. However, the researcher speculated that this may be due in part to a desire to publicly express heterosexual behaviors and mannerisms rather than merely to express romantic feelings.[13] A large-scale study of gender differences in MySpace found that both men and women tended to have a majority of female Friends, and both men and women tended to have a majority of female "Top" Friends on the site.[14] A later study found that women authored disproportionately many (public) comments on MySpace,[15] but an investigation into the role of emotion in public MySpace comments found that women both give and receive stronger positive emotion.[16] It was hypothesised that women are simply more effective at using social networking sites because they are better able to harness positive emotion. A study focused on the influence of gender and personality on individuals' use of online social networking websites such as Facebook reported that men use social networking sites with the intention of forming new relationships, whereas women use them more for relationship maintenance.[17] In addition, women are more likely to use Facebook or MySpace to compare themselves to others and to search for information, while men are more likely to look at other people's profiles with the intention of finding friends.[18] Women were less successful at actually finding new friends, but more successful at "maintaining existing relationships, making new relationships, using for academic purposes and following specific agenda".[19] Similarly, men also self-reported this motivation, "while women reported using them more for relationship maintenance".[20] OCEAN personality traits are known to vary systematically between human males and females.[21] In one study, the same women were more extraverted and agreeable, as well as less neurotic, while on social media than offline.[21] Other studies have associated neuroticism with female use of social media.[22] Privacy has been the primary topic of many studies of SNS users, and many of these studies have found differences between male and female SNS users, although some studies have found results that contradict those of other studies. Some researchers have found that women are more protective of their personal information and more likely to have private profiles.[3][6][23] Other researchers have found that women are less likely to post some types of information.
Acquisti and Gross found that women in their sample were less likely to reveal their sexual orientation, personal address, or cell phone number.[2] This is similar to Pew Internet & American Life research on child users of SNSs, which found that boys and girls presented different views of privacy and behaviors, with girls being more concerned about, and more restrictive of, information such as city, town, last name, and cell phone number that could be used to locate them.[24] At least one group of researchers has found that women are less likely to share information that "identifies them directly – last name, cell phone number, and address or home phone number," linking that resistance to women's greater concerns about "cyberstalking", "cyberbullying", and security problems.[5] Despite these concerns about privacy, researchers have found that women are more likely to maintain up-to-date photos of themselves.[25][26] Further, Kolek and Saunders found in their sample of college student Facebook users that women were not only more likely to post a photograph of themselves in their profile but also more likely to have a publicly viewable Facebook account (a finding that contradicts many other studies), to post photos, and to post photo albums.[25] Women were more likely to have: (a) a publicly viewable Facebook account, (b) more photo albums, (c) more photos, (d) a photo of themselves as their profile picture, (e) positive references to alcohol, partying, or drugs, and (f) more positive references to or about the institution or institution-related activities. In general, women were more likely to disclose information about themselves in their Facebook profile, with the primary exception of sharing their telephone number.[25] Similarly, female respondents to Strano's study were more likely to keep their profile photo recent and to choose a photo that made them appear attractive, happy, and fun-loving. Citing several examples, Strano opined that there may also be a difference in how men and women Facebook users display and interpret profile photos depicting relationships.[26] Privacy has also been a concern for the Snapchat app, which allows users to send text, photo, or video messages that then disappear. One study has shown that security is not a major concern for the majority of users and that most do not use Snapchat to send sensitive content (although up to 25% may do so experimentally); the research found almost no statistically significant gender differences.[27] Past research investigating gender differences in cyber-bullying has found that boys commit more cyber verbal bullying, more cyber forgery, and more violence based on a hidden identity or on presenting themselves as another person.[28] A 2021 article found that mansplaining appears to be more prominent online than offline, reporting that "More than 50% of our respondents in the United States and 30% in the UK heard of the term. We find a discrepancy between the percentage of women to whom mansplaining happened (54%) and men who were accused of mansplaining (24%)."[29] The authors analyzed data and conducted experiments to determine whether mansplaining could silence female voices, with women staying quiet for fear of facing sexist remarks.[29] A 2021 article by Emily Van Duyn, Cynthia Peacock, and Natalie Jomini Stroud[30] suggests that women's voices are typically only heard in smaller matters.
Although men and women users of SNSs exhibit different behaviors and motivations, they share some similarities. For example, one study that examined the veracity of information shared on SNSs by college students found that men and women were equally likely to "provide accurate and complete information about their birthday, schedule of classes, partner's name, AIM, or political views."[2] In contradiction to several of the studies described above that found women are more likely to be SNS users, at least one well-regarded study has found that men and women are equally likely to be SNS users. Data gathered in December 2008 by the Pew Internet & American Life Project showed that the SNS users in their sample were equally divided among men and women.[4] As mentioned above, the data from the Pew survey are newer and arguably more representative of the entire adult United States population[10] than the data in much of the previously described research. Some studies have found that traditional gender roles are present in SNSs, with men conforming to traditional views of masculinity and women to traditional views of femininity.[31] Qualitative work with college student SNS users by Martínez Alemán and Wartman[31] and Manago et al.[32] has found similar results for both Facebook and MySpace users. Moreover, the work by Manago et al. discovered not only traditional gender roles and images but also the sexualisation of female users of MySpace.[32] Similarly, research into the impact of comments in a Facebook user's profile on that user's perceived attractiveness revealed a "sexual double standard", wherein negative statements resulted in male profile owners being judged more attractive and female profile owners less attractive.[33] Finally, at least one study has found that men and women SNS users both left textual clues about their gender.[6] Curiously, gay men were one of the earliest groups to join and use the early SNS Friendster.[34]
https://en.wikipedia.org/wiki/Gender_differences_in_social_network_service_use
Geosocial networking is a type of social networking in which geographic services and capabilities such as geocoding and geotagging are used to enable additional social dynamics.[1][2] User-submitted location data or geolocation techniques can allow social networks to connect and coordinate users with local people or events that match their interests. Geolocation on web-based social network services can be IP-based or use hotspot trilateration. For mobile social networks, texted location information or mobile phone tracking can enable location-based services to enrich social networking.[3][4] The evolution of geosocial networking can be traced back to the implementation of social application programming interfaces (APIs) by internet-based corporations in the early 2000s. eBay uses one of the oldest, having announced its social API at the end of 2000 and allowed free access to over 21,000 developers by late 2005.[5][6] Amazon's primary API was released in 2002 and allowed developers to pull consumer information such as product reviews into third-party applications.[7] Google, Inc. began testing an API in April 2002 and currently owns dozens that are used by thousands of applications.[5] The Facebook Developer's API, launched in 2006, is considered the first to be specific to a social network. Facebook later created an open stream API, allowing outside developers access to users' status updates.[8] By June 2010, Twitter had integrated an API into its applications and is considered the most open of all social networks. By 2008, expanded geolocation technologies including cell tower localization became available, and devices such as digital cameras and camera phones began to combine features such as Wi-Fi connectivity and GPS navigation into more sophisticated capabilities. Geosocial networking allows users to interact relative to their current locations. Web mapping services with geocoding data for places (streets, buildings, and parks) can be used with geotagged information (meetups, concert events, nightclubs or restaurant reviews) to match users with a place, event or local group to socialize in, or to enable a group of users to decide on a meeting activity. Popular geosocial applications like Yelp, Gowalla, Facebook Places and Foursquare allow users to share their locations as well as recommendations for locations or 'venues'. Newer applications follow other approaches and do not focus on places; instead, they allow users to enrich maps with their own points of interest and build a kind of travel book for themselves, while exploring overlays from other users as a collaborative extension.[9] In disaster scenarios, geosocial networking can allow users to coordinate around collaboratively filtered geotagged information on hazards and disaster aid activities to develop a collective situational awareness through an assembly of individual perspectives. This type of geosocial networking is known as collaborative mapping. Furthermore, geolocated messages could assist automated tools in detecting and tracking potential dangers to the general public, such as an emerging epidemic.[10][11] The technology has obvious implications for event planning and coordination. Geosocial networking also has political applications, as it can be used to organize, track, and communicate events and protests. For example, people can use mobile phones and Twitter to quickly organize a protest event before authorities can stop it, and people at the event can communicate with each other and the larger world using a mobile device connected to the Internet.
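To make the location-matching idea described above concrete, the sketch below shows, under simplifying assumptions, how a service might find users whose geotagged check-ins fall within a given radius of a point. The user names, coordinates, and the 5 km radius are hypothetical, and real services typically use spatial indexes rather than a linear scan.

```python
# A minimal sketch of geosocial proximity matching: compute the great-circle
# (haversine) distance between geotagged positions and keep users within a radius.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (latitude, longitude) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Hypothetical check-ins: (user, latitude, longitude).
checkins = [
    ("alice", 51.5074, -0.1278),   # central London
    ("bob",   51.5155, -0.0922),   # a few kilometres away
    ("carol", 48.8566,  2.3522),   # Paris
]

def nearby(me_lat, me_lon, radius_km=5.0):
    """Return users whose last check-in lies within radius_km of the given point."""
    return [user for user, lat, lon in checkins
            if haversine_km(me_lat, me_lon, lat, lon) <= radius_km]

print(nearby(51.5074, -0.1278))   # ['alice', 'bob']
```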
Geosocial networking has the combined potential of bringing a social network or social graph to a location, and of having people at a location form into a social network or social graph. Social networks can thus be expanded by real-world contact and the recruiting of new members. All geosocial networks revolve around specific features that go beyond geolocation itself. A mobile ad hoc network is an opt-in group of mobile devices in the same immediate area linked to a master device. These groups are then able to communicate freely with each other. This sort of social networking is used mostly during events so that the host (operating the master device) can provide information, suggestions or coupons specific to the event.[12] An example would be Apple's iGroups.[13] A less common form of geosocial networking, used mostly by fast food restaurants, has customers check in their orders rather than themselves. Users choose the ingredients of their order, name it, and are awarded points for every order placed based on their suggestion. Customers are given discounts and coupons for their involvement, and the restaurant receives more customers.[14] Freelancing networks are created with the specific purpose of allowing users to find or post temporary employment opportunities. Users establish and operate a professional profile and are able to connect with past and prospective employers, employees, colleagues, classmates and friends.[12] With location-planning, or social-mapping, users are able to search and browse nearby stores, restaurants, and so on. Venues are assigned profiles, and users can rate them, share their opinions and post pictures. These networks use the location of mobile phones to connect users and may also provide directions to and from the venue by linking to a GPS service.[12] Some networks use moodsourcing as a recreational way to make users' statuses seem more like personal interaction: in addition to checking in, users convey their current mood with a corresponding emoticon.[12] Paperless ticketing is a feature that uses smartphones as digital tickets for events and travel.[12] Besides being more convenient than the normal ticketing process, paperless ticketing eliminates wasteful paper use. Examples include the travel ticketing app patent, iTravel, that Apple purchased in 2010,[15] and Ticketmaster's smartphone application. Social shopping service users create personal profiles to collect information on different items they find. Instead of simply updating their status on other social networks with a description of or link to their purchases, users download software that allows them to grab images of those products to post on their own shopping lists. Some social shopping sites form affiliate relationships with merchants, who often pay percentage commissions on sales that result from their products being featured on other sites.[16] Some sites have gone so far as to allow users to add their credit card number so that their purchases are automatically checked in. Some fashion corporations have invested in sensors placed in their stores and dressing rooms, so that users of social shopping applications have to be physically in the store, or trying something on, in order to gather points. This increases participation and encourages customers to try on other clothes. Most criminal investigations and news events happen in a geographical location. Geosocial investigation tools provide the ability to source social media from multiple networks (such as Twitter, Flickr, and YouTube) without the use of hashtags or keyword searches.
Some vendors provide subscription-based services for sourcing real-time and historical social media around events. Some sites, like Facebook, have been scrutinized for allowing users to "tag" their friends via email while checking in. An "opt-in" network is a permission-based network that requires a user to join or sign up; the host is then given permission to access the user's information and to contact him or her. An "opt-out" network includes the user in a group by default, and users must remove themselves from the network if they do not wish to be included.
https://en.wikipedia.org/wiki/Geosocial_networking
TheInternet(orinternet)[a]is theglobal systemof interconnectedcomputer networksthat uses theInternet protocol suite(TCP/IP)[b]to communicate between networks and devices. It is anetwork of networksthat consists ofprivate, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic,wireless, andoptical networkingtechnologies. The Internet carries a vast range of information resources and services, such as the interlinkedhypertextdocuments andapplicationsof theWorld Wide Web(WWW),electronic mail,internet telephony, andfile sharing. The origins of the Internet date back to research that enabled thetime-sharingof computer resources, the development ofpacket switchingin the 1960s and the design of computer networks fordata communication.[2][3]The set of rules (communication protocols) to enableinternetworkingon the Internet arose from research and development commissioned in the 1970s by theDefense Advanced Research Projects Agency(DARPA) of theUnited States Department of Defensein collaboration with universities and researchers across theUnited Statesand in theUnited KingdomandFrance.[4][5][6]TheARPANETinitially served as a backbone for the interconnection of regional academic and military networks in the United States to enableresource sharing. The funding of theNational Science Foundation Networkas a new backbone in the 1980s, as well as private funding for other commercial extensions, encouraged worldwide participation in the development of new networking technologies and the merger of many networks using DARPA'sInternet protocol suite.[7]The linking of commercial networks and enterprises by the early 1990s, as well as the advent of theWorld Wide Web,[8]marked the beginning of the transition to the modern Internet,[9]and generated sustained exponential growth as generations of institutional,personal, andmobilecomputerswere connected to the internetwork. Although the Internet was widely used byacademiain the 1980s, the subsequentcommercialization of the Internetin the 1990s and beyond incorporated its services and technologies into virtually every aspect of modern life. Most traditional communication media, includingtelephone,radio,television, paper mail, and newspapers, are reshaped, redefined, or even bypassed by the Internet, giving birth to new services such asemail,Internet telephone,Internet radio,Internet television,online music, digital newspapers, andaudioandvideo streamingwebsites. Newspapers, books, and other print publishing have adapted towebsitetechnology or have been reshaped intoblogging,web feeds, and onlinenews aggregators. The Internet has enabled and accelerated new forms of personal interaction throughinstant messaging,Internet forums, andsocial networking services.Online shoppinghas grown exponentially for major retailers,small businesses, andentrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or evensell goods and services entirely online.Business-to-businessandfinancial serviceson the Internet affectsupply chainsacross entire industries. 
The Internet has no single centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies.[10]The overarching definitions of the two principalname spaceson the Internet, theInternet Protocol address(IP address) space and theDomain Name System(DNS), are directed by a maintainer organization, theInternet Corporation for Assigned Names and Numbers(ICANN). The technical underpinning and standardization of the core protocols is an activity of theInternet Engineering Task Force(IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.[11]In November 2006, the Internet was included onUSA Today's list of theNew Seven Wonders.[12] The wordinternettedwas used as early as 1849, meaninginterconnectedorinterwoven.[13]The wordInternetwas used in 1945 by the United States War Department in a radio operator's manual,[14]and in 1974 as the shorthand form of Internetwork.[15]Today, the termInternetmost commonly refers to the global system of interconnectedcomputer networks, though it may also refer to any group of smaller networks.[16] When it came into common use, most publications treated the wordInternetas a capitalizedproper noun; this has become less common.[16]This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar.[16][17]The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including theAP Stylebooksince 2016, recommend the lowercase form in every case.[16][17]In 2016, theOxford English Dictionaryfound that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases.[18] The termsInternetandWorld Wide Webare often used interchangeably; it is common to speak of "going on the Internet" when using aweb browserto viewweb pages. However, theWorld Wide Web, orthe Web, is only one of a large number of Internet services,[19]a collection of documents (web pages) and otherweb resourceslinked byhyperlinksandURLs.[20] In the 1960s,computer scientistsbegan developing systems fortime-sharingof computer resources.[22][23]J. C. R. Lickliderproposed the idea of a universal network while working atBolt Beranek & Newmanand, later, leading theInformation Processing Techniques Office(IPTO) at theAdvanced Research Projects Agency(ARPA) of the United StatesDepartment of Defense(DoD). Research intopacket switching, one of the fundamental Internet technologies, started in the work ofPaul BaranatRANDin the early 1960s and, independently,Donald Daviesat the United Kingdom'sNational Physical Laboratory(NPL) in 1965.[2][24]After theSymposium on Operating Systems Principlesin 1967, packet switching from the proposedNPL networkand routing concepts proposed by Baran were incorporated into the design of theARPANET, an experimentalresource sharingnetwork proposed by ARPA.[25][26][27] ARPANET development began with two network nodes which were interconnected between theUniversity of California, Los Angeles(UCLA) and theStanford Research Institute(now SRI International) on 29 October 1969.[28]The third site was at theUniversity of California, Santa Barbara, followed by theUniversity of Utah. 
In a sign of future growth, 15 sites were connected to the young ARPANET by the end of 1971.[29][30]These early years were documented in the 1972 filmComputer Networks: The Heralds of Resource Sharing.[31]Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States.[32]Other user networks and research networks, such as theMerit NetworkandCYCLADES, were developed in the late 1960s and early 1970s.[33] Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSARandNDRE),[34]and toPeter Kirstein'sresearch group atUniversity College London(UCL), which provided a gateway toBritish academic networks, forming the firstinternetworkforresource sharing.[35]ARPA projects, theInternational Network Working Groupand commercial initiatives led to the development of variousprotocolsand standards by which multiple separate networks could become a single network or "a network of networks".[36]In 1974,Vint CerfatStanford UniversityandBob Kahnat DARPA published a proposal for "A Protocol for Packet Network Intercommunication".[37]Cerf and his students used the terminternetas a shorthand forinternetworkinRFC675,[15]and laterRFCsrepeated this use. Cerf and Kahn creditLouis Pouzinand others with important influences on the resultingTCP/IPdesign.[37][38][39]NationalPTTsand commercial providers developed theX.25standard and deployed it onpublic data networks.[40] Access to the ARPANET was expanded in 1981 when theNational Science Foundation(NSF) funded theComputer Science Network(CSNET). In 1982, theInternet Protocol Suite(TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when theNational Science Foundation Network(NSFNet) provided access tosupercomputersites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.[41]The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89.[42][43][44][45]Although other network protocols such asUUCPand PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. CommercialInternet service providers(ISPs) emerged in 1989 in the United States and Australia.[46]The ARPANET was decommissioned in 1990.[47] Steady advances insemiconductortechnology andoptical networkingcreated new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989,MCI MailandCompuserveestablished connections to the Internet, delivering email and public access products to the half million users of the Internet.[48]Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed betweenCornell UniversityandCERN, allowing much more robust communications than were capable with satellites.[49] Later in 1990,Tim Berners-Leebegan writingWorldWideWeb, the firstweb browser, after two years of lobbying CERN management. 
By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: theHyperText Transfer Protocol(HTTP) 0.9,[50]theHyperText Markup Language(HTML), the first Web browser (which was also anHTML editorand could accessUsenetnewsgroups andFTPfiles), the first HTTPserver software(later known asCERN httpd), the firstweb server,[51]and the first Web pages that described the project itself. In 1991 theCommercial Internet eXchangewas founded, allowing PSInet to communicate with the other commercial networksCERFnetand Alternet.Stanford Federal Credit Unionwas the firstfinancial institutionto offer online Internet banking services to all of its members in October 1994.[52]In 1996,OP Financial Group, also acooperative bank, became the second online bank in the world and the first in Europe.[53]By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic.[54] As technology advanced and commercial opportunities fueled reciprocal growth, the volume ofInternet trafficstarted experiencing similar characteristics as that of the scaling ofMOS transistors, exemplified byMoore's law, doubling every 18 months. This growth, formalized asEdholm's law, was catalyzed by advances inMOS technology,laserlight wave systems, andnoiseperformance.[57] Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email,instant messaging, telephony (Voice over Internet Protocolor VoIP),two-way interactive video calls, and the World Wide Web[58]with itsdiscussion forums, blogs,social networking services, andonline shoppingsites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.[59]During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%.[60]This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[61]As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% ofworld population).[62]It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-waytelecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.[63] The Internet is aglobal networkthat comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4andIPv6) is an activity of theInternet Engineering Task Force(IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. To maintain interoperability, the principalname spacesof the Internet are administered by theInternet Corporation for Assigned Names and Numbers(ICANN). 
ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. ICANN coordinates the assignment of unique identifiers for use on the Internet, includingdomain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet.[64] Regional Internet registries(RIRs) were established for five regions of the world. TheAfrican Network Information Center(AfriNIC) forAfrica, theAmerican Registry for Internet Numbers(ARIN) forNorth America, theAsia–Pacific Network Information Centre(APNIC) forAsiaand thePacific region, theLatin American and Caribbean Internet Addresses Registry(LACNIC) forLatin Americaand theCaribbeanregion, and theRéseaux IP Européens – Network Coordination Centre(RIPE NCC) forEurope, theMiddle East, andCentral Asiawere delegated to assign IP address blocks and other Internet parameters to local registries, such asInternet service providers, from a designated pool of addresses set aside for each region. TheNational Telecommunications and Information Administration, an agency of theUnited States Department of Commerce, had final approval over changes to theDNS root zoneuntil the IANA stewardship transition on 1 October 2016.[65][66][67][68]TheInternet Society(ISOC) was founded in 1992 with a mission to"assure the open development, evolution and use of the Internet for the benefit of all people throughout the world".[69]Its members include individuals (anyone may join) as well as corporations,organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the IETF,Internet Architecture Board(IAB),Internet Engineering Steering Group(IESG),Internet Research Task Force(IRTF), andInternet Research Steering Group(IRSG). On 16 November 2005, the United Nations-sponsoredWorld Summit on the Information SocietyinTunisestablished theInternet Governance Forum(IGF) to discuss Internet-related issues. The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists ofrouters, media (such as cabling and radio links), repeaters, modems etc. However, as an example ofinternetworking, many of the network nodes are not necessarily Internet equipment per se. The internet packets are carried by other full-fledged networking protocols with the Internet acting as a homogeneous networking standard, running acrossheterogeneoushardware, with the packets guided to their destinations by IP routers. Internet service providers(ISPs) establish the worldwide connectivity between individual networks at various levels of scope. End-users who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy. 
At the top of the routing hierarchy are thetier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speedfiber-optic cablesand governed bypeeringagreements.Tier 2and lower-level networks buyInternet transitfrom other providers to reach at least some parties on the global Internet, though they may also engage in peering. An ISP may use a single upstream provider for connectivity, or implementmultihomingto achieve redundancy and load balancing.Internet exchange pointsare major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such asGEANT,GLORIAD,Internet2, and the UK'snational research and education network,JANET. Common methods ofInternet accessby users include dial-up with a computermodemvia telephone circuits,broadbandovercoaxial cable,fiber opticsor copper wires,Wi-Fi,satellite, andcellular telephonetechnology (e.g.3G,4G). The Internet may often be accessed from computers in libraries andInternet cafés.Internet access pointsexist in many public places such as airport halls and coffee shops. Various terms are used, such aspublic Internet kiosk,public access terminal, andWebpayphone. Many hotels also have public terminals that are usually fee-based. These terminals are widely accessed for various usages, such as ticket booking, bank deposit, oronline payment. Wi-Fi provides wireless access to the Internet via local computer networks.Hotspotsproviding such access include Wi-Fi cafés, where users need to bring their own wireless devices, such as a laptop orPDA. These services may be free to all, free to customers only, or fee-based. Grassrootsefforts have led towireless community networks. CommercialWi-Fiservices that cover large areas are available in many cities, such asNew York,London,Vienna,Toronto,San Francisco,Philadelphia,ChicagoandPittsburgh, where the Internet can then be accessed from places such as a park bench.[70]Experiments have also been conducted with proprietary mobile wireless networks likeRicochet, various high-speed data services over cellular networks, and fixed wireless services. Modernsmartphonescan also access the Internet through the cellular carrier network. For Web browsing, these devices provide applications such asGoogle Chrome,Safari, andFirefoxand a wide variety of other Internet software may be installed fromapp stores. Internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016.[71] TheInternational Telecommunication Union(ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012.[72]Mobile Internetconnectivity has played an important role in expanding access in recent years, especially inAsia and the Pacificand in Africa.[73]The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. 
The number of subscriptions was predicted to rise to 5.7 billion users in 2020.[74]As of 2018[update], 80% of the world's population were covered by a4Gnetwork.[74]The limits that users face on accessing information via mobile applications coincide with a broader process offragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most.[73] Zero-rating, the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost, has offered opportunities to surmount economic hurdles but has also been accused by its critics as creating a two-tiered Internet. To address the issues with zero-rating, an alternative model has emerged in the concept of 'equal rating' and is being tested in experiments byMozillaandOrangein Africa. Equal rating prevents prioritization of one type of content and zero-rates all content up to a specified data cap. In a study published byChatham House, 15 out of 19 countries researched in Latin America had some kind of hybrid or zero-rated product offered. Some countries in the region had a handful of plans to choose from (across all mobile network operators) while others, such asColombia, offered as many as 30 pre-paid and 34 post-paid plans.[75] A study of eight countries in theGlobal Southfound that zero-rated data plans exist in every country, although there is a great range in the frequency with which they are offered and actually used in each.[76]The study looked at the top three to five carriers by market share in Bangladesh, Colombia, Ghana, India, Kenya, Nigeria, Peru and Philippines. Across the 181 plans examined, 13 percent were offering zero-rated services. Another study, coveringGhana,Kenya,NigeriaandSouth Africa, foundFacebook's Free Basics andWikipedia Zeroto be the most commonly zero-rated content.[77] The Internet standards describe a framework known as theInternet protocol suite(also calledTCP/IP, based on the first two components.) This is a suite of protocols that are ordered into a set of four conceptionallayersby the scope of their operation, originally documented inRFC1122andRFC1123. At the top is theapplication layer, where communication is described in terms of the objects or data structures most appropriate for each application. For example, a web browser operates in aclient–serverapplication model and exchanges information with theHyperText Transfer Protocol(HTTP) and an application-germane data structure, such as theHyperText Markup Language(HTML). Below this top layer, thetransport layerconnects applications on different hosts with a logical channel through the network. It provides this service with a variety of possible characteristics, such as ordered, reliable delivery (TCP), and an unreliable datagram service (UDP). Underlying these layers are the networking technologies that interconnect networks at their borders and exchange traffic across them. TheInternet layerimplements theInternet Protocol(IP) which enables computers to identify and locate each other byIP addressand route their traffic via intermediate (transit) networks.[78]The Internet Protocol layer code is independent of the type of network that it is physically running over. At the bottom of the architecture is thelink layer, which connects nodes on the same physical link, and contains protocols that do not require routers for traversal to other links. 
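A minimal sketch of how the layers described above stack in practice is shown below, using only Python's standard library: the application-layer message is a bare HTTP/1.1 request, the transport layer is a TCP socket, and the Internet and link layers are handled by the operating system and network hardware. The target host example.com is purely illustrative.

```python
# Application layer (HTTP) carried over the transport layer (TCP); IP routing
# and the physical link are provided by the OS network stack underneath.
import socket

HOST = "example.com"   # illustrative target host

# Transport layer: open a reliable, ordered TCP connection to port 80.
with socket.create_connection((HOST, 80), timeout=10) as sock:
    # Application layer: a bare HTTP/1.1 GET request for the root document.
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Read the response until the server closes the connection.
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

response = b"".join(chunks)
# The status line and headers are defined by HTTP, the application-layer protocol.
print(response.split(b"\r\n", 1)[0].decode("ascii", errors="replace"))
```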
The protocol suite does not explicitly specify hardware methods to transfer bits, or protocols to manage such hardware, but assumes that appropriate technology is available. Examples of that technology includeWi-Fi,Ethernet, andDSL. The most prominent component of the Internet model is the Internet Protocol (IP). IP enables internetworking and, in essence, establishes the Internet itself. Two versions of the Internet Protocol exist,IPv4andIPv6. For locating individual computers on the network, the Internet providesIP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically viaDHCP, or are configured. However, the network also supports other addressing systems. Users generally enterdomain names(e.g. "en.wikipedia.org") instead of IP addresses because they are easier to remember; they are converted by theDomain Name System(DNS) into IP addresses which are more efficient for routing purposes. Internet Protocol version 4(IPv4) defines an IP address as a32-bitnumber.[78]IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (109) hosts. However, the explosive growth of the Internet has led toIPv4 address exhaustion, which entered its final stage in 2011,[79]when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and thedepletion of available IPv4 addresses, a new version of IPIPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998.[80][81][82]IPv6 deploymenthas been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion.[83] IPv6 is not directly interoperable by design with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities must exist for internetworking or nodes must have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has been lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts, e.g.,peering agreements, and by technical specifications or protocols that describe the exchange of data over the network. Indeed, the Internet is defined by its interconnections and routing policies. Asubnetworkorsubnetis a logical subdivision of anIP network.[84]: 1, 16The practice of dividing a network into two or more networks is calledsubnetting. Computers that belong to a subnet are addressed with an identicalmost-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, thenetwork numberorrouting prefixand therest fieldorhost identifier. Therest fieldis an identifier for a specifichostor network interface. 
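Before the prefix notation is described further, the short sketch below illustrates the name-to-address conversion and the two address families discussed above, using Python's standard socket module. The host name queried is the same illustrative example used in the text, and the exact addresses returned will vary with the resolver and the network.

```python
# A brief sketch of DNS resolution: the stub resolver asks the Domain Name
# System for the addresses behind a human-readable name and may receive both
# IPv4 (32-bit) and IPv6 (128-bit) answers.
import socket

HOSTNAME = "en.wikipedia.org"   # illustrative host name from the text above

seen = set()
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(HOSTNAME, None):
    address = sockaddr[0]
    if address in seen:
        continue
    seen.add(address)
    version = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(f"{version}: {address}")
```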
Therouting prefixmay be expressed inClassless Inter-Domain Routing(CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example,198.51.100.0/24is the prefix of theInternet Protocol version 4network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range198.51.100.0to198.51.100.255belong to this network. The IPv6 address specification2001:db8::/32is a large address block with 296addresses, having a 32-bit routing prefix. For IPv4, a network may also be characterized by itssubnet maskornetmask, which is thebitmaskthat when applied by abitwise ANDoperation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed indot-decimal notationlike an address. For example,255.255.255.0is the subnet mask for the prefix198.51.100.0/24. Traffic is exchanged between subnetworks through routers when the routing prefixes of the source address and the destination address differ. A router serves as a logical or physical boundary between the subnets. The benefits of subnetting an existing network vary with each deployment scenario. In the address allocation architecture of the Internet using CIDR and in large organizations, it is necessary to allocate address space efficiently. Subnetting may also enhance routing efficiency or have advantages in network management when subnetworks are administratively controlled by different entities in a larger organization. Subnets may be arranged logically in a hierarchical architecture, partitioning an organization's network address space into a tree-like routing structure. Computers and routers userouting tablesin their operating system todirect IP packetsto reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically byrouting protocols. End-nodes typically use adefault routethat points toward an ISP providing transit, while ISP routers use theBorder Gateway Protocolto establish the most efficient routing across the complex connections of the global Internet. Thedefault gatewayis thenodethat serves as the forwarding host (router) to other networks when no other route specification matches the destinationIP addressof a packet.[85][86] While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by theInternet Engineering Task Force(IETF).[87]The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published asRequest for Comments(RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute theInternet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies. 
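The prefix and netmask arithmetic described above can be checked with a few lines of Python's standard ipaddress module, using the same example networks as the text. This is only an illustrative sketch, not part of any cited source.

```python
# Prefix, netmask, and membership arithmetic for the example network 198.51.100.0/24.
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)          # 255.255.255.0 -- the subnet mask for a /24 prefix
print(net.num_addresses)    # 256 addresses: 198.51.100.0 .. 198.51.100.255

# Membership is decided by a bitwise AND of the address with the netmask,
# which the module performs internally.
print(ipaddress.ip_address("198.51.100.37") in net)   # True
print(ipaddress.ip_address("198.51.101.37") in net)   # False

# The same notation applies to IPv6: a /32 block such as 2001:db8::/32 leaves
# 128 - 32 = 96 bits for host addressing, i.e. 2**96 addresses.
v6 = ipaddress.ip_network("2001:db8::/32")
print(v6.prefixlen, v6.num_addresses)
```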
The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. Most servers that provide these services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; it is one of many languages or protocols that can be used for communication on the Internet.[88] World Wide Web browser software, such as Microsoft's Internet Explorer/Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo!, Bing and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale. The Web has enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page or a blog, or building a website, involves little initial cost, and many cost-free services are available. However, publishing and maintaining large, professional websites with attractive, diverse and up-to-date information is still a difficult and expensive proposition. Many individuals and some companies and groups use web logs or blogs, largely as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information and be attracted to the corporation as a result. Advertising on popular web pages can be lucrative, and e-commerce, which is the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television.[89]: 19 Many common online advertising practices are controversial and increasingly subject to regulation.
When the Web developed in the 1990s, a typical web page was stored in completed form on a web server, formatted inHTML, ready for transmission to a web browser in response to a request. Over time, the process of creating and serving web pages has become dynamic, creating a flexible design, layout, and content. Websites are often created usingcontent managementsoftware with, initially, very little content. Contributors to these systems, who may be paid staff, members of an organization or the public, fill underlying databases with content using editing pages designed for that purpose while casual visitors view and read this content in HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors. Emailis an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet.[90][91]Pictures, documents, and other files are sent asemail attachments. Email messages can becc-edto multipleemail addresses. Internet telephonyis a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name tovoice over Internet Protocol(VoIP). The idea began in the early 1990s withwalkie-talkie-like voice applications for personal computers. VoIP systems now dominate many markets and are as easy to use and as convenient as a traditional telephone. The benefit has been substantial cost savings over traditional telephone calls, especially over long distances.Cable,ADSL, andmobile datanetworks provideInternet accessin customer premises[92]and inexpensive VoIP network adapters provide the connection for traditional analog telephone sets. The voice quality of VoIP often exceeds that of traditional calls. Remaining problems for VoIP include the situation that emergency services may not be universally available and that devices rely on a local power supply, while older traditional phones are powered from the local loop, and typically operate during a power failure. File sharingis an example of transferring large amounts of data across the Internet. Acomputer filecan be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website orFile Transfer Protocol(FTP) server for easy download by others. It can be put into a "shared location" or onto afile serverfor instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers orpeer-to-peernetworks. In any of these cases, access to the file may be controlled by userauthentication, the transit of the file over the Internet may be obscured byencryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked bydigital signaturesor byMD5or other message digests. These simple features of the Internet, over a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. 
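As a concrete illustration of the message-digest check mentioned above, the sketch below recomputes a digest of a downloaded file and compares it with a published value. The file name and expected digest are hypothetical placeholders, and SHA-256 is shown alongside the text's MD5 example because MD5 is no longer considered collision-resistant.

```python
# A minimal sketch of verifying a downloaded file against a published digest.
import hashlib

def file_digest(path: str, algorithm: str = "sha256", chunk_size: int = 65536) -> str:
    """Return the hex digest of a file, read in chunks to bound memory use.

    `algorithm` is any name hashlib accepts, e.g. "sha256" or "md5".
    """
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file name and published digest, for illustration only.
expected = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
if file_digest("download.iso") == expected:
    print("digest matches: file arrived intact")
else:
    print("digest mismatch: file corrupted or tampered with")
```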
This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products. Streaming mediais the real-time delivery of digital media for immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access online media in much the same way as was previously possible only with a television or radio receiver. The range of available types of content is much wider, from specialized technicalwebcaststo on-demand popular multimedia services.Podcastingis a variation on this theme, where—usually audio—material is downloaded and played back on a computer or shifted to aportable media playerto be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide. Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p.[93] Webcamsare a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture either is usually small or updates slowly. Internet users can watch animals around an African waterhole, ships in thePanama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Videochat roomsandvideo conferencingare also popular with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users.[94]It uses an HTML5 based web player by default to stream and show video files.[95]Registered users may upload an unlimited amount of video and build their own personal profile.YouTubeclaims that its users watch hundreds of millions, and upload hundreds of thousands of videos daily. The Internet has enabled new forms of social interaction, activities, and social associations. This phenomenon has given rise to the scholarly study of thesociology of the Internet. The early Internet left an impact on somewriterswho usedsymbolismto write about it, such as describing the Internet as a "means to connect individuals in a vast invisible net over all theearth."[96] Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion.[100]By 2010, 22% of the world's population had access to computers with 1 billionGooglesearches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily onYouTube.[101]In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas.[102]However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. 
China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users.[103]China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022 China had a 70% penetration rate compared to India's 60% and the United States's 90%.[104]In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and theCaribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania.[105]In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highestInternet penetration by the number of users, with 93% or more of the population with access.[106]As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population.[107] The prevalent language for communication via the Internet has always been English. This may be a result of the origin of the Internet, as well as the language's role as alingua francaand as aworld language. Early computer systems were limited to the characters in theAmerican Standard Code for Information Interchange(ASCII), a subset of theLatin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%).[108]The Internet's technologies have developed enough in recent years, especially in the use ofUnicode, that good facilities are available for development and communication in the world's widely used languages. However, some glitches such asmojibake(incorrect display of some languages' characters) still remain. In a US study in 2005, the percentage of men using the Internet was very slightly ahead of the percentage of women, although this difference reversed in those under 30. Men logged on more often, spent more time online, and were more likely to be broadband users, whereas women tended to make more use of opportunities to communicate (such as email). Men were more likely to use the Internet to pay bills, participate in auctions, and for recreation such as downloading music and videos. Men and women were equally likely to use the Internet for shopping and banking.[109]In 2008, women significantly outnumbered men on most social networking services, such as Facebook and Myspace, although the ratios varied with age.[110]Women watched more streaming content, whereas men downloaded more.[111]Men were more likely to blog. Among those who blog, men were more likely to have a professional blog, whereas women were more likely to have a personal blog.[112] Several neologisms exist that refer to Internet users:Netizen(as in "citizen of the net")[113]refers to thoseactively involvedin improvingonline communities, the Internet in general or surrounding political affairs and rights such asfree speech,[114][115]Internautrefers to operators or technically highly capable users of the Internet,[116][117]digital citizenrefers to a person using the Internet in order to engage in society, politics, and government participation.[118] The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including throughmobile Internet devices. 
Mobile phones,datacards,handheld game consolesandcellular routersallow users to connect to the Internetwirelessly. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Service providers may restrict the services offered and mobile data charges may be significantly higher than other access methods. Educational material at all levels from pre-school to post-doctoral is available from websites. Examples range fromCBeebies, through school and high-school revision guides andvirtual universities, to access to top-end scholarly literature through the likes ofGoogle Scholar. Fordistance education, help withhomeworkand other assignments, self-guided learning, whiling away spare time or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of bothformalandinformal education. Further, the Internet allows researchers (especially those from the social and behavioral sciences) to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results.[122] The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have madecollaborativework dramatically easier, with the help ofcollaborative software. Not only can a group cheaply communicate and share ideas but the wide reach of the Internet allows such groups more easily to form. An example of this is thefree software movement, which has produced, among other things,Linux,Mozilla Firefox, andOpenOffice.org(later forked intoLibreOffice). Internet chat, whether using anIRCchat room, aninstant messagingsystem, or a social networking service, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members. Content managementsystems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access andcomputer literacyspread. The Internet allows computer users to remotely access other computers and information stores easily from any access point. Access may be withcomputer security; i.e., authentication and encryption technologies, depending on the requirements. This is encouraging new ways ofremote work, collaboration and information sharing in many industries. An accountant sitting at home canauditthe books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information emailed to them from offices all over the world. 
Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure virtual private network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. It has been referred to among system administrators as the Virtual Private Nightmare,[123] because it extends the secure perimeter of a corporate network into remote locations and its employees' homes.

By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population.[124]: 111

Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations and to pursue their personal interests. People use chat, messaging and email to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking services such as Facebook have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, pursue common interests, and connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs. Social networking services are also widely used by businesses and other organizations to promote their brands, to market to their customers and to encourage posts to "go viral". "Black hat" social media techniques are also employed by some organizations, such as spam accounts and astroturfing.

A risk for both individuals and organizations writing posts (especially public posts) on social networking services is that especially foolish or controversial posts occasionally lead to an unexpected and possibly large-scale backlash on social media from other Internet users. This is also a risk in relation to controversial offline behavior, if it is widely made known. The nature of this backlash can range widely from counter-arguments and public mockery, through insults and hate speech, to, in extreme cases, rape and death threats. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment in response to posts they have made on social media, and Twitter in particular has been criticized in the past for not doing enough to aid victims of online abuse.[125]

For organizations, such a backlash can cause overall brand damage, especially if reported by the media. However, this is not always the case, as any brand damage in the eyes of people with an opposing opinion to that presented by the organization could sometimes be outweighed by strengthening the brand in the eyes of others. Furthermore, if an organization or individual gives in to demands that others perceive as wrong-headed, that can then provoke a counter-backlash.
Some websites, such asReddit, have rules forbidding the posting ofpersonal informationof individuals (also known asdoxxing), due to concerns about such postings leading to mobs of large numbers of Internet users directing harassment at the specific individuals thereby identified. In particular, the Reddit rule forbidding the posting of personal information is widely understood to imply that all identifying photos and names must becensoredin Facebookscreenshotsposted to Reddit. However, the interpretation of this rule in relation to public Twitter posts is less clear, and in any case, like-minded people online have many other ways they can use to direct each other's attention to public social media posts they disagree with. Children also face dangers online such ascyberbullyingandapproaches by sexual predators, who sometimes pose as children themselves. Children may also encounter material that they may find upsetting, or material that their parents consider to be not age-appropriate. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enableInternet filteringor supervise their children's online activities in an attempt to protect their children from inappropriate material on the Internet. The most popular social networking services, such as Facebook and Twitter, commonly forbid users under the age of 13. However, these policies are typically trivial to circumvent by registering an account with a false birth date, and a significant number of children aged under 13 join such sites anyway. Social networking services for younger children, which claim to provide better levels of protection for children, also exist.[126] The Internet has been a major outlet for leisure activity since its inception, with entertainingsocial experimentssuch asMUDsandMOOsbeing conducted on university servers, and humor-relatedUsenetgroups receiving much traffic.[127]ManyInternet forumshave sections devoted to games and funny videos.[127]TheInternet pornographyandonline gamblingindustries have taken advantage of the World Wide Web. Although many governments have attempted to restrict both industries' use of the Internet, in general, this has failed to stop their widespread popularity.[128] Another area of leisure activity on the Internet ismultiplayer gaming.[129]This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range fromMMORPGtofirst-person shooters, fromrole-playing video gamestoonline gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such asGameSpyandMPlayer.[130]Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others. Internet usage has been correlated to users' loneliness.[131]Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread. 
A 2017 book claimed that the Internet consolidates most aspects of human endeavor into singular arenas of which all of humanity are potential members and competitors, with fundamentally negative impacts on mental health as a result. While successes in each field of activity are pervasively visible and trumpeted, they are reserved for an extremely thin sliver of the world's most exceptional, leaving everyone else behind. Whereas before the Internet expectations of success in any field were supported by reasonable probabilities of achievement at the village, suburb, city or even state level, the same expectations in the Internet world are virtually certain to bring disappointment today: there is always someone else, somewhere on the planet, who can do better and take the now one-and-only top spot.[132]

Cybersectarianism is a new organizational form that involves "highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards."[133] In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.

Cyberslacking can become a drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services.[134] Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity.[135]

Electronic business (e-business) encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and business-to-consumer transactions are combined, equates to $16 trillion for 2013.
A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales.[136]

While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.[137] Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality.[138][139][140]

Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has focused on the economic effects of consolidation from Internet businesses. Keen cites a 2013 Institute for Local Self-Reliance report saying brick-and-mortar retailers employ 47 people for every $10 million in sales while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people.[141]

Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, most conveniently the worker's home. It can be efficient and useful for companies as it allows workers to communicate over long distances, saving significant amounts of travel time and cost. More workers have adequate bandwidth at home to use these tools to link their home to their corporate intranet and internal communication networks.

Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries.[142] In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work.[143] The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park.[144] The English Wikipedia has the largest user base among wikis on the World Wide Web[145] and ranks in the top 10 among all sites in terms of traffic.[146]

The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, having given rise to Internet activism.[147][148] The New York Times suggested that social media websites, such as Facebook and Twitter, helped people organize the political revolutions in Egypt by helping activists organize protests, communicate grievances, and disseminate information.[149]

Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum.
However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies.[150][151]

E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government[152] and for government provision of services directly to citizens.[153]

The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediary microfinance organizations that post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders and borrowers do not communicate with lenders themselves.[154][155]

Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information.[156]

Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial of service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers using similar methods to wage cyber warfare on a large scale.[157]

Malware poses serious problems to individuals and businesses on the Internet.[158][159] According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016.[160] Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year.[161] Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network.[162][163] Malware can be designed to evade antivirus software detection algorithms.[164][165][166]

The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet.[167] In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies.[168][169][170] Packet capture is the monitoring of data traffic on a computer network. Computers communicate over the Internet by breaking up messages (emails, images, videos, web pages, files, etc.)
into small chunks called "packets", which are routed through a network of computers, until they reach their destination, where they are assembled back into a complete "message" again. A packet capture appliance intercepts these packets as they are traveling through the network, in order to examine their contents using other programs. A packet capture is an information-gathering tool, but not an analysis tool: it gathers "messages" but does not analyze them and figure out what they mean. Other programs are needed to perform traffic analysis and sift through intercepted data looking for important or useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[171]

The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties.[172] Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data.[173] Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia.[174]

Some governments, such as those of Burma, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.[179]

In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret.[180] Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive websites on individual computers or networks in order to limit access by children to pornographic material or depictions of violence.

As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization.[181]

The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.

An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links.
Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia.[182]Internet blackouts affecting almost entire countries can be achieved by governments as a form ofInternet censorship, as in the blockage of theInternet in Egypt, whereby approximately 93%[183]of networks were without access in 2011 in an attempt to stop mobilization foranti-government protests.[184] Estimates of the Internet'selectricity usagehave been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064kilowatt hoursper gigabyte transferred (kWh/GB) to 136 kWh/GB.[185]The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such aspersonal computersand servers are included" in the analysis.[185] In 2011, academic researchers estimated the overallenergy usedby the Internet to be between 170 and 307GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 millionlaptops, a billionsmart phonesand 100 million servers worldwide as well as the energy that routers,cell towers,optical switches,Wi-Fitransmitters andcloud storagedevices use when transmittingInternet traffic.[186][187]According to a non-peer-reviewed study published in 2018 byThe Shift Project(a French think tank funded by corporate sponsors), nearly 4% of globalCO2emissionscould be attributed to globaldata transferand the necessary infrastructure.[188]The study also said thatonline video streamingalone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.[189]
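To make the scale of that disagreement concrete, the following short calculation (an illustration added here, not part of any cited study; the transfer volume is an assumed example) applies the lowest and highest published intensity figures to the same workload:

```python
# Illustrative only: applies the extreme published estimates of Internet
# energy intensity to the same transfer volume to show the spread.
LOW_KWH_PER_GB = 0.0064   # lowest figure reported in the 2014 review
HIGH_KWH_PER_GB = 136.0   # highest figure reported in the 2014 review

gigabytes = 50  # assumed example: roughly a month of regular video streaming

low_estimate = gigabytes * LOW_KWH_PER_GB    # 0.32 kWh
high_estimate = gigabytes * HIGH_KWH_PER_GB  # 6,800 kWh

print(f"Low estimate:  {low_estimate:.2f} kWh")
print(f"High estimate: {high_estimate:.2f} kWh")
print(f"Spread factor: {HIGH_KWH_PER_GB / LOW_KWH_PER_GB:,.0f}x")  # ~21,000x
```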
https://en.wikipedia.org/wiki/Internet
AnInternet forum, ormessage board, is anonline discussion platformwhere people can hold conversations in the form of posted messages.[1]They differ fromchat roomsin that messages are often longer than one line of text, and are at least temporarily archived. Also, depending on the access level of a user or the forum set-up, a posted message might need to be approved by a moderator before it becomes publicly visible. Forums have a specific set of jargon associated with them; for example, a single conversation is called a "thread", ortopic. The name comes from theforumsof Ancient Rome. A discussion forum is hierarchical ortree-likein structure; a forum can contain a number of subforums, each of which may have several topics. Within a forum's topic, each new discussion started is called a thread and can be replied to by as many people as they so wish. Depending on the forum's settings, users can be anonymous or have to register with the forum and then subsequentlylog into post messages. On most forums, users do not have to log in to read existing messages. The modern forum originated frombulletin boardsand so-called computer conferencing systems, which are a technological evolution of the dial-upbulletin board system(BBS).[2][3]From a technological standpoint,forumsorboardsareweb applicationsthat manageuser-generated content.[3][4] Early Internet forums could be described as a web version of anelectronic mailing listornewsgroup(such as those that exist onUsenet), allowing people to post messages and comment on other messages. Later developments emulated the different newsgroups or individual lists, providing more than one forum dedicated to a particular topic.[2] Internet forums are prevalent in severaldeveloped countries. Japan posts the most,[citation needed]with over two million per day on their largest forum,2channel. China also has millions of posts on forums such asTianya Club. Some of the first forum systems were the Planet-Forum system, developed at the beginning of the 1970s; theEIES system, first operational in 1976; and theKOM system, first operational in 1977. In 1979 students from Duke University created an online discussion platform withUsenet.[5] One of the first forum sites (which is still active today) is Delphi Forums, once calledDelphi. The service, with four million members, dates to 1983. Forums perform a function similar to that of dial-upbulletin board systemsand Usenet networks that were first created in the late 1970s.[2]Early web-based forums date back as far as 1994, with the WIT[6]project from the W3 Consortium, and starting at this time, many alternatives were created.[7]A sense ofvirtual communityoften develops around forums that have regular users.Technology,video games,sports,music,fashion,religion, andpoliticsare popular areas for forum themes, but there are forums for a huge number of topics.Internet slangandimage macrospopular across the Internet are abundant and widely used in Internet forums. Forum software packages are widely available on theInternetand are written in a variety ofprogramming languages, such asPHP,Perl,Java, andASP. The configuration and records of posts can be stored intext filesor in adatabase. Each package offers different features, from the most basic, providing text-only postings, to more advanced packages, offeringmultimediasupport and formatting code (usually known asBBCode). Many packages can be integrated easily into an existing website to allow visitors to post comments on articles. 
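Forum software packages typically persist this hierarchy in a database, but the category, sub-forum, thread and post structure described above can be sketched as a simple tree. The following is a minimal in-memory illustration only, not the schema of any particular package:

```python
from dataclasses import dataclass, field

# Minimal model of the hierarchy described above: a category holds
# sub-forums, a sub-forum holds threads (topics), and a thread holds posts.
@dataclass
class Post:
    author: str
    body: str

@dataclass
class Thread:
    title: str
    posts: list[Post] = field(default_factory=list)

@dataclass
class SubForum:
    name: str
    subforums: list["SubForum"] = field(default_factory=list)  # nesting allowed
    threads: list[Thread] = field(default_factory=list)

@dataclass
class Category:
    name: str
    subforums: list[SubForum] = field(default_factory=list)

# Example: a "Hardware" sub-forum under a "Technology" category.
tech = Category("Technology", [SubForum("Hardware", threads=[
    Thread("Which GPU should I buy?", [Post("alice", "Opening post...")])
])])
```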
Several other web applications, such as blog software, also incorporate forum features. WordPress comments at the bottom of a blog post allow for a single-threaded discussion of any given blog post. Slashcode, on the other hand, is far more complicated, allowing fully threaded discussions and incorporating a robust moderation and meta-moderation system as well as many of the profile features available to forum users.

Some stand-alone threads on forums have reached fame and notability, such as the "I am lonely will anyone speak to me" thread on MovieCodec.com's forums, which was described as the "web's top hangout for lonely folk" by Wired magazine,[8] or Stevan Harnad's Subversive Proposal.

Online discussion platforms can engage people in collective reflection, the exchange of perspectives, and cross-cultural understanding.[9] Public display of ideas can encourage intersubjective meaning making.[10][self-published source?] Online discussion platforms may be an important structural means for effective large-scale participation.[11]

Online discussion platforms can play a role in education.[12][self-published source?] In recent years, online discussion platforms have become a significant part not only of distance education but also of campus-based settings.[13] The proposed interactive e-learning community (iELC) is a platform that engages physics students in online and classroom learning tasks. In brief classroom discussions, fundamental physics formulas, definitions and concepts are introduced, after which students participate in the iELC forum discussion and use chat and dialogue tools to improve their understanding of the subject. The teacher then discusses selected forum posts in the subsequent classroom session.[14] Classroom online discussion platforms are one type of such platform.[15] Rose argues that the basic motivation for the development of e-learning platforms is efficiency of scale: teaching more students for less money.[16] A study found that learners discuss course material more frequently and interact more actively with an e-learning platform when it integrates a curriculum reward mechanism into learning activities.[17]

"City townhall" includes a participation platform for policy-making in Rotterdam.[18][additional citation(s) needed] In 2022, the United Nations reported that D-Agree Afghanistan was being used as a digital and smart-city solution in Afghanistan.[19][20] D-Agree is a discussion support platform with artificial intelligence-based facilitation.[21] The discussion trees in D-Agree, inspired by the issue-based information system (IBIS), contain a combination of four types of elements: issues, ideas, pros, and cons.[21] The software extracts a discussion's structure in real time based on IBIS, automatically classifying all the sentences.[21]

Online discussion platforms may be designed and improved to streamline discussions for efficiency, usefulness and quality.
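D-Agree's classifier itself is not publicly documented; the sketch below is only a generic illustration (with hypothetical names) of the IBIS-style discussion tree described above, in which each contribution is tagged as an issue, an idea, a pro, or a con and attached to a parent node:

```python
from dataclasses import dataclass, field
from enum import Enum

# Generic IBIS element types; not D-Agree's actual code.
class NodeType(Enum):
    ISSUE = "issue"
    IDEA = "idea"
    PRO = "pro"
    CON = "con"

@dataclass
class IBISNode:
    text: str
    kind: NodeType
    children: list["IBISNode"] = field(default_factory=list)

    def reply(self, text: str, kind: NodeType) -> "IBISNode":
        child = IBISNode(text, kind)
        self.children.append(child)
        return child

# Example discussion tree about a municipal policy question.
root = IBISNode("How should the city reduce traffic congestion?", NodeType.ISSUE)
idea = root.reply("Introduce a congestion charge in the centre.", NodeType.IDEA)
idea.reply("It has reduced traffic in comparable cities.", NodeType.PRO)
idea.reply("It may disadvantage low-income commuters.", NodeType.CON)
```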
Features that can be leveraged in synergy to improve such a platform include voting, targeted notifications, user levels, gamification, subscriptions, bots, discussion requirements, structuring, layout, sorting, linking, feedback mechanisms, reputation features, demand-signaling features, requesting features, visual highlighting, separation, curation, tools for real-time collaboration, tools for mobilization of people and resources, standardization, data processing, segmentation, summarization, moderation, time intervals, categorization/tagging, rules, and indexing.[citation needed]

In 2013 Sarah Perez claimed that the best platform for online discussion doesn't yet exist, noting that comment sections could be more useful if they showed "which comments or shares have resonated and why" and which "understands who deserves to be heard".[22]

Online platforms don't intrinsically guarantee informed citizen input. Research demonstrates that such spaces can even undermine deliberative participation when they allow hostile, superficial and misinformed content to dominate the conversation (see also: Internet troll, shitposting). A necessary mechanism that enables these platforms to yield informed citizen debate and contribution to policy is deliberation. It is argued that the challenge lies in creating an online context that does not merely aggregate public input but promotes informed public discussion that may benefit the policy-making process.[11]

Online citizen communication has been studied to evaluate how deliberative its content is and how selective perception and ideological fragmentation play a role in it (see also: filter bubble). One sub-branch of online deliberation research is dedicated to the development of new platforms that "facilitate deliberative experiences that surpass currently available options".[23]

A forum consists of a tree-like directory structure. The top end is "Categories". A forum can be divided into categories for the relevant discussions. Under the categories are sub-forums, and these sub-forums can further have more sub-forums. The topics (commonly called threads) come under the lowest level of sub-forums, and these are the places under which members can start their discussions or posts. Logically, forums are organized into a finite set of generic topics (usually with one main topic), driven and updated by a group known as members, and governed by a group known as moderators.[24] A forum can also have a graph structure.[25] All message boards use one of three possible display formats, and each of the three basic formats (non-threaded, semi-threaded, and fully threaded) has its own advantages and disadvantages. If messages are not related to one another at all, a non-threaded format is best. If a user has a message topic and multiple replies to that message topic, a semi-threaded format is best. If a user has a message topic, replies to that message topic, and responses to replies, then a fully threaded format is best.[26]

Internally, Western-style forums organize visitors and logged-in members into user groups. Privileges and rights are given based on these groups.
A user of the forum can automatically be promoted to a more privileged user group based on criteria set by the administrator.[27]A person viewing a closed thread as amemberwill see a box saying he does not have the right to submit messages there, but amoderatorwill likely see the same box, granting him access to more than just posting messages.[28] An unregistered user of the site is commonly known as aguestorvisitor. Guests are typically granted access to all functions that do not require database alterations or breach privacy. A guest can usually view the contents of the forum or use such features asread marking, but occasionally an administrator will disallow visitors to read their forum as an incentive to become a registered member.[note 1]A person who is a very frequent visitor of the forum, a section, or even a thread is referred to as alurker, and the habit is referred to aslurking. Registered members often will refer to themselves aslurkingin a particular location, which is to say they have no intention of participating in that section but enjoy reading the contributions to it. Themoderators(short singular form: "mod") are users (or employees) of the forum who are granted access to thepostsandthreadsof all members for the purpose ofmoderating discussion(similar toarbitration) and also keeping the forum clean (neutralizingspamandspambots, etc.).[29]Moderators also answer users' concerns about the forum and general questions, as well as respond to specific complaints. Common privileges of moderators include: deleting, merging, moving, and splitting of posts and threads, locking, renaming, andstickyingof threads;banning, unbanning, suspending, unsuspending, warning the members; or adding, editing, and removing the polls of threads.[30]"Junior modding", "backseat modding", or "forum copping" can refer negatively to the behavior of ordinary users who take a moderator-like tone in criticizing other members. Essentially, it is the duty of the moderator to manage the day-to-day affairs of a forum or board as it applies to the stream of user contributions and interactions. The relative effectiveness of this user management directly impacts the quality of a forum in general, its appeal, and its usefulness as a community of interrelated users. Moderators act as unpaid volunteers on many websites, which has sparked controversies and community tensions. OnReddit, some moderators have prominently expressed dissatisfaction with their unpaid labor being underappreciated, while other site users have accused moderators of abusing special access privileges to act as a "cabal" of "petty tyrants".[31]On4chan, moderators are subject to notable levels of mockery and contempt. There, they are often referred to as janitors (or, more pejoratively, "jannies"[note 2]) given their job, which is tantamount to cleaning up the imageboards' infamousshitposting.[32] Theadministrators(short form: "admin") manage the technical details required for running the site. As such, they have the authority to appoint and revoke members asmoderators, manage the rules, create sections and sub-sections, as well as perform anydatabaseoperations (database backup, etc.). Administrators often also act asmoderators. Administrators may also make forum-wide announcements or change the appearance (known as the skin) of a forum. There are also many forums where administrators share their knowledge.[30] Apostis a user-submitted message enclosed in a block containing the user's details and the date and time it was submitted. 
Members are usually allowed to edit or delete their own posts. Posts are contained in threads, where they appear as blocks one after another. The first post[33]starts the thread; this may be called the TS (thread starter) or OP (original post). Posts that follow in the thread are meant to continue discussion about that post or respond to other replies; it is not uncommon for discussions to be derailed. On Western forums, the classic way to show a member's own details (such as name and avatar) has been on the left side of the post, in a narrow column of fixed width, with the post controls located on the right, at the bottom of the main body, above the signature block. In more recent forum software implementations, the Asian style of displaying the members' details above the post has been copied. Posts have an internal limit, usually measured in characters. Often, one is required to have a message with a minimum length of 10 characters. There is always an upper limit, but it is rarely reached – most boards have it at either 10,000, 20,000, 30,000, or 50,000 characters. Most forums keep track of a user's postcount. The postcount is a measurement of how many posts a certain user has made.[34]Users with higher postcounts are often considered more reputable than users with lower postcounts, but not always. For instance, some forums have disabled postcounts with the hopes that doing so will emphasize the quality of information over quantity. Athread(sometimes called atopic) is a collection of posts, usually displayed from oldest to latest, although this is typically configurable: Options for newest to oldest and for a threaded view (a tree-like view applying logical reply structure before chronological order) can be available.A thread is defined by a title, an additional description that may summarize the intended discussion, and an opening ororiginal post(common abbreviationOP, which can also be used to refer to theoriginal poster), which opens whatever dialogue or makes whatever announcement the poster wishes. A thread can contain any number of posts, including multiple posts from the same members, even if they are one after the other. A thread is contained in a forum and may have an associated date that is taken as the date of the last post (options to order threads by other criteria are generally available). When a member posts in a thread, it will jump to the top since it is the latest updated thread. Similarly, other threads will jump in front of it when they receive posts. When a member posts in a thread for no reason but to have it go to the top, it is referred to as abumporbumping. It has been suggested that "bump" is an acronym of "bring up my post";[35]however, this is almost certainly abackronym, and the usage is entirely consistent with the verb "bump" which means "to knock to a new position".[36] On some message boards, users can choose tosage(/ˈsɑːɡeɪ/, though often/seɪdʒ/) a post if they wish to make a post but not "bump" it. The word "sage" derives from the2channelterminology 下げるsageru, meaning "to lower". Threads that are important but rarely receive posts arestickied(or, in some software, "pinned"). Asticky threadwill always appear in front of normal threads, often in its own section. A "threaded discussion group" is simply any group of individuals who use a forum for threaded, or asynchronous, discussion purposes. The group may or may not be the only users of the forum. 
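The bumping and stickying behaviour described above amounts to a simple ordering rule: sticky threads are listed first, and everything else is sorted by the timestamp of its most recent post. The following is a minimal sketch of that rule (the field names are illustrative, not those of any particular forum package):

```python
from dataclasses import dataclass

@dataclass
class Thread:
    title: str
    last_post_time: float  # Unix timestamp of the newest post in the thread
    sticky: bool = False

def ordered(threads: list[Thread]) -> list[Thread]:
    # Sticky threads first, then the most recently updated threads; posting in
    # a thread updates last_post_time, which "bumps" it back to the top.
    return sorted(threads, key=lambda t: (not t.sticky, -t.last_post_time))

threads = [
    Thread("Forum rules", 1_000, sticky=True),
    Thread("Old discussion", 5_000),
    Thread("Just bumped", 9_000),
]
print([t.title for t in ordered(threads)])
# ['Forum rules', 'Just bumped', 'Old discussion']
```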
A thread's popularity is measured by its reply count (in most default forum settings, the total number of posts minus one, the opening post). Some forums also track page views. Threads meeting a set number of posts or a set number of views may receive a designation such as "hot thread" and be displayed with a different icon compared to other threads. This icon may stand out more to emphasize the thread. If the forum's users have lost interest in a particular thread, it becomes a dead thread.

Forums prefer the premise of open and free discussion and often adopt de facto standards. The most common topics on forums include questions, comparisons, polls of opinion, and debates. It is not uncommon for nonsense or unsocial behavior to sprout as people lose their temper, especially if the topic is controversial. Poor understanding of the differences in values among the participants is a common problem on forums. Because replies to a topic are often worded to target someone's point of view, discussion will usually go slightly off in several directions as people question each other's validity, sources, and so on. Circular discussion and ambiguity in replies can extend for several tens of posts in a thread, eventually ending when everyone gives up or attention spans waver and a more interesting subject takes over. It is not uncommon for debate to end in ad hominem attacks.

Several lawsuits have been brought against forums and moderators, claiming libel and damages.[citation needed] For the most part, forum owners and moderators in the United States are protected by Section 230 of the Communications Decency Act, which states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". In 2019, Facebook faced a class action lawsuit brought by moderators diagnosed with post-traumatic stress disorder. It was settled for $52 million the following year.[37]

By default, to be an Internet forum, the web application needs the ability to submit threads and replies. Typically, threads are shown in a newer-to-older view, and replies in an older-to-newer view. Most imageboards and 2channel-style discussion boards allow (and encourage) anonymous posting and use a system of tripcodes instead of registration. A tripcode is the hashed result of a password that allows one's identity to be recognized without storing any data about the user. In a tripcode system, a secret password is added to the user's name following a separator character (often a number sign). This password, or tripcode, is hashed into a special key, or trip, distinguishable from the name by HTML styles. Tripcodes cannot be faked, but on some types of forum software they are insecure and can be guessed. On other types, they can be brute-forced with software designed to search for tripcodes, such as Tripcode Explorer.[38] Moderators and administrators will frequently assign themselves capcodes, or tripcodes where the guessable trip is replaced with a special notice (such as "# Administrator") or cap.

A personal or private message, or PM for short, is a message sent in private from a member to one or more other members. The ability to send so-called blind carbon copies (BCC) is sometimes available. When sending a BCC, the users to whom the message is sent directly will not be aware of the recipients of the BCC or even whether one was sent in the first place.[example 1] Private messages are generally used for personal conversations.
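The tripcode algorithm actually used by 2channel-style boards is derived from the Unix crypt(3) function; the sketch below only illustrates the general idea (split the name field on the separator, hash the secret part, and display the hash instead of storing anything about the user) and will not reproduce real tripcodes:

```python
import hashlib

def apply_tripcode(name_field: str) -> str:
    """Turn 'Name#secret' into 'Name !xxxxxxxxxx' without storing the secret."""
    if "#" not in name_field:
        return name_field  # no tripcode requested
    name, secret = name_field.split("#", 1)
    # Illustrative hash only; real boards use a crypt(3)-based scheme.
    trip = hashlib.sha1(secret.encode("utf-8")).hexdigest()[:10]
    return f"{name} !{trip}"

print(apply_tripcode("Anonymous#hunter2"))
# The same secret always yields the same trip, so an identity can be
# recognized across posts without registration or stored user data.
```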
Private messages can also be used with tripcodes: a message is addressed to a public trip and can be picked up by typing in the tripcode.

An attachment can be almost any file. When someone attaches a file to a person's post, they are uploading that particular file to the forum's server. Forums usually have very strict limits on what can be attached and what cannot (among which is the size of the files in question). Attachments can be part of a thread, a social group, etc.

HyperText Markup Language (HTML) is sometimes allowed, but usually its use is discouraged or, when allowed, extensively filtered. Modern bulletin board systems often have it disabled altogether[citation needed] or allow only administrators to use it, as allowing it at any normal user level is considered a security risk due to the high rate of XSS vulnerabilities. When HTML is disabled, Bulletin Board Code (BBCode) is the most common preferred alternative. BBCode usually consists of a tag, similar to HTML, but instead of < and >, the tag name is enclosed within square brackets ([ and ]). Commonly, [i] is used for italic type, [b] for bold, [u] for underline, [color="value"] for color, and [list] for lists, as well as [img] for images and [url] for links. For example, the BBCode [b]This[/b] is [i]clever[/i] [b][i]text[/i][/b] is rendered to HTML when the post is viewed and appears as "This is clever text", with "This" in bold, "clever" in italics, and "text" in bold italics. Many forum packages offer a way to create Custom BBCodes, or BBCodes that are not built into the package, where the administrator of the board can create complex BBCodes to allow the use of JavaScript or iframe functions in posts, for example embedding a YouTube or Google Video clip complete with viewer directly into a post.

An emoticon, or smiley, is a symbol or combination of symbols used to convey emotional content in written or message form. Forums implement a system through which some of the text representations of emoticons (e.g., xD, :p) are rendered as a small image. Depending on what part of the world the forum's topic originates from (since most forums are international), smilies can be replaced by other forms of similar graphics; an example would be kaoani (e.g., *(^O^)*, (^-^)b), or even text between special symbols (e.g., :blink:, :idea:).

Most forums implement an opinion poll system for threads. Most implementations allow for single-choice or multi-choice (sometimes limited to a certain number) when selecting options, as well as private or public display of voters. Polls can be set to expire after a certain date or, in some cases, after a number of days from their creation. Members vote in a poll, and a statistic is displayed graphically.

An ignore list allows members to hide posts of other members that they do not want to see or have a problem with. In most implementations, it is referred to as a foe list or ignore list. The posts are usually not hidden but minimized, with only a small bar indicating that a post from the user on the ignore list is there.[39][40] Almost all Internet forums include a member list, which allows the display of all forum members with an integrated search feature. Some forums will not list members with zero posts, even if they have activated their accounts.

Many forums allow users to give themselves an avatar. An avatar is an image that appears beside all of a user's posts in order to make the user more recognizable. The user may upload the image to the forum database or provide a link to an image on a separate website.
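Returning to the BBCode markup described above, the translation to HTML can be illustrated with a few regular-expression substitutions. This is a simplified sketch covering only a handful of tags; production forum software also escapes HTML in the input and handles many more tags and malformed markup:

```python
import re

# Minimal BBCode-to-HTML sketch for a few of the tags mentioned above.
RULES = [
    (r"\[b\](.*?)\[/b\]", r"<strong>\1</strong>"),
    (r"\[i\](.*?)\[/i\]", r"<em>\1</em>"),
    (r"\[u\](.*?)\[/u\]", r"<u>\1</u>"),
    (r"\[url=(.*?)\](.*?)\[/url\]", r'<a href="\1">\2</a>'),
]

def render_bbcode(text: str) -> str:
    for pattern, replacement in RULES:
        text = re.sub(pattern, replacement, text, flags=re.DOTALL)
    return text

print(render_bbcode("[b]This[/b] is [i]clever[/i] [b][i]text[/i][/b]"))
# <strong>This</strong> is <em>clever</em> <strong><em>text</em></strong>
```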
Each forum has limits on the height, width, and data size of avatars that may be used; if the user tries to use an avatar that is too big, it may be scaled down or rejected. Similarly, most forums allow users to define asignature(sometimes called asig), which is a block of text, possibly with BBCode, that appears at the bottom of all of the user's posts. There is a character limit on signatures, though it may be so high that it is rarely hit. Often, the forum's moderators impose manual rules on signatures to prevent them from being obnoxious (for example, being extremely long or having flashing images) and issue warnings or bans to users who break these rules. Like avatars, signatures may improve the recognizability of a poster. They may also allow the user to attach information to all of their posts, such as proclaiming support for a cause, noting facts about themselves, or quoting humorous things that have previously been said on the forum. Asubscriptionis a form of automated notification integrated into the software of most forums. It usually notifies the member either by email or on the site when the member returns. The option to subscribe is available for every thread while logged in. Subscriptions work withread marking, namely the property ofunread, which is given to the content never served to the user by the software. Recent developments in some popular implementations of forum software have broughtsocial network features and functionality.[41]Such features include personal galleries and pages, as well as social networks likechatsystems. Most forum software is now fully customizable, with "hacks" or "modifications" readily available to customize a person's forum to theirs and their members' needs. Often forums use "cookies", or information about the user's behavior on the site sent to a user's browser and used upon re-entry into the site. This is done to facilitate automatic login and to show a user whether a thread or forum has received new posts since his or her last visit. These may be disabled or cleared at any time.[42] Forums are governed by a set of individuals, collectively referred to asstaff, made up ofadministratorsandmoderators, which are responsible for the forums' conception, technical maintenance, and policies (creation and enforcing). Most forums have a list of rules detailing the wishes, aim, and guidelines of the forums' creators. There is usually also aFAQsection containing basic information for new members and people not yet familiar with the use and principles of a forum (generally tailored for specific forum software). Rules on forums usually apply to the entire user body and often have preset exceptions, most commonly designating a section as an exception. For example, in anITforum any discussion regarding anything but computerprogramming languagesmay be against the rules, with the exception of ageneral chatsection. Forum rules are maintained and enforced by the moderation team, but users are allowed to help out via what is known as a report system. Most Western forum platforms automatically provide such a system.[39][43]It consists of a small function applicable to each post (including one's own). Using it will notify all currently available moderators of its location, and subsequent action or judgment can be carried out immediately, which is particularly desirable in large or very developed boards. Generally, moderators encourage members to also use theprivate messagesystem if they wish to report behavior. 
Moderators will generally frown upon attempts at moderation by non-moderators, especially when the would-be moderators do not even issue a report. Messages from non-moderators acting as moderators generally declare a post to be against the rules or predict punishment. While not harmful, statements that attempt to enforce the rules are discouraged.[44]

When rules are broken, several steps are commonly taken. First, a warning is usually given; this is commonly in the form of a private message, but recent developments have made it possible for warnings to be integrated into the software. Subsequent to this, if the act is ignored and warnings do not work, the member is usually first exiled from the forum for a number of days. Denying someone access to the site is called a ban. Bans can mean the person can no longer log in or even view the site anymore. If the offender, after the warning sentence, repeats the offense, another ban is given, usually this time a longer one. Continuous harassment of the site eventually leads to a permanent ban. In most cases, this means simply that the account is locked. In extreme cases where the offender, after being permanently banned, creates another account and continues to harass the site, administrators will apply an IP address ban or block (this can also be applied at the server level): if the IP address is static, the machine of the offender is prevented from accessing the site. In some extreme circumstances, IP address range bans or country bans can be applied; this is usually for political, licensing, or other reasons. See also: Block (Internet), IP address blocking, and Internet censorship. Offending content is usually deleted. Sometimes, if the topic is considered the source of the problem, it is locked; often a poster may request that a topic expected to draw problems be locked as well, although the moderators decide whether to grant it. In a locked thread, members cannot post anymore. In cases where the topic is considered a breach of the rules, it may be deleted along with all of its posts.

Forum trolls are users who repeatedly and deliberately breach the netiquette of an established online community, posting inflammatory, extraneous, or off-topic messages to bait or excite users into responding or to test the forum rules and policies, and with that the patience of the forum staff. Their provocative behavior may potentially start flame wars (see below) or other disturbances. Responding to a troll's provocations is commonly known as 'feeding the troll' and is generally discouraged, as it can encourage their disruptive behavior.

The term sock puppet refers to multiple pseudonyms in use by the same person on a particular message board or forum. The analogy of a sock puppet is of a puppeteer holding up both hands and supplying dialogue to both puppets simultaneously. A typical use of a sockpuppet account is to agree with or debate another sockpuppet account belonging to the same person, for the purposes of reinforcing the puppeteer's position in an argument. Sock puppets are usually found when an IP address check is done on the accounts in forums.

Forum spamming is a breach of netiquette where users repeat the same word or phrase over and over, but differs from multiple posting in that spamming is usually a willful act that sometimes has malicious intent. This is a common trolling technique. It can also be traditional spam, unpaid advertisements that are in breach of the forum's rules. Spammers utilize a number of illicit techniques to post their spam, including the use of botnets.
Some forums consider concise, comment-oriented posts spam, for example "Thank you", "Cool" or "I love it".

One common faux pas on Internet forums is to post the same message twice. Users sometimes post versions of a message that are only slightly different, especially in forums where they are not allowed to edit their earlier posts. Multiple posting instead of editing prior posts can artificially inflate a user's post count. Multiple posting can be unintentional; a user's browser might display an error message even though the post has been transmitted, or a user of a slow forum might become impatient and repeatedly hit the submit button. An offline editor may post the same message twice. Multiple posting can also be used as a method of trolling or spreading forum spam. A user may also send the same post to several forums, which is termed crossposting. The term derives from Usenet, where crossposting was an accepted practice, but it causes problems in web forums, which lack the ability to link such posts, so replies in one forum are not visible to people reading the post in other forums.

A necropost is a message that revives (as in necromancy) an arbitrarily old thread, causing it to appear above newer and more active threads. This practice is generally seen as a breach of netiquette on most forums. Because old threads are not usually locked from further posting, necroposting is common for newer users and in cases where the date of previous posts is not apparent.[45]

A word censoring system is commonly included in the forum software package. The system will pick up words in the body of the post or some other user-editable forum element (like user titles), and if they partially match a certain keyword (commonly without case sensitivity) they will be censored. The most common censoring is letter replacement with an asterisk character. For example, in the user title, it is deemed inappropriate for users to use words such as "admin", "moderator", "leader" and so on. If the censoring system is implemented, a title such as "forum leader" may be filtered to "forum ******". Rude or vulgar words are common targets for the censoring system.[46][47] But such auto-censors can make mistakes, for example censoring "wristwatch" to "wris****ch", "Scunthorpe" to "S****horpe" or "Essex" to "Es***".

When a thread, or in some cases an entire forum, becomes unstable, the result is usually uncontrolled spam in the form of one-line complaints, image macros, or abuse of the report system. When the discussion becomes heated and sides do nothing more than complain and not accept each other's differences in point of view, the discussion degenerates into what is called a flame war. To flame someone means to go off-topic and attack the person rather than their opinion. Likely candidates for flame wars are usually religion and socio-political topics, or topics that discuss pre-existing rivalries outside the forum (e.g., rivalry between games, console systems, car manufacturers, nationalities, etc.). When a topic that has degenerated into a flame war is closely related to the forum's own subject (be it a section or the entire board), spam and flames have a chance of spreading outside the topic and causing trouble, usually in the form of vandalism. Some forums (commonly game forums) have suffered from forum-wide flame wars almost immediately after their conception, because of a pre-existing flame war element in the online community. Many forums have created devoted areas strictly for discussion of potential flame war topics that are moderated like normal.
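The partial-match censoring described above, and the mistakes it produces, can be illustrated with a short sketch; the banned-word list is purely illustrative:

```python
import re

BANNED = ["cunt", "sex"]  # illustrative list only

def censor(text: str) -> str:
    # Partial, case-insensitive matching, as described above: every occurrence
    # of a banned string is replaced with asterisks, even inside longer words.
    for word in BANNED:
        text = re.sub(re.escape(word), "*" * len(word), text, flags=re.IGNORECASE)
    return text

print(censor("Scunthorpe is in Essex? No, it is not."))
# 'S****horpe is in Es***? No, it is not.'  <- the Scunthorpe problem
```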
Many Internet forums require registration to post. Registered users of the site are referred to asmembersand are allowed to submit or send electronic messages through theweb application. The process of registration involves verification of one's age (typically age 13 and over is required so as to meetCOPPArequirements of American forum software) followed by a declaration of theterms of service(other documents may also be present) and a request for agreement to said terms.[48][49][50]Subsequently, if all goes well, the candidate is presented with aweb formto fill requesting at the very least ausername(an alias), password, email and validation of aCAPTCHAcode. While simply completing the registrationweb formis in general enough to generate an account,[note 3]the status labelInactiveis commonly provided by default until the registered user confirms the email address given while registering indeed belongs to the user. Until that time, the registered user can log into the new account but may notpost,reply, or sendprivate messagesin the forum. Sometimes areferrer systemis implemented. Areferreris someone who introduced or otherwise "helped someone" with the decision to join the site (likewise, how aHTTP referreris the site who linked one to another site). Usually, referrers are other forum members and members are usually rewarded for referrals. The referrer system is also sometimes implemented so that, if a visitor visits the forum through a link such asreferrerid=300, the user with the id number (in this example, 300) would receive referral credit if the visitor registers.[51]The purpose is commonly just to give credit (sometimes rewards are implied) to those who help the community grow. In areas such asJapan, registration is frequently optional and anonymity is sometimes even encouraged.[52]On these forums, atripcodesystem may be used to allow verification of anidentitywithout the need for formal registration. People who regularly read the forum discussions but do not register or do not post are often referred to as "lurkers". Electronic mailing lists: The main difference between forums and electronic mailing lists is that mailing lists automatically deliver new messages to the subscriber, while forums require the reader to visit the website and check for new posts. Because members may miss replies in threads they are interested in, many modern forums offer an "e-mail notification" feature, whereby members can choose to be notified of new posts in a thread, andweb feedsthat allow members to see a summary of the new posts usingaggregatorsoftware. There are also software products that combine forum and mailing list features, i.e. posting and reading via email as well as the browser depending on the member's choice.[examples needed] Newsreader: The main difference between newsgroups and forums is that additional software, aNews client, is required to participate in newsgroups whereas using a forum requires no additional software beyond theweb browser. Shoutboxes: Unlike Internet forums, most shoutboxes do not require registration, only requiring an email address from the user. Additionally, shoutboxes are not heavily moderated, unlike most message boards. Wiki: Unlike conventional forums, the original wikis allowed all users to edit all content (including each other's messages). This level of content manipulation is reserved for moderators or administrators on most forums. Wikis also allow the creation of other content outside thetalk pages. 
On the other hand,weblogsand generic content management systems tend to be locked down to the point where only a few select users can post blog entries, although many allow other users to comment upon them. The Wiki hosting site known asWikiahas two features in operation, known as the Forum and Message Wall. The forum is used solely for discussion and works through editing, while the message wall works through posted messages more similar to a traditional forum. Chat roomsandinstant messaging: Forums differ from chats and instant messaging in that forum participants do not have to be online simultaneously to receive or send messages. Messages posted to a forum are publicly available for some time even if the forum or thread is closed, which is uncommon in chat rooms that maintain frequent activity. One rarity among forums is the ability to create a picture album. Forum participants may upload personal pictures onto the site and add descriptions to the pictures. Pictures may be in the same format as posting threads, and contain the same options such as "Report Post" and "Reply to Post".
https://en.wikipedia.org/wiki/Internet_forum
Thecell membrane(also known as theplasma membraneorcytoplasmic membrane, and historically referred to as theplasmalemma) is abiological membranethat separates and protects theinteriorof acellfrom theoutside environment(the extracellular space).[1][2]The cell membrane consists of alipid bilayer, made up of two layers ofphospholipidswithcholesterols(a lipid component) interspersed between them, maintaining appropriate membranefluidityat various temperatures. The membrane also containsmembrane proteins, includingintegral proteinsthat span the membrane and serve asmembrane transporters, andperipheral proteinsthat loosely attach to the outer (peripheral) side of the cell membrane, acting asenzymesto facilitate interaction with the cell's environment.[3]Glycolipidsembedded in the outer lipid layer serve a similar purpose. The cell membranecontrols the movement of substancesin and out of a cell, beingselectively permeabletoionsandorganic molecules.[4]In addition, cell membranes are involved in a variety of cellular processes such ascell adhesion,ion conductivity, andcell signallingand serve as the attachment surface for several extracellular structures, including thecell walland the carbohydrate layer called theglycocalyx, as well as the intracellular network of protein fibers called thecytoskeleton. In the field of synthetic biology, cell membranes can beartificially reassembled.[5][6][7][8] Robert Hooke's discovery of cells in 1665 led to the proposal of thecell theory. Initially it was believed that all cells contained a hard cell wall since only plant cells could be observed at the time.[9]Microscopists focused on the cell wall for well over 150 years until advances in microscopy were made. In the early 19th century, cells were recognized as being separate entities, unconnected, and bound by individual cell walls after it was found that plant cells could be separated. This theory extended to include animal cells to suggest a universal mechanism for cell protection and development. By the second half of the 19th century, microscopy was still not advanced enough to make a distinction between cell membranes and cell walls. However, some microscopists correctly identified at this time that while invisible, it could be inferred that cell membranes existed in animal cells due to intracellular movement of components internally but not externally and that membranes were not the equivalent of aplant cell wall. It was also inferred that cell membranes were not vital components to all cells. Many refuted the existence of a cell membrane still towards the end of the 19th century. In 1890, a revision to the cell theory stated that cell membranes existed, but were merely secondary structures. It was not until later studies with osmosis and permeability that cell membranes gained more recognition.[9]In 1895,Ernest Overtonproposed that cell membranes were made of lipids.[10] The lipid bilayer hypothesis, proposed in 1925 byGorterand Grendel,[11]created speculation in the description of the cell membrane bilayer structure based on crystallographic studies and soap bubble observations. In an attempt to accept or reject the hypothesis, researchers measured membrane thickness. These researchers extracted the lipid from human red blood cells and measured the amount of surface area the lipid would cover when spread over the surface of the water. Since mature mammalianred blood cellslack both nuclei and cytoplasmic organelles, the plasma membrane is the only lipid-containing structure in the cell. 
Consequently, all of the lipids extracted from the cells can be assumed to have resided in the cells' plasma membranes. The ratio of the surface area of water covered by the extracted lipid to the surface area calculated for the red blood cells from which the lipid was extracted was approximately 2:1, and they concluded that the plasma membrane contains a lipid bilayer.[9][12]

In 1925 Fricke determined that the thickness of erythrocyte and yeast cell membranes ranged between 3.3 and 4 nm, a thickness compatible with a lipid monolayer. The choice of the dielectric constant used in these studies was called into question, but future tests could not disprove the results of the initial experiment. Independently, the leptoscope was invented in order to measure very thin membranes by comparing the intensity of light reflected from a sample to the intensity of a membrane standard of known thickness. The instrument could resolve thicknesses that depended on pH measurements and the presence of membrane proteins and that ranged from 8.6 to 23.2 nm, with the lower measurements supporting the lipid bilayer hypothesis.

Later, in the 1930s, a general consensus developed around the paucimolecular model of Davson and Danielli (1935) as the model of membrane structure. This model was based on studies of surface tension between oils and echinoderm eggs. Since the surface tension values appeared to be much lower than would be expected for an oil–water interface, it was assumed that some substance was responsible for lowering the interfacial tensions at the surface of cells. It was suggested that a lipid bilayer lay between two thin protein layers. The paucimolecular model immediately became popular, and it dominated cell membrane studies for the following 30 years, until it was rivaled by the fluid mosaic model of Singer and Nicolson (1972).[13][9]

Despite the numerous models of the cell membrane proposed before it, the fluid mosaic model remains the primary archetype for the cell membrane long after its inception in the 1970s.[9] Although the fluid mosaic model has been modernized to detail contemporary discoveries, the basics have remained constant: the membrane is a lipid bilayer composed of hydrophilic exterior heads and a hydrophobic interior, where proteins can interact with the hydrophilic heads through polar interactions, while proteins that span the bilayer fully or partially have hydrophobic amino acids that interact with the non-polar lipid interior. The fluid mosaic model not only provided an accurate representation of membrane mechanics, it enhanced the study of hydrophobic forces, which would later become essential for describing biological macromolecules.[9]

For many years, the scientists cited disagreed about the significance of the structure they were seeing as the cell membrane. For almost two centuries, membranes were seen but mostly disregarded as an important structure with cellular function. It was not until the 20th century that the significance of the cell membrane was acknowledged. Finally, two scientists, Gorter and Grendel (1925), made the discovery that the membrane is "lipid-based". From this, they furthered the idea that this structure would have to exist in a formation that mimicked layers. Studying the question further, they compared the sum of the cell surfaces with the surface area of the extracted lipids, estimated a 2:1 ratio, and thus provided the first basis of the bilayer structure known today.
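In symbols, the Gorter and Grendel reasoning amounts to a single area comparison; the expression below merely restates the approximate 2:1 ratio reported above rather than adding a new result:

\frac{A_{\text{lipid film on water}}}{A_{\text{red cell surface}}} \approx 2 \quad\Longrightarrow\quad \text{each membrane accommodates two molecular layers of lipid, i.e. a bilayer.}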
This discovery initiated many new studies that arose globally within various fields of scientific studies, confirming that the structure and functions of the cell membrane are widely accepted.[9] The structure has been variously referred to by different writers as the ectoplast (de Vries, 1885),[14]Plasmahaut(plasma skin,Pfeffer, 1877, 1891),[15]Hautschicht(skin layer, Pfeffer, 1886; used with a different meaning byHofmeister, 1867), plasmatic membrane (Pfeffer, 1900),[16]plasma membrane, cytoplasmic membrane, cell envelope and cell membrane.[17][18]Some authors who did not believe that there was a functional permeable boundary at the surface of the cell preferred to use the term plasmalemma (coined by Mast, 1924) for the external region of the cell.[19][20][21] Cell membranes contain a variety ofbiological molecules, notably lipids and proteins. Composition is not set, but constantly changing for fluidity and changes in the environment, even fluctuating during different stages of cell development. Specifically, the amount of cholesterol in human primary neuron cell membrane changes, and this change in composition affects fluidity throughout development stages.[22] Material is incorporated into the membrane, or deleted from it, by a variety of mechanisms: The cell membrane consists of three classes ofamphipathiclipids:phospholipids,glycolipids, andsterols. The amount of each depends upon the type of cell, but in the majority of cases phospholipids are the most abundant, often contributing for over 50% of all lipids in plasma membranes.[23][24]Glycolipids only account for a minute amount of about 2% and sterols make up the rest. Inred blood cellstudies, 30% of the plasma membrane is lipid. However, for the majority of eukaryotic cells, the composition of plasma membranes is about half lipids and half proteins by weight. The fatty chains inphospholipidsandglycolipidsusually contain an even number of carbon atoms, typically between 16 and 20. The 16- and 18-carbon fatty acids are the most common. Fatty acids may be saturated or unsaturated, with the configuration of the double bonds nearly always "cis". The length and the degree of unsaturation of fatty acid chains have a profound effect on membrane fluidity as unsaturated lipids create a kink, preventing the fatty acids from packing together as tightly, thus decreasing themelting temperature(increasing the fluidity) of the membrane.[23][24]The ability of some organisms to regulatethe fluidity of their cell membranesby altering lipid composition is calledhomeoviscous adaptation. The entire membrane is held together vianon-covalentinteraction of hydrophobic tails, however the structure is quite fluid and not fixed rigidly in place. Underphysiological conditionsphospholipid molecules in the cell membrane are in theliquid crystalline state. It means the lipid molecules are free to diffuse and exhibit rapid lateral diffusion along the layer in which they are present.[23]However, the exchange of phospholipid molecules between intracellular and extracellular leaflets of the bilayer is a very slow process.Lipid raftsand caveolae are examples ofcholesterol-enriched microdomains in the cell membrane.[24]Also, a fraction of the lipid in direct contact with integral membrane proteins, which is tightly bound to the protein surface is calledannular lipid shell; it behaves as a part of protein complex. 
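The rapid lateral diffusion noted above is conventionally quantified by the two-dimensional mean-square-displacement relation; the relation is standard textbook material rather than something taken from the sources cited here, and the numerical value is only an order-of-magnitude figure typical of lipids in fluid bilayers:

\langle r^{2}(t)\rangle = 4\,D_{\text{lat}}\,t

With D_{\text{lat}} on the order of 1 \mu\text{m}^{2}/\text{s}, a phospholipid wanders across a region roughly a micrometre wide in about a quarter of a second, whereas the transbilayer (flip-flop) exchange mentioned above is many orders of magnitude slower.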
Cholesterol is normally found dispersed in varying degrees throughout cell membranes, in the irregular spaces between the hydrophobic tails of the membrane lipids, where it confers a stiffening and strengthening effect on the membrane.[4] Additionally, the amount of cholesterol in biological membranes varies between organisms, cell types, and even individual cells. Cholesterol, a major component of plasma membranes, regulates the fluidity of the overall membrane, meaning that cholesterol controls the amount of movement of the various cell membrane components based on its concentration.[4] At high temperatures, cholesterol inhibits the movement of phospholipid fatty acid chains, causing reduced permeability to small molecules and reduced membrane fluidity. The opposite is true for the role of cholesterol at cooler temperatures. Cholesterol production, and thus concentration, is up-regulated (increased) in response to cold temperature. At cold temperatures, cholesterol interferes with fatty acid chain interactions. Acting as antifreeze, cholesterol maintains the fluidity of the membrane. Cholesterol is more abundant in cold-weather animals than in warm-weather animals. In plants, which lack cholesterol, related compounds called sterols perform the same function.[4]

Lipid vesicles or liposomes are approximately spherical pockets that are enclosed by a lipid bilayer.[25] These structures are used in laboratories to study the effects of chemicals on cells by delivering the chemicals directly to the cell, as well as to gain more insight into cell membrane permeability. Lipid vesicles and liposomes are formed by first suspending a lipid in an aqueous solution and then agitating the mixture through sonication, resulting in a vesicle. By measuring the rate of efflux from the inside of the vesicle to the ambient solution, researchers can better understand membrane permeability.[citation needed] Vesicles can be formed with molecules and ions inside the vesicle by forming the vesicle with the desired molecule or ion present in the solution. Proteins can also be embedded into the membrane by solubilizing the desired proteins in the presence of detergents and attaching them to the phospholipids in which the liposome is formed.[citation needed] These provide researchers with a tool to examine various membrane protein functions.

Plasma membranes also contain carbohydrates, predominantly glycoproteins, but with some glycolipids (cerebrosides and gangliosides). Carbohydrates are important in the role of cell-cell recognition in eukaryotes; they are located on the surface of the cell, where they recognize host cells and share information. Viruses that bind to cells using these receptors cause an infection.[26] For the most part, no glycosylation occurs on membranes within the cell; rather, glycosylation generally occurs on the extracellular surface of the plasma membrane. The glycocalyx is an important feature in all cells, especially epithelia with microvilli. Recent data suggest the glycocalyx participates in cell adhesion, lymphocyte homing,[26] and many other processes. The penultimate sugar is galactose and the terminal sugar is sialic acid, as the sugar backbone is modified in the Golgi apparatus. Sialic acid carries a negative charge, providing an external barrier to charged particles.

The cell membrane has a large content of proteins, typically around 50% of membrane volume.[27] These proteins are important for the cell because they are responsible for various biological activities.
Approximately a third of thegenesinyeastcode specifically for them, and this number is even higher in multicellular organisms.[25]Membrane proteinsconsist of three main types: integral proteins, peripheral proteins, and lipid-anchored proteins.[4] As shown in the adjacent table, integral proteins are amphipathic transmembrane proteins. Examples of integral proteins include ion channels, proton pumps, and g-protein coupled receptors.Ion channelsallow inorganic ions such as sodium, potassium, calcium, or chlorine to diffuse down their electrochemical gradient across the lipid bilayer through hydrophilic pores across the membrane. The electrical behavior of cells (i.e. nerve cells) is controlled by ion channels.[4]Proton pumps are protein pumps that are embedded in the lipid bilayer that allow protons to travel through the membrane by transferring from one amino acid side chain to another. Processes such as electron transport and generating ATP use proton pumps.[4]A G-protein coupled receptor is a single polypeptide chain that crosses the lipid bilayer seven times responding to signal molecules (i.e. hormones and neurotransmitters). G-protein coupled receptors are used in processes such as cell to cell signaling, the regulation of the production of cAMP, and the regulation of ion channels.[4] The cell membrane, being exposed to the outside environment, is an important site of cell–cell communication. As such, a large variety of protein receptors and identification proteins, such asantigens, are present on the surface of the membrane. Functions of membrane proteins can also include cell–cell contact, surface recognition, cytoskeleton contact, signaling, enzymatic activity, or transporting substances across the membrane. Most membrane proteins must be inserted in some way into the membrane.[28]For this to occur, an N-terminus "signal sequence" of amino acids directs proteins to theendoplasmic reticulum, which inserts the proteins into a lipid bilayer. Once inserted, the proteins are then transported to their final destination in vesicles, where the vesicle fuses with the target membrane. The cell membrane surrounds thecytoplasmof living cells, physically separating theintracellularcomponents from theextracellularenvironment. The cell membrane also plays a role in anchoring thecytoskeletonto provide shape to the cell, and in attaching to theextracellular matrixand other cells to hold them together to formtissues.Fungi,bacteria, mostarchaea, andplantsalso have acell wall, which provides a mechanical support to the cell and precludes the passage oflarger molecules. The cell membrane isselectively permeableand able to regulate what enters and exits the cell, thus facilitating thetransportof materials needed for survival. The movement of substances across the membrane can be achieved by eitherpassive transport, occurring without the input of cellular energy, or byactive transport, requiring the cell to expend energy in transporting it. The membrane also maintains thecell potential. The cell membrane thus works as a selective filter that allows only certain things to come inside or go outside the cell. The cell employs a number of transport mechanisms that involve biological membranes: Prokaryotesare divided into two different groups,ArchaeaandBacteria, with bacteria dividing further intogram-positiveandgram-negative.Gram-negative bacteriahave both a plasma membrane and anouter membraneseparated byperiplasm; however, otherprokaryoteshave only a plasma membrane. 
These two membranes differ in many aspects. The outer membrane of the gram-negative bacteria differs from other prokaryotes due tophospholipidsforming the exterior of the bilayer, andlipoproteinsand phospholipids forming the interior.[33]The outer membrane typically has a porous quality due to its presence of membrane proteins, such as gram-negativeporins, which are pore-forming proteins. The inner plasma membrane is also generally symmetric whereas the outer membrane is asymmetric because of proteins such as the aforementioned. Also, for the prokaryotic membranes, there are multiple things that can affect the fluidity. One of the major factors that can affect the fluidity is fatty acid composition. For example, when the bacteriaStaphylococcus aureuswas grown at 37 °C for 24 h, the membrane exhibited a more fluid state instead of a gel-like state. This supports the concept that in higher temperatures, the membrane is more fluid than in colder temperatures. When the membrane is becoming more fluid and needs to become more stabilized, it will make longer fatty acid chains or saturated fatty acid chains in order to help stabilize the membrane.[34] Bacteriaare also surrounded by acell wallcomposed ofpeptidoglycan(amino acids and sugars). Some eukaryotic cells also have cell walls, but none that are made of peptidoglycan. The outer membrane of gram negative bacteria is rich inlipopolysaccharides, which are combined poly- or oligosaccharide and carbohydrate lipid regions that stimulate the cell's natural immunity.[35]The outer membrane canblebout into periplasmic protrusions under stress conditions or upon virulence requirements while encountering a host target cell, and thus such blebs may work as virulence organelles.[36]Bacterial cells provide numerous examples of the diverse ways in which prokaryotic cell membranes are adapted with structures that suit the organism's niche. For example, proteins on the surface of certain bacterial cells aid in their gliding motion.[37]Many gram-negative bacteria have cell membranes which contain ATP-driven protein exporting systems.[37] According to thefluid mosaic modelofS. J. SingerandG. L. Nicolson(1972), which replaced the earliermodel of Davson and Danielli, biological membranes can be considered as atwo-dimensional liquidin which lipid and protein molecules diffuse more or less easily.[38]Although the lipid bilayers that form the basis of the membranes do indeed form two-dimensional liquids by themselves, the plasma membrane also contains a large quantity of proteins, which provide more structure. Examples of such structures are protein-protein complexes, pickets and fences formed by the actin-basedcytoskeleton, and potentiallylipid rafts. Lipid bilayersform through the process ofself-assembly. The cell membrane consists primarily of a thin layer ofamphipathicphospholipidsthat spontaneously arrange so that the hydrophobic "tail" regions are isolated from the surrounding water while the hydrophilic "head" regions interact with the intracellular (cytosolic) and extracellular faces of the resulting bilayer. This forms a continuous, sphericallipid bilayer. Hydrophobic interactions (also known as thehydrophobic effect) are the major driving forces in the formation of lipid bilayers. An increase in interactions between hydrophobic molecules (causing clustering of hydrophobic regions) allows water molecules to bond more freely with each other, increasing the entropy of the system. 
This complex interaction can include noncovalent interactions such asvan der Waals, electrostatic and hydrogen bonds. Lipid bilayers are generally impermeable to ions and polar molecules. The arrangement of hydrophilic heads and hydrophobic tails of the lipid bilayer prevent polar solutes (ex. amino acids, nucleic acids, carbohydrates, proteins, and ions) from diffusing across the membrane, but generally allows for the passive diffusion of hydrophobic molecules. This affords the cell the ability to control the movement of these substances viatransmembrane proteincomplexes such as pores, channels and gates.Flippasesandscramblasesconcentratephosphatidyl serine, which carries a negative charge, on the inner membrane. Along withNANA, this creates an extra barrier to chargedmoietiesmoving through the membrane. Membranes serve diverse functions ineukaryoticandprokaryoticcells. One important role is to regulate the movement of materials into and out of cells. The phospholipid bilayer structure (fluid mosaic model) with specific membrane proteins accounts for the selective permeability of the membrane and passive and active transport mechanisms. In addition, membranes in prokaryotes and in the mitochondria and chloroplasts of eukaryotes facilitate the synthesis of ATP through chemiosmosis.[8] Theapical membraneorluminal membraneof a polarized cell is the surface of the plasma membrane that faces inward to thelumen. This is particularly evident inepithelialandendothelial cells, but also describes other polarized cells, such asneurons. Thebasolateral membraneor basolateral cell membrane of a polarized cell is the surface of the plasma membrane that forms its basal and lateral surfaces.[39]It faces outwards, towards theinterstitium, and away from the lumen. Basolateral membrane is a compound phrase referring to the terms "basal (base) membrane" and "lateral (side) membrane", which, especially in epithelial cells, are identical in composition and activity. Proteins (such as ion channels andpumps) are free to move from the basal to the lateral surface of the cell or vice versa in accordance with thefluid mosaic model.Tight junctionsjoin epithelial cells near their apical surface to prevent the migration of proteins from the basolateral membrane to the apical membrane. The basal and lateral surfaces thus remain roughly equivalent[clarification needed]to one another, yet distinct from the apical surface. Cell membrane can form different types of "supramembrane" structures such ascaveolae,postsynaptic density,podosomes,invadopodia,focal adhesion, and different types ofcell junctions. These structures are usually responsible forcell adhesion, communication,endocytosisandexocytosis. They can be visualized byelectron microscopyorfluorescence microscopy. They are composed of specific proteins, such asintegrinsandcadherins. Thecytoskeletonis found underlying the cell membrane in the cytoplasm and provides a scaffolding for membrane proteins to anchor to, as well as formingorganellesthat extend from the cell. Indeed, cytoskeletal elements interact extensively and intimately with the cell membrane.[40]Anchoring proteins restricts them to a particular cell surface — for example, the apical surface of epithelial cells that line thevertebrategut— and limits how far they may diffuse within the bilayer. The cytoskeleton is able to form appendage-like organelles, such ascilia, which aremicrotubule-based extensions covered by the cell membrane, andfilopodia, which areactin-based extensions. 
These extensions are ensheathed in membrane and project from the surface of the cell in order to sense the external environment and/or make contact with the substrate or other cells. The apical surfaces of epithelial cells are dense with actin-based finger-like projections known asmicrovilli, which increase cell surface area and thereby increase the absorption rate of nutrients. Localized decoupling of the cytoskeleton and cell membrane results in formation of ableb. The content of the cell, inside the cell membrane, is composed of numerousmembrane-bound organelles, which contribute to the overall function of the cell. The origin, structure, and function of each organelle leads to a large variation in the cell composition due to the individual uniqueness associated with each organelle. The cell membrane has different lipid and protein compositions in distincttypes of cellsand may have therefore specific names for certain cell types. Thepermeabilityof a membrane is the rate of passivediffusionof molecules through the membrane. These molecules are known aspermeantmolecules. Permeability depends mainly on theelectric chargeandpolarityof the molecule and to a lesser extent themolar massof the molecule. Due to the cell membrane's hydrophobic nature, small electrically neutral molecules pass through the membrane more easily than charged, large ones. The inability of charged molecules to pass through the cell membrane results inpH partitionof substances throughout thefluid compartmentsof the body[citation needed].
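For simple passive diffusion, the dependences described above are commonly collected into the solubility–diffusion expression for the permeability coefficient; this is the standard textbook relation rather than anything specific to the references cited here:

J = P\,(C_{\text{out}} - C_{\text{in}}), \qquad P = \frac{K\,D_{m}}{d}

where J is the solute flux across the membrane, C_{\text{out}} and C_{\text{in}} are the concentrations on either side, K is the solute's membrane–water partition coefficient, D_{m} is its diffusion coefficient within the bilayer, and d is the membrane thickness. Small, uncharged, lipid-soluble molecules have a relatively large K and hence a large P, while ions and large polar molecules have a very small K, which is why their movement depends on channels and transporters.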
https://en.wikipedia.org/wiki/Lateral_diffusion
This is alist ofmajorsocial gaming networks. The list is not exhaustive and is limited to notable, well-known services.
https://en.wikipedia.org/wiki/List_of_social_gaming_networks
Asocial networking serviceis an online platform that people use to build social networks orsocial relationshipswith other people who share similar personal or career interests, activities, backgrounds or real-life connections. This is a list of notable activesocial network services, excludingonline dating services, that have Wikipedia articles. For defunct social networking websites, seeList of defunct social networking services.
https://en.wikipedia.org/wiki/List_of_social_networking_services
Mobile social networking is social networking where individuals with similar interests converse and connect with one another through their mobile phone and/or tablet. Much like web-based social networking, mobile social networking occurs in virtual communities. Many web-based social networking sites, such as Facebook and Twitter, have created mobile applications to give their users instant and real-time access from anywhere they have access to the Internet. Additionally, native mobile social networks have been created to allow communities to be built around mobile functionality. More and more, the line between mobile and web is being blurred as mobile apps use existing social networks to create native communities and promote discovery, and web-based social networks take advantage of mobile features and accessibility. As the mobile web evolved from proprietary mobile technologies and networks to full mobile access to the Internet, the distinction between the two shifted. While mobile and web-based social networking systems often work symbiotically to spread content, increase accessibility, and connect users, consumers are increasingly spending their attention on native apps compared to web browsers.[citation needed]

The evolution of social networking on mobile networks started in 1999 with basic chatting and texting services. With the introduction of various technologies in mobile networks, social networking has reached an advanced level over four generations.[1]

The technologies used in the first generation were application-based and pre-installed on mobile handsets.[2] Features included text-only chat via chat rooms. The people who used these services were anonymous. The services of this generation's mobile social networks could be used on a pay-as-you-go or subscription-to-service basis.

In the second generation, the introduction of 3G and camera phones added many features, such as uploading photos, mobile search for people based on profile, and contacting or flirting with another person anonymously. These features were available regionally in Japan, Korea, Australia, Western Europe, and the US. The applications were mostly useful for dating purposes. The services of this generation's mobile social networks could be used on a pay-as-you-go or subscription-to-service basis.

Experiments with the third generation of mobile social networks started in 2006, and it was adopted widely in 2008/2009. This generation brought tremendous changes and made mobile social networks a part of daily life. Its features included a richer user experience, automatic publishing to web profiles and status updates, some Web 2.0 features, search by group/join by interests, alerts, location-based services, and content sharing (especially music). Technologies included WAP 2.0, Java on the server, MMS, and voice capture. The applications introduced were customized around general interests such as music and mobile-specific content distribution. Regional distribution of this generation of mobile social networks included Japan, Korea, Western Europe, and North America. Advertising and ad-supported content became increasingly important. The services in this generation could be used with pay-as-you-go plans; subscription-based plans remained popular as networks increased their scale to become content distribution platforms.

The fourth generation began in 2008 and was fully reached by 2010. All the features of the third generation were advanced further in this generation of mobile social networks.
The features of this generation include the features of the third generation, plus the ability to hide or mask one's presence, asynchronous video conversation, multi-point audio chat conversation with one button, and multiplayer mobile gaming. The technologies that made these features possible include Web 2.0 widgets, Flash Lite, OpenSocial, and the Open Handset Alliance. The business model of previous generations continued, along with virtual currency: the purchase and trade of virtual goods.

In parallel with the growth of various technologies in mobile networks, the number of hours spent per adult on mobile devices per day has increased dramatically since 2008.[3] As of 2014, mobile devices have surpassed desktops and laptops as the most used devices per day for internet usage. A steady increase in mobile application usage over the past few years has contributed to the rise of mobile social networks, as well as to the diversity of their usage.[4] As the use of mobile social networks has increased, location-based services within these networks have also been increasing. Social network service companies now provide more location-based services to match customers' wide use of mobile devices and for their convenience.[5]

Mobile social networking sites allow users to create a profile, send and receive messages via phone or computer, and visit an online version of a mobile site. Different networking sites have adopted different models. Most of these sites have many unique features or special functions, but the main function of each site remains the same as that of other services. All these sites can be categorized according to the following business models and usage.

Just as there are many online social networking sites, such as Facebook and Twitter, there are just as many social networks on mobile devices. They offer a vast number of functions, including multimedia posts, photo sharing, and instant messaging. Most of these mobile apps offer free international calling and texting capabilities. Today, social networking apps are not just for the social aspect but are frequently used for professional purposes as well, such as on LinkedIn, which is still growing steadily.[6] Along with sharing multimedia posts and instant messaging, social networks are commonly used to connect immigrants in a new country. While the thought of moving to a new country may be intimidating for many, social media can be used to connect immigrants from the same country and make assimilation a little less stressful.[7]

The messaging model is focused on the ability to send short, text-based messages to an individual, a group of close friends, or even a large group of classmates, simultaneously. This category enables messages to reach the right people as quickly as possible. Many messaging apps are very popular, perhaps even more so than classical texting. Some social network platforms, such as Facebook, have their own native messaging applications, such as Facebook Messenger. Different countries have a certain messenger that is predominant, like China with WeChat, Korea with KakaoTalk, and the US with WhatsApp.[8]

Media sharing can be viewed as an advanced version of the messenger category. In addition to text messages, audio and video files can be transmitted among a group through services such as Skype or Oovoo, which are forms of online video chatting. In the case of Instagram and Vine, photos and videos of personal lives are shared either with friends or with the public.
Similarly, Pinterest is used to share photos, but on a more community level. Mary Meeker's KPCB report stated that time spent on short video apps climbed 360% in 2017.[9] The largest media sharing app today is YouTube, which allows people to post videos and share them with the public.[10] Many of these services store media content online for easy storage and access.

Some mobile social networks, such as Yelp, FourSquare, and YikYak, allow users to search for local venues. Many of these apps publish crowd-sourced reviews and tips about restaurants, shops, places of interest, and more. Yelp and FourSquare also personalize each user's database according to their latest searches and interests to make searching more efficient.

The gaming model is about connecting people through both multi-player and competitive single-player games. Mobile devices are continually increasing in graphics performance and computing power, making them capable gaming devices.[11] The leader in this category is Zynga, creator of Farmville and Words with Friends, though it has suffered a decline.[12] Hearthstone is another popular mobile game in which players use monster and spell cards to fight each other. Many games also introduce the idea of having another player as an "ally" during game play. For example, in Naruto Blazing, players can choose one person from a set of players to be on their team while fighting enemies throughout the game. Mobile social networks can also connect people outside of the mobile environment. Pokémon Go incorporated augmented reality to allow players to catch Pokémon while physically together outdoors. Players can also battle each other at gyms in various locations around the world. Facebook has also integrated games through its chat messenger. For example, friends can play chess by sending "@fbchess play" to the other person, or basketball by sending a basketball emoji and clicking on the emoji.[13]

Dating apps are location-based apps that allow users to create a profile and be matched with others who have similar interests. Some of these sites use radar to ping a user if there is a matching single profile within a certain distance. Tinder was the first dating app to start the trend and has one of the largest user bases. Other dating apps include Coffee Meets Bagel and OkCupid. These sites are marked by serious security measures, so that no personal details are released without the user's consent. However, there have still been several dangerous incidents that raised questions about whether Tinder-like apps are safe and should be kept around.[14]

Music apps connect people by sharing playlists and letting users see what other people are listening to. Spotify, a very popular music service, is also used for social networking in the sense that people can see what their friends are listening to at the moment. Users can also follow certain artists or friends, which is similar to "liking" a post on Facebook.[15] Other social media music apps include radio stations like Pandora and last.fm.

Recently, mobile social networks have also been used to motivate individuals to stick to their fitness and health goals. These social networks either work as a form of encouragement, rewarding the individual when they have accomplished a goal, or as a form of punishment, disciplining those who fail to accomplish their goals through a monetary cost or social pressure. An example of such a network is PACT.
In PACT, individuals set a weekly goal to exercise more or eat healthier and choose a monetary amount that they will pay if they do not succeed. Using the app, they can prove that they were at the gym through GPS, or that they ate healthy meals by uploading pictures of the meals. If they succeed in their goal, they earn cash paid by members who did not keep to their goals. Strava is another mobile social network application that lets users keep track of their activities using GPS and analyze their performance through metrics such as speed and distance. Using the social network, users can meet other individuals who are into the same activities and find out about new track routes, challenges, and other athletic content.

Mobile commerce, or m-commerce, is a branch of e-commerce that is available in the form of apps and mobile sites. In some apps, such as letgo, it is easy for the buyer to talk to the seller about the specifics of a product or to negotiate the price, and this takes on a form of social networking. The app also narrows searches down by city and topic to make them more efficient. Some major e-commerce sites, such as Amazon and eBay, are also available as apps, so that people can shop anytime and anywhere. Some mobile social media networks, in particular, add m-commerce functionality to their applications. One instance is Facebook Marketplace, where people can sell and purchase products through their mobile devices. Many e-commerce and m-commerce applications are also increasingly developed to interface with other applications, such as mobile payment, banking, and ticketing applications, so that customers can easily pay or accept payments.[16][17] Another instance is Instagram, which in 2018 became open to merchants using the Shopify platform.[18]

Mobile payment social networks such as Venmo and Square Cash allow person-to-person money transfers between family and friends, with a swipeable feed of payment details similar to Facebook's News Feed.

The rise of the digital age has made social media a lasting trend. Facebook, still the leader among social networks, was initially web-based and later extended access to mobile browsers and smartphone apps. Compared with Twitter, Instagram, and Pinterest, Facebook continues to dominate the social media world. As of the fourth quarter of 2015, 823 million Facebook users accessed the social network exclusively through mobile devices, up from 526 million users in the previous year. Instagram, by contrast, started as a mobile app and later developed a web-based platform as well. In 2016, there were roughly 1.6 billion active users around the world. Moreover, a study of the usage of the most popular mobile social networks in the United States showed that social media audiences spent a total of 230 billion minutes on Facebook in 2014, 80% more than on Instagram. As of January 2016, 52% of users in North America accessed social media through mobile devices, while the global mobile social penetration rate was 27%.[19] A 2017 report indicated that around 1 billion users would visit Facebook via mobile devices during that year, with the US market playing a significant role: nearly 80% of Facebook users used mobile devices to access their accounts. Facebook mobile advertising revenue accounted for 10 billion dollars, or 74% of the company's total revenue.
It showed that by 2018, more than 75% of Facebook users worldwide would access the service via their mobile phones.[20]

Safety issues (including security, privacy, and trust) in mobile social networks concern the condition of being protected against different types of failure, damage, error, accident, harm, or any other undesirable event while mobile users contact each other in mobile environments. However, the lack of a protective infrastructure in these networks has turned them into convenient targets for various perils. This is the main reason why mobile social networks carry disparate and intricate safety concerns and face a wide range of challenging safety problems.[21]

There have been cases where a user was caused bodily harm through mobile social media. For example, Kurt Eichenwald was sent a tweet with a flashing animated image by another user who knew that Eichenwald had epilepsy, causing a seizure.[22] As a result of these dangers, many mobile social networks, such as Twitter and Facebook, have implemented various methods for protecting user safety, such as removing harmful users, detecting malware, and verifying a user's identity; however, these policies are still in the works.

Other than online safety issues, the evolution of mobile devices has also introduced new offline, or physical, safety concerns. The distractions caused by mobile social networks have caused numerous accidents because users do not pay attention to their surroundings. According to the National Safety Council, nearly 330,000 injuries occur each year from accidents in which the driver was texting while driving.[23] The safety issues caused by distractions from mobile social networks became more prominent after the release of Pokémon Go in July 2016. In Pokémon Go, users can catch Pokémon while walking around outdoors on their phones. While the game has positive impacts, such as getting players to exercise, increasing museum and theme park visitors, and helping single people find dates, it has also led to more accidents. In California, two men fell over 40 feet from an ocean bluff while playing Pokémon Go.[24] In Auburn, a driver went off the road and hit a tree because he was playing Pokémon Go while driving.[25] In Pittsburgh, a teenager crossed a highway to catch a Pokémon and was hit by a car because she was distracted.[26]

Another safety concern arose from mobile dating applications such as Tinder and Hinge, where single people are matched up with each other to go on dates. These environments make it much easier for criminals to commit crimes such as rape and murder, because it is difficult for users to completely know the other person before agreeing to meet them face to face. In England and Wales, there were 204 reported crimes linked to Tinder or Grindr in 2014.[27] This number rose to 412 in 2015. On November 23, 2016, Stephen Port was convicted of the rape and murder of four men whom he had met on Grindr. In April 2016, Ingrid Lyne was murdered, and her accused murderer was a man she had met on a dating app.[28] While there are many dangers to meeting people online, online dating has also successfully helped single people find love and marriage. Increasing the safety procedures around mobile dating applications is ongoing work for police forces and for the developers of these applications.

While Japan, Korea, and China have higher usage rates of mobile social networks than Western countries, the United States is also a prevalent user of mobile social networks.
The US has a population of 303.82 million people and a mobile penetration of 72%, with 219.73 million mobile subscribers in 2008. Informa forecast the number of mobile subscribers to rise to 243.27 million by 2013.[1]

The mobile data market in the US is at a developed stage of growth, in which non-messaging data revenues account for 20% of US operators' overall data revenues. In September 2012, the CTIA (Cellular Telephone Industries Association) announced that data service revenues had risen 40% to US$14.8 billion. The CTIA also announced that SMS usage had maintained its strong growth.[1]

Social networking began in the online space but has rapidly spread to mobile platforms. Currently, mobile internet usage is being driven by mobile social networking. Data show that the US has 220.14 million internet users, which is 72.5% of the population. Flat-rate data plans have been prevalent in the US for a number of years, but customer adoption of mobile internet was slow until 2008. The introduction of the iPhone, however, markedly increased the market for mobile internet. iPhones have transformed the mobile social network market, and today there is extensive mobile development of social network apps.[1]

The US mobile social networking market experienced steady growth in 2008, with 6.4 million mobile social network users. Since then, the number of mobile users has continued to grow, with forecasts projecting further growth through 2013.[1] According to Statista.com, the most popular social media networking app as of 2016 was Facebook, at 123.55 million monthly users, surpassing the next most popular app, Facebook Messenger, which had 97.86 million monthly users.[29]
https://en.wikipedia.org/wiki/Mobile_social_network
A professional network service (or, in an Internet context, simply a professional network) is a type of social network service that focuses on interactions and relationships for business opportunities and career growth, with less emphasis on activities in personal life.[1]

A professional network service is used by working individuals, job-seekers, and businesses to establish and maintain professional contacts,[2] to find work or hire employees, share professional achievements, sell or promote services, and stay up-to-date with industry news and trends. According to LinkedIn managing director Clifford Rosenberg in an interview with AAP in 2010, "[t]his is a call to action for professionals to re-address their use of social networks and begin to reap as many rewards from networking professionally as they do personally."

Businesses mostly depend on resources and information from outside the company; to get what they need, they have to reach out and network professionally with others, such as employees and clients, as well as with potential opportunities.[3] "Nardi, Whittaker, and Schwarz (2002) point out three main tasks that they believe networkers need to attend to keep a successful professional (intentional) network: building a network, maintaining the network, and activating selected contacts. They stress that networkers need to continue to add new contacts to their network to access as many resources as possible and to maintain their network by staying in touch with their contacts. This is so that the contacts are easy to activate when the networker has work that needs to be done."[4]

By using a professional network service, businesses can keep all of their networks up to date and in order, and the service helps them figure out the most efficient way to get in touch with each contact. A service that can do all of that helps relieve some of the stress of getting things done. Not all professional network services are online sites that help promote a business. Some services connect the user to promotional channels other than online sites, such as phone/Internet companies that provide services, and companies specifically designed to handle all of a business's promotion, both online and in person.

Professional network services started up throughout the world in 1997 and continue to grow. The first recognizable site to combine such features as creating profiles, adding friends, and searching for friends was SixDegrees.com. According to Boyd and Ellison's article, "Social Network Sites: Definition, History, and Scholarship", from 1997 to 2001 several community tools began supporting various combinations of profiles and publicly articulated Friends. Boyd and Ellison go on to say that the next wave began with Ryze.com in 2001, which was introduced as a new way "to help people leverage their business networks".[5]

A great deal of work goes into a professional network service: the hours invested, the types of people it serves, and its overall business model, including the professional interactions and the multiple services it deals with.[citation needed][vague] Some professional network services not only help promote a business but can also help in connecting with other people. Those services may include a specific phone and/or Internet company or a company that helps connect with other businesses. According to the Society for New Communications Research (SNCR), there are at least nine online professional networks in use.
Kaplan and Haenlein elaborate on five key considerations for companies when utilizing media. These include the importance of careful selection, the option to choose existing applications or develop custom ones, ensuring alignment with organizational activities, integrating a comprehensive media plan, and providing accessibility to all stakeholders. "Choosing the right medium for any given purpose depends on the target group to be reached and the message to be communicated. On one hand, each Social Media application usually attracts a certain group of people, and firms should be active wherever their customers are present. On the other hand, there may be situations whereby certain features are necessary to ensure effective communication, and these features are only offered by one specific application."[citation needed] "Sometimes you may decide to rely on various Social Media, or a set of different applications within the same group, to have the largest possible reach." "Using different contact channels can be a worthwhile and profitable strategy." According to the Society for New Communications Research at Harvard University, "the average professional belongs to 3–5 online networks for business use, and LinkedIn, Facebook, and Twitter are among the top used."[6] Social media and traditional media are "both part of the same: your corporate image" in the customers' eyes. "...once the firm has decided to utilize Social Media applications, it is worth checking that all employees may access them." According to the SNCR, "the convergence of Internet, mobile, and social media has taken significant shape as professionals rely on anywhere access to information, relationships, and networks."[7] "Half of the respondents report participating in 3 to 5 online professional networks. Another three in ten participate in 6 or more professional networks." "Popular social networks are now being used frequently as Professional Communities. More than nine in ten respondents indicated that they use LinkedIn and half reported using Facebook. Twitter and blogs were frequently listed as 'professional networks'."[8] According toMichael Rappa's article, Business models on the Web", "abusiness modelis the method of doing business by which a company can sustain itself – that is, generaterevenue. The business model spells out how a company makes money by specifying where it is positioned in thevalue chain." Rappa mentions that there are at least nine basic categories from which a business model can be separated. Those categories are abrokerage,advertising,infomediary,merchant,manufacturer,affiliate,community,subscription, andutility. "...a firm may combine several different models as part of its overall Internet business strategy." At first,Flickrstarted as a way to mainstreampublic relations.[9] When it comes to the social impact that professional network services have on today's society, it has proved to increase activity. According to the SNCR, "[t]hree quarters of respondents rely on professional networks to support business decisions. Reliance has increased for essentially all respondents over the past three years. Younger (20–35) and older professionals (55+) are more active users of social tools than middle-aged professionals. 
More people are collaborating outside their company wall than within their organizational intranet."[10] Since the internet and social media are a part of this "world where consumers can speak so freely with each other and businesses have increasingly less control over the information available about them in cyberspace", most firms and businesses are uncomfortable with all the freedom. According to Kaplan and Haenlein's article, "Users of the world, unite! The challenges and opportunities of Social Media", businesses are pushed aside and are only able to sit back and watch as their customers publicly post comments, which may or may not be well-written.[11]
https://en.wikipedia.org/wiki/Professional_network_service
The medium of television has had many influences on society since its inception. The belief that this impact has been dramatic has gone largely unchallenged in media theory. However, there is much dispute as to what those effects are, how serious the ramifications are, and whether these effects are more or less evolutionary with human communication. Current research is discovering that individuals suffering from social isolation can employ television to create what is termed a parasocial or faux relationship with characters from their favorite television shows and movies as a way of deflecting feelings of loneliness and social deprivation.[1] Just as an individual would spend time with a real person sharing opinions and thoughts, pseudo-relationships are formed with TV characters by becoming personally invested in their lives as if they were a close friend,[1] so that the individual can satiate the human desire to form meaningful relationships and establish themselves in society. Jaye Derrick and Shira Gabriel of the University at Buffalo and Kurt Hugenberg of Miami University found that when individuals are not able to participate in interactions with real people, they are less likely to report feelings of loneliness when watching their favorite TV show.[2] They refer to this finding as the social surrogacy hypothesis.[1] Furthermore, when an event such as a fight or argument disrupts a personal relationship, watching a favorite TV show can create a cushion and prevent the individual from experiencing the reduced self-esteem and feelings of inadequacy that often accompany such a perceived threat.[1] By providing a temporary substitute for the acceptance and belonging experienced through social relationships, TV helps to relieve feelings of depression and loneliness when those relationships are not available. This benefit is considered a positive consequence of watching television, as it can counteract the psychological damage caused by isolation from social relationships. Several studies have found that educational television has many advantages. The Media Awareness Network[3] explains in its article "The Good Things about Television"[4] that television can be a very powerful and effective learning tool for children if used wisely. The article states that television can help young people discover where they fit into society, develop closer relationships with peers and family, and teach them to understand complex social aspects of communication.[4] Dimitri Christakis cites studies in which those who watched Sesame Street and other educational programs as preschoolers had higher grades, read more books, placed more value on achievement, and were more creative. Similarly, while those exposed to negative role models suffered, those exposed to positive models behaved better.[5] In the Parent Circle (PC exclusives), Priscilla J. S. Selvaraj points out several benefits of watching TV on both an educational and an emotional level. According to Selvaraj, television provides greater awareness of current events and social norms.[6] She explains that TV can expose children to different languages, which can help them learn a new language, and argues that because children can learn from television outside the classroom, it can improve learning at school.[6] She also suggests that this kind of learning can lift children's mood and energy, making them more active and, in turn, healthier.
Emotionally, watching television can help strengthen the bond of a family.[6] That being said, spending time with family or loved ones can cause the body to release endorphins that improve mood as well. A study of an Italian community during the COVID-19 pandemic found that television may have lowered stress because it provided an escape from stressors in life.[7] The rich array of pejoratives for television (for example, "boob tube" and "chewing gum for the mind") indicates the disdain many people hold for this medium.[8] Newton N. Minow spoke of the "vast wasteland" that was the television programming of the day in his 1961 speech. Complaints about the social influence of television have been heard from the U.S. justice system as investigators and prosecutors decry what they refer to as "the CSI syndrome". They complain that, because of the popularity and considerable viewership of CSI and its spin-offs, juries today expect to be "dazzled" and will acquit criminals of charges unless presented with impressive physical evidence, even when motive, testimony, and lack of alibi are presented by the prosecution.[9] Television has also been credited with changing the norms of social propriety, although the direction and value of this change are disputed. Milton Shulman, writing about television in the 1960s, wrote that "TV cartoons showed cows without udders and not even a pause was pregnant," and noted that on-air vulgarity was highly frowned upon. Shulman suggested that, even by the 1970s, television was shaping the ideas of propriety and appropriateness in the countries the medium blanketed. He asserted that, as a particularly "pervasive and ubiquitous" medium, television could create a comfortable familiarity with and acceptance of language and behavior once deemed socially unacceptable. Television, as well as influencing its viewers, evoked an imitative response from other competing media as they struggled to keep pace and retain viewership or readership.[10] According to a study published in 2008, conducted by John Robinson and Steven Martin from the University of Maryland, people who are not satisfied with their lives spend 30% more time watching TV than satisfied people do. The research was conducted with 30,000 people between 1975 and 2006. This contrasted with a previous study, which indicated that watching TV was the happiest time of the day for some people. Based on his study, Robinson commented that the pleasurable effects of television may be likened to an addictive activity, producing "momentary pleasure but long-term misery and regret."[11] In 1989 and 1994, social psychologists Douglas T. Kenrick and Steven Neuberg, with co-authors, demonstrated experimentally that following exposure to photographs or stories about desirable potential mates, human subjects decrease their ratings of commitment to their current partners.[12][13] Citing the Kenrick and Neuberg studies, in 1994 evolutionary biologist George C. Williams and psychiatrist Randolph M.
Nesse observed that television (and other mass communications such as films) were arousing envy and lowering feelings of commitment to spouses as a consequence of broadcasting the lives of the most successful members of society (e.g. Lifestyles of the Rich and Famous) and of the entertainment and advertising industry's hiring of physically attractive actors and actresses.[14] Citing the research by Kenrick and Neuberg and their co-authors, social psychologist David Buss has also argued that the evolutionary mismatch arising from constant exposure to images of physically attractive women in advertising and entertainment likely causes lower levels of commitment by men to spouses and partners.[15] In 1948, 1 percent of U.S. households owned at least one television, while 75 percent did by 1955,[16] and by 1992, 60 percent of all U.S. households received cable television subscriptions.[17] In 1980, 1 percent of U.S. households owned at least one videocassette recorder, while 75 percent did by 1992.[16] From 1960 to 2011, the percentage of all U.S. adults who were married declined from 72 percent to a record low of 51 percent,[18] with the percentage of U.S. adults over the age of 25 who had never married rising to a record high of one-fifth by 2014 and the percentage of U.S. adults living without spouses or partners rising to 42 percent by 2017.[19][20] One theory holds that when a person plays video games or watches TV, the basal ganglia portion of the brain becomes very active and dopamine is released. Some scientists believe that the release of high amounts of dopamine reduces the amount of the neurotransmitter available for control of movement, perception of pain and pleasure, and formation of feelings.[21] A study conducted by Herbert Krugman found that in television viewers the right side of the brain is twice as active as the left side, which causes a state of hypnosis.[22] Research shows that watching television starting at a young age can profoundly affect children's development. These effects include obesity, language delays, and learning disabilities. Physical inactivity while viewing TV reduces necessary exercise and leads to over-eating. Language delays occur when a child does not interact with others; children learn language best from live interaction with parents or other individuals. Learning disabilities attributed to over-watching TV include ADHD, concentration problems, and even reduction of IQ. Children who watch too much television can thus have difficulties starting school because they are not interested in their teachers. Children should watch at most two hours of television daily, if any.[23] In his book Bowling Alone, Robert D. Putnam noted a decline of public engagement in local social and civic groups from the 1960s to the 1990s. He suggested that television and other technology that individualizes leisure time accounted for 25% of this change.[24] Studies in both children and adults have found an association between the number of hours of television watched and obesity.[25] A study found that watching television decreases the metabolic rate in children to below that found in children at rest.[26] The American Academy of Pediatrics (AAP) recommends that children under two years of age should not watch any television and that children two and older should watch one to two hours at most.
Children who watch more than four hours of television a day are more likely to become overweight.[28][29] TV watching and other sedentary activities are associated with greater risk of heart attack,[30] diabetes, cardiovascular disease, and death.[31] Legislators, scientists, and parents are debating the effects of television violence on viewers, particularly youth. Fifty years of research on the impact of television on children's emotional and social development have not ended this debate.[32][33] Some scholars[32] have claimed that the evidence clearly supports a causal relationship between media violence and societal violence. However, other authors[33][34] note significant methodological problems with the literature and a mismatch between increasing media violence and decreasing crime rates in the United States. A 2002 article in Scientific American suggested that compulsive television watching, television addiction, was no different from any other addiction, a finding backed up by reports of withdrawal symptoms among families forced by circumstance to cease watching.[35] However, this view has not received widespread acceptance among scholars, and "television addiction" is not a diagnosable condition according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR). A longitudinal study in New Zealand involving 1,000 people (from childhood to 26 years of age) demonstrated that "television viewing in childhood and adolescence is associated with poor educational achievement by 12 years of age".[36] The same paper noted a significant negative association between time spent watching television per day as a child and educational attainment by age 26: the more time a child spent watching television at ages 5 to 15, the less likely they were to have a university degree by age 26. However, recent research (Schmidt et al., 2009) has indicated that, once other factors are controlled for, television viewing appears to have little to no impact on cognitive performance, contrary to previous thought,[37] although this study was limited to cognitive performance in childhood. Numerous studies have also examined the relationship between TV viewing and school grades.[38] A study published in Sexuality Research and Social Policy concluded that parental television involvement was associated with greater body satisfaction among adolescent girls and less sexual experience among both male and female adolescents, and that parental television involvement may influence self-esteem and body image, in part by increasing parent-child closeness.[39] However, a more recent article by Christopher Ferguson, Benjamin Winegard, and Bo Winegard cautioned that the literature on media and body dissatisfaction is weaker and less consistent than often claimed and that media effects have been overemphasized.[40] Similarly, recent work by Laurence Steinberg and Kathryn Monahan, using propensity score matching to control for other variables, found that viewing sexual media on television had no impact on teen sexual behavior in a longitudinal analysis.[41] Many studies have found little or no effect of television viewing on viewers[42] (see Freedman, 2002).
For example, a recent long-term outcome study of youth found no long-term relationship between watching violent television and youth violence or bullying.[43] On July 26, 2000, the American Academy of Pediatrics, the American Medical Association, the American Psychological Association, the American Academy of Family Physicians, and the American Academy of Child and Adolescent Psychiatry stated that "prolonged viewing of media violence can lead to emotional desensitization toward violence in real life."[44] However, scholars have since analyzed several statements in this release, both about the number of studies conducted and about a comparison with medical effects, and found many errors.[45] Television is used to promote commercial, social, and political agendas. Public service announcements (including those paid for by governing bodies or politicians), news and current affairs, television advertisements, advertorials, and talk shows are used to influence public opinion. The Cultivation Hypothesis suggests that some viewers may begin to repeat questionable or even blatantly fictitious information gleaned from the media as if it were factual. Considerable debate remains over whether the Cultivation Hypothesis is well supported by the scientific literature; nevertheless, the effectiveness of television for propaganda (including commercial advertising) is unsurpassed. The US military and State Department often turn to media to broadcast into hostile territories or nations.[46] While the effects of television programs depend on what is actually consumed, media theorist Neil Postman argued in Amusing Ourselves to Death (1985) that the dominance of entertaining, but not informative, programming creates a politically ignorant society, undermining democracy: "Americans are the best entertained and quite likely the least-informed people in the Western world."[47] In a four-part documentary series released by Frontline in 2007, former Nightline anchor Ted Koppel stated, "To the extent that we're now judging journalism by the same standards that we apply to entertainment – in other words, give the public what it wants, not necessarily what it ought to hear, what it ought to see, what it needs, but what it wants – that may prove to be one of the greatest tragedies in the history of American journalism."[48] Koppel also suggested that the decline in American journalism was made worse by the revocation of the FCC fairness doctrine provisions during the Reagan Administration, while in an interview with Reason, Larry King argued that the revocation of the Zapple doctrine's equal-time provisions in particular led to a decline in public discourse and in the quality of candidates running in U.S. elections.[49] Following the first presidential debate between John F. Kennedy and Richard Nixon during the 1960 U.S. presidential election (for which the equal-time rule was suspended), most television viewers thought Kennedy had won the debate, while most radio listeners believed that Nixon had won.[50][51][52][53] Gallup polls in October 1960 showed Kennedy moving into a slight but consistent lead over Nixon after the candidates had been in a statistical tie for most of August and September, before the debates occurred.[54] Kennedy would ultimately win the election with 49.7 percent of the popular vote to Nixon's 49.5 percent.
Other polls revealed that more than half of all voters had been influenced by the debates and 6 percent alone claimed that the debates alone had decided their choice.[55]Although the actual influence of television in these debates has been argued over time,[56]recent studies by political scientistJames N. Druckmandetermined that the visually-based television may have allowed viewers to evaluate the candidates more on their image (including perceived personality traits) than radio which allowed the transmission of voice alone. Termed "viewer-listener disagreement", this phenomenon may still affect the political scene of today.[57] After thepresidential debatesbetweenHillary ClintonandDonald Trumpduring the2016 U.S. presidential election,INSEADeconomics professor Maria Guadalupe andNew York University(NYU)Steinhardt Schooleducational theatre professor Joe Salvatore adapted excerpts of the debate transcripts into aone-actplaytitledHer Opponentthat replicated the language,facial expressions,gestures,tone of voice, otherbody language, andnonverbal communicationverbatim of Clinton and Trump during the debates by two fictional characters, but with the characters representing Clinton and Trump beinggender-flipped.[58]Later performedoff-Broadwayby fellow NYU Steinhardt School educational theatre professors Rachel Whorton and Daryl Embry for an open-ended run at theJerry Orbach Theaterbeginning in April 2017,[59]the audience members that attended its premiere at theProvincetown Playhousethe previous January were surveyed before the performance about the Clinton-Trump debates and after the performance about the gender-flipped adaptation of the debates, and the survey found that the Clinton supporters in the audience found Trump's debate performance not offensive and more effective when delivered by a woman and Clinton's debate performance to be offensive and less effective when delivered by a man.[60][61] InA Treatise of Human Nature(1739), philosopherDavid Humeobserved that "reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them."[62]Citing Hume, social psychologistJonathan Haidtargues that his research with anthropologistRichard Shwederon moral dumbfounding along with research about theevolution of moralityvindicates anintuitionist modelofhuman moral reasoning,[63]and Haidt cites verse 326 of theDhammapadawhereSiddhārtha Gautamacompares thedual process natureof human moral reasoningmetaphoricallyto awildelephantand atraineras a preferable descriptiveanalogyin comparison to a metaphor introduced byPlatoinPhaedrusof acharioteerand a pair ofhorses.[64][65] Along with differential psychologistDan P. 
McAdams, Haidt also argues that theBig Five personality traitsconstitute the lowest in a three-tiered model of personality with the highest level being a personalnarrative identityconstituted of events fromepisodic memorywithmoral developmentalsalience.[66]As an example, Haidt cites howRolling StonesguitaristKeith Richardsrecollects his experience as achoirboyinsecondary schoolin hisautobiographyas being formative in the development of Richards political views along what Haidt refers to as the "authority/respect"moral foundation.[67][68][69]Along with political scientist Sam Abrams, Haidt argues that political elites in the United States became more polarized beginning in the 1990s as theGreatest Generationand theSilent Generation(fundamentally shaped by their living memories ofWorld War I,World War II, and theKorean War) were gradually replaced withBaby boomers,Generation Jones, andGeneration X(fundamentally shaped by their living memories of theU.S. culture warof the1960s and 1970s).[70] Haidt argues that because of the difference in their life experience relevant to moral foundations, Baby boomers and Generation Jones may be more prone to what he calls "Manichean thinking,"[71]and along with Abrams andFIREPresidentGreg Lukianoff, Haidt argues that changes made byNewt Gingrichto theparliamentary procedureof theU.S. House of Representativesbeginning in1995made the chamber more partisan.[70][72]In 1923, 1 percent of U.S. households owned at least oneradio receiverwhich grew to a majority by 1931 and 75 percent did by 1937, while from 1948 to 1955, the percentage of U.S. households that owned at least onetelevisionincreased from 1 percent to 75 percent.[16][73]Because of this, many Baby boomers, Generation Jones, and Generation X have never known a world without television, and unlike during World War II (1939–1945) when most U.S. households owned radios but did not have television (and while radio broadcasts were regulated under theFCCMayflower doctrine), during theVietnam War(1955–1975)most U.S. households did own at least one television set. Also, unlike thefirst half of the 20th century, protests of the1960s civil rights movement(such as theSelma to Montgomery marchesin 1965) were televised, along with theStand in the Schoolhouse DoorbyAlabama GovernorGeorge Wallaceand theReport to the American People on Civil Rightsby President Kennedy in 1963 (which led to theCivil Rights Act of 1964and thelong-term political realignment of the Southern United States as a wholeto theRepublican Partyin turn), thepolice brutalityand theurban race rioting during the latter half of the decade, the multi-decade surge in theU.S. homicide rate(that increased by a factor of 2.5 between 1957 and 1980), rates ofrape,assault, robbery, theft, and other crimethat began in the mid-1960s and did not return to comparable levels until the mid-to-late1990s(after experiencing declining homicide rates during theGreat Depression, World War II, and during theinitial Cold War), and television was used increasingly used fornegative campaigninganddog-whistleattack adsonwedge issues(such as theDaisy advertisementin1964and theWillie Horton advertisementin1988).[list 1]In 1992, 60 percent of U.S. 
households heldcable television subscriptions in the United States,[17]and Haidt, Abrams, and Lukianoff argue that the expansion of cable television since the 1990s, andFox Newsin particular since 2015 in their coverage ofstudent activismoverpolitical correctnessatcolleges and universities in the United States, is one of the principal factors amplifying political polarization in the United States.[70][72]In September and December 2006 respectively,Luxembourgand theNetherlandsbecame the first countries to completelytransition from analog to digital television, while the United States commenced its transition in 2008. Haidt and journalistsBill BishopandHarry Entenhave noted the growing percentage of theU.S. presidential electorateliving in "landslide counties", counties where the popular vote margin between the Democratic and Republican candidate is 20 percentage points or greater.[82][83][84][85]In1976, only 27 percent of U.S. voters lived in landslide counties, which increased to 39 percent by1992.[69][86]Nearly half of U.S. voters resided in counties that voted forGeorge W. BushorJohn Kerryby 20 percentage points or more in2004.[87]In2008, 48 percent of U.S. voters lived in such counties, which increased to 50 percent in2012and increased further to 61 percent in2016.[69][86]In2020, 58 percent of U.S. voters lived in landslide counties.[88] At the same time, the 2020 U.S. presidential election marked the ninth consecutive presidential election where the victoriousmajor partynominee did not win apopular vote majority by a double-digit marginover the losing major party nominee(s), continuing the longest sequence of such presidential elections in U.S. history that began in 1988 and in2016eclipsed the previous longest sequences from1836through1860and from1876through1900.[89][note 1][90]In contrast, in 14 of the 17 U.S. presidential elections from1920through1984(or approximately 82 percent) the victorious candidate received more than 50 percent of the vote (with1948, 1960, and1968excepted) while in 10 of the 17 elections (or approximately 59 percent) the victorious candidate received a majority of the popular vote by a double-digit margin (1920,1924,1928,1932,1936,1952,1956, 1964,1972, and 1984). While women, who were "traditionally more isolated than men" were givenequal opportunityto consume shows about more "manly" endeavors, men's "feminine" sides are tapped by the emotional nature of many television programs.[91] Television played a significant role in thefeminist movement. Although most of the women portrayed on television conformed to stereotypes, television also showed the lives of men as well as news and current affairs. These "other lives" portrayed on television left many women unsatisfied with their currentsocialization. The representation of males and females on the television screen has been a subject of much discussion since the television became commercially available in the late 1930s. In 1964Betty Friedanclaimed that "television has represented theAmericanWoman as a "stupid, unattractive, insecure little household drudge who spends her martyred mindless, boring days dreaming of love—and plotting nasty revenge against her husband." As women started to revolt and protest to become equals in society in the 1960s and 1970s, their portrayal on the television was an issue that they addressed. JournalistSusan Faludisuggested, "The practices and programming of network television in the 1980s were an attempt to get back to those earlierstereotypesof women." 
Through television, even the most homebound women can experience parts of our culture once considered primarily male, such as sports, war, business, medicine, law, and politics. Since at least the 1990s there has been a trend of showing males as insufferable and possibly spineless fools (e.g. Homer Simpson, Ray Barone). The inherent intimacy of television makes it one of the few public arenas in our society where men routinely wear makeup and are judged as much on their personal appearance and their "style" as on their "accomplishments." Daytime television has changed little since 1930: soap operas and talk shows still dominate the daytime time slot. Primetime television since the 1950s has been aimed at and catered towards males. In 1952, 68% of characters in primetime dramas were male; in 1973, 74% of characters in these shows were male. In 1970 the National Organization for Women (NOW) took action, forming a task force to study and change the "derogatory stereotypes of women on television." In 1972 they challenged the licences of two network-owned stations on the basis of their sexist programming. In the 1960s the shows I Dream of Jeannie and Bewitched insinuated that the only way a woman could escape her duties was to use magic. Industry analyst Shari Anne Brill of Carat USA states, "For years, when men were behind the camera, women were really ditsy. Now you have female leads playing superheroes, or super business women." Current network broadcasting features a range of female portrayals, as is evident in a 2014 study showing that "42% of all major characters on television are female".[92] Proper recognition and promotion of the increasing number of women working on screen and behind the scenes of television projects are helping with the development of feminism, and now is the prime time to do so.[93] In August 2007, television was helping to empower women in India. In a survey conducted from 2001 to 2003, "Indian Women don't have a lot of control over their lives. More than half need permission from their husbands to go shopping."[94] Indian women were expected to be traditional housewives who cooked, cleaned, and gave birth to many children. Around that time, however, cable television arrived in Indian villages, and among its most popular shows were those whose "emancipated female characters are well-educated, work outside the home, control their own money, and have fewer children than rural women."[94] The attitudes of women who had access to television changed profoundly. For example, "After a village got cable, women's preference for male children fell by 12 percentage points. The average number of situations in which women said that wife beating is acceptable fell by about 10 percent. And the authors' composite autonomy index jumped substantially, by an amount equivalent to the attitude difference associated with 5.5 years of additional education."[94] Giving Indian women access to cable television opened their eyes to what their lives could be like; it has been said that the television set should be called the "Empowerment Box" because of the awareness it brought to the country. Some communications researchers argue that television serves as a developmental tool that teaches viewers about members of the upper, middle, working, and lower-poor classes.
Research conducted by Kathleen Ryan and Deborah Macey supports this theory by providing evidence collected from ethnographic surveys of television viewers along with critical observational analysis of the characters and structure of America's most popular television shows.[95] Within their limited scope, the findings of such studies demonstrate a shared public understanding of social class difference, learned through the dialogue and behavior of viewers' favorite on-screen characters.[96] Research has been conducted to determine how television informs self-identity while reinforcing stereotypes about culture. Some communication researchers have argued that television viewers have become reliant on prime-time reality shows and sitcoms to understand difference as well as the relationship between television and culture. In a 2013 study on matriarchal figures in the shows The Sopranos and Six Feet Under, researchers stated that the characters of Carmela Soprano and Ruth Fisher were written as stereotypical non-feminists who rely upon their husbands to provide an upscale lifestyle.[95] They posited that these portrayals served as evidence that the media influences stereotyped ideologies about class and stressed the importance of obtaining oral histories from "actual mothers, caretakers, and domestic laborers" who have never been accurately portrayed. Pop culture researchers have studied the social impacts of popular television shows, arguing that televised competition shows such as The Apprentice send out messages about identity that may cause viewers to feel inadequate. According to Justin Kidd, television media perpetuates narrow stereotypes about social classes while also teaching viewers to see themselves as inferior and insufficient due to personal aspects such as "race or ethnicity, gender or gender identity, social class, disability or body type, sexuality, age, faith or lack thereof, nationality, values, education, or another other aspect of our identities."[97] Television affects society's behavior and beliefs by publicizing stereotypes, especially regarding race. According to research done in 2015 by Dixon on the misrepresentation of race in local news, Blacks, in particular, were accurately depicted as perpetrators, victims, and officers. Although Latinos were accurately depicted as perpetrators, they continued to be underrepresented as victims and officers, while Whites remained significantly overrepresented as victims and officers.[98] In 2018, Deadline Hollywood observed that portrayals of diversity and intersectionality on television had risen, citing a poll about favorite characters and a number of new shows featuring diverse characters.[99] Research found that white and Black people are overrepresented in the casts of the top 50 U.S. television shows compared to their proportions in the U.S. Census,[100] while Hispanic and Asian people are underrepresented in the same shows.[100] The 2024 UCLA Hollywood Diversity Report found that audiences prefer television content that features diverse casts.[101] In its infancy, television was a time-dependent, fleeting medium; it acted on the schedule of the institutions that broadcast the television signal or operated the cable. Fans of regular shows planned their schedules so that they could be available to watch their shows at the time of broadcast. The term appointment television was coined by marketers to describe this kind of attachment.
The viewership's dependence on schedule lessened with the invention of programmable video recorders, such as thevideocassette recorderand thedigital video recorder. Consumers could watch programs on their own schedule once they were broadcast and recorded. More recently, television service providers also offervideo on demand, a set of programs that can be watched at any time. Bothmobile phonenetworks and theInternetcan give video streams, and video sharing websites have become popular. In addition, the jumps in processing power within smartphone and tablet devices has facilitated uptake of "hybridised" TV viewing, where viewers simultaneously watch programs on TV sets and interact with online social networks via their mobile devices. A 2012 study by Australian media companyYahoo!7found 36% of Australians will call or text family and friends and 41% will post on Facebook while watching TV.[102]Yahoo!7 has already experienced significant early uptake of its Fango mobile app, which encourages social sharing and discussion of TV programs on Australian free-to-air networks. The Japanese manufacturer Scalar has developed a very small TV system attached to eyeglasses, called "Teleglass T3-F".[103]
https://en.wikipedia.org/wiki/Social_aspects_of_television
Identityis the set of qualities, beliefs, personality traits, appearance, or expressions that characterize apersonor agroup.[1][2][3][4] Identity emerges during childhood as children start to comprehend theirself-concept, and it remains a consistent aspect throughout different stages of life.Identityis shaped by social and cultural factors and how others perceive and acknowledge one's characteristics.[5]The etymology of the term "identity" from the Latin nounidentitasemphasizes an individual's "sameness with others".[6]Identity encompasses various aspects such as occupational,religious, national,ethnicor racial,gender, educational, generational, and political identities, among others. Identity serves multiple functions, acting as a "self-regulatory structure" that provides meaning, direction, and a sense of self-control. It fosters internal harmony and serves as a behavioral compass, enabling individuals to orient themselves towards the future and establish long-term goals.[7]As an active process, it profoundly influences an individual's capacity to adapt to life events and achieve a state of well-being.[8][9]However, identity originates from traits or attributes that individuals may have little or no control over, such as their family background or ethnicity.[10] Insociology, emphasis is placed by sociologists oncollective identity, in which an individual's identity is strongly associated with role-behavior or the collection of group memberships that define them.[11]According to Peter Burke, "Identities tell us who we are and they announce to others who we are."[11]Identities subsequently guide behavior, leading "fathers" to behave like "fathers" and "nurses" to act like "nurses".[11] Inpsychology, the term "identity" is most commonly used to describepersonal identity, or the distinctive qualities or traits that make an individual unique.[12][13]Identities are strongly associated withself-concept,self-image(one'smental modelof oneself),self-esteem, andindividuality.[14][page needed][15]Individuals' identities are situated, but also contextual, situationally adaptive and changing. Despite their fluid character, identities often feel as if they are stable ubiquitous categories defining an individual, because of their grounding in the sense of personal identity (the sense of being a continuous and persistent self).[16] Mark Mazowernoted in 1998: "At some point in the 1970s this term ["identity"] was borrowed fromsocial psychologyand applied with abandon tosocieties,nationsand groups."[17] Erik Erikson(1902–94) became one of the earliestpsychologiststo take an explicit interest in identity. An essential feature of Erikson'stheory of psychosocial developmentwas the idea of theegoidentity (often referred to as theself), which is described as an individual's personal sense of continuity.[18]He suggested that people can attain this feeling throughout their lives as they develop and is meant to be an ongoing process.[19]The ego-identity consists of two main features: one'spersonal characteristicsand development, and the culmination of social andculturalfactors and roles that impact one's identity. In Erikson's theory, he describes eight distinct stages across the lifespan that are each characterized by a conflict between the inner, personal world and the outer, social world of an individual. 
Erikson identified the conflict of identity as occurring primarily during adolescence and described potential outcomes that depend on how one deals with this conflict.[20] Those who do not manage a resynthesis of childhood identifications are seen as being in a state of 'identity diffusion', whereas those who retain their given identities unquestioned have 'foreclosed' identities.[21] On some readings of Erikson, the development of a strong ego identity, along with proper integration into a stable society and culture, leads to a stronger sense of identity in general. Accordingly, a deficiency in either of these factors may increase the chance of an identity crisis or confusion.[22] The "Neo-Eriksonian" identity status paradigm emerged in 1966, driven largely by the work of James Marcia.[23] This model focuses on the concepts of exploration and commitment. The central idea is that an individual's sense of identity is determined in large part by the degree to which a person has made certain explorations and the extent to which they have commitments to those explorations or to a particular identity.[24] A person may display either relative weakness or relative strength in terms of both exploration and commitment. When the two dimensions are combined, four statuses are possible: identity diffusion, identity foreclosure, identity moratorium, and identity achievement. Diffusion is when a person avoids or refuses both exploration and commitment. Foreclosure occurs when a person makes a commitment to a particular identity without having explored other options. Identity moratorium is when a person avoids or postpones making a commitment but is still actively exploring their options and different identities. Lastly, identity achievement is when a person has both explored many possibilities and committed to their identity.[25] Although the self is distinct from identity, the literature of self-psychology can offer some insight into how identity is maintained.[26] From the vantage point of self-psychology, there are two areas of interest: the processes by which a self is formed (the "I"), and the actual content of the schemata which compose the self-concept (the "Me"). In the latter field, theorists have shown interest in relating the self-concept to self-esteem, the differences between complex and simple ways of organizing self-knowledge, and the links between those organizing principles and the processing of information.[27] Weinreich's identity variant similarly includes the categories of identity diffusion, foreclosure, and crisis, but with a somewhat different emphasis. Here, with respect to identity diffusion for example, an optimal level is interpreted as the norm, as it is unrealistic to expect an individual to resolve all their conflicted identifications with others; therefore we should be alert to individuals whose levels are much higher or lower than the norm – highly diffused individuals are classified as diffused, and those with low levels as foreclosed or defensive.[28] Weinreich applies the identity variant in a framework which also allows for the transition from one variant to another by way of biographical experiences and resolution of conflicted identifications situated in various contexts – for example, an adolescent going through family break-up may be in one state, whereas later, in a stable marriage with a secure professional role, they may be in another.
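Marcia's four statuses, and the movement between comparable states that Weinreich's framework describes, can be read as a simple classification over the two dimensions of exploration and commitment. The short Python sketch below is purely illustrative: the function name and the reduction of each dimension to a yes/no flag are simplifications introduced here, not part of either model, and in practice these constructs are assessed with graded interview and questionnaire measures rather than boolean flags.

from enum import Enum

class IdentityStatus(Enum):
    DIFFUSION = "identity diffusion"      # neither exploring nor committed
    FORECLOSURE = "identity foreclosure"  # committed without prior exploration
    MORATORIUM = "identity moratorium"    # exploring, not yet committed
    ACHIEVEMENT = "identity achievement"  # explored and committed

def classify(explored: bool, committed: bool) -> IdentityStatus:
    # Map the two (simplified) Marcia dimensions to one of the four statuses.
    if explored and committed:
        return IdentityStatus.ACHIEVEMENT
    if explored:
        return IdentityStatus.MORATORIUM
    if committed:
        return IdentityStatus.FORECLOSURE
    return IdentityStatus.DIFFUSION

# A hypothetical trajectory over time: commitment without exploration
# (foreclosure), later re-opened exploration (moratorium), then a
# considered commitment (achievement).
trajectory = [(False, True), (True, False), (True, True)]
for explored, committed in trajectory:
    print(classify(explored, committed).value)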
Hence, though there is continuity, there is also development and change.[29] Laing's definition of identity closely follows Erikson's, in emphasising the past, present and future components of the experienced self. He also develops the concept of the "metaperspective of self", i.e. the self's perception of the other's view of self, which has been found to be extremely important in clinical contexts such as anorexia nervosa.[30][page needed]Harré also conceptualises components of self/identity – the "person" (the unique being I am to myself and others) along with aspects of self (including a totality of attributes including beliefs about one's characteristics including life history), and the personal characteristics displayed to others. At a general level,self-psychologyexplores the question of how the personal self relates to the social environment. Theories in"psychological" social psychologyexplain an individual's actions in a group in terms of mental events and states. However, some"sociological" social psychologytheories go further by dealing with the issue of identity at the level of both individualcognitionand collective behavior. George C. Homans, former President of the American Sociological Association, in a study of group outcomes, found that social isolation would lead to increasingly random and unpredictable behavior. Such a notion was explored in depth during the 1970s period of transition, among others by cultural historian Christopher Lasch, in his bestselling book The Culture of Narcissism.[31] Many people gain a sense of positive self-esteem from their identity groups, which furthers a sense ofcommunityand belonging. Another issue that researchers have attempted to address is the question of why people engage indiscrimination, i.e., why they tend to favour those they consider a part of their "in-group" over those considered to be outsiders. Both questions have been given extensive attention by researchers working in thesocial identity tradition. For example, in work relating tosocial identity theory, it has been shown that merely crafting cognitive distinction between in- and out-groups can lead to subtle effects on people's evaluations of others.[27][32] Different social situations also compel people to attach themselves to different self-identities which may cause some to feel marginalized, switch between different groups and self-identifications,[33]or reinterpret certain identity components.[34]These different selves lead to constructed images dichotomized between what people want to be (the ideal self) and how others see them (the limited self). Educational background and occupational status and roles significantly influence identity formation in this regard.[35] Another issue of interest in social psychology is related to the notion that there are certainidentity formation strategieswhich a person may use to adapt to the social world.[36]Cote and Levine developed atypologywhich investigated the different manners of behavior that individuals may have.[36]Their typology includes: Kenneth Gergenformulated additional classifications, which include thestrategic manipulator, thepastiche personality, and therelational self. The strategic manipulator is a person who begins to regard all senses of identity merely as role-playing exercises, and who gradually becomes alienated from their social self. 
The pastiche personality abandons all aspirations toward a true or "essential" identity, instead viewing social interactions as opportunities to play out, and hence become, the roles they play. Finally, the relational self is a perspective by which persons abandon all sense of exclusive self, and view all sense of identity in terms of social engagement with others. For Gergen, these strategies follow one another in phases, and they are linked to the increase in popularity ofpostmodernculture and the rise of telecommunications technology. Anthropologistshave most frequently employed the termidentityto refer to this idea of selfhood in a looselyEriksonian way[37][better source needed]properties based on the uniqueness and individuality which makes a person distinct from others. Identity became of more interest to anthropologists with the emergence of modern concerns withethnicityandsocial movementsin the 1970s. This was reinforced by an appreciation, following the trend in sociological thought, of the manner in which the individual is affected by and contributes to the overallsocial context. At the same time, the Eriksonian approach to identity remained in force, with the result that identity has continued until recently to be used in a largely socio-historical way to refer to qualities of sameness in relation to a person's connection to others and to a particular group of people. The first favours a primordialist approach which takes the sense of self andbelongingto a collective group as a fixed thing, defined by objective criteria such as commonancestryand commonbiological characteristics. The second, rooted insocial constructionisttheory, takes the view that identity is formed by a predominantly political choice of certain characteristics. In so doing, it questions the idea that identity is a natural given, characterised by fixed, supposedly objective criteria. Both approaches need to be understood in their respective political and historical contexts, characterised by debate on issues of class, race andethnicity. While they have been criticized, they continue to exert an influence on approaches to the conceptualisation of identity today. These different explorations of 'identity' demonstrate how difficult a concept it is to pin down. Since identity is a virtual thing, it is impossible to define it empirically. Discussions of identity use the term with different meanings, from fundamental and abiding sameness, to fluidity, contingency, negotiated and so on. Brubaker and Cooper note a tendency in many scholars to confuse identity as a category of practice and as a category of analysis.[38]Indeed, many scholars demonstrate a tendency to follow their own preconceptions of identity, following more or less the frameworks listed above, rather than taking into account the mechanisms by which the concept is crystallised as reality. In this environment, some analysts, such as Brubaker and Cooper, have suggested doing away with the concept completely.[39]Others, by contrast, have sought to introduce alternative concepts in an attempt to capture the dynamic and fluid qualities of human social self-expression.Stuart Hallfor example, suggests treating identity as a process, to take into account the reality of diverse and ever-changing social experience.[40][41]Some scholars[who?]have introduced the idea of identification, whereby identity is perceived as made up of different components that are 'identified' and interpreted by individuals. 
The construction of an individual sense of self is achieved by personal choices regarding who and what to associate with. Such approaches are liberating in their recognition of the role of the individual in social interaction and the construction of identity. Anthropologists have contributed to the debate by shifting the focus of research: One of the first challenges for the researcher wishing to carry out empirical research in this area is to identify an appropriate analytical tool. The concept of boundaries is useful here for demonstrating how identity works. In the same way as Barth, in his approach to ethnicity, advocated the critical focus for investigation as being "the ethnic boundary that defines the group rather than the cultural stuff that it encloses",[42]social anthropologists such as Cohen and Bray have shifted the focus of analytical study from identity to the boundaries that are used for purposes of identification. If identity is a kind of virtual site in which the dynamic processes and markers used for identification are made apparent, boundaries provide the framework on which this virtual site is built. They concentrated on how the idea of community belonging is differently constructed by individual members and how individuals within the group conceive ethnic boundaries. As a non-directive and flexible analytical tool, the concept of boundaries helps both to map and to define the changeability and mutability that are characteristic of people's experiences of the self in society. While identity is a volatile, flexible and abstract 'thing', its manifestations and the ways in which it is exercised are often open to view. Identity is made evident through the use of markers such aslanguage, dress,behaviourand choice of space, whose effect depends on their recognition by other social beings. Markers help to create the boundaries that define similarities or differences between the marker wearer and the marker perceivers, their effectiveness depends on a shared understanding of their meaning. In a social context, misunderstandings can arise due to a misinterpretation of the significance of specific markers. Equally, an individual can use markers of identity to exert influence on other people without necessarily fulfilling all the criteria that an external observer might typically associate with such an abstract identity. Boundaries can be inclusive or exclusive depending on how they are perceived by other people. An exclusive boundary arises, for example, when a person adopts a marker that imposes restrictions on the behaviour of others. An inclusive boundary is created, by contrast, by the use of a marker with which other people are ready and able to associate. At the same time, however, an inclusive boundary will also impose restrictions on the people it has included by limiting their inclusion within other boundaries. An example of this is the use of a particular language by a newcomer in a room full of people speaking various languages. Some people may understand the language used by this person while others may not. Those who do not understand it might take the newcomer's use of this particular language merely as a neutral sign of identity. But they might also perceive it as imposing an exclusive boundary that is meant to mark them off from the person. On the other hand, those who do understand the newcomer's language could take it as an inclusive boundary, through which the newcomer associates themself with them to the exclusion of the other people present. 
Equally, however, it is possible that people who do understand the newcomer but who also speak another language may not want to speak the newcomer's language and so see their marker as an imposition and a negative boundary. It is possible that the newcomer is either aware or unaware of this, depending on whether they themself knows other languages or is conscious of the plurilingual quality of the people there and is respectful of it or not. Areligious identityis the set of beliefs and practices generally held by an individual, involving adherence to codified beliefs and rituals and study of ancestral or cultural traditions, writings, history, mythology, and faith and mystical experience. Religious identity refers to the personal practices related to communal faith along with rituals and communication stemming from such conviction. This identity formation begins with an association in the parents' religious contacts, and individuation requires that the person chooses the same or different religious identity than that of their parents.[43][44] TheParable of the Lost Sheepis one of the parables of Jesus. it is about a shepherd who leaves his flock of ninety-nine sheep in order to find the one which is lost. The parable of the lost sheep is an example of the rediscovery of identity. Its aim is to lay bare the nature of the divine response to the recovery of the lost, with the lost sheep representing a lost human being.[45][46][47] Christian meditationis a specific form of personality formation, though often used only by certain practitioners to describe various forms of prayer and the process of knowing thecontemplationof God.[48][49] InWestern culture, personal and secular identity are deeply influenced by the formation ofChristianity,[50][51][52][53][54]throughout history, various Western thinkers who contributed to the development of European identity were influenced by classical cultures and incorporated elements ofGreek cultureas well asJewish culture, leading to some movements such asPhilhellenismandPhilosemitism.[55][56][57][58][59] Due to the multiple functions of identity which include self regulation, self-concept, personal control, meaning and direction, its implications are woven into many aspects of life.[60] Identity transformations can occur in various contexts, some of which include: Immigrationandacculturationoften lead to shifts in social identity. The extent of this change depends on the disparities between the individual's heritage culture and the culture of the host country, as well as the level of adoption of the new culture versus the retention of the heritage culture. However, the effects of immigration and acculturation on identity can be moderated if the person possesses a strongpersonal identity. This established personal identity can serve as an "anchor" and play a "protective role" during the process of social and cultural identity transformations that occur.[7] Identity is an ongoing and dynamic process that impacts an individual's ability to navigate life's challenges and cultivate a fulfilling existence.[8][9]Within this process, occupation emerges as a significant factor that allows individuals to express and maintain their identity. Occupation encompasses not only careers or jobs but also activities such as travel, volunteering, sports, or caregiving. 
However, when individuals face limitations in their ability to participate or engage in meaningful activities, such as due to illness, it poses a threat to the active process and continued development of identity. Feeling socially unproductive can have detrimental effects on one'ssocial identity. Importantly, the relationship between occupation and identity is bidirectional; occupation contributes to the formation of identity, while identity shapes decisions regarding occupational choices. Furthermore, individuals inherently seek a sense of control over their chosen occupation and strive to avoid stigmatizing labels that may undermine their occupational identity.[8] In the realm of occupational identity, individuals make choices regarding employment based on the stigma associated with certain jobs. Likewise, those already working in stigmatized occupations may employ personal rationalization to justify their career path. Factors such as workplace satisfaction and overall quality of life play significant roles in these decisions. Individuals in such jobs face the challenge of forging an identity that aligns with their values and beliefs. Crafting a positive self-concept becomes more arduous when societal standards label their work as "dirty" or undesirable.[68][69][70]Consequently, some individuals opt not to define themselves solely by their occupation but strive for a holistic identity that encompasses all aspects of their lives, beyond their job or work. On the other hand, individuals whose identity strongly hinges on their occupation may experience a crisis if they become unable to perform their chosen work. Therefore, occupational identity necessitates an active and adaptable process that ensures bothadaptationand continuity amid shifting circumstances.[9] The modern notion of personal identity as a distinct and unique characteristic of individuals has evolved relatively recently in history beginning with the first passports in the early 1900s and later becoming more popular as a social science term in the 1950s.[71]Several factors have influenced its evolution, including:
Social media are interactive technologies that facilitate the creation, sharing and aggregation of content (such as ideas, interests, and other forms of expression) amongst virtual communities and networks.[1][2] Common features include:[2]

The term social in regard to media suggests platforms enable communal activity. Social media enhances and extends human networks.[6] Users access social media through web-based apps or custom apps on mobile devices. These interactive platforms allow individuals, communities, and organizations to share, co-create, discuss, participate in, and modify user-generated or self-curated content.[7][5][1] Social media is used to document memories, learn, and form friendships.[8] They may be used to promote people, companies, products, and ideas.[8] Social media can be used to consume, publish, or share news.

Social media platforms can be categorized based on their primary function. Social networking sites like Facebook, LinkedIn, and Threads focus on building personal and professional connections. Microblogging platforms, such as Twitter (now X) and Mastodon, emphasize short-form content and rapid information sharing. Media sharing networks, including Instagram, TikTok, YouTube, and Snapchat, allow users to share images, videos, and live streams. Discussion and community forums like Reddit, Quora, and Discord facilitate conversations, Q&A, and niche community engagement. Live streaming platforms, such as Twitch, Facebook Live, and YouTube Live, enable real-time audience interaction. Finally, decentralized social media platforms like Mastodon and Bluesky aim to provide social networking without corporate control, offering users more autonomy over their data and interactions.

Popular social media platforms with over 100 million registered users include Twitter, Facebook, WeChat, ShareChat, Instagram, Pinterest, QZone, Weibo, VK, Tumblr, Baidu Tieba, Threads and LinkedIn. Depending on interpretation, other popular platforms that are sometimes referred to as social media services include YouTube, Letterboxd, QQ, Quora, Telegram, WhatsApp, Signal, LINE, Snapchat, Viber, Reddit, Discord, and TikTok. Wikis are examples of collaborative content creation.

Social media outlets differ from old media (e.g. newspapers, TV, and radio broadcasting) in many ways, including quality,[9] reach, frequency, usability, relevancy, and permanence.[10] Social media outlets operate in a dialogic transmission system (many sources to many receivers) while traditional media operate under a monologic transmission model (one source to many receivers). For instance, a newspaper is delivered to many subscribers, and a radio station broadcasts the same programs to a city.[11]

Social media has been criticized for a range of negative impacts on children and teenagers, including exposure to inappropriate content, exploitation by adults, sleep problems, attention problems, feelings of exclusion, and various mental health maladies.[12][13] Social media has also received criticism as worsening political polarization and undermining democracy. Major news outlets often have strong controls in place to avoid and fix false claims, but social media's unique qualities bring viral content with little to no oversight. "Algorithms that track user engagement to prioritize what is shown tend to favor content that spurs negative emotions like anger and outrage.
Overall, most online misinformation originates from a small minority of "superspreaders," but social media amplifies their reach and influence."[14]

The PLATO system was launched in 1960 at the University of Illinois and subsequently commercially marketed by Control Data Corporation. It offered early forms of social media features with innovations such as Notes, PLATO's message-forum application; TERM-talk, its instant-messaging feature; Talkomatic, perhaps the first online chat room; News Report, a crowdsourced online newspaper and blog; and Access Lists, enabling the owner of a note file or other application to limit access to a certain set of users, for example, only friends, classmates, or co-workers.

ARPANET, which came online in 1969, had by the late 1970s enabled exchange of non-government/business ideas and communication, as evidenced by the network etiquette (or "netiquette") described in a 1982 handbook on computing at MIT's Artificial Intelligence Laboratory.[15] ARPANET evolved into the Internet in the 1990s.[16] Usenet, conceived by Tom Truscott and Jim Ellis in 1979 at the University of North Carolina at Chapel Hill and Duke University, was the first open social media app, established in 1980.

A precursor of the electronic bulletin board system (BBS), known as Community Memory, appeared by 1973. Mainstream BBSs arrived with the Computer Bulletin Board System in Chicago, which launched on February 16, 1978. Before long, most major US cities had more than one BBS, running on TRS-80, Apple II, Atari 8-bit computers, IBM PC, Commodore 64, Sinclair, and others. CompuServe, Prodigy, and AOL were three of the largest BBS companies and were the first to migrate to the Internet in the 1990s. Between the mid-1980s and the mid-1990s, BBSes numbered in the tens of thousands in North America alone.[17] Message forums were the signature BBS phenomenon throughout the 1980s and early 1990s.

In 1991, Tim Berners-Lee integrated HTML hypertext software with the Internet, creating the World Wide Web. This breakthrough led to an explosion of blogs, list servers, and email services. Message forums migrated to the web and evolved into Internet forums, supported by cheaper access as well as the ability to handle far more people simultaneously. These early text-based systems expanded to include images and video in the 21st century, aided by digital cameras and camera phones.[18]

The evolution of online services progressed from serving as channels for networked communication to becoming interactive platforms for networked social interaction with the advent of Web 2.0.[6]

Social media started in the mid-1990s with the invention of platforms like GeoCities, Classmates.com, and SixDegrees.com.[19] While instant messaging and chat clients existed at the time, SixDegrees was unique as it was the first online service designed for people to connect using their actual names instead of anonymously.
It boasted features like profiles, friends lists, and school affiliations, making it "the very first social networking site".[19][20] The platform's name was inspired by the "six degrees of separation" concept, which suggests that every person on the planet is just six connections away from everyone else.[21]

In the early 2000s, social media platforms gained widespread popularity with BlackPlanet (1999) preceding Friendster and Myspace,[22][23] followed by Facebook, YouTube, and Twitter.[24]

Research from 2015 reported that globally, users spent 22% of their online time on social networks,[25] likely fueled by the availability of smartphones.[26] As of 2023, as many as 4.76 billion people used social media,[27] some 59% of the global population.

A 2015 review identified four features unique to social media services:[2]

In 2019, Merriam-Webster defined social media as "forms of electronic communication (such as websites for social networking and microblogging) through which users create online communities to share information, ideas, personal messages, and other content (such as videos)."[28]

Social media encompasses an expanding suite of services:[29] Some services offer more than one type of service.[5]

Mobile social media refers to the use of social media on mobile devices such as smartphones and tablets. It is distinguished by its ubiquity, since users no longer have to be at a desk in order to participate on a computer. Mobile services can further make use of the user's immediate location to offer information, connections, or services relevant to that location. According to Andreas Kaplan, mobile social media activities fall among four types:[30]

Certain content has the potential to spread virally, an analogy for the way viral infections spread contagiously from individual to individual. Viral videos are one example. One user spreads a post across their network, which leads those users to follow suit. A post from a relatively unknown user can reach vast numbers of people within hours. Virality is not guaranteed; few posts make the transition.

Viral marketing campaigns are particularly attractive to businesses because they can achieve widespread advertising coverage at a fraction of the cost of traditional marketing campaigns. Nonprofit organizations and activists may also attempt to spread content virally. Social media sites provide specific functionality to help users re-share content, such as X's and Facebook's "like" options.[31]

Bots are automated programs that operate on the internet.[32] They automate many communication tasks. This has led to the creation of an industry of bot providers.[33]

Chatbots and social bots are programmed to mimic human interactions such as liking, commenting, and following.[34] Bots have also been developed to facilitate social media marketing.[35] Bots have led the marketing industry into an analytical crisis, as bots make it difficult to differentiate between human interactions and bot interactions.[36] Some bots violate platforms' terms of use, which can result in bans and campaigns to eliminate bots categorically.[37] Bots may even pose as real people to avoid prohibitions.[38]

'Cyborgs' (either bot-assisted humans or human-assisted bots[38]) are used for both legitimate and illegitimate purposes, from spreading fake news to creating marketing buzz.[39][40][41] A common use claimed to be legitimate is posting at a specific time:[42] a human writes the post content and the bot publishes it at a specified time.
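A minimal sketch of this kind of scheduled posting, using Python's standard sched module and a placeholder publish() function standing in for whatever platform API a real bot would call (the function name, timings, and post text here are illustrative only, not taken from any particular platform):

```python
import sched
import time

def publish(text: str) -> None:
    # Hypothetical placeholder: a real bot would call a platform API here.
    print(f"[{time.strftime('%H:%M:%S')}] posted: {text}")

# The human author writes the content ahead of time.
queued_posts = [
    (time.time() + 5, "Good morning! Here is today's update."),
    (time.time() + 10, "Reminder: the live Q&A starts this afternoon."),
]

scheduler = sched.scheduler(time.time, time.sleep)
for when, text in queued_posts:
    # Each post is published automatically at its scheduled time.
    scheduler.enterabs(when, 1, publish, argument=(text,))

scheduler.run()  # blocks until all queued posts have been published
```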
In other cases, cyborgs spread fake news.[38] Cyborgs may work as sock puppets, where one human pretends to be someone else, or operates multiple accounts, each pretending to be a person.

A multitude of United States patents are related to social media, and the number is growing rapidly.[citation needed] As of 2020, over 5000 social media patent applications had been published in the United States.[43] Only slightly over 100 patents had been issued.[44]

As an instance of technological convergence, various social media platforms adapted functionality beyond their original scope, increasingly overlapping with each other. Examples are the social hub site Facebook launching an integrated video platform in May 2007,[45] and Instagram, whose original scope was low-resolution photo sharing, introducing the ability to share quarter-minute 640×640 pixel videos[46] (later extended to a minute with increased resolution). Instagram later implemented stories (short videos self-destructing after 24 hours), a concept popularized by Snapchat, as well as IGTV, for seekable videos.[47] Stories were then adopted by YouTube.[48]

X, whose original scope was text-based microblogging, later adopted photo sharing,[49] then video sharing,[50][51] then a media studio for business users, after YouTube's Creator Studio.[52]

The discussion platform Reddit added an integrated image hoster replacing the external image sharing platform Imgur,[53] and then an internal video hosting service,[54] followed by image galleries (multiple images in a single post), known from Imgur.[55] Imgur implemented video sharing.[56][57]

YouTube rolled out a Community feature, for sharing text-only posts and polls.[58]

According to Statista, it is estimated that, in 2022, around 3.96 billion people were using social media globally. This number is up from 3.6 billion in 2020.[59] The following is a list of the most popular social networking services based on the number of active users as of January 2024 per Statista.[60]

A 2009 study suggested that individual differences may help explain who uses social media: extraversion and openness have a positive relationship with social media, while emotional stability has a negative sloping relationship with social media.[62] A 2015 study reported that people with a higher social comparison orientation appear to use social media more heavily than people with low social comparison orientation.[63]

Common Sense Media reported that children under age 13 in the United States use social networking services, although many social media sites require users to be 13 or older.[64] In 2017, the firm conducted a survey of parents of children from birth to age 8 and reported that 4% of children at this age used social media sites such as Instagram, Snapchat, or (now-defunct) Musical.ly "often" or "sometimes".[65] Their 2019 survey of Americans ages 8–16 reported that about 31% of children ages 8–12 use social media.[66] In that survey, teens aged 16–18 were asked when they started using social media. The median age was 14, although 28% said they started to use it before reaching 13.

Social media played a role in communication during the COVID-19 pandemic.[67] In June 2020, Cartoon Network and the Cyberbullying Research Center surveyed American tweens (ages 9–12) and reported that the most popular application was YouTube (67%).[68] (As age increased, tweens were more likely to have used social media apps and games.)
Similarly, Common Sense Media's 2020 survey of Americans ages 13–18 reported that YouTube was the most popular (used by 86% of 13- to 18-year-olds).[69] As children aged, they increasingly utilized social media services and often used YouTube to consume content.

While adults were using social media before the COVID-19 pandemic, more started using it to stay socially connected and to get pandemic updates. "Social media have become popularly use to seek for medical information and have fascinated the general public to collect information regarding corona virus pandemics in various perspectives. During these days, people are forced to stay at home and the social media have connected and supported awareness and pandemic updates."[70]

Healthcare workers and systems became more aware of social media as a place people were getting health information: "During the COVID-19 pandemic, social media use has accelerated to the point of becoming a ubiquitous part of modern healthcare systems."[71]

This also led to the spread of disinformation. On December 11, 2020, the CDC put out a "Call to Action: Managing the Infodemic".[72] Some healthcare organizations used hashtags as interventions and published articles on their Twitter data:[73] "Promotion of the joint usage of #PedsICU and #COVID19 throughout the international pediatric critical care community in tweets relevant to the coronavirus disease 2019 pandemic and pediatric critical care."[73]

However, others in the medical community were concerned about social media addiction, as it became an increasingly important context and therefore a "source of social validation and reinforcement", and were unsure whether increased social media use was harmful.[74]

Governments may use social media to (for example):[75]

Social media has been used extensively in civil and criminal investigations.[77] It has also been used to search for missing persons.[78] Police departments often make use of official social media accounts to engage with the public, publicize police activity, and burnish law enforcement's image;[79][80] conversely, video footage of citizen-documented police brutality and other misconduct has sometimes been posted to social media.[80]

In the United States, U.S. Immigration and Customs Enforcement identifies and tracks individuals via social media, and has apprehended some people via social media-based sting operations.[81] U.S. Customs and Border Protection (also known as CBP) and the United States Department of Homeland Security use social media data as influencing factors during the visa process, and monitor individuals after they have entered the country.[82] CBP officers have also been documented performing searches of electronics and social media behavior at the border, searching both citizens and non-citizens without first obtaining a warrant.[82]

As social media gained momentum among the younger generations, governments began using it to improve their image, especially among the youth. In January 2021, Egyptian authorities were reported to be using Instagram influencers as part of their media ambassadors program. The program was designed to revamp Egypt's image and to counter the bad press Egypt had received because of the country's human rights record. Saudi Arabia and the United Arab Emirates participated in similar programs.[83] Similarly, Dubai has extensively relied on social media and influencers to promote tourism. However, Dubai's laws have kept these influencers within limits so that they do not offend the authorities or criticize the city, politics, or religion.
The content of these foreign influencers is controlled to make sure that nothing portrays Dubai in a negative light.[84]

Many businesses use social media for marketing, branding,[85] advertising, communication, sales promotions, informal employee-learning/organizational development, competitive analysis, recruiting, relationship management/loyalty programs,[30] and e-commerce. Companies use social-media monitoring tools to monitor, track, and analyze conversations to aid in their marketing, sales and other programs. Tools range from free, basic applications to subscription-based tools. Social media offers information on industry trends. Within the finance industry, companies use social media as a tool for analyzing market sentiment, with uses ranging from marketing financial products and tracking market trends to identifying insider trading.[86] To exploit these opportunities, businesses need guidelines for use on each platform.[3]

Business use of social media is complicated by the fact that the business does not fully control its social media presence. Instead, it makes its case by participating in the "conversation".[87] Business use of social media[88] occurs on a customer-organizational level and on an intra-organizational level. Social media can encourage entrepreneurship and innovation, by highlighting successes and by easing access to resources that might not otherwise be readily available or known.[89]

Social media marketing can help promote a product or service and establish connections with customers. Social media marketing can be divided into paid media, earned media, and owned media.[90] Using paid social media, firms run advertising on a social media platform. Earned social media appears when firms do something that impresses stakeholders and they spontaneously post content about it. Owned social media is when the firm markets itself by creating and promoting content to its users.[91]

Primary uses are to create brand awareness, engage customers in conversation (e.g., customers provide feedback on the firm), and provide access to customer service.[92] Social media's peer-to-peer communication shifts power from the organization to consumers, since consumer content is widely visible and not controlled by the company.[93]

Social media personalities, often referred to as "influencers", are Internet celebrities who are sponsored by marketers to promote products and companies online. Research reports that these endorsements attract the attention of users who have not settled on which products/services to buy,[94] especially younger consumers.[95] The practice of harnessing influencers to market or promote a product or service to their following is commonly referred to as influencer marketing. In 2013, the United Kingdom Advertising Standards Authority (ASA) began advising celebrities to make it clear whether they had been paid to recommend a product or service, by using the hashtag #spon or #ad when endorsing. The US Federal Trade Commission issued similar guidelines.[96]

Social media platforms also enable targeting specific audiences with advertising.
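One simple way such targeting can be approximated is by matching users' activity (for example, the hashtags they post) against an advertiser's interest tags. The sketch below illustrates the idea with invented user identifiers and data; it is not any platform's actual targeting system:

```python
# Toy audience selection: pick users whose recent hashtags overlap
# with the interest tags an advertiser wants to target.
from typing import Dict, List, Set

def select_audience(user_hashtags: Dict[str, Set[str]],
                    campaign_tags: Set[str],
                    min_overlap: int = 1) -> List[str]:
    """Return user ids whose hashtag history overlaps the campaign tags."""
    audience = []
    for user_id, tags in user_hashtags.items():
        if len(tags & campaign_tags) >= min_overlap:
            audience.append(user_id)
    return audience

# Invented example data.
activity = {
    "user_a": {"#vaping", "#ejuice"},
    "user_b": {"#running", "#marathon"},
    "user_c": {"#eliquid", "#coupons"},
}

print(select_audience(activity, {"#ejuice", "#eliquid"}))  # ['user_a', 'user_c']
```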
Users of social media can share and comment on the advertisement, turning passive consumers into active promoters and even producers.[97] Targeting requires extra effort by advertisers to understand how to reach the right users.[3] Companies can use humor (such as shitposting) to poke fun at competitors.[98] Advertising can even inspire fan art, which can engage new audiences.[99] Hashtags (such as #ejuice and #eliquid) are one way to target interested users.[100]

User content can trigger peer effects, increasing consumer interest even without influencer involvement. A 2012 study focused on this communication reported that communication among peers can affect purchase intentions: a direct impact through encouraging conformity, and an indirect impact by increasing product engagement. This study claimed that peer communication about a product increased product engagement.[101]

Social media have a range of uses in politics.[102] Politicians use social media to spread their messages and influence voters.[103]

Dounoucos et al. reported that Twitter use by candidates was unprecedented during the US 2016 election.[104][105] The public increased its reliance on social-media sites for political information.[104] In the European Union, social media amplified political messages.[106] Foreign-originated social-media campaigns attempt to influence political opinion in another country.[107][108]

Social media was influential in the Arab Spring in 2011.[109][110][111][112] However, debate persists about the extent to which social media facilitated this.[113] Activists have used social media to report the abuse of human rights in Bahrain. They publicized the brutality of government authorities, who they claimed were detaining, torturing and threatening individuals. Conversely, Bahrain's government used social media to track and target activists. The government stripped citizenship from over 1,000 activists as punishment.[114]

Militant groups use social media as an organizing and recruiting tool.[115] Islamic State (also known as ISIS) used social media. In 2014, #AllEyesonISIS went viral on Arabic X.[116][117]

Social media use in hiring refers to the examination by employers of job applicants' (public) social media profiles as part of the hiring assessment. For example, the vast majority of Fortune 500 companies use social media as a tool to screen prospective employees and for talent acquisition.[118] This practice raises ethical questions. Employers and recruiters note that they have access only to information that applicants choose to make public. Many Western European countries restrict employers' use of social media in the workplace. States including Arkansas, California, Colorado, Illinois, Maryland, Michigan, Nevada, New Jersey, New Mexico, Utah, Washington, and Wisconsin protect applicants and employees from having to surrender usernames and passwords for social media accounts.[citation needed] Use of social media has caused significant problems for some applicants who are active on social media. A 2013 survey of 17,000 young people in six countries found that one in ten people aged 16 to 34 claimed to have been rejected for a job because of social media activity.[119][120]

Scientists use social media to share their scientific knowledge and research on platforms such as ResearchGate, LinkedIn, Facebook, X, and Academia.edu.[121] The most common platforms are X and blogs.
The use of social media reportedly has improved the interaction between scientists, reporters, and the general public.[citation needed] Over 495,000 opinions were shared on X related to science between September 1, 2010, and August 31, 2011.[122] Science-related blogs respond to and motivate public interest in learning, following, and discussing science. Posts can be written quickly and allow the reader to interact in real time with authors.[123] One study in the context of climate change reported that climate scientists and scientific institutions played a minimal role in online debate, exceeded by nongovernmental organizations.[124]

Academicians use social media activity to assess academic publications,[125] to measure public sentiment,[126] to identify influencer accounts,[127] or to crowdsource ideas or solutions.[128] Social media such as Facebook and X are also combined to predict elections via sentiment analysis.[129] Additional social media (e.g. YouTube, Google Trends) can be combined to reach a wider segment of the voting population, minimise media-specific bias, and inexpensively estimate electoral predictions which are on average half of a percentage point off the real vote share.[130]
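As a rough, much-simplified illustration of the sentiment-analysis idea behind such estimates, the sketch below scores invented posts about two hypothetical candidates with a tiny word list and converts the net sentiment into a naive share estimate; the word lists, data, and scoring rule are illustrative and far cruder than the published methods:

```python
# Toy sentiment tally: count positive/negative words per candidate,
# then turn net sentiment into a naive "share" estimate.
POSITIVE = {"great", "love", "support", "win"}
NEGATIVE = {"bad", "hate", "corrupt", "lose"}

posts = [
    ("candidate_a", "love the plan, great debate performance"),
    ("candidate_a", "corrupt record, hate the new proposal"),
    ("candidate_b", "support her all the way, she will win"),
]

scores = {}
for candidate, text in posts:
    words = [w.strip(",.") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    scores[candidate] = scores.get(candidate, 0) + (pos - neg)

# Shift scores so they are non-negative, then normalise to percentages.
offset = min(scores.values())
adjusted = {c: s - offset + 1 for c, s in scores.items()}
total = sum(adjusted.values())
for candidate, value in adjusted.items():
    print(f"{candidate}: {100 * value / total:.1f}% estimated share")
```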
In some places, students have been forced to surrender their social media passwords to school administrators.[131] Few laws protect students' social media privacy. Organizations such as the ACLU call for more privacy protection. They urge students who are pressured to give up their account information to resist.[132]

Colleges and universities may access applicants' internet services including social media profiles as part of their admissions process. According to Kaplan, Inc, a corporation that provides higher education preparation, in 2012 27% of admissions officers used Google to learn more about an applicant, with 26% checking Facebook.[133] Students whose social media pages include questionable material may be disqualified from admission processes. "One survey in July 2017, by the American Association of College Registrars and Admissions Officers, reported that 11 percent of respondents said they had refused to admit an applicant based on social media content. This includes 8 percent of public institutions, where the First Amendment applies. The survey reported that 30 percent of institutions acknowledged reviewing the personal social media accounts of applicants at least some of the time."[134]

Social media comments and images have been used in court cases including employment law, child custody/child support, and disability claims. After an Apple employee criticized his employer on Facebook, he was fired. When the former employee sued Apple for unfair dismissal, the court, after examining the employee's Facebook posts, found in favor of Apple, stating that the posts breached Apple's policies.[135] After a couple broke up, the man posted song lyrics "that talked about fantasies of killing the rapper's ex-wife" and made threats. A court found him guilty.[135][clarification needed] In a disability claims case, a woman who fell at work claimed that she was permanently injured; the employer used her social media posts to counter her claims.[135][additional citation(s) needed]

Courts do not always admit social media evidence, in part because screenshots can be faked or tampered with.[136] Judges may take emojis into account when assessing statements made on social media; in one Michigan case where a person alleged that another person had defamed them in an online comment, the judge disagreed, noting that an emoji after the comment indicated that it was a joke.[136] In a 2014 case in Ontario against a police officer regarding alleged assault of a protester during the G20 summit, the court rejected the Crown's application to use a digital photo of the protest that was anonymously posted online, because it included no metadata verifying its provenance.[136][additional citation(s) needed]

On April 9, 2024, the Spirit Lake Tribe in North Dakota and the Menominee Indian Tribe of Wisconsin sued social media companies (Meta Platforms' Facebook and Instagram, Snapchat, TikTok, YouTube, and Google), accusing them of 'deliberate misconduct'. Their lawsuit describes "a sophisticated and intentional effort that has caused a continuing, substantial, and longterm burden to the Tribe and its members," leaving scarce resources for education, cultural preservation and other social programs.[137][additional citation(s) needed]

Social media as a news source is defined as the use of online social media platforms such as Instagram, TikTok, and Facebook rather than the use of traditional media platforms like the newspaper or live TV to obtain news. Between the 1950s and the 1980s, television turned a nation of people who once listened to media content into watchers of media content; the rise of social media has since begun creating a nation of media content creators, and some content creators have become extremely wealthy. Almost half of Americans use social media as a news source, according to the Pew Research Center.[138] As social media's role in news consumption grows, questions have emerged about its impact on knowledge, the formation of echo chambers, and the effectiveness of fact-checking efforts in combating misinformation.

Social media platforms allow user-generated content[139][140] and sharing content within one's own virtual network.[141][139] Using social media as a news source allows users to engage with news in a variety of ways, including:

Using social media as a news source has become an increasingly popular way for people of all age groups to obtain current and important information. Like many other new technologies, it brings both benefits and drawbacks, affecting news and journalism in positive as well as negative ways. With this accessibility, people now have more ways to consume false news, biased news, and even disturbing content.
Social media are used to socialize with friends and family[144] and to pursue romance and flirt,[144] but not all social needs can be fulfilled by social media.[145] For example, a 2003 article reported that lonely individuals are more likely to use the Internet for emotional support than others.[146] A 2018 survey from Common Sense Media reported that 40% of American teens ages 13–17 thought that social media was "extremely" or "very" important for them to connect with their friends.[147] The same survey reported that 33% of teens said social media was extremely or very important to conduct meaningful conversations with close friends, and 23% of teens said social media was extremely or very important to document and share their lives.[147] A 2020 Gallup poll reported that 53% of adult social media users in the United States thought that social media was a very or moderately important way to keep in touch with people during the COVID-19 pandemic.[148]

In Alone Together, Sherry Turkle considered how people confuse social media usage with authentic communication.[149] She claimed that people act differently online and are less concerned about hurting others' feelings. Some online encounters can cause stress and anxiety, due to the difficulty of purging online posts, fear of getting hacked, or of universities and employers exploring social media pages. Turkle speculated that many people prefer texting to face-to-face communication, which can contribute to loneliness.[149] Surveys from 2019 reported evidence of this among teens in the United States[147] and Mexico.[150] Some researchers reported that exchanges that involved direct communication and reciprocal messages correlated with less loneliness.[151]

In social media, "stalking" or "creeping" refers to looking at someone's "timeline, status updates, tweets, and online bios" to find information about them and their activities.[152] A sub-category of creeping is creeping ex-partners after a breakup.[153] Catfishing (creating a false identity) allows bad actors to exploit the lonely.[154]

Self-presentation theory proposes that people consciously manage their self-image or identity-related information in social contexts.[155] One aspect of social media is the time invested in customizing a personal profile.[156] Some users segment their audiences based on the image they want to present; pseudonymity and the use of multiple accounts on the same platform offer that opportunity.[157]

A 2016 study reported that teenage girls manipulate their self-presentation on social media to appear beautiful as viewed by their peers.[158] Teenage girls attempt to earn regard and acceptance (likes, comments, and shares).
When this does not go well, self-confidence and self-satisfaction can decline.[158] A 2018 survey of American teens ages 13–17 by Common Sense Media reported that 45% said likes are at least somewhat important, and 26% at least somewhat agreed that they feel bad about themselves if nobody responds to their photos.[147] Some evidence suggests that perceived rejection may lead to emotional pain,[159] and some may resort to online bullying.[160] According to a 2016 study, users' reward circuits in their brains are more active when their photos are liked by more peers.[161]

A 2016 review concluded that social media can trigger a negative feedback loop of viewing and uploading photos, self-comparison, disappointment, and disordered body perception when social success is not achieved.[162] One 2016 study reported that Pinterest is directly associated with disordered dieting behavior.[163]

People portray themselves on social media in the most appealing way.[158] However, upon seeing one person's curated persona, other people may question why their own lives are not as exciting or fulfilling. One 2017 study reported that problematic social media use (i.e., feeling addicted to social media) was related to lower life satisfaction and self-esteem.[164] Studies have reported that social media comparisons can have dire effects on physical and mental health.[165][166] In one study, women reported that social media was the most influential source of their body image satisfaction, while men ranked it as the second biggest factor.[167] While monitoring the lives of celebrities long predates social media, the ease and immediacy of direct comparisons of pictures and stories with one's own may increase their impact. A 2021 study reported that 87% of women and 65% of men compared themselves to others on social media.[168]

Efforts to combat such negative effects have focused on promoting body positivity. In a related study, women aged 18–30 were shown posts that contained side-by-side images of women in the same clothes and setting, but one image was enhanced for Instagram, while the other was an unedited, "realistic" version. Women who participated in this experiment reported a decrease in body dissatisfaction.[169]

Social media can offer a support system for adolescent health, because it allows adolescents to mobilize around health issues that they deem relevant.[170] For example, in a clinical study among adolescent patients undergoing obesity treatment, participants claimed that social media allowed them to access personalized weight-loss content as well as social support among other adolescents with obesity.[171][172]

While social media can provide health information, it typically has no mechanism for ensuring the quality of that information.[172] The National Eating Disorders Association reported a high correlation between weight-loss content and disordered eating among women who have been influenced by inaccurate content.[172][173] Health literacy offers skills to allow users to spot and avoid such content.
Efforts by governments and public health organizations to advance health literacy have reportedly achieved limited success.[174] The role of parents and caregivers who proactively approach their children with ongoing guidance and open discussions on the benefits and difficulties they may encounter online has been associated with some reductions in overall anxiety and depression among adolescents.[175]

Social media such as pro-anorexia sites reportedly increase risk of harm by reinforcing damaging health-related behaviors, especially among adolescents.[176][177][178]

During the coronavirus pandemic, inaccurate information from all sides spread widely via social media.[179] Topics subject to distortion included treatments, avoiding infection, vaccination, and public policy. Simultaneously, governments and others influenced social media platforms to suppress both accurate and inaccurate information in support of public policy.[180] Heavier social media use was reportedly associated with more acceptance of conspiracy theories, leading to worse mental health[181] and less compliance with public health recommendations.[182]

Social media platforms can serve as a breeding ground for addiction-related behaviors, with studies reporting that excessive use can lead to addiction-like symptoms. These symptoms include compulsive checking, mood modification, and withdrawal when not using social media, which can result in decreased face-to-face social interactions and contribute to the deterioration of interpersonal relationships and a sense of loneliness.[183]

A 2017 study reported a link between sleep disturbance and the use of social media. It concluded that blue light from computer and phone displays, and the frequency rather than the duration of time spent, predicted disturbed sleep, termed "obsessive 'checking'".[187] The association between social media use and sleep disturbance has clinical ramifications for young adults.[188] A recent study reported that people in the highest quartile for weekly social media use experienced the most sleep disturbance. The median number of minutes of social media use per day was 61. Females were more likely to experience high levels of sleep disturbance.[189] Many teenagers suffer from sleep deprivation from long hours at night on their phones, leaving them tired and unfocused in school.[190] A 2011 study reported that time spent on Facebook was negatively associated with GPA, but the association with sleep disturbance was not established.[191]

One studied effect of social media is 'Facebook depression', which affects adolescents who spend too much time on social media.[8] This may lead to reclusiveness, which can increase loneliness and low self-esteem.[8] Social media curates content to encourage users to keep scrolling.[188] Studies report that children's self-esteem is positively affected by positive comments and negatively affected by negative comments or a lack of comments; this affects their self-perception.[192] A 2017 study of almost 6,000 adolescent students reported that those who self-reported addiction-like symptoms of social media use were more likely to report low self-esteem and high levels of depressive symptoms.[193]

A second emotional effect is social media burnout, defined as ambivalence, emotional exhaustion, and depersonalization. Ambivalence here is confusion about the benefits of using social media. Emotional exhaustion is stress from using social media. Depersonalization is emotional detachment from social media.
The three burnout factors negatively influence the likelihood of continuing on social media.[194]

A third emotional effect is "fear of missing out" (FOMO), which is the "pervasive apprehension that others might be having rewarding experiences from which one is absent."[195] It is associated with increased scrutiny of friends on social media.[195]

Social media can also offer support, as Twitter has done for the medical community.[196] X facilitated academic discussion among health professionals and students, while providing a supportive community for these individuals by allowing members to support each other through likes, comments, and posts.[197] Access to social media offered a way to keep older adults connected after the deaths of partners and across geographical distance between friends and loved ones.[198] In March 2025, a Pakistani man killed a WhatsApp group admin in anger after being removed from the chat.[199]

Media critic Siva Vaidhyanathan refers to social media as 'anti-social media' in reference to its negative impacts, including on loneliness and political polarization.[200] Audrey Tang also uses the term antisocial in reference to its impact on democracy.[201]

The digital divide is the unequal access to digital technology, including smartphones, tablets, laptops, and the internet.[202][203] The digital divide worsens inequality around access to information and resources. In the Information Age, people without access to the Internet and other technology are at a disadvantage, for they are unable or less able to connect with others, find and apply for jobs, shop, and learn.[202][204][205][206]

Many critics point to studies showing that social media algorithms elevate more partisan and inflammatory content.[211][212] Because of recommendation algorithms that filter and display news content that matches users' political preferences, one potential impact is an increase in political polarization due to selective exposure. Political polarization is the divergence of political attitudes towards ideological extremes. Selective exposure occurs when an individual favors information that supports their beliefs and avoids information that conflicts with them.[213] Jonathan Haidt compared the impact of social media to the Tower of Babel and the chaos it unleashed as a result.[214][215][13]

Aviv Ovadya argues that these algorithms incentivize the creation of divisive content in addition to promoting existing divisive content,[216] but could be designed to reduce polarization instead.[217] In 2017, Facebook gave its new emoji reactions five times the weight in its algorithms as its like button, which data scientists at the company in 2019 confirmed had disproportionately boosted toxicity, misinformation and low-quality news.[218] Some popular ideas for how to combat selective exposure have had no or opposite impacts.[219][220][216] Some advocate for media literacy as a solution.[221] Others argue that less social media[213] or more local journalism[222][223][224] could help address political polarization.
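The weighting change described above can be illustrated with a toy ranking function in which each reaction counts five times as much as a like; the post data, field names, and linear scoring rule are invented for illustration, and production ranking systems are far more complex:

```python
# Toy feed ranking: emoji reactions weighted five times as heavily as likes.
LIKE_WEIGHT = 1
REACTION_WEIGHT = 5  # the 5x weighting described above

posts = [
    {"id": "calm_news",   "likes": 400, "reactions": 10},
    {"id": "angry_rant",  "likes": 50,  "reactions": 120},
    {"id": "cat_picture", "likes": 200, "reactions": 30},
]

def engagement_score(post: dict) -> int:
    """Linear engagement score used to order the feed."""
    return post["likes"] * LIKE_WEIGHT + post["reactions"] * REACTION_WEIGHT

for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
```

In this toy feed, the post with relatively few likes but many reactions rises to the top, which is the kind of dynamic the internal findings described.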
A 2018 study reported that social media increases the power of stereotypes.[225] Stereotypes can have both negative and positive connotations. For example, during the COVID-19 pandemic, youth were accused of responsibility for spreading the disease.[226] Elderly people are stereotyped as lacking knowledge of proper behavior on social media.[227] Social media platforms often amplify these stereotypes by reinforcing age-based biases through certain algorithms as well as user-generated content. These stereotypes contribute to social division and negatively affect the way users interact online.[228]

Social media allows for mass cultural exchange and intercultural communication, despite different ways of communicating in various cultures.[229]

Social media has affected the way youth communicate, by introducing new forms of language.[230] Novel acronyms save time, as illustrated by "LOL", the ubiquitous shortcut for "laugh out loud". The hashtag was created to simplify searching for information and to allow users to highlight topics of interest in the hope of attracting the attention of others. Hashtags can be used to advocate for a movement, mark content for future use, and allow other users to contribute to a discussion.[231]

For some young people, social media and texting have largely replaced in-person communication, a trend made worse by pandemic isolation, delaying the development of conversation and other social skills.[232]

What is socially acceptable is now heavily based on social media.[233] The American Academy of Pediatrics reported that bullying, the formation of non-inclusive friend groups, and sexual experimentation have increased cyberbullying, privacy issues, and the sending of sexual images or messages. Sexting and revenge porn became rampant, particularly among minors, with legal implications and resulting trauma risk.[234][235][236][237] However, adolescents can learn basic social and technical skills online.[238] Social media can strengthen relationships just by keeping people in touch, helping them make more friends, and supporting engagement in community activities.[8]

In July 2014, in response to WikiLeaks' release of a secret suppression order made by the Victorian Supreme Court, media lawyers were quoted in the Australian media to the effect that "anyone who tweets a link to the WikiLeaks report, posts it on Facebook, or shares it in any way online could also face charges".[239]

In November 2024, the federal government passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024, introduced by the Albanese government, banning people under the age of 16 from using most social media platforms; the ban would come into effect in late 2025.[240] Presented by Minister for Communications Michelle Rowland, the bill was created as an attempt at reducing social media harms for young people and responding to the concerns of parents.[241] The stated penalty for breach of the new laws on the part of social media platforms was a financial penalty of AU$49.5 million.[241][240] The ban would apply to many major social media platforms, including TikTok, Instagram, Snapchat and Twitter, but would exempt platforms deemed to meet educational or health needs of people under 16, including YouTube and Google Classroom.[241] Supporters of the ban included the advocacy group 36 Months[242] and media corporation News Corp Australia, which ran a campaign titled Let Them Be Kids,[240] whilst opponents expressed concern that the ban could cause isolation amongst teenagers belonging to marginalised groups such as the LGBTQ community or migrant/culturally diverse backgrounds,[243] and that the ban could stifle creativity and freedom of expression amongst young people.[244]

On 27 July 2020, in Egypt, two women were sentenced to two years of imprisonment for posting TikTok videos which the government claimed were "violating family values".[245]

In the 2014 Thai coup d'état, the public was explicitly instructed not to 'share' or 'like' dissenting views on social media or face prison.[citation needed]

Historically, platforms were responsible for moderating the
content that they presented. They set rules for what was allowable and decided which content to promote and which to ignore. The US enacted the Communications Decency Act in 1996. Section 230 of that act exempted internet platforms from legal liability for content authored by third parties: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (47 U.S.C. § 230(c)(1)).

In 2024, legislation was enacted in Florida requiring social media companies to verify the age of account holders, prohibiting accounts for people aged under 14, and requiring parental approval for those aged between 14 and 16.[246][247]

The European Union initially took a similar approach.[248] However, in 2020, the European Commission presented two legislative proposals: the Digital Services Act (DSA) and the Digital Markets Act (DMA). Both proposals were enacted in July 2022. The DSA entered into force on 17 February 2024, the DMA in March 2024.[249] This legislation can be summarized in the following four objectives, articulated by MEPs:

Violators could face a complete ban in Europe or fines of up to 6% of global sales. Such content moderation requires extensive investment by platform providers.[252] Enforcement resources may not be sufficient to ensure compliance.[253]

The DSA allows a country to require information to be deleted that is illegal only in that jurisdiction. According to Patrick Breyer of the German Pirate Party, a problem could arise from the Hungarian government requesting the deletion of a video that is critical of Viktor Orbán, as he foresaw the potential for such determinations to be applied EU-wide.[254]

2018 Nobel laureate Paul Romer[255] advocated taxing the negative externalities of social media platforms.[252] Similar to a carbon tax, negative social effects could be compensated for by a financial levy on the platforms.[256] Assuming that the tax did not deter the actions that produced the externalities, the revenue raised could be used to address them. However, consensus has yet to emerge on how to measure or mitigate the harms, nor on how to craft such a tax.

Another proposal is to invoke competition law.[257] The idea is to restrict the platforms' market power by controlling mergers ex ante and tightening the law. This would be achieved through a supranational enforcement mechanism and the deterrent effect of high fines.

In a 2024 opinion piece, Megan Moreno and Jenny Radesky, professors of pediatrics, wrote about the need for "nuanced" policy.[258] They regarded making access contingent upon parental consent as harmful. They commented that a focus on increasing age restrictions "may serve to distract from making sure platforms are following guidelines and best practices for all ages".[259]

In June 2024, US Surgeon General Vivek Murthy called for social media platforms to contain a warning about the impact they have on the mental health of young people.[260]

The business model of most social media platforms is based on selling slots to advertisers. Platforms provide access to data about each user, which allows them to deliver ads that are individually relevant to them. This strongly incentivizes platforms to arrange their content so that users view as much content as possible, increasing the number of ads that they see.
Platforms such as X add paid user subscriptions in part to reduce their dependence on advertising revenues.[261]

The enormous reach and impact of social media has naturally led to a stream of criticism, debate, and controversy. Criticisms include platform capabilities, content moderation and reliability,[262] impact on concentration and mental health,[263] content ownership, the meaning of interactions, poor cross-platform interoperability,[264] decrease in face-to-face interactions, cyberbullying, sexual predation, particularly of children, and child pornography.[265][266]

In 2007 Andrew Keen wrote, "Out of this anarchy, it suddenly became clear that what was governing the infinite monkeys now inputting away on the Internet was the law of digital Darwinism, the survival of the loudest and most opinionated. Under these rules, the only way to intellectually prevail is by infinite filibustering."[267]

Social media has become a regular source of news and information. A 2021 Pew Research Center poll reported that roughly 70% of users regularly get news from social media,[4] despite the presence of fake news and misinformation. Platforms typically do not take responsibility for content accuracy, and many do not vet content at all, although in some cases, content the platform finds problematic is deleted or access to it is reduced.[268][269][270] Content distribution algorithms otherwise typically ignore substance, responding instead to the content's virality. In 2018, researchers reported that fake news spread almost 70% faster than truthful news on X.[7] Social media bots increase the reach of both true and false content, and if wielded by bad actors, misinformation can reach many more users.[10] Some platforms attempt to discover and block bots, with limited success.[11] Fake news seems to receive more user engagement, possibly because it is relatively novel, engaging users' curiosity and increasing spread.[26] Fake news often propagates in the immediate aftermath of an event, before conventional media are prepared to publish.[21][17]

Social media mining is the process of obtaining data from user-generated content on social media in order to extract actionable patterns, form conclusions about users, and act upon the information. Mining supports targeted advertising to users as well as academic research. The term is an analogy to the process of mining for minerals. Mining companies sift through raw ore to find the valuable minerals; likewise, social media mining sifts through social media data in order to discern patterns and trends about matters such as social media usage, online behaviour, content sharing, connections between individuals, and buying behaviour. These patterns and trends are of interest to companies, governments and not-for-profit organizations, which can use the analyses for tasks such as designing strategies and introducing programs, products, processes or services.

Social media mining uses concepts from computer science, data mining, machine learning, and statistics. Mining is based on social network analysis, network science, sociology, ethnography, optimization and mathematics. It attempts to formally represent, measure and model patterns from social media data.[271] In the 2010s, major corporations, governments and not-for-profit organizations began mining to learn about customers, clients and others.
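As a minimal, self-contained illustration of the kind of pattern extraction involved, the sketch below counts hashtag frequency and hashtag co-occurrence across a handful of invented posts; real mining pipelines operate at far larger scale and layer network analysis, machine learning, and statistics on top of such counts:

```python
# Toy social media mining: hashtag frequency and co-occurrence counts.
from collections import Counter
from itertools import combinations

posts = [
    "Loving the new phone #tech #gadgets",
    "Big sale this weekend #gadgets #deals",
    "Conference keynote was great #tech #conference",
]

def hashtags(text: str) -> list:
    return [word.lower() for word in text.split() if word.startswith("#")]

tag_counts = Counter()
pair_counts = Counter()
for post in posts:
    tags = sorted(set(hashtags(post)))
    tag_counts.update(tags)
    pair_counts.update(combinations(tags, 2))  # which topics appear together

print(tag_counts.most_common(3))   # most frequently used hashtags
print(pair_counts.most_common(2))  # hashtag pairs that co-occur most often
```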
Platforms such as Google and Facebook (the latter partnered with Datalogix and BlueKai) conduct mining to target users with advertising.[272] Scientists and machine learning researchers extract insights and design product features.[273]

Users may not understand how platforms use their data.[274] Users tend to click through Terms of Use agreements without reading them, leading to ethical questions about whether platforms adequately protect users' privacy.

Malcolm Gladwell considers the role of social media in revolutions and protests to be overstated. He concluded that while social media makes it easier for activists to express themselves, that expression likely has no impact beyond social media. What he called "high-risk activism" involves strong relationships, coordination, commitment, high risks, and sacrifice.[276] Gladwell claimed that social media are built around weak ties, arguing that "social networks are effective at increasing participation—by lessening the level of motivation that participation requires."[276] According to him, "Facebook activism succeeds not by motivating people to make a real sacrifice, but by motivating them to do the things that people do when they are not motivated enough to make a real sacrifice."[276]

Disputing Gladwell's theory, a 2018 survey reported that people who are politically expressive on social media are more likely to participate in offline political activity.[277]

Social media content is generated by users. However, content ownership is defined by the Terms of Service to which users agree. Platforms control access to the content and may make it available to third parties.[278] Although platforms' terms differ, generally they all give the platform permission to utilize users' copyrighted works at its discretion.[279]

After its acquisition by Facebook in 2012, Instagram revealed it intended to use content in ads without seeking permission from or paying its users.[280][281] It then reversed these changes, with then-CEO Kevin Systrom promising to update the terms of service.[282][283]

Privacy rights advocates warn users about the collection of their personal data. Information is captured without the user's knowing consent. Data may be used for law enforcement or other governmental purposes.[284][278] Information may be offered for third-party use. Young people are prone to sharing personal information that can attract predators.[285]

While social media users claim to want to keep their data private, their behavior does not reflect that concern, as many users expose significant personal data on their profiles. In addition, platforms collect data on user behaviors that are not part of their personal profiles. This data is made available to third parties for purposes that include targeted advertising.[286]

A 2014 Pew Research Center survey reported that 91% of Americans "agree" or "strongly agree" that people have lost control over how personal information is collected and used.
Some 80% of social media users said they were concerned about advertisers and businesses accessing the data they share on social media platforms, and 64% said the government should do more to regulate advertisers.[287] In 2019, UK legislators criticized Facebook for not protecting certain aspects of user data.[288]

In 2019 the Pentagon issued guidance to the military, Coast Guard and other government agencies that identified "the potential risk associated with using the TikTok app and directs appropriate action for employees to take in order to safeguard their personal information."[289] As a result, the military, Coast Guard, Transportation Security Administration, and Department of Homeland Security banned the installation and use of TikTok on government devices.[290]

In 2020 the US government attempted to ban TikTok and WeChat in the United States over national security concerns. However, a federal court blocked the move.[291] In 2024, the US Congress passed a law directing TikTok's parent company ByteDance to divest the service or see the service banned from operating in the US. The company sued, challenging the constitutionality of the ban.[292] The ban was upheld as constitutional.[citation needed]

Internet addiction disorder (IAD), also known as problematic internet use or pathological internet use, is a problematic compulsive use of the internet, particularly on social media, that impairs an individual's function over a prolonged period of time. Young people are at particular risk of developing internet addiction disorder,[293] with case studies highlighting students whose academic performance declines as they spend more time online.[294] Some experience health consequences from loss of sleep[295] as they stay up to continue scrolling, chatting, and gaming.[296]

Excessive Internet use is not recognized as a disorder by the American Psychiatric Association's DSM-5 or the World Health Organization's ICD-11.[297] However, gaming disorder appears in the ICD-11.[298] Controversy around the diagnosis includes whether the disorder is a separate clinical entity or a manifestation of underlying psychiatric disorders. Definitions are not standardized or agreed upon, complicating the development of evidence-based recommendations.

Many different theoretical models have been developed and employed over the years to better explain the predisposing factors of this disorder. Models such as the cognitive-behavioral model of pathological Internet use have been used to explain IAD for more than 20 years. Newer models, such as the Interaction of Person-Affect-Cognition-Execution model, have been developed more recently and are starting to be applied in more clinical studies.[299]

In 2011 the term "Facebook addiction disorder" (FAD) emerged.[300] FAD is characterized by compulsive use of Facebook. A 2017 study investigated a correlation between excessive use and narcissism, reporting that "FAD was significantly positively related to the personality trait narcissism and to negative mental health variables (depression, anxiety, and stress symptoms)".[301][302]

In 2020, the documentary The Social Dilemma reported concerns of mental health experts and former employees of social media companies over social media's pursuit of addictive use. For example, when a user has not visited Facebook for some time, the platform varies its notifications, attempting to lure them back.
It also raises concerns about the correlation between social media use and child and teen suicidality.[303] Additionally, studies conducted in 2020 have shown an increase in the prevalence of IAD since the COVID-19 pandemic.[304] Studies highlighting the possible relationship between COVID-19 and IAD have looked at how forced isolation and its associated stress may have led to higher levels of Internet use.[304] Research suggests that social media platforms trigger a cycle of compulsive behavior, which reinforces addictive patterns and makes it harder for individuals to break the cycle.[307] Various lawsuits have been brought regarding social media addiction, such as the Multi-District Litigation alleging harms caused by social media addiction among young users.[308] Whether to restrict the use of phones and social media among young people has been debated since smartphones became ubiquitous.[309] A study of Americans aged 12–15 reported that teenagers who used social media for more than three hours a day doubled their risk of negative mental health outcomes, including depression and anxiety.[310] Platforms have not tuned their algorithms to prevent young people from viewing inappropriate content. A 2023 study of Australian youth reported that 57% had seen disturbingly violent content, while nearly half had regular exposure to sexual images.[311] Further, youth are prone to misusing social media for cyberbullying.[312] As a result, phones have been banned from some schools, and some schools in the US have blocked social media websites.[313] Intense discussions are taking place regarding the imposition of certain restrictions on children's access to social media. It is argued that using social media at a young age brings many problems with it. For example, according to a survey conducted by Ofcom, the media regulator in the UK, 22% of children aged 8–17 lie about being over 18 on social media. Figures from Norway indicate that more than half of nine-year-olds and the vast majority of 12-year-olds spend time on social media. A series of measures have begun to be taken across Europe to address such risks; the countries that have taken concrete steps in this regard are Norway and France. Since June 2023, France has required social media platforms to verify the ages of their users and to obtain parental consent for those under the age of 15. In Norway, there is a minimum age requirement of 13 to access social media. The Online Safety Act in the UK has given social media platforms until mid-2025 to strengthen their age verification systems.[314] Social media often features in political struggles. In some countries, Internet police or secret police monitor or control citizens' use of social media. For example, in 2013 some social media was banned in Turkey after the Taksim Gezi Park protests. Both X and YouTube were temporarily suspended in the country by a court's decision. A law granted immunity to Telecommunications Directorate (TİB) personnel, and the TİB was also given the authority to block access to specific websites without a court order.[315] Yet TİB's 2014 blocking of X was ruled by the constitutional court to violate free speech.[316] Internet censorship in the United States of America is the suppression of information published or viewed on the Internet in the United States. The First Amendment of the United States Constitution protects freedom of speech and expression against federal, state, and local government censorship.
Free speech protections allow few government-mandated restrictions on Internet content. However, the Internet is highly regulated, supported by a complex set of legally binding and privately mediated mechanisms.[317] Gambling, cyber security, and the dangers to children who frequent social media are important ongoing debates. Significant public resistance to proposed content restriction policies has prevented measures used in some other countries from taking hold in the US.[317] Many government-mandated attempts to regulate content have been barred, often after lengthy legal battles.[318] However, the government has exerted pressure indirectly. With the exception of child pornography, content restrictions tend to rely on platforms to remove or suppress content, following state encouragement or the threat of legal action.[319][317] While the dominant social media platforms are not interoperable, open source protocols such as ActivityPub have been adopted by platforms such as Mastodon, GNU social, Diaspora, and Friendica. They operate as a loose federation of mostly volunteer-operated servers, called the Fediverse. However, in 2019, Mastodon blocked Gab from connecting to it, claiming that it spread violent, right-wing extremism.[325] In December 2019, X CEO Jack Dorsey advocated an "open and decentralized standard for social media". He joined Bluesky to bring it to reality.[326] Deplatforming, also known as no-platforming, is a boycott of an individual or group by removing the platforms used to share their information or ideas.[327] The term is commonly associated with social media. A number of commentators and experts have argued that social media companies have incentives to maximize user engagement with sensational, emotive, and controversial material, which discourages the healthy discourse that democracies depend on.[330] Zack Beauchamp of Vox Media calls it an authoritarian medium because of how it is incentivized to stir up hate and division that benefits aspiring autocrats.[331] The Economist describes social media as vulnerable to manipulation by autocrats.[332] Informed dialogue, a shared sense of reality, mutual consent, and participation can all suffer due to the business model of social media.[333] Political polarization can be one byproduct.[334][335][336] This can have implications for the likelihood of political violence.[337][213] Siva Vaidhyanathan argues for a range of solutions, including privacy protections and enforcing anti-trust laws.[200] Andrew Leonard describes Pol.is as one possible solution to the divisiveness of traditional discourse on social media that has damaged democracies, citing the use of its algorithm to prioritize finding consensus instead.[338][339] According to LikeWar: The Weaponization of Social Media,[340] effective social media marketing techniques are used not only by celebrities, corporations, and governments, but also by extremist groups.[341] ISIS and Al-Qaeda have used social media to influence public opinion where they operate and to gain the attention of sympathizers. Social media platforms and encrypted-messaging applications have been used to recruit members, both locally and internationally.[342] Platforms have endured backlash for allowing this content. Extreme nationalist groups, and more prominently, US right-wing extremists, have used similar online tactics.
As many traditional social media platforms banned hate speech, several alternative platforms became popular among right-wing extremists for planning and communication, including the organization of events; these applications became known as "Alt-tech". Platforms such as Telegram, Parler, and Gab were used during the January 6 United States Capitol attack to coordinate attacks.[343] Members shared tips on how to avoid law enforcement and discussed their plans for carrying out their objectives; some users called for killing law enforcement officers and politicians.[344] Social media content persists unless the user deletes it. After a user dies, their content remains unless the platform is notified.[345] Each platform has created guidelines for this situation.[346] In most cases, platforms require a next of kin to prove that the user is deceased, and give them the option of closing the account or maintaining it in a 'legacy' status.
https://en.wikipedia.org/wiki/Social_media
Social seating is a type of social networking service that enables users to select their seatmates based on their personal preferences and social network profiles. This system utilizes data from Facebook, LinkedIn, Twitter and other social networks to allow users to view individuals with similar interests and then book a seat accordingly. Social seating operates as an opt-in system, which means that passengers or clients of a service implementing a social seating program will not have their information accessed unless they agree to it voluntarily. This feature is currently available with a few airlines, notably KLM and Malaysia Airlines, and is also offered for music events through Ticketmaster. The practice of social seating has a long history, dating back centuries. In ancient times, individuals would often select their companions based on shared social status or interests. For example, in ancient Greece, affluent citizens would frequently congregate at public gatherings, while individuals in servitude would occupy separate seating areas. In the contemporary era, social seating has been embraced with the advent of social media. Platforms such as Facebook and Twitter have facilitated connections with like-minded individuals, simplifying the task of finding potential seatmates. In 2011, KLM pioneered the first commercial social seating program: Meet & Seat. This program allowed passengers to view profiles of other travelers on their flight and request specific seatmates. Since then, additional social seating programs have emerged, including SeatID, Ticketmaster, and Eventbrite. These platforms let users select seatmates for various events such as flights, concerts, and sporting events. Social seating is a technology that can be used in various industries such as flights, trains, theatres, and sports events. In the flight industry, airlines utilize this technology to allow customers to choose their potential seatmates based on personal information gathered from social networks like Facebook, LinkedIn, or Twitter. For example, KLM's Meet and Seat program allows passengers to access the system at least forty-eight hours before their flight, edit their profile, view other passengers' profiles, and see a seating map showing the location of other passengers who have opted in.[1] Different levels of social integration exist, with companies like Hong Kong-based Satisfly allowing passengers to indicate their "mood" regarding interaction with their seatmates – whether they want to talk, chat casually, listen to music alone, or sleep.[2] This technology gives travelers more control over their seating and interactions during flights. For example, sales professionals can choose their flights based on potential customers also being on the plane, allowing businesses to make sales pitches. Additionally, airlines can use the information from these systems to develop more personal relationships with passengers by accessing details like upcoming birthdays and anniversaries.[3] The technology is currently mainly used by airlines. However, Eran Savir, the founder of social seating service SeatID, has expressed his hope that this technology will expand to other industries such as theaters, sporting events, trains, and other services where people book seats.[3] An example of this is Ticketmaster, which in 2011 allowed users to tag their seats at events and share that information with their Facebook friends.
In 2012, Ticketmaster took this further by introducing an app that recommends nearby concerts based on the artists a user listens to on music services likeSpotifyand the discontinuedRdio.[4] Despite the added social benefits that social seating can provide, such as networking and meeting new people, there have been some concerns raised over the topic. Some believe that social seating could lead to stalking,[5]and others are concerned that the companies in charge of these systems will share the extracted data with third parties for profit. According to Savir, however, all of this personal information is already available to airlines if a user has "liked" their page, so using a social seating program tells airlines nothing more than they already know except for where a passenger is sitting.[3]In addition, some companies such as KLM's Meet & Seat have promised that they won't share personal information such as this with third parties and that passengers are prohibited from using information from other passengers' profiles to infringe upon their rights.[1]Despite these assurances, some companies are still unwilling to implement social seating programs, not just because of a lack of demand, but because they still feel that there is the potential for misuse. As Ali Bullock,Cathay Pacific's digital and social manager, stated, "It’s got to be in the interest of the passengers, and we feel there are privacy issues surrounding the idea of social seating".[3]
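The opt-in matching mechanism described above can be illustrated with a short sketch. The following Python example is hypothetical (the Passenger fields and the suggest_seatmates function are invented for illustration, not drawn from any airline's actual system), but it captures the two constraints emphasized in this article: only passengers who have explicitly opted in are ever surfaced, and candidates are ranked by the interests they share with the requesting traveller.

```python
from dataclasses import dataclass, field

@dataclass
class Passenger:
    name: str
    interests: set[str] = field(default_factory=set)
    opted_in: bool = False   # social seating is opt-in, so exclude by default
    seat: str | None = None

def suggest_seatmates(traveller: Passenger, manifest: list[Passenger], top_n: int = 3):
    """Rank opted-in passengers by how many interests they share with `traveller`."""
    candidates = [
        p for p in manifest
        if p is not traveller and p.opted_in and p.interests & traveller.interests
    ]
    # Largest overlap first; ties broken alphabetically so results are stable.
    candidates.sort(key=lambda p: (-len(p.interests & traveller.interests), p.name))
    return [(p.name, p.seat, sorted(p.interests & traveller.interests))
            for p in candidates[:top_n]]

# Example with made-up passengers:
alice = Passenger("Alice", {"tennis", "startups"}, opted_in=True, seat="12A")
manifest = [
    alice,
    Passenger("Bob", {"startups", "jazz"}, opted_in=True, seat="14C"),
    Passenger("Carol", {"tennis"}, opted_in=False, seat="15B"),  # never shown: not opted in
]
print(suggest_seatmates(alice, manifest))  # [('Bob', '14C', ['startups'])]
```

A production system would source the interest sets from a social profile the passenger has chosen to link and discard them after the flight, in line with the privacy commitments discussed above.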
https://en.wikipedia.org/wiki/Social_seating
Social televisionis the union oftelevisionandsocial media. Millions of people now share their TV experience with other viewers on social media such asTwitterandFacebookusingsmartphonesand tablets.[1]TV networks and rights holders are increasinglysharing videoclips on social platforms tomonetiseengagement and drive tune-in. The social TV market covers the technologies that support communication and social interaction around TV as well as companies that study television-relatedsocial behaviorand measure social media activities tied to specific TV broadcasts[2]– many of which have attracted significant investment from established media and technology companies. The market is also seeing numerous tie-ups between broadcasters and social networking players such as Twitter and Facebook. The market is expected to be worth $256bn by 2017.[3] Social TV was named one of the 10 most important emerging technologies by theMIT Technology Reviewon Social TV[4]in 2010. And in 2011, David Rowan, the editor ofWiredmagazine,[5]named Social TV at number three of six in his peek into 2011 and what tech trends to expect to get traction.Ynon Kreiz, CEO of theEndemolGroup told the audience at theDigital Life Design(DLD) conference in January 2011: "Everyone says that social television will be big. I think it's not going to be big—it's going to be huge".[6] Much of the investment in the earlier years of social TV went into standalone social TV apps. The industry believed these apps would provide an appealing and complimentary consumer experience which could then be monetized with ads. These apps featured TV listings, check-ins, stickers and synchronised second-screen content but struggled to attract users away from Twitter and Facebook.[7]Most of these companies have since gone out of business or been acquired amid a wave of consolidation[8]and the market has instead focused on the activities of the social media channels themselves – such asTwitter Amplify, Facebook Suggested Videos and Snapchat Discover – and the technologies that support them. Twitter and Facebook are both helping users connect around media, which can provoke strong debate and engagement. Both social platforms want to be the 'digital watercooler' and host conversation around TV because the engagement and data about what media people consume can then be used to generate advertising revenue.[9] As an open platform, conversation on Twitter is closely aligned with real-time events. In May 2013, it launchedTwitter Amplify– an advertising product for media and consumer brands.[10]With Amplify, Twitter runs video highlights from major live broadcasts, with advertisers' names and messages playing before the clip.[11] By February 2014, all four major U.S. TV networks had signed up to the Amplify program, bringing a variety of premium TV content onto the social platform in the form of in-tweet real-time video clips.[12]In June 2014, Twitter acquired itsTwitter Amplifypartner in the U.S. SnappyTV, a company that was helping broadcasters and rights holders to share video content both organically across social and via Twitter's Amplify program. Twitter continues to rely onGrabyo, which has also struck numerous deals with some of the largest broadcasters and rights holders in Europe and North America[13]to share video content across Facebook and Twitter.[14] Facebookmade significant changes to its platform in 2014 including updates to its algorithm to enhance how it serves video in users' feeds. 
It also launched video autoplay to get users to watch the videos in their feeds. It rapidly surpassed Twitter, and by the end of 2014 it was enjoying three billion video views a day on its platform and had announced a partnership with the NFL, one of Twitter's most active Twitter Amplify partners. In April 2015, at its F8 Developer Conference, it revealed it was working with Grabyo among other technology partners to bring video onto its platform.[15] Then in July it announced it would be launching Facebook Suggested Videos, bringing related videos and ads to anyone who clicks on a video – a move that not only competed with Twitter's commercial video offering but also put it in direct competition with YouTube.[16] TV Time is a television-dedicated social network that allows users to keep track of the television series they watch, as well as films. It also allows them to express their reactions to the media they have seen, with episode-specific voting for favorite characters and emotional reactions to episodes, as well as commenting on episode-restricted pages. This way users are able to avoid spoilers while also finding a precise audience and community for each of their interactions, as opposed to larger, non-television-dedicated social media such as Facebook and Twitter, where the likelihood of unintentionally reading spoilers is much higher. TV Time offers an analytics service called "TVLytics", where the votes and reactions collected from users can be studied for research and television production purposes.[17] According to Businessinsider.com, there are a variety of applications for social TV, including support for TV ad sales, optimizing TV ad buys, making ad buys more efficient, serving as a complement to audience measurement, and eventually, audience forecasting and real-time optimization. Social TV data can ease access to focus groups and may create a positive feedback loop for generating ultra-sticky TV programming and multi-screen ad campaigns.[18] Viewers share their TV experience on social media in real time as events unfold: between 88 and 100 million Facebook users log in to the platform during the primetime hours of 8pm – 11pm in the US.[19] The volume of social media engagement with TV is also rising – according to Nielsen SocialGuide, there was a 38% increase in tweets about TV in 2013, to 263m.[20] For the 2014 Super Bowl, Twitter reported that a record 24.9 million tweets about the game were sent during the telecast, peaking at 381,605 tweets per minute.[21] Facebook reported that 50 million people discussed the Super Bowl, generating 185 million interactions.[22] The 2014 Oscars generated 5m tweets, viewed by an audience of 37m unique Twitter users and delivering 3.3bn impressions globally as conversation and key moments were shared virally across the platform.[23] In 2014 the All England Lawn Tennis Club (AELTC), hosts of Wimbledon, used Grabyo to share video content across social media. The videos were viewed 3.5 million times across Facebook and Twitter. It partnered with Grabyo again in 2015, and the videos generated over 48 million views across Facebook and Twitter.[24]
https://en.wikipedia.org/wiki/Social_television
This is a list ofsocial platformswith at least 100 million monthlyactive users.[a]The list includessocial networks, as well asonline forums,photoandvideo sharing platforms, messaging andVoIPapps.
https://en.wikipedia.org/wiki/List_of_social_platforms_with_at_least_100_million_active_users
Conformity or conformism is the act of matching attitudes, beliefs, and behaviors to group norms, politics, or like-mindedness.[1] Norms are implicit, specific rules and shared guidance among a group of individuals that guide their interactions with others. People often choose to conform to society rather than to pursue personal desires, because it is often easier to follow the path others have already made rather than to forge a new one. Thus, conformity is sometimes a product of group communication.[2] This tendency to conform occurs in small groups and/or in society as a whole and may result from subtle unconscious influences (a predisposed state of mind), or from direct and overt social pressure. Conformity can occur in the presence of others, or when an individual is alone. For example, people tend to follow social norms when eating or when watching television, even if alone.[3] Solomon Asch, a social psychologist whose conformity research remains among the most influential in psychology, demonstrated the power of conformity through his experiment on line judgment. The Asch conformity experiment demonstrates how much influence conformity has on people. In a laboratory experiment, Asch asked 50 male students from Swarthmore College in the US to participate in a 'vision test'. Asch put a naive participant in a room with seven confederates in a line judgment task. Each confederate had already decided what response they would give when confronted with the line task. The real participant sat in the last position, while the others were pre-arranged accomplices who gave apparently incorrect answers in unison; Asch recorded the last person's answer to analyze the influence of conformity. Surprisingly, about one third (32%) of the participants who were placed in this situation sided with the clearly incorrect majority on the critical trials. Over the 12 critical trials, about 75% of participants conformed at least once. Asch demonstrated in this experiment that people could produce obviously erroneous responses just to conform to a group of similarly erroneous responders; this was called normative influence. After being interviewed, subjects acknowledged that they did not actually agree with the answers given by others. The majority of them, however, believed that groups are wiser or did not want to appear as mavericks, and chose to repeat the same obvious misconception. There is another influence that is sometimes more subtle, called informational influence. This is when people turn to others for information to help them make decisions in new or ambiguous situations. Much of the time, people are simply conforming, consciously or unconsciously, to social group norms that they are unaware of, especially through a mechanism called the chameleon effect. This effect occurs when people unintentionally and automatically mimic others' gestures, posture, and speech style in order to produce rapport and create social interactions that run smoothly (Chartrand & Bargh, 1999).[4]
It is clear from this that conformity has a powerful effect on human perception and behavior, even to the extent that it can be faked against a person's basic belief system.[5] Changing one's behaviors to match the responses of others, which is conformity, can be conscious or not.[6]People have an intrinsic tendency to unconsciously imitate other's behaviors such as gesture, language, talking speed, and other actions of the people they interact with.[7]There are two other main reasons for conformity:informational influenceandnormative influence.[7]People display conformity in response to informational influence when they believe the group is better informed, or in response to normative influence when they are afraid of rejection.[8]When the advocated norm could be correct, the informational influence is more important than the normative influence, while otherwise the normative influence dominates.[9] People often conform from a desire forsecuritywithin a group, also known asnormative influence[10]—typically a group of a similar age,culture,religionor educational status. This is often referred to asgroupthink: a pattern of thought characterized by self-deception, forced manufacture of consent, and conformity to group values andethics, which ignores realistic appraisal of other courses of action. Unwillingness to conform carries the risk ofsocial rejection. Conformity is often associated in media with adolescence andyouth culture, but strongly affects humans of all ages.[11] Althoughpeer pressuremay manifest negatively, conformity can be regarded as either good or bad. Driving on the conventionally-approved side of the road may be seen as beneficial conformity.[12]With the appropriate environmental influence, conforming, in early childhood years, allows one to learn and thus, adopt the appropriate behaviors necessary to interact and develop "correctly" within one's society.[13]Conformity influences the formation and maintenance ofsocial norms, and helps societies function smoothly and predictably via the self-elimination of behaviors seen as contrary tounwritten rules.[14]Conformity was found to impair group performance in a variable environment, but was not found to have a significant effect on performance in a stable environment.[15] According to Herbert Kelman, there are three types of conformity: 1)compliance(which is public conformity, and it is motivated by the need for approval or the fear of disapproval; 2)identification(which is a deeper type of conformism than compliance); 3)internalization(which is to conform both publicly and privately).[16] Major factors that influence the degree of conformity include culture, gender, age, size of the group, situational factors, and different stimuli. In some cases,minority influence, a special case of informational influence, can resist the pressure to conform and influence the majority to accept the minority's belief or behaviors.[8] Conformityis the tendency to change our perceptions, opinions, or behaviors in ways that are consistent withgroup norms.[17]Norms are implicit, specific rules shared by a group of individuals on how they should behave.[18]People may be susceptible to conform to group norms because they want to gain acceptance from their group.[18] Some adolescents gain acceptance and recognition from their peers by conformity. 
This peer moderated conformity increases from the transition of childhood to adolescence.[19]It follows a U-shaped age pattern wherein conformity increases through childhood, peaking at sixth and ninth grades and then declines.[20]Adolescents often follow the logic that "if everyone else is doing it, then it must be good and right".[21]However, it is found that they are more likely to conform if peer pressure involves neutral activities such as those in sports, entertainment, andprosocial behaviorsrather thananti-social behaviors.[20]Researchers have found that peer conformity is strongest for individuals who reported strong identification with their friends or groups, making them more likely to adopt beliefs and behaviors accepted in such circles.[22][23] There is also the factor that the mere presence of a person can influence whether one is conforming or not. Norman Triplett (1898) was the researcher that initially discovered the impact that mere presence has, especially among peers.[24]In other words, all people can affect society. We are influenced by people doing things beside us, whether this is in a competitive atmosphere or not. People tend to be influenced by those who are their own age especially. Co-actors that are similar to us tend to push us more than those who are not. According toDonelson Forsyth, after submitting to group pressures, individuals may find themselves facing one of several responses to conformity. These types of responses to conformity vary in their degree of public agreement versus private agreement. When an individual finds themselves in a position where they publicly agree with the group's decision yet privately disagrees with the group's consensus, they are experiencingcomplianceoracquiescence. This is also referenced as apparent conformity. This type of conformity recognizes that behavior is not always consistent with our beliefs and attitudes, which mimics Leon Festinger'scognitive dissonancetheory. In turn,conversion, otherwise known asprivate acceptanceor "true conformity", involves both publicly and privately agreeing with the group's decision. In the case of private acceptance, the person conforms to the group by changing their beliefs and attitudes. Thus, this represents a true change of opinion to match the majority.[25] Another type of social response, which does not involve conformity with the majority of the group, is calledconvergence. In this type of social response, the group member agrees with the group's decision from the outset and thus does not need to shift their opinion on the matter at hand.[26] In addition, Forsyth shows that nonconformity can also fall into one of two response categories. Firstly, an individual who does not conform to the majority can displayindependence.Independence, ordissent, can be defined as the unwillingness to bend to group pressures. Thus, this individual stays true to his or her personal standards instead of the swaying toward group standards. Secondly, a nonconformist could be displayinganticonformityorcounterconformitywhich involves the taking of opinions that are opposite to what the group believes. This type of nonconformity can be motivated by a need to rebel against the status quo instead of the need to be accurate in one's opinion. To conclude, social responses to conformity can be seen to vary along a continuum from conversion to anticonformity. 
For example, a popular experiment in conformity research, known as the Asch situation or Asch conformity experiments, primarily includes compliance and independence. Other responses to conformity can also be identified in groups such as juries, sports teams, and work teams.[26] Muzafer Sherif was interested in knowing how many people would change their opinions to bring them in line with the opinion of a group. In his experiment, participants were placed in a dark room and asked to stare at a small dot of light 15 feet away. They were then asked to estimate the amount it moved. The trick was that there was no movement; the apparent motion was caused by a visual illusion known as the autokinetic effect.[27] The participants gave estimates ranging from 1 to 10 inches. On the first day, each person perceived different amounts of movement, but from the second to the fourth day, the same estimate was agreed on and others conformed to it.[28] Over time, once participants discussed their judgments aloud, their personal estimates converged with those of the other group members. Sherif suggested this simulated how social norms develop in a society, providing a common frame of reference for people. His findings emphasize that people rely on others to interpret ambiguous stimuli and new situations. Subsequent experiments were based on more realistic situations. In an eyewitness identification task, participants were shown a suspect individually and then in a lineup of other suspects. They were given one second to identify him, making it a difficult task. One group was told that their input was very important and would be used by the legal community. To the other group it was presented as simply an experiment. Being more motivated to get the right answer increased the tendency to conform. Those who wanted to be more accurate conformed 51% of the time, as opposed to 35% in the other group.[29] Sherif's study provided a framework for subsequent studies of influence, such as Solomon Asch's 1955 study. Solomon E. Asch conducted a modification of Sherif's study, assuming that when the situation was very clear, conformity would be drastically reduced. He exposed people in a group to a series of lines, and the participants were asked to match one line with a standard line. All participants except one were accomplices and gave the wrong answer in 12 of the 18 trials.[30] The results showed a surprisingly high degree of conformity: 74% of the participants conformed on at least one trial. On average, people conformed one third of the time.[30] A question is how the group would affect individuals in a situation where the correct answer is less obvious.[31] After his first test, Asch wanted to investigate whether the size or unanimity of the majority had greater influence on test subjects. "Which aspect of the influence of a majority is more important – the size of the majority or its unanimity? The experiment was modified to examine this question. In one series the size of the opposition was varied from one to 15 persons."[32] The results clearly showed that as more people opposed the subject, the subject became more likely to conform. However, the increasing majority was only influential up to a point: once there were three or more opponents, conformity remained at just over 30%.[30] Besides that, this experiment proved that conformity is powerful, but also fragile. It is powerful because merely having actors give the wrong answer led participants to give the wrong answer as well, even though they knew it was not correct.
It is also fragile, however, because in one of the variants for the experiment, one of the actors was supposed to give the correct answer, being an "ally" to the participant. With an ally, the participant was more likely to give the correct answer than he was before the ally. In addition, if the participant was able to write down the answer, instead of saying out loud, he was also more likely to put the correct answer. The reason for that is because he was not afraid of being different from the rest of the group since the answers were hidden.[33] This experiment was conducted by Yale University psychologistStanley Milgramin order to portray obedience to authority. They measured the willingness of participants (men aged 20 to 50 from a diverse range of occupations with different levels of education) to obey the instructions from an authority figure to supply fake electric shocks that would gradually increase to fatal levels. Regardless of these instructions going against their personal conscience, 65% of the participants shocked all the way to 450 volts, fully obeying the instruction, even if they did so reluctantly. Additionally, all participants shocked to at least 300 volts.[34] In this experiment, the subjects did not have punishments or rewards if they chose to disobey or obey. All they might receive is disapproval or approval from the experimenter. Since this is the case they had no motives to sway them to perform the immoral orders or not. One of the most important factors of the experiment is the position of the authority figure relative to the subject (the shocker) along with the position of the learner (the one getting shocked). There is a reduction in conformity depending on if the authority figure or learner was in the same room as the subject. When the authority figure was in another room and only phoned to give their orders the obedience rate went down to 20.5%. When the learner was in the same room as the subject the obedience rate dropped to 40%.[35] This experiment, led by psychology professor Philip G. Zimbardo, recruited Stanford students using a local newspaper ad, who he checked to be both physically and mentally healthy.[36]Subjects were either assigned the role of a "prisoner" or "guard" at random over an extended period of time, within a pretend prison setting on the Stanford University Campus. The study was set to be over the course of two weeks but it was abruptly cut short because of the behaviors the subjects were exuding. It was terminated due to the "guards" taking on tyrannical and discriminatory characteristics while "prisoners" showed blatant signs of depression and distress.[37] In essence, this study showed us a lot about conformity and power imbalance. For one, it demonstrates how situations determines the way our behavior is shaped and predominates over our personality, attitudes, and individual morals. Those chosen to be "guards" were not mean-spirited. But, the situation they were put in made them act accordingly to their role. Furthermore, this study elucidates the idea that humans conform to expected roles. Good people (i.e. the guards before the experiment) were transformed into perpetrators of evil. Healthy people (i.e. the prisoners before the experiment) were subject to pathological reactions. These aspects are also traceable to situational forces. 
This experiment also demonstrated the notion of the banality of evil, which holds that evil is not something special or rare, but something that exists in ordinary people.[citation needed] Harvard psychologist Herbert Kelman identified three major types of conformity.[16] Although Kelman's distinction has been influential, research in social psychology has focused primarily on two varieties of conformity. These are informational conformity, or informational social influence, and normative conformity, also called normative social influence. In Kelman's terminology, these correspond to internalization and compliance, respectively. There are naturally more than two or three variables in society that influence human psychology and conformity; the notion of "varieties" of conformity based upon "social influence" is ambiguous and indefinable in this context. According to Deutsch and Gérard (1955), conformity results from a motivational conflict (between the fear of being socially rejected and the wish to say what we think is correct) that leads to normative influence, and a cognitive conflict (others create doubts in what we think) which leads to informational influence.[38] Informational social influence occurs when one turns to the members of one's group to obtain and accept accurate information about reality.[39] A person is most likely to use informational social influence in certain situations: when a situation is ambiguous, people become uncertain about what to do and are more likely to depend on others for the answer; and during a crisis, when immediate action is necessary in spite of panic. Looking to other people can help ease fears, but unfortunately they are not always right. The more knowledgeable a person is, the more valuable they are as a resource. Thus, people often turn to experts for help. But once again people must be careful, as experts can make mistakes too. Informational social influence often results in internalization or private acceptance, where a person genuinely believes that the information is right.[28] Normative social influence occurs when one conforms to be liked or accepted by the members of the group. This need for social approval and acceptance is part of the human condition.[28] In addition, we know that when people do not conform with their group and are therefore deviants, they are less liked and even punished by the group.[40] Normative influence usually results in public compliance, doing or saying something without believing in it. Asch's 1951 experiment is one example of normative influence, although John Turner et al. argued that the post-experimental interviews showed that respondents were uncertain about the correct answers in some cases. The answers might have been evident to the experimenters, but the participants did not have the same experience. Subsequent studies pointed out that the participants were not known to each other and therefore did not face a threat of social rejection. See: Normative influence vs. referent informational influence. In a reinterpretation of the original data from these experiments, Hodges and Geyer (2006)[41] found that Asch's subjects were not so conformist after all: the experiments provide powerful evidence for people's tendency to tell the truth even when others do not. They also provide compelling evidence of people's concern for others and their views.
By closely examining the situation in which Asch's subjects find themselves they find that the situation places multiple demands on participants: They include truth (i.e., expressing one's own view accurately), trust (i.e., taking seriously the value of others' claims), and social solidarity (i.e., a commitment to integrate the views of self and others without deprecating). In addition to these epistemic values, there are multiple moral claims as well: These include the need for participants to care for the integrity and well-being of other participants, the experimenter, themselves, and the worth of scientific research. Deutsch & Gérard (1955) designed different situations that variated from Asch' experiment and found that when participants were writing their answer privately, they gave the correct one.[38] Normative influence, a function ofsocial impact theory, has three components.[42]Thenumber of peoplein the group has a surprising effect. As the number increases, each person has less of an impact. A group'sstrengthis how important the group is to a person. Groups we value generally have more social influence.Immediacyis how close the group is in time and space when the influence is taking place. Psychologists have constructed a mathematical model using these three factors and are able to predict the amount of conformity that occurs with some degree of accuracy.[43] Baron and his colleagues conducted a secondeyewitness studythat focused on normative influence. In this version, the task was easier. Each participant had five seconds to look at a slide instead of just one second. Once again, there were both high and low motives to be accurate, but the results were the reverse of the first study. The low motivation group conformed 33% of the time (similar to Asch's findings). The high motivation group conformed less at 16%. These results show that when accuracy is not very important, it is better to get the wrong answer than to risk social disapproval. An experiment using procedures similar to Asch's found that there was significantly less conformity in six-person groups offriendsas compared to six-person groups of strangers.[44]Because friends already know and accept each other, there may be less normative pressure to conform in some situations. Field studies on cigarette and alcohol abuse, however, generally demonstrate evidence of friends exerting normative social influence on each other.[45] Although conformity generally leads individuals to think and act more like groups, individuals are occasionally able to reverse this tendency and change the people around them. This is known asminority influence, a special case of informational influence. Minority influence is most likely when people can make a clear and consistent case for their point of view. If the minority fluctuates and shows uncertainty, the chance of influence is small. However, a minority that makes a strong, convincing case increases the probability of changing the majority's beliefs and behaviors.[46]Minority members who are perceived as experts, are high in status, or have benefited the group in the past are also more likely to succeed. Another form of minority influence can sometimes override conformity effects and lead to unhealthy group dynamics. A 2007 review of two dozen studies by the University of Washington found that a single "bad apple" (an inconsiderate or negligent group member) can substantially increase conflicts and reduce performance in work groups. 
Bad apples often create a negative emotional climate that interferes with healthy group functioning. They can be avoided by careful selection procedures and managed by reassigning them to positions that require less social interaction.[47] Stanley Milgramfound that individuals in Norway (from a collectivistic culture) exhibited a higher degree of conformity than individuals in France (from an individualistic culture).[48][clarification needed]Similarly, Berry studied two different populations: the Temne (collectivists) and the Inuit (individualists) and found that the Temne conformed more than the Inuit when exposed to a conformity task.[49] Bond and Smith compared 134 studies in a meta-analysis and found that there is a positive correlation between a country's level of collectivistic values and conformity rates in the Asch paradigm.[50]Bond and Smith also reported that conformity has declined in the United States over time. Influenced by the writings of late-19th- and early-20th-century Western travelers, scholars or diplomats who visited Japan, such asBasil Hall Chamberlain,George Trumbull LaddandPercival Lowell, as well as byRuth Benedict's influential bookThe Chrysanthemum and the Sword, many scholars of Japanese studies speculated that there would be a higher propensity to conform in Japanese culture than in American culture. However, this view was not formed on the basis ofempirical evidence collected in a systematic way, but rather on the basis of anecdotes and casual observations, which are subject to a variety ofcognitive biases. Modern scientific studies comparing conformity in Japan and the United States show that Americans conform in general as much as the Japanese and, in some situations, even more. Psychology professorYohtaro Takanofrom theUniversity of Tokyo, along with Eiko Osaka reviewed four behavioral studies and found that the rate of conformity errors that the Japanese subjects manifested in the Asch paradigm was similar with that manifested by Americans.[51]The study published in 1970 byRobert Fragerfrom theUniversity of California, Santa Cruzfound that the percentage of conformity errors within the Asch paradigm was significantly lower in Japan than in the United States, especially in the prize condition. Another study published in 2008, which compared the level of conformity among Japanese in-groups (peers from the same college clubs) with that found among Americans found no substantial difference in the level of conformity manifested by the two nations, even in the case of in-groups.[52] Societal norms often establish gender differences and researchers have reported differences in the way men and women conform to social influence.[53][54][55][56][57][58][59]For example, Alice Eagly and Linda Carli performed a meta-analysis of 148 studies of influenceability. They found that women are more persuadable and more conforming than men in group pressure situations that involve surveillance.[60]Eagly has proposed that this sex difference may be due to different sex roles in society.[61]Women are generally taught to be more agreeable whereas men are taught to be more independent. The composition of the group plays a role in conformity as well. In a study by Reitan and Shaw, it was found that men and women conformed more when there were participants of both sexes involved versus participants of the same sex. 
Subjects in the groups with both sexes were more apprehensive when there was a discrepancy amongst group members, and thus the subjects reported that they doubted their own judgments.[54]Sistrunk and McDavid made the argument that women conformed more because of a methodological bias.[62]They argued that because stereotypes used in studies are generally male ones (sports, cars..) more than female ones (cooking, fashion..), women felt uncertain and conformed more, which was confirmed by their results. Research has noted age differences in conformity. For example, research with Australian children and adolescents ages 3 to 17 discovered that conformity decreases with age.[63]Another study examined individuals that were ranged from ages 18 to 91.[64]The results revealed a similar trend – older participants displayed less conformity when compared to younger participants. In the same way that gender has been viewed as corresponding to status, age has also been argued to have status implications. Berger, Rosenholtz and Zelditch suggest that age as a status role can be observed among college students. Younger students, such as those in their first year in college, are treated as lower-status individuals and older college students are treated as higher-status individuals.[65]Therefore, given these status roles, it would be expected that younger individuals (low status) conform to the majority whereas older individuals (high status) would be expected not to conform.[66] Researchers have also reported an interaction of gender and age on conformity.[67]Eagly and Chrvala examined the role of age (under 19 years vs. 19 years and older), gender and surveillance (anticipating responses to be shared with group members vs. not anticipating responses being shared) on conformity to group opinions. They discovered that among participants that were 19 years or older, females conformed to group opinions more so than males when under surveillance (i.e., anticipated that their responses would be shared with group members). However, there were no gender differences in conformity among participants who were under 19 years of age and in surveillance conditions. There were also no gender differences when participants were not under surveillance. In a subsequent research article, Eagly suggests that women are more likely to conform than men because of lower status roles of women in society. She suggests that more submissive roles (i.e., conforming) are expected of individuals that hold low status roles.[66]Still, Eagly and Chrvala's results do conflict with previous research which have found higher conformity levels among younger rather than older individuals. 
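The mathematical model of normative influence mentioned earlier, and the group-size findings discussed next, are commonly summarized by Latané's social impact theory. One textbook formulation (given here as a simplified illustration rather than as the exact model fitted in any particular study) treats impact as a multiplicative function of the strength, immediacy, and number of sources, with number entering as a power law so that each additional person adds less than the previous one:

```latex
% Social forces: impact I depends multiplicatively on source strength S,
% immediacy M, and the number of sources N.
I = f(S \cdot M \cdot N)

% Psychosocial law: impact grows with group size but with diminishing
% returns; the exponent t is typically reported to be less than 1.
I = s\,N^{t}, \qquad 0 < t < 1
```

This concave growth in N is consistent with the observation below that conformity rises sharply up to a majority of about three and then levels off.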
Although conformity pressures generally increase as the size of the majority increases, Asch's 1951 experiment reported that increasing the size of the group has no additional impact beyond a majority of size three.[68] Brown and Byrne's 1997 study described a possible explanation: people may suspect collusion when the majority exceeds three or four.[68] Gerard's 1968 study reported a linear relationship between group size and conformity when the group size ranges from two to seven people.[69] According to Latané's 1981 study, the number of people in the majority is one factor that influences the degree of conformity, alongside other factors such as strength and immediacy.[70] Moreover, a study suggests that the effects of group size depend on the type of social influence operating.[71] This means that in situations where the group is clearly wrong, conformity will be motivated by normative influence; the participants will conform in order to be accepted by the group. A participant may not feel much pressure to conform when the first person gives an incorrect response. However, conformity pressure will increase as each additional group member gives the same incorrect response.[71] Research has found different group and situational factors that affect conformity. Accountability increases conformity: if an individual is trying to be accepted by a group that has certain preferences, then they are more likely to conform to match the group.[72] Similarly, the attractiveness of group members increases conformity: if an individual wishes to be liked by the group, they are increasingly likely to conform.[73] Accuracy also affects conformity, as the more accurate and reasonable the majority's decision is, the more likely the individual will be to conform.[74] As mentioned earlier, size also affects individuals' likelihood of conforming.[33] The larger the majority, the more likely an individual is to conform to that majority. Similarly, the less ambiguous the task or decision is, the more likely someone is to conform to the group.[75] When tasks are ambiguous, people are less pressured to conform. Task difficulty also increases conformity, and research has found that conformity increases when the task is difficult but also important.[29] Research has also found that as individuals become more aware that they disagree with the majority, they feel more pressure, and hence are more likely to conform to the decisions of the group.[76] Likewise, when responses must be made face-to-face, individuals increasingly conform, and therefore conformity increases as the anonymity of responses in a group decreases. Conformity also increases when individuals have committed themselves to the group making the decisions.[77] Conformity has also been shown to be linked to cohesiveness. Cohesiveness is how strongly members of a group are linked together, and conformity has been found to increase as group cohesiveness increases.[78] Similarly, conformity is also higher when individuals are committed to and wish to stay in the group.
Conformity is also higher when individuals are in situations involving existential thoughts that cause anxiety, in these situations individuals are more likely to conform to the majority's decisions.[79] In 1961 Stanley Milgram published a study in which he utilized Asch's conformity paradigm using audio tones instead of lines; he conducted his study in Norway and France.[48]He found substantially higher levels of conformity than Asch, with participants conforming 50% of the time in France and 62% of the time in Norway during critical trials. Milgram also conducted the same experiment once more, but told participants that the results of the study would be applied to the design of aircraft safety signals. His conformity estimates were 56% in Norway and 46% in France, suggesting that individuals conformed slightly less when the task was linked to an important issue. Stanley Milgram's study demonstrated that Asch's study could be replicated with other stimuli, and that in the case of tones, there was a high degree of conformity.[80] Evidence has been found for the involvement of the posterior medial frontal cortex (pMFC) in conformity,[81]an area associated withmemory and decision-making. For example, Klucharev et al.[82]revealed in their study that by using repetitivetranscranial magnetic stimulationon the pMFC, participants reduced their tendency to conform to the group, suggesting a causal role for the brain region in social conformity. Neuroscience has also shown how people quickly develop similar values for things. Opinions of others immediately change the brain's reward response in theventral striatumto receiving or losing the object in question, in proportion to how susceptible the person is to social influence. Having similar opinions to others can also generate a reward response.[80] Theamygdalaandhippocampushave also been found to be recruited when individuals participated in a social manipulation experiment involving long-term memory.[83]Several other areas have further been suggested to play a role in conformity, including theinsula, thetemporoparietal junction, theventral striatum, and the anterior and posteriorcingulate cortices.[84][85][86][87][88] More recent work[89]stresses the role oforbitofrontal cortex(OFC) in conformity not only at the time of social influence,[90]but also later on, when participants are given an opportunity to conform by selecting an action. In particular, Charpentier et al. found that the OFC mirrors the exposure to social influence at a subsequent time point, when a decision is being made without the social influence being present. The tendency to conform has also been observed in the structure of the OFC, with a greatergrey mattervolume in high conformers.[91]
https://en.wikipedia.org/wiki/Conformity
Internet identity(IID), alsoonline identity,online personality,online personaorinternet persona, is asocial identitythat an Internet user establishes in online communities and websites. It may also be an actively constructed presentation of oneself. Although some people choose to use their real names online, some Internet users prefer to be anonymous, identifying themselves by means of pseudonyms, which reveal varying amounts ofpersonally identifiable information. An online identity may even be determined by a user's relationship to a certain social group they are a part of online. Some can be deceptive about their identity. In some online contexts, includingInternet forums,online chats, andmassively multiplayer online role-playing games(MMORPGs), users can represent themselves visually by choosing an avatar, an icon-sized graphic image. Avatars are one way users express their online identity.[1]Through interaction with other users, an established online identity acquires areputation, which enables other users to decide whether the identity is worthy oftrust.[2]Online identities are associated with users throughauthentication, which typically requiresregistrationandlogging in. Some websites also use the user'sIP addressortracking cookiesto identify users.[3] The concept of theself, and how this is influenced by emerging technologies, are a subject of research in fields such aseducation,psychology, andsociology. Theonline disinhibition effectis a notable example, referring to a concept of unwise and uninhibited behavior on the Internet, arising as a result of anonymity and audience gratification.[4] Triangular relationships of personal online identity There are three key interaction conditions in the identity processes: Fluid Nature of Online and Offline, overlapping social networks, and expectations of accuracy. Social actors accomplish the ideal-authentic balance through self-triangulation, presenting a coherent image in multiple arenas and through multiple media. Online environments provide individuals with the ability to participate in virtual communities. Although geographically unconnected, they are united by common interests and shared cultural experiences. So cultural meanings of race, class, and gender flow into online identity. Every social actor assumes a variety of roles, including those of mother, father, employee, friend, etc. Each character maintains a comprehensive understanding of others, or the standards and ethical expectations of the people that inhabit the world. Even while self-versions sometimes overlap, various networks may have slightly divergent, sometimes contradicting, expectations for players. One of the main complex factors in the network era is to bring together previously segmented networks. It is customary for individuals to appropriately portray themselves on social networking platforms. By accurate, it does not imply a "True Self". Digitally mediated identity performance represents a specific version of the self, just like all other identity performance contexts. Thesocial web, i.e. the usage of the web to support the social process, represents a space in which people have the possibility toexpressand expose their identity[5]in a social context. For example, people define their identity explicitly by creatinguser profilesinsocial network servicessuch asFacebookorLinkedInandonline dating services.[6]By expressing opinions onblogsand other social media, they define more tacitidentities. 
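The cookie-based identification mentioned above can be made concrete with a minimal sketch. The Python example below, which uses only the standard library, is a hypothetical illustration of the general mechanism rather than any particular website's implementation: it assigns each new browser a pseudonymous visitor_id cookie and recognizes that identifier on later requests, which is enough to associate repeat visits with one online identity even without a login.

```python
import uuid
from http import cookies
from http.server import BaseHTTPRequestHandler, HTTPServer

class IdentityHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse whatever cookies the browser sent with this request.
        jar = cookies.SimpleCookie(self.headers.get("Cookie", ""))
        visitor_id = jar["visitor_id"].value if "visitor_id" in jar else None

        self.send_response(200)
        if visitor_id is None:
            # First visit: mint a pseudonymous identifier and ask the
            # browser to send it back on every subsequent request.
            visitor_id = uuid.uuid4().hex
            self.send_header("Set-Cookie", f"visitor_id={visitor_id}; HttpOnly; Path=/")
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(f"Hello, visitor {visitor_id}\n".encode())

if __name__ == "__main__":
    # Requests from the same browser now carry the same visitor_id,
    # linking them into a single (pseudonymous) online identity.
    HTTPServer(("127.0.0.1", 8000), IdentityHandler).serve_forever()
```

Real services typically pair such identifiers with authenticated accounts, IP addresses, and third-party trackers, which is what gives rise to the privacy issues discussed in the following paragraphs.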
The disclosure of a person's identity may present certain issues[2]related toprivacy. Many people adopt strategies that help them control the disclosure of their personal information online.[7]Some strategies require users to invest considerable effort. The emergence of the concept of online identity has raised many questions among academics.Social networking servicesand onlineavatarshave complicated the concept of identity. Academia has responded to these emerging trends by establishing domains of scholarly research such astechnoselfstudies, which focuses on all aspects of human identity in technological societies. Online activities may affect our offline personal identity, as well.[8]Avi Marciano has coined the term "VirtuReal" to resolve the contested relationship between online and offline environments in relation to identity formation. Studying online usage patterns of transgender people, he suggested that the internet can be used as preliminary, complementary, and/or alternative sphere. He concludes that although "the offline world sets boundaries that potentially limit the latitude within the online world, these boundaries are wide enough to allow mediated agency that empowers transgender users. Consequently, the term VirtuReal "reflects both the fact that it provides an empowering virtual experience that compensates for offline social inferiority, and the fact that it is nevertheless subject to offline restrictions".[9] Dorian Wiszniewski andRichard Coyne, in their contribution to the bookBuilding Virtual Communities, explore online identity, with emphasis on the concept of "masking" identity.[clarification needed]They say that whenever an individual interacts in a social sphere, they portray a mask of their identity. This is no different online and becomes even more pronounced due to the decisions an online contributor makes concerning his or her online profile. He or she must answer specific questions about age,gender, address,username, and so forth. With the accrual of one's online activity, his or her mask is increasingly defined by his or her style of writing, vocabulary, and topics. The kind of mask one chooses reveals something about the subject behind the mask, which might be referred to as the "metaphor" of the mask. The online mask does not reveal the actual identity of a person; it reveals an example of what lies behind the mask. If a person chooses to act like arock staronline, this metaphor reveals an interest in rock music but may also indicate a lack ofself-esteem. A person may also choose to craft a fake identity, whether entirely fictional, already in existence, borrowed,or stolen. Because of many emotional and psychological dynamics, people can be reluctant to interact online. By evoking a mask of identity, a person can create a safety net. An anonymous or fake identity is one precaution people take so that their true identity is not stolen or abused. By making the mask available, people can interact with some degree of confidence without fear. Wiszniewski and Coyne state, "Education can be seen as the change process by which identity is realized, how one finds one's place. Education implicates the transformation of identity. Education, among other things, is a process of building up a sense of identity, generalized as a process of edification." Students interacting in anonline communitymust reveal something about themselves and have others respond to this contribution. 
In this manner, the mask is constantly being formulated in dialogue with others and thereby students will gain a richer and deeper sense of who they are. There will be a process of edification that will help students come to understand their strengths and weaknesses.[10] The blended mask perspective is likened to the concept of 'blended identity',[11]whereby the offline-self informs the creation of a new online-self, which in turn informs the offline-self through further interaction with those the individual first met online. It means people's self-identity varies in different social or cultural contexts. Asblogsallow an individual to express his or her views in individual essays or as part of a wider discussion, it creates a public forum for expressing ideas. Bloggers often choose to use pseudonyms, whether in platforms such asWordPressor in interest-centered blog sites, to protect personal information and allow them more editorial freedom to express ideas that might be unpopular with their family, employers, etc. Use of a pseudonym (and a judicious approach to revealing personal information) can allow a person to protect their real identities, but still build a reputation online using the assumed name.[12] Digital identity management has become a necessity when applying for jobs while working for a company. Social media has been a tool for human resources for years. A KPMG report on social media in human resources say that 76 percent of American companies used LinkedIn for recruiting.[13]The ease of search means that reputation management will become more vital especially in professional services such as lawyers, doctors and accountants. Online social networks likeFacebookandMySpaceallow people to maintain an online identity with some overlap between online and real-world context. These identities are often created to reflect a specific aspect or ideal version of themselves. Representations include pictures, communications with other 'friends' and membership in network groups. Privacy control settings on social networks are also part of social networking identity.[14] Some users may use their online identity as an extension of their physical selves, and center their profiles around realistic details. These users value continuity in their identity, and would prefer being honest with the portrayal of themselves. However, there is also a group of social network users that would argue against using a real identity online. These users have experimented with online identity, and ultimately what they have found is that it is possible to create an alternate identity through the usage of such social networks. For example, a popular blogger on medium.com[15]writes under the name of Kel Campbell – a name that was chosen by her, not given to her. She states that when she was verbally attacked online by another user, she was able to protect herself from the sting of the insult by taking it as Kel, rather than her true self. Kel became a shield of sorts, and acted as a mask that freed the real user beneath it. Research from scientists such asdanah boydand Knut Lundby has found that in some cultures, the ability to form an identity online is considered a sacred privilege. This is because having an online identity allows the user to accomplish things that otherwise are impossible to do in real life. 
These cultures believe that the self has become a subjective concept in online spaces; by logging onto their profiles, users are essentially freed from the prison of a physical body and can "create a narrative of the self in virtual space that may be entirely new".

With the development of social networks, a new economic phenomenon has appeared: doing business via social networks. For example, many users of WeChat, known as wei-businessmen (a form of e-commerce conducted within WeChat), sell products through the platform. Doing business via social networks is not easy, because the identities of users in social networks are not the same as those in the real world. For the sake of security, people tend not to trust others on social networks, particularly where money is involved, so reputation is very important to wei-businessmen. Once customers decide to shop via WeChat, they prefer wei-businessmen with high reputations. Wei-businessmen therefore need to invest enormous effort in building a reputation among WeChat users, which in turn increases the chance that other users will purchase from them.

Online identity in classrooms forces people to reevaluate their concepts of classroom environments.[16] With the invention of online classes, classrooms have changed and no longer rely on traditional face-to-face communication; these interactions have been replaced by the computer screen. Students are no longer defined by visual characteristics unless they make them known. There are pros and cons to each side. In a traditional classroom, students are able to visually connect with a teacher standing in the same room. During class, if questions arise, clarification can be provided immediately. Students can create face-to-face connections with other students, and these connections can easily be extended beyond the classroom.[17]

With the prevalence of remote Internet communications, students do not form preconceptions of their classmates based on the classmate's appearance or speech characteristics.[18] Rather, impressions are formed based only on the information presented by the classmate. Some students are more comfortable with this paradigm as it avoids the discomfort of public speaking. Students who do not feel comfortable stating their ideas in class can take time to sit down and think through exactly what they wish to say.[19]

Communication via written media may lead students to take more time to think through their ideas, since their words are in a more permanent setting (online) than most conversations carried on during class. Online learning situations also cause a shift in the perception of the professor. Whereas anonymity may help some students achieve a greater level of comfort, professors must maintain an active identity with which students may interact. Students should feel that their professor is ready to help whenever they may need it. Although students and professors may not be able to meet in person, emails and correspondence between them should occur in a timely manner. Without this, students tend to drop online classes, since it seems that they are wandering through a course without anyone to guide them.[20][21][22]

In the virtual world, users create a personal avatar and communicate with others through this virtual identity. The virtual figure and voice may draw from the user's real appearance or from fantasy worlds.
The virtual figure to some degree reflects personal expectations, and users may adopt a different personality in the virtual world than in reality.

An Internet forum, or message board, is an online discussion site where people can hold conversations in the form of posted messages. There are many types of Internet forums based on particular themes or groups, and the properties of online identities differ across types of forums. For example, the users of a university BBS usually know some of the other users in real life, since only students or professors of that university can be members; however, freedom of expression is limited, since some university BBSs are under the control of the school administration and identities are tied to student IDs. On the other hand, some question-and-answer websites, such as "ZhiHu" in China, are open to the public, and users can create accounts with only an e-mail address. They can nevertheless describe their specialties or personal experiences to demonstrate reliability on certain questions, and other users can invite them to answer questions based on their profiles. The answers and profiles can be either real-name or anonymous.

A discussed positive aspect of virtual communities is that people can present themselves without fear of persecution, whether it is personality traits, behaviors that they are curious about, or the announcement of a real-world identity component that has never before been announced.[citation needed] This freedom results in new opportunities for society as a whole, especially the ability for people to explore the roles of gender and sexuality in a manner that can be harmless, yet interesting and helpful to those undertaking the change. Online identity has given people the opportunity to feel comfortable in wide-ranging roles, some of which may be underlying aspects of the user's life that the user is unable to portray in the real world.[23]

Online identity has a beneficial effect for minority groups, including racial and ethnic minority populations and people with disabilities. Online identities may help remove prejudices created by stereotypes found in real life, and thus provide a greater sense of inclusion. One example of these opportunities is the establishment of many communities welcoming LGBTQ+ teenagers who are learning to understand their sexuality. These communities allow teenagers to share their experiences with one another or with older members of the LGBTQ+ community, and provide a non-threatening, non-judgmental safe place. In a review of such a community, Silberman quotes an information technology worker, Tom Reilly, as stating: "The wonderful thing about online services is that they are an intrinsically decentralized resource. Kids can challenge what adults have to say and make the news".[24] If teen organizers are successful anywhere, news of it is readily available. The Internet is arguably the most powerful tool that young people with alternative sexualities have ever had.[25]

The online world provides users with a choice to determine which sex, sexual preference and sexual characteristics they would like to embody.
In each online encounter, a user essentially has the opportunity to interchange which identity they would like to portray.[26]As McRae argues in Surkan (2000), "The lack of physical presence and the infinite malleability of bodies complicates sexual interaction in a singular way: because the choice of gender is an "option" rather than a strictly defined biological characteristic, the entire concept of gender as a primary marker of identity becomes partially subverted." Online identity can offer potential social benefits to those with physical and sensory disabilities. The flexibility of online media provides control over their disclosure of impairment, an opportunity not typically available in real world social interactions.[27]Researchers highlight its value in improving inclusion. However, the affordance of normalization offers the possibility of experiencing non-stigmatized identities while also offering the capacity to create harmful and dangerous outcomes, which may jeopardize participants' safety.[28] Primarily, concerns regarding virtual identity revolve around the areas of misrepresentation and the contrasting effects of on and offline existence. Sexuality and sexual behavior online provide some of the most controversial debate with many concerned about the predatory nature of some users. This is particularly in reference to concerns aboutchild pornographyand the ability ofpedophilesto obscure their identity.[29] The concerns regarding the connection between on and offline lives have challenged the notions of what constitutes a real experience. In reference to gender, sexuality and sexual behavior, the ability to play with these ideas has resulted in a questioning of how virtual experience may affect one's offline emotions.[citation needed]As McRae states, virtual sex not only complicates but drastically unsettles the division between mind, body, and self that has become a comfortable truism in Western metaphysics. When projected into virtuality, mind, body and self all become consciously manufactured constructs through which individuals interact with each other.[30] The identities that people define in the social web are not necessarily facets of their offline self. Studies show that people lie about themselves on online dating services.[31][32]In the case of social network services such as Facebook, companies have proposed to sell "friends" as a way to increase a user's visibility, further calling into question the reliability of a person's social identity.[33] Van Gelder[34]reported an incident occurring on a computer conferencing system during the early 80s where a male psychiatrist posed as Julie, a female psychologist with multiple disabilities including deafness, blindness, and serious facial disfigurement. Julie endeared herself to the computer conferencing community, finding psychological and emotional support from many members. The psychiatrist's choice to present differently was sustained by drawing upon the unbearable stigma attached to Julie's multiple disabilities as justification for not meeting face-to-face. Lack of visual cues allowed the identity transformation to continue, with the psychiatrist also assuming the identity of Julie's husband, who adamantly refused to allow anyone to visit Julie when she claimed to be seriously ill. This example highlights the ease with which identity may be constructed, transformed, and sustained by the textual nature of online interaction and the visual anonymity it affords. 
Catfishingis a way for a user to create a fake online profile, sometimes with fake photos and information, in order to enter into a relationship, intimate or platonic, with another user.[35]Catfishing became popular in mainstream culture through the MTV reality showCatfish. A problem facing anyone who hopes to build a positive online reputation is that reputations are site-specific; for example, one's reputation oneBaycannot be transferred toSlashdot. Multiple proposals have been made[36][citation needed]to build anidentity managementinfrastructure into the Webprotocols. All of them require an effectivepublic key infrastructureso that the identity of two separate manifestations of an online identity (say, one onWikipediaand another onTwitter) are probably one and the same. OpenID, an open, decentralized standard for authenticating users is used for access control, allowing users to log on to different services with the same digital identity. These services must allow and implement OpenID. Context collapse describes the phenomena where the occurrence of multiple social groups in one space causes confusion in how to manage one's online identity.[37]This suggests that in managing identities online, individuals are challenged to differentiate their online expression due to the unmanageable size of audience variations.[38]This phenomenon is particularly relevant to social media platforms.[37]Users are often connected with a wide range of social groups such as family, colleagues and friends. When posting on social media, the presence of these different social groups makes it difficult to decide which aspect of one's personality to present.[37]The term was first coined in 2003 by Microsoft researcher danah boyd in relation to social networking platforms such asMySpaceandFriendster.[37]Since 2003, the issue of context collapse has become increasingly significant. Users have been forced to implement strategies to combat context collapse. These strategies include using stricterprivacy settingsand engaging in more "ephemeral mediums" such as Instagram stories and Snapchat in which posts are only temporarily accessible and are less likely to have permanent consequences or an effect on one's reputation.[37] Given the malleability of online identities, some economists have expressed surprise that flourishing trading sites, such as eBay, have developed on the Internet.[citation needed][39]When two pseudonymous identities propose to enter into an online transaction, they are faced with theprisoner's dilemma: the deal can succeed only if the parties are willing to trust each other, but they have no rational basis for doing so. But successful Internet trading sites have developedreputation managementsystems, such as eBay'sfeedbacksystem, which record transactions and provide the technical means by which users can rate each other's trustworthiness. However, users with malicious intent can still cause serious problems on such websites.[14] An online reputation is the perception that one generates on the Internet based on theirdigital footprint. Digital footprints accumulate through all of the content shared, feedback provided and information that created online.[40]Due to the fact that if someone has a bad online reputation, he can easily change his pseudonym, new accounts on sites such as eBay or Amazon are usually distrusted. If an individual or company wants to manage their online reputation, they will face many more difficulties. 
This is why a merchant on the web having abrick and mortarshop is usually more trusted. Ultimately, online identity cannot be completely free from the social constraints that are imposed in the real world. As Westfall (2000, p. 160) discusses, "the idea of truly departing from social hierarchy and restriction does not occur on the Internet (as perhaps suggested by earlier research into the possibilities presented by the Internet) with identity construction still shaped by others. Westfall raises the important, yet rarely discussed, issue of the effects of literacy and communication skills of the online user." Indeed, these skills or the lack thereof have the capacity to shape one's online perception as they shape one's perception through a physical body in the "real world." This issue of gender and sexual reassignment raises the notion of disembodiment and its associated implications. "Disembodiment" is the idea that once the user is online, the need for the body is no longer required, and the user can participate separately from it. This ultimately relates to a sense of detachment from the identity defined by the physical body. In cyberspace, many aspects of sexual identity become blurred and are only defined by the user. Questions of truth will therefore be raised, particularly in reference to online dating andvirtual sex.[citation needed]As McRae states, "Virtual sex allows for a certain freedom of expression, of physical presentation and of experimentation beyond one's own real-life limits".[30]At its best, it not only complicates but drastically unsettles the division between mind, body and self in a manner only possible through the construction of an online identity. The future of onlineanonymitydepends on how an identity management infrastructure is developed.[41]Law enforcement officials often express their opposition to online anonymity andpseudonymity, which they view as an open invitation to criminals who wish to disguise their identities.[original research?]Therefore, they call for an identity management infrastructure that would irrevocably tie online identity to a person'slegal identity[citation needed]; in most such proposals, the system would be developed in tandem with a secure nationalidentity document.Eric Schmidt, CEO ofGoogle, has stated that theGoogle+social network is intended to be exactly such an identity system.[42]The controversy resulting from Google+'s policy of requiring users to sign in using legal names has been dubbed the "nymwars".[43] Online civil rights advocates, in contrast, argue that there is no need for a privacy-invasive system because technological solutions, such as reputation management systems, are already sufficient and are expected to grow in their sophistication and utility.[citation needed] An online predator is an Internet user who exploits other users' vulnerability, often for sexual or financial purposes. It is relatively easy to create an online identity which is attractive to people that would not normally become involved with the predator, but fortunately there are a few means by which you can make sure that a person whom you haven't met is actually who they say they are. Many people will trust things such as the style in which someone writes, or the photographs someone has on their web page as a way to identify that person, but these can easily be forged. 
In long-term Internet relationships, it may be difficult to know what someone's identity is actually like.[citation needed][44]

The age group most vulnerable to online predators is often considered to be young teenagers or older children.[45] "Over time - perhaps weeks or even months - the stranger, having obtained as much personal information as possible, grooms the child, gaining his or her trust through compliments, positive statements, and other forms of flattery to build an emotional bond."[46] The victims often do not suspect anything until it is too late, as the other party usually misleads them into believing that they are of similar age.[citation needed][47]

The show Dateline on NBC has conducted three investigations of online predators. The show had adults pose online as teenage juveniles, engage in sexually explicit conversations with other adults (the predators), and arrange to meet them in person. Instead of meeting a teenager, the unsuspecting adult was confronted by Chris Hansen, an NBC News correspondent, arrested, and shown on nationwide television. Dateline held investigations in five different locations, apprehending a total of 129 men.[48]

Federal laws have been passed in the U.S. to assist the government in catching online predators. Some of these allow wiretapping, so online offenders can be caught before a child becomes a victim.[49] In California, where one Dateline investigation took place, it is a misdemeanor to have sexually tinged conversations with a child online. The men who came to the house were charged with a felony because their intent was obvious.[50]

An online identity that has acquired an excellent reputation is valuable for two reasons: first, one or more persons invested a great deal of time and effort to build the identity's reputation; and second, other users look to the identity's reputation as they try to decide whether it is sufficiently trustworthy. It is therefore unsurprising that online identities have been put up for sale at online auction sites. However, conflicts arise over the ownership of online identities. A user of the massively multiplayer online game EverQuest, which is owned by Sony Online Entertainment, Inc., once attempted to sell his EverQuest identity on eBay. Sony objected, asserting that the character is Sony's intellectual property, and demanded the removal of the auction; under the terms of the U.S. Digital Millennium Copyright Act (DMCA), eBay could have become a party to a copyright infringement lawsuit if it failed to comply.
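The feedback-based reputation systems discussed earlier, such as eBay's, amount to a small piece of bookkeeping: each pseudonymous identity accumulates ratings from past transactions, and prospective partners consult the aggregate before deciding whether to trust it. The sketch below is a simplified illustration rather than eBay's actual algorithm; the scoring formula is an assumption chosen for clarity.

```python
from collections import defaultdict

# Minimal sketch of a transaction-feedback reputation system.
# Each rating is +1 (positive) or -1 (negative).
_feedback = defaultdict(list)  # pseudonym -> list of ratings

def leave_feedback(pseudonym: str, rating: int) -> None:
    assert rating in (+1, -1)
    _feedback[pseudonym].append(rating)

def score(pseudonym: str) -> float:
    """Share of positive ratings; 0.0 for identities with no history."""
    ratings = _feedback[pseudonym]
    if not ratings:
        return 0.0  # a brand-new pseudonym has no reputation to trade on
    return ratings.count(+1) / len(ratings)

leave_feedback("honest_seller", +1)
leave_feedback("honest_seller", +1)
leave_feedback("honest_seller", -1)
print(score("honest_seller"))   # about 0.67
print(score("fresh_account"))   # 0.0: no history, no basis for trust
```

Because accumulated positive feedback cannot be transferred to a fresh pseudonym, an identity with a long, good record becomes worth protecting, and, as noted above, sometimes worth selling.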
https://en.wikipedia.org/wiki/Online_identity
Online deliberation is a broad term used to describe many forms of non-institutional, institutional and experimental online discussion.[1] The term also describes the emerging field of practice and research related to the design, implementation and study of deliberative processes that rely on the use of electronic information and communications technologies (ICT). Although the Internet and social media have fostered discursive participation and deliberation online through computer-mediated communication,[2] the academic study of online deliberation started in the early 2000s.[3]

A range of studies has suggested that group size, volume of communication, interactivity between participants, message characteristics, and social media characteristics can affect online deliberation,[4][2] and that democratic deliberation varies across platforms. For example, news forums have been shown to have the highest degree of deliberation, followed by news websites and then Facebook.[5] Differences in how well platforms support deliberation have been attributed to numerous factors, such as moderation, the availability of information, and a focus on a well-defined topic.[5]

A limited number of studies have explored the extent to which online deliberation can produce results similar to traditional, face-to-face deliberation. A 2004 deliberative poll comparing face-to-face and online deliberation on U.S. foreign policy found similar results.[6] A similar study in 2012 in France found that, compared to the offline process, online deliberation was more likely to increase women's participation and to promote the justification of arguments by participants.[7]

Research on online deliberation suggests that there are five key design considerations that affect the quality of dialogue: asynchronous versus synchronous communication, post hoc moderation versus pre-moderation, empowering versus un-empowering spaces, asking discrete versus broad questions, and the quality of information.[8] Other scholars have suggested that successful online deliberation follows four central rules: discussions must be inclusive, rational-critical, reciprocal and respectful.[1] In general, online deliberation requires participants to be able to work together comfortably in order to deliberate well, which often requires rules and regulations that help members feel comfortable with one another.[9]

Researchers have also questioned the utility of online deliberation as an extension of the public sphere, challenging the idea that online deliberation is no less beneficial than face-to-face interaction.[2] Computer-mediated discourse is deemed impersonal, and has been found to encourage online incivility.[10] Furthermore, users who participate in online discussions about politics have been found to comment only in groups that agree with their own views,[11] indicating the possibility that online deliberation mainly promotes motivated reasoning and reinforces preexisting attitudes.

Scholarly research into online deliberation is interdisciplinary and includes practices such as online consultation, e-participation, e-government,[12][2] Citizen-to-Citizen (C2C),[12][2] online deliberative polling, crowdsourcing, online facilitation, online research communities, interactive e-learning, civic dialogue in Internet forums and online chat, and group decision making that utilizes collaborative software and other forms of computer-mediated communication.
Work in all these endeavors is tied together by the challenge of using electronic media in a way that deepens thinking and improves mutual understanding.
https://en.wikipedia.org/wiki/Online_deliberation
Participatory mediaiscommunication mediawhere theaudiencecan play an active role in the process of collecting, reporting, analyzing and disseminating content.[1]Citizen / participatory journalism,citizen media,empowerment journalismanddemocratic mediaare related principles. Participatory media includescommunity media,blogs,wikis,RSS,taggingandsocial bookmarking, music-photo-video sharing,mashups,podcasts,participatory videoprojects andvideoblogs. All together they can be described as "e-services, which involve end-users as active participants in the value creation process".[2]However, "active [...] uses of media are not exclusive to our times".[3]"In the history of mediated communication we can find many variations of participatory practices. For instance, the initial phase of theradioknew many examples of non-professional broadcasters".[4] Marshall McLuhandiscussed the participatory potential of media already in the 1970s but in the era of digital andsocial media, the theory ofparticipatory culturebecomes even more acute as the borders between audiences and media producers blurred.[5] These distinctly different media share three common, interrelated characteristics:[6] Full-fledged participatory news sites includeNowPublic,OhmyNews, DigitalJournal.com,On the Ground News ReportsandGroundReport. With participatory media, the boundaries between audiences and creators become blurred and often invisible. In the words ofDavid Sifry, the founder ofTechnorati, a search engine for blogs, one-to-many "lectures" (i.e., from media companies to their audiences) are transformed into "conversations" among "the people formerly known as the audience". This changes the tone of public discussions. The mainstream media, saysDavid Weinberger, a blogger, author and fellow atHarvard University'sBerkman Center for Internet & Society, "don't get how subversive it is to take institutions and turn them into conversations". That is because institutions are closed, assume a hierarchy and have trouble admitting fallibility, he says, whereas conversations are open-ended, assume equality and eagerly concede fallibility.[7] Some proposed that journalism can be more "participatory" because theWorld Wide Webhas evolved from "read-only" to "read-write". In other words, in the past only a small proportion of people had the means (in terms of time, money, and skills) to create content that could reach large audiences. Now the gap between the resources and skills needed to consume online content versus the means necessary to produce it have narrowed significantly to the point that nearly anyone with a web-connected device can create media.[8]AsDan Gillmor, founder of the Center for Citizen Media declared in his 2004 bookWe the Media, journalism is evolving from a lecture into a conversation.[9]He also points out that new interactive forms of media have blurred the distinction between producers of news and their audience. In fact, some view the term "audience" to be obsolete in the new world of interactive participatory media. New York University professor and bloggerJay Rosenrefers to them as "the people formerly known as the audience."[10]In "We Media", a treatise on participatory journalism, Shayne Bowman and Chris Willis suggest that the "audience" should be renamed "participants".[1]One of the first projects encompassing participatory media prior to the advent of social media was TheSeptember 11 Photo Project. 
The exhibit was a not-for-profit community based photo project in response to theSeptember 11 attacksand their aftermath. It provided a venue for the display of photographs accompanied by captions by anyone who wished to participate. The Project aimed to preserve a record of the spontaneous outdoor shrines that were being swept away by rain or wind or collected by the city for historical preservation. Some even proposed that "all mass media should be abandoned", extending upon one of the four main arguments given byJerry Manderin his case against television: Corporate domination of television used to mould humans for a commercial environment, and all mass media involve centralized power. Blogger Robin Good wrote, "With participatory media instead of mass media, governments and corporations would be far less able to control information and maintain their legitimacy... To bring about true participatory media (and society), it is also necessary to bring about participatory alternatives to present economic and political structures... In order for withdrawal from using the mass media to become more popular, participatory media must become more attractive: cheaper, more accessible, more fun, more relevant. In such an atmosphere, nonviolent action campaigns against the mass media and in support of participatory media become more feasible."[11] Although 'participatory media' has been viewed uncritically by many writers, others, such asDaniel Palmer, have argued that media participation must also "be understood in relation to defining characteristics of contemporary capitalism – namely its user-focused, customised and individuated orientation."[12]
https://en.wikipedia.org/wiki/Participatory_media
A pseudonym (/ˈsjuːdənɪm/; from Ancient Greek ψευδώνυμος (pseudṓnumos) 'falsely named') or alias (/ˈeɪli.əs/) is a fictitious name that a person assumes for a particular purpose, which differs from their original or true name (orthonym).[1][2] This also differs from a new name that entirely or legally replaces an individual's own. Many pseudonym holders use them because they wish to remain anonymous and maintain privacy, though this may be difficult to achieve as a result of legal issues.[3]

Pseudonyms include stage names, user names, ring names, pen names, aliases, superhero or villain identities and code names, gamertags, and regnal names of emperors, popes, and other monarchs. In some cases, they may also include nicknames. Historically, they have sometimes taken the form of anagrams, Graecisms, and Latinisations.[4]

Pseudonyms should not be confused with new names that replace old ones and become the individual's full-time name. Pseudonyms are "part-time" names, used only in certain contexts: to provide a more clear-cut separation between one's private and professional lives, to showcase or enhance a particular persona, or to hide an individual's real identity, as with writers' pen names, graffiti artists' tags, resistance fighters' or terrorists' noms de guerre, computer hackers' handles, and other online identities for services such as social media, online gaming, and internet forums. Actors, musicians, and other performers sometimes use stage names for a degree of privacy, to better market themselves, and for other reasons.[5]

In some cases, pseudonyms are adopted because they are part of a cultural or organisational tradition; for example, devotional names are used by members of some religious institutes,[6] and "cadre names" are used by Communist party leaders such as Trotsky and Lenin. A collective name or collective pseudonym is one shared by two or more persons, for example, the co-authors of a work, such as Carolyn Keene, Erin Hunter, Ellery Queen, Nicolas Bourbaki, or James S. A. Corey.

The term pseudonym is derived from the Greek word "ψευδώνυμον" (pseudṓnymon),[7] literally "false name", from ψεῦδος (pseûdos) 'lie, falsehood'[8] and ὄνομα (ónoma) "name".[9] The term alias is a Latin adverb meaning "at another time, elsewhere".[10]

Sometimes people change their names in such a manner that the new name becomes permanent and is used by all who know the person. This is not an alias or pseudonym, but in fact a new name. In many countries, including common law countries, a name change can be ratified by a court and become a person's new legal name.

Pseudonymous authors may still have their various identities linked together through stylometric analysis of their writing style. The precise degree of this unmasking ability and its ultimate potential are uncertain, but the privacy risks are expected to grow with improved analytic techniques and text corpora. Authors may practice adversarial stylometry to resist such identification.[11]

Businesspersons of ethnic minorities in some parts of the world are sometimes advised by an employer to use a pseudonym that is common or acceptable in that area when conducting business, to overcome racial or religious bias.[12]

Criminals may use aliases, fictitious business names, and dummy corporations (corporate shells) to hide their identity, or to impersonate other persons or entities in order to commit fraud.
Aliases and fictitious business names used for dummy corporations may become so complex that, in the words ofThe Washington Post, "getting to the truth requires a walk down a bizarre labyrinth" and multiple government agencies may become involved to uncover the truth.[13]Giving a false name to a law enforcement officer is a crime in many jurisdictions. Apen nameis a pseudonym (sometimes a particular form of the real name) adopted by anauthor(or on the author's behalf by their publishers). English usage also includes the French-language phrasenom de plume(which in French literally means "pen name").[14] The concept of pseudonymity has a long history. In ancient literature it was common to write in the name of a famous person, not for concealment or with any intention of deceit; in the New Testament, the second letter of Peter is probably such. A more modern example is all ofThe Federalist Papers, which were signed by Publius, a pseudonym representing the trio ofJames Madison,Alexander Hamilton, andJohn Jay. The papers were written partially in response to severalAnti-Federalist Papers, also written under pseudonyms. As a result of this pseudonymity, historians know that the papers were written by Madison, Hamilton, and Jay, but have not been able to discern with certainty which of the three authored a few of the papers. There are also examples of modern politicians and high-ranking bureaucrats writing under pseudonyms.[15][16] Some female authors have used male pen names, in particular in the 19th century, when writing was a highly male-dominated profession. TheBrontë sistersused pen names for their early work, so as not to reveal their gender (see below) and so that local residents would not suspect that the books related to people of their neighbourhood.Anne Brontë'sThe Tenant of Wildfell Hall(1848) was published under the name Acton Bell, whileCharlotte Brontëused the name Currer Bell forJane Eyre(1847) andShirley(1849), andEmily Brontëadopted Ellis Bell as cover forWuthering Heights(1847). Other examples from the nineteenth-century are novelist Mary Ann Evans (George Eliot) and French writer Amandine Aurore Lucile Dupin (George Sand). Pseudonyms may also be used due to cultural or organization or political prejudices. Similarly, some 20th- and 21st-century male romance novelists – a field dominated by women – have used female pen names.[17]A few examples are Brindle Chase,Peter O'Donnell(as Madeline Brent),Christopher Wood(as Penny Sutton and Rosie Dixon), andHugh C. Rae(as Jessica Sterling).[17] A pen name may be used if a writer's real name is likely to be confused with the name of another writer or notable individual, or if the real name is deemed unsuitable. Authors who write both fiction and non-fiction, or in different genres, may use different pen names to avoid confusing their readers. For example, the romance writerNora Robertswrites mystery novels under the nameJ. D. Robb. In some cases, an author may become better known by his pen name than their real name. Some famous examples of that include Samuel Clemens, writing asMark Twain, Theodor Geisel, better known asDr. Seuss, and Eric Arthur Blair (George Orwell). The British mathematician Charles Dodgson wrote fantasy novels asLewis Carrolland mathematical treatises under his own name. Some authors, such asHarold Robbins, use several literary pseudonyms.[18] Some pen names have been used for long periods, even decades, without the author's true identity being discovered, as withElena FerranteandTorsten Krol. 
Joanne Rowling[19]published theHarry Potterseries as J. K. Rowling. Rowling also published theCormoran Strikeseries of detective novels includingThe Cuckoo's Callingunder the pseudonym Robert Galbraith. Winston Churchillwrote asWinston S. Churchill(from his full surname Spencer Churchill which he did not otherwise use) in an attempt to avoid confusion with anAmerican novelist of the same name. The attempt was not wholly successful – the two are still sometimes confused by booksellers.[20][21] A pen name may be used specifically to hide the identity of the author, as withexposébooks about espionage or crime, or explicit erotic fiction.Erwin von Busseused a pseudonym when he published short stories about sexually charged encounters between men in Germany in 1920.[22]Some prolific authors adopt a pseudonym to disguise the extent of their published output, e. g.Stephen Kingwriting asRichard Bachman. Co-authors may choose to publish under a collective pseudonym, e. g.,P. J. TracyandPerri O'Shaughnessy.Frederic DannayandManfred Leeused the nameEllery Queenas a pen name for their collaborative works and as the name of their main character.[23]Asa Earl Carter, a Southern white segregationist affiliated with the KKK, wrote Western books under a fictional Cherokee persona to imply legitimacy and conceal his history.[24] A famous case in French literature wasRomain Gary. Already a well-known writer, he started publishing books as Émile Ajar to test whether his new books would be well received on their own merits, without the aid of his established reputation. They were: Émile Ajar, like Romain Gary before him, was awarded the prestigiousPrix Goncourtby a jury unaware that they were the same person. Similarly, TV actorRonnie Barkersubmitted comedy material under the name Gerald Wiley. A collective pseudonym may represent an entire publishing house, or any contributor to a long-running series, especially with juvenile literature. Examples includeWatty Piper,Victor Appleton,Erin Hunter, and Kamiru M. Xhan. Another use of a pseudonym in literature is to present a story as being written by the fictional characters in the story. The series of novels known asA Series of Unfortunate Eventsare written byDaniel Handlerunder the pen name ofLemony Snicket, a character in the series. This applies also to some of the several 18th-century English and American writers who used the nameFidelia. Ananonymity pseudonymormultiple-use nameis a name used by many different people to protect anonymity.[25]It is a strategy that has been adopted by many unconnected radical groups and by cultural groups, where the construct of personal identity has been criticised. This has led to the idea of the "open pop star", such asMonty Cantsin.[clarification needed] Pseudonyms andacronymsare often employed in medical research toprotect subjects' identitiesthrough a process known asde-identification. 
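De-identification of the kind mentioned above is often implemented by replacing direct identifiers with pseudonyms derived under a secret key, so that only the key holder (for example, a study's data custodian) can re-link records to people. The sketch below is an illustrative assumption about how such a scheme might look, using a keyed HMAC over a record identifier; it is not a description of any particular study's protocol, and the field names are invented for the example.

```python
import hashlib, hmac, secrets

# Illustrative de-identification sketch: replace a direct identifier with a
# keyed pseudonym. Only whoever holds LINKING_KEY can re-link the records.
LINKING_KEY = secrets.token_bytes(32)   # kept by the data custodian, never published

def pseudonymize(direct_identifier: str) -> str:
    """Deterministic pseudonym: the same subject always maps to the same code."""
    digest = hmac.new(LINKING_KEY, direct_identifier.encode(), hashlib.sha256)
    return "SUBJ-" + digest.hexdigest()[:12]

record = {"name": "Jane Example", "dob": "1970-01-01", "blood_pressure": "128/82"}
deidentified = {
    "subject": pseudonymize(record["name"] + record["dob"]),
    "blood_pressure": record["blood_pressure"],
}
print(deidentified)  # direct identifiers removed, but follow-up visits still link up
```

Because the mapping is deterministic under the key, repeated measurements from the same subject stay linked, while anyone without the key sees only an opaque code.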
Nicolaus Copernicusput forward his theory of heliocentrism in the manuscriptCommentariolusanonymously, in part because of his employment as a law clerk for achurch-government organization.[26] Sophie GermainandWilliam Sealy Gossetused pseudonyms to publish their work in the field of mathematics – Germain, to avoid rampant 19th century academicmisogyny, and Gosset, to avoid revealing brewing practices of his employer, theGuinness Brewery.[27][28] Satoshi Nakamotois a pseudonym of a still unknown author or authors' group behind awhite paperaboutbitcoin.[29][30][31][32] While taking part in military activities, such as fighting in a war, the pseudonym might be known as anom de guerre. It is chosen by the person involved in the activity.[33][34] Individuals using a computeronlinemay adopt or be required to use a form of pseudonym known as a "handle" (a term deriving fromCB slang), "username", "loginname", "avatar", or, sometimes, "screen name", "gamertag", "IGN (InGame (Nick)Name)" or "nickname". On the Internet,pseudonymous remailersusecryptographythat achieves persistent pseudonymity, so that two-way communication can be achieved, and reputations can be established, without linking physicalidentitiesto their respective pseudonyms.Aliasingis the use of multiple names for the same data location. More sophisticated cryptographic systems, such as anonymousdigital credentials, enable users to communicate pseudonymously (i.e., by identifying themselves by means of pseudonyms). In well-defined abuse cases, a designated authority may be able to revoke the pseudonyms and reveal the individuals' real identity.[citation needed] Use of pseudonyms is common among professionaleSportsplayers, despite the fact that many professional games are played onLAN.[35] Pseudonymity has become an important phenomenon on the Internet and other computer networks. In computer networks, pseudonyms possess varying degrees of anonymity,[36]ranging from highly linkablepublic pseudonyms(the link between the pseudonym and a human being is publicly known or easy to discover), potentially linkablenon-public pseudonyms(the link is known to system operators but is not publicly disclosed), andunlinkable pseudonyms(the link is not known to system operators and cannot be determined).[37]For example, trueanonymous remailerenables Internet users to establish unlinkable pseudonyms; those that employ non-public pseudonyms (such as the now-defunctPenet remailer) are calledpseudonymous remailers. The continuum of unlinkability can also be seen, in part, on Wikipedia. Some registered users make no attempt to disguise their real identities (for example, by placing their real name on their user page). The pseudonym of unregistered users is theirIP address, which can, in many cases, easily be linked to them. Other registered users prefer to remain anonymous, and do not disclose identifying information. However, in certain cases,Wikipedia's privacy policypermits system administrators to consult the server logs to determine the IP address, and perhaps the true name, of a registered user. It is possible, in theory, to create an unlinkable Wikipedia pseudonym by using anOpen proxy, a Web server that disguises the user's IP address. But most open proxy addresses are blocked indefinitely due to their frequent use by vandals. 
Additionally, Wikipedia's public record of a user's interest areas, writing style, and argumentative positions may still establish an identifiable pattern.[38][39] System operators (sysops) at sites offering pseudonymity, such as Wikipedia, are not likely to build unlinkability into their systems, as this would render them unable to obtain information about abusive users quickly enough to stop vandalism and other undesirable behaviors. Law enforcement personnel, fearing an avalanche of illegal behavior, are equally unenthusiastic.[40]Still, some users and privacy activists like theAmerican Civil Liberties Unionbelieve that Internet users deserve stronger pseudonymity so that they can protect themselves against identity theft, illegal government surveillance, stalking, and other unwelcome consequences of Internet use (includingunintentional disclosures of their personal informationanddoxing, as discussed in the next section). Their views are supported by laws in some nations (such as Canada) that guarantee citizens a right to speak using a pseudonym.[41]This right does not, however, give citizens the right to demand publication of pseudonymous speech on equipment they do not own. Most Web sites that offer pseudonymity retain information about users. These sites are often susceptible to unauthorized intrusions into their non-public database systems. For example, in 2000, a Welsh teenager obtained information about more than 26,000 credit card accounts, including that of Bill Gates.[42][43]In 2003, VISA and MasterCard announced that intruders obtained information about 5.6 million credit cards.[44]Sites that offer pseudonymity are also vulnerable to confidentiality breaches. In a study of a Web dating service and apseudonymous remailer,University of Cambridgeresearchers discovered that the systems used by these Web sites to protect user data could be easily compromised, even if the pseudonymous channel is protected by strong encryption. Typically, the protected pseudonymous channel exists within a broader framework in which multiple vulnerabilities exist.[45]Pseudonym users should bear in mind that, given the current state of Web security engineering, their true names may be revealed at any time. Pseudonymity is an important component of the reputation systems found in online auction services (such aseBay), discussion sites (such asSlashdot), and collaborative knowledge development sites (such asWikipedia). A pseudonymous user who has acquired a favorable reputation gains the trust of other users. When users believe that they will be rewarded by acquiring a favorable reputation, they are more likely to behave in accordance with the site's policies.[46] If users can obtain new pseudonymous identities freely or at a very low cost, reputation-based systems are vulnerable to whitewashing attacks,[47]also calledserial pseudonymity, in which abusive users continuously discard their old identities and acquire new ones in order to escape the consequences of their behavior: "On the Internet, nobody knows that yesterday you were a dog, and therefore should be in the doghouse today."[48]Users of Internet communities who have been banned only to return with new identities are calledsock puppets. Whitewashing is one specific form of aSybil attackon distributed systems. 
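One way to see why cheap new identities invite the whitewashing attacks just described, and why raising the cost of a fresh pseudonym helps, is to model reputation as something forfeited when an identity is discarded. The sketch below is a toy model under assumed numbers (starting trust, accrual rate, join cost); it is not drawn from any cited system.

```python
# Toy model: discarding a pseudonym is only "free" if new identities are cheap.
JOIN_COST = 0.0          # assumed cost of creating a fresh identity (fee, e-mail check, ...)
NEWCOMER_TRUST = 0.1     # experienced users extend little trust to unknown accounts

class Identity:
    def __init__(self) -> None:
        self.trust = NEWCOMER_TRUST

    def behave_well(self) -> None:
        self.trust = min(1.0, self.trust + 0.1)   # reputation accrues slowly

    def abuse(self) -> None:
        self.trust = 0.0                          # misbehaviour destroys reputation

# A whitewasher abuses an account, discards it, and starts over.
old = Identity()
for _ in range(5):
    old.behave_well()
old.abuse()
fresh = Identity()              # costs only JOIN_COST to obtain
print(old.trust, fresh.trust)   # 0.0 vs 0.1: with JOIN_COST == 0, starting over is painless
```

In this toy model, lowering NEWCOMER_TRUST corresponds to the distrust of new users described next, and raising JOIN_COST corresponds to the fee or e-mail-confirmation proposals discussed below.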
The social cost of cheaply discarded pseudonyms is that experienced users lose confidence in new users,[51]and may subject new users to abuse until they establish a good reputation.[48]System operators may need to remind experienced users that most newcomers are well-intentioned (see, for example,Wikipedia's policy about biting newcomers). Concerns have also been expressed about sock puppets exhausting the supply of easily remembered usernames. In addition a recent research paper demonstrated that people behave in a potentially more aggressive manner when using pseudonyms/nicknames (due to theonline disinhibition effect) as opposed to being completely anonymous.[52][53]In contrast, research by the blog comment hosting serviceDisqusfound pseudonymous users contributed the "highest quantity and quality of comments", where "quality" is based on an aggregate of likes, replies, flags, spam reports, and comment deletions,[49][50]and found that users trusted pseudonyms and real names equally.[54] Researchers at the University of Cambridge showed that pseudonymous comments tended to be more substantive and engaged with other users in explanations, justifications, and chains of argument, and less likely to use insults, than either fully anonymous or real name comments.[55]Proposals have been made to raise the costs of obtaining new identities, such as by charging a small fee or requiring e-mail confirmation. Academic research has proposed cryptographic methods to pseudonymize social media identities[56]or government-issued identities,[57]to accrue and useanonymous reputationin online forums,[58]or to obtain one-per-person and hence less readily-discardable pseudonyms periodically at physical-worldpseudonym parties.[59]Others point out that Wikipedia's success is attributable in large measure to its nearly non-existent initial participation costs. People seeking privacy often use pseudonyms to make appointments and reservations.[60]Those writing toadvice columnsin newspapers and magazines may use pseudonyms.[61]Steve Wozniakused a pseudonym when attending theUniversity of California, Berkeleyafter co-foundingApple Computer, because "[he] knew [he] wouldn't have time enough to be an A+ student."[62] When used by an actor, musician, radio disc jockey, model, or other performer or "show business" personality a pseudonym is called astage name, or, occasionally, aprofessional name, orscreen name. Members of a marginalized ethnic or religious group have often adopted stage names, typically changing their surname or entire name to mask their original background. Stage names are also used to create a more marketable name, as in the case of Creighton Tull Chaney, who adopted the pseudonymLon Chaney Jr., a reference to his famous fatherLon Chaney. Chris CurtisofDeep Purplefame was christened as Christopher Crummey ("crummy" is UK slang for poor quality). In this and similar cases a stage name is adopted simply to avoid an unfortunate pun. Pseudonyms are also used to comply with the rules of performing-artsguilds(Screen Actors Guild(SAG),Writers Guild of America, East(WGA),AFTRA, etc.), which do not allow performers to use an existing name, in order to avoid confusion. For example, these rules required film and television actor Michael Fox to add a middle initial and becomeMichael J. Fox, to avoid being confused with another actor namedMichael Fox. 
This was also true of author and actressFannie Flagg, who shared her real name, Patricia Neal, withanother well-known actress;Rick Copp, who chose the pseudonym name Richard Hollis, which is also the name of a character in the anthology TV seriesFemme Fatales; and British actorStewart Granger, whose real name was James Stewart. The film-making team ofJoel and Ethan Coen, for instance, share credit for editing under the alias Roderick Jaynes.[63] Some stage names are used to conceal a person's identity, such as the pseudonymAlan Smithee, which was used by directors in theDirectors Guild of America(DGA) to remove their name from a film they feel was edited or modified beyond their artistic satisfaction. In theatre, the pseudonymsGeorge or Georgina Spelvin, andWalter Plingeare used to hide the identity of a performer, usually when he or she is "doubling" (playing more than one role in the same play). David Agnewwas a name used by the BBC to conceal the identity of a scriptwriter, such as for theDoctor WhoserialCity of Death, which had three writers, includingDouglas Adams, who was at the time of writing, the show's script editor.[64]In another Doctor Who serial,The Brain of Morbius, writerTerrance Dicksdemanded the removal of his name from the credits saying it could go out under a "bland pseudonym".[citation needed][65]This ended up as "Robin Bland".[65][66] Pornographic actors regularly use stage names.[67][68][69]Sometimes these are referred to asnom de porn(like withnom de plume, this is English-language users creating a French-language phrase to use in English). Having acted in pornographic films can be a serious detriment to finding another career.[70][71] Musicians and singers can use pseudonyms to allow artists to collaborate with artists on other labels while avoiding the need to gain permission from their own labels, such as the artistJerry Samuels, who made songs under Napoleon XIV. Rock singer-guitaristGeorge Harrison, for example, played guitar onCream's song "Badge" using a pseudonym.[72]In classical music, some record companies issued recordings under anom de disquein the 1950s and 1960s to avoid paying royalties. A number of popular budget LPs of piano music were released under the pseudonymPaul Procopolis.[73]Another example is thatPaul McCartneyused his fictional name "Bernerd Webb" forPeter and Gordon's songWoman.[74] Pseudonyms are used as stage names inheavy metalbands, such asTracii GunsinLA Guns,Axl RoseandSlashinGuns N' Roses,Mick MarsinMötley Crüe,Dimebag DarrellinPantera, orC.C. DevilleinPoison. Some such names have additional meanings, like that of Brian Hugh Warner, more commonly known asMarilyn Manson: Marilyn coming fromMarilyn Monroeand Manson from convicted serial killerCharles Manson.Jacoby ShaddixofPapa Roachwent under the name "Coby Dick" during theInfestera. He changed back to his birth name whenlovehatetragedywas released. David Johansen, front man for the hard rock bandNew York Dolls, recorded and performed pop and lounge music under the pseudonym Buster Poindexter in the late 1980s and early 1990s. The music video for Poindexter's debut single,Hot Hot Hot, opens with a monologue from Johansen where he notes his time with the New York Dolls and explains his desire to create more sophisticated music. Ross Bagdasarian Sr., creator ofAlvin and the Chipmunks, wrote original songs, arranged, and produced the records under his real name, but performed on them asDavid Seville. He also wrote songs as Skipper Adams. 
Danish pop pianistBent Fabric, whose full name is Bent Fabricius-Bjerre, wrote his biggest instrumental hit "Alley Cat" as Frank Bjorn. For a time, the musicianPrinceused an unpronounceable "Love Symbol" as a pseudonym ("Prince" is his actual first name rather than a stage name). He wrote the song "Sugar Walls" forSheena Eastonas "Alexander Nevermind" and "Manic Monday" forthe Banglesas "Christopher Tracy". (He also produced albums early in his career as "Jamie Starr"). Many Italian-American singers have used stage names, as their birth names were difficult to pronounce or considered too ethnic for American tastes. Singers changing their names includedDean Martin(born Dino Paul Crocetti),Connie Francis(born Concetta Franconero),Frankie Valli(born Francesco Castelluccio),Tony Bennett(born Anthony Benedetto), andLady Gaga(born Stefani Germanotta). In 2009, the British rock bandFeederbriefly changed their name toRenegadesso they could play a whole show featuring a set list in which 95 per cent of the songs played were from their forthcoming new album of the same name, with none of their singles included. Front manGrant Nicholasfelt that if they played as Feeder, there would be uproar over his not playing any of the singles, so he used the pseudonym as a hint. A series of small shows was played in 2010 at 250- to 1,000-capacity venues, with the plan of not revealing who the band really were and announcing the shows as if they were by a new band. In many cases, hip-hop and rap artists prefer to use pseudonyms that represent some variation of their name, personality, or interests. Examples includeIggy Azalea(her stage name is a combination of her dog's name, Iggy, and her home street inMullumbimby, Azalea Street),Ol' Dirty Bastard(known under at least six aliases),Diddy(previously known at various times as Puffy, P. Diddy, and Puff Daddy),Ludacris,Flo Rida(whose stage name is a tribute to his home state,Florida), British-Jamaican hip-hop artistStefflon Don(real name Stephanie Victoria Allen),LL Cool J, andChingy.Black metalartists also adopt pseudonyms, usually symbolizing dark values, such asNocturno Culto,Gaahl, Abbath, and Silenoz. In punk and hardcore punk, singers and band members often replace real names with tougher-sounding stage names such asSid Viciousof the late 1970s bandSex Pistolsand "Rat" of the early 1980s bandThe Varukersand the 2000s re-formation ofDischarge. The punk rock bandThe Ramoneshad every member take the last name of Ramone.[citation needed] Henry John Deutschendorf Jr., an American singer-songwriter, used the stage nameJohn Denver. The Australian country musician born Robert Lane changed his name toTex Morton. Reginald Kenneth Dwight legally changed his name in 1972 toElton John.
https://en.wikipedia.org/wiki/Pseudonymity
Social software, also known associal appsorsocial platforms, includes communications and interactive tools that are often based on theInternet. Communication tools typically handle capturing, storing and presenting communication, usually written but increasingly including audio and video as well. Interactive tools handle mediated interactions between a pair or group of users. They focus on establishing and maintaining a connection among users, facilitating the mechanics of conversation and talk.[1]Social softwaregenerally refers to software that makes collaborative behaviour, the organisation and moulding of communities, self-expression, social interaction and feedback possible for individuals. Another element of the existing definition ofsocial softwareis that it allows for the structured mediation of opinion between people, in a centralized or self-regulating manner. A notable strength of social software is thatWeb 2.0applicationscan promote co-operation between people and the creation of online communities more than ever before. The opportunities offered by social software are instant connections and opportunities to learn.[2]An additional defining feature of social software is that apart from interaction and collaboration, it aggregates the collective behaviour of its users, allowing not only crowds to learn from an individual but individuals to learn from the crowds as well.[3]Hence, the interactions enabled by social software can be one-to-one, one-to-many, or many-to-many.[2] Aninstant messagingapplication orclientallows one to communicate with another person over a network in real time, in relative privacy. One can add friends to a contact or buddy list by entering the person's email address or messenger ID. If the person is online, their name will typically be listed as available for chat. Clicking on their name will activate a chat window with space to write to the other person, as well as read their reply. Internet Relay Chat(IRC) and otheronline chattechnologies allow users to join and communicate with many people at once, publicly. Users may join a pre-existing chat room or create a new one about any topic. Once inside, users may type messages that everyone else in the room can read, as well as respond to messages from others. Often there is a steady stream of people entering and leaving. Whether in another person's chat room or one they have created themselves, users are generally free to invite others online to join them in that room. The goal of collaborative software, also known as groupware, such asMoodle, Landing pages, Enterprise Architecture, andSharePoint, is to allow users to share data – such as files, photos and text – for the purpose of project work or schoolwork. The intent is to first form a group and then have them collaborate. Clay Shirky defines social software as "software that supports group interaction". Since groupware supports group interaction (once the group is formed), this definition would count groupware as social software. Originally modeled after the real-world paradigm of electronicbulletin boardsfrom before the internet was widely available,internet forumsallow users to post a "topic" for others to review. Other users can view the topic and post their own comments in a linear fashion, one after the other. Most forums are public, allowing anybody to sign up at any time. A few are private, gated communities where new members must pay a small fee to join.
Forums can contain many different categories in ahierarchy, typically organized according to topics and subtopics. Other features include the ability to post images or files or to quote another user's post with special formatting in one's own post. Forums often grow in popularity until they can boast several thousand members posting replies to tens of thousands of topics continuously. There are various standards and claimants for the market leaders of each software category. Various add-ons may be available, including translation and spelling correction software, depending on the expertise of the operators of the bulletin board. In some industry areas, the bulletin board has its own commercially successful achievements: free and paid hardcopy magazines as well as professional and amateur sites. Current successful services have combined new tools with the oldernewsgroupandmailing listparadigm to produce hybrids. Also, as a service catches on, it tends to adopt characteristics and tools of other services that compete. Over time, for example,wiki user pageshave become social portals for individual users and may be used in place of other portal applications. In the past, web pages were only created and edited by web designers who had the technological skills to do so. Currently there are many tools that can assist individuals with web content editing. Wikis allow novices to be on the same level as experienced web designers because wikis provide easy rules and guidelines. Wikis allow all individuals to work collaboratively on web content without having knowledge of any markup languages. A wiki is made up of many content pages that are created by its users. Wiki users are able to create, edit, and link related content pages together. The user community is based on the individuals who want to participate to improve the overall wiki. Participating users are in a democratic community where any user can edit any other user's work.[4] Blogs, short for web logs, are like online journals for a particular person. The owner will post a message periodically, allowing others to comment. Topics often include the owner's daily life, views on politics, or a particular subject important to them. Blogs mean many things to different people, ranging from "online journal" to "easily updated personal website." While these definitions are technically correct, they fail to capture the power of blogs as social software. Beyond being a simple homepage or an online diary, some blogs allow comments on the entries, thereby creating a discussion forum. They also have blogrolls (i.e., links to other blogs which the owner reads or admires) and indicate their social relationship to those other bloggers using theXFNsocial relationship standard.Pingbackandtrackbackallow one blog to notify another blog, creating an inter-blog conversation. Blogs engage readers and can build a virtual community around a particular person or interest. Blogging has also become fashionable in business settings, where companies useenterprise social software. Simultaneous editing of a text or media file by different participants on a network was first demonstrated on research systems as early as the 1970s, but is now practical on a global network. Collaborative real-time editing is now utilized, for example, in film editing and in cloud-based office applications. Many prediction market tools have become available (including somefree software) that make it easy to predict and bet on future events.
This software allows a more formal version of social interaction, although it still qualifies as a robust type of social software. Social network services allow people to come together online around shared interests, hobbies or causes. For example, some sites provide meeting organization facilities for people who practice the same sports. Other services enable business networking and social event meetup. Some largewikishave effectively become social network services by encouraging user pages and portals. Social network search engines are a class of search engines that use social networks to organize, prioritize or filter search results. There are two subclasses of social network search engines: those that useexplicitsocial networks and those that useimplicitsocial networks. Lacking trustworthy explicit information about users' relationships, the implicit type of social network search engine mines the web to infer the topology of online social networks. For example, theNewsTrovesearch engine infers social networks from content - sites, blogs, pods and feeds - by examining, among other things, subject matter, link relationships and grammatical features. Deliberative social networks are webs of discussion and debate for decision-making purposes. They are built for the purpose of establishing sustained relationships between individuals and their government. They rely upon informed opinion and advice that is given with a clear expectation of outcomes. Commercial social networks are designed to support business transactions and to build trust between an individual and a brand. They rely on opinions of a product and ideas for making the product better, enabling customers to participate with the brands in promoting development, service delivery and a better customer experience.[citation needed] A social guide recommends places to visit or contains information about places in the real world, such as coffee shops, restaurants and wifi hotspots. Some web sites allow users to post their list ofbookmarksor favorite websites for others to search and view them. These sites can also be used to meet others through sharing common interests. Additionally, many social bookmarking sites allow users to browse through websites and content shared by other users based on popularity or category. As such, use of social bookmarking sites is an effective tool forsearch engine optimizationandsocial media optimizationforwebmasters.[5] Enterprise bookmarkingis a method of tagging and linking any information using an expanded set of tags to capture knowledge about data. It collects and indexes these tags in a web-infrastructure server residing behind the firewall. Users can share knowledge tags with specified people or groups, shared only inside specific networks, typically within an organization. Social viewingallows multiple users to aggregate from multiple sources and view online videos together in a synchronized viewing experience. Insocial cataloging, much like social bookmarking, the software is aimed towards academics. It allows the user to post a citation for an article found on the internet or a website, an online database like Academic Search Premier or LexisNexis Academic University, a book found in a library catalog and so on. These citations can be organized into predefined categories, or a new category defined by the user through the use oftags. This method allows academics researching or interested in similar areas to connect and share resources.
This application allows visitors to keep track of their collectibles, books, records and DVDs. Users can share their collections. Recommendations can be generated based on user ratings, using statistical computation andnetwork theory. Some sites offer a buddy system, as well as virtual "check outs" of items for borrowing among friends.Folksonomyortaggingis implemented on most of these sites. Social online storage applications allow their users to collaboratively create file archives containing files of any type. Files can be edited either online or from a local computer that has access to the storage system. Such systems can be built upon existing server infrastructure or leverage idle resources by applyingP2Ptechnology. Such systems are social because they allow public file distribution and directfile sharingwith friends. Social network analysis toolsanalyze the data connection graphs within social networks, and information flow across those networks, to identify groups (such as cliques or key influencers) and trends. They fall into two categories: professional research tools, such asMathematica, used by social scientists and statisticians, and consumer tools, such asWolfram Alpha,[6][7]which emphasize ease-of-use. Virtual Worlds are services where it is possible to meet and interact with other people in a virtual environment reminiscent of the real world, hence the termvirtual reality. Typically, the user manipulates anavatarthrough the world, interacting with others usingchatorvoice chat. MMOGs are virtual worlds (also known as virtual environments) that add various sorts of point systems, levels, competition and winners and losers to virtual world simulation. Massively multiplayer online role-playing games (MMORPGs) are a combination ofrole-playing video gamesandmassively multiplayer online games. Another development is worlds that are less game-like or notgamesat all. Games have points, winners and losers. Instead, some virtual worlds are more like social networking services likeMySpaceandFacebook, but with 3D simulation features. Very often a real economy emerges in these worlds, extending the non-physicalservice economywithin the world to service providers in the real world. Experts can design dresses or hairstyles for characters, go on routine missions for them and so on, and be paid in game money to do so. This emergence has resulted in expanding social possibility and also in increased incentives to cheat. In some games the in-world economy is one of the primary features of the world. Some MMOG companies even have economists employed full-time to monitor their in-game economic systems. There are many other applications with social software characteristics that facilitate human connection and collaboration in specific contexts.Social Project Managementande-learningapplications are among these. Various analyst firms have attempted to list and categorize the major social software vendors in the marketplace. Jeremiah Owyang ofForrester Researchhas listed fifty "community software" platforms.[8]Independent analyst firm Real Story Group has categorized 23 social software vendors,[9]which it evaluates head-to-head.[9] Use of social software forpoliticshas also expanded drastically, especially over 2004–2006, to include a wide range of social software, often closely integrated with services likephone treesanddeliberative democracyforums and run by a candidate, party orcaucus.
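Returning to the social network analysis tools described above, the short Python sketch below makes the idea concrete. It uses the networkx graph library (an assumed choice; any graph library would serve) on an invented friendship graph to pull out cliques and rank likely key influencers by betweenness centrality.

import networkx as nx

# Toy social network analysis on invented data: cohesive groups (cliques)
# and "key influencers" (nodes that broker between groups).
g = nx.Graph()
g.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),   # a tight trio
    ("carol", "dave"), ("dave", "erin"), ("erin", "frank"),
    ("carol", "erin"),
])

cliques = [c for c in nx.find_cliques(g) if len(c) >= 3]      # fully connected subgroups
centrality = nx.betweenness_centrality(g)                     # who sits on the most shortest paths
influencers = sorted(centrality, key=centrality.get, reverse=True)[:2]

print("cliques:", cliques)
print("key influencers:", influencers)

The same kind of graph, with directed edges, can also model the asymmetrical "follows" links and recommendation features mentioned in this section.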
Open politics, a variant of open-source governance, combines aspects of thefree softwareandopen contentmovements, promotingdecision-makingmethods claimed to be more open, less antagonistic, and more capable of determining what is in thepublic interestwith respect topublic policyissues. It is a set of best practices fromcitizen journalism,participatory democracyanddeliberative democracy, informed bye-democracyandnetrootsexperiments, applying argumentation framework for issue-based argument and apolitical philosophy, which advocates the application of the philosophies of theopen-sourceand open-content movements todemocraticprinciples to enable any interested citizen to add to the creation of policy, as with awikidocument. Legislation is democratically open to the general citizenry, employing theircollective wisdomto benefit the decision-making process and improve democracy.[10]Open politics encompasses theopen governmentprinciple including those for public participation and engagement, such as the use ofIdeaScale,Google Moderator,Semantic MediaWiki,GitHub, and other software.[11] Collective forms ofonline journalismhave emerged more or less in parallel, in part to keep the political spin in check. Communication tools are generallyasynchronous. By contrast, interactive tools are generallysynchronous, allowing users to communicate in real time (phone, net phone, video chat) or near-synchronous (IM, text chat). Communication involves the content of talk, speech or writing, whereas interaction involves the interest users establish in one another as individuals. In other words, a communication tool may want to make access and searching of text both simple and powerful. An interactive tool may want to present as much of a user's expression, performance and presence as possible. The organization of texts and providing access to archived contributions differs from the facilitation of interpersonal interactions between contributors enough to warrant the distinction in media.[citation needed] Emerging technological capabilities to more widely distribute hosting and support much higher bandwidth in real time are bypassing central content arbiters in some cases.[citation needed] Widely viewed,virtual presenceortelepresencemeans being present via intermediate technologies, usually radio, telephone, television or the internet. In addition, it can denote apparent physical appearance, such as voice, face and body language. More narrowly, the termvirtual presencedenotes presence onWorld Wide Weblocations, which are identified byURLs. People who are browsing a web site are considered to be virtually present at web locations. Virtual presence is a social software in the sense that people meet on the web by chance or intentionally. The ubiquitous (in the web space) communication transfers behavior patterns from the real world andvirtual worldsto the web. Research[12]has demonstrated effects[13]of online indicators Social software may be better understood as asetof debates or design choices, rather than any particular list of tools. Broadly conceived, there are many older media such asmailing listsandUsenetfora that qualify as "social". However, most users of this term restrict its meaning to more recent software genres such asblogsandwikis. 
Others suggest that the termsocial softwareis best used not to refer to a single type of software, but rather to the use of two or more modes ofcomputer-mediated communicationthat result in "community formation."[14]In this view, people form online communities by combining one-to-one (e.g.emailandinstant messaging), one-to-many (Web pagesandblogs) and many-to-many (wikis) communication modes.[15]Some groups schedulereal lifemeetings and so become "real" communities of people that share physical lives. Most definers of social software agree that they seem to facilitate "bottom-up" community development. The system is classless and promotes those with abilities. Membership is voluntary,reputationsare earned by winning thetrustof other members and the community's missions and governance are defined by the members themselves.[16] Communities formed by "bottom-up" processes are often contrasted to the less vibrant collectivities formed by "top-down" software, in which users' roles are determined by an external authority and circumscribed by rigidly conceived software mechanisms (such asaccess rights). Given small differences in policies, the same type of software can produce radically different social outcomes. For instance,Tiki Wiki CMS Groupwarehas a fine-grained permission system of detailed access control so the site administrator can, on a page-by-page basis, determine which groups can view, edit or view the history. By contrast,MediaWikiavoids per-user controls, to keep most pages editable by most users and puts more information about users currently editing in its recent changes pages. The result is that Tiki can be used both by community groups who embrace the social paradigm of MediaWiki and by groups who prefer to have more content control.[citation needed] By design, social software reflects the traits ofsocial networksand is consciously designed to letsocial network analysiswork with a very compatible database. All social software systems create links between users, as persistent as the identity those users choose. Through these persistent links, a permanent community can be formed out of a formerlyepistemic community. The ownership and control of these links - who is linked and who is not - is in the hands of the user. Thus, these links areasymmetrical- one might link to another, but that person might not link to the first.[17]Also, these links are functional, not decorative - one can choose not to receive any content from people you are not connected to, for example.Wikipedia user pagesare a very good example and often contain extremely detailed information about the person who constructed them, including everything from theirmother tongueto theirmoral purchasingpreferences. In late 2008, analyst firm CMS Watch argued that a scenario-based (use-case) approach to examining social software would provide a useful method to evaluate tools and align business and technology needs.[18] Methods and tools for the development of social software are sometimes summarized under the termSocial Software Engineering. However, this term is also used to describe lightweight and community-oriented development practices.[19] Constructivist learning theorists such asVygotsky,LeidnerandJarvenpaahave theorized that the process of expressing knowledge aids its creation and that conversations benefit the refinement of knowledge. Conversationalknowledge managementsoftware fulfills this purpose because conversations, e.g. 
questions and answers, become the source of relevant knowledge in the organization.[20]Conversational technologies are also seen as tools to support both individual knowledge workers and work units.[21] Many advocates of Social Software assume, and even actively argue, that users create actualcommunities. They have adopted the term "online communities" to describe the resulting social structures. Christopher Allen supported this definition and traced the core ideas of the concept back through Computer Supported Cooperative or Collaborative Work (CSCW) in the 1990s, Groupware in the 1970s and 1980s, to Engelbart's "augmentation" (1960s) and Bush's "Memex" (1940s). Although he identifies a "lifecycle" to this terminology that appears to reemerge each decade in a different form, this does not necessarily mean that social software is simply old wine in new bottles.[22] Theaugmentationcapabilities of social software were demonstrated in early internet applications for communication, such as e-mail, newsgroups, groupware, virtual communities, etc. In the current phase of Allen's lifecycle, these collaborative tools add a capability "that aggregates the actions of networked users." This development points to a powerful dynamic that distinguishes social software from other group collaboration tools and marks it as a component of Web 2.0 technology. Capabilities for content and behavior aggregation and redistribution present some of the more important potentials of this medium.[citation needed]In the next phase, academic experiments, Social Constructivism and the open source software movement are expected to be notable influences. Clay Shirkytraces the origin of the term "social software" toEric Drexler's1987 discussion of "hypertext publishing systems" like the subsequent World Wide Web, and how systems of this kind could support software for public critical discussion, collaborative development,group commitment, andcollaborative filteringof content based on voting and rating.[23][24] Social technologies(orconversational technologies) is a term used by organizations (particularlynetwork-centric organizations). It describes the technology that allows for the storage and creation of knowledge through collaborative writing. In 1945,Vannevar Bushdescribed ahypertext-like device called the "memex" in hisThe Atlantic MonthlyarticleAs We May Think.[25] In 1962,Douglas Engelbartpublished his seminal work, "Augmenting Human Intellect: a conceptual framework." In this paper, he proposed using computers to augment training. With his colleagues at the Stanford Research Institute, Engelbart started to develop a computer system to augment human abilities, including learning. Debuting in 1968, the system was simply called the oNLine System (NLS).[26] Also in 1962, J.C.R. Licklider presented the initial concept of a global information network in his series of memos entitled "On-Line Man Computer Communication", written in August 1962. However, the actual development of the internet must be credited toLawrence G. Robertsof MIT,[27]along withLeonard Kleinrock,Robert KahnandVinton Cerf. In 1971, the MITRE Corporation began a year-long demonstration of theTICCITsystem among Reston, Virginia cable television subscribers. Interactive television services included informational and educational demonstrations using a touch-tone telephone.
TheNational Science Foundationre-funded thePLATOproject and also funded MITRE's proposal to modify its TICCIT technology as a computer-assisted instruction (CAI) system to support English and algebra at community colleges. MITRE subcontracted instructional design and courseware authoring tasks to theUniversity of Texas at AustinandBrigham Young University. Also during this year,Ivan Illichdescribed computer-based "learning webs" in his bookDeschooling Society.[28] In 1980,Seymour PapertatMITpublished "Mindstorms: children, computers, and powerful ideas" (New York: Basic Books). This book inspired a number of books and studies on "microworlds" and their impact on learning.BITNETwas founded by a consortium of US and Canadian universities. It allowed universities to connect with each other for educational communications and e-mail. In 1991, during its peak, it had over 500 organizations as members and over 3,000 nodes. Its use declined as theWorld Wide Webgrew. In 1986,Tony Batespublished "The Role of Technology in Distance Education",[29]reflecting on ways forward for e-learning. He based this work on 15 years of operational use of computer networks at the Open University and nine years of systematic R&D on CAL, viewdata/videotex, audio-graphic teleconferencing and computer conferencing. Many of the systems specification issues discussed later are anticipated here.[30] Though prototyped in 1983, the first version of Computer Supported Intentional Learning Environments (CSILE) was installed in 1986 on a small network of Cemcorp ICON computers, at an elementary school in Toronto, Canada. CSILE included text and graphical notes authored by different user levels (students, teachers, others) with attributes such as comments and thinking types which reflect the role of the note in the author's thinking. Thinking types included "my theory", "new information", and "I need to understand." CSILE later evolved intoKnowledge Forum.[31] In 1989,Tim Berners-Lee, then a young British engineer working at CERN in Switzerland, circulated a proposal for an in-house online document sharing system which he described as a "web of notes with links." After the proposal was grudgingly approved by his superiors, he called the new system the World Wide Web. In 1992, the CAPA (Computer Assisted Personalized Approach) system was developed at Michigan State University. It was first used in a 92-student physics class in the fall of 1992. Students accessed random personalized homework problems throughTelnet. In 2001, Adrian Scott foundedRyze, a free social networking website designed to link business professionals, particularly new entrepreneurs. In February 2002, the suvi.org Addressbook started its service. It was the first service that connected people together. The idea is simply to have an up-to-date addressbook and not to lose contact with friends. Other people on the globe had the same idea. Friendster, Facebook and many other services were successors to this. In April 2002, Jonathan Abrams created his profile onFriendster.[32] In 2003,Hi5,LinkedIn,[33]MySpace, andXINGwere launched. In February 2004,Facebookwas launched. In 2004, Levin (in Allen 2004, sec. 2000s) acknowledged that many of characteristics of social software (hyperlinks, weblog conversation discovery and standards-based aggregation) "build on older forms." Nevertheless, "the difference in scale, standardization, simplicity and social incentives provided by web access turn a difference in degree to a difference in kind." 
Key technological factors underlying this difference in kind in the computer, network and information technologies are: filtered hypertext, ubiquitous web/computing, continuous internet connectivity, cheap, efficient and small electronics, content syndication strategies (RSS) and others. Additionally, the convergence of several major information technology systems for voice, data and video into a single system makes for expansive computing environments with far reaching effects. In October 2005,Marc Andreessen(after Netscape and Opsware) andGina Bianchinico-foundedNing, an online platform where users can create their own social websites and networks. Ning now runs more than 275,000 networks, and is a "white label" social networking provider, often compared toKickapps,Brightcove, rSitez and Flux.[34]StudiVZwas launched in November 2005. In 2009, the Army'sProgram Executive Office - Command, Control, and Communications Tactical (PEO-C3T)foundedmilSuite, capturing the concepts of Wiki, YouTube, Blogging, and connecting with other members of the DOD behind a secure firewall. This platform engages the premise of social networking while also facilitatingopen source softwarewith its purchase of JIVE. Social media has been criticized for having negative externalities, such as privacy harms, misinformation and hate speech, and harm to minors.[35]These externalities arise from the nature of the platform, including the ease of sharing content, due to the platforms' need to maximize engagement.[36] Social media has been adopted in the workplace to foster collaboration, but there has also been criticism that privacy concerns, time wasting, and multi-tasking challenges make managers' jobs more difficult, and employee concentration may be reduced.[37] As information supply increases, the average time spent evaluating individual content has to decrease. Eventually, much communication is summarily ignored - based on very arbitrary and rapidheuristicsthat filter out information, for example by category. Bad information crowds out the good - much the way SPAM often crowds out potentially useful unsolicited communications. Cyber bullying is different from conventional bullying. Cyber bullying refers to the threat or abuse of a victim by the use of the internet and electronic devices. Victims of cyber bullying can be targeted over social media, email, or text messages. These attacks are typically aggressive and repetitive in nature. Internet bullies can create multiple email and social media accounts to attack a victim. Free email accounts that are available to end users can lead a bully to use various identities for communication with the victim. Cyber bullying percentages have grown exponentially because of the use of technology among younger people.[38] According to cyber bullying statistics published in 2014, 25 percent of teenagers report that they have experienced repeated bullying via their cell phone or on the internet. 52 percent of young people report being cyber bullied. Embarrassing or damaging photographs taken without the knowledge or consent of the subject have been reported by 11 percent of adolescents and teens. Of the young people who reported cyber bullying incidents against them, 33 percent of them reported that their bullies issued online threats. Often, both bullies and cyber bullies turn to hate speech to victimize their target. One-tenth of all middle school and high school students have been on the receiving end of "hate terms" hurled against them.
55 percent of all teens who use social media have witnessed outright bullying via that medium. 95 percent of teens who witnessed bullying on social media report that others, like them, have ignored the behavior.[39]
https://en.wikipedia.org/wiki/Social_software_in_education
Usenet(/ˈjuːznɛt/),USENET,[1]or, "in full",User's Network,[1]is a worldwide distributed discussion system available on computers. It was developed from the general-purposeUnix-to-Unix Copy(UUCP)dial-upnetwork architecture.Tom TruscottandJim Ellisconceived the idea in 1979, and it was established in 1980.[2]Users read and post messages (calledarticlesorposts, and collectively termednews) to one or more topic categories, known asnewsgroups. Usenet resembles abulletin board system(BBS) in many respects and is the precursor to theInternet forumsthat have become widely used. Discussions arethreaded, as with web forums and BBSes, though posts are stored on the server sequentially.[3][4] A major difference between a BBS or web message board and Usenet is the absence of a central server and dedicated administrator or hosting provider. Usenet is distributed among a large, constantly changing set ofnews serversthatstore and forwardmessages to one another via "news feeds". Individual users may read messages from and post to a local (or simply preferred) news server, which can be operated by anyone, and those posts will automatically be forwarded to any other news serverspeeredwith the local one, while the local server will receive any news its peers have that it currently lacks. This results in the automatic proliferation of content posted by any user on any server to any other user subscribed to the same newsgroups on other servers. As with BBSes and message boards, individual news servers or service providers are under no obligation to carry any specific content, and may refuse to do so for many reasons: a news server might attempt to control the spread of spam by refusing to accept or forward any posts that triggerspam filters, or a server without high-capacity data storage may refuse to carry any newsgroups used primarily forfile sharing, limiting itself to discussion-oriented groups. However, unlike BBSes and web forums, the dispersed nature of Usenet usually permits users who are interested in receiving some content to access it simply by choosing to connect to news servers that carry the feeds they want. Usenet is culturally and historically significant in the networked world, having given rise to, or popularized, many widely recognized concepts and terms such as "FAQ", "flame", "sockpuppet", and "spam".[5]In the early 1990s, shortly before access to theInternetbecame commonly affordable, Usenet connections viaFidoNet's dial-upBBSnetworks made long-distance or worldwide discussions and other communication widespread, not needing a server, just (local) telephone service.[6] The nameUsenetcomes from the term "users' network".[3]The first Usenet group wasNET.general, which quickly becamenet.general.[7]The first commercial spam on Usenet was from immigration attorneysCanter and Siegeladvertising green card services.[7] On the Internet, Usenet is transported via theNetwork News Transfer Protocol(NNTP) onTransmission Control Protocol(TCP)port119 for standard, unprotected connections, and on TCP port 563 forSecure Sockets Layer(SSL) encrypted connections.
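Because NNTP is a simple line-oriented text protocol, the exchange on those ports can be sketched directly with a TLS socket in Python. This is a minimal sketch only: the server name and newsgroup are placeholder assumptions, and a real newsreader would parse status codes and multi-line responses far more carefully.

import socket, ssl

# Minimal NNTP sketch over the encrypted port (563): read the server greeting,
# select a newsgroup, then quit. Host and group are illustrative placeholders.
HOST, GROUP = "news.example.com", "comp.lang.python"

raw = socket.create_connection((HOST, 563))
conn = ssl.create_default_context().wrap_socket(raw, server_hostname=HOST)
reader = conn.makefile("rb")

print(reader.readline().decode().strip())        # e.g. "200 ... posting allowed"
conn.sendall(f"GROUP {GROUP}\r\n".encode())
print(reader.readline().decode().strip())        # "211 <count> <first> <last> <group>"
conn.sendall(b"QUIT\r\n")
print(reader.readline().decode().strip())        # "205 closing connection"
conn.close()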
Usenet was conceived in 1979 and publicly established in 1980, at theUniversity of North Carolina at Chapel HillandDuke University,[8][2]over a decade before theWorld Wide Webwent online (and thus before the general public received access to theInternet), making it one of the oldestcomputer networkcommunications systems still in widespread use. It was originally built on the "poor man'sARPANET", employing UUCP as its transport protocol to offer mail and file transfers, as well as announcements through the newly developednews softwaresuch asA News. The name "Usenet" emphasizes its creators' hope that theUSENIXorganization would take an active role in its operation.[9] The articles that users post to Usenet are organized into topical categories known asnewsgroups, which are themselves logically organized into hierarchies of subjects. For instance,sci.mathandsci.physicsare within thesci.*hierarchy, whiletalk.originsandtalk.atheismare in thetalk.*hierarchy. When a user subscribes to a newsgroup, thenews clientsoftware keeps track of which articles that user has read.[10] In most newsgroups, the majority of the articles are responses to some other article. The set of articles that can be traced to one single non-reply article is called athread. Most modern newsreaders display the articles arranged into threads and subthreads. For example, in the wine-making newsgrouprec.crafts.winemaking, someone might start a thread called "What's the best yeast?", and that thread or conversation might grow to dozens of replies by perhaps six or eight different authors. Over several days, that conversation about different wine yeasts might branch into several sub-threads in a tree-like form. When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers (its "newsfeeds") andexchangesarticles with them. In this fashion, the article is copied fromserver to serverand should eventually reach every server in the network. The laterpeer-to-peernetworks operate on a similar principle, but for Usenet it is normally the sender, rather than the receiver, who initiates transfers. Usenet was designed under conditions in which networks were much slower and not always available. Many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out.[11]This is largely because thePOTSnetwork was typically used for transfers, and phone charges were lower at night. The format and transmission of Usenet articles is similar to that of Internete-mailmessages. The difference between the two is that Usenet articles can be read by any user whose news server carries the group to which the message was posted, as opposed to email messages, which have one or more specific recipients.[12] Today, Usenet has diminished in importance with respect toInternet forums,blogs,mailing listsandsocial media. Usenet differs from such media in several ways: Usenet requires no personal registration with the group concerned; information need not be stored on a remote server; archives are always available; and reading the messages does not require a mail or web client, but a news client. However, it is now possible to read and participate in Usenet newsgroups to a large degree using ordinaryweb browserssince most newsgroups are now copied to several web sites.[13]The groups inalt.binariesare still widely used for data transfer.
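Newsreaders rebuild the tree-like thread structure described above from each article's Message-ID and References headers. The Python sketch below uses invented article data and treats the last referenced ID as the direct parent, which is the usual heuristic; real newsreaders also cope with missing or out-of-order articles.

from collections import defaultdict

# Rebuild a thread tree from (message-id, references) pairs, roughly as a
# newsreader would. Articles with no references start new threads. Data is invented.
articles = {
    "<a1@host>": [],                          # "What's the best yeast?"
    "<a2@host>": ["<a1@host>"],               # first reply
    "<a3@host>": ["<a1@host>", "<a2@host>"],  # reply to the reply
    "<a4@host>": ["<a1@host>"],               # another direct reply
}

children = defaultdict(list)
roots = []
for msg_id, refs in articles.items():
    if refs:
        children[refs[-1]].append(msg_id)     # last reference is the direct parent
    else:
        roots.append(msg_id)

def show(msg_id, depth=0):
    print("  " * depth + msg_id)
    for child in children[msg_id]:
        show(child, depth + 1)

for root in roots:
    show(root)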
Many Internet service providers, and many other Internet sites, operatenews serversfor their users to access. ISPs that do not operate their own servers directly will often offer their users an account from another provider that specifically operates newsfeeds. In early news implementations, the server and newsreader were a single program suite, running on the same system. Today, one uses separate newsreader client software, a program that resembles an email client but accesses Usenet servers instead.[14] Not all ISPs run news servers. A news server is one of the most difficult Internet services to administer because of the large amount of data involved, small customer base (compared to mainstream Internet service), and a disproportionately high volume of customer support incidents (frequently complaining of missing news articles). Some ISPs outsource news operations to specialist sites, which will usually appear to a user as though the ISP itself runs the server. Many of these sites carry a restricted newsfeed, with a limited number of newsgroups. Commonly omitted from such a newsfeed are foreign-language newsgroups and thealt.binarieshierarchy which largely carries software, music, videos and images, and accounts for over 99 percent of article data.[citation needed] There are also Usenet providers that offer a full unrestricted service to users whose ISPs do not carry news, or that carry a restricted feed.[citation needed] Newsgroups are typically accessed withnewsreaders: applications that allow users to read and reply to postings in newsgroups. These applications act asclientsto one or more news servers. Historically, Usenet was associated with theUnixoperating system developed atAT&T, but newsreaders were soon available for all major operating systems.[15]Email client programs andInternet suitesof the late 1990s and 2000s often included an integrated newsreader. Newsgroup enthusiasts often criticized these as inferior to standalone newsreaders that made correct use of Usenet protocols, standards and conventions.[16] With the rise of the World Wide Web (WWW), web front-ends (web2news) have become more common. Web front ends have lowered the technical entry barrier requirements to that of one application and no Usenet NNTP server account. There are numerous websites now offering web based gateways to Usenet groups, although some people have begun filtering messages made by some of the web interfaces for one reason or another.[17][18]Google Groups[19]is one such web based front end and someweb browserscan access Google Groups via news: protocol links directly.[20] A minority of newsgroups are moderated, meaning that messages submitted by readers are not distributed directly to Usenet, but instead are emailed to the moderators of the newsgroup for approval. The moderator is to receive submitted articles, review them, and inject approved articles so that they can be properly propagated worldwide. Articles approved by a moderator must bear the Approved: header line. Moderators ensure that the messages that readers see in the newsgroup conform to the charter of the newsgroup, though they are not required to follow any such rules or guidelines.[21]Typically, moderators are appointed in the proposal for the newsgroup, and changes of moderators follow a succession plan.[22] Historically, amod.*hierarchy existed before Usenet reorganization.[23]Now, moderated newsgroups may appear in any hierarchy, typically with.moderatedadded to the group name. 
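Since Usenet articles use the same RFC 822-style header format as e-mail, the check an injecting server performs for a moderated group can be sketched with Python's standard email module. The article text below is invented purely for illustration, and real servers apply many additional checks.

from email import message_from_string

# A moderated group's injecting server rejects articles lacking an
# Approved: header. Minimal sketch with an invented article.
article = """From: poster@example.org
Newsgroups: misc.test.moderated
Subject: Test post
Approved: moderator@example.org

Body of the article.
"""

msg = message_from_string(article)
if msg["Approved"]:
    print("carry the article; approved by", msg["Approved"])
else:
    print("reject, or forward to the moderator for review")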
Usenet newsgroups in theBig-8 hierarchyare created by proposals called Requests for Discussion, or RFDs. The RFD is required to have the following information: newsgroup name, checkgroups file entry, and moderated or unmoderated status. If the group is to be moderated, then at least one moderator with a valid email address must be provided. Other information which is beneficial but not required includes: a charter, a rationale, and a moderation policy if the group is to be moderated.[24]Discussion of the new newsgroup proposal follows, and is finished with the members of the Big-8 Management Board making the decision, by vote, to either approve or disapprove the new newsgroup. Unmoderated newsgroups form the majority of Usenet newsgroups, and messages submitted by readers for unmoderated newsgroups are immediately propagated for everyone to see. The trade-off between minimal editorial content filtering and propagation speed forms one crux of the Usenet community. One little-cited defense against unwanted propagation is canceling a propagated message, but few Usenet users use this command and some news readers do not offercancellation commands, in part because article storage expires in relatively short order anyway. Almost all unmoderated Usenet groups tend to receive large amounts ofspam.[25][26][27] Usenet is a set of protocols for generating, storing and retrieving news "articles" (which resemble Internet mail messages) and for exchanging them among a readership which is potentially widely distributed. These protocols most commonly use aflooding algorithmwhich propagates copies throughout a network of participating servers. Whenever a message reaches a server, that server forwards the message to all its network neighbors that haven't yet seen the article. Only one copy of a message is stored per server, and each server makes it available on demand to the (typically local) readers able to access that server. The collection of Usenet servers thus has a certainpeer-to-peercharacter in that they share resources by exchanging them. The granularity of exchange, however, is on a different scale than in a modern peer-to-peer system, and this characteristic excludes the actual users of the system, who connect to the news servers with a typical client-server application, much like an email reader. RFC 850 was the first formal specification of the messages exchanged by Usenet servers. It was superseded by RFC 1036 and subsequently by RFC 5536 and RFC 5537. In cases where unsuitable content has been posted, Usenet has support for automated removal of a posting from the whole network by creating a cancel message, although due to a lack of authentication and resultant abuse, this capability is frequently disabled. Copyright holders may still request the manual deletion of infringing material using the provisions ofWorld Intellectual Property Organizationtreaty implementations, such as the United StatesOnline Copyright Infringement Liability Limitation Act, but this would require giving notice to each individual news server administrator. On the Internet, Usenet is transported via theNetwork News Transfer Protocol(NNTP) onTCP Port119 for standard, unprotected connections and on TCP port 563 forSSLencrypted connections. The major set of worldwide newsgroups is contained within nine hierarchies, eight of which are operated under consensual guidelines that govern their administration and naming. The currentBig Eightarecomp.*,humanities.*,misc.*,news.*,rec.*,sci.*,soc.*, andtalk.*. Thealt.*hierarchyis not subject to the procedures controlling groups in the Big Eight, and it is as a result less organized.
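The flooding algorithm described above is straightforward to model: each server remembers the Message-IDs it has already stored and forwards anything new to all of its peers. The Python sketch below uses an invented four-server topology to show how a single post reaches every server while duplicate offers are dropped.

# Toy model of Usenet-style flooding: forward each article to every peer that
# has not yet seen its Message-ID. The topology and the article are invented.
PEERS = {
    "server-a": ["server-b", "server-c"],
    "server-b": ["server-a", "server-d"],
    "server-c": ["server-a", "server-d"],
    "server-d": ["server-b", "server-c"],
}
seen = {name: set() for name in PEERS}

def receive(server, message_id):
    if message_id in seen[server]:     # already stored: drop the duplicate offer
        return
    seen[server].add(message_id)       # keep exactly one local copy
    for peer in PEERS[server]:         # then offer the article to every neighbour
        receive(peer, message_id)

receive("server-a", "<post-1@server-a>")             # a user posts to their local server
print({s: sorted(ids) for s, ids in seen.items()})   # every server now holds the article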
Groups in thealt.*hierarchy tend to be more specialized or specific—for example, there might be a newsgroup under the Big Eight which contains discussions about children's books, but a group in the alt hierarchy may be dedicated to one specific author of children's books.Binariesare posted inalt.binaries.*, making it the largest of all the hierarchies. Many other hierarchies of newsgroups are distributed alongside these. Regional and language-specific hierarchies such asjapan.*,malta.*andne.*serve specific countries and regions such asJapan,MaltaandNew England. Companies and projects administer their own hierarchies to discuss their products and offer community technical support, such as the historicalgnu.*hierarchy from theFree Software Foundation.Microsoftclosed its newsserver in June 2010, and now provides support for its products over web forums.[28]Some users prefer to use the term "Usenet" to refer only to the Big Eight hierarchies; others includealt.*as well. The more general term "netnews" incorporates the entire medium, including private organizational news systems. Informal sub-hierarchy conventions also exist.*.answersare typically moderated cross-post groups for FAQs. An FAQ would be posted within one group and cross-posted to the*.answersgroup at the head of the hierarchy, which is seen by some as a refining of information in that news group. Some subgroups are recursive—to the point of some silliness inalt.*[citation needed]. Usenet was originally created to distribute text content encoded in the 7-bitASCIIcharacter set. With the help of programs that encode 8-bit values into ASCII, it became practical to distributebinary filesas content. Binary posts, due to their size and often-dubious copyright status, were in time restricted to specific newsgroups, making it easier for administrators to allow or disallow the traffic. The oldest widely used encoding method for binary content isuuencode, from theUnixUUCP package. In the late 1980s, Usenet articles were often limited to 60,000 characters, and larger hard limits exist today. Files are therefore commonly split into sections that require reassembly by the reader. With the header extensions and theBase64and Quoted-PrintableMIMEencodings, there was a new generation of binary transport. In practice, MIME has seen increased adoption in text messages, but it is avoided for most binary attachments. Some operating systems withmetadataattached to files use specialized encoding formats. For Mac OS, bothBinHexand special MIME types are used. Other lesser known encoding systems that may have been used at one time wereBTOA,XX encoding,BOO, and USR encoding. In an attempt to reduce file transfer times, an informal file encoding known asyEncwas introduced in 2001. It achieves about a 30% reduction in data transferred by assuming that most 8-bit characters can safely be transferred across the network without first encoding into the 7-bit ASCII space. The most common method of uploading large binary posts to Usenet is to convert the files intoRARarchives and createParchivefiles for them. Parity files are used to recreate missing data when not every part of the files reaches a server. Binary newsgroups can be used to distribute files, and, as of 2022, some remain popular as an alternative toBitTorrentto share and download files.[29] Each news server allocates a certain amount of storage space for content in each newsgroup. When this storage has been filled, each time a new post arrives, old posts are deleted to make room for the new content.
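Returning to the binary encodings discussed above, their practical difference is overhead. The Python sketch below uses only the standard library to compare uuencode and Base64 output sizes on random data, and applies a simplified yEnc-style byte shift (ignoring yEnc's line wrapping, =ybegin/=yend headers, and CRC check, so it is a sketch of the core idea only) to show why yEnc expands 8-bit data far less.

import base64, binascii, os

# Compare size overhead of uuencode, Base64, and a simplified yEnc-style
# transform on random binary data.
data = os.urandom(3000)

uu = b"".join(binascii.b2a_uu(data[i:i + 45]) for i in range(0, len(data), 45))
b64 = base64.b64encode(data)

def yenc_like(raw):
    out = bytearray()
    for byte in raw:
        enc = (byte + 42) % 256
        if enc in (0x00, 0x0A, 0x0D, 0x3D):   # NUL, LF, CR and '=' must be escaped
            out += bytes([0x3D, (enc + 64) % 256])
        else:
            out.append(enc)
    return bytes(out)

for name, blob in [("uuencode", uu), ("base64", b64), ("yEnc-style", yenc_like(data))]:
    overhead = 100 * (len(blob) - len(data)) / len(data)
    print(f"{name:10s} {len(blob):5d} bytes  (+{overhead:.0f}%)")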
If the network bandwidth available to a server is high but the storage allocation is small, it is possible for a huge flood of incoming content to overflow the allocation and push out everything that was in the group before it. The average length of time that posts are able to stay on the server before being deleted is commonly called theretention time. Binary newsgroups are only able to function reliably if there is sufficient storage allocated to handle the number of articles being added. Without sufficient retention time, a reader will be unable to download all parts of the binary before it is flushed out of the group's storage allocation. This was at one time how posting undesired content was countered; the newsgroup would be flooded with random garbage data posts, of sufficient quantity to push out all the content to be suppressed. This has been compensated for by service providers allocating enough storage to retain everything posted each day, including spam floods, without deleting anything. Modern Usenetnews servershave enough capacity to archive years of binary content even when flooded with new data at the maximum daily speed available. In part because of such long retention times, as well as growing Internetuploadspeeds, Usenet is also used by individual users to storebackupdata.[31]While commercial providers offer easier-to-useonline backup services, storing data on Usenet is free of charge (although access to Usenet itself may not be). The method requires the uploader to cede control over the distribution of the data; the files are automatically disseminated to all Usenet providers exchanging data for the news group it is posted to. In general the user must manually select, prepare and upload the data. The data is typicallyencryptedbecause the backup files are available for anyone to download. After the files are uploaded, having multiple copies spread to different geographical regions around the world on differentnews serversdecreases the chances of data loss. Major Usenet service providers have a retention time of more than 12 years.[32]This results in more than 60petabytes(60000terabytes) of storage. When using Usenet for data storage, providers that offer longer retention time are preferred to ensure the data will survive for longer periods of time compared to services with lower retention time. While binary newsgroups can be used to distribute completely legal user-created works,free software, and public domain material, some binary groups are used to illegally distributeproprietary software, copyrighted media, and pornographic material. ISP-operated Usenet servers frequently block access to allalt.binaries.*groups to both reduce network traffic and to avoid related legal issues. Commercial Usenet service providers claim to operate as a telecommunications service, and assert that they are not responsible for the user-posted binary content transferred via their equipment. In the United States, Usenet providers can qualify for protection under theDMCASafe Harbor regulations, provided that they establish a mechanism to comply with and respond to takedown notices from copyright holders.[33] Removal of copyrighted content from the entire Usenet network is a nearly impossible task, due to the rapid propagation between servers and the retention done by each server. Petitioning a Usenet provider for removal only removes it from that one server's retention cache, but not any others.
It is possible for a specialpost cancellationmessage to be distributed to remove it from all servers, but many providers ignorecancelmessages by standard policy, because they can be easily falsified and submitted by anyone.[34][35]For a takedown petition to be most effective across the whole network, it would have to be issued to the origin server to which the content has been posted, before it has been propagated to other servers. Removal of the content at this early stage would prevent further propagation, but with modern high speed links, content can be propagated as fast as it arrives, allowing no time for content review and takedown issuance by copyright holders.[36] Establishing the identity of the person posting illegal content is equally difficult due to the trust-based design of the network. LikeSMTPemail, servers generally assume the header and origin information in a post is true and accurate. However, as in SMTP email, Usenet post headers are easily falsified so as to obscure the true identity and location of the message source.[37]In this manner, Usenet is significantly different from modern P2P services; most P2P users distributing content are typically immediately identifiable to all other users by theirnetwork address, but the origin information for a Usenet posting can be completely obscured and unobtainable once it has propagated past the original server.[38] Also unlike modern P2P services, the identity of the downloaders is hidden from view. On P2P services a downloader is identifiable to all others by their network address. On Usenet, the downloader connects directly to a server, and only the server knows the address of who is connecting to it. Some Usenet providers do keep usage logs, but not all make this logged information casually available to outside parties such as theRecording Industry Association of America.[39][40][41]The existence of anonymising gateways to USENET also complicates the tracing of a posting's true origin. Newsgroup experiments first occurred in 1979.Tom TruscottandJim EllisofDuke Universitycame up with the idea as a replacement for a local announcement program, and established a link with nearbyUniversity of North CarolinausingBourne shellscripts written bySteve Bellovin. The public release ofnewswas in the form of conventional compiledsoftware, written by Steve Daniel and Truscott.[8][43]In 1980, Usenet was connected toARPANETthroughUC Berkeley, which had connections to both Usenet and ARPANET.Mary Ann Horton, the graduate student who set up the connection, began "feeding mailing lists from the ARPANET into Usenet" with the "fa" ("From ARPANET"[44]) identifier.[45]Usenet gained 50 member sites in its first year, includingReed College,University of Oklahoma, andBell Labs,[8]and the number of people using the network increased dramatically; however, it was still a while longer before Usenet users could contribute to ARPANET.[46] UUCP networks spread quickly due to the lower costs involved, and the ability to use existing leased lines,X.25links or evenARPANETconnections. By 1983, thousands of people participated from more than 500 hosts, mostly universities and Bell Labs sites but also a growing number of Unix-related companies; the number of hosts nearly doubled to 940 in 1984.
More than 100 newsgroups existed, more than 20 of them devoted to Unix and other computer-related topics, and at least a third devoted to recreation.[47][8] As the mesh of UUCP hosts rapidly expanded, it became desirable to distinguish the Usenet subset from the overall network. A vote was taken at the 1982 USENIX conference to choose a new name. The name Usenet was retained, but it was established that it only applied to news.[48] The name UUCPNET became the common name for the overall network.

In addition to UUCP, early Usenet traffic was also exchanged with FidoNet and other dial-up BBS networks. By the mid-1990s there were almost 40,000 FidoNet systems in operation, and it was possible to communicate with millions of users around the world, with only local telephone service. Widespread use of Usenet by the BBS community was facilitated by the introduction of UUCP feeds made possible by MS-DOS implementations of UUCP, such as UFGATE (UUCP to FidoNet Gateway), FSUUCP and UUPC. In 1986, RFC 977 provided the Network News Transfer Protocol (NNTP) specification for distribution of Usenet articles over TCP/IP as a more flexible alternative to informal Internet transfers of UUCP traffic. Since the Internet boom of the 1990s, almost all Usenet distribution is over NNTP (a minimal example session is sketched below).[49]

Early versions of Usenet used Duke's A News software, designed for one or two articles a day. Matt Glickman and Horton at Berkeley produced an improved version called B News that could handle the rising traffic (about 50 articles a day as of late 1983).[8] With a message format that offered compatibility with Internet mail and improved performance, it became the dominant server software. C News, developed by Geoff Collyer and Henry Spencer at the University of Toronto, was comparable to B News in features but offered considerably faster processing. In the early 1990s, InterNetNews by Rich Salz was developed to take advantage of the continuous message flow made possible by NNTP, versus the batched store-and-forward design of UUCP. Since that time INN development has continued, and other news server software has also been developed.[50]

Usenet was the first Internet community and the place for many of the most important public developments in the pre-commercial Internet. It was the place where Tim Berners-Lee announced the launch of the World Wide Web,[51] where Linus Torvalds announced the Linux project,[52] and where Marc Andreessen announced the creation of the Mosaic browser and the introduction of the image tag,[53] which revolutionized the World Wide Web by turning it into a graphical medium. Many jargon terms now in common use on the Internet originated or were popularized on Usenet.[54] Likewise, many conflicts which later spread to the rest of the Internet, such as the ongoing difficulties over spamming, began on Usenet.[55]

"Usenet is like a herd of performing elephants with diarrhea. Massive, difficult to redirect, awe-inspiring, entertaining, and a source of mind-boggling amounts of excrement when you least expect it."

Sascha Segan of PC Magazine said in 2008 that "Usenet has been dying for years".[56] Segan said that some people pointed to the Eternal September in 1993, when AOL began offering Usenet access, as the beginning of Usenet's decline. He argues that when users began putting large (non-text) files on Usenet by the late 1990s, Usenet disk space and traffic increased correspondingly. Internet service providers questioned why they needed to host binary articles. AOL discontinued Usenet access in 2005.
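As a rough illustration of the NNTP protocol mentioned above, the sketch below opens a raw TCP connection and issues two basic reader commands from RFC 977/RFC 3977 (GROUP and QUIT). The host name and newsgroup are placeholders; a reachable server that carries the group is assumed.

    # Minimal NNTP reading session over a raw socket (illustrative sketch).
    import socket

    HOST, PORT = "news.example.org", 119      # hypothetical NNTP server
    GROUP = "misc.test"                       # hypothetical newsgroup

    def read_line(reader):
        return reader.readline().decode("utf-8", "replace").rstrip("\r\n")

    with socket.create_connection((HOST, PORT)) as sock:
        reader = sock.makefile("rb")
        print(read_line(reader))                       # greeting, e.g. "200 server ready"

        sock.sendall(f"GROUP {GROUP}\r\n".encode())    # select a newsgroup
        print(read_line(reader))                       # e.g. "211 <count> <first> <last> misc.test"

        sock.sendall(b"QUIT\r\n")
        print(read_line(reader))                       # e.g. "205 goodbye"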
In May 2010, Duke University, whose implementation had started Usenet more than 30 years earlier, decommissioned its Usenet server, citing low usage and rising costs.[57][58] On February 4, 2011, the Usenet news service link at the University of North Carolina at Chapel Hill (news.unc.edu) was retired after 32 years.[citation needed] In response, John Biggs of TechCrunch said "As long as there are folks who think a command line is better than a mouse, the original text-only social network will live on".[59] While there are still some active text newsgroups on Usenet, the system is now primarily used to share large files between users, and the underlying technology of Usenet remains unchanged.[60]

Over time, the amount of Usenet traffic has steadily increased. As of 2010, the number of all text posts made in all Big-8 newsgroups averaged 1,800 new messages every hour, with an average of 25,000 messages per day.[61] However, these averages are minuscule in comparison to the traffic in the binary groups.[62] Much of this traffic increase reflects not an increase in discrete users or newsgroup discussions, but instead the combination of massive automated spamming and an increase in the use of .binaries newsgroups,[61] in which large files are often posted publicly.

In 2008, Verizon Communications, Time Warner Cable and Sprint Nextel signed an agreement with Attorney General of New York Andrew Cuomo to shut down access to sources of child pornography.[65] Time Warner Cable stopped offering access to Usenet. Verizon reduced its access to the "Big 8" hierarchies. Sprint stopped access to the alt.* hierarchies. AT&T stopped access to the alt.binaries.* hierarchies. Cuomo never specifically named Usenet in his anti-child pornography campaign. David DeJean of PC World said that some worry that the ISPs used Cuomo's campaign as an excuse to end portions of Usenet access, as it is costly for the Internet service providers and not in high demand by customers. In 2008, AOL, which no longer offered Usenet access, and the four providers that responded to the Cuomo campaign were the five largest Internet service providers in the United States; they had more than 50% of the U.S.
ISP market share.[66] On June 8, 2009, AT&T announced that it would no longer provide access to the Usenet service as of July 15, 2009.[67]

AOL announced that it would discontinue its integrated Usenet service in early 2005, citing the growing popularity of weblogs, chat forums and on-line conferencing.[68] The AOL community had played a tremendous role in popularizing Usenet some 11 years earlier.[69] In August 2009, Verizon announced that it would discontinue access to Usenet on September 30, 2009.[70][71] JANET announced it would discontinue Usenet service, effective July 31, 2010, citing Google Groups as an alternative.[72] Microsoft announced that it would discontinue support for its public newsgroups (msnews.microsoft.com) from June 1, 2010, offering web forums as an alternative.[73]

Primary reasons cited for the discontinuance of Usenet service by general ISPs include the decline in volume of actual readers due to competition from blogs, along with cost and liability concerns over the increasing proportion of traffic devoted to file-sharing and spam on unused or discontinued groups.[74][75] Some ISPs did not include pressure from Cuomo's campaign against child pornography as one of their reasons for dropping Usenet feeds as part of their services.[76] The ISPs Cox and Atlantic Communications resisted the 2008 trend, but both eventually dropped their respective Usenet feeds in 2010.[77][78][79]

Public archives of Usenet articles have existed since the early days of Usenet, such as the system created by Kenneth Almquist in late 1982.[80][81] Distributed archiving of Usenet posts was suggested in November 1982 by Scott Orshan, who proposed that "Every site should keep all the articles it posted, forever."[82] Also in November of that year, Rick Adams responded to a post asking "Has anyone archived netnews, or does anyone plan to?"[83] by stating that he was "afraid to admit it, but I started archiving most 'useful' newsgroups as of September 18."[84] In June 1982, Gregory G. Woodbury proposed an "automatic access to archives" system that consisted of "automatic answering of fixed-format messages to a special mail recipient on specified machines."[85]

In 1985, two news archiving systems and one RFC were posted to the Internet. The first system, called keepnews, by Mark M. Swenson of the University of Arizona, was described as "a program that attempts to provide a sane way of extracting and keeping information that comes over Usenet." The main advantage of this system was to allow users to mark articles as worthwhile to retain.[86] The second system, YA News Archiver by Chuq Von Rospach, was similar to keepnews, but was "designed to work with much larger archives where the wonderful quadratic search time feature of the Unix ... becomes a real problem."[87] Von Rospach in early 1985 posted a detailed RFC for "archiving and accessing usenet articles with keyword lookup." This RFC described a program that could "generate and maintain an archive of Usenet articles and allow looking up articles based on the article-id, subject lines, or keywords pulled out of the article itself."
Also included was C code for the internal data structure of the system.[88] The desire to have a full-text search index of archived news articles is not new either; one such request was made in April 1991 by Alex Martelli, who sought to "build some sort of keyword index for [the news archive]."[89] In early May, Martelli posted a summary of his responses to Usenet, noting that the "most popular suggestion award must definitely go to 'lq-text' package, by Liam Quin, recently posted in alt.sources."[90] The Alt Sex Stories Text Repository (ASSTR) site archived and indexed erotic and pornographic stories posted to the Usenet group alt.sex.stories.[91]

The archiving of Usenet has led to fears of loss of privacy.[92] An archive simplifies ways to profile people. This has partly been countered with the introduction of the X-No-Archive: Yes header, which is itself controversial.[93]

Web-based archiving of Usenet posts began in March 1995 at Deja News with a very large, searchable database. In February 2001, this database was acquired by Google;[94] Google had begun archiving Usenet posts for itself starting in the second week of August 2000. Google Groups hosts an archive of Usenet posts dating back to May 1981. The earliest posts, which date from May 1981 to June 1991, were donated to Google by the University of Western Ontario with the help of David Wiseman and others,[95] and were originally archived by Henry Spencer at the University of Toronto's Zoology department.[96] The archives for late 1991 through early 1995 were provided by Kent Landfield from the NetNews CD series[97] and Jürgen Christoffel from GMD.[98] Google has been criticized by Vice and Wired contributors, as well as former employees, for its stewardship of the archive and for breaking its search functionality.[99][100][101]

As of January 2024, Google Groups carries a header notice saying: "Effective from 22 February 2024, Google Groups will no longer support new Usenet content. Posting and subscribing will be disallowed, and new content from Usenet peers will not appear. Viewing and searching of historical data will still be supported as it is done today." An explanatory page adds:[102] "In addition, Google's Network News Transfer Protocol (NNTP) server and associated peering will no longer be available, meaning Google will not support serving new Usenet content or exchanging content with other NNTP servers. This change will not impact any non-Usenet content on Google Groups, including all user and organization-created groups."

Usenet had administrators on a server-by-server basis, not as a whole.
https://en.wikipedia.org/wiki/Usenet
An online community, also called an internet community or web community, is a community whose members engage in computer-mediated communication primarily via the Internet. Members of the community usually share common interests. For many, online communities may feel like home, consisting of a "family of invisible friends". Additionally, these "friends" can be connected through gaming communities and gaming companies. An online community can act as an information system where members can post, comment on discussions, give advice or collaborate, and this includes medical advice or specific health care research as well. Commonly, people communicate through social networking sites, chat rooms, forums, email lists, and discussion boards, and communication has advanced into daily social media platforms as well. This includes Facebook, Twitter, Instagram, Discord, etc. People may also join online communities through video games, blogs, and virtual worlds, and could potentially meet new significant others on dating sites or in dating virtual worlds.

The rise in popularity of Web 2.0 websites has allowed for easier real-time communication and connection to others and has facilitated the introduction of new ways for information to be exchanged. Yet these interactions may also lead to a decline in social interaction or encourage more negative and derogatory ways of speaking to others; relatedly, forms of racism, bullying, sexist comments, etc. that surface in online communities may also be investigated and linked to them. One scholarly definition of an online community is this: "a virtual community is defined as an aggregation of individuals or business partners who interact around a shared interest, where the interaction is at least partially supported or mediated by technology (or both) and guided by some protocols or norms".[1]

Digital communities (web communities, but also communities that are formed over, e.g., Xbox and PlayStation) provide a platform for a range of services to users. It has been argued that they can fulfill Maslow's hierarchy of needs.[2] They allow for social interaction across the world between people of different cultures who might not otherwise have met, with offline meetings also becoming more common. Another key use of web communities is access to and the exchange of information. With communities for even very small niches, it is possible to find people also interested in a topic and to seek and share information on a subject where there are not such people available in the immediate area offline. This has led to a range of popular sites based on areas such as health, employment, finances and education. Online communities can be vital for companies' marketing and outreach.[3]

Unexpected and innovative uses of web communities have also emerged, with social networks being used in conflicts to alert citizens of impending attacks.[4] The UN sees the web, and specifically social networks, as an important tool in conflicts and emergencies.[5][6] Web communities have grown in popularity; as of October 2014, 6 of the 20 most-trafficked websites were community-based sites.[7] The amount of traffic to such websites is expected to increase as a growing proportion of the world's population attains Internet access.

The idea of a community is not a new concept. On the telephone, in ham radio and in the online world, social interactions no longer have to be based on proximity; instead they can literally be with anyone anywhere.[8] The study of communities has had to adapt along with the new technologies.
Many researchers have used ethnography to attempt to understand what people do in online spaces, how they express themselves, what motivates them, how they govern themselves, what attracts them, and why some people prefer to observe rather than participate.[8] Online communities can congregate around a shared interest and can be spread across multiple websites.[9]

Online communities typically establish a set of values, sometimes known collectively as netiquette or Internet etiquette, as they grow. These values may include: opportunity, education, culture, democracy, human services, equality within the economy, information, sustainability, and communication.[11] An online community's purpose is to serve as a common ground for people who share the same interests.[11] Online communities may be used as calendars to keep up with events such as upcoming gatherings or sporting events. They also form around activities and hobbies. Many online communities relating to health care help inform, advise, and support patients and their families. Students can take classes online and may communicate with their professors and peers online. Businesses have also started using online communities to communicate with their customers about their products and services as well as to share information about the business. Other online communities allow a wide variety of professionals to come together to share thoughts, ideas and theories.[11][12]

Fandom is an example of what online communities can evolve into. Online communities have grown in influence in "shaping the phenomena around which they organize", according to Nancy K. Baym's work.[9] She says that: "More than any other commercial sector, the popular culture industry relies on online communities to publicize and provide testimonials for their products." The strength of the online community's power is displayed through the season 3 premiere of the BBC's Sherlock. Online activity by fans seems to have had a noticeable influence on the plot and direction of the season-opening episode. Mark Lawson of The Guardian recounts how fans have, to a degree, directed the outcome of the events of the episode. He says that "Sherlock has always been one of the most web-aware shows, among the first to find a satisfying way of representing electronic chatter on-screen."[13] Fan communities on platforms like Twitter, Instagram, and Reddit, formed around sports, actors, and musicians, have become powerful communities both culturally and politically.[14]

Discussions where members may post their feedback are essential in the development of an online community.[15] Online communities may encourage individuals to come together to teach and learn from one another. They may encourage learners to discuss and learn about real-world problems and situations, as well as to focus on such things as teamwork, collaborative thinking and personal experiences.[16][17]

Blogs are among the major platforms on which online communities form. Blogging practices include microblogging, where the amount of information in a single element is smaller, and liveblogging, in which an ongoing event is blogged about in real time. The ease and convenience of blogging has allowed for its growth. Major blogging platforms include Twitter and Tumblr, which combine social media and blogging, as well as platforms such as WordPress, which allow content to be hosted on their own servers but also permit users to download, install, and modify the software on their own servers.
As of October 2014, 23.1% of the top 10 million websites are either hosted on or run WordPress.[18]

Internet forums, sometimes called bulletin boards, are websites which allow users to post topics, also known as threads, for discussion, with other users able to reply, creating a conversation. Forums follow a hierarchical structure of categories, with many popular forum software platforms categorising forums depending on their purpose and allowing forum administrators to create subforums within their platform. With time, more advanced features have been added to forums; the ability to attach files, embed YouTube videos, and send private messages is now commonplace. As of October 2014, the largest forum, Gaia Online, contained over 2 billion posts.[19] Members are commonly assigned to user groups which control their access rights and permissions.

Social networks are platforms allowing users to set up their own profile and build connections with like-minded people who pursue similar interests through interaction. The first traceable example of such a site is SixDegrees.com, set up in 1997, which included a friends list and the ability to send messages to members linked to friends and to see other users' associations. For much of the 21st century, the popularity of such networks has been growing. Friendster was the first social network to gain mass media attention; however, by 2004 it had been overtaken in popularity by Myspace, which in turn was later overtaken by Facebook. In 2013, Facebook attracted 1.23 billion monthly users, rising from 145 million in 2008.[20] Facebook was the first social network to surpass 1 billion registered accounts, and by 2020 had more than 2.7 billion active users.[21] Meta Platforms, the owner of Facebook, also owns three other leading platforms for online communities: Instagram, WhatsApp, and Facebook Messenger. Most top-ranked social networks originate in the United States, but European services like VK, the Japanese platform LINE, and the Chinese social networks WeChat and QQ, or the video-sharing app Douyin (internationally known as TikTok), have also garnered appeal in their respective regions.[22] Current trends focus on the increased use of mobile devices when using social networks. Statistics from Statista show that, in 2013, 97.9 million users accessed social networks from a mobile device in the United States.[23]

Researchers and organizations have worked to classify types of online community and to characterise their structure. For example, it is important to know the security, access, and technology requirements of a given type of community, as it may evolve from an open to a private and regulated forum.[17] It has been argued that the technical aspects of online communities, such as whether pages can be created and edited by the general user base (as is the case with wikis) or only by certain users (as is the case with most blogs), can place online communities into stylistic categories. Another approach argues that "online community" is a metaphor and that contributors actively negotiate the meaning of the term, including values and social norms.[24]

Some research has looked at the users of online communities. Amy Jo Kim has classified the rituals and stages of online community interaction and called it the "membership life cycle".[25] Clay Shirky talks about communities of practice, whose members collaborate and help each other in order to make something better or improve a certain skill.
What makes these communities bond is "love" of something, as demonstrated by members who go out of their way to help without any financial interest.[26][27] Campbell et al. developed a character theory for analyzing online communities, based on tribal typologies, and in the communities they investigated they identified three character types.[28][29]

Online communities have also forced retail firms to change their business strategies. Companies have to network more, adjust computations, and alter their organizational structures. This leads to changes in a company's communications with its manufacturers, including the information shared and made accessible for further productivity and profits. Because consumers and customers in all fields are becoming accustomed to more interaction and engagement online, adjustments must be considered in order to keep audiences intrigued.[17]

Online communities have been characterized as "virtual settlements" that have the following four requirements: interactivity, a variety of communicators, a common public place where members can meet and interact, and sustained membership over time. Based on these considerations, it can be said that microblogs such as Twitter can be classified as online communities.[30]

Dorine C. Andrews argues, in the article "Audience-Specific Online Community Design", that there are three parts to building an online community: starting the online community, encouraging early online interaction, and moving to a self-sustaining interactive environment.[31] When starting an online community, it may be effective to create webpages that appeal to specific interests. Online communities with clear topics and easy access tend to be most effective. In order to gain early interaction by members, privacy guarantees and content discussions are very important.[31] Successful online communities tend to be able to function self-sufficiently.[31]

There are two major types of participation in online communities: public participation and non-public participation, also called lurking. Lurkers are participants who join a virtual community but do not contribute. In contrast, public participants, or posters, are those who join virtual communities and openly express their beliefs and opinions. Both lurkers and posters frequently enter communities to find answers and to gather general information. For example, there are several online communities dedicated to technology. In these communities, posters are generally experts in the field who can offer technological insight and answer questions, while lurkers tend to be technological novices who use the communities to find answers and to learn.[32]

In general, virtual community participation is influenced by how participants view themselves in society as well as by norms, both of society and of the online community.[33] Participants also join online communities for friendship and support. In a sense, virtual communities may fill social voids in participants' offline lives.[34] Sociologist Barry Wellman presents the idea of "glocalization" – the Internet's ability to extend participants' social connections to people around the world while also aiding them in further engagement with their local communities.[35]

Although online societies differ in content from real society, the roles people assume in their online communities are quite similar.
Elliot Volkman[36] points out several categories of people that play a role in the cycle of social networking. An article entitled "The real value of on-line communities", written by A. Armstrong and John Hagel of the Harvard Business Review,[37] addresses a handful of elements that are key to the growth of an online community and its success in drawing in members. In this example, the article focuses specifically on online communities related to business, but its points can be transferred and can apply to any online community. The article addresses four main categories of business-based online communities, but states that a truly successful one will combine qualities of each of them: communities of transaction, communities of interest, communities of fantasy, and communities of relationship. Anubhav Choudhury provides descriptions of these four types of community.[38]

Amy Jo Kim's membership lifecycle theory states that members of online communities begin their life in a community as visitors, or lurkers. After breaking through a barrier, people become novices and participate in community life. After contributing for a sustained period of time, they become regulars. If they break through another barrier they become leaders, and once they have contributed to the community for some time they become elders. This life cycle can be applied to many virtual communities, such as bulletin board systems, blogs, mailing lists, and wiki-based communities like Wikipedia. A similar model can be found in the works of Lave and Wenger, who illustrate a cycle of how users become incorporated into virtual communities using the principles of legitimate peripheral participation. They suggest five types of trajectories within a learning community,[39] and these learning trajectories can be correlated with Web 2.0 community participation using the example of YouTube.

Newcomers are important for online communities. Online communities rely on volunteers' contributions, and most online communities face a high turnover rate as one of their main challenges. For example, only a minority of Wikipedia users contribute regularly, and only a minority of those contributors participate in community discussions. In one study conducted by Carnegie Mellon University, researchers found that "more than two-thirds (68%) of newcomers to Usenet groups were never seen again after their first post".[40] These facts reflect the point that recruiting and retaining new members has become a crucial problem for online communities: the communities will eventually wither away without replacing members who leave.

Newcomers, as new members of online communities, often face many barriers when contributing to a project, and those barriers might lead them to give up the project or even leave the community. By conducting a systematic literature review of 20 primary studies on the barriers faced by newcomers when contributing to open source software projects, Steinmacher et al. identified 15 different barriers and classified them into five categories.[41] Because of such barriers, it is necessary that online communities engage newcomers and help them adjust to the new environment. From the communities' side, newcomers can be both beneficial and harmful. On the one hand, newcomers can bring online communities innovative ideas and resources.
On the other hand, they can also harm communities through misbehavior caused by their unfamiliarity with community norms. Kraut et al. defined five basic issues faced by online communities when dealing with newcomers, and proposed several design claims for each problem, in their book Building Successful Online Communities.[42]

Successful online communities motivate online participation. Methods of motivating participation in these communities have been investigated in several studies. There are many persuasive factors that draw users into online communities. Peer-to-peer systems and social networking sites rely heavily on member contribution. Users' underlying motivations to involve themselves in these communities have been linked to some persuasion theories of sociology. One of the greatest attractions of online communities is the sense of connection users build among members. Participation and contribution are influenced when members of an online community are aware of their global audience.[45]

The majority of people learn by example and often follow others, especially when it comes to participation.[46] Individuals are reserved about contributing to an online community for many reasons, including but not limited to a fear of criticism or inaccuracy. Users may withhold information that they do not believe is particularly interesting, relevant, or truthful. In order to challenge these contribution barriers, producers of these sites are responsible for developing knowledge-based and foundation-based trust among the community.[47] Users' perception of their audience is another reason that users participate in online communities. Results showed that users usually underestimate the size of their audience in online communities; social media users guess that their audience is 27% of its real size. Regardless of this underestimation, audience size has been shown to affect users' self-presentation and content production, which implies a higher level of participation.[48]

There are two major types of virtual online communities (VOCs): dependent and self-sustained VOCs. Dependent VOCs are those whose members use the virtual community as an extension of themselves;[clarification needed] they interact with people they know. Self-sustained VOCs are communities where relationships between participating members are formed and maintained through encounters in the online community.[49] For all VOCs, there is the issue of creating identity and reputation in the community. People can create whatever identity they would like through their interactions with other members. The username is what members identify each other by, but it says very little about the person behind it. The main features of online communities that attract people are a shared communication environment, relationships formed and nurtured, a sense of belonging to a group, the internal structure of the group, and a common space shared by people with similar ideas and interests. The three most critical issues are belonging, identity, and interest. For an online community to flourish there needs to be consistent participation, interest, and motivation.[50]

Research conducted by Helen Wang applied the Technology Acceptance Model to online community participation.[51] Internet self-efficacy positively predicted perceived ease of use. Research found that participants' beliefs in their abilities to use the internet and web-based tools determined how much effort was expected. Community environment positively predicted perceived ease of use and usefulness.
Intrinsic motivation positively predicted perceived ease of use, usefulness, and actual use. The technology acceptance model positively predicts how likely it is that an individual will participate in an online community.

Establishing a relationship between the consumer and a seller has become a new science with the emergence of online communities. It is a new market to be tapped by companies, and doing so requires an understanding of the relationships built in online communities. Online communities gather people around common interests, and these common interests can include brands, products, and services.[52]: 50 Companies not only have a chance to reach a new group of consumers in online communities, but also to tap into information about those consumers. Companies have a chance to learn about consumers in an environment in which they feel a certain amount of anonymity and are thus more open to showing what they really want or are looking for.

In order to establish a relationship with the consumer, a company must seek a way to identify how individuals interact with the community. This is done by understanding the relationships an individual has with an online community. There are six identifiable relationship statuses: considered status, committed status, inactive status, faded status, recognized status, and unrecognized status.[52]: 56 Unrecognized status means the consumer is unaware of the online community or has not decided the community to be useful. Recognized status is where a person is aware of the community but is not entirely involved. Considered status is when a person begins their involvement with the site; usage at this stage is still very sporadic. Committed status is when a relationship between a person and an online community is established and the person gets fully involved with the community. Inactive status is when an online community has no relevance to a person. Faded status is when a person has begun to fade away from a site.[52]: 57 It is important to be able to recognize which group or status the consumer holds, because it might help determine which approach to use.

Companies not only need to understand how a consumer functions within an online community; a company also "should understand the communality of an online community".[53]: 401 This means a company must understand the dynamic and structure of the online community to be able to establish a relationship with the consumer. Online communities have cultures of their own, and to be able to establish a commercial relationship, or even engage at all, one must understand the community's values and proprieties. It has even been proved beneficial to treat online commercial relationships more as friendships than as business transactions. Because of the smokescreen of anonymity, online engagement allows a person to interact socially with strangers in a much more personal way.[52]: 69 This personal connection the consumer feels translates to how they want to establish relationships online. They separate what is commercial or spam from what is relational. Relational becomes what they associate with human interaction, while commercial is what they associate with digital or non-human interaction. Thus the online community should not be viewed as "merely a sales channel".[54]: 537 Instead it should be viewed as a network for establishing interpersonal communications with the consumer.
Most online communities grow slowly at first, due in part to the fact that the strength of motivation for contributing is usually proportional to the size of the community. As the size of the potential audience increases, so does the attraction of writing and contributing. This, coupled with the fact that organizational culture does not change overnight, means creators can expect slow progress at first with a new virtual community. As more people begin to participate, however, the aforementioned motivations will increase, creating a virtuous cycle in which more participation begets more participation. Community adoption can be forecast with the Bass diffusion model, originally conceived by Frank Bass to describe the process by which new products get adopted as an interaction between innovative early adopters and those who follow them (a minimal illustrative sketch of the model appears below).

Online learning is a form of online community. The sites are designed to educate. Colleges and universities may offer many of their classes online to their students; this allows each student to take the class at his or her own pace. According to an article published in volume 21, issue 5 of the European Management Journal, titled "Learning in Online Forums",[55] researchers conducted a series of studies about online learning. They found that while good online learning is difficult to plan, it is quite conducive to educational learning. Online learning can bring together a diverse group of people, and although it is asynchronous learning, if the forum is set up using all the best tools and strategies, it can be very effective. Another study, published[56] in volume 55, issue 1 of Computers and Education, found results supporting the findings of the article mentioned above. The researchers found that motivation, enjoyment, and team contributions to learning outcomes enhanced students' learning and that the students felt they learned well with it. A study published in the same journal[57] looks at how social networking can foster individual well-being and develop skills which can improve the learning experience. These articles look at a variety of different types of online learning. They suggest that online learning can be quite productive and educational if created and maintained properly.

One feature of online communities is that they are not constrained by time, thereby giving members the ability to move through periods of high to low activity over a period of time. This dynamic nature maintains a freshness and variety that traditional methods of learning might not have been able to provide.[citation needed] It appears that online communities such as Wikipedia have become a source of professional learning.[citation needed] They are an active learning environment in which learners converse and inquire. In a study exclusive to teachers in online communities, results showed that membership in online communities provided teachers with a rich source of professional learning that satisfied each member of the community.[citation needed]

Saurabh Tyagi[58] describes several benefits of online community learning; these terms are taken from Edudemic, a site about teaching and learning.
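For readers unfamiliar with the Bass diffusion model referenced above, the following sketch evaluates its standard closed-form cumulative adoption curve; the coefficient values for innovation (p) and imitation (q) are arbitrary illustrative choices, not estimates for any real community.

    # Closed-form Bass diffusion curve: fraction of eventual members who have
    # joined by time t, given innovation coefficient p and imitation coefficient q.
    import math

    def bass_cumulative_adoption(t: float, p: float, q: float) -> float:
        e = math.exp(-(p + q) * t)
        return (1.0 - e) / (1.0 + (q / p) * e)

    p, q = 0.01, 0.4   # assumed illustrative coefficients, not empirical estimates
    for t in range(0, 25, 4):
        print(t, round(bass_cumulative_adoption(t, p, q), 3))   # slow start, then rapid growth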
The article "How to Build Effective Online Learning Communities"[58]provides background information about online communities as well as how to incorporate learning within an online community.[58] One of the greatest attractions towards online communities and the role assigned to an online community, is the sense of connection in which users are able to build among other members and associates. Thus, it is typical to reference online communities when regarding the 'gaming' universe. The online video game industry has embraced the concepts of cooperative and diverse gaming in order to provide players with a sense of community or togetherness. Video games have long been seen as a solo endeavor – as a way to escape reality and leave social interaction at the door. Yet, online community networks or talk pages have now allowed forms of connection with other users. These connections offer forms of aid in the games themselves, as well as an overall collaboration and interaction in the network space. For example, a study conducted by Pontus Strimling and Seth Frey found that players would generate their own models of fair "loot" distribution through community interaction if they felt that the model provided by the game itself was insufficient.[59] The popularity of competitive the online multiplayer games has now even promoted informal social interaction through the use of the recognized communities.[60][61] As with other online communities, problems do arise when approaching the usages of online communities in the gaming culture, as well as those who are utilizing the spaces for their own agendas. "Gaming culture" offers individuals personal experiences, development of creativity, as well an assemblance of togetherness that potentially resembles formalized social communication techniques. On the other hand, these communities could also include toxicity, online disinhibition, and cyberbullying. Online health communitiesis one example of online communities which is heavily used by internet users.[64][65][66][67]A key benefit of online health communities is providing user access to other users with similar problems or experiences which has a significant impact on the lives of their members.[64]Through people participation, online health communities will be able to offer patients opportunities for emotional support[68][69]and also will provide them access to experience-based information about particular problem or possible treatment strategies. Even in some studies, it is shown that users find experienced-based information more relevant than information which was prescribed by professionals.[70][71][72]Moreover, allowing patients to collaborate anonymously in some of online health communities suggests users a non-judgmental environment to share their problems, knowledge, and experiences.[73]However, recent research has indicated that socioeconomic differences between patients may result in feelings of alienation or exclusion within these communities, even despite attempts to make the environments inclusive.[74] Online communities are relatively new and unexplored areas. They promote a whole new community that prior to the Internet was not available. Although they can promote a vast array of positive qualities, such as relationships without regard to race, religion, gender, or geography,[75]they can also lead to multiple problems. 
The theory of risk perception, an uncertainty about participating in an online community, is quite common, particularly in certain online circumstances. Clay Shirky explains one of these problems as being like two hula hoops. With the emergence of online communities there is a "real life" hula hoop and an "online life" one. These two hoops used to be completely separate but have now swung together and overlap. The problem with this overlap is that there is no distinction anymore between face-to-face interactions and virtual ones; they are one and the same. Shirky illustrates this by describing a meeting: a group of people will sit in a meeting but will all be connected to a virtual world at the same time, using online communities such as a wiki.[77]

A further problem is identity formation within the ambiguous real-virtual life mix. Identity formation in the real world consisted of "one body, one identity",[citation needed] but online communities allow you to create "as many electronic personae" as you please. This can lead to identity deception. Claiming to be someone you are not can be problematic with other online community users and for yourself. Creating a false identity can cause confusion and ambivalence about which identity is true.

A lack of trust regarding personal or professional information is problematic with questions of identity or information reciprocity. Often, if information is given to another user of an online community, one expects equal information to be shared back. However, this may not be the case, or the other user may use the information given in harmful ways.[78] The construction of an individual's identity within an online community requires self-presentation. Self-presentation is the act of "writing the self into being", in which a person's identity is formed by what that person says, does, or shows. This also poses a potential problem, as such self-representation is open for interpretation as well as misinterpretation. While an individual's online identity can be entirely constructed with a few of his or her own sentences, perceptions of this identity can be entirely misguided and incorrect. Online communities present the problems of preoccupation, distraction, detachment, and desensitization to an individual, although online support groups now exist. Online communities do present potential risks, and users must remember that just because an online community feels safe does not mean it necessarily is.[35]

Cyberbullying is the "use of long-term aggressive, intentional, repetitive acts by one or more individuals, using electronic means, against an almost powerless victim",[79] and it has increased in frequency alongside the continued growth of web communities, with an Open University study finding that 38% of young people had experienced or witnessed cyberbullying.[80] It has received significant media attention due to high-profile incidents such as the death of Amanda Todd,[81] who before her death detailed her ordeal on YouTube.[82] A key feature of such bullying is that it allows victims to be harassed at all times, something not typically possible with physical bullying.
This has forced governments and other organisations to change their typical approach to bullying, with the UK Department for Education now issuing advice to schools on how to deal with cyberbullying cases.[83]

The most common problem in online communities tends to be online harassment, meaning threatening or offensive content aimed at known friends or strangers through online technology. Where such posting is done "for the lulz" (that is, for the fun of it), it is known as trolling.[84] Sometimes trolling is done in order to harm others for the gratification of the person posting. The primary motivation for such posters, known in character theory as "snerts", is the sense of power and exposure it gives them.[85] Online harassment tends to affect adolescents the most due to their risk-taking behavior and decision-making processes. One notable example is that of Natasha MacBryde, who was tormented by Sean Duffy, who was later prosecuted.[86] In 2010, Alexis Pilkington, a 17-year-old New Yorker, committed suicide. Trolls pounced on her tribute page, posting insensitive and hurtful images of nooses and other suicidal symbolism. Four years prior to that, an 18-year-old died in a car crash in California. Trolls took images of her disfigured body they found on the internet and used them to torture the girl's grieving parents.[87] Psychological research has shown that anonymity increases unethical behavior through what is called the online disinhibition effect.

Many websites and online communities have attempted to combat trolling. There has not been a single effective method to discourage anonymity, and arguments exist claiming that removing Internet users' anonymity is an intrusion of their privacy and violates their right to free speech. Julie Zhou, writing for the New York Times, comments that "There's no way to truly rid the Internet of anonymity. After all, names and email addresses can be faked. And in any case many commenters write things that are rude or inflammatory under their real names". Thus, some trolls do not even bother to hide their actions and take pride in their behavior.[87] The rate of reported online harassment has been increasing: there was a 50% increase in accounts of youth online harassment from 2000 to 2005.[88]

Another form of harassment prevalent online is called flaming. According to a study conducted by Peter J. Moor, flaming is defined as displaying hostility by insulting, swearing or using otherwise offensive language.[89] Flaming can be done in either a group-style format (the comments section on YouTube) or in a one-on-one format (private messaging on Facebook). Several studies have shown that flaming is more apparent in computer-mediated conversation than in face-to-face interaction.[90] For example, a study conducted by Kiesler et al. found that people who met online judged each other more harshly than those who met face to face.[91] The study goes on to say that the people who communicated by computer "felt and acted as though the setting was more impersonal, and their behavior was more uninhibited. These findings suggest that computer-mediated communication ... elicits asocial or unregulated behavior".[92]

Unregulated communities are established when online users communicate on a site although there are no mutual terms of usage. There is no regulator. Online interest groups or anonymous blogs are examples of unregulated communities.[17]

Cyberbullying is also prominent online.
Cyberbullying is defined as willful and repeated harm inflicted on another through information technology mediums.[93] Cyberbullying victimization has ascended to the forefront of the public agenda after a number of news stories came out on the topic.[94] For example, Rutgers freshman Tyler Clementi committed suicide in 2010 after his roommate secretly filmed him in an intimate encounter and then streamed the video over the Internet.[95] Numerous states, such as New Jersey, have created and passed laws that do not allow any sort of harassment on, near, or off school grounds that disrupts or interferes with the operation of the school or the rights of other students.[96] In general, sexual and gender-based harassment online has been deemed a significant problem.[97] Trolling and cyberbullying in online communities are very difficult to stop, for several reasons.

An online community is a group of people with common interests who use the Internet (web sites, email, instant messaging, etc.) to communicate, work together and pursue their interests over time.

A lesser-known problem is hazing within online communities. Members of an elite online community use hazing to display their power, produce inequality, and instill loyalty in newcomers. While online hazing does not inflict physical duress, "the status values of domination and subordination are just as effectively transmitted".[98] Elite members of the in-group may haze by employing derogatory terms to refer to newcomers, using deception or playing mind games, or participating in intimidation, among other activities.[99] "[T]hrough hazing, established members tell newcomers that they must be able to tolerate a certain level of aggressiveness, grossness, and obnoxiousness in order to fit in and be accepted by the BlueSky community".[100]

Online communities like social networking websites have a very unclear distinction between private and public information. For most social networks, users have to give personal information to add to their profiles. Usually, users can control what type of information other people in the online community can access, based on their familiarity with those people or their level of comfort. These limitations are known as "privacy settings". Privacy settings raise the question of how privacy settings and terms of service affect the expectation of privacy in social media. After all, the purpose of an online community is to share a common space with one another. Furthermore, it is hard to take legal action when a user feels that his or her privacy has been invaded, because he or she technically knew what the online community entailed.[101] Creator of the social networking site Facebook, Mark Zuckerberg, noticed a change in users' behavior from when he first initiated Facebook. It seemed that "society's willingness to share has created an environment where privacy concerns are less important to users of social networks today than they were when social networking began".[102] However, even though a user might keep his or her personal information private, his or her activity is open to the whole web to access.
When a user posts information to a site, or comments on or responds to information posted by others, social networking sites create a tracking record of the user's activity.[103] Platforms such as Google and Facebook collect massive amounts of this user data through their surveillance infrastructures.[104]

Internet privacy relates to the transmission and storage of a person's data and their right to anonymity whilst online, with the UN in 2013 adopting online privacy as a human right by a unanimous vote.[105] Many websites allow users to sign up with a username which need not be their actual name, which allows a level of anonymity; in some cases, such as on the infamous imageboard 4chan, users of the site do not need an account to engage with discussions. However, in these cases, depending on the detail of information about a person that is posted, it can still be possible to work out a user's identity. Even when a person takes measures to protect their anonymity and privacy, revelations by Edward Snowden, a former contractor at the Central Intelligence Agency, about mass surveillance programs conducted by the US intelligence services, involving the mass collection of data on both domestic and international users of popular websites including Facebook and YouTube as well as the collection of information straight from fiber cables without consent, appear to show that individuals' privacy is not always respected.[106] Facebook founder Mark Zuckerberg publicly stated that the company had not been informed of any such programs and only handed over individual users' data when required by law,[107] implying that, if the allegations are true, the data harvested had been collected without the company's consent. The growing popularity of social networks, where a user using their real name is the norm, also brings a new challenge, with one survey of 2,303 managers finding that 37% investigated candidates' social media activity during the hiring process,[108] and a study showing that 1 in 10 job application rejections for those aged 16 to 34 could be due to social media checks.[109]

Web communities can be an easy and useful tool for accessing information. However, the information contained, as well as the users' credentials, cannot always be trusted, with the internet giving a relatively anonymous medium for some to fraudulently claim anything from their qualifications or where they live to, in rare cases, pretending to be a specific person.[110] Malicious fake accounts created with the aim of defrauding victims out of money have become more high-profile, with four men sentenced to between 8 years and 46 weeks for defrauding 12 women out of £250,000 using fake accounts on a dating website.[111] In relation to accuracy, one survey based on Wikipedia that evaluated 50 articles found that 24% contained inaccuracies.[112] While in most cases the consequence might just be the spread of misinformation, in areas such as health the consequences can be far more damaging, leading to the U.S. Food and Drug Administration providing help on evaluating health information on the web.[113]

The 1% rule states that within an online community, as a rule of thumb, only 1% of users actively contribute to creating content.
Other variations also exist, such as the 1-9-90 rule (1% post and create; 9% share, like, comment; 90% view only)[114] when taking editing into account.[115] This raises problems for online communities, with most users only interested in the information such a community might contain rather than in actively contributing, which can lead to staleness of information and community decline.[116] This has led communities which rely on user editing of content to try to promote users into becoming active contributors, as well as to work on retention of existing members, through projects such as the Wikimedia Account Creation Improvement Project.[117]

In the US, two of the most important laws dealing with the legal issues of online communities, especially social networking sites, are Section 512(c) of the Digital Millennium Copyright Act and Section 230 of the Communications Decency Act. Section 512(c) removes liability for copyright infringement from sites that let users post content, so long as there is a way by which the copyright owner can request the removal of infringing content. The website may not receive any financial benefit from the infringing activities. Section 230 of the Communications Decency Act gives protection from any liability resulting from publication provided by another party. Common issues include defamation, but many courts have expanded it to include other claims as well.[118]

Online communities of various kinds (social networking sites, blogs, media sharing sites, etc.) are posing new challenges for all levels of law enforcement in combating many kinds of crimes, including harassment, identity theft, copyright infringement, etc. Copyright law is being challenged and debated with the shift in how individuals now disseminate their intellectual property. Individuals come together via online communities in collaborative efforts to create. Many describe current copyright law as being ill-equipped to manage the interests of individuals or groups involved in these collaborative efforts. Some say that these laws may even discourage this kind of production.[119]

Laws governing online behavior pose another challenge to lawmakers in that they must work to enact laws that protect the public without infringing upon their rights to free speech. Perhaps the most talked-about issue of this sort is that of cyberbullying. Some scholars call for collaborative efforts between parents, schools, lawmakers, and law enforcement to curtail cyberbullying.[120]

Laws must continually adapt to the ever-changing landscape of social media in all its forms; some legal scholars contend that lawmakers need to take an interdisciplinary approach to creating effective policy, whether it is regulatory, for public safety, or otherwise. Experts in the social sciences can shed light on new trends that emerge in the usage of social media by different segments of society (including youths).[121] Armed with this data, lawmakers can write and pass legislation that protects and empowers various online community members.
When the COVID-19 pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), began, online communities and digital space became increasingly important.[122][123][124] As the World Health Organization, other public health agencies, and governments mandated containment measures such as social distancing and isolation, people needed information and ways to connect with each other.[122][125][126][127] The waves of COVID-19 and the dangers and containment measures associated with the airborne disease led to increased feelings of anxiety, fear, stress, and loneliness.[125][128][129][127] With stay-at-home orders and social distancing measures in place, those with access to social media and digital space were able to find community online.[130][131] Access to technology is crucial for social interaction and relationship-building.[126][127]

Online communities during the COVID-19 pandemic have used digital space for three main reasons: education, health, and connection.

Education. Access to digital technology became important at the beginning of the pandemic, when students, teachers, and scholars, many of whom had previously met in person, were required to socially distance or isolate. In order to continue with education curricula and research plans, those with access to digital devices used technology to connect to the internet.[127][131] By using Zoom and other virtual platforms, educators, students, and scholars alike were able to maintain social distancing while creating connections to learn about themselves and the world around them.[127][131] Cairns et al. found in their virtual ethnography that students rely on online technologies to stay connected for school and social engagement activities.[127] Students use a wide variety of technologies, including ones for education, entertainment, daily tasks, and social networking, which are either synchronous or asynchronous.[127] Online communities are thus essential for maintaining social connection and educational endeavors.

Health. The COVID-19 pandemic has led to an increased need for online communities and digital space among those who have preexisting medical conditions and those with post-acute sequelae or long-term health conditions after a SARS-CoV-2 infection (Long COVID).[132][133][67] People who are immunocompromised, disabled, or elderly, or who have health conditions such as cancer, rely on online communities for information, solidarity, and support.[133][67] Many of them depend on telehealth and social media in order to access healthcare and maintain connection in a socially distant reality.[133][123][67] People whose illnesses place them at risk share feelings of frustration with the medical and political systems, despair, and grief within online health communities.[133][67] For example, in their article "Experiences of people affected by cancer during the outbreak of the COVID-19 pandemic: an exploratory qualitative analysis of public online forums," Colomer-Lahiguera et al.
found that cancer survivors and people with cancer had specific concerns about healthcare, infection, logistical and safety measures, and the economic impacts that come with job loss and financial burdens.[133] Those with cancer faced challenges in adapting to the "new normal," changing social behavior, and experiencing cancer.[133] People with cancer also had different needs for advice, either COVID-related (risk, COVID-19 information, others' experiences, and measures to take if infected) or cancer-related (treatment, managing symptoms and side effects, and suspecting cancer).[133] Online health communities allow those with heightened fears of infection or reinfection to discuss adaptation challenges and strategies to avoid COVID-19.[133] People with cancer and people with Long COVID face similar challenges in how they access healthcare, how healthcare providers treat them, and how they manage their illness or disease.[133][67] Online communities offer people with health conditions ways to support each other, learn preventive measures to avoid COVID-19 infections and reinfections, and find shared interests and symptoms.

Connection. Whether they use online communities to connect for educational and research purposes or join them to find solidarity or worship, people use the internet to create and foster relationships with others during the pandemic.[127][126] In the U.K., Bryson et al. found that virtual faith communities which held online services often created an "intrasacred" space in which physical sacred spaces and rituals became linked with the secular.[126] Congregation members became more active in faith rituals and preparations than before the pandemic began, and the use of social media became an important facilitator of connection.[126] As social creatures, humans crave interaction with one another, and people who practice social distancing in an effort to avoid COVID-19 infections and reinfections use online communities to find ways of connecting.[122][128][125][126][127][133][130][131][67]
https://en.wikipedia.org/wiki/Online_community
An online community, also called an internet community or web community, is a community whose members engage in computer-mediated communication primarily via the Internet. Members of the community usually share common interests. For many, online communities may feel like home, consisting of a "family of invisible friends"; these "friends" can also be connected through gaming communities and gaming companies. An online community can act as an information system where members can post, comment on discussions, give advice, or collaborate, including on medical advice or specific health care research. Commonly, people communicate through social networking sites, chat rooms, forums, email lists, and discussion boards, and increasingly through everyday social media platforms such as Facebook, Twitter, Instagram, and Discord. People may also join online communities through video games, blogs, and virtual worlds, and could potentially meet new significant others on dating sites or in dating virtual worlds. The rise in popularity of Web 2.0 websites has allowed for easier real-time communication and connection to others and has introduced new ways for information to be exchanged. Yet these interactions may also lead to a decline in offline social interaction and to more negative and derogatory forms of speech; racism, bullying, sexist comments, and the like have likewise been observed in and linked to online communities. One scholarly definition of an online community is this: "a virtual community is defined as an aggregation of individuals or business partners who interact around a shared interest, where the interaction is at least partially supported or mediated by technology (or both) and guided by some protocols or norms".[1]

Digital communities (web communities, but also communities formed over, e.g., Xbox and PlayStation) provide a platform for a range of services to users. It has been argued that they can fulfill Maslow's hierarchy of needs.[2] They allow for social interaction across the world between people of different cultures who might not otherwise have met, with offline meetings also becoming more common. Another key use of web communities is access to and the exchange of information. With communities for even very small niches, it is possible to find people interested in a topic and to seek and share information on a subject when no such people are available in the immediate area offline. This has led to a range of popular sites based on areas such as health, employment, finances, and education. Online communities can also be vital for companies' marketing and outreach.[3] Unexpected and innovative uses of web communities have emerged as well, with social networks being used in conflicts to alert citizens of impending attacks.[4] The UN sees the web, and specifically social networks, as an important tool in conflicts and emergencies.[5][6]

Web communities have grown in popularity; as of October 2014, 6 of the 20 most-trafficked websites were community-based sites.[7] The amount of traffic to such websites is expected to increase as a growing proportion of the world's population attains Internet access. The idea of a community is not a new concept: on the telephone, in ham radio, and in the online world, social interactions no longer have to be based on proximity; instead they can be with anyone anywhere.[8] The study of communities has had to adapt along with the new technologies.
Many researchers have used ethnography to attempt to understand what people do in online spaces, how they express themselves, what motivates them, how they govern themselves, what attracts them, and why some people prefer to observe rather than participate.[8] Online communities can congregate around a shared interest and can be spread across multiple websites.[9]

Online communities typically establish a set of values, sometimes known collectively as netiquette or Internet etiquette, as they grow. These values may include opportunity, education, culture, democracy, human services, equality within the economy, information, sustainability, and communication.[11] An online community's purpose is to serve as a common ground for people who share the same interests.[11] Online communities may be used as calendars to keep up with events such as upcoming gatherings or sporting events. They also form around activities and hobbies. Many online communities relating to health care help inform, advise, and support patients and their families. Students can take classes online and may communicate with their professors and peers online. Businesses have also started using online communities to communicate with their customers about their products and services, as well as to share information about the business. Other online communities allow a wide variety of professionals to come together to share thoughts, ideas, and theories.[11][12]

Fandom is an example of what online communities can evolve into. Online communities have grown in influence in "shaping the phenomena around which they organize", according to Nancy K. Baym's work.[9] She writes: "More than any other commercial sector, the popular culture industry relies on online communities to publicize and provide testimonials for their products." The strength of an online community's power is illustrated by the season 3 premiere of BBC's Sherlock. Online activity by fans seems to have had a noticeable influence on the plot and direction of the season-opening episode. Mark Lawson of The Guardian recounts how fans have, to a degree, directed the outcome of the events of the episode. He says that "Sherlock has always been one of the most web-aware shows, among the first to find a satisfying way of representing electronic chatter on-screen."[13] Fan communities on platforms like Twitter, Instagram, and Reddit built around sports, actors, and musicians have become powerful communities both culturally and politically.[14]

Discussions where members may post their feedback are essential to the development of an online community.[15] Online communities may encourage individuals to come together to teach and learn from one another. They may encourage learners to discuss and learn about real-world problems and situations, as well as to focus on such things as teamwork, collaborative thinking, and personal experiences.[16][17]

Blogs are among the major platforms on which online communities form. Blogging practices include microblogging, where the amount of information in a single element is smaller, and liveblogging, in which an ongoing event is blogged about in real time. The ease and convenience of blogging has allowed for its growth. Major blogging platforms include Twitter and Tumblr, which combine social media and blogging, as well as platforms such as WordPress, which host content on their own servers but also permit users to download, install, and modify the software on servers of their own.
As of October 2014, 23.1% of the top 10 million websites were either hosted on or ran WordPress.[18]

Internet forums, sometimes called bulletin boards, are websites which allow users to post topics, also known as threads, for discussion, with other users able to reply and so create a conversation. Forums follow a hierarchical structure of categories: many popular forum software platforms categorise forums depending on their purpose and allow forum administrators to create subforums within their platform. Over time, more advanced features have been added to forums; the ability to attach files, embed YouTube videos, and send private messages is now commonplace. As of October 2014, the largest forum, Gaia Online, contained over 2 billion posts.[19] Members are commonly assigned to user groups which control their access rights and permissions.

Social networks are platforms allowing users to set up their own profile and build connections with like-minded people who pursue similar interests through interaction. The first traceable example of such a site is SixDegrees.com, set up in 1997, which included a friends list and the ability to send messages to members linked to friends and to see other users' associations. For much of the 21st century, the popularity of such networks has been growing. Friendster was the first social network to gain mass media attention; however, by 2004 it had been overtaken in popularity by Myspace, which in turn was later overtaken by Facebook. In 2013, Facebook attracted 1.23 billion monthly users, rising from 145 million in 2008.[20] Facebook was the first social network to surpass 1 billion registered accounts, and by 2020 had more than 2.7 billion active users.[21] Meta Platforms, the owner of Facebook, also owns three other leading platforms for online communities: Instagram, WhatsApp, and Facebook Messenger. Most top-ranked social networks originate in the United States, but European services like VK, the Japanese platform LINE, and Chinese social networks WeChat, QQ, and the video-sharing app Douyin (internationally known as TikTok) have also garnered appeal in their respective regions.[22] Current trends focus on the increased use of mobile devices when using social networks. Statistics from Statista show that, in 2013, 97.9 million users in the United States accessed social networks from a mobile device.[23]

Researchers and organizations have worked to classify types of online community and to characterise their structure. For example, it is important to know the security, access, and technology requirements of a given type of community, as it may evolve from an open forum into a private and regulated one.[17] It has been argued that the technical aspects of online communities, such as whether pages can be created and edited by the general user base (as is the case with wikis) or only by certain users (as is the case with most blogs), can place online communities into stylistic categories. Another approach argues that "online community" is a metaphor and that contributors actively negotiate the meaning of the term, including values and social norms.[24]

Some research has looked at the users of online communities. Amy Jo Kim has classified the rituals and stages of online community interaction and called it the "membership life cycle".[25] Clay Shirky talks about communities of practice, whose members collaborate and help each other in order to make something better or improve a certain skill.
What makes these communities bond is "love" of something, as demonstrated by members who go out of their way to help without any financial interest.[26][27] Campbell et al. developed a character theory for analyzing online communities, based on tribal typologies, and identified three character types in the communities they investigated.[28][29]

Online communities have also forced retail firms to change their business strategies. Companies have to network more, adjust computations, and alter their organizational structures. This leads to changes in a company's communications with its manufacturers, including the information shared and made accessible for further productivity and profits. Because consumers and customers in all fields are becoming accustomed to more interaction and engagement online, adjustments must be made in order to keep audiences intrigued.[17]

Online communities have been characterized as "virtual settlements" that have the following four requirements: interactivity, a variety of communicators, a common public place where members can meet and interact, and sustained membership over time. Based on these considerations, it can be said that microblogs such as Twitter can be classified as online communities.[30]

Dorine C. Andrews argues, in the article "Audience-Specific Online Community Design", that there are three parts to building an online community: starting the online community, encouraging early online interaction, and moving to a self-sustaining interactive environment.[31] When starting an online community, it may be effective to create webpages that appeal to specific interests. Online communities with clear topics and easy access tend to be most effective. In order to gain early interaction by members, privacy guarantees and content discussions are very important.[31] Successful online communities tend to be able to function self-sufficiently.[31]

There are two major types of participation in online communities: public participation and non-public participation, also called lurking. Lurkers are participants who join a virtual community but do not contribute. In contrast, public participants, or posters, are those who join virtual communities and openly express their beliefs and opinions. Both lurkers and posters frequently enter communities to find answers and to gather general information. For example, there are several online communities dedicated to technology. In these communities, posters are generally experts in the field who can offer technological insight and answer questions, while lurkers tend to be technological novices who use the communities to find answers and to learn.[32] In general, virtual community participation is influenced by how participants view themselves in society as well as by norms, both of society and of the online community.[33] Participants also join online communities for friendship and support. In a sense, virtual communities may fill social voids in participants' offline lives.[34] Sociologist Barry Wellman presents the idea of "glocalization": the Internet's ability to extend participants' social connections to people around the world while also aiding them in further engagement with their local communities.[35]

Although online societies differ in content from real society, the roles people assume in their online communities are quite similar.
Elliot Volkman[36] points out several categories of people that play a role in the cycle of social networking.

An article entitled "The real value of on-line communities," written by A. Armstrong and John Hagel and published in the Harvard Business Review,[37] addresses a handful of elements that are key to the growth of an online community and its success in drawing in members. The article focuses specifically on online communities related to business, but its points can be transferred to and applied to any online community. It addresses four main categories of business-based online communities, but states that a truly successful one will combine qualities of each of them: communities of transaction, communities of interest, communities of fantasy, and communities of relationship. Anubhav Choudhury likewise describes these four types of community.[38]

Amy Jo Kim's membership lifecycle theory states that members of online communities begin their life in a community as visitors, or lurkers. After breaking through a barrier, people become novices and participate in community life. After contributing for a sustained period of time, they become regulars. If they break through another barrier they become leaders, and once they have contributed to the community for some time they become elders. This life cycle can be applied to many virtual communities, such as bulletin board systems, blogs, mailing lists, and wiki-based communities like Wikipedia. A similar model can be found in the works of Lave and Wenger, who illustrate a cycle of how users become incorporated into virtual communities using the principles of legitimate peripheral participation. They suggest five types of trajectories within a learning community,[39] and the correlation between these learning trajectories and Web 2.0 community participation can be illustrated using the example of YouTube.

Newcomers are important for online communities. Online communities rely on volunteers' contributions, and most online communities face a high turnover rate as one of their main challenges. For example, only a minority of Wikipedia users contribute regularly, and only a minority of those contributors participate in community discussions. One study conducted by Carnegie Mellon University found that "more than two-thirds (68%) of newcomers to Usenet groups were never seen again after their first post".[40] These facts reflect the point that recruiting and retaining new members has become a crucial problem for online communities: without replacing members who leave, the communities will eventually wither away. Newcomers, as new members of a community, often face many barriers when contributing to a project, and those barriers might lead them to give up the project or even leave the community. By conducting a systematic literature review of 20 primary studies on the barriers faced by newcomers when contributing to open source software projects, Steinmacher et al. identified 15 different barriers and classified them into five categories.[41] Because of these barriers, it is necessary that online communities engage newcomers and help them adjust to the new environment. From the communities' side, newcomers can be both beneficial and harmful. On the one hand, newcomers can bring online communities innovative ideas and resources.
On the other hand, they can also harm communities through misbehavior caused by their unfamiliarity with community norms. Kraut et al. defined five basic issues faced by online communities when dealing with newcomers, and proposed several design claims for each problem in their book Building Successful Online Communities.[42]

Successful online communities motivate online participation. Methods of motivating participation in these communities have been investigated in several studies. There are many persuasive factors that draw users into online communities. Peer-to-peer systems and social networking sites rely heavily on member contribution. Users' underlying motivations to involve themselves in these communities have been linked to persuasion theories of sociology. One of the greatest attractions of online communities is the sense of connection users build among members. Participation and contribution are influenced when members of an online community are aware of their global audience.[45]

The majority of people learn by example and often follow others, especially when it comes to participation.[46] Individuals are reserved about contributing to an online community for many reasons, including but not limited to a fear of criticism or of being inaccurate. Users may withhold information that they do not believe is particularly interesting, relevant, or truthful. In order to challenge these contribution barriers, producers of these sites are responsible for developing knowledge-based and foundation-based trust within the community.[47] Users' perception of their audience is another reason that makes users participate in online communities. Results showed that users usually underestimate the size of their audience in online communities: social media users guess that their audience is 27% of its real size. Regardless of this underestimation, audience size has been shown to affect users' self-presentation and also content production, which means a higher level of participation.[48]

There are two types of virtual online communities (VOCs): dependent and self-sustained VOCs. Dependent VOCs are those in which people use the virtual community as an extension of themselves,[clarification needed] interacting with people they know. Self-sustained VOCs are communities where relationships between participating members are formed and maintained through encounters in the online community.[49] For all VOCs, there is the issue of creating identity and reputation in the community. People can create whatever identity they would like through their interactions with other members. The username is what members identify each other by, but it says very little about the person behind it. The main features of online communities that attract people are a shared communication environment, relationships formed and nurtured, a sense of belonging to a group, the internal structure of the group, and a common space shared by people with similar ideas and interests. The three most critical issues are belonging, identity, and interest. For an online community to flourish there needs to be consistent participation, interest, and motivation.[50]

Research conducted by Helen Wang applied the Technology Acceptance Model to online community participation.[51] Internet self-efficacy positively predicted perceived ease of use; participants' beliefs in their abilities to use the internet and web-based tools determined how much effort was expected. Community environment positively predicted perceived ease of use and usefulness.
Intrinsic motivation positively predicted perceived ease of use, usefulness, and actual use. The technology acceptance model thus positively predicts how likely it is that an individual will participate in an online community.

Establishing a relationship between the consumer and a seller has become a new science with the emergence of online communities. It is a new market to be tapped by companies, and doing so requires an understanding of the relationships built in online communities. Online communities gather people around common interests, and these common interests can include brands, products, and services.[52]: 50 Companies not only have a chance to reach a new group of consumers in online communities, but also to tap into information about those consumers. Companies have a chance to learn about consumers in an environment in which they feel a degree of anonymity and are thus more open about what they really want or are looking for.

In order to establish a relationship with the consumer, a company must seek a way to identify how individuals interact with the community. This is done by understanding the relationships an individual has with an online community. There are six identifiable relationship statuses: considered status, committed status, inactive status, faded status, recognized status, and unrecognized status.[52]: 56 Unrecognized status means the consumer is unaware of the online community or has not decided the community to be useful. Recognized status is when a person is aware of the community but is not entirely involved. Considered status is when a person begins their involvement with the site; usage at this stage is still very sporadic. Committed status is when a relationship between a person and an online community is established and the person is fully involved with the community. Inactive status is when an online community no longer has relevance to a person. Faded status is when a person has begun to fade away from a site.[52]: 57 It is important to be able to recognize which group or status a consumer holds, because it might help determine which approach to use.

Companies not only need to understand how a consumer functions within an online community, but also "should understand the communality of an online community".[53]: 401 This means a company must understand the dynamic and structure of the online community in order to establish a relationship with the consumer. Online communities have cultures of their own, and to be able to establish a commercial relationship, or even engage at all, one must understand the community's values and proprieties. It has even proved beneficial to treat online commercial relationships more as friendships than as business transactions. The smokescreen of anonymity in online engagement allows a person to interact socially with strangers in a much more personal way.[52]: 69 This personal connection the consumer feels translates into how they want to establish relationships online. They separate what is commercial or spam from what is relational: the relational becomes what they associate with human interaction, while the commercial is what they associate with digital or non-human interaction. Thus the online community should not be viewed as "merely a sales channel".[54]: 537 Instead, it should be viewed as a network for establishing interpersonal communications with the consumer.
Most online communities grow slowly at first, due in part to the fact that the strength of motivation for contributing is usually proportional to the size of the community. As the size of the potential audience increases, so does the attraction of writing and contributing. This, coupled with the fact that organizational culture does not change overnight, means creators can expect slow progress at first with a new virtual community. As more people begin to participate, however, the aforementioned motivations will increase, creating a virtuous cycle in which more participation begets more participation. Community adoption can be forecast with the Bass diffusion model, originally conceived by Frank Bass to describe the process by which new products get adopted as an interaction between innovative early adopters and those who follow them (an illustrative sketch of the model appears after this passage).

Online learning is a form of online community. The sites are designed to educate. Colleges and universities may offer many of their classes online to their students; this allows each student to take the class at his or her own pace. According to an article titled "Learning in Online Forums", published in volume 21, issue 5 of the European Management Journal,[55] researchers conducted a series of studies about online learning. They found that while good online learning is difficult to plan, it is quite conducive to educational learning. Online learning can bring together a diverse group of people, and although the learning is asynchronous, if the forum is set up using all the best tools and strategies, it can be very effective. Another study, published in volume 55, issue 1 of Computers and Education,[56] found results supporting the findings of the article mentioned above. The researchers found that motivation, enjoyment, and team contributions enhanced students' learning outcomes and that the students felt they learned well in this way. A study published in the same journal[57] looks at how social networking can foster individual well-being and develop skills which can improve the learning experience. These articles look at a variety of different types of online learning and suggest that online learning can be quite productive and educational if created and maintained properly.

One feature of online communities is that they are not constrained by time, giving members the ability to move through periods of high to low activity over time. This dynamic nature maintains a freshness and variety that traditional methods of learning might not have been able to provide.[citation needed] It appears that online communities such as Wikipedia have become a source of professional learning.[citation needed] They are an active learning environment in which learners converse and inquire. In a study exclusive to teachers in online communities, results showed that membership in online communities provided teachers with a rich source of professional learning that satisfied each member of the community.[citation needed] Saurabh Tyagi[58] describes benefits of online community learning, drawing on terms taken from Edudemic, a site about teaching and learning.
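Returning to the Bass diffusion model mentioned above, the following is a minimal, purely illustrative sketch of how such a forecast can be computed; the market size and the innovation and imitation coefficients are assumed example values, not figures from any cited study.

```python
import math

def bass_cumulative(t: float, m: float, p: float, q: float) -> float:
    """Cumulative adopters at time t under the Bass diffusion model.

    m: eventual market potential (total members the community could reach)
    p: coefficient of innovation, q: coefficient of imitation.
    Uses the closed form F(t) = (1 - e^(-(p+q)t)) / (1 + (q/p) * e^(-(p+q)t)).
    """
    e = math.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Assumed example parameters; a real forecast would require fitted values.
m, p, q = 100_000, 0.03, 0.38
for year in range(11):
    print(year, round(bass_cumulative(year, m, p, q)))
```

The slow early growth followed by rapid, imitation-driven uptake that such a curve produces mirrors the pattern described above: slow progress at first, then a virtuous cycle of participation.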
The article "How to Build Effective Online Learning Communities"[58]provides background information about online communities as well as how to incorporate learning within an online community.[58] One of the greatest attractions towards online communities and the role assigned to an online community, is the sense of connection in which users are able to build among other members and associates. Thus, it is typical to reference online communities when regarding the 'gaming' universe. The online video game industry has embraced the concepts of cooperative and diverse gaming in order to provide players with a sense of community or togetherness. Video games have long been seen as a solo endeavor – as a way to escape reality and leave social interaction at the door. Yet, online community networks or talk pages have now allowed forms of connection with other users. These connections offer forms of aid in the games themselves, as well as an overall collaboration and interaction in the network space. For example, a study conducted by Pontus Strimling and Seth Frey found that players would generate their own models of fair "loot" distribution through community interaction if they felt that the model provided by the game itself was insufficient.[59] The popularity of competitive the online multiplayer games has now even promoted informal social interaction through the use of the recognized communities.[60][61] As with other online communities, problems do arise when approaching the usages of online communities in the gaming culture, as well as those who are utilizing the spaces for their own agendas. "Gaming culture" offers individuals personal experiences, development of creativity, as well an assemblance of togetherness that potentially resembles formalized social communication techniques. On the other hand, these communities could also include toxicity, online disinhibition, and cyberbullying. Online health communitiesis one example of online communities which is heavily used by internet users.[64][65][66][67]A key benefit of online health communities is providing user access to other users with similar problems or experiences which has a significant impact on the lives of their members.[64]Through people participation, online health communities will be able to offer patients opportunities for emotional support[68][69]and also will provide them access to experience-based information about particular problem or possible treatment strategies. Even in some studies, it is shown that users find experienced-based information more relevant than information which was prescribed by professionals.[70][71][72]Moreover, allowing patients to collaborate anonymously in some of online health communities suggests users a non-judgmental environment to share their problems, knowledge, and experiences.[73]However, recent research has indicated that socioeconomic differences between patients may result in feelings of alienation or exclusion within these communities, even despite attempts to make the environments inclusive.[74] Online communities are relatively new and unexplored areas. They promote a whole new community that prior to the Internet was not available. Although they can promote a vast array of positive qualities, such as relationships without regard to race, religion, gender, or geography,[75]they can also lead to multiple problems. 
Risk perception, an uncertainty about participating in an online community, is quite common in certain online circumstances. Clay Shirky explains one of these problems using the image of two hula hoops. With the emergence of online communities there is a "real life" hoop and an "online life" hoop. These two hoops used to be completely separate, but now they have swung together and overlap. The problem with this overlap is that there is no longer any distinction between face-to-face interactions and virtual ones; they are one and the same. Shirky illustrates this by describing a meeting: a group of people will sit in a meeting, but they will all also be connected to a virtual world, using online communities such as wikis.[77]

A further problem is identity formation within the ambiguous mix of real and virtual life. Identity formation in the real world consisted of "one body, one identity",[citation needed] but online communities allow a person to create "as many electronic personae" as they please. This can lead to identity deception. Claiming to be someone you are not can be problematic both with other online community users and for yourself, and creating a false identity can cause confusion and ambivalence about which identity is true. A lack of trust regarding personal or professional information is also problematic for questions of identity and information reciprocity. Often, if information is given to another user of an online community, one expects equal information to be shared back; however, this may not be the case, or the other user may use the information given in harmful ways.[78] The construction of an individual's identity within an online community requires self-presentation. Self-presentation is the act of "writing the self into being", in which a person's identity is formed by what that person says, does, or shows. This also poses a potential problem, as such self-representation is open to interpretation as well as misinterpretation. While an individual's online identity can be entirely constructed from a few of his or her own sentences, perceptions of this identity can be entirely misguided and incorrect. Online communities present the problems of preoccupation, distraction, detachment, and desensitization to an individual, although online support groups now exist. Online communities do present potential risks, and users must remember that just because an online community feels safe does not mean it necessarily is.[35]

Cyberbullying, the "use of long-term aggressive, intentional, repetitive acts by one or more individuals, using electronic means, against an almost powerless victim",[79] has increased in frequency alongside the continued growth of web communities, with an Open University study finding that 38% of young people had experienced or witnessed cyberbullying.[80] It has received significant media attention due to high-profile incidents such as the death of Amanda Todd,[81] who before her death detailed her ordeal on YouTube.[82] A key feature of such bullying is that it allows victims to be harassed at all times, something not typically possible with physical bullying.
This has forced governments and other organisations to change their typical approach to bullying, with the UK Department for Education now issuing advice to schools on how to deal with cyberbullying cases.[83]

The most common problem in online communities tends to be online harassment, meaning threatening or offensive content aimed at known friends or strangers through online technology. Where such posting is done "for the lulz" (that is, for the fun of it), it is known as trolling.[84] Sometimes trolling is done in order to harm others for the gratification of the person posting. The primary motivation for such posters, known in character theory as "snerts", is the sense of power and exposure it gives them.[85] Online harassment tends to affect adolescents the most, due to their risk-taking behavior and decision-making processes. One notable example is that of Natasha MacBryde, who was tormented by Sean Duffy, who was later prosecuted.[86] In 2010, Alexis Pilkington, a 17-year-old New Yorker, committed suicide; trolls pounced on her tribute page, posting insensitive and hurtful images of nooses and other suicidal symbolism. Four years prior to that, an 18-year-old died in a car crash in California, and trolls took images of her disfigured body that they found on the internet and used them to torment the girl's grieving parents.[87] Psychological research has shown that anonymity increases unethical behavior through what is called the online disinhibition effect.

Many websites and online communities have attempted to combat trolling. There has not been a single effective method to discourage anonymity, and arguments exist claiming that removing Internet users' anonymity is an intrusion on their privacy and violates their right to free speech. Julie Zhou, writing for the New York Times, comments that "There's no way to truly rid the Internet of anonymity. After all, names and email addresses can be faked. And in any case many commenters write things that are rude or inflammatory under their real names". Thus, some trolls do not even bother to hide their actions and take pride in their behavior.[87] The rate of reported online harassment has been increasing: there was a 50% increase in accounts of youth online harassment from 2000 to 2005.[88]

Another form of harassment prevalent online is called flaming. According to a study conducted by Peter J. Moor, flaming is defined as displaying hostility by insulting, swearing, or using otherwise offensive language.[89] Flaming can occur in either a group-style format (the comments section on YouTube) or in a one-on-one format (private messaging on Facebook). Several studies have shown that flaming is more apparent in computer-mediated conversation than in face-to-face interaction.[90] For example, a study conducted by Kiesler et al. found that people who met online judged each other more harshly than those who met face to face.[91] The study goes on to say that the people who communicated by computer "felt and acted as though the setting was more impersonal, and their behavior was more uninhibited. These findings suggest that computer-mediated communication ... elicits asocial or unregulated behavior".[92]

Unregulated communities are established when online users communicate on a site although there are no mutual terms of usage and no regulator. Online interest groups and anonymous blogs are examples of unregulated communities.[17]

Cyberbullying is also prominent online.
Cyberbullying is defined as willful and repeated harm inflicted on another person through information technology mediums.[93] Cyberbullying victimization has risen to the forefront of the public agenda after a number of news stories came out on the topic.[94] For example, Rutgers freshman Tyler Clementi committed suicide in 2010 after his roommate secretly filmed him in an intimate encounter and then streamed the video over the Internet.[95] Numerous states, such as New Jersey, have created and passed laws that do not allow any sort of harassment on, near, or off school grounds that disrupts or interferes with the operation of the school or the rights of other students.[96] In general, sexual and gender-based harassment online has been deemed a significant problem.[97] Trolling and cyberbullying in online communities are very difficult to stop, for several reasons.

A lesser-known problem is hazing within online communities. Members of an elite online community use hazing to display their power, produce inequality, and instill loyalty in newcomers. While online hazing does not inflict physical duress, "the status values of domination and subordination are just as effectively transmitted".[98] Elite members of the in-group may haze by employing derogatory terms to refer to newcomers, using deception or playing mind games, or participating in intimidation, among other activities.[99] "[T]hrough hazing, established members tell newcomers that they must be able to tolerate a certain level of aggressiveness, grossness, and obnoxiousness in order to fit in and be accepted by the BlueSky community".[100]

Online communities like social networking websites have a very unclear distinction between private and public information. For most social networks, users have to give personal information to add to their profiles. Usually, users can control what type of information other people in the online community can access, based on their familiarity with those people or their level of comfort. These limitations are known as "privacy settings". Privacy settings raise the question of how privacy settings and terms of service affect the expectation of privacy in social media. After all, the purpose of an online community is to share a common space with one another. Furthermore, it is hard to take legal action when a user feels that his or her privacy has been invaded, because he or she technically knew what the online community entailed.[101] The creator of the social networking site Facebook, Mark Zuckerberg, noticed a change in users' behavior from when he first launched Facebook: it seemed that "society's willingness to share has created an environment where privacy concerns are less important to users of social networks today than they were when social networking began".[102] However, even though a user might keep his or her personal information private, his or her activity is open for the whole web to access.
https://en.wikipedia.org/wiki/Web_community
Thesociology of the Internet(or thesocial psychology of the internet) involves the application of sociological or social psychological theory and method to theInternetas a source of information and communication. The overlapping field ofdigital sociologyfocuses on understanding the use ofdigital mediaas part of everyday life, and how these various technologies contribute to patterns of human behavior, social relationships, andconcepts of the self.Sociologistsare concerned with the social implications of the technology; newsocial networks,virtual communitiesand ways ofinteractionthat have arisen, as well as issues related tocyber crime. The Internet—the newestin a series of majorinformation breakthroughs—is of interest for sociologists in various ways: as a tool forresearch, for example, in usingonlinequestionnairesinstead of paper ones, as a discussion platform, and as a research topic. Thesociologyof the Internet in the stricter sense concerns the analysis ofonline communities(e.g. as found innewsgroups),virtual communitiesandvirtual worlds, organizational change catalyzed throughnew mediasuch as the Internet, and social change at-large in the transformation fromindustrialtoinformational society(or toinformation society). Online communities can be studied statistically throughnetwork analysisand at the same time interpreted qualitatively, such as throughvirtual ethnography. Social change can be studied through statisticaldemographicsor through the interpretation of changing messages and symbols in onlinemedia studies. The Internet is a relatively new phenomenon. AsRobert Darntonwrote, it is a revolutionary change that "took place yesterday, or the day before, depending on how you measure it."[1]The Internet developed from theARPANET, dating back to 1969; as a term it was coined in 1974. TheWorld Wide Webas we know it was shaped in the mid-1990s, whengraphical interfaceand services likeemailbecame popular and reached wider (non-scientific and non-military) audiences andcommerce.[1][2]Internet Explorerwas first released in 1995;Netscapea year earlier.Googlewas founded in 1998.[1][2]Wikipediawas founded in 2001.Facebook,MySpace, andYouTubein the mid-2000s.Web 2.0is still emerging. The amount of information available on the net and thenumber of Internet users worldwidehas continued to grow rapidly.[2]The term 'digital sociology' is now becoming increasingly used to denote new directions in sociological research into digital technologies since Web 2.0. The first scholarly article to have the termdigital sociologyin the title appeared in 2009.[3]The author reflects on the ways in which digital technologies may influence both sociological research and teaching. In 2010, 'digital sociology' was described, byRichard Neal, in terms of bridging the growing academic focus with the increasing interest from global business.[4]It was not until 2013 that the first purely academic book tackling the subject of 'digital sociology' was published.[5]The first sole-authored book entitledDigital Sociologywas published in 2015,[6]and the first academic conference on "Digital Sociology" was held in New York, NY in the same year.[7] Although the termdigital sociologyhas not yet fully entered the cultural lexicon, sociologists have engaged in research related to the Internet since its inception.
These sociologists have addressed many social issues relating toonline communities,cyberspaceand cyber-identities. This and similar research has attracted many different names such ascyber-sociology, thesociology of the internet, thesociology of online communities, thesociology of social media, thesociology of cyberculture, or something else again. Digital sociology differs from these terms in that it is wider in its scope, addressing not only the Internet orcyberculturebut also the impact of the other digital media and devices that have emerged since the first decade of the twenty-first century. Since the Internet has become more pervasive and linked with everyday life, references to the 'cyber' in the social sciences seem now to have been replaced by the 'digital'. 'Digital sociology' is related to other sub-disciplines such asdigital humanitiesanddigital anthropology. It is beginning to supersede and incorporate the other titles above, as well as including the newestWeb 2.0digital technologies into its purview, such aswearable technology,augmented reality,smart objects, theInternet of Thingsandbig data.

According to DiMaggio et al. (1999),[2]research tends to focus on the Internet's implications in five domains. Early on, there were predictions that the Internet would change everything (or nothing); over time, however, a consensus emerged that the Internet, at least in the current phase of development, complements rather than displaces previously implementedmedia.[2]This has meant a rethinking of the 1990s ideas of "convergence of new and old media". Further, the Internet offers a rare opportunity to study changes caused by the newly emerged—and likely, still evolving—information and communication technology(ICT).[2]

The Internet has createdsocial network services, forums ofsocial interactionandsocial relations, such asFacebook,MySpace,Meetup, andCouchSurfing, which facilitate both online and offline interaction. Thoughvirtual communitieswere once thought to be composed of strictly virtual social ties, researchers find that even those social ties formed in virtual spaces are often maintained both online and offline.[8][9] There are ongoing debates about the impact of the Internet onstrongandweak ties, whether the Internet is creating more or lesssocial capital,[10][11]the Internet's role in trends towards social isolation,[12]and whether it creates a more or less diverse social environment. It is often said the Internet is a new frontier, and there is a line of argument to the effect that social interaction, cooperation and conflict among users resembles the anarchistic and violentAmerican frontierof the early 19th century.[13]

In March 2014, researchers from theBenedictine University at MesainArizonastudied how online interactions affect face-to-face meetings. The study is titled "Face to Face Versus Facebook: Does Exposure to Social Networking Web Sites Augment or Attenuate Physiological Arousal Among the Socially Anxious," published inCyberpsychology, Behavior, and Social Networking.[14]They monitored 26 female students with electrodes to measure arousal associated with social anxiety. Prior to meeting people, the students were shown pictures of the person they were expected to meet. Researchers found that meeting someone face-to-face after looking at their photos increases arousal, which the study linked to an increase in social anxiety. These findings confirm previous studies that found that socially anxious people prefer online interactions.
The study also recognized that the stimulated arousal can be associated with positive emotions and could lead to positive feelings.

Recent research has taken theInternet of Thingswithin its purview, as global networks of interconnected everyday objects are said to be the next step in technological advancement.[15]Certainly, global space- and earth-based networks are expanding coverage of the IoT at a fast pace. This has a wide variety of consequences, with current applications in the health, agriculture, traffic and retail fields.[16]Companies such asSamsungandSigfoxhave invested heavily in such networks, and their social impact will have to be measured accordingly, with some sociologists suggesting the formation of socio-technical networks of humans and technical systems.[17][18]Issues of privacy,right to information, legislation and content creation will come under public scrutiny in light of these technological changes.[16][19]

Digital sociology is also connected with data and data emotions.[20]Data emotions arise when people use digital technologies that can affect their decision-making or emotions. Social media platforms collect user data while also affecting users' emotional states, which can foster solidarity or social engagement among users. Social media platforms such as Instagram and Twitter can evoke emotions of love, affection, and empathy. Viral challenges such as the 2014 Ice Bucket Challenge[20]and viral memes have brought people together through mass participation, displaying cultural knowledge and understanding of the self. Mass participation in viral events prompts users to spread information (data) to one another, affecting psychological states and emotions. The link between digital sociology and data emotions is formed through the integration of technological devices within everyday life and activities.

Researchers have investigated the use oftechnology(as opposed to the Internet) by children and how excessive use can cause medical and psychological issues.[21]Children's use of technological devices can become addictive and can lead to negative effects such asdepression,attention problems,loneliness,anxiety,aggressionandsolitude.[21]Obesityis another possible result, as children may prefer to use their technological devices rather than engage in physical activity.[22]Parents can impose restrictions on their children's use of technological devices, which can reduce the negative effects of technology and help limit excessive use.[22] Children can also use technology to enhance their learning skills, for example by using online programs to improve the way they learn to read or do math. The resources technology provides may enhance children's skills, but children also need to be cautious online because of the risk of cyberbullying.Cyberbullyingcan have academic and psychological effects on children who are targeted by people who bully them through the Internet.[23]When technology is introduced to children, they are not forced to accept it; instead, children are permitted to have input into whether or not they use a technological device.[24][need quotation to verify]
The routines of children have changed due to the increasing popularity of internet-connected devices, with Social Policy researcher Janet Heaton concluding that, "while the children's health and quality of life benefited from the technology, the time demands of the care routines and lack of compatibility with other social and institutional timeframes had some negative implications".[25]Children's frequent use of technology commonly leads to decreased time available to pursue meaningful friendships, hobbies and potential career options. While technology can have negative impacts on the lives of children, it can also be used as a valuablelearningtool that can encourage cognitive, linguistic and social development. In a 2010 study by the University of New Hampshire, children who used technological devices exhibited greater improvements in problem-solving, intelligence, language skills and structural knowledge in comparison to those children who did not incorporate the use of technology in their learning.[26]In a 1999 paper, it was concluded that "studies did find improvements in student scores on tests closely related to material covered in computer-assisted instructional packages", which demonstrates how technology can have positive influences on children by improving their learning capabilities.[27]Problems have also arisen between children and their parents when parents limit what children can use their technological devices for, specifically what they can and cannot watch on their devices, which can leave children frustrated.[28]

The Internet has achieved new relevance as a political tool. The presidential campaign ofHoward Deanin 2004 in theUnited Statesbecame famous for its ability to generate donations via the Internet, and the 2008 campaign ofBarack Obamabecame even more so. Increasingly,social movementsand other organizations use the Internet to carry out both traditional activism and the newerInternet activism. Some governments are also getting online. Some countries, such asCuba,Iran,North Korea,Myanmar, thePeople's Republic of China, andSaudi Arabia, use filtering and censoring software torestrict what people in their countries can access on the Internet. In theUnited Kingdom, authorities also use software to locate and arrest individuals they perceive as a threat. Other countries, including the United States, have enacted laws making the possession or distribution of certain material, such aschild pornography, illegal, but do not use filtering software. In some countriesInternet service providershave agreed to restrict access to sites listed by police.

While much has been written of the economic advantages ofInternet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforceeconomic inequalityand thedigital divide.[29]Electronic commerce may be responsible forconsolidationand the decline ofmom-and-pop,brick and mortarbusinesses, resulting in increases inincome inequality.[30] The spread of low-cost Internet access in developing countries has opened up new possibilities forpeer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such asDonors ChooseandGlobal Givingnow allow small-scale donors to direct funds to individual projects of their choice.
A popular twist on Internet-based philanthropy is the use ofpeer-to-peer lendingfor charitable purposes.Kivapioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediarymicrofinanceorganizations which post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice, and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed prior to being funded by lenders, and borrowers do not communicate with lenders themselves.[31][32]However, the recent spread of cheap Internet access in developing countries has made genuine peer-to-peer connections increasingly feasible. In 2009 the US-based nonprofitZidishatapped into this trend to offer the first peer-to-peer microlending platform to link lenders and borrowers across international borders without local intermediaries. Inspired by interactive websites such asFacebookandeBay, Zidisha's microlending platform facilitates direct dialogue between lenders and borrowers and provides a performance rating system for borrowers. Web users worldwide can fund loans for as little as a dollar.[33]

The Internet has been a major source ofleisuresince before the World Wide Web, with entertaining social experiments such asMUDsandMOOsbeing conducted on university servers, and humor-relatedUsenetgroups receiving much of the main traffic. Today, manyInternet forumshave sections devoted to games and funny videos; short cartoons in the form ofFlash moviesare also popular. Over 6 million people use blogs or message boards as a means of communication and for the sharing of ideas. Thepornographyandgamblingindustries have both taken full advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites. Although governments have made attempts to censor Internet porn, Internet service providers have told governments that these plans are not feasible.[34]Although many governments have also attempted to restrict both industries' use of the Internet, this has generally failed to stop their widespread popularity.

One area of leisure on the Internet isonline gaming. This form of leisure creates communities, bringing people of all ages and origins to enjoy the fast-paced world of multiplayer games. These range fromMMORPGstofirst-person shooters, and fromrole-playing video gamestoonline gambling. This has revolutionized the way many people interact and spend their free time on the Internet. While online gaming has been around since the 1970s, modern modes of online gaming began with services such asGameSpyandMPlayer, to which players of games would typically subscribe. Non-subscribers were limited to certain types of gameplay or certain games. Many use the Internet to access and download music, movies and other works for their enjoyment and relaxation. There are paid and unpaid sources for all of these, using centralized servers and distributed peer-to-peer technologies. Discretion is needed as some of these sources take more care over the original artists' rights and over copyright laws than others. Many use the World Wide Web to access news, weather and sports reports, to plan and book holidays and to find out more about their random ideas and casual interests.
People usechat,messagingand e-mail to make friends and stay in touch with friends worldwide, sometimes in the same way as some previously hadpen pals.Social networkingwebsites likeMySpace,Facebookand many others like them also put and keep people in contact for their enjoyment. The Internet has seen a growing number ofWeb desktops, where users can access their files, folders, and settings via the Internet. Cyberslackinghas become a serious drain on corporate resources; the average UK employee spends 57 minutes a day surfing the Web at work, according to a study byPeninsula Business Services.[35]

Four aspects of digital sociology have been identified by Lupton (2012).[36] Although they have been reluctant to use social and other digital media for professional academic purposes, sociologists are slowly beginning to adopt them for teaching and research.[37]An increasing number of sociological blogs are beginning to appear and more sociologists are joining Twitter, for example. Some are writing about the best ways for sociologists to employ social media as part of academic practice and the importance ofself-archivingand making sociological research open access, as well as writing forWikipedia.

Digital sociologists have begun to write about the use of wearable technologies as part of quantifying the body[38]and the social dimensions of big data and the algorithms that are used to interpret these data.[39]Others have directed attention at the role of digital technologies as part of the surveillance of people's activities, via such technologies asCCTV camerasand customer loyalty schemes,[40]as well as themass surveillanceof the Internet that is being conducted by secret services such as theNSA.

The 'digital divide', or the differences in access to digital technologies experienced by certain social groups such as the socioeconomically disadvantaged, those of lower education levels, women and the elderly, has preoccupied many researchers in the social scientific study of digital media. However, several sociologists have pointed out that while it is important to acknowledge and identify thestructural inequalitiesinherent in differentials in digital technology use, this concept is rather simplistic and fails to incorporate the complexities of access to and knowledge about digital technologies.[41]

There is a growing interest in the ways in which social media contributes to the development of intimate relationships and concepts of the self. One of the best-known sociologists who has written about social relationships, selfhood and digital technologies isSherry Turkle.[42][43]In her most recent book, Turkle addresses the topic of social media.[44]She argues that relationships conducted via these platforms are not as authentic as those encounters that take place "in real life". Visual media allows the viewer to be a more passive consumer of information.[45]Viewers are more likely to develop online personas that differ from their personas in the real world. This contrast between the digital world (or 'cyberspace') and the 'real world', however, has been critiqued as 'digital dualism', a concept similar to the 'aura of the digital'.[46]Other sociologists have argued that relationships conducted through digital media are inextricably part of the 'real world'.[47]Augmented realityis an interactive experience in which reality is altered in some way by the use of digital media but not replaced.

The use of social media forsocial activismhas also provided a focus for digital sociology.
For example, numerous sociological articles,[48][49]and at least one book[50]have appeared on the use of such social media platforms as Twitter,YouTubeandFacebookas a means of conveying messages about activist causes and organizing political movements. Research has also been done on the use of technology by racial minorities and other groups. These "digital practice" studies explore the ways in which the practices that groups adopt when using new technologies mitigate or reproduce social inequalities.[51][52]

Digital sociologists use varied approaches to investigating people's use of digital media, both qualitative and quantitative. These includeethnographic research, interviews and surveys with users of technologies, and also the analysis of the data produced from people's interactions with technologies: for example, their posts on social media platforms such as Facebook,Reddit,4chan,Tumblrand Twitter or their consumption habits on online shopping platforms. Such techniques asdata scraping,social network analysis,time series analysisandtextual analysisare employed to analyze both the data produced as a byproduct of users' interactions with digital media and those that they create themselves. As an example of content analysis, in 2008 Yukihiko Yoshida conducted a study titled[53]"Leni Riefenstahl and German expressionism: research in Visual Cultural Studies using the trans-disciplinary semantic spaces of specialized dictionaries." The study took databases of images tagged with connotative and denotative keywords (a search engine) and found that Riefenstahl's imagery had the same qualities as imagery tagged "degenerate", as in the title of the 1937 exhibition "Degenerate Art" in Germany.

The emergence of social media has provided sociologists with a new way of studying social phenomena. Social media networks, such asFacebookandTwitter, are increasingly being mined for research. For example, Twitter data is easily available to researchers through the Twitter API. Twitter provides researchers with demographic data, time and location data, and connections between users. From these data, researchers gain insight into user moods and how they communicate with one another. Furthermore, social networks can be graphed and visualized.[54] Using large data sets, like those obtained from Twitter, can be challenging. First of all, researchers have to figure out how to store this data effectively in a database. Several tools commonly used inBig Dataanalytics are at their disposal.[54]Since large data sets can be unwieldy and contain numerous types of data (e.g. photos, videos, GIF images), researchers have the option of storing their data in non-relational databases such asMongoDB, or in distributed storage and processing frameworks such asHadoop.[54]Processing and querying this data is an additional challenge, but several options are available to researchers. One common option is to use a querying language, such asHive, in conjunction withHadoopto analyze large data sets.[54]

The Internet and social media have allowed sociologists to study how controversial topics are discussed over time—otherwise known as Issue Mapping.[55]Sociologists can search social networking sites (e.g. Facebook or Twitter) for posts related to a hotly debated topic, then parse through and analyze the text.[55]
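To make this kind of workflow concrete, the following is a minimal Python sketch rather than code from any of the cited studies: it assumes a hypothetical file tweets.json of already-collected posts, each with "user", "mentions", and "text" fields, builds a mention network with the networkx library, partitions it with a modularity-based community detection algorithm, and counts hashtag popularity.

# Illustrative sketch only: assumes a hypothetical file "tweets.json" holding
# already-collected posts, each with "user", "mentions", and "text" fields.
import json
from collections import Counter

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Load previously collected posts (data collection itself, e.g. via an API,
# is outside the scope of this sketch).
with open("tweets.json", encoding="utf-8") as f:
    tweets = json.load(f)

# Build an undirected mention network: an edge links a user to each account
# they mention, weighted by how often the mention occurs.
graph = nx.Graph()
for tweet in tweets:
    for mentioned in tweet.get("mentions", []):
        if graph.has_edge(tweet["user"], mentioned):
            graph[tweet["user"]][mentioned]["weight"] += 1
        else:
            graph.add_edge(tweet["user"], mentioned, weight=1)

# Partition users into communities by modularity maximisation.
communities = greedy_modularity_communities(graph, weight="weight")

# Count hashtag popularity across the whole sample.
hashtags = Counter(
    word.lower()
    for tweet in tweets
    for word in tweet["text"].split()
    if word.startswith("#")
)

print(f"{graph.number_of_nodes()} users, {len(communities)} communities")
print("Most used hashtags:", hashtags.most_common(10))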
Sociologists can then use a number of easily accessible tools to visualize this data, such as MentionMapp or TwitterStreamgraph.[55]MentionMapp shows how popular a hashtag is, and TwitterStreamgraph depicts how often certain words are paired together and how their relationship changes over time.[55]

Digital surveillance occurs when digital devices record people's daily activities, collecting and storing personal data and invading privacy.[6]With the advancement of new technologies, the monitoring and watching of people online increased between 2010 and 2020. The invasion of privacy and the recording of people without consent lead people to doubt technologies that are supposed to secure and protect personal information. The storage of data and the intrusiveness of digital surveillance affect human behavior: the psychological implications of digital surveillance include concern, worry, or fear about being monitored all the time. Digital data is stored within security technologies, apps, social media platforms, and other technological devices, and can be used in various ways for various reasons. Data collected from people using the internet can be monitored and viewed by private and public companies, friends, and other known or unknown entities.

The critical, reflexive aspect of digital sociology is perhaps what makes it distinct from other approaches to studying the digital world. In adopting a critical reflexive approach, sociologists are able to address the implications of the digital for sociological practice itself. It has been argued that digital sociology offers a way of addressing the changing relations between social relations and the analysis of these relations, putting into question what social research is, and indeed, what sociology is now that social relations and society have become in many respects mediated via digital technologies.[56] How should sociology respond to the emergent forms of both 'small data' and 'big data' that are collected in vast amounts as part of people's interactions with digital technologies, and to the development of data industries using these data to conduct their own social research? Does this suggest that a "coming crisis in empirical sociology" might be on the horizon?[57]How are the identities and work practices of sociologists themselves becoming implicated within and disciplined by digital technologies such ascitation metrics?[58] These questions are central to critical digital sociology, which reflects upon the role of sociology itself in the analysis of digital technologies as well as the impact of digital technologies upon sociology.[59]

To these four aspects can be added the following subfields of digital sociology: Public sociologyusing digital media is a form of public sociology that involves publishing sociological materials in online accessible spaces and subsequent interaction with publics in these spaces. This has been referred to as "e-public sociology".[60] Social media has changed the way public sociology is perceived and has given rise to a digital evolution in this field. The vast open platform of communication has provided opportunities for sociologists to move beyond small groups or publics and address a vast audience. Blogging was the first social media platform utilized by sociologists. Sociologists such as Eszter Hargittai, Chris Bertram, and Kieran Healy were among those who started using blogging for sociology. New discussion groups about sociology and related philosophy were among the consequences of social media's impact.
The vast number of comments and discussions thus became a part of understanding sociology. One such well-known group wasCrooked Timber. Feedback on such social sites is faster and more impactful. Disintermediation, visibility, and measurement are the major effects of e-public sociology. Other social media tools, such as Twitter and Facebook, have also become tools for sociologists, a development described in "Public Sociology in the Age of Social Media".[61]

Information and communication technology as well as the proliferation of digital data are revolutionizing sociological research. Whereas there is already much methodological innovation indigital humanitiesandcomputational social sciences, theory development in the social sciences and humanities still consists mainly of print theories of computer cultures or societies. These analogue theories of the digital transformation, however, fail to account for how profoundly the digital transformation of the social sciences and humanities is changing the epistemic core of these fields. Digital methods are more than providers of ever-biggerdigital datasets for testing analogue theories; they also require new forms of digital theorising.[62]The ambition of research programmes on the digital transformation ofsocial theoryis therefore to translate analogue into digital social theories so as to complement traditional analogue social theories of the digital transformation with digital theories of digital societies.[63]
https://en.wikipedia.org/wiki/Sociology_of_the_Internet
https://en.wikipedia.org/wiki/Digital_sociology
An internet tribe or digital tribe[1] is an unofficial online community or organization of people who share a common interest and who are usually loosely affiliated with each other through social media or other Internet routes. The term is related to "tribe", which traditionally refers to people closely associated in both geography and genealogy.[2] Today such a group functions more like a virtual community or a personal network, and it is often called a global digital tribe. Anthropologists generally agree that a tribe is a (small) society that practices its own customs and culture, and that these define the tribe. Such tribes are divided into clans, each with customs and cultural values that distinguish their activities from those that occur in 'real life' contexts. People feel more inclined to share and defend their ideas on social networks than they would face to face.[citation needed] The term "tribe" originated around the time of the Greek city-states and the early formation of the Roman Empire. The Latin term "tribus" has since been transformed to mean "A group of persons forming a community and claiming descent from a common ancestor".[3] Over the years the range of meanings has grown, for example, "Any of various systems of social organization comprising several local villages, bands, districts, lineages, or other groups and sharing a common ancestry, language, culture, and name" (Morris, 1980, p. 1369). Morris (1980) also notes that a tribe is a "group of persons with a common occupation, interest, or habit," and "a large family."[2] Vestiges of ancient tribal communities have been preserved in both large gatherings (such as football matches) and small ones (such as church communities). The range of groups now referred to as tribal is truly enormous, but this broadening did not occur until industrial society had eroded the tribal gatherings of earlier societies and redefined community. The existence of social media as we know it today, however, is due to the post-industrial society that has seen the rapid growth of personal computers, mobile phones and the Internet. People can now collaborate, communicate, celebrate, commemorate, give advice and share ideas within these virtual clans, which have once again redefined social behaviour.[4] The existence of internet tribes is an expression of a human tribal instinct.[5][6][7][8][9][10] One of the first attempts at such social communities dates back to at least 2003, when tribe.net was launched. Not only do Twitter tribes have mutual interests,[11] but they also share potentially subconscious language features, as found in a 2013 study by researchers from Royal Holloway University of London and Princeton. Dr. John Bryden from the School of Biological Sciences at Royal Holloway states that it is possible to anticipate which community somebody is likely to belong to with up to 80 percent accuracy. This research shows that people tend to join communities built around shared interests and hobbies. To study this, publicly available Twitter messages were recorded, capturing conversations between two or more participants. As a result, each community can be characterised by its most-used words. This approach can enrich community detection based on word analysis, making it possible to classify people within social networks automatically.
The methods of identification of tribes relied heavily onalgorithmsand techniques fromstatistical physics,computational biologyandnetwork science.[12][13][14][15] A different approach is taken by Tribefinder.[15]The system is able to identify tribal affiliations ofTwitterusers usingdeep learningandmachine learning.[15]The system establishes to which tribes individuals belong through the analysis of their tweets and the comparison of theirvocabulary. These tribal vocabularies are previously generated based on the vocabulary of tribal influencers and leaders using keywords expressing concepts, ideas and beliefs. The final step to make the system learn on how to associate random individuals with specific tribes consists of the analysis of the language these influential tribal leaders use through deep learning. In so doing, classifiers are created using embedding andLSTM(long short-term memory) models. Specifically, these classifiers work by collecting the Twitter feeds of all the users from the tribes that Tribefinder is training on. On these,embeddingis applied to map words into vectors, which are then used as input for the following LSTM models. Tribefinder analyzes the individual's word usage in their tweets and then assigns the corresponding alternative realities, lifestyle, and recreation tribal affiliation based on the similarities with the specific tribal vocabularies. The research had four main stages on which it focused: background, results, conclusions and methods.[12][16] Thelanguageis a system ofcommunicationconsisting ofsounds, words, andgrammar, or the system ofcommunicationused by people in a particularcountryor type of work.[17]Language is perhaps the most important characteristic that distinguishes human beings from other animals.[18]In addition, it has a wide range of social implications that can be associated withsocial or cultural groups. People usually group in communities with the same interests. This will result in a variation of the words they use because of the differentiation of terms from each domain. Therefore, thehypothesisof this study would be that this variation should closely match the community structure of thenetwork. To test this theory, around 250,000 users from thesocial networkingandmicrobloggingsite Twitter were monitored in order to analyse whether the groups identified had the same language features or not. As Twitter uses unstructured data and users can send messages to any other users, the study had to be based on complex algorithms. These algorithms had to determine the word frequency inside messages between people and make a link to the groups they usually visited.[12][16] The problem of detecting the community features is one of the main issues in the study of networking systems.Social networksnaturally tend to divide themselves into communities ormodules.[19]However, some world networks are too big so they must be simplified before information can be extracted.[20]As a result, an effective way of dealing with this drawback for smaller communities is by usingmodularityalgorithmsin order to partition users into even smallergroups.[19]For larger ones, a more efficient algorithm called 'map equation' decomposes anetworkintomodulesby optimally compressing a description of information flows on the network.[20]Each community was therefore characterised according to the words they used the most, based on a ranking algorithm. 
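The two methodological steps described above, partitioning the conversation network into communities and then ranking the words each community uses disproportionately often, can be sketched in a few lines of Python. The snippet below is only an illustration, not the researchers' actual pipeline: it uses networkx's greedy modularity routine as a stand-in for the modularity and map-equation algorithms mentioned, and the toy graph, toy tweets and simple usage-ratio score are invented for the example.

```python
# Minimal sketch (illustrative only): (1) split a social graph into communities
# with a modularity-based algorithm, (2) characterise each community by the
# words its members use disproportionately often. Graph, tweets and scoring
# are invented assumptions, not data from the study.

from collections import Counter

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical mention/reply network: nodes are users, edges are conversations.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("bob", "carol"), ("alice", "carol"),   # one cluster
    ("dave", "erin"), ("erin", "frank"), ("dave", "frank"),   # another cluster
    ("carol", "dave"),                                        # weak bridge
])

# Hypothetical tweets per user.
tweets = {
    "alice": "pleasee follow me pleasee", "bob": "pleasee retweet this",
    "carol": "follow me pleasee",         "dave": "chillin and poppin tonight",
    "erin": "poppin by later chillin",    "frank": "just chillin",
}

# Step 1: modularity-based community detection.
communities = greedy_modularity_communities(G)

# Count, for a set of users, how many of them use each word at least once.
def users_using(users):
    counts = Counter()
    for u in users:
        counts.update(set(tweets[u].split()))
    return counts

global_counts = users_using(tweets)

# Step 2: rank words by how much more common they are inside a community
# than across all users (fraction of community members using the word
# divided by the fraction of all users using it).
for i, community in enumerate(communities):
    local_counts = users_using(community)
    scores = {
        w: (c / len(community)) / (global_counts[w] / len(tweets))
        for w, c in local_counts.items()
    }
    top = sorted(scores, key=scores.get, reverse=True)[:3]
    print(f"community {i}: members={sorted(community)} characteristic words={top}")
```

On Twitter-scale data the same idea would call for the map equation (as used for the larger networks in the study) and a more careful word-significance test, but the structure of the computation is the same.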
To determine the significance of word usage differences, word endings and word lengths were also measured, and these confirmed that the pattern found was the correct one. These studies also helped in predicting the community membership of users by comparing their own word frequencies with community word usage, making it possible to forecast which community a given user is likely to join based on the words they use.[12] The aim of this research was to study the link between community structure in a social network and language use within the community. The striking pattern that was found suggests that people from different clans tend to use different words based on their own interests and hobbies.[13] Even though this approach did not manage to cover everyone on Twitter, it has several advantages over ordinary surveys, which cover a smaller range of groups: it is systematic, it is non-intrusive, and it easily produces large volumes of rich data. Moreover, other cultural characteristics could be uncovered by extending this study, for example, whether individuals who belong to multiple communities use different word sets in each of them.[12][16] A process called snowball sampling helped to form the sample network.[16] Each user's tweets and messages were recorded, and any new users referenced were added to a list from which they were picked to be sampled. Messages that were copies were ignored. To find the words that characterise each clan, the fraction of people within a community using a certain word was compared with the fraction of people using that word globally. The difference between communities was also measured by comparing relative word usage frequencies.[12] Words, and the way we spell them, are continuously changing as we find new ways to communicate. Although traditional dictionaries do not take these changes into account, online dictionaries have adopted many of them.[21] An interesting finding of the research above is that communities tend to use their own distinctive spellings for words. According to Professor Vincent Jansen of Royal Holloway, online communities spell words in different ways, just as people have different regional accents.[14] For example, Justin Bieber fans tend to end words in "ee", as in "pleasee", while school teachers tend to use long words. Moreover, the largest group found in the study was composed of African Americans who used the words "nigga", "poppin", and "chillin". Members of this community also tend to shorten word endings, replacing "ing" with "in" and "er" with "a".[13] Each tribe has an online platform (such as Flickr or Tumblr), called a campfire, around which its members gather. These campfires tend to enable one or more of three tribal activities.[1] However, some brands are building their own tribes around platforms outside of these. Cooperation is the action of working together to the same end.[22] Cooperation developed naturally over time, as it helped companies to streamline their research costs and to respond better to users' requirements. As a result, organisations today look for flexible structures that can adapt easily to a rapidly changing environment. Groupware systems cater well to these needs.
Informal communication predominates, and specialists in certain domains exchange their experience with other people within the groupware environment. Collaboration and cooperation are available through instant messages; people can discuss, chat and swap ideas.[citation needed] Moreover, people can work together while located remotely from each other.[23] Groupware can be split into three categories: communication, collaboration and coordination, depending on the level of cooperation and technology involved in the process.[24] One of the largest and best-known examples of cooperation software is Wikipedia. Wikipedia is collaborative software because anyone can edit it. Any user can edit articles, view past revisions and discuss the current state of each article through a forum. Because anyone can change it and find information very quickly, it has become one of the 10 most accessed sites on the Internet.[25] Wikipedia has several advantages over other encyclopedias, but also some drawbacks.[25][26] Communication is the act or an instance of communicating; the imparting or exchange of information, ideas, or feelings.[27] Communication has changed drastically over time, and social networks have changed the way people communicate.[28] Even though people can interact with each other 24/7, there is a new wave of barriers and threats. In the workplace, electronic communication has overtaken face-to-face and voice-to-voice communication by far. This major shift has favoured Generation Y, who prefer instant messaging to talking directly to someone. It may seem an ironic twist, but social media has the real potential to make us less social.[29] However, some studies suggest that people are becoming more social, although the style in which they interact with each other has changed a great deal. One of the major drawbacks of social networks is privacy, as people tend to trust others more quickly and send more open messages about themselves. As a result, personal information can easily be exposed to others.[30] Twitter and Facebook are two of the biggest social networks in the world. Facebook is currently the largest social network in the world, with more than 1 billion people using the site, meaning that approximately one in seven people on Earth use Facebook.[31] Facebook users share their stories, images and videos in order to celebrate and commemorate events together. They can also play social games and like other Facebook pages.[4] There is also a section called 'News Feed' where users can see social information from their friends or from the pages that they have liked or shared. Each user has their own profile page, called a 'wall', where they can post all the above-mentioned materials (their friends can do this as well).[32] The biggest advantage of Facebook is that you can make new friends, as well as find old acquaintances and resume socialising with them. One of the most useful features of Facebook is the existence of groups. Users with the same interests can create a new group or take part in existing ones to debate information and exchange ideas. However, some groups are created to declare an affiliation, such as an obsession with a particular subject.[33] Twitter is another social network that allows users to send and read short messages called 'tweets'.
Even though messages can contain only 280 characters, this is the perfect length for sending status updates to followers.[34]The main advantage of Twitter is that people can gain followers quickly and share ideas andlinksvery fast. There arenetworksof influential people who can be connected via Twitter.[35]On Twitter, tribes manifest themselves as followers of either a person, a company or an institution. As a result, it can be used as amarketingtool to make someone'sproductvisible, on condition that a big tribe of followers is created. In order to do this, the right community must be built, as finding the right people can be a challenge.[36]There are some steps that users could take into account in order to make connections and therefore make people follow them: search using Twitter search, follow the followers of other users, look at Twitter Lists, use #Hashtagsand findthird-partyprograms.[35] AsSeth Godinstates, "The Internet eliminated geography".[1]People join tribes or clans because they find and share the same ideas and interests with other people.[11]The main disadvantage of old tribes is that they could not influence group behaviour. On the other hand, new tribes are self-sustaining and can survive without a leader, they are not necessarily dialogue based and they are long lasting. As it has been demonstrated within this article, tribes have influenced the way languages, organisations and cultures work.[1]They have redefined old concepts with the help of social media and have changed the way people will interact in the future.[4]
https://en.wikipedia.org/wiki/Tribe_(internet)
Web science is an emerging interdisciplinary field concerned with the study of large-scale socio-technical systems, particularly the World Wide Web.[1][2] It considers the relationship between people and technology, the ways that society and technology co-constitute one another, and the impact of this co-constitution on broader society. Web Science combines research from disciplines as diverse as sociology, computer science, economics, and mathematics.[3] An earlier definition was given by American computer scientist Ben Shneiderman: "Web Science" is processing the information available on the web in terms similar to those applied to the natural environment.[4] The Web Science Institute describes Web Science as focusing "the analytical power of researchers from disciplines as diverse as mathematics, sociology, economics, psychology, law and computer science to understand and explain the Web. It is necessarily interdisciplinary – as much about social and organizational behaviour as about the underpinning technology."[5] A central pillar of Web science development is artificial intelligence, or "AI". The artificial intelligence currently in development is human-centered, with goals of furthering professional development courses as well as influencing public policy. Artificial intelligence developers are focused on the most impactful uses of this technology, while also hoping to expedite human growth and development.[5] Philip Tetlow, an IBM-based scientist influential in the emergence of web science as an independent discipline,[6] argued for the concept of web life,[7] which considers the Web not as a connected network of computers, as in common interpretations of the Internet, but rather as a sociotechnical machine[8] capable of fusing individuals and organisations into larger coordinated groups. This view holds that, unlike the technologies that came before it, the Web's phenomenal growth and complexity are starting to outstrip our capability to control it directly, making it impossible to grasp in its entirety. Tetlow made use of Fritjof Capra's concept of the 'web of life' as a metaphor.[9][10] There are numerous academic research groups engaged in Web Science research,[11][12][13][14][15][16][17][18] many of which are members of WSTNet, the Web Science Trust Network of research labs. Health Web Science emerged as a sub-discipline of Web Science that studies the Web's impact on human health outcomes and how the Web can be further used to improve those outcomes.[19][20][21][22] These groups focus on the developmental possibilities, provided through Web Science, in areas such as health care and social welfare. Web science has been widely discussed as a way for the internet to have a real-world impact in the field of medicine, an approach currently termed Medicine 2.0. The World Wide Web acts as a medium for the spread and circulation of knowledge, and these various research groups consider themselves responsible for maintaining verifiable and testable knowledge. Using their knowledge of the healthcare system as well as web science, researchers focus on formatting and structuring their knowledge in a way that is easily accessible throughout the internet. The World Wide Web is evolving quickly, meaning that the information researchers provide, and the way it is formatted, must evolve as well.
Recognizing the overlap between these two aspects, the spread of knowledge and the development of the internet, allows knowledge to be presented in a manner that evolves as quickly as the internet and everyday medical research. The accessibility of the internet and the rapid development of knowledge must be accompanied by efficient formatting to allow the successful dissemination of information, as described by these various research groups.[21]
https://en.wikipedia.org/wiki/Web_science
Participatory rural appraisal(PRA) is an approach used bynon-governmental organizations(NGOs) and other agencies involved ininternational development. The approach aims to incorporate the knowledge and opinions of rural people in the planning and management of development projects and programmes.[1][2][3] The philosophical roots of participatory rural appraisal techniques can be traced to activist adult education methods such as those ofPaulo Freireand the study clubs of theAntigonish Movement.[4]In this view, an actively involved and empowered local population is essential to successful rural community development.Robert Chambers, a key exponent of PRA, argued that the approach owes much to "the Freirian theme, that poor and exploited people can and should be enabled to analyze their own reality."[5] By the early 1980s, there was growing dissatisfaction among development experts with both the reductionism of formal surveys, and the biases of typical field visits. In 1983, Robert Chambers, a Fellow at theInstitute of Development Studies(UK), used the termrapid rural appraisal(RRA) to describe techniques that could bring about a "reversal of learning", to learn from rural people directly.[6][7]Two years later, the first international conference to share experiences relating to RRA was held in Thailand.[8]This was followed by a rapid acceptance of usage of methods that involved rural people in examining their own problems, setting their own goals, and monitoring their own achievements. By the mid-1990s, the term RRA had been replaced by a number of other terms includingparticipatory rural appraisal(PRA) andparticipatory learning and action(PLA).[9] Robert Chambers acknowledged that the significant breakthroughs and innovations that informed the methodology came from community development practitioners in Africa, India and elsewhere. Chambers helped PRA gain acceptance among practitioners.[10]Chambers explained the function of participatory research in PRA as follows: The central thrusts of the [new] paradigm … are decentralization and empowerment. Decentralization means that resources and discretion are devolved, turning back the inward and upward flows of resources and people. Empowerment means that people, especially poorer people, are enabled to take more control over their lives, and secure a better livelihood with ownership and control of productive assets as one key element. Decentralization and empowerment enable local people to exploit the diverse complexities of their own conditions, and to adapt to rapid change.[11] Over the years techniques and tools have been described in a variety of books and newsletters, or taught at training courses.[1][12][13]However, the field has been criticized for lacking a systematic evidence-based methodology.[14] The basic techniques used include:[1][2][3][12][13] To ensure that people are not excluded from participation, these techniques avoidwritingwherever possible, relying instead on the tools oforal communicationandvisual communicationsuch as pictures, symbols, physical objects and group memory.[15]Efforts are made in many projects, however, to build a bridge to formalliteracy; for example by teaching people how to sign their names or recognize their signatures. 
Often developing communities are reluctant to permit invasive audio-visual recording.[citation needed] Since the early 21st century, some practitioners have replaced PRA with the standardized model of community-based participatory research (CBPR) or with participatory action research (PAR).[citation needed] Social survey techniques have also changed during this period, including greater use of information technology such as fuzzy cognitive maps, e-participation, telepresence, social network analysis, topic models, geographic information systems (GIS), and interactive multimedia.[citation needed]
https://en.wikipedia.org/wiki/Participatory_rural_appraisal
High-performance teams (HPTs) are a concept within organization development referring to teams, organizations, or virtual groups that are highly focused on their goals and that achieve superior business results. High-performance teams outperform all other similar teams, and they outperform expectations given their composition.[1] A high-performance team can be defined as a group of people with specific roles and complementary talents and skills, aligned with and committed to a common purpose, who consistently show high levels of collaboration and innovation, produce superior results, and extinguish radical or extreme opinions that could be damaging. The high-performance team is regarded as tight-knit and focused on its goal, with supportive processes that enable any team member to surmount barriers to achieving the team's goals.[2] Within the high-performance team, people are highly skilled and are able to interchange their roles.[citation needed] Also, leadership within the team is not vested in a single individual. Instead, the leadership role is taken up by various team members according to the need at that moment in time. High-performance teams have robust methods of resolving conflict efficiently, so that conflict does not become a roadblock to achieving the team's goals. There is a sense of clear focus and intense energy within a high-performance team. Collectively, the team has its own consciousness, indicating shared norms and values within the team. The team feels a strong sense of accountability for achieving its goals. Team members display high levels of mutual trust towards each other.[2] To support team effectiveness within high-performance teams, an understanding of individual working styles is important. This can be done by applying Belbin High Performing Teams, the DISC assessment, the Myers-Briggs Type Indicator and the Herrmann Brain Dominance Instrument to understand the behavior, personalities and thinking styles of team members. Using Tuckman's stages of group development as a basis, an HPT moves through the stages of forming, storming, norming and performing, as with other teams. However, the HPT uses the storming and norming phases effectively to define who they are, what their overall goal is, and how to interact together and resolve conflicts. Therefore, when the HPT reaches the performing phase, it has highly effective behaviours that allow it to overachieve in comparison to regular teams. Later, leadership strategies (coordinating, coaching, empowering, and supporting) were connected to each stage to help facilitate teams to high performance.[3] Different characteristics have been used to describe high-performance teams. Despite varying approaches to describing high-performance teams, there is a set of common characteristics that are recognised to lead to success.[4] There are many types of teams in organizations as well. The most traditional type of team is the manager-led team. Within this team, a manager fills the role of team leader and is responsible for defining the team's goals, methods, and functions. The remaining team members are responsible for carrying out their assigned work under the monitoring of the manager. Self-managing or self-regulating teams operate when the “manager” determines the overall purpose or goal for the team and the remainder of the team is at liberty to manage the methods needed to achieve the intended goal.
Self-directing or self-designing teams determine their own team goals and the different methods needed to achieve the end goal. This offers opportunities for innovation and enhances goal commitment and motivation. Finally, self-governing teams are designed with high control and responsibility to execute a task or manage processes. A board of directors is a prime example of a self-governing team.[5] Given the importance of team-based work in today's economy, much focus has been placed in recent years on using evidence-based organizational research to pinpoint more accurately the defining attributes of high-performance teams. The team at MIT's Human Dynamics Laboratory investigated explicitly observable communication patterns and found energy, engagement, and exploration to be surprisingly powerful predictive indicators of a team's ability to perform.[6] Other researchers focus on what supports group intelligence and allows a team to be smarter than its smartest individuals. A group at MIT's Center for Collective Intelligence, for example, found that teams with more women, and teams where members share "airtime" equally, showed higher group intelligence scores.[7] The Fundamental Interpersonal Relations Orientation – Behavior (FIRO-B) questionnaire is a resource that can help individuals identify their personal orientation, in other words, the behavioural tendencies a person shows in different environments and with different people. The theory of personal orientation was initially proposed by Schutz (1958), who claimed that personal orientation consists of three fundamental human needs: the need for inclusion, the need for control, and the need for affection. The FIRO-B test helps an individual identify their interpersonal compatibilities with these needs, which can be directly correlated with their performance in a high-performance team.[8] First described in detail by the Tavistock Institute, UK, in the 1950s, HPTs gained popular acceptance in the US by the 1980s, with adoption by organizations such as General Electric, Boeing, Digital Equipment Corporation (now HP), and others. In each of these cases, major change was created through the shifting of organizational culture, merging the business goals of the organization with the social needs of the individuals. Often in less than a year, HPTs achieved a quantum leap in business results in all key success dimensions, including customer, employee, shareholder and operational value-added dimensions.[9] Due to this initial success, many organizations attempted to copy HPTs. However, without understanding the underlying dynamics that created them, and without adequate time and resources to develop them, most of these attempts failed. With this failure, HPTs fell out of general favor by 1995, and the term high-performance began to be used in a promotional context, rather than a performance-based one.[9] Recently, some private-sector and government-sector organizations have placed new focus on HPTs, as new studies and understandings have identified the key processes and team dynamics necessary to create all-around quantum performance improvements.[10] With these new tools, organizations such as Kraft Foods, General Electric, Exelon, and the US government have focused new attention on high-performance teams. In Great Britain, high-performance workplaces are defined as those organizations where workers are actively communicated with and involved in the decisions that directly affect them.
By regulation of the UK Department of Trade and Industry, such workplaces were to be required in most organizations by 2008.[11]
https://en.wikipedia.org/wiki/High-performance_teams
Human resources(HR) is the set of people who make up theworkforceof anorganization,business sector, industry, oreconomy.[1][2]A narrower concept ishuman capital, the knowledge and skills which the individuals command.[3]Similar terms includemanpower,labor,labor-power, orpersonnel. In vernacular usage, "human resources" or "human resource" can refer to thehuman resources department(HR department)[4]of an organization, which performshuman resource management, overseeing various aspects ofemployment, such as compliance withlabor lawand employment standards,interviewing and selection, performance management, administration ofemployee benefits, organizing of employee files with the required documents for future reference, and some aspects ofrecruitment(also known astalent acquisition),talent management, staff wellbeing, and employeeoffboarding.[5]They serve as the link between an organization's management and its employees. The duties include planning, recruitment and selection process, posting job ads, evaluating the performance of employees, organizingresumesand job applications, scheduling interviews and assisting in the process and ensuringbackground checks. Another job ispayrolland benefits administration which deals with ensuring vacation and sick time are accounted for, reviewing payroll, and participating in benefits tasks, like claim resolutions, reconciling benefits statements, and approvinginvoicesfor payment.[6]Human Resources also coordinates employee relations activities and programs including, but not limited to, employee counseling.[7]The last job is regular maintenance, this job makes sure that the current HR files anddatabasesare up to date, maintainingemployee benefitsand employment status and performing payroll/benefit-relatedreconciliations.[6] A human resources manager can have various functions in acompany, including to:[8] Human resource managementused to be referred to as "personnel administration".[9][10]In the 1920s, personnel administration focused mostly on the aspects of hiring, evaluating, andcompensatingemployees.[11][12]However, they did not focus on any employment relationships at an organizational performance level or on the systematic relationships in any parties. This led to a lacked unifying paradigm in the field during this period.[13] According to anHR Magazinearticle, the first personnel management department started at theNational Cash Register Co.in 1900. The owner,John Henry Patterson, organized a personnel department to deal with grievances, discharges and safety, and information for supervisors on new laws and practices after several strikes and employee lockouts. This action was followed by other companies; for example, Ford had high turnover ratios of 380 percent in 1913, but just one year later, the line workers of the company had doubled their daily salaries from $2.50 to $5, even though $2.50 was a fair wage at that time.[14]This example clearly shows the importance of effective management which leads to a greater outcome of employee satisfaction as well as encouraging employees to work together in order to achieve better business objectives. During the 1970s, American businesses began experiencing challenges due to the substantial increase in competitive pressures. Companies experienced globalization, deregulation, and rapid technological change which caused the major companies to enhance their strategic planning – a process of predicting future changes in a particular environment and focus on ways to promoteorganizational effectiveness. 
This resulted in developing more jobs and opportunities for people to show their skills which were directed to effectively applying employees toward the fulfillment of individual, group, and organizational goals. Many years later the major/minor of human resource management was created at universities and colleges also known asbusiness administration. It consists of all the activities that companies used to ensure the more effective use of employees.[15] Now, human resources focus on the people side of management.[15]There are two real definitions of HRM (Human Resource Management); one is that it is the process of managing people in organizations in a structured and thorough manner.[15]This means that it covers the hiring, firing, pay and perks, and performance management.[15]This first definition is the modern and traditional version more like what a personnel manager would have done back in the 1920s.[15]The second definition is that HRM circles the ideas of management of people in organizations from amacromanagementperspective like customers and competitors in a marketplace.[15]This involves the focus on making the "employment relationship" fulfilling for both management and employees.[15] Some research showed that employees can perform at a much higher rate of productivity when their supervisors and managers paid more attention to them.[14]The Father of Human relations,Elton Mayo, was the first person to reinforce the importance of employee communications,cooperation, and involvement.[14]His studies concluded that sometimes the human factors are more important than physical factors, such as quality of lighting and physical workplace conditions. As a result, individuals often place value more on how they feel.[14]For example, a rewarding system in Human resource management, applied effectively, can further encourage employees to achieve their best performance. Pioneering economistJohn R. Commonsmentioned "human resource" in his 1893 bookThe Distribution of Wealthbut did not elaborate.[16]The expression was used during the 1910s to 1930s to promote the idea that human beings are of worth (as in human dignity); by the early 1950s, it meant people as a means to an end (for employers).[17]Among scholars the first use of the phrase in that sense was in a 1958 report by economistE. Wight Bakke.[18] In regard to how individuals respond to the changes in alabor market, the following must be understood: New terminology includes people operations, employee experience, employee success, people@, and partner resources.[20] One major concern about considering people as assets or resources is that they will be commoditized, objectified, and abused. Critics of the term human resources would argue that human beings are not "commodities" or "resources", but are creative and social beings in a productive enterprise. The 2000 revision ofISO 9001, in contrast, requires identifying the processes, their sequence, and interaction, and to define and communicate responsibilities and authorities.[citation needed]In general, heavily unionized nations such asFranceandGermanyhave adopted and encouraged such approaches. Also, in 2001, theInternational Labour Organizationdecided to revisit and revise its 1975 Recommendation 150 on Human Resources Development, resulting in its "Labour is not a commodity" principle. 
One view of these trends is that a strong social consensus on political economy and a goodsocial welfare systemfacilitatelabor mobilityand tend to make the entire economy more productive, as labor can develop skills and experience in various ways, and move from one enterprise to another with little controversy or difficulty in adapting. Another important controversy regards labor mobility and the broader philosophical issue with the usage of the phrase "human resources".[21]Governments of developing nations often regard developed nations that encourage immigration or "guest workers" as appropriating human capital that is more rightfully part of the developing nation and required to further its economic growth. Over time, the United Nations have come to more generally support[22]the developing nations' point of view, and have requested significant offsetting "foreign aid" contributions so that a developing nation losing human capital does not lose the capacity to continue to train new people in trades, professions, and the arts.[22]Some businesses and companies are choosing to rename this department using other terms, such as "people operations" or "culture department," in order to erase this stigma.[23] Human resource companies play an important part in developing and making a company or organization at the beginning or making a success at the end, due to the labor provided by employees.[24]Human resources are intended to show how to have better employment relations in the workforce.[25]Also, to bring out the best work ethic of the employees and therefore making a move to a better working environment.[26]Moreover, green human resource development is suggested as a paradigm shift from traditional approaches of human resource companies to bring awareness of ways that expertise can be applied to green practices. By integrating the expertise, knowledge, and competencies of human resource development practitioners with industry practitioners, most industries have the potential to be transformed into a sector with ecofriendly and pro-environmental culture.[27] Human resources also deals with essential motivators in the workplace such aspayroll, benefits, team morale and workplace harassment.[5] Administration and operations used to be the two role areas of HR. The strategic planning component came into play as a result of companies recognizing the need to consider HR needs in goals and strategies. HR directors commonly sit on company executive teams because of the HR planning function. Numbers and types of employees and the evolution of compensation systems are among elements in the planning role.[28]Various factors affecting Human Resource planning include organizational structure, growth, business location, demographic changes, environmental uncertainties, expansion.[29]
https://en.wikipedia.org/wiki/Human_resources
Marketing researchis the systematic gathering, recording, and analysis ofqualitativeandquantitativedata about issues relating tomarketingproducts and services. The goal is to identify and assess how changing elements of themarketing miximpactscustomer behavior. This involves employing adata-driven marketingapproach to specify the data required to address these issues, then designing the method for collecting information and implementing the data collection process. After analyzing the collected data, these results and findings, including their implications, are forwarded to those empowered to act on them.[1] Market research, marketing research, andmarketingare a sequence ofbusiness activities;[2][3]sometimes these are handled informally.[4] The field ofmarketing researchis much older than that ofmarket research.[5]Although both involve consumers,Marketingresearch is concerned specifically with marketing processes, such as advertising effectiveness and salesforce effectiveness, whilemarketresearch is concerned specifically with markets and distribution.[6]Two explanations given for confusingmarket researchwithmarketing researchare the similarity of the terms and also that market research is a subset of marketing research.[7][8][9]Further confusion exists because ofmajor companieswith expertise and practices in both areas.[10] Marketing research is often partitioned into two sets of categorical pairs, either by target market: Or, alternatively, by methodological approach: Consumer marketing research is a form of appliedsociologythat concentrates on understanding the preferences, attitudes, and behaviors ofconsumersin amarket-based economy, and it aims to understand the effects and comparative success ofmarketing campaigns.[11] Thus, marketing research may also be described as the systematic and objective identification, collection, analysis, and dissemination of information, for the purpose of assisting management indecision-makingrelated to the identification and solution of problems and opportunities in marketing.[12]The goal of market research is to obtain and provide management with viable information about the market (e.g., competitors), consumers, the product/service itself etc.[13] The purpose of marketing research (MR) is to provide management with relevant, accurate, reliable, valid, and up to datemarket information. Competitive marketing environment and the ever-increasing costs attributed to poor decision making require that marketing research provide sound information. Sound decisions are not based on gut feeling, intuition, or even pure judgment.[14] Managersmake numerous strategic and tactical decisions in the process of identifying and satisfying customer needs. They make decisions about potential opportunities, target market selection, MARKETING segmentation, planning and implementing marketing programs, marketing performance, and control. These decisions are complicated by interactions between the controllable marketing variables of product,pricing, promotion, and distribution. Further complications are added by uncontrollable environmental factors such as general economic conditions, technology,public policiesand laws, political environment, competition, and social and cultural changes. Another factor in this mix is the complexity ofconsumers. Marketing research helps the marketing manager link the marketing variables with the environment and the consumers. 
It helps remove some of the uncertainty by providing relevant information about the marketing variables, environment, and consumers. In the absence of relevant information, consumers' response to marketing programs cannot be predicted reliably or accurately. Ongoingmarketingresearch programs provide information on controllable and non-controllable factors and consumers; this information enhances the effectiveness of decisions made by marketing managers.[15] Traditionally, marketing researchers were responsible for providing the relevant information and marketing decisions were made by the managers. However, the roles are changing and marketing researchers are becoming more involved in decision making, whereas marketing managers are becoming more involved with research. The role of marketing research in managerial decision making is explained further using the framework of theDECIDEmodel.[16] Evidence for commercial research being gathered informally dates to the Medieval period. In 1380, the German textile manufacturer,Johann Fugger, travelled from Augsburg to Graben in order to gather information on the international textile industry. He exchanged detailed letters on trade conditions in relevant areas. Although, this type of information would have been termed "commercial intelligence" at the time, it created a precedent for the systemic collection of marketing information.[17] During the European age of discovery, industrial houses began to import exotic, luxury goods - calico cloth from India, porcelain, silk and tea from China, spices from India and South-East Asia and tobacco, sugar, rum and coffee from the New World.[18]International traders began to demand information that could be used for marketing decisions. During this period,Daniel Defoe, a London merchant, published information on trade and economic resources of England and Scotland. Defoe was a prolific publisher and among his many publications are titles devoted to the state of trade including;Trade of Britain Stated,(1707);Trade of Scotland with France,(1713) andThe Trade to India Critically and Calmly Considered,(1720) - all of which provided merchants and traders with important information on which to base business decisions.[19] Until the late 18th-century, European and North-American economies were characterised by local production and consumption. Produce, household goods and tools were produced by local artisans or farmers with exchange taking place in local markets or fairs. Under these conditions, the need for marketing information was minimal. However, the rise of mass-production following the industrial revolution, combined with improved transportation systems of the early 19th-century, led to the creation of national markets and ultimately, stimulated the need for more detailed information about customers, competitors, distribution systems, and market communications.[20] By the 19th-century, manufacturers were exploring ways to understand the different market needs and behaviours of groups of consumers. A study of the German book trade found examples of bothproduct differentiationandmarket segmentationas early as the 1820s.[21]From the 1880s, German toy manufacturers were producing models oftin toysfor specific geographic markets; London omnibuses and ambulances destined for the British market; French postal delivery vans for Continental Europe and American locomotives intended for sale in America.[22]Such activities suggest that sufficient market information was collected to support detailed market segmentation. 
In 1895, American advertising agency, N. H. Ayer & Son, used telegraph to contact publishers and state officials throughout the country about grain production, in an effort to construct an advertising schedule for client, Nichols-Shephard company, an agricultural machinery company in what many scholars believe is the first application of marketing research to solve a marketing/ advertising problem)[23] Between 1902 and 1910, George B Waldron, working at Mahin's Advertising Agency in the United States used tax registers, city directories and census data to show advertisers the proportion of educated vs illiterate consumers and the earning capacity of different occupations in a very early example of simple market segmentation.[24][25]In 1911Charles Coolidge Parlinwas appointed as the Manager of the Commercial Research Division of the Advertising Department of the Curtis Publishing Company, thereby establishing the first in-house market research department - an event that has been described as marking the beginnings of organised marketing research.[26]His aim was to turn market research into a science. Parlin published a number of studies of various product-markets including agriculture (1911); consumer goods (c.1911); department store lines (1912) a five-volume study of automobiles (1914).[27] In 1924 Paul Cherington improved on primitive forms of demographic market segmentation when he developed the 'ABCD' household typology; the first socio-demographic segmentation tool.[24][28]By the 1930s, market researchers such asErnest Dichterrecognised that demographics alone were insufficient to explain different marketing behaviours and began exploring the use of lifestyles, attitudes, values, beliefs and culture to segment markets.[29] In the first three decades of the 20th century, advertising agencies and marketing departments developed the basic techniques used in quantitative and qualitative research – survey methods, questionnaires, gallup polls etc. As early as 1901, Walter B Scott was undertaking experimental research for the Agate Club of Chicago.[30]In 1910, George B Waldron was carrying out qualitative research for Mahins Advertising Agency.[30]In 1919, the first book on commercial research was published,Commercial Research: An Outline of Working Principlesby Professor C.S. Duncan of the University of Chicago.[31] Adequate knowledge of consumer preferences was a key to survival in the face of increasingly competitive markets.[32]By the 1920s, advertising agencies, such asJ Walter Thompson(JWT), were conducting research on thehowandwhyconsumers used brands, so that they could recommend appropriate advertising copy to manufacturers.[31] The advent of commercial radio in the 1920s, and television in the 1940s, led a number of market research companies to develop the means to measure audience size and audience composition. In 1923,Arthur Nielsenfounded market research company, A C Nielsen and over next decade pioneered the measurement of radio audiences. He subsequently applied his methods to the measurement of television audiences. Around the same time,Daniel Starchdeveloped measures for testing advertising copy effectiveness in print media (newspapers and magazines), and these subsequently became known as Starch scores (and are still used today).[citation needed] During, the 1930s and 1940s, many of the data collection methods, probability sampling methods, survey methods, questionnaire design and key metrics were developed. 
By the 1930s, Ernest Dichter was pioneering the focus group method of qualitative research. For this, he is often described as the 'father of market research.'[33] Dichter applied his methods to campaigns for major brands including Chrysler and Exxon/Esso, where he used methods from psychology and cultural anthropology to gain consumer insights. These methods eventually led to the development of motivational research.[34] Marketing historians refer to this period as the "Foundation Age" of market research. By the 1930s, the first courses on marketing research were being taught in universities and colleges.[35] The textbook Market Research and Analysis by Lyndon O. Brown (1937) became one of the most popular textbooks of this period.[36] As the number of trained research professionals proliferated throughout the second half of the 20th century, the techniques and methods used in marketing research became increasingly sophisticated. Marketers, such as Paul Green, were instrumental in developing techniques such as conjoint analysis and multidimensional scaling, both of which are used in positioning maps, market segmentation, choice analysis and other marketing applications.[37] Web analytics were born out of the need to track the behavior of site visitors and, as the popularity of e-commerce and web advertising grew, businesses demanded details on the information created by new practices in web data collection, such as click-through and exit rates. As the Internet boomed, websites became larger and more complex, and the possibility of two-way communication between businesses and their consumers became a reality. Provided with the capacity to interact with online customers, researchers were able to collect large amounts of data that were previously unavailable, further propelling the marketing research industry.[citation needed] In the new millennium, as the Internet continued to develop and websites became more interactive, data collection and analysis became more commonplace for those marketing research firms whose clients had a web presence. With the explosive growth of the online marketplace came new competition for companies; businesses were no longer merely competing with the shop down the road, as competition now came from a global force. Retail outlets were appearing online, and the previous need for bricks-and-mortar stores was diminishing at a greater pace than online competition was growing. With so many online channels for consumers to make purchases, companies needed newer and more compelling methods, in combination with messages that resonated more effectively, to capture the attention of the average consumer.[citation needed] Having access to web data did not automatically provide companies with the rationale behind the behavior of users visiting their sites, which prompted the marketing research industry to develop new and better ways of tracking, collecting and interpreting information. This led to the development of various tools like online focus groups and pop-up or website intercept surveys. These types of services allowed companies to dig deeper into the motivations of consumers, augmenting their insights and utilizing this data to drive market share.[citation needed] As information around the world became more accessible, increased competition led companies to demand more of market researchers. It was no longer sufficient to follow trends in web behavior or track sales data; companies now needed access to consumer behavior throughout the entire purchase process.
This meant the Marketing Research Industry, again, needed to adapt to the rapidly changing needs of the marketplace, and to the demands of companies looking for a competitive edge.[citation needed] Today, marketing research has adapted to innovations in technology and the corresponding ease with which information is available. B2B and B2C companies are working hard to stay competitive and they now demand both quantitative (“What”) and qualitative (“Why?”) marketing research in order to better understand their target audience and the motivations behind customer behaviors.[38] This demand is driving marketing researchers to develop new platforms for interactive, two-way communication between their firms and consumers. Mobile devices such as Smart Phones are the best example of an emerging platform that enables businesses to connect with their customers throughout the entire buying process.[citation needed] As personal mobile devices become more capable and widespread, the marketing research industry will look to further capitalize on this trend. Mobile devices present the perfect channel for research firms to retrieve immediate impressions from buyers and to provide their clients with a holistic view of the consumers within their target markets, and beyond. Now, more than ever, innovation is the key to success for Marketing Researchers. Marketing Research Clients are beginning to demand highly personalized and specifically focused products from the marketing research firms;big datais great for identifying general market segments, but is less capable of identifying key factors of niche markets, which now defines the competitive edge companies are looking for in this mobile-digital age.[citation needed] First, marketingresearch is systematic. Thus systematic planning is required at all the stages of the marketing research process. The procedures followed at each stage are methodologically sound, well documented, and, as much as possible, planned in advance. Marketing research uses the scientific method in that data are collected and analyzed to test prior notions or hypotheses. Experts in marketing research have shown that studies featuring multiple and often competing hypotheses yield more meaningful results than those featuring only one dominant hypothesis.[39] Marketing research isobjective. It attempts to provide accurate information that reflects a true state of affairs. It should be conducted impartially. While research is always influenced by the researcher's research philosophy, it should be free from the personal or political biases of the researcher or themanagement. Research which is motivated by personal or political gain involves a breach of professional standards. Such research is deliberately biased so as to result in predetermined findings. The objective nature of marketing research underscores the importance of ethical considerations. Also, researchers should always be objective with regard to the selection of information to be featured in reference texts because such literature should offer a comprehensive view on marketing. Research has shown, however, that many marketing textbooks do not feature important principles in marketing research.[40] Other forms of business research include: Organizations engage in marketing research for two reasons: firstly, to identify and, secondly, to solve marketing problems. This distinction serves as a basis for classifying marketing research into problem identification research and problem solving research. 
Problem identification research is undertaken to help identify problems which are, perhaps, not apparent on the surface and yet exist or are likely to arise in the future like company image, market characteristics, sales analysis, short-range forecasting, long range forecasting, and business trends research. Research of this type provides information about the marketing environment and helps diagnose a problem. For example, the findings of problem solving research are used in making decisions which will solve specific marketing problems. TheStanford Research Institute, on the other hand, conducts an annual survey of consumers that is used to classify persons into homogeneous groups for segmentation purposes. TheNational Purchase Diarypanel (NPD) maintains the largest diary panel in the United States. Standardized services are research studies conducted for different client firms but in a standard way. For example, procedures for measuring advertising effectiveness have been standardized so that the results can be compared across studies and evaluative norms can be established. The Starch Readership Survey is the most widely used service for evaluating print advertisements; another well-known service is theGallupand Robinson Magazine Impact Studies. These services are also sold on a syndicated basis. Marketing research techniques come in many forms, including: All these forms of marketing research can be classified as either problem-identification research or as problem-solving research. There are two main sources of data — primary and secondary. Primary research is conducted from scratch. It is original and collected to solve the problem at hand. Secondary research already exists since it has been collected for other purposes. It is conducted on data published previously and usually by someone else. Secondary research costs far less than primary research but seldom comes in a form that meets the researcher's needs. A similar distinction exists between exploratory research and conclusive research. Exploratory research provides insights into and comprehension of an issue or situation. It should draw definitive conclusions only with extreme caution. Conclusive research draws conclusions: the results of the study can be generalized to the whole population. Exploratory research is conducted to explore a problem to get some basic idea about the solution at the preliminary stages of research. It may serve as the input to conclusive research. Exploratory research information is collected by focus group interviews, reviewing literature or books, discussing with experts, etc. This is unstructured and qualitative in nature. If a secondary source of data is unable to serve the purpose, a convenience sample of small size can be collected. Conclusive research is conducted to draw some conclusion about the problem. It is essentially, structured and quantitative research, and the output of this research is the input tomanagement information systems(MIS). Exploratory research is also conducted to simplify the findings of the conclusive or descriptive research, if the findings are very hard to interpret for the marketing managers. Methodologically, marketing research uses the following types of research designs:[41] Researchers often use more than one research design. They may start with secondary research to get background information, then conduct a focus group (qualitative research design) to explore the issues. 
Finally they might do a full nationwide survey (quantitative research design) in order to devise specific recommendations for the client. Business to business (B2B) research is inevitably more complicated than consumer research. Researchers need to know what type of multi-faceted approach will answer the objectives, since seldom is it possible to find the answers using only one method. Finding the right respondents is crucial in B2B research, since they are often busy, and may not want to participate. Respondents may also be biased on a particular topic. Encouraging them to “open up” is yet another skill required of the B2B researcher. Last but not least, most business research leads to strategic decisions and this means that the business researcher must have expertise in developing strategies that are strongly rooted in the research findings and acceptable to the client. There are four key factors that make B2B market research special and different from consumer markets: International Marketing Research follows the same path as domestic research, but there are a few more problems that may arise. Customers in international markets may have very different customs, cultures, and expectations from the same company. They also require tailoredtranslationapproaches based on the expertise or resources available in the local country.[43] In this case, Marketing Research relies more on primary data rather than secondary information. Gathering the primary data can be hindered by language, literacy and access to technology. Basic Cultural and Market intelligence information will be needed to maximize the research effectiveness. Some of the steps that would help overcoming barriers include: Market research techniques resemble those used in political polling and social science research.Meta-analysis(also called the Schmidt-Hunter technique) refers to a statistical method of combining data from multiple studies or from several types of studies. Conceptualization means the process of converting vague mental images into definable concepts. Operationalization is the process of converting concepts into specific observable behaviors that a researcher can measure. Precision refers to the exactness of any given measure.Reliabilityrefers to the likelihood that a given operationalized construct will yield the same results if re-measured.Validityrefers to the extent to which a measure provides data that captures the meaning of the operationalized construct as defined in the study. It asks, “Are we measuring what we intended to measure?” Some of the positions available in marketing research include vice president of marketing research, research director, assistant director of research, project manager, field work director, statistician/data processing specialist, senior analyst, analyst, junior analyst and operational supervisor.[44] The most common entry-level position in marketing research for people with bachelor's degrees (e.g.,BBA) is as operational supervisor. These people are responsible for supervising a well-defined set of operations, including field work, data editing, and coding, and may be involved in programming and data analysis. Another entry-level position for BBAs is assistant project manager. An assistantproject managerwill learn and assist in questionnaire design, review field instructions, and monitor timing and costs of studies. In the marketing research industry, however, there is a growing preference for people with master's degrees. 
Those with MBA or equivalent degrees are likely to be employed as project managers.[44] A small number of business schools also offer a more specializedMaster of Marketing Research(MMR) degree. An MMR typically prepares students for a wide range of research methodologies and focuses on learning both in the classroom and the field. The typical entry-level position in a business firm would be juniorresearch analyst(for BBAs) or research analyst (forMBAsor MMRs). The junior analyst and the research analyst learn about the particular industry and receive training from a senior staff member, usually the marketing research manager. The junior analyst position includes a training program to prepare individuals for the responsibilities of a research analyst, including coordinating with the marketing department and sales force to develop goals for product exposure. The research analyst responsibilities include checking all data for accuracy, comparing and contrasting new research with established norms, and analyzing primary and secondary data for the purpose of market forecasting. As these job titles indicate, people with a variety of backgrounds and skills are needed in marketing research. Technical specialists such as statisticians obviously need strong backgrounds in statistics and data analysis. Other positions, such as research director, call for managing the work of others and require more general skills. To prepare for a career in marketing research, students usually:
https://en.wikipedia.org/wiki/Marketing_research
Sociometryis a quantitative method for measuringsocial relationships. It was developed bypsychotherapistJacob L. MorenoandHelen Hall Jenningsin their studies of the relationship between social structures andpsychologicalwell-being, and used during Remedial Teaching. The term sociometry relates to itsLatinetymology,sociusmeaning companion, andmetrummeaning measure. Jacob Moreno defined sociometry as "the inquiry into the evolution and organization of groups and the position of individuals within them." He goes on to write "As the ...science of group organization, it attacks the problem not from the outer structure of the group, the group surface, but from the inner structure".[1]"Sociometric explorations reveal the hidden structures that give a group its form: the alliances, the subgroups, the hidden beliefs, the forbidden agendas, the ideological agreements, the "stars" of the show.[2]" Moreno developed sociometry as one of the newly developing social sciences. He states: "The chief methodological task of sociometry has been the revision of the experimental method so that it can be applied effectively to social phenomena." (Moreno, 2012:39)[3] The practice of the method had the focus on the outcomes established by the participants: "By making choices based on criteria, overt and energetic, Moreno hoped that individuals would be more spontaneous, and organisations and groups structures would become fresh, clear and lively." One of Moreno's innovations in sociometry was the development of thesociogram, agraphthat represents individuals as points/nodes and the relationships between them as lines/arcs.[4]Moreno, who wrote extensively of his thinking, applications and findings, also founded a journal entitledSociometry. Withinsociology, sociometry has two main branches: research sociometry, and applied sociometry. Research sociometry is action research with groups exploring the socio-emotional networks of relationships using specified criteria e.g. Who in this group do you want to sit beside you at work? Who in the group do you go to for advice on a work problem? Who in the group do you see providing satisfying leadership in the pending project? Sometimes called network explorations, research sociometry is concerned with relational patterns in small (individual and small group) and larger populations, such as organizations and neighborhoods. Applied sociometrists utilize a range of methods to assist people and groups review, expand and develop their existing psycho-social networks of relationships. Both fields of sociometry exist to produce through their application, greater spontaneity and creativity of both individuals and groups. InSociometry, Experimental Method and the Science of Society: An Approach to a New Political Orientation(1951), Moreno describes the depth to which a group needs to go for the method to be "sociometric". The term for him had a qualitative meaning and did not apply unless some group process criteria were met. One of these is that there is acknowledgment of the difference between process dynamics and the manifest content. To quote Moreno: "there is a deep discrepancy between the official and the secret behavior of members".[5]: 39Moreno advocates that before any "social program" can be proposed, the sociometrist has to "take into account the actual constitution of the group."[5]: 39 Other criteria include the rule of adequate motivation: "Every participant should feel about the experiment that it is in his (or her) own cause ... 
that it is an opportunity for him (or her) to become an active agent in matters concerning his (or her) life situation." and the Rule of "gradual" inclusion of all extraneous criteria.[5]: 116 Given that sociometry is concerned with group allegiances and cleavages, it is not surprising that sociometric methods have been used to study ethnic relationships and the way individuals identify with ethnic groups.[6] For instance, using sociometric research, Joan Criswell investigated white-black relationships in US classrooms,[7] Gabriel Weimann researched ethnic relationships in Israel,[8] and James Page has investigated intra-ethnic and inter-ethnic identification within the Pacific.[9] Other approaches have been developed in recent decades, such as social network analysis and sociomapping. Freeware as well as commercial software has been developed for analyzing groups and their structure, such as Gephi, Pajek, Keyhubs and InFlow. All these approaches share many of their basic principles with sociometry. Facebook is a social network service and website that is largely based on the sociometry of its users.
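To make the sociogram idea concrete, the sketch below is a small illustrative example (the names, the choice criterion, and the plain-dictionary encoding are assumptions, not drawn from Moreno's work). It builds a directed choice graph from answers to a single sociometric question and counts how many choices each member receives, which is how sociometric "stars", isolates, and mutual pairs are typically spotted.

# Minimal sociogram sketch: each person names the group members they would
# choose for the criterion "Who would you like to work with?"
# Illustrative toy data only; real sociometric studies use carefully designed
# criteria and consent procedures.
choices = {
    "Ana": ["Ben", "Caro"],
    "Ben": ["Ana"],
    "Caro": ["Ana", "Dev"],
    "Dev": ["Ana"],
    "Eli": ["Caro"],
}

# Count received choices (the in-degree of each node in the directed sociogram).
received = {person: 0 for person in choices}
for chooser, chosen in choices.items():
    for person in chosen:
        received[person] = received.get(person, 0) + 1

# Reciprocated arcs (mutual choices) often indicate strong pair bonds.
mutual = [
    (a, b)
    for a in choices
    for b in choices[a]
    if a < b and a in choices.get(b, [])
]

print("choices received:", received)   # high counts suggest sociometric "stars"
print("mutual pairs:", mutual)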
https://en.wikipedia.org/wiki/Sociometry
Team management is the ability of an individual or an organization to administer and coordinate a group of individuals to perform a task. Team management involves teamwork, communication, objective setting and performance appraisals. Moreover, team management is the capability to identify problems and resolve conflicts within a team. Teams are a popular approach to many business challenges. They can produce innovative solutions to complex problems.[1] There are various methods and leadership styles a team manager can adopt to increase personnel productivity and build an effective team.[2] In the workplace, teams come in many shapes and sizes; their members work together, depend on one another, communicate, and strive to accomplish a specific goal. Management teams are a type of team that performs duties such as managing and advising other employees and teams that work with them. Whereas work, parallel, and project teams hold the responsibility of direct accomplishment of a goal, management teams are responsible for providing general direction and assistance to those teams.[3] In any functional team, cohesion amongst team leaders and decision makers is vital. Cohesive leadership means that team leaders act together as a unit and make decisions as a team instead of each branching off into their own work and operating individually. It ensures that the team will be steered in one direction instead of multiple directions due to team leaders not being concise and consistent with their instructions. Cohesive leadership requires team leaders to have strong communication skills.[4] Lastly, motivation fosters a sense of purpose, bringing individuals towards a common goal. When team members are driven by a passion, it creates a cohesive environment. Cohesiveness promotes collaboration, support, and synergy, which in turn bring motivation and strength that reinforce the group's overall cohesion.[5] Effective communication is the cornerstone of successful team management. Clear goals and expectations create a collaborative environment, allowing team members to share ideas and feedback seamlessly. A team that communicates well is better prepared to overcome challenges and make informed decisions.[6] There must be an effective channel of communication (or organizational communication) from the top to the bottom of the chain of command and vice versa. An effective channel of communication will allow messages to be transferred accurately without delay to the intended recipient, which will speed up decision-making processes and the operations of the team. Furthermore, effective communication will increase the flexibility of an organization and cause it to be less susceptible to changes in the external environment, as a faster decision-making process will allow organizations a longer time period to adapt to the changes and execute contingency plans.[4] The use of social media at work positively influences three team processes: effective communication, knowledge sharing and coordination.[7] In a group setting, common goals act as a binding force. Aligning skills and efforts towards a shared objective provides a cohesive setting. Ensuring everyone is working towards a unified purpose enhances group efficiency, fosters teamwork, and contributes to a sense of camaraderie, ultimately leading to success.[8] When team members first come together, they will each bring different ideas; however, the key to a successful team is the alignment of its objectives.
It is essential that the team leader sets a common goal the entire team is willing to pursue. This way, all of the team members will put in effort to attain the goal. If there is no common goal, team members who disagree with the objective at hand will feel reluctant to give their full effort, leading to failure to achieve the goal. In other cases, team members might divert themselves to other tasks due to a lack of belief or interest in the goal.[9] Poorly defined roles are often the biggest obstacle to a successful team.[10] If team members are unclear about what their role is, their contributions will be minimal; it is therefore the team leader's duty to outline the roles and responsibilities of each individual within the team and ensure that they work together as an integral unit. In a successful team, a leader will first evaluate the team's mission to understand what is needed to accomplish the task. Then, they will identify the strengths and weaknesses of the team members and assign roles accordingly. Lastly, they must ensure that all team members know what each other's responsibilities are, to avoid confusion and to create an effective channel of communication.[11] Individuals in a team can take on different roles that have their own unique responsibilities. A task-oriented role occurs when the individual offers new ideas, coordinates activities, or tries to find new information to share with the team. A social-oriented role occurs when an individual encourages the members of the team to be united; they also encourage participation and communication. An individual role occurs when an individual blocks the team's activities; such individuals tend to call attention to themselves and avoid interaction with others. Another occurrence is role conflict, a situation in which an individual faces divergent role expectations, meaning they are pulled in various directions and hold different roles simultaneously. The "command and control" method as an approach to team management is based on the concept of military management. It was a commonly used system in the private sector during the 21st century.[12] In this method, the team leader instructs team members to complete a task, and punishes those who refuse until they comply. The team leader has absolute authority and utilises an autocratic leadership style. There are considerable drawbacks to this team management method. First, morale is lowered because team members are belittled for the slightest mistakes; punishments lead to a lack of confidence, resulting in poor performance. Second, in modern organisations roles are often specialised, so managers require the expertise of the employee, which elevates the value of the employee. Implementing this team management method leads to a high rate of employee turnover. In addition, in large organisations managers don't have the time to provide instructions to all employees and continuously monitor them; this impedes an organisation's performance, as managers are not spending time on their core responsibilities.[13] Due to the limiting nature of the "command and control" method, managers developed an alternative management strategy known as "engage and create". In this method, team members are encouraged to participate in discussions and contribute; this yields better results. Engaged employees are inspiring to be around, excellent at their jobs, and essential to success.[14] Engagement and creation share similarities, as both involve participation and support.
When team members are engaged, they are invested in their work and the overall goals of the team. Creating, on the other hand, often involves generating new ideas and solutions. Together, they form a solid combination for team management. Engaged team members are more likely to contribute creatively which can lead to problem solving, productivity, and a positive work environment. Ultimately enhancing overall team performance to reach the team's goals.[15] In the “econ 101” method of team management, the team leader makes the baseline assumption that all team members are motivated by reward in the form of money, and that the best way to manage the team is to provide financial rewards for performance and issue punishments for failure. This method of team management uses material gains in the place of intrinsic motivation to drive team members. This is similar toFrederick Taylor'stheory ofscientific managementwhich claims the main form of motivation for employees is money.[16][17]The main drawback of this method is that it does not take into account other forms of motivation besides money such aspersonal satisfactionand ambition. Moreover, using reward and punishment as a method of team management can cause demotivation as everyone is motivated by different factors and there is no one way to satisfy all team members; the negative effect is further compounded by punishment leading to demoralisation and loss of confidence.[13] InPatrick Lencioni'sbookThe Five Dysfunctions of a Team, the absence ofvulnerability-based trust – where team members are comfortable being vulnerable with each other, trust each other to help when asking for guidance, and are willing to admit their mistakes – within a team is detrimental to a team. Team leaders have to assist each other when they are vulnerable and also allow team members to see their vulnerable side, which is contradictory to the orthodox belief. If a team lacks vulnerability-based trust, team members will not be willing to share ideas or acknowledge their faults due to the fear of being exposed as incompetent, leading to a lack of communication and the hindering of the team.[18][19][20]To make vulnerability-based trust part of who you are, practice following these three steps. First, understand your vulnerabilities by looking at your past experiences and how they’ve affected you. Second, take that and create open communication where you and others can share thoughts and feelings. Lastly, view vulnerability as a way to grow stronger. By doing these things, you will build trust within yourself and your colleagues, making it a natural part of your personality and forming meaningful connections throughout your time.[21] Contrary to general belief,conflictis a positive element in a team as it drives discussion. The fear of conflict is the fear of team members to argue with one another or disagree with the team leader. If team members hold back and are afraid of confronting their leader or teammates, then the concept of a team is non-existent because there is only one person who contributes and no new ideas are generated from discussions.[18] The fear of conflict in a team stems from an absence of trust, more specifically vulnerability-based trust. If team members are afraid to be vulnerable in front of one another, disputes can be manipulative and a means to overthrow and shame the other team member. 
However, if team members trust each other and are comfortable being vulnerable in front of one another, then debates can be a pursuit of a better and more effective method to achieve a task.[18][19][20] When team members don't provide input on a decision, it shows that they do not agree or approve of the decision, leading to a halt in team activity and progress. Furthermore, when team members don't express their opinions, views and potential ideas are lost, hurting the project and the team. Effective communication is crucial for the success of any team. Poor communication leads to missed deadlines, conflict, and unhappy individuals. Team members should feel free to bounce ideas off of each other and provide feedback to improve the team.[18][20] The avoidance of accountability in a team is the failure of team members to be accountable for the consequences of their actions. When team members do not commit to a decision, they will be unwilling to take responsibility for the outcomes of the decision.[18] In addition, if a lack of trust exists within the team then there will be an absence of peer to peer accountability; team members will not feel accountable towards their team members and hence will not put effort into their tasks. The team must trust and hold each other responsible so that the intention will always be for the benefit of the team and for the team to succeed.[18] Team leaders who are afraid of confrontation might avoid holding team members accountable when in fact they have made a mistake. Team leaders must develop the confidence to hold team members accountable so that they will feel the sense of responsibility and entitlement to the team, and learn from their mistakes. If not, then errors will not be corrected and might lead to worse problems, causing a defective team.[18][20][22] If team leaders and team members do not hold each other accountable then they will not be concerned about the outcome of the team and whether they have achieved their goal, as they do not have a drive to obtain great results. Inattention to results causes a loss of purpose and brings into question the existence of the team.[18] An approach to resolving fundamental trust problems within teams is to build trust amongst team members. A team leader can build trust by persuading team members to ask questions and seek guidance from other team members so that they are more familiar and comfortable in being vulnerable with one another. This may include questions such as “Could you teach me how to do this?” or statements like “You are better than me at this”. However, in order to achieve vulnerability-based trust within the team, the team leader must be vulnerable first. If the team leader is unwilling to be vulnerable, the rest of the team will be unwilling to follow.[18] Appraisals can be a way for team members to providefeedbackto one another or for team members to provide advice to the leader. This allows individual members of the team to reflect on their performance and aim to do better by amending their mistakes; furthermore appraisals create an environment where the chain of command is non-existent and team members can be honest towards one another. This is effective in a way that the team can provide progressive feedback towards other members and can advise the leader on how he or she can improve their leadership. After each member reads their appraisals, they will understand how they can strive to improve, benefitting the team in reaching its objectives. 
The commonly used forms of appraisals areperformance appraisals,peer appraisalsand360 degree feedback.[23] Team-buildingactivities are a series of simple exercises involving teamwork and communication. The main objectives of team building activities are to increase trust amongst team members and allow team members to better understand one another. When choosing or designing team-building activities it is best to determine if your team needs an event or an experience. Generally an event is fun, quick and easily done by non-professionals. Team building experiences provide richer, more meaningful results.[citation needed]Experiences should be facilitated by a professional on an annual basis for teams that are growing, or changing. Team effectivenessoccurs when the team has appropriategoalsto complete and the confidence to accomplish those goals. Communication is also a large part of effectiveness in a team because in order to accomplish tasks, the members must negotiate ideas and information. Another aspect of effectiveness isreliabilityand trust. When overcoming the “storming” phase ofBruce Tuckman'sstages of group development, trust is established, and it leads to higher levels of teamcohesionand effectiveness.[24]If there is a conflict, effectiveness allows cohesion and the ability to overcome conflict. Specifically in management teams, more weight falls on their shoulders because they have to direct and lead other teams. Being effective is a main priority for the team or teams involved. Unlike non-managerial teams, in which the focus is on a set of team tasks, management teams are effective only insofar as they are accomplishing a high level of performance by a significant business unit or an entire firm.[25]Having support from higher-up position leaders can give teams insight on how to act and make decisions, which improves their effectiveness as well.
https://en.wikipedia.org/wiki/Team_management
In computer chess programs, the null-move heuristic is a heuristic technique used to enhance the speed of the alpha–beta pruning algorithm. Alpha–beta pruning speeds the minimax algorithm by identifying cutoffs, points in the game tree where the current position is so good for the side to move that best play by the other side would have avoided it. Since such positions could not have resulted from best play, they and all branches of the game tree stemming from them can be ignored. The faster the program produces cutoffs, the faster the search runs. The null-move heuristic is designed to guess cutoffs with less effort than would otherwise be required, whilst retaining a reasonable level of accuracy. The null-move heuristic is based on the fact that most reasonable chess moves improve the position for the side that played them. So, if the player whose turn it is to move can forfeit the right to move (or make a null move, an illegal action in chess) and still have a position strong enough to produce a cutoff, then the current position would almost certainly produce a cutoff if the current player actually moved. In employing the null-move heuristic, the computer program first forfeits the turn of the side whose turn it is to move, and then performs an alpha–beta search on the resulting position to a shallower depth than it would have searched the current position had it not used the null-move heuristic. If this shallow search produces a cutoff, it assumes the full-depth search in the absence of a forfeited turn would also have produced a cutoff. Because a shallow search is faster than a deeper search, the cutoff is found faster, accelerating the computer chess program. If the shallow search fails to produce a cutoff, then the program must make the full-depth search. This approach makes two assumptions. First, it assumes that the disadvantage of forfeiting one's turn is greater than the disadvantage of performing a shallower search. Provided the shallower search is not too much shallower (in practical implementations, the null-move search is usually 2 or 3 plies shallower than the full search would have been), this is usually true. Second, it assumes that the null-move search will produce a cutoff frequently enough to justify the time spent performing null-move searches instead of full searches. In practice, this is also usually true. There is a class of chess positions where employing the null-move heuristic can result in severe tactical blunders. In these zugzwang (German for "forced to move") positions, the player whose turn it is to move has only bad moves as their legal choices, and so would actually be better off if allowed to forfeit the right to move. In these positions, the null-move heuristic may produce a cutoff where a full search would not have found one, causing the program to assume the position is very good for a side when it may in fact be very bad for them. To avoid using the null-move heuristic in zugzwang positions, most chess-playing programs that use the null-move heuristic put restrictions on its use, for example not making a null move when the side to move is in check, when few pieces remain on the board (positions in which zugzwang is most likely), or when the previous move searched was itself a null move. Another heuristic for dealing with the zugzwang problem is Omid David and Nathan Netanyahu's verified null-move pruning.[1] In verified null-move pruning, whenever the shallow null-move search indicates a fail-high, instead of cutting off the search from the current node, the search is continued with reduced depth.
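The sketch below shows where the null-move test typically sits inside a negamax-style alpha–beta search. It is a structural sketch, not the code of any particular engine: the position helpers (evaluate, in_check, is_late_endgame, generate_moves, make_move/undo_move, make_null_move/undo_null_move) and the depth reduction R = 2 are assumed placeholder names and parameter choices.

# Structural sketch of null-move pruning inside a negamax alpha-beta search.
# The position helpers below are hypothetical placeholders for whatever board
# representation an engine actually uses.

R = 2  # the null-move search is R plies shallower than the normal search

def negamax(position, depth, alpha, beta):
    if depth == 0:
        return evaluate(position)

    # Null-move heuristic: give the opponent a free move. If a reduced-depth
    # search of the resulting position still scores >= beta, assume the full
    # search would also fail high and cut off immediately. Skipped when in
    # check or in a late endgame, where zugzwang could make this unsound.
    if depth > R and not in_check(position) and not is_late_endgame(position):
        make_null_move(position)
        score = -negamax(position, depth - 1 - R, -beta, -beta + 1)
        undo_null_move(position)
        if score >= beta:
            return beta          # cutoff found without searching any real move

    best = -float("inf")         # (checkmate/stalemate handling omitted)
    for move in generate_moves(position):
        make_move(position, move)
        score = -negamax(position, depth - 1, -beta, -alpha)
        undo_move(position, move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                # ordinary alpha-beta cutoff
    return best

The zero-width window (-beta, -beta + 1) in the null-move search reflects that only the question "does this fail high?" matters, not the exact score.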
https://en.wikipedia.org/wiki/Null-move_heuristic
In deep learning, pruning is the practice of removing parameters from an existing artificial neural network.[1] The goal of this process is to reduce the size (parameter count) of the neural network (and therefore the computational resources required to run it) whilst maintaining accuracy. This can be compared to the biological process of synaptic pruning which takes place in mammalian brains during development.[2] A basic pruning procedure takes a trained network, removes the parameters judged least important (for example, those with the smallest magnitude), and then optionally fine-tunes the remaining weights to recover accuracy, repeating the cycle until the desired sparsity is reached.[3][4] Most work on neural network pruning focuses on removing weights, namely, setting their values to zero. Early work suggested also changing the values of non-pruned weights.[5]
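A common concrete instance of weight pruning is magnitude pruning: zero out the fraction of weights with the smallest absolute values. The NumPy sketch below illustrates this for a single dense weight matrix; the layer shape, sparsity level, and function name are illustrative assumptions rather than a reference implementation.

import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude entries set to 0.

    sparsity: fraction of entries to remove, e.g. 0.5 removes half the weights.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold             # keep only larger weights
    return weights * mask

# Toy example: prune 50% of a random 4x4 weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_pruned = magnitude_prune(W, 0.5)
print("zeroed entries:", int((W_pruned == 0).sum()), "of", W.size)

In practice the pruning mask would be kept so that the removed entries stay at zero during any subsequent fine-tuning.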
https://en.wikipedia.org/wiki/Pruning_(artificial_neural_network)
In computational complexity theory, a problem is NP-complete when: The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, "nondeterministic" refers to nondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm. Polynomial time refers to an amount of time that is considered "quick" for a deterministic algorithm to check a single solution, or for a nondeterministic Turing machine to perform the whole search. "Complete" refers to the property of being able to simulate everything in the same complexity class. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, the validity of each of which can be tested quickly (in polynomial time),[2] such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty. The complexity class of problems of this form is called NP, an abbreviation for "nondeterministic polynomial time". A problem is said to be NP-hard if everything in NP can be transformed in polynomial time into it even though it may not be in NP. A problem is NP-complete if it is both in NP and NP-hard. The NP-complete problems represent the hardest problems in NP. If some NP-complete problem has a polynomial time algorithm, all problems in NP do. The set of NP-complete problems is often denoted by NP-C or NPC. Although a solution to an NP-complete problem can be verified "quickly", there is no known way to find a solution quickly. That is, the time required to solve the problem using any currently known algorithm increases rapidly as the size of the problem grows. As a consequence, determining whether it is possible to solve these problems quickly, called the P versus NP problem, is one of the fundamental unsolved problems in computer science today. While a method for computing the solutions to NP-complete problems quickly remains undiscovered, computer scientists and programmers still frequently encounter NP-complete problems. NP-complete problems are often addressed by using heuristic methods and approximation algorithms. NP-complete problems are in NP, the set of all decision problems whose solutions can be verified in polynomial time; NP may be equivalently defined as the set of decision problems that can be solved in polynomial time on a non-deterministic Turing machine. A problem p in NP is NP-complete if every other problem in NP can be transformed (or reduced) into p in polynomial time.[citation needed] It is not known whether every problem in NP can be quickly solved; this is called the P versus NP problem. But if any NP-complete problem can be solved quickly, then every problem in NP can, because the definition of an NP-complete problem states that every problem in NP must be quickly reducible to every NP-complete problem (that is, it can be reduced in polynomial time). Because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general.[citation needed] A decision problem C is NP-complete if:[citation needed] C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time.
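To make "verified in polynomial time" concrete, the following sketch checks a proposed certificate for the vertex cover decision problem, a well-known NP-complete problem: given a graph's edge list, a budget k, and a candidate vertex set, it confirms in time linear in the number of edges whether the candidate is a vertex cover of size at most k. The toy graph and the function name are illustrative assumptions, not taken from the article.

# Polynomial-time verifier for the vertex cover decision problem:
# "Does graph G have a vertex cover of size at most k?"
# Verifying a proposed certificate (a candidate cover) is easy even though
# finding one is believed to be hard.

def verify_vertex_cover(edges, k, candidate):
    cover = set(candidate)
    if len(cover) > k:
        return False
    # Every edge must have at least one endpoint in the candidate set.
    return all(u in cover or v in cover for (u, v) in edges)

# Toy instance: a 4-cycle with one chord.
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(verify_vertex_cover(edges, 2, [1, 3]))   # True: {1, 3} covers every edge
print(verify_vertex_cover(edges, 2, [2, 4]))   # False: edge (1, 3) is uncovered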
Note that a problem satisfying condition 2 is said to be NP-hard, whether or not it satisfies condition 1.[4] A consequence of this definition is that if we had a polynomial time algorithm (on a UTM, or any other Turing-equivalent abstract machine) for C, we could solve all problems in NP in polynomial time. The concept of NP-completeness was introduced in 1971 (see Cook–Levin theorem), though the term NP-complete was introduced later. At the 1971 STOC conference, there was a fierce debate between the computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine. John Hopcroft brought everyone at the conference to a consensus that the question of whether NP-complete problems are solvable in polynomial time should be put off to be solved at some later date, since nobody had any formal proofs for their claims one way or the other.[citation needed] This is known as "the question of whether P=NP". Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics. The Clay Mathematics Institute is offering a US$1 million reward (Millennium Prize) to anyone who has a formal proof that P=NP or that P≠NP.[5] The existence of NP-complete problems is not obvious. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete, thus establishing that such problems do exist. In 1972, Richard Karp proved that several other problems were also NP-complete (see Karp's 21 NP-complete problems); thus, there is a class of NP-complete problems (besides the Boolean satisfiability problem). Since the original results, thousands of other problems have been shown to be NP-complete by reductions from other problems previously shown to be NP-complete; many of these problems are collected in Garey & Johnson (1979). The easiest way to prove that some new problem is NP-complete is first to prove that it is in NP, and then to reduce some known NP-complete problem to it. Therefore, it is useful to know a variety of NP-complete problems. The list below contains some well-known problems that are NP-complete when expressed as decision problems. To the right is a diagram of some of the problems and the reductions typically used to prove their NP-completeness. In this diagram, problems are reduced from bottom to top. Note that this diagram is misleading as a description of the mathematical relationship between these problems, as there exists a polynomial-time reduction between any two NP-complete problems; but it indicates where demonstrating this polynomial-time reduction has been easiest. There is often only a small difference between a problem in P and an NP-complete problem. For example, the 3-satisfiability problem, a restriction of the Boolean satisfiability problem, remains NP-complete, whereas the slightly more restricted 2-satisfiability problem is in P (specifically, it is NL-complete), but the slightly more general max. 2-sat. problem is again NP-complete. Determining whether a graph can be colored with 2 colors is in P, but with 3 colors is NP-complete, even when restricted to planar graphs. Determining if a graph is a cycle or is bipartite is very easy (in L), but finding a maximum bipartite or a maximum cycle subgraph is NP-complete. A solution of the knapsack problem within any fixed percentage of the optimal solution can be computed in polynomial time, but finding the optimal solution is NP-complete.
An interesting example is the graph isomorphism problem, the graph theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems: The Subgraph Isomorphism problem is NP-complete. The graph isomorphism problem is suspected to be neither in P nor NP-complete, though it is in NP. This is an example of a problem that is thought to be hard, but is not thought to be NP-complete. This class is called NP-intermediate problems and exists if and only if P≠NP. At present, all known algorithms for NP-complete problems require time that is superpolynomial in the input size. The vertex cover problem has an algorithm running in time O(1.2738^k + nk)[6] for some k > 0, and it is unknown whether there are any faster algorithms. The following techniques can be applied to solve computational problems in general, and they often give rise to substantially faster algorithms: One example of a heuristic algorithm is a suboptimal O(n log n) greedy coloring algorithm used for graph coloring during the register allocation phase of some compilers, a technique called graph-coloring global register allocation. Each vertex is a variable, edges are drawn between variables which are being used at the same time, and colors indicate the register assigned to each variable. Because most RISC machines have a fairly large number of general-purpose registers, even a heuristic approach is effective for this application. In the definition of NP-complete given above, the term reduction was used in the technical meaning of a polynomial-time many-one reduction. Another type of reduction is polynomial-time Turing reduction. A problem X is polynomial-time Turing-reducible to a problem Y if, given a subroutine that solves Y in polynomial time, one could write a program that calls this subroutine and solves X in polynomial time. This contrasts with many-one reducibility, which has the restriction that the program can only call the subroutine once, and the return value of the subroutine must be the return value of the program. If one defines the analogue to NP-complete with Turing reductions instead of many-one reductions, the resulting set of problems won't be smaller than NP-complete; it is an open question whether it will be any larger. Another type of reduction that is also often used to define NP-completeness is the logarithmic-space many-one reduction, which is a many-one reduction that can be computed with only a logarithmic amount of space. Since every computation that can be done in logarithmic space can also be done in polynomial time, it follows that if there is a logarithmic-space many-one reduction then there is also a polynomial-time many-one reduction. This type of reduction is more refined than the more usual polynomial-time many-one reductions and it allows us to distinguish more classes such as P-complete. Whether the definition of NP-complete changes under these types of reductions is still an open problem. All currently known NP-complete problems are NP-complete under log space reductions. All currently known NP-complete problems remain NP-complete even under much weaker reductions such as AC0 reductions and NC0 reductions.
Some NP-complete problems such as SAT are known to be complete even under polylogarithmic time projections.[7] It is known, however, that AC0 reductions define a strictly smaller class than polynomial-time reductions.[8] According to Donald Knuth, the name "NP-complete" was popularized by Alfred Aho, John Hopcroft and Jeffrey Ullman in their celebrated textbook "The Design and Analysis of Computer Algorithms". He reports that they introduced the change in the galley proofs for the book (from "polynomially-complete"), in accordance with the results of a poll he had conducted of the theoretical computer science community.[9] Other suggestions made in the poll[10] included "Herculean", "formidable", Steiglitz's "hard-boiled" in honor of Cook, and Shen Lin's acronym "PET", which stood for "probably exponential time", but depending on which way the P versus NP problem went, could stand for "provably exponential time" or "previously exponential time".[11] The following misconceptions are frequent.[12] Viewing a decision problem as a formal language in some fixed encoding, the set NPC of all NP-complete problems is not closed under: It is not known whether NPC is closed under complementation, since NPC = co-NPC if and only if NP = co-NP, and since NP = co-NP is an open question.[16]
https://en.wikipedia.org/wiki/NP-complete
In computational complexity theory, L/poly is the complexity class of logarithmic space machines with a polynomial amount of advice. L/poly is a non-uniform logarithmic space class, analogous to the non-uniform polynomial time class P/poly.[1] Formally, for a formal language L to belong to L/poly, there must exist an advice function f that maps an integer n to a string of length polynomial in n, and a Turing machine M with two read-only input tapes and one read-write tape of size logarithmic in the input size, such that an input x of length n belongs to L if and only if machine M accepts the input (x, f(n)).[2] Alternatively and more simply, L is in L/poly if and only if it can be recognized by branching programs of polynomial size.[3] One direction of the proof that these two models of computation are equivalent in power is the observation that, if a branching program of polynomial size exists, it can be specified by the advice function and simulated by the Turing machine. In the other direction, a Turing machine with logarithmic writable space and a polynomial advice tape may be simulated by a branching program whose states represent the combination of the configuration of the writable tape and the positions of the Turing machine heads on the other two tapes. In 1979, Aleliunas et al. showed that symmetric logspace is contained in L/poly.[4] However, this result was superseded by Omer Reingold's result that SL collapses to uniform logspace.[5] BPL is contained in L/poly, which is a variant of Adleman's theorem.[6]
https://en.wikipedia.org/wiki/L/poly
In computational complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size n is in NC if there exist constants c and k such that it can be solved in time O((log n)^c) using O(n^k) parallel processors. Stephen Cook[1][2] coined the name "Nick's class" after Nick Pippenger, who had done extensive research[3] on circuits with polylogarithmic depth and polynomial size.[4] As in the case of circuit complexity theory, usually the class has an extra constraint that the circuit family must be uniform (see below). Just as the class P can be thought of as the tractable problems (Cobham's thesis), so NC can be thought of as the problems that can be efficiently solved on a parallel computer.[5] NC is a subset of P because polylogarithmic parallel computations can be simulated by polynomial-time sequential ones. It is unknown whether NC = P, but most researchers suspect this to be false, meaning that there are probably some tractable problems that are "inherently sequential" and cannot significantly be sped up by using parallelism. Just as the class NP-complete can be thought of as "probably intractable", so the class P-complete, when using NC reductions, can be thought of as "probably not parallelizable" or "probably inherently sequential". The parallel computer in the definition can be assumed to be a parallel, random-access machine (PRAM). That is a parallel computer with a central pool of memory, where any processor can access any bit of memory in constant time. The definition of NC is not affected by the choice of how the PRAM handles simultaneous access to a single bit by more than one processor. It can be CRCW, CREW, or EREW. See PRAM for descriptions of those models. Equivalently, NC can be defined as those decision problems decidable by a uniform Boolean circuit (which can be calculated from the length of the input; for NC, we suppose we can compute the Boolean circuit of size n in logarithmic space in n) with polylogarithmic depth and a polynomial number of gates with a maximum fan-in of 2. RNC is a class extending NC with access to randomness. As with P, by a slight abuse of language, one might classify function problems and search problems as being in NC. NC is known to include many problems, including integer addition and multiplication, matrix multiplication, the determinant and inverse of a matrix, and the greatest common divisor of two polynomials. Often algorithms for those problems had to be separately invented and could not be naïvely adapted from well-known algorithms: Gaussian elimination and the Euclidean algorithm rely on operations performed in sequence. One might contrast the ripple carry adder with a carry-lookahead adder. An example of a problem in NC1 is the parity check on a bit string.[6] The problem consists in counting the number of 1s in a string made of 1s and 0s. A simple solution consists in summing all the string's bits. Since addition is associative, x1 + ⋯ + xn = (x1 + ⋯ + x_(n/2)) + (x_(n/2+1) + ⋯ + xn). Recursively applying this property, it is possible to build a binary tree of depth O(log n) in which every sum of two bits xi and xj is expressible by means of basic logical operators, e.g. through the boolean expression (xi ∧ ¬xj) ∨ (¬xi ∧ xj).
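As a concrete illustration of that parity example, the sketch below combines adjacent bits pairwise with XOR, one layer at a time: every pass of the loop corresponds to one layer of the O(log n)-depth circuit, and all XORs within a pass are independent and could be evaluated in parallel. It is a serial simulation written for illustration, not an actual parallel implementation.

# Parity of a bit string via a balanced tree of XOR gates.
# Each while-loop iteration corresponds to one circuit layer; the number of
# layers is O(log n) because the list halves every pass.
def parity(bits):
    layer = list(bits)
    while len(layer) > 1:
        if len(layer) % 2 == 1:          # pad odd layers with a harmless 0
            layer.append(0)
        layer = [layer[i] ^ layer[i + 1] for i in range(0, len(layer), 2)]
    return layer[0]

print(parity([1, 0, 1, 1, 0, 1]))   # 0: an even number of 1s
print(parity([1, 0, 1, 1, 0, 0]))   # 1: an odd number of 1s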
NCi is the class of decision problems decidable by uniform boolean circuits with a polynomial number of gates of at most two inputs and depth O((log n)^i), or the class of decision problems solvable in time O((log n)^i) on a parallel computer with a polynomial number of processors. Clearly, NC1 ⊆ NC2 ⊆ ... ⊆ NCi ⊆ ... ⊆ NC, which forms the NC-hierarchy. The smallest class, NC0, is the class of functions definable by boolean circuits with constant depth and bounded fan-in. The next-smallest class, NC1, is equal to BW40, the set of all problems solvable by polynomial-size, bounded fan-in circuits of width 4 or less. This is true for both the uniform and nonuniform case (DLOGTIME-uniformity suffices).[7]: 142 One can relate the NC classes to the space classes L, SL,[7]: 137 NL,[8] LOGCFL, and AC.[9] The NC classes are related to the AC classes, which are defined similarly, but with gates having unbounded fan-in. For each i, NCi ⊆ ACi ⊆ NCi+1.[5][9][10] As an immediate consequence of this, NC = AC.[11] Also, NC0 ⊊ AC0 ⊊ ACC0.[5] Similarly, NC is equivalent to the problems solvable on an alternating Turing machine restricted to at most two options at each step with O(log n) space and (log n)^O(1) alternations.[12] It is a major open question whether TC0 ⊊ NC1 (Vollmer 1998, p. 126). A significant partial result states that if there exist some ε > 0 and a problem in NC1 that requires at least Ω(n^(1+ε)) gates in TC0, then this can be bootstrapped so that it requires superpolynomially many gates, and thus is not in TC0.[13] There are various levels of uniformity being considered. A family of boolean circuits is uniform if the schematics for any member of the family can be produced by a Turing machine under various resource constraints. With different levels of constraints, we would obtain possibly different complexity classes, with a more stringent constraint leading to a possibly smaller complexity class. In the literature, the following uniformities have been considered for the NC1 class, arranged according to strength:[7]: 139[14] By default, the literature uses LOGSPACE uniformity. Because it is possible that NC1 ⊊ LOGSPACE, researchers may use NC1-uniformity, since it is a possible strengthening. To avoid self-reference, NC1-uniform NC1 is defined as follows: an NC1 Boolean circuit family is NC1-uniform if the set of descriptions is decided by an ALOGTIME alternating Turing machine. The machine reads in a length-n description of a Boolean circuit, and halts in time O(log n).[7]: 139 For higher classes NC2, NC3, ..., there are similar uniformities definable. However, for k ≥ 2, NCk-uniform NCk and LOGSPACE-uniform NCk are equal, and both are equivalent to the following definition: the family is decided by an alternating Turing machine which reads in a length-n description of a Boolean circuit and halts in time O((log n)^k) and space O(log n).[7]: 139 One major open question in complexity theory is whether or not every containment in the NC hierarchy is proper. It was observed by Papadimitriou that, if NCi = NCi+1 for some i, then NCi = NCj for all j ≥ i, and as a result, NCi = NC.
This observation is known as NC-hierarchy collapse because even a single equality in the chain of containments implies that the entire NC hierarchy "collapses" down to some level i. Thus, there are two possibilities: (1) every containment in the NC hierarchy is proper, or (2) NCi = NCi+1 for some i, in which case the hierarchy collapses to that level. It is widely believed that (1) is the case, although no proof as to the truth of either statement has yet been discovered. If there exists a problem that is NC-complete under LOGSPACE or NC1 reductions, then the NC hierarchy collapses.[7]: 136 A branching program with n variables of width k and length m consists of a sequence of m instructions. Each of the instructions is a tuple (i, p, q) where i is the index of the variable to check (1 ≤ i ≤ n), and p and q are functions from {1, 2, ..., k} to {1, 2, ..., k}. Numbers 1, 2, ..., k are called states of the branching program. The program initially starts in state 1, and each instruction (i, p, q) changes the state from x to p(x) or q(x), depending on whether the ith variable is 0 or 1. The function mapping an input to a final state of the program is called the yield of the program (more precisely, the yield on an input is the function mapping any initial state to the corresponding final state). The program accepts a set A ⊂ 2^n of variable values when there is some set of functions F ⊂ k^k such that a variable sequence x ∈ 2^n is in A precisely when its yield is in F. A family of branching programs consists of a branching program with n variables for each n. It accepts a language when the n-variable program accepts the language restricted to length-n inputs. It is easy to show that every language L on {0,1} can be recognized by a family of branching programs of width 5 and exponential length, or by a family of exponential width and linear length. Every regular language on {0,1} can be recognized by a family of branching programs of constant width and linear number of instructions (since a DFA can be converted to a branching program). BWBP denotes the class of languages recognizable by a family of branching programs of bounded width and polynomial length.[15] Barrington's theorem[16] says that BWBP is exactly nonuniform NC1. The proof uses the nonsolvability of the symmetric group S5.[15] The theorem is rather surprising. For instance, it implies that the majority function can be computed by a family of branching programs of constant width and polynomial size, while intuition might suggest that to achieve polynomial size, one needs a linear number of states. A branching program of constant width and polynomial size can be easily converted (via divide-and-conquer) to a circuit in NC1. Conversely, suppose a circuit in NC1 is given. Without loss of generality, assume it uses only AND and NOT gates. Lemma 1: If there exists a branching program that sometimes works as a permutation P and sometimes as a permutation Q, then by right-multiplying permutations in the first instruction by α, and left-multiplying permutations in the last instruction by β, we can make a branching program of the same length that behaves as βPα or βQα, respectively. Call a branching program α-computing a circuit C if it works as the identity when C's output is 0, and as α when C's output is 1. As a consequence of Lemma 1 and the fact that all cycles of length 5 are conjugate, for any two 5-cycles α, β, if there exists a branching program α-computing a circuit C, then there exists a branching program β-computing the circuit C, of the same length. Lemma 2: There exist 5-cycles γ, δ such that their commutator ε = γδγ⁻¹δ⁻¹ is a 5-cycle. For example, γ = (1 2 3 4 5), δ = (1 3 5 4 2), giving ε = (1 3 2 5 4).
We will now prove Barrington's theorem by induction. Suppose we have a circuit C which takes inputs x1, ..., xn, and assume that for all subcircuits D of C and all 5-cycles α, there exists a branching program α-computing D. We will show that for all 5-cycles α, there exists a branching program α-computing C. If C is a single input variable xi, the one-instruction program (i, identity, α) α-computes it. If C = ¬D, take a branching program that α⁻¹-computes D (α⁻¹ is also a 5-cycle) and left-multiply its last instruction by α as in Lemma 1; the resulting program works as α when D outputs 0 and as αα⁻¹ = identity when D outputs 1, so it α-computes ¬D without any increase in length. If C = D ∧ E, choose 5-cycles γ and δ whose commutator ε = γδγ⁻¹δ⁻¹ is a 5-cycle (Lemma 2), and concatenate branching programs that γ-compute D, δ-compute E, γ⁻¹-compute D, and δ⁻¹-compute E. The concatenation works as the identity whenever D or E outputs 0, and as ε when both output 1, so it ε-computes D ∧ E; since ε and α are conjugate 5-cycles, Lemma 1 turns it into a program of the same length that α-computes C. By assuming the subcircuits have branching programs so that they are α-computing for all 5-cycles α ∈ S5, we have shown that C also has this property, as required. The length of the program at most quadruples at each AND gate and is unchanged at NOT gates, so the size of the branching program is at most 4^d, where d is the depth of the circuit. If the circuit has logarithmic depth, the branching program has polynomial length.
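The induction translates directly into a recursive procedure. The sketch below is a hypothetical rendering of it (the circuit encoding ('var', i), ('not', C), ('and', C, D) and all helper names are assumptions of this sketch, not Barrington's notation); it builds a width-5 program for the 3-bit majority function and verifies it on all inputs:

```python
# Permutations of the five states are tuples p with p[s] = image of state s
# (states 0..4); a program "alpha-computes" a circuit if its yield is the
# identity on output 0 and alpha on output 1.

IDENT = (0, 1, 2, 3, 4)

def compose(f, g):                       # (f o g)(s) = f(g(s))
    return tuple(f[g[s]] for s in range(5))

def inverse(p):
    inv = [0] * 5
    for s, t in enumerate(p):
        inv[t] = s
    return tuple(inv)

def conjugator(eps, alpha):
    """Return theta with alpha = theta o eps o theta^-1 (eps, alpha 5-cycles)."""
    def cycle(p):                        # the cycle of p starting at state 0
        c = [0]
        while p[c[-1]] != 0:
            c.append(p[c[-1]])
        return c
    theta = [0] * 5
    for e, a in zip(cycle(eps), cycle(alpha)):
        theta[e] = a
    return tuple(theta)

def program_for(circuit, alpha):
    """Build a width-5 branching program that alpha-computes `circuit`."""
    kind = circuit[0]
    if kind == 'var':                    # base case: a single instruction
        return [(circuit[1], IDENT, alpha)]
    if kind == 'not':                    # alpha^-1-compute the child, then
        prog = program_for(circuit[1], inverse(alpha))
        i, p, q = prog[-1]               # left-multiply the last instruction by alpha (Lemma 1)
        return prog[:-1] + [(i, compose(alpha, p), compose(alpha, q))]
    if kind == 'and':                    # commutator trick with Lemma 2's 5-cycles
        gamma, delta = (1, 2, 3, 4, 0), (2, 0, 4, 1, 3)
        prog = (program_for(circuit[1], gamma) + program_for(circuit[2], delta)
                + program_for(circuit[1], inverse(gamma))
                + program_for(circuit[2], inverse(delta)))
        eps = compose(inverse(delta), compose(inverse(gamma), compose(delta, gamma)))
        theta = conjugator(eps, alpha)   # convert eps-computation into alpha-computation
        i, p, q = prog[0]
        prog[0] = (i, compose(p, inverse(theta)), compose(q, inverse(theta)))
        i, p, q = prog[-1]
        prog[-1] = (i, compose(theta, p), compose(theta, q))
        return prog
    raise ValueError(kind)

def run(prog, bits):                     # start in state 0 and apply each instruction
    s = 0
    for i, p, q in prog:
        s = (q if bits[i - 1] else p)[s]
    return s

# Majority of three bits, written with AND and NOT only (a OR b = not(not a and not b)).
def OR(a, b): return ('not', ('and', ('not', a), ('not', b)))
maj = OR(OR(('and', ('var', 1), ('var', 2)), ('and', ('var', 1), ('var', 3))),
         ('and', ('var', 2), ('var', 3)))

alpha = (1, 2, 3, 4, 0)
prog = program_for(maj, alpha)
for x in range(8):
    bits = [(x >> k) & 1 for k in range(3)]
    out = 1 if run(prog, bits) == alpha[0] else 0    # the yield is alpha iff the circuit outputs 1
    assert out == (1 if sum(bits) >= 2 else 0)
print(len(prog), "instructions, width 5")
```

The program produced for this constant-depth majority circuit stays within the 4^d length bound while never using more than five states.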
https://en.wikipedia.org/wiki/NC_(complexity)#Barrington's_theorem
A zero-suppressed decision diagram (ZSDD or ZDD) is a particular kind of binary decision diagram (BDD) with fixed variable ordering. This data structure provides a canonically compact representation of sets, particularly suitable for certain combinatorial problems. Recall the Ordered Binary Decision Diagram (OBDD) reduction strategy, i.e. a node is replaced with one of its children if both out-edges point to the same node. In contrast, a node in a ZDD is replaced with its negative child if its positive edge points to the terminal node 0. This provides an alternative strong normal form, with improved compression of sparse sets. It is based on a reduction rule devised by Shin-ichi Minato in 1993. In a binary decision diagram, a Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several decision nodes and terminal nodes. In 1993, Shin-ichi Minato from Japan modified Randal Bryant's BDDs for solving combinatorial problems. His "zero-suppressed" BDDs aim to represent and manipulate sparse sets of bit vectors. If the data for a problem are represented as bit vectors of length n, then any subset of the vectors can be represented by the Boolean function over n variables that yields 1 when the vector corresponding to the variable assignment is in the set. According to Bryant, it is possible to use forms of logic functions to express problems involving sum-of-products. Such forms are often represented as sets of "cubes", each denoted by a string containing the symbols 0, 1, and -. For instance, the function {\displaystyle ({\bar {x}}_{1}\land x_{2})\lor ({\bar {x}}_{2}\oplus x_{3})} can be illustrated by the set {\displaystyle \{01-,-11,-00\}}. By using the bit pairs 10, 01, and 00 to denote the symbols 1, 0, and - respectively, one can represent the above set with the bit vectors {\displaystyle \{011000,001010,000101\}}. Notice that the set of bit vectors is sparse, in that the number of vectors is fewer than 2^n, which is the maximum number of bit vectors, and the set contains many elements equal to zero. In this case, a node can be omitted if setting the node variable to 1 causes the function to yield 0, reflecting the condition that a 1 at some bit position implies that the vector is not in the set. For sparse sets, this condition is common, and hence many node eliminations are possible. Minato has proved that ZDDs are especially suitable for combinatorial problems, such as the classical problems in two-level logic minimization, the knight's tour problem, fault simulation, timing analysis, the N-queens problem, and weak division. By using ZDDs, one can reduce the size of the representation of a set of n-bit vectors in OBDDs by at most a factor of n. In practice, the optimization is statistically significant. We define a zero-suppressed decision diagram (ZDD) to be any directed acyclic graph such that conditions 1 through 4 hold; we call Z an unreduced ZDD if a HI edge points to a ⊥ node or condition 4 fails to hold. In computer programs, Boolean functions can be expressed in bits, so the ⊤ node and ⊥ node can be represented by 1 and 0. From the definition above, we can represent combination sets efficiently by applying two rules to the BDDs: eliminate every node whose HI edge points to the ⊥ node, redirecting its incoming edges to its LO child, and share all equivalent subgraphs, just as in ordinary BDDs. If the number and the order of input variables are fixed, a zero-suppressed BDD represents a Boolean function uniquely (as shown in Figure 2, it is possible to use a BDD to represent a Boolean binary tree).
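As a concrete illustration of these two rules, here is a minimal sketch of a node constructor applying zero-suppression and node sharing; the tuple representation and the Python dict used as the uniq-table are assumptions of this sketch, not Minato's implementation:

```python
# A minimal sketch of ZDD nodes with the two rules above: zero-suppression and
# node sharing.

BOT, TOP = 'BOT', 'TOP'     # the terminal nodes, representable as 0 and 1
_uniq = {}                  # (var, id(lo), id(hi)) -> shared node

def getnode(var, lo, hi):
    """Canonical node testing `var`, with LO child `lo` and HI child `hi`."""
    if hi is BOT:           # zero-suppression: the HI edge points to 0, drop the node
        return lo
    key = (var, id(lo), id(hi))          # children are already canonical, so identity suffices
    return _uniq.setdefault(key, (var, lo, hi))

# The elementary family e2 = {{2}} of Figure 3:
e2 = getnode(2, BOT, TOP)
# The family {∅, {2}} of Figure 4 -- the node is kept even though both children
# are equal; the rule removing such nodes belongs to BDDs, not ZDDs:
f4 = getnode(2, TOP, TOP)
print(e2, f4)
```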
Let F be a ZDD and let v be its root node. Then: if v is the terminal node ⊥, F is the empty family ∅; if v is the terminal node ⊤, F is {∅}, the family containing only the empty set. Otherwise, one may represent the LO branch as the sets in F that do not contain v: {\displaystyle F_{0}=\{\alpha :\alpha \in F,v\notin \alpha \}} And the HI branch as the sets in F that do contain v, with v removed: {\displaystyle F_{1}=\{\alpha \setminus \{v\}:\alpha \in F,v\in \alpha \}} so that F is recovered as F0 together with the sets of F1 with v added back. Figure 3: The family {\displaystyle \emptyset \cup \{\emptyset \cup \{2\}\}=\{\{2\}\}}. We may call this {\displaystyle e_{2}}, an elementary family. Elementary families are of the form {\displaystyle \{\{n\}\}} and are denoted by {\displaystyle e_{n}}. Figure 4: The family {\displaystyle \{\emptyset \}\cup \{\emptyset \cup \{2\}\}=\{\emptyset ,\{2\}\}} Figure 5: The family {\displaystyle \{\{2\}\}\cup \{\emptyset \cup \{1\}\}=\{\{1\},\{2\}\}} Figure 6: The family {\displaystyle \{\{1\}\cup \{2\}\}=\{\{1,2\}\}} One feature of ZDDs is that their form does not depend on the number of input variables as long as the combination sets are the same; it is unnecessary to fix the number of input variables before generating graphs. ZDDs automatically suppress the variables for objects which never appear in any combination, hence their efficiency for manipulating sparse combinations. Another advantage of ZDDs is that the number of 1-paths in the graph is exactly equal to the number of elements in the combination set. In original BDDs, node elimination breaks this property. Therefore, ZDDs are better than simple BDDs for representing combination sets. It is, however, better to use the original BDDs when representing ordinary Boolean functions, as shown in Figure 7. The basic operations for ZDDs are slightly different from those of the original BDDs; for example, Empty() returns ∅, the empty family. In ZDDs, there is no NOT operation, which is an essential operation in original BDDs. The reason is that the complement of a family P cannot be computed without defining the universal set U; in ZDDs, the complement of P can be computed as Diff(U, P). Since {\displaystyle \left\vert P\right\vert =\left\vert P_{0}\right\vert +\left\vert P_{1}\right\vert }, we can recursively compute the number of sets in a ZDD, enabling us, for example, to retrieve the 34th set out of a 54-member family. Random access is fast, and any operation possible for an array of sets can be done efficiently on a ZDD. According to Minato, the above operations for ZDDs can be executed recursively like those of original BDDs. To describe the algorithms simply, we define the procedure Getnode(top, P0, P1) that returns a node for a variable top and two subgraphs P0 and P1. We may use a hash table, called the uniq-table, to keep each node unique; node elimination and sharing are managed only by Getnode(). Using Getnode(), the other basic operations can then be expressed recursively. These algorithms take exponential time in the number of variables in the worst case; however, we can improve the performance by using a cache that memorizes the results of recent operations, in a similar fashion to BDDs. The cache prevents duplicate executions on equivalent sub-graphs; with no duplicates, the algorithms operate in time proportional to the size of the graphs, as shown in Figures 9 and 10. ZDDs can be used to represent the five-letter words of English, for instance the set WORDS (of size 5757) from the Stanford GraphBase.
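Building on the sketch above, the Getnode()-based recursive style can be illustrated with a Union operation and a set counter; the dictionaries play the role of the uniq-table and the operation cache described in the text (again an illustrative sketch, with an assumed variable ordering in which smaller indices are tested nearer the root):

```python
# A recursive Union and a set counter over the tuple-encoded ZDD nodes above.

BOT, TOP = 'BOT', 'TOP'
_uniq, _union_cache = {}, {}

def getnode(var, lo, hi):                    # as in the sketch above
    if hi is BOT:
        return lo
    return _uniq.setdefault((var, id(lo), id(hi)), (var, lo, hi))

def union(p, q):
    """The family P ∪ Q."""
    if p is BOT or p is q:
        return q
    if q is BOT:
        return p
    key = (id(p), id(q))
    if key not in _union_cache:
        vp = p[0] if isinstance(p, tuple) else None    # None marks the ⊤ terminal
        vq = q[0] if isinstance(q, tuple) else None
        if vq is None or (vp is not None and vp < vq):
            r = getnode(vp, union(p[1], q), p[2])      # q cannot contain vp
        elif vp is None or vq < vp:
            r = getnode(vq, union(p, q[1]), q[2])      # p cannot contain vq
        else:                                          # same top variable
            r = getnode(vp, union(p[1], q[1]), union(p[2], q[2]))
        _union_cache[key] = r
    return _union_cache[key]

def count(p):
    """|P| = |P0| + |P1|: the number of member sets, i.e. the number of 1-paths."""
    if p is BOT:
        return 0
    if p is TOP:
        return 1
    return count(p[1]) + count(p[2])

e1, e2 = getnode(1, BOT, TOP), getnode(2, BOT, TOP)    # the families {{1}} and {{2}}
f = union(e1, e2)                                      # {{1}, {2}}, as in Figure 5
print(count(f))                                        # 2
```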
One way to do this is to consider the function {\displaystyle f(x_{1},...,x_{25})} that is defined to be 1 if and only if the five numbers {\displaystyle (x_{1},...,x_{5})_{2}}, {\displaystyle (x_{6},...,x_{10})_{2}}, ..., {\displaystyle (x_{21},...,x_{25})_{2}} encode the letters of an English word, where {\displaystyle a=(00001)_{2}}, ..., {\displaystyle z=(11010)_{2}}. For example, {\displaystyle f(0,0,1,1,1,0,1,1,1,1,0,1,1,1,1,0,0,1,1,0,1,1,0,0,x_{25})=x_{25}}. This function of 25 variables has Z(f) = 6233 nodes – which is not too bad for representing 5757 words. Compared to binary trees, tries, or hash tables, a ZDD may not be the best structure for simple searches, yet it is efficient at retrieving data that is only partially specified, or data that is only supposed to match a key approximately. Complex queries can be handled with ease. Moreover, a ZDD incurs no cost for variables that do not actually occur in any combination. In fact, by using a ZDD, one can represent those five-letter words as a sparse function {\displaystyle F(a_{1},...,z_{1},a_{2},...,z_{2},...,a_{5},...,z_{5})} that has 26×5 = 130 variables, where variable {\displaystyle a_{2}}, for example, determines whether the second letter is an "a". To represent the word "crazy", one can make F true when {\displaystyle c_{1}=r_{2}=a_{3}=z_{4}=y_{5}=1} and all other variables are 0. Thus, F can be considered as a family consisting of the 5757 subsets {\displaystyle \{w_{1},h_{2},i_{3},c_{4},h_{5}\}}, etc. With these 130 variables the ZDD size Z(F) is in fact 5020 instead of 6233. According to Knuth, the equivalent size B(F) using a BDD is 46,189—significantly larger than Z(F). In spite of having similar theories and algorithms, ZDDs outperform BDDs for this problem by quite a large margin. Consequently, ZDDs allow us to perform certain queries that are too onerous for BDDs. Complex families of subsets can readily be constructed from elementary families. To search for words containing a certain pattern, one may use family algebra on ZDDs to compute {\displaystyle (F/P)\sqcup P}, where P is the pattern, e.g. {\displaystyle a_{1}\sqcup h_{3}\sqcup e_{5}}. One may also use ZDDs to represent simple paths in an undirected graph. For example, there are 12 ways to go from the upper left corner of a three-by-three grid (shown in Figure 11) to the lower right corner without visiting any point twice. These paths can be represented by the ZDD shown in Figure 13, in which each node mn represents the question "does the path include the arc between m and n?" So, for example, the LO branch between 13 and 12 indicates that if the path does not include the arc from 1 to 3, the next thing to ask is whether it includes the arc from 1 to 2. The absence of a LO branch leaving node 12 indicates that any path that does not go from 1 to 3 must therefore go from 1 to 2. (The next question to ask would be about the arc between 2 and 4.) In this ZDD, we get the first path in Figure 12 by taking the HI branches at nodes 13, 36, 68, and 89 of the ZDD (LO branches that simply go to ⊥ are omitted). Although the ZDD in Figure 13 may not seem significant by any means, the advantages of a ZDD become obvious as the grid gets larger. For example, for an eight-by-eight grid, the number of simple paths from corner to corner turns out to be 789,360,053,252 (Knuth), and these paths can be represented by a ZDD with only 33,580 nodes.
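The 12-path claim for the three-by-three grid is small enough to check by brute force; the sketch below simply enumerates self-avoiding corner-to-corner paths (it does not build the ZDD of Figure 13, whose node numbering it also ignores):

```python
# A brute-force check (no ZDDs involved) of the 12-path claim for the
# three-by-three grid; the ZDD of Figure 13 encodes the same family of edge
# sets far more compactly as the grid grows.

def count_simple_paths(n):
    """Simple paths between opposite corners of an n-by-n grid graph."""
    goal = (n - 1, n - 1)

    def neighbours(v):
        r, c = v
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < n and 0 <= c + dc < n:
                yield (r + dr, c + dc)

    def dfs(v, visited):
        if v == goal:
            return 1
        return sum(dfs(w, visited | {w}) for w in neighbours(v) if w not in visited)

    return dfs((0, 0), {(0, 0)})

print(count_simple_paths(3))   # 12, matching the number of 1-paths in the ZDD
# count_simple_paths(8) would eventually return 789,360,053,252, but plain
# enumeration is hopeless at that size -- exactly where the 33,580-node ZDD pays off.
```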
A real-world example of simple paths was posed by Randal Bryant: "Suppose I wanted to take a driving tour of the Continental U.S., visiting all of the state capitols, and passing through each state only once. What route should I take to minimize the total distance?" Figure 14 shows an undirected graph for this roadmap, the numbers indicating the shortest distances between neighboring capital cities. The problem is to choose a subset of these edges that forms a Hamiltonian path of smallest total length. Every Hamiltonian path in this graph must either start or end at Augusta, Maine (ME). Suppose one starts in CA. One can find a ZDD that characterizes all paths from CA to ME. According to Knuth, this ZDD turns out to have only 7850 nodes, and it effectively shows that exactly 437,525,772,584 simple paths from CA to ME are possible. Classified by number of edges, these paths have a generating function from which it follows that the longest such paths are Hamiltonian, with a size of 2,707,075. ZDDs, in this case, are efficient for simple paths and Hamiltonian paths. Define 64 input variables to represent the squares on a chess board, each variable denoting the presence or absence of a queen on that square. The constraints are that exactly one queen is placed in each row and that no two queens share a column or a diagonal. Although one can solve this problem by constructing OBDDs, it is more efficient to use ZDDs. Constructing a ZDD for the 8-Queens problem requires 8 steps from S1 to S8, where each step Sk extends Sk−1 with the admissible placements of a queen in the k-th row (a step-by-step sketch in code is given at the end of this section). The ZDD for S8 consists of all potential solutions of the 8-Queens problem. For this particular problem, caching can significantly improve the performance of the algorithm: using a cache to avoid duplicates can make the N-Queens problem up to 4.5 times faster to solve than using only the basic operations (as defined above), as shown in Figure 10. The Knight's tour problem has historical significance. The knight's graph contains n^2 vertices to depict the squares of the chessboard, and its edges illustrate the legal moves of a knight; a knight's tour visits each square of the board exactly once. Olaf Schröer, M. Löbbing, and Ingo Wegener approached this problem, on a particular board, by assigning a Boolean variable to each edge of the graph, with a total of 156 variables to designate all the edges. A solution of the problem can be expressed by a 156-bit combination vector. According to Minato, the direct construction of a ZDD for all solutions is too large to carry out, and it is easier to divide and conquer: by dividing the problem into two parts of the board and constructing ZDDs in the subspaces, one can solve the Knight's tour problem, with each solution containing 64 edges. However, since the graph is not very sparse, the advantage of using ZDDs is not so obvious. N. Takahashi et al. suggested a fault simulation method for multiple faults using OBDDs. This deductive method transmits the fault sets from primary inputs to primary outputs, and captures the faults at the primary outputs. Since the method involves unate cube set expressions, ZDDs are more efficient, and the optimizations that ZDDs bring to unate cube set calculations indicate that ZDDs could be useful in developing VLSI CAD systems and in a myriad of other applications.
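Returning to the stepwise S1, ..., S8 construction described above for the 8-Queens problem, the following sketch uses plain Python sets of placements in place of a real ZDD, so it shows only the step structure, not the compression or caching that make the ZDD approach efficient:

```python
# The stepwise construction: S_k holds every way to place queens in the first k
# rows with no two attacking; S_8 is then the full solution set.

def n_queens_steps(n=8):
    S = {frozenset()}                          # S0: the empty placement
    for row in range(n):                       # build S1, ..., Sn
        S_next = set()
        for placement in S:
            for col in range(n):
                if all(c != col and abs(c - col) != abs(r - row)
                       for r, c in placement): # no shared column or diagonal
                    S_next.add(placement | {(row, col)})
        S = S_next
    return S                                   # Sn: all solutions

print(len(n_queens_steps(8)))                  # 92 solutions to the 8-Queens problem
```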
https://en.wikipedia.org/wiki/Zero-suppressed_decision_diagram
An algebraic decision diagram (ADD) or multi-terminal binary decision diagram (MTBDD) is a data structure that is used to symbolically represent a Boolean function whose codomain is an arbitrary finite set S. An ADD is an extension of a reduced ordered binary decision diagram, commonly called simply a binary decision diagram (BDD) in the literature, whose terminal nodes are not restricted to the Boolean values 0 (FALSE) and 1 (TRUE).[1][2] The terminal nodes may take any value from a set of constants S. An ADD represents a function from {\displaystyle \{0,1\}^{n}} to a finite set of constants S, the carrier of the algebraic structure. An ADD is a rooted, directed, acyclic graph which has several nodes, like a BDD; however, an ADD can have more than two terminal nodes, which are elements of the set S, unlike a BDD. An ADD can also be seen as a Boolean function, or a vectorial Boolean function, by extending the codomain of the function, such that {\displaystyle f:\{0,1\}^{n}\to Q} with {\displaystyle S\subseteq Q} and {\displaystyle card(Q)=2^{n}} for some integer n. Therefore, the theorems of Boolean algebra apply to ADDs, notably Boole's expansion theorem.[1] Each node is labeled by a Boolean variable and has two outgoing edges: a 1-edge, which represents the evaluation of the variable to the value TRUE, and a 0-edge for its evaluation to FALSE. An ADD employs the same reduction rules as a BDD (or Reduced Ordered BDD): nodes whose two children are identical are eliminated, and isomorphic subgraphs are merged. ADDs are canonical according to a particular variable ordering. An ADD can be represented by a matrix according to its cofactors.[2][1] ADDs were first implemented for sparse matrix multiplication and shortest path algorithms (Bellman–Ford, repeated squaring, and Floyd–Warshall procedures).[1]
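A minimal sketch of the idea follows; the tuple encoding and helper names are assumptions made here, not a standard ADD library API. Terminal nodes carry arbitrary constants, while the node constructor applies the usual BDD reduction rules of child-merging and sharing:

```python
# A minimal sketch of an ADD/MTBDD with arbitrary terminal values.

_uniq = {}

def terminal(value):
    return ('T', value)

def node(var, lo, hi):
    """Eliminate nodes whose two children are identical and share equal nodes."""
    if lo == hi:
        return lo
    return _uniq.setdefault((var, lo, hi), ('N', var, lo, hi))

def evaluate(f, assignment):
    """Evaluate the ADD on a dict mapping each variable to 0 or 1."""
    while f[0] == 'N':
        _, var, lo, hi = f
        f = hi if assignment[var] else lo
    return f[1]

# f(x1, x2) = 5 if x1 = x2 = 1, 2 if x1 = 1 and x2 = 0, and 0 if x1 = 0
# (the variable ordering is fixed by the order of construction).
f = node('x1', terminal(0), node('x2', terminal(2), terminal(5)))
print(evaluate(f, {'x1': 1, 'x2': 0}))   # 2
```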
https://en.wikipedia.org/wiki/Algebraic_decision_diagram
In artificial intelligence, a sentential decision diagram (SDD) is a type of knowledge representation used in knowledge compilation to represent Boolean functions. SDDs can be viewed as a generalization of the influential ordered binary decision diagram (OBDD) representation, obtained by allowing decisions on multiple variables at once. Like OBDDs, SDDs allow for tractable Boolean operations, while being potentially exponentially more succinct. For this reason, they have become an important representation in knowledge compilation.[1] SDDs are defined with respect to a generalization of a variable ordering known as a variable tree (vtree).[2] Provided that they satisfy additional properties known as compression and trimming (which are analogous to the reducedness conditions of ROBDDs), SDDs are a canonical representation of Boolean functions; that is, they are unique given a vtree.[2] Like OBDDs, they allow operations such as conjunction, disjunction and negation to be computed directly on the representation in polynomial time, while being potentially more compact.[2] They also allow for polynomial-time model counting.[3][4] SDDs are known to be exponentially more succinct than OBDDs.[5] SDDs are used as a compilation target for probabilistic logic programs by the ProbLog2 system, since they support tractable (weighted) model counting as well as tractable negation, conjunction and disjunction while being more succinct than BDDs.[3] SDDs have also been extended to model probability distributions, in which context they are known as probabilistic sentential decision diagrams (PSDDs).[6]
https://en.wikipedia.org/wiki/Sentential_Decision_Diagram
An influence diagram (ID) (also called a relevance diagram, decision diagram or decision network) is a compact graphical and mathematical representation of a decision situation. It is a generalization of a Bayesian network, in which not only probabilistic inference problems but also decision making problems (following the maximum expected utility criterion) can be modeled and solved. IDs were first developed in the mid-1970s by decision analysts with an intuitive semantic that is easy to understand. They are now widely adopted and are becoming an alternative to the decision tree, which typically suffers from exponential growth in the number of branches with each variable modeled. IDs are directly applicable in team decision analysis, since they allow incomplete sharing of information among team members to be modeled and solved explicitly. Extensions of IDs also find use in game theory as an alternative representation of the game tree. An ID is a directed acyclic graph with three types (plus one subtype) of node and three types of arc (or arrow) between nodes. The node types are the decision node (drawn as a rectangle), the uncertainty node (drawn as an oval), with the deterministic node (drawn as a double oval) as its subtype, and the value node (drawn as an octagon or diamond). The arc types are functional arcs (ending in a value node), conditional arcs (ending in an uncertainty or deterministic node), and informational arcs (ending in a decision node). In a properly structured ID, the decision nodes and their incoming informational arcs specify the alternatives, the uncertainty and deterministic nodes and their incoming conditional arcs model the information, and the value nodes and their incoming functional arcs quantify the preferences. Alternative, information, and preference are termed the decision basis in decision analysis; they represent the three required components of any valid decision situation. Formally, the semantics of an influence diagram is based on sequential construction of nodes and arcs, which implies a specification of all conditional independencies in the diagram. The specification is defined by the d-separation criterion of Bayesian networks. According to this semantics, every node is probabilistically independent of its non-successor nodes given the outcome of its immediate predecessor nodes. Likewise, a missing arc between a non-value node X and a non-value node Y implies that there exists a set of non-value nodes Z, e.g., the parents of Y, that renders Y independent of X given the outcome of the nodes in Z. Consider the simple influence diagram representing a situation where a decision-maker is planning their vacation. The diagram contains a decision node for the vacation activity, an uncertainty node for the Weather Condition that the decision-maker cares about, an uncertainty node for a Weather Forecast that may be available before the decision is made, and a value node for the decision-maker's satisfaction. This example highlights the power of the influence diagram in representing an extremely important concept in decision analysis known as the value of information. Consider the following three scenarios: in scenario 1, the decision-maker learns the actual Weather Condition before deciding; in scenario 2, only the Weather Forecast is available before deciding; in scenario 3, the decision must be made with neither the forecast nor the condition known. Scenario 1 is the best possible scenario for this decision situation, since there is no longer any uncertainty about what they care about (Weather Condition) when making their decision. Scenario 3, however, is the worst possible scenario, since they need to make their decision without any hint (Weather Forecast) about how what they care about (Weather Condition) will turn out. The decision-maker is usually better off (and definitely no worse off, on average) moving from scenario 3 to scenario 2 through the acquisition of new information. The most they should be willing to pay for such a move is called the value of information on Weather Forecast, which is essentially the value of imperfect information on Weather Condition. The applicability of this simple ID and of the value of information concept is tremendous, especially in medical decision making, where most decisions have to be made with imperfect information about patients, diseases, etc. Influence diagrams are hierarchical and can be defined either in terms of their structure or, in greater detail, in terms of the functional and numerical relations between diagram elements.
An ID that is consistently defined at all levels—structure, function, and number—is a well-defined mathematical representation and is referred to as a well-formed influence diagram (WFID). WFIDs can be evaluated using reversal and removal operations to yield answers to a large class of probabilistic, inferential, and decision questions. More recent techniques have been developed by artificial intelligence researchers concerning Bayesian network inference (belief propagation). An influence diagram having only uncertainty nodes (i.e., a Bayesian network) is also called a relevance diagram. An arc connecting node A to B implies not only that "A is relevant to B", but also that "B is relevant to A" (i.e., relevance is a symmetric relationship).
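The value-of-information comparison among the three scenarios can be made concrete with a small calculation; every probability and utility below is invented for illustration and is not part of the example above:

```python
# A numeric sketch of the value-of-information idea for the vacation example.

P_cond = {'sunny': 0.7, 'rainy': 0.3}            # P(Weather Condition)
P_forecast = {                                   # P(Weather Forecast | Condition)
    'sunny': {'good': 0.8, 'bad': 0.2},
    'rainy': {'good': 0.3, 'bad': 0.7},
}
utility = {                                      # satisfaction(activity, condition)
    ('outdoor', 'sunny'): 100, ('outdoor', 'rainy'): 0,
    ('indoor',  'sunny'): 60,  ('indoor',  'rainy'): 60,
}

def best_eu(p_sunny):
    """Expected utility of the best activity under a belief P(sunny) = p_sunny."""
    return max(p_sunny * utility[(a, 'sunny')] + (1 - p_sunny) * utility[(a, 'rainy')]
               for a in ('outdoor', 'indoor'))

# Scenario 3: decide with no information at all.
eu_no_info = best_eu(P_cond['sunny'])

# Scenario 2: observe the forecast, update by Bayes' rule, then decide.
eu_forecast = 0.0
for f in ('good', 'bad'):
    p_f = sum(P_cond[c] * P_forecast[c][f] for c in P_cond)
    p_sunny_given_f = P_cond['sunny'] * P_forecast['sunny'][f] / p_f
    eu_forecast += p_f * best_eu(p_sunny_given_f)

# Scenario 1: observe the Weather Condition itself before deciding.
eu_perfect = sum(P_cond[c] * max(utility[(a, c)] for a in ('outdoor', 'indoor'))
                 for c in P_cond)

print(eu_no_info, eu_forecast, eu_perfect)       # roughly 70.0, 77.0, 88.0
voi_forecast = eu_forecast - eu_no_info          # value of (imperfect) information: about 7
vopi = eu_perfect - eu_no_info                   # value of perfect information: about 18
```

With these numbers the expected utilities come out ordered as scenario 3 < scenario 2 < scenario 1, and the two differences give the value of the imperfect forecast and the value of perfect information, respectively.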
https://en.wikipedia.org/wiki/Influence_diagram