Crowdfunding is a process in which individuals or groups pool money and other resources to fund projects initiated by other people or organizations "without standard financial intermediaries."[1] Crowdfunded projects may include creative works, products, nonprofit organizations, supporting entrepreneurship, businesses, or donations for a specific purpose (e.g., to pay for a medical procedure). Crowdfunding usually takes place via an online portal that handles the financial transactions involved and may also provide services such as media hosting, social networking, and facilitating contact with contributors. It has increased since the passage of the Jumpstart Our Business Startups (JOBS) Act.
Crowdfunding has a wide reach and may be used by businesses, nonprofit organizations, individuals, or groups. It takes place across a variety of settings, from social media platforms to face-to-face environments. The use of crowdfunding has increased; however, success is not guaranteed.
Funds may be sought to start a business, to support a cause, or to reach a fundraising goal. Most crowdfunding projects are small and "seek to raise small amounts of capital, often under $1000."[1] An individual or organization may not qualify for a traditional bank loan, and crowdfunding provides another opportunity to gain financial support from others. The use of crowdfunding has gained an increased presence since the JOBS Act and has a significant social media presence. "Approximately 25 percent of real-world relationships start online, with people of all ages migrating online to find a partner. Crowdfunding is doing for small businesses and entrepreneurs what dating sites have done for singles."[2] Those unable to procure funding from traditional methods may be interested in pursuing crowdfunding as an option; however, the success rate may be a deterrent. E. Mollick examined Kickstarter projects from 2009 through 2012 and found many projects were not successful, as only "3% raise 50% of their goal," and he stated that successful projects succeed "by relatively small margins."[1]
Crowdfunding is donation-based fundraising for businesses or creative projects, typically via an online funding portal.[3] Some but not all crowdfunding projects offer contributors rewards, which may differ based on the amount of money donated. Rewards can include copies of a creative work, products generated with the funding, special or personalized incentives (such as autographed works or promotional merchandise), or public recognition. One can classify crowdfunding as using one of the following models:
In equity crowdfunding, the crowdfunding approach is used to raise investment capital, and contributors receive equity in the resulting business. It is a joint effort between individuals to support the causes of other people or organizations in the form of equity. Contributors may act as investors and receive shares directly, or the crowdfunding service may act as a nominated agent.[4] Equity crowdfunding helps "the 90 percent of businesses that were left out in the cold" by traditional funding methods, which is why it has become such a viable option for business startups.[2]
Equity-based funding is illegal in many countries, such as India. In the United States, the JOBS Act of 2012 regulated the practice. This "legislation was intended to increase access to capital for the innovative companies" in need of investment capital and allows a pool of small investors to come together.[2] The regulation was updated in 2021 by the SEC, allowing companies to raise up to $5 million per year from unaccredited investors and allowing investors to invest more.[5]
This mode, also known as "non-equity" funding, has become increasingly popular, with a 230 percent increase in 2012.[2] Reward-based crowdfunding may fund campaigns supporting the free development of software, the promotion of motion pictures, scientific research, development of inventions, etc. Reward-based funders expect a return from the project.
This model is known as "peer-to-peer", "P2P", "marketplace lending", or "crowdlending". Borrowers set up campaigns to fulfill their financial needs, and lenders contribute toward the goal in return for interest. This method of online funding may prove to be "a threat to the traditional banking system in the areas of consumer and business loans, as has already been demonstrated by the rapid success of [these] online lending marketplaces."[2]
A plaintiff requests a monetary donation to fund a court case. If the claimant wins, investors may get more than their initial investment.[3]
This type of crowdfunding "is part of a trend in which people are relying less on charities to help them fulfill their philanthropic aims".[6] The best example might be raising funds from individuals to support personal or social causes.
People make donations for different reasons and give to different entities. Donors may give to feel good about themselves or because they believe in a cause. Some donations are made to individuals while others are made to organizations. The same is true in online crowdfunding. However, some differences exist in their method of giving, geography, and demographics.
Online crowdfunding donors differ from traditional fundraising donors in that donors give anonymously, do not have a connection to the recipient, and may seek out a cause or recipient to give to.[7] Another important factor is that online donors are not limited by their geographic location and can give to individuals or organizations anywhere in the world. Once a fundraiser is created, individuals can share the details anywhere to attract donors and gather funds for their cause. When it comes to motives, donations are made to individuals to help them reach a goal and typically drop off once that is met; however, donations to organizations are made for a greater societal good.[7] The demographics of online donors vary from traditional donors, as "online donors tend to be younger and give larger gifts than traditional donors."[8] This is important for online campaign organizers to note as they determine their target audience; however, those over 50 have increased their social media usage and have a presence on Facebook.[8]
More research is needed in regard to the topic of crowdfunding in general. There are benefits to online crowdfunding, as it has the ability to tap into audiences that are not in close geographic proximity to an individual or organization and to increase awareness about a campaign. However, with relatively low funding success rates reported, "social networking and traditional approaches to fundraising may be complements" that help individuals and organizations raise funds, but not a replacement.[8]
|
https://en.wikipedia.org/wiki/Comparison_of_crowdfunding_services
|
Crowdmapping is a subtype of crowdsourcing[1][2] by which an aggregation of crowd-generated inputs, such as captured communications and social media feeds, is combined with geographic data to create a digital map that is as up-to-date as possible[3] on events such as wars, humanitarian crises, crime, elections, or natural disasters.[4][5] Such maps are typically created collaboratively by people coming together over the Internet.[3][6]
The information can typically be sent to the map initiator or initiators by SMS or by filling out a form online, and is then gathered on a map online automatically or by a dedicated group.[7] In 2010, Ushahidi released "Crowdmap", a free and open-source platform by which anyone can start crowdmapping projects.[8][9][10][11][12]
Crowdmapping can be used to track fires, floods, pollution,[6] crime, political violence, and the spread of disease. It can bring a level of transparency to fast-moving events that are difficult for traditional media to adequately cover, as well as to problem areas[6] and longer-term trends that may be difficult to identify through the reporting of individual events.[5]
During disasters the timeliness of relevant maps is critical as the needs and locations of victims may change rapidly.[3]
The use of crowdmapping by authorities can improve situational awareness during an incident and be used to support incident response.[6]
Crowdmaps are an efficient way to visually demonstrate the geographical spread of a phenomenon.[7]
|
https://en.wikipedia.org/wiki/Crowdmapping#Examples
|
This is a comprehensive list of grid computing infrastructure projects.
These projects attempt to make large physical computation infrastructures available for researchers to use:
|
https://en.wikipedia.org/wiki/List_of_grid_computing_projects
|
Citizen science projects are activities sponsored by a wide variety of organizations so non-scientists can meaningfully contribute to scientific research.
Activities vary widely, from transcribing old ship logbooks to digitize the data as part of the Old Weather project, to observing and counting birds at home or in the field for eBird.[1][2] Participation can be as simple as playing a computer game for a project called Eyewire that may help scientists learn more about retinal neurons.[3] It can also be more in depth, such as when citizens collect water quality data over time to assess the health of local waters, or help discover and name new species of insects.[4][5] An emerging branch of citizen science is community mapping projects that utilize smartphone and tablet technology. For example, TurtleSAT[6] is a community mapping project that is mapping freshwater turtle deaths throughout Australia.
This list of citizen science projects involves projects that engage all age groups. Some projects are aimed specifically at younger demographics, like iTechExplorers,[7] which was created by a 14-year-old in the UK to assess the effects of bedtime technology on the body's circadian rhythm and can be completed in a classroom setting. Other projects, like AgeGuess,[8] focus on senior demographics and enable the elderly to upload photos of themselves so the public can guess their ages.
Lists of citizen science projects may change. For example, the Old Weather project website indicated that, as of January 10, 2015, 51% of the logs were completed.[9] When that project reaches 100 percent, it will move to the completed list.
Citizen scientists anywhere in the world can participate in these projects.
These projects require that citizen scientists be local to a region of study.
Outcomes:
(1) eBook: 'Those Who Suffer Much, Know Much' (2010): http://pandora.nla.gov.au/pan/104801/20100925-0000/2010.pdf
(2) Spurred a significant increase in scientific research into LDN (low-dose naltrexone): https://www.ncbi.nlm.nih.gov/pubmed/?term=%22low-dose+naltrexone%22%5BALL+FIELDS%5D+NOT+(dependence%5BTitle%5D)+NOT+(dependent%5BTitle%5D)+NOT+(oxycodone%5BTitle%5D)+NOT+(withdrawal%5BTitle%5D)+NOT+(cocaine%5BTitle%5D)+NOT+(morphine%5BTitle%5D)+NOT+(itch-related%5BTitle%5D)+NOT+(drinking%5BTitle%5D)+NOT+(alcohol%5BTitle%5D)+NOT+(cigarette%5BTitle%5D)+NOT+(smoker%5BTitle%5D)+NOT+(smoking%5BTitle%5D)+NOT+(smokers%5BTitle%5D)+NOT+(nicotine%5BTitle%5D)+NOT+(detoxification%5BTitle%5D)+NOT+(gambling%5BTitle%5D)+NOT+(self-biting%5BTitle%5D)
|
https://en.wikipedia.org/wiki/List_of_citizen_science_projects
|
This is a list of notable applications (apps) that run on the Android platform which meet guidelines for free software and open-source software.
There are a number of third-party maintained lists of open-source Android applications, including:
|
https://en.wikipedia.org/wiki/List_of_free_and_open-source_Android_applications
|
In information systems, a tag is a keyword or term assigned to a piece of information (such as an Internet bookmark, multimedia, a database record, or a computer file). This kind of metadata helps describe an item and allows it to be found again by browsing or searching.[1] Tags are generally chosen informally and personally by the item's creator or by its viewer, depending on the system, although they may also be chosen from a controlled vocabulary.[2]: 68
Tagging was popularized by websites associated with Web 2.0 and is an important feature of many Web 2.0 services.[2][3] It is now also part of other database systems, desktop applications, and operating systems.[4]
People use tags to aid classification, mark ownership, note boundaries, and indicate online identity. Tags may take the form of words, images, or other identifying marks. An analogous example of tags in the physical world is museum object tagging. People were using textual keywords to classify information and objects long before computers. Computer-based search algorithms made the use of such keywords a rapid way of exploring records.
Tagging gained popularity due to the growth of social bookmarking, image sharing, and social networking websites.[2] These sites allow users to create and manage labels (or "tags") that categorize content using simple keywords. Websites that include tags often display collections of tags as tag clouds,[a] as do some desktop applications.[b] On websites that aggregate the tags of all users, an individual user's tags can be useful both to them and to the larger community of the website's users.
Tagging systems have sometimes been classified into two kinds: top-down and bottom-up.[3]: 142 [4]: 24 Top-down taxonomies are created by an authorized group of designers (sometimes in the form of a controlled vocabulary), whereas bottom-up taxonomies (called folksonomies) are created by all users.[3]: 142 This definition of "top-down" and "bottom-up" should not be confused with the distinction between a single hierarchical tree structure (in which there is one correct way to classify each item) and multiple non-hierarchical sets (in which there are multiple ways to classify an item); the structure of both top-down and bottom-up taxonomies may be hierarchical, non-hierarchical, or a combination of both.[3]: 142–143 Some researchers and applications have experimented with combining hierarchical and non-hierarchical tagging to aid in information retrieval.[7][8][9] Others combine top-down and bottom-up tagging,[10] including in some large library catalogs (OPACs) such as WorldCat.[11][12]: 74 [13][14]
When tags or other taxonomies have further properties (or semantics) such as relationships and attributes, they constitute an ontology.[3]: 56–62
In a folder system, a file cannot exist in two or more folders, so a tag system has been thought more convenient. Transitioning to a tag system, however, requires awareness of the differences between the two. In a folder system, the classification information is stored outside the file, so files can be re-filed at once; in a tag system, the classification information is stored inside the file, so changing a tag means changing the file itself, which must be saved again and takes time.
Metadata tags as described in this article should not be confused with the use of the word "tag" in some software to refer to an automatically generated cross-reference; examples of the latter are tags tables in Emacs[15] and smart tags in Microsoft Office.[16]
The use of keywords as part of an identification and classification system long predates computers. Paper data storage devices, notably edge-notched cards, that permitted classification and sorting by multiple criteria were already in use prior to the twentieth century, and faceted classification has been used by libraries since the 1930s.
In the late 1970s and early 1980s, Emacs, the text editor for Unix systems, offered a companion software program called Tags that could automatically build a table of cross-references called a tags table that Emacs could use to jump between a function call and that function's definition.[17] This use of the word "tag" did not refer to metadata tags, but was an early use of the word "tag" in software to refer to a word index.
Online databases and early websites deployed keyword tags as a way for publishers to help users find content. In the early days of the World Wide Web, the keywords meta element was used by web designers to tell web search engines what the web page was about, but these keywords were only visible in a web page's source code and were not modifiable by users.
In 1997, the collaborative portal "A Description of the Equator and Some ØtherLands", produced by documenta X, Germany, used the folksonomic term Tag for its co-authors and guest authors on its upload page.[18] In "The Equator", the term Tag for user input was described as an abstract literal or keyword to aid the user. However, users defined singular Tags and did not share Tags at that point.
In 2003, the social bookmarking website Delicious provided a way for its users to add "tags" to their bookmarks (as a way to help find them later);[2]: 162 Delicious also provided browseable aggregated views of the bookmarks of all users featuring a particular tag.[19] Within a couple of years, the photo sharing website Flickr allowed its users to add their own text tags to each of their pictures, constructing flexible and easy metadata that made the pictures highly searchable.[20] The success of Flickr and the influence of Delicious popularized the concept,[21] and other social software websites—such as YouTube, Technorati, and Last.fm—also implemented tagging.[22] In 2005, the Atom web syndication standard provided a "category" element for inserting subject categories into web feeds, and in 2007 Tim Bray proposed a "tag" URN.[23]
Many blog systems (and other web content management systems) allow authors to add free-form tags to a post, along with (or instead of) placing the post into a predetermined category.[a] For example, a post may display that it has been tagged with baseball and tickets. Each of those tags is usually a web link leading to an index page listing all of the posts associated with that tag. The blog may have a sidebar listing all the tags in use on that blog, with each tag leading to an index page. To reclassify a post, an author edits its list of tags. All connections between posts are automatically tracked and updated by the blog software; there is no need to relocate the page within a complex hierarchy of categories.
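The index-page mechanics described above can be sketched as a simple mapping from tags to posts. This is a hypothetical illustration (the post titles and field names are invented), not the API of any particular blog engine:

```python
from collections import defaultdict

# Hypothetical posts, each carrying free-form tags.
posts = [
    {"title": "Opening day recap", "tags": ["baseball", "tickets"]},
    {"title": "Season ticket guide", "tags": ["tickets"]},
]

# Build the tag index: each tag maps to the posts carrying it,
# which is exactly what a per-tag index page would list.
index = defaultdict(list)
for post in posts:
    for tag in post["tags"]:
        index[tag].append(post["title"])

print(sorted(index["tickets"]))
# → ['Opening day recap', 'Season ticket guide']
```

Reclassifying a post amounts to editing its `tags` list and rebuilding the index; nothing has to move within a hierarchy.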
Some desktop applications and web applications feature their own tagging systems, such as email tagging in Gmail and Mozilla Thunderbird,[12]: 73 bookmark tagging in Firefox,[24] audio tagging in iTunes or Winamp, and photo tagging in various applications.[25] Some of these applications display collections of tags as tag clouds.[b]
There are various systems for applying tags to the files in a computer's file system.
In Apple's Mac System 7, released in 1991, users could assign one of seven editable colored labels (with editable names such as "Essential", "Hot", and "In Progress") to each file and folder.[26] In later iterations of the Mac operating system, ever since OS X 10.9 was released in 2013, users could assign multiple arbitrary tags as extended file attributes to any file or folder,[27] and before that time the open-source OpenMeta standard provided similar tagging functionality for Mac OS X.[28]
Several semantic file systems that implement tags are available for the Linux kernel, including Tagsistant.[29]
Microsoft Windows allows users to set tags only on Microsoft Office documents and some kinds of picture files.[30]
Cross-platform file tagging standards include Extensible Metadata Platform (XMP), an ISO standard for embedding metadata into popular image, video, and document file formats, such as JPEG and PDF, without breaking their readability by applications that do not support XMP.[31] XMP largely supersedes the earlier IPTC Information Interchange Model. Exif is a standard that specifies the image and audio file formats used by digital cameras, including some metadata tags.[32] TagSpaces is an open-source cross-platform application for tagging files; it inserts tags into the filename.[33]
An official tag is a keyword adopted by events and conferences for participants to use in their web publications, such as blog entries, photos of the event, and presentation slides.[34] Search engines can then index them to make relevant materials related to the event searchable in a uniform way. In this case, the tag is part of a controlled vocabulary.
A researcher may work with a large collection of items (e.g., press quotes, a bibliography, images) in digital form. If they wish to associate each item with a small number of themes (e.g., chapters of a book, or sub-themes of the overall subject), then a group of tags for these themes can be attached to each of the items in the larger collection.[35] In this way, freeform classification allows the author to manage what would otherwise be unwieldy amounts of information.[36]
A triple tag or machine tag uses a special syntax to define extra semantic information about the tag, making it easier or more meaningful for interpretation by a computer program.[37] Triple tags comprise three parts: a namespace, a predicate, and a value. For example, geo:long=50.123456 is a tag for the geographical longitude coordinate whose value is 50.123456. This triple structure is similar to the Resource Description Framework model for information.
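The namespace:predicate=value structure makes machine tags trivially parseable. A minimal sketch of such a parser (the function name is illustrative, and it assumes a well-formed tag containing both delimiters):

```python
def parse_triple_tag(tag: str) -> tuple[str, str, str]:
    """Split a machine tag like 'geo:long=50.123456' into its
    namespace, predicate, and value parts."""
    namespace, rest = tag.split(":", 1)   # split off the namespace
    predicate, value = rest.split("=", 1)  # split predicate from value
    return namespace, predicate, value

print(parse_triple_tag("geo:long=50.123456"))
# → ('geo', 'long', '50.123456')
```

Because the three parts come out as distinct fields, a program can, for example, collect all tags in the `geo` namespace and treat their values as coordinates.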
The triple tag format was first devised for geolicious in November 2004,[38] to map Delicious bookmarks, and gained wider acceptance after its adoption by Mappr and GeoBloggers to map Flickr photos.[39] In January 2007, Aaron Straup Cope at Flickr introduced the term machine tag as an alternative name for the triple tag, adding some questions and answers on purpose, syntax, and use.[40]
Specialized metadata for geographical identification is known as geotagging; machine tags are also used for other purposes, such as identifying photos taken at a specific event or naming species using binomial nomenclature.[41]
A hashtag is a kind of metadata tag marked by the prefix #, sometimes known as a "hash" symbol. This form of tagging is used on microblogging and social networking services such as Twitter, Facebook, Google+, VK, and Instagram. The hash is used to distinguish tag text from other text in the post.
A knowledge tag is a type of meta-information that describes or defines some aspect of a piece of information (such as a document, digital image, database table, or web page).[42] Knowledge tags are more than traditional non-hierarchical keywords or terms; they are a type of metadata that captures knowledge in the form of descriptions, categorizations, classifications, semantics, comments, notes, annotations, hyperdata, hyperlinks, or references that are collected in tag profiles (a kind of ontology).[42] These tag profiles reference an information resource that resides in a distributed, and often heterogeneous, storage repository.[42]
Knowledge tags are part of a knowledge management discipline that leverages Enterprise 2.0 methodologies for users to capture insights, expertise, attributes, dependencies, or relationships associated with a data resource.[3]: 251 [43] Different kinds of knowledge can be captured in knowledge tags, including factual knowledge (found in books and data), conceptual knowledge (found in perspectives and concepts), expectational knowledge (needed to make judgments and hypotheses), and methodological knowledge (derived from reasoning and strategies).[43] These forms of knowledge often exist outside the data itself and are derived from personal experience, insight, or expertise. Knowledge tags are considered an expansion of the information itself that adds additional value, context, and meaning. They are valuable for preserving organizational intelligence that is often lost due to turnover, for sharing knowledge stored in the minds of individuals that is typically isolated and unharnessed by the organization, and for connecting knowledge that is often lost or disconnected from an information resource.[44]
In a typical tagging system, there is no explicit information about the meaning or semantics of each tag, and a user can apply new tags to an item as easily as applying older tags.[2] Hierarchical classification systems can be slow to change and are rooted in the culture and era that created them; in contrast, the flexibility of tagging allows users to classify their collections of items in the ways that they find useful, but the personalized variety of terms can present challenges when searching and browsing.
When users can freely choose tags (creating a folksonomy, as opposed to selecting terms from a controlled vocabulary), the resulting metadata can include homonyms (the same tags used with different meanings) and synonyms (multiple tags for the same concept), which may lead to inappropriate connections between items and inefficient searches for information about a subject.[45] For example, the tag "orange" may refer to the fruit or the color, and items related to a version of the Linux kernel may be tagged "Linux", "kernel", "Penguin", "software", or a variety of other terms. Users can also choose tags that are different inflections of words (such as singular and plural),[46] which can contribute to navigation difficulties if the system does not include stemming of tags when searching or browsing. Larger-scale folksonomies address some of the problems of tagging, in that users of tagging systems tend to notice the current use of "tag terms" within these systems, and thus use existing tags in order to easily form connections to related items. In this way, folksonomies may collectively develop a partial set of tagging conventions.
Despite the apparent lack of control, research has shown that a simple form of shared vocabulary emerges in social bookmarking systems. Collaborative tagging exhibits a form of complex systems dynamics (or self-organizing dynamics).[47] Thus, even if no central controlled vocabulary constrains the actions of individual users, the distribution of tags converges over time to stable power law distributions.[47] Once such stable distributions form, simple folksonomic vocabularies can be extracted by examining the correlations that form between different tags. In addition, research has suggested that it is easier for machine learning algorithms to learn tag semantics when users tag "verbosely"—when they annotate resources with a wealth of freely associated, descriptive keywords.[48]
Tagging systems open to the public are also open to tag spam, in which people apply an excessive number of tags or unrelated tags to an item (such as a YouTube video) in order to attract viewers. This abuse can be mitigated using human or statistical identification of spam items.[49] The number of tags allowed may also be limited to reduce spam.
Some tagging systems provide a single text box to enter tags, so to be able to tokenize the string, a separator must be used. Two popular separators are the space character and the comma. To enable the use of separators in the tags, a system may allow for higher-level separators (such as quotation marks) or escape characters. Systems can avoid the use of separators by allowing only one tag to be added to each input widget at a time, although this makes adding multiple tags more time-consuming.
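The space-separated scheme with quotation marks as a higher-level separator can be sketched with Python's standard shlex module, which splits on whitespace while treating quoted substrings as single tokens. This is one plausible tokenization, not how any particular tagging system does it:

```python
import shlex

def tokenize_tags(raw: str) -> list[str]:
    """Split a single tag-entry string on spaces, honoring
    quotation marks so multi-word tags survive as one token."""
    return shlex.split(raw)

print(tokenize_tags('baseball "san diego" tickets'))
# → ['baseball', 'san diego', 'tickets']
```

A comma-separated system would instead split on ",", trading the ability to use commas inside tags for the ability to use spaces freely.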
A syntax for use within HTML is the rel-tag microformat, which uses the rel attribute with value "tag" (i.e., rel="tag") to indicate that the linked-to page acts as a tag for the current context.[50]
|
https://en.wikipedia.org/wiki/Knowledge_tagging
|
A hashtag is a metadata tag operator that is prefaced by the hash symbol, #. On social media, hashtags are used on microblogging and photo-sharing services—especially Twitter and Tumblr—as a form of user-generated tagging that enables cross-referencing of content by topic or theme.[1] For example, a search within Instagram for the hashtag #bluesky returns all posts that have been tagged with that term. After the initial hash symbol, a hashtag may include letters, numerals, or other punctuation.[2]
The use of hashtags was first proposed by American blogger and product consultant Chris Messina in a 2007 tweet.[3][4] Messina made no attempt to patent the use because he felt that "they were born of the internet, and owned by no one".[5][6] Hashtags became entrenched in the culture of Twitter[7] and soon emerged across Instagram, Facebook, and YouTube.[8][9] In June 2014, hashtag was added to the Oxford English Dictionary as "a word or phrase with the symbol # in front of it, used on social media websites and apps so that you can search for all messages with the same subject".[10][11]
The number sign or hash symbol, #, has long been used in information technology to highlight specific pieces of text. In 1970, the number sign was used to denote immediate address mode in the assembly language of the PDP-11[12] when placed next to a symbol or a number, and around 1973, '#' was introduced in the C programming language to indicate special keywords that the C preprocessor had to process first.[13] The pound sign was adopted for use within IRC (Internet Relay Chat) networks around 1988 to label groups and topics.[14] Channels or topics that are available across an entire IRC network are prefixed with a hash symbol # (as opposed to those local to a server, which use an ampersand '&').[15]
The use of the pound sign in IRC inspired[16] Chris Messina to propose a similar system on Twitter to tag topics of interest on the microblogging network.[17] He proposed the usage of hashtags on Twitter:
How do you feel about using # (pound) for groups. As in #barcamp [msg]?
According to Messina, he suggested use of the hashtag to make it easy for lay users without specialized knowledge of search protocols to find specific relevant content. Therefore, the hashtag "was created organically by Twitter users as a way to categorize messages".[18]
The first published use of the term "hash tag" was in a blog post, "Hash Tags = Twitter Groupings", by Stowe Boyd[19] on August 26, 2007, according to lexicographer Ben Zimmer, chair of the American Dialect Society's New Words Committee.
Messina's suggestion to use the hashtag was not immediately adopted by Twitter, but the convention gained popular acceptance when hashtags were used in tweets relating to the 2007 San Diego forest fires in Southern California.[20][21] The hashtag gained international acceptance during the 2009–2010 Iranian election protests; Twitter users used both English- and Persian-language hashtags in communications during the events.[22]
Hashtags have since played critical roles in recent social movements such as #jesuischarlie, #BLM,[23] and #MeToo.[24][25]
Beginning July 2, 2009,[26] Twitter began to hyperlink all hashtags in tweets to Twitter search results for the hashtagged word (and for the standard spelling of commonly misspelled words). In 2010, Twitter introduced "Trending Topics" on the Twitter front page, displaying hashtags that are rapidly becoming popular, and the significance of trending hashtags has become so great that the company makes significant efforts to foil attempts to spam the trending list.[27] During the 2010 World Cup, Twitter explicitly encouraged the use of hashtags with the temporary deployment of "hashflags", which replaced hashtags of three-letter country codes with their respective national flags.[28]
Other platforms such as YouTube and Gawker Media followed in officially supporting hashtags,[29] and real-time search aggregators such as Google Real-Time Search began supporting hashtags.
A hashtag must begin with a hash (#) character followed by other characters, and is terminated by a space or the end of the line. Some platforms may require the # to be preceded by a space. Most or all platforms that support hashtags permit the inclusion of letters (without diacritics), numerals, and underscores.[2] Other characters may be supported on a platform-by-platform basis. Some characters, such as "&", are generally not supported, as they may already serve other search functions.[30] Hashtags are not case sensitive (a search for "#hashtag" will match "#HashTag" as well), but the use of embedded capitals (i.e., CamelCase) increases legibility and improves accessibility.
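These common rules (letters without diacritics, numerals, and underscores after the #, case-insensitive matching) can be expressed as a short regular expression. This is a sketch of the rules described above, not the exact pattern any platform uses:

```python
import re

# Letters, numerals, and underscores after '#'; platform
# specifics (other characters, preceding-space rules) vary.
HASHTAG = re.compile(r"#([A-Za-z0-9_]+)")

def extract_hashtags(text: str) -> list[str]:
    # Lowercase the matches, since hashtag search is case-insensitive.
    return [m.lower() for m in HASHTAG.findall(text)]

print(extract_hashtags("Clear skies today #BlueSky #photo_1"))
# → ['bluesky', 'photo_1']
```

Note that the CamelCase in #BlueSky aids human readers but makes no difference to matching, since both forms normalize to the same lowercase tag.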
Languages that do not use word dividers handle hashtags differently. In China, the microblogs Sina Weibo and Tencent Weibo use a double-hashtag-delimited #HashName# format, since the lack of spacing between Chinese characters necessitates a closing tag. Twitter uses a different syntax for Chinese characters and orthographies with similar spacing conventions: the hashtag contains unspaced characters, separated from preceding and following text by spaces (e.g., '我 #爱 你' instead of '我#爱你')[31] or by zero-width non-joiner characters before and after the hashtagged element, to retain a linguistically natural appearance (displaying as unspaced '我#爱你', but with invisible non-joiners delimiting the hashtag).[32]
Some communities may limit, officially or unofficially, the number of hashtags permitted on a single post.[33]
Misuse of hashtags can lead to account suspensions. Twitter warns that adding hashtags to unrelated tweets, or repeatedly using the same hashtag without adding to a conversation, can cause an account to be filtered from search results or suspended.[34]
Individual platforms may deactivate certain hashtags either for being too generic to be useful, such as #photography on Instagram, or due to their use to facilitate illegal activities.[35][36]
In 2009, StockTwits began using ticker symbols preceded by the dollar sign (e.g., $XRX).[37][38] In July 2012, Twitter began supporting the tag convention and dubbed it the "cashtag".[39][40] The convention has extended to national currencies, and Cash App has implemented the cashtag to mark usernames.
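A cashtag can be sketched the same way as a hashtag, with '$' as the prefix. The pattern below is an illustrative assumption (a short all-letter ticker such as $XRX), not StockTwits' or Twitter's actual specification, which has its own rules for suffixes and currency tags.

```python
import re

# Hypothetical cashtag pattern: '$' followed by a 1–5 letter ticker
# symbol, e.g. "$XRX". Real platforms use more elaborate rules.
CASHTAG_RE = re.compile(r"\$([A-Za-z]{1,5})\b")

def extract_cashtags(text):
    """Return ticker symbols uppercased, the conventional display form."""
    return [m.upper() for m in CASHTAG_RE.findall(text)]

print(extract_cashtags("Watching $XRX and $aapl today"))
# → ['XRX', 'AAPL']
```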
Hashtags are particularly useful in unmoderated forums that lack a formal ontological organization, and they help users find content of similar interest. Hashtags are neither registered nor controlled by any one user or group of users. They have no set definitions, meaning that a single hashtag can be used for any number of purposes and that the accepted meaning of a hashtag can change with time.
Hashtags intended for discussion of a particular event tend to use an obscure wording to avoid being caught up with generic conversations on similar subjects, such as a cake festival using #cakefestival rather than simply #cake. However, this can also make it difficult for topics to become "trending topics" because people often use different spelling or words to refer to the same topic. For topics to trend, there must be a consensus, whether silent or stated, that the hashtag refers to that specific topic.
Hashtags may be used informally to express context around a given message, with no intent to categorize the message for later searching, sharing, or other reasons. Hashtags may thus serve as a reflexive meta-commentary.[41]
This can help express contextual cues or add depth to the message that carries the hashtag, as in "My arms are getting darker by the minute. #toomuchfaketan". Hashtags can also express personal feelings and emotions, as in "It's Monday!! #excited #sarcasm", where the adjectives directly indicate the emotions of the speaker.[42]
The word hashtag is sometimes spoken aloud in informal conversation.[43] Such use may be humorous, as in "I'm hashtag confused!"[42] By August 2012, use of a hand gesture, sometimes called the "finger hashtag", in which the index and middle fingers of both hands are extended and arranged perpendicularly to form the hash, was documented.[44][45]
Companies, businesses, and advocacy organizations have taken advantage of hashtag-based discussions for promotion of their products, services or campaigns.
In the early 2010s, some television broadcasters began to employ hashtags related to programs in digital on-screen graphics, to encourage viewers to participate in a backchannel of discussion via social media prior to, during, or after the program.[46] Television commercials have sometimes contained hashtags for similar purposes.[47]
The increased usage of hashtags as brand promotion devices has been compared to the promotion of branded "keywords" by AOL in the late 1990s and early 2000s, as such keywords were also promoted at the end of television commercials and series episodes.[48]
Organized real-world events have used hashtags and ad hoc lists for discussion and promotion among participants. Hashtags are used as beacons by event participants to find each other, both on Twitter and, in many cases, during actual physical events.
Since the 2012–13 season, the NBA has allowed fans to vote players in as All-Star Game starters on Twitter and Facebook using #NBAVOTE.[49]
Hashtag-centered biomedical Twitter campaigns have been shown to increase the reach, promotion, and visibility of healthcare-related open innovation platforms.[50]
Political protests and campaigns in the early 2010s, such as #OccupyWallStreet and #LibyaFeb17, have been organized around hashtags or have made extensive usage of hashtags for the promotion of discussion. Hashtags are frequently employed to either show support or opposition towards political figures. For example, the hashtag #MakeAmericaGreatAgain signifies support for Trump, whereas #DisinfectantDonnie expresses ridicule of Trump.[51] Hashtags have also been used to promote official events; the Finnish Ministry of Foreign Affairs officially titled the 2018 Russia–United States summit as the "#HELSINKI2018 Meeting".[52]
Hashtags have been used to gather customer criticism of large companies. In January 2012, McDonald's created the #McDStories hashtag so that customers could share positive experiences about the restaurant chain, but the marketing effort was cancelled after two hours when critical tweets outnumbered praising ones.[53]
In 2017, the #MeToo hashtag went viral in response to the sexual harassment accusations against Harvey Weinstein. Its use can be considered part of hashtag activism: it spread awareness across eighty-five countries, with more than seventeen million tweets using #MeToo. The hashtag not only spread awareness of the accusations against Weinstein but also allowed women to share their own experiences of sexual violence. It gave rise to multiple related hashtags that encouraged more women to share their stories, further spreading the phenomenon of hashtag activism. The use of hashtags in this case allowed better and easier access to content related to the movement.[54]
The use of hashtags also reveals what feelings or sentiment an author attaches to a statement. This can range from the obvious, where a hashtag directly describes the state of mind, to the less obvious. For example, words in hashtags are the strongest predictor of whether or not a statement is sarcastic[55]—a difficult AI problem.[56]
Hashtags play an important role for employees and students in professional fields and education. In industry, individuals' engagement with a hashtag can provide opportunities for them to develop and gain professional knowledge in their fields.[57]
In education, research on language teachers who engaged with the #MFLtwitterati hashtag demonstrates the use of hashtags for creating community and sharing teaching resources. The majority of participants reported a positive impact on their teaching strategies, inspired by the many ideas shared by other individuals using the hashtag.[58]
Emerging research in communication and learning demonstrates how hashtag practices influence the teaching and development of students. An analysis of eight studies examined the use of hashtags in K–12 classrooms and found that hashtags assisted students in voicing their opinions and helped them understand self-organisation and the concept of space beyond place.[59] Related research demonstrated how high school students' engagement with hashtag communication practices allowed them to develop storytelling skills and cultural awareness.[60]
For young people at risk of poverty and social exclusion during the COVID-19 pandemic, Instagram hashtags were shown in a 2022 article to foster scientific education and promote remote learning.[61]
During the April 2011 Canadian party leader debate, Jack Layton, then-leader of the New Democratic Party, referred to Conservative Prime Minister Stephen Harper's crime policies as "a [sic] hashtag fail" (presumably #fail).[62][63]
In 2010, Kanye West used the term "hashtag rap" to describe a style of rapping that, according to Rizoh of the Houston Press, uses "a metaphor, a pause, and a one-word punch line, often placed at the end of a rhyme".[64][65] Rappers Nicki Minaj, Big Sean, Drake, and Lil Wayne are credited with the popularization of hashtag rap, while the style has been criticized by Ludacris, The Lonely Island,[66] and various music writers.[67]
On September 13, 2013, a hashtag, #TwitterIPO, appeared in the headline of a New York Times front-page article regarding Twitter's initial public offering.[68][69]
In 2014, Bird's Eye foods released "Mashtags", a mashed potato product with pieces shaped either like @ or #.[70]
In 2019, the British Ornithological Union included a hash character in the design of its new Janet Kear Union Medal, to represent "science communication and social media".[71]
Linguists argue that hashtagging is a morphological process and that hashtags function as words.[42][72]
The popularity of a hashtag is influenced less by its conciseness and clarity, and more by the presence of preexisting popular hashtags with similar syntactic formats. This suggests that, similar to word formation, users may see the syntax of an existing viral hashtag as a blueprint for creating new ones. For instance, the viral hashtag #JeSuisCharlie gave rise to other popular indicative mood hashtags like #JeVoteMacron and #JeChoisisMarine.[51]
|
https://en.wikipedia.org/wiki/Hashtag
|
Cooperative banking is retail and commercial banking organized on a cooperative basis. Cooperative banking institutions take deposits and lend money in most parts of the world.
Cooperative banking, as discussed here, includes retail banking carried out by credit unions, mutual savings banks, building societies and cooperatives, as well as commercial banking services provided by mutual organizations (such as cooperative federations) to cooperative businesses.
Cooperative banks are owned by their customers and follow the cooperative principle of one person, one vote. Co-operative banks are often regulated under both banking and cooperative legislation. They provide services such as savings and loans to non-members as well as to members, and some participate in the wholesale markets for bonds, money and even equities.[1] Many cooperative banks are traded on public stock markets, with the result that they are partly owned by non-members.
Member control can be diluted by these outside stakes, so they may be regarded as semi-cooperative.
Cooperative banking systems are also usually more integrated than credit union systems. Local branches of co-operative banks select their own boards of directors and manage their own operations, but most strategic decisions require approval from a central office. Credit unions usually retain strategic decision-making at a local level, though they share back-office functions, such as access to the global payments system, by federating.
Some cooperative banks are criticized for diluting their cooperative principles. Principles 2-4 of the "Statement on the Co-operative Identity" can be interpreted to require that members must control both the governance systems and capital of their cooperatives. A cooperative bank that raises capital on public stock markets creates a second class of shareholders who compete with the members for control. In some circumstances, the members may lose control. This effectively means that the bank ceases to be a cooperative. Accepting deposits from non-members may also lead to a dilution of member control.
Credit unions have the purpose of promoting thrift, providing credit at reasonable rates, and providing other financial services to their members.[2] Their members are usually required to share a common bond, such as locality, employer, religion or profession. Credit unions are usually funded entirely by member deposits and avoid outside borrowing.
They are typically (though not exclusively) the smaller form of cooperative banking institution.
In some countries they are restricted to providing only unsecured personal loans, whereas in others, they can provide business loans to farmers, and mortgages.
The special banks providing long-term loans are called Land Development Banks (LDBs). The first LDB was started at Jhang in Punjab in 1920. This bank is also based on cooperative principles. The main objective of the LDBs is to promote the development of land and agriculture and to increase agricultural production. The LDBs provide long-term finance to members directly through their branches.[3]
Building societies exist in Britain, Ireland and several Commonwealth countries. They are similar to credit unions in organisation, though few enforce a common bond. However, rather than promoting thrift and offering unsecured and business loans, their purpose is to provide home mortgages for members. Borrowers and depositors are society members, setting policy and appointing directors on a one-member, one-vote basis. Building societies often provide other retail banking services, such as current accounts, credit cards and personal loans. In the United Kingdom, regulations permit up to half of their lending to be funded by debt to non-members, allowing societies to access wholesale bond and money markets to fund mortgages. The world's largest building society is Britain's Nationwide Building Society.
Mutual savings banks and mutual savings and loan associations were very common in the 19th and 20th centuries, but declined in number and market share in the late 20th century, becoming globally less significant than cooperative banks, building societies and credit unions.
Trustee savings banks are similar to other savings banks, but they are not cooperatives, as they are controlled by trustees rather than their depositors.
The most important international associations of cooperative banks are the Brussels-based European Association of Co-operative Banks, which has 28 European and non-European members, and the Paris-based International Cooperative Banking Association (ICBA), which also has member institutions from around the world.
In Canada, cooperative banking is provided by credit unions (caisses populaires in French). As of September 30, 2012, there were 357 credit unions and caisses populaires affiliated with Credit Union Central of Canada. They operated 1,761 branches across the country with 5.3 million members and $149.7 billion in assets.[4]
The caisse populaire movement started by Alphonse Desjardins in Quebec, Canada, pioneered credit unions. Desjardins opened the first credit union in North America in 1900, from his home in Lévis, Quebec, marking the beginning of the Mouvement Desjardins. He was interested in bringing financial protection to working people.
British building societies developed into general-purpose savings and banking institutions with "one member, one vote" ownership and can be seen as a form of financial cooperative (although many de-mutualised into conventionally owned banks in the 1980s and 1990s). Until 2017, the Co-operative Group included The Co-operative Bank; however, despite its name, the Co-operative Bank was not itself a true co-operative as it was not owned directly by its members. Instead it was part-owned by a holding company which was itself a co-operative – the Co-operative Banking Group.[5] It still retains an insurance provider, The Co-operative Insurance, noted for promoting ethical investment. For the financial year 2021/2022, the British building society sector had assets of around £483 billion, of which more than half were accounted for by the cooperative Nationwide Building Society.
Important continental cooperative banking systems include the Crédit Agricole, Crédit Mutuel, and Groupe BPCE in France, Caja Rural Cooperative Group and Cajamar Cooperative Group in Spain, Rabobank in the Netherlands, the German Cooperative Financial Group in Germany, ICCREA Banca and Cassa Centrale Banca - Credito Cooperativo Italiano in Italy, Migros and Coop Bank in Switzerland, and the Raiffeisen Banking Group in Austria. The cooperative banks that are members of the European Association of Co-operative Banks have 130 million customers, 4 trillion euros in assets, and 17% of Europe's deposits. The International Confederation of Cooperative Banks (CIBP) is the oldest association of cooperative banks at the international level.
In the Nordic countries, there is a clear distinction between mutual savings banks (Sparbank) and true credit unions (Andelsbank).
In Italy, a 2015 reform required popular banks (Italian: Banca Popolare) with assets greater than €8 billion to demutualize into joint-stock companies (Italian: società per azioni).[6]
Credit unions in the United States had 96.3 million members in 2013 and assets of $1.06 trillion.[7][8] The sector had a failure rate five times lower than that of other banks during the 2008 financial crisis[9] and more than doubled lending to small businesses between 2008 and 2016, from $30 billion to $60 billion, while lending to small businesses overall declined by around $100 billion over the same period.[10]
Public trust in credit unions in the United States stands at 60%, compared to 30% for big banks,[11] and small businesses are 80% less likely to be dissatisfied with a credit union than with a big bank.[12]
Cooperative banks serve an important role in the Indian economy, especially in rural areas. In urban areas, they mainly serve small industry and self-employed workers. They are registered under the Cooperative Societies Act, 1912, and are regulated by the Reserve Bank of India under the Banking Regulation Act, 1949 and the Banking Laws (Application to Cooperative Societies) Act, 1965.[13] Anyonya Sahakari Mandali, established in 1889 in the province of Baroda, is the earliest known cooperative credit union in India.[14]
The cooperative credit system in India consists of short-term and long-term credit institutions. The short-term credit structure, which takes care of the short-term (1 to 5 years) credit needs of farmers, is a three-tier structure in most states: Primary Agricultural Cooperative Societies (PACCS) at the village level, District Central Cooperative Banks (DCCBs) at the district level, and State Cooperative Banks (StCBs) at the state level. In some states it is a two-tier structure of State Cooperative Banks and PACCS. The long-term credit structure, which caters to the long-term credit needs of farmers (up to 20 years), is a two-tier structure with Primary Agriculture and Rural Development Banks (PARDBs) at the village level and State Agriculture and Rural Development Banks above them. The State Cooperative Banks and Central Cooperative Banks are licensed by the Reserve Bank of India under the Banking Regulation Act. While the StCBs and DCCBs function like normal banks, they focus mainly on agricultural credit. The Reserve Bank of India is the regulating authority, while the National Bank for Agriculture and Rural Development (NABARD) provides refinance support and takes care of inspection of StCBs and DCCBs. The first cooperative credit society in India was started in 1904 at Thiroor in Tiruvallur District in Tamil Nadu.
Primary Cooperative Banks, otherwise known as Urban Cooperative Banks, are registered as cooperative societies under the Cooperative Societies Acts of the concerned states or the Multi-State Cooperative Societies Act. They function in urban areas, and their business is similar to that of commercial banks. They are licensed by the RBI to do banking business, and the Reserve Bank of India is both the controlling and inspecting authority for the Primary Cooperative Banks.
Ofek (Hebrew: אופק) is a cooperative initiative founded in mid-2012 that intended to establish the first cooperative bank in Israel.[15]
The recent phenomena of microcredit and microfinance are often based on a cooperative model. These focus on small business lending. In 2006, Muhammad Yunus, founder of the Grameen Bank in Bangladesh, won the Nobel Peace Prize for his ideas regarding development and his pursuit of the microcredit concept. In this concept the institution provides microloans to people who could not otherwise secure loans through conventional means.
However, cooperative banking differs from modern microfinance. In particular, members' control over financial resources is the distinguishing feature between the cooperative model and modern microfinance. The not-for-profit orientation of modern microfinance has gradually been replaced by full-cost-recovery and self-sustainable approaches, and the microfinance model has been gradually absorbed by market-oriented or for-profit institutions in most underdeveloped economies. The current dominant model of microfinance, whether provided by not-for-profit or for-profit institutions, places control over financial resources and their allocation in the hands of a small number of microfinance providers that benefit from the highly profitable sector.
Cooperative banking differs in many respects from standard microfinance institutions, both for-profit and not-for-profit. Although group lending may seemingly share some similarities with cooperative concepts in terms of joint liability, the distinctions are much greater, especially in autonomy, mobilization and control over resources, legal and organizational identity, and decision-making. Early financial cooperatives founded in Germany were better able to provide larger loans relative to borrowers' income, with longer-term maturity at lower interest rates, than modern standard microfinance institutions. The main source of funds for cooperatives is local savings, while microfinance institutions in underdeveloped economies rely heavily on donations, foreign funds, external borrowing, or retained earnings, which implies high interest rates. High interest rates, short-term maturities, and tight repayment schedules are destructive instruments for low- and middle-income borrowers, and may lead to serious debt traps or, in the best scenarios, fail to support any sort of capital accumulation. Without improving the ability of agents to earn, save, and accumulate wealth, there are no real economic gains from financial markets for the lower- and middle-income populations.[16]
A 2013 report by the ILO concluded that cooperative banks outperformed their competitors during the 2008 financial crisis. The cooperative banking sector had a 20% market share of the European banking sector, but accounted for only 7% of all the write-downs and losses between the third quarter of 2007 and first quarter of 2011. Cooperative banks were also over-represented in lending to small and medium-sized businesses in all of the 10 countries included in the report.[28]
Credit unions in the US had a failure rate five times lower than that of other banks during the crisis[9] and more than doubled lending to small businesses between 2008 and 2016, from $30 billion to $60 billion, while lending to small businesses overall declined by around $100 billion over the same period.[10]
|
https://en.wikipedia.org/wiki/Cooperative_banking
|
Count Me In (full name: Count Me In for Women's Economic Independence) is a charitable organization that provides financial assistance, business coaching and consulting services to woman-owned businesses. The assistance is provided through three basic programs: an online community for women business owners supplemented by live events; the "Make Mine a Million $ Business" award, providing up to US$50,000 to businesses with a minimum of two years in business and $250,000 in annual revenue; and the "Micro to Millions" award, offering up to $10,000 for businesses not meeting the time or revenue requirements for the larger award.[1]
Count Me In is a leading national not-for-profit provider of resources, business education and community support for women entrepreneurs seeking to grow micro businesses into million-dollar enterprises. Founded in 1999 by Iris Burnett and Nell Merlino,[2] Count Me In began as the first online microlender, and in the following years discontinued the microlending program in order to focus on providing the education and resources women need to grow their businesses and find funding from other sources.
Accelerating women's business success is estimated to generate at least four million new jobs and $700 billion in economic activity. Leading the charge and making the organizational vision a reality is Nell Merlino, co-founder (with Iris Burnett) and CEO, and the creative force behind Take Our Daughters to Work Day. Merlino was an entrepreneur who founded the organization based upon her personal experiences in growing her own small business. Facing questions regarding sources of capital, hiring quality talent and financial planning, she was unsure where to find answers. Recognizing that other women were likely facing similar circumstances and questions, she founded Count Me In for Women's Economic Independence to act as that informational resource.
In 2005 the "Make Mine a Million $ Business Competition" – known informally as "M3" – was launched in the cities of Dallas, Texas; Chicago, Illinois; Long Beach, California; Atlanta, Georgia; and New York, New York. Awardees were chosen by panels of local business owners. Following the initial launch, the M3 Competition spread across the country to include women entrepreneurs from every city and in every service or industry. "Beatriz Helena Ramos, founder and president of Dancing Diablo, a creative advertising company located in Brooklyn and Caracas, went from seeing herself as an artist/animator making $200K in annual business revenue to being the CEO of a million-dollar plus company creating jobs. She was the inspiration for our Make Mine a Million $ Business program," said Nell Merlino.[3] Today, the organization boasts a growing community of tens of thousands of women entrepreneurs utilizing an array of resources and tools to develop and grow their businesses.
The organization announced a goal of helping one million woman-owned companies achieve $1,000,000 in revenues by 2010. In support of this goal, Count Me In established partnerships with American Express' OPEN credit card as well as the QVC network and Cisco Systems to provide marketing and technological support.[4]
|
https://en.wikipedia.org/wiki/Count_Me_In_(charity)
|
Crowdfunding is the practice of funding a project or venture by raising money from a large number of people, typically via the internet.[1][2] Crowdfunding is a form of crowdsourcing and alternative finance. In 2015, over US$34 billion was raised worldwide by crowdfunding.[3]
Although similar concepts can also be executed through mail-order subscriptions, benefit events, and other methods, the term crowdfunding refers to internet-mediated registries.[4] This modern crowdfunding model is generally based on three types of actors – the project initiator who proposes the idea or project to be funded, individuals or groups who support the idea, and a moderating organization (the "platform") that brings the parties together to launch the idea.[5]
The term crowdfunding was coined in 2006 by entrepreneur and technologist Michael Sullivan to differentiate traditional fundraising from the trend of native Internet projects, companies and community efforts to support various kinds of creators. Crowdfunding has been used to fund a wide range of for-profit entrepreneurial ventures such as artistic and creative projects,[6] medical expenses, travel, and community-oriented social entrepreneurship projects.[7] Although crowdfunding has been suggested to be highly linked to sustainability, empirical validation has shown that sustainability plays only a fractional role in crowdfunding.[8] Its use has also been criticized for funding quackery, especially costly and fraudulent cancer treatments.[9][10][11][12]
Funding by collecting small donations from many people has a long history with many roots. Books have been funded in this way in the past; authors and publishers would advertise book projects in praenumeration or subscription schemes. The book would be written and published if enough subscribers signaled their readiness to buy the book once it was out. The subscription business model is not exactly crowdfunding, since the actual flow of money only begins with the arrival of the product. However, the list of subscribers has the power to create the necessary confidence among investors that is needed to risk the publication.[14]
War bonds are theoretically a form of crowdfunding military conflicts. London's mercantile community saved the Bank of England in the 1730s when customers demanded their pounds to be converted into gold – they supported the currency until confidence in the pound was restored, thus crowdfunding their own money. A clearer case of modern crowdfunding is Auguste Comte's scheme to issue notes for the public support of his further work as a philosopher. The "Première Circulaire Annuelle adressée par l'auteur du Système de Philosophie Positive" was published on March 14, 1850, and several of these notes, blank and with sums, have survived.[13] The cooperative movement of the 19th and 20th centuries is a broader precursor. It generated collective groups, such as community or interest-based groups, pooling subscribed funds to develop new concepts, products, and means of distribution and production, particularly in rural areas of Western Europe and North America. In 1885, when government sources failed to provide funding to build a monumental base for the Statue of Liberty, a newspaper-led campaign attracted small donations from 160,000 donors.[14]
Crowdfunding on the internet first gained popular and mainstream use in the arts and music communities.[15] One of the earlier instances of online crowdfunding in the music industry was in 1997, when fans of the British rock band Marillion raised US$60,000 in donations through an Internet campaign to underwrite an entire U.S. tour; however, this was not crowdfunding in its true sense, as it was not asked for by the band and was only reluctantly accepted. The band subsequently used this method to fund their studio albums.[16][17][18] This built on the success of crowdfunding via magazines, such as the 1992 campaign by the Vegan Society that crowdfunded the production of the Truth or Dairy video documentary.[19] In the film industry, writer/director Mark Tapio Kines designed a website in 1997 for his then-unfinished first feature film, the independent drama Foreign Correspondents. By early 1999, he had raised more than US$125,000 through the site from various fans and investors, providing him with the funds to complete his film.[20] In 2002, the "Free Blender" campaign was an early software crowdfunding precursor.[21][22] The campaign aimed at open-sourcing the Blender 3D computer graphics software by collecting €100,000 from the community, while offering additional benefits for donating members.[23][24]
The first company to engage in this business model was the U.S. website ArtistShare (2001).[25][26] As the model matured, more crowdfunding sites started to appear on the web, such as Kiva (2005), The Point (2008, precursor to Groupon), Indiegogo (2008), Kickstarter (2009), GoFundMe (2010), Microventures (2010), YouCaring (2011),[27][28] and Redshine Publication (2012) for book publication.[29]
The phenomenon of crowdfunding is older than the term "crowdfunding". The earliest recorded use of the word was in August 2006.[30]Crowdfunding is a part of crowdsourcing, which is a much wider phenomenon itself.
The Crowdfunding Centre's May 2014 report identified two primary types of crowdfunding: rewards-based crowdfunding and equity crowdfunding.
Reward-based crowdfunding has been used for a wide range of purposes, including album recording and motion-picture promotion,[32] free software development, inventions development, scientific research,[33] and civic projects.[34]
Many characteristics of rewards-based crowdfunding, also called non-equity crowdfunding, have been identified by research studies. In rewards-based crowdfunding, funding does not rely on location. The distance between creators and investors on Sellaband was about 3,000 miles when the platform introduced royalty sharing. The funding for these projects is distributed unevenly, with a few projects accounting for the majority of overall funding. Additionally, funding increases as a project nears its goal, encouraging what is called "herding behavior". Research also shows that friends and family account for a large, or even majority, portion of early fundraising. This capital may encourage subsequent funders to invest in the project. While funding does not depend on location, observation shows that funding is largely tied to the locations of traditional financing options. In reward-based crowdfunding, funders are often too hopeful about project returns and must revise expectations when returns are not met.[15]
Equity crowdfunding is the collective effort of individuals to support efforts initiated by other people or organizations through the provision of finance in the form of equity.[35] In the United States, the 2012 JOBS Act allows for a wider pool of small investors with fewer restrictions following its implementation.[36] Unlike non-equity crowdfunding, equity crowdfunding involves heightened "information asymmetries": the creator must not only produce the product for which they are raising capital, but also create equity through the construction of a company.[15] Equity crowdfunding, unlike donation and rewards-based crowdfunding, involves the offer of securities, which include the potential for a return on investment. Syndicates, in which many investors follow the strategy of a single lead investor, can be effective in reducing information asymmetry and in avoiding the market failure associated with equity crowdfunding.[37]
Another kind of crowdfunding raises funds for a project by offering a digital security as a reward to funders, known as an initial coin offering (ICO).[38] Some value tokens are endogenously created by particular open decentralized networks and are used to incentivize the network's client computers to expend scarce computing resources on maintaining the protocol network. These value tokens may or may not exist at the time of the crowdsale and may require substantial development effort and an eventual software release before the token is live and establishes a market value. Although funds may be raised simply for the value token itself, funds raised on blockchain-based crowdfunding can also represent equity, bonds, or even "market-maker seats of governance" for the entity being funded.[39] Examples of such crowd sales are Augur, a decentralized, distributed prediction-market software project that raised US$4 million from more than 3,500 participants;[39] the Ethereum blockchain; and "the Decentralized Autonomous Organization".[40][41][42][43]
Debt-based crowdfunding (also known as "peer-to-peer", "P2P", "marketplace lending", or "crowdlending") arose with the founding of Zopa in the UK in 2005[44] and in the US in 2006, with the launches of Lending Club and Prosper.com.[45] Borrowers apply online, generally for free, and their application is reviewed and verified by an automated system, which also determines the borrower's credit risk and interest rate. Investors buy securities in a fund that makes the loans to individual borrowers or bundles of borrowers. Investors make money from interest on the unsecured loans; the system operators make money by taking a percentage of the loan and a loan servicing fee.[45] In 2009, institutional investors entered the P2P lending arena; for example, in 2013, Google invested $125 million in Lending Club.[45] In 2014, P2P lending in the US totaled about $5 billion.[46] In 2014, UK P2P platforms lent businesses £749 million, a growth of 250% from 2012 to 2014, and lent retail customers £547 million, a growth of 108% over the same period.[47] In both countries in 2014, about 75% of all the money transferred through crowdfunding went through P2P platforms.[46] Lending Club went public in December 2014 at a valuation of around $9 billion.[45]
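The revenue split described above can be sketched with illustrative numbers. The rates and fees in this sketch are assumptions chosen for demonstration, not figures from Zopa, Lending Club, or any other platform.

```python
# Illustrative sketch of P2P lending economics with hypothetical numbers;
# real platforms set rates and fees via their own proprietary models.

def p2p_returns(principal, borrower_rate, origination_fee_pct, servicing_fee_pct):
    """Split one year of simple interest on an unsecured loan between
    the investors and the platform operator (all rates are assumptions)."""
    interest = principal * borrower_rate
    origination_fee = principal * origination_fee_pct   # taken once, from the loan
    servicing_fee = interest * servicing_fee_pct        # taken from interest paid
    investor_income = interest - servicing_fee
    platform_income = origination_fee + servicing_fee
    return investor_income, platform_income

# A hypothetical $10,000 loan at 12% interest, 1% origination fee, 1% servicing fee.
investor, platform = p2p_returns(10_000, 0.12, 0.01, 0.01)
print(investor, platform)  # 1188.0 112.0
```

The point of the split is visible in the numbers: the operator earns fees regardless of loan performance, while the investors bear the credit risk on the unsecured interest stream.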
Litigation crowdfunding allows plaintiffs or defendants to reach out to hundreds of their peers simultaneously in a semi-private and confidential manner to obtain funding, either by seeking donations or by providing a reward in return for funding. It also allows investors to purchase a stake in a claim they have funded, which may allow them to get back more than their investment if the case succeeds (the reward is based on the compensation received by the litigant at the end of the case, known as a contingent fee in the United States, a success fee in the United Kingdom, or a pactum de quota litis in many civil law systems).[48] LexShares is a platform that allows accredited investors to invest in lawsuits.[49]
Donation-based crowdfunding is the collective effort of individuals to help charitable causes.[50] In donation-based crowdfunding, funds are raised for religious, social, environmental, or other purposes.[51] Donors come together to create an online community around a common cause to help fund services and programs that address a variety of issues, including healthcare[52] and community development.[53] The defining aspect of donation-based crowdfunding is that there is no reward for donating; rather, it is based on the donor's altruistic reasoning.[54] Ethical concerns have been raised about the increasing popularity of donation-based crowdfunding, which can be affected by fraudulent campaigns and privacy issues.[55]
The inputs of the individuals in the crowd trigger the crowdfunding process and influence the ultimate value of the offerings or outcomes of the process. Individuals act as agents of the offering, selecting and promoting the projects in which they believe. They sometimes play a donor role oriented towards providing help on social projects. In some cases, they become shareholders and contribute to the development and growth of the offering. Individuals also disseminate information about projects they support in their online communities, generating further support (promoters).
The motivation for consumer participation stems from the feeling of being at least partly responsible for the success of other people's initiatives (desire for patronage), striving to be a part of a communal social initiative (desire for social participation), and seeking a payoff from monetary contributions (desire for investment).[5] Additionally, individuals participate in crowdfunding to see new products before the public does. Early access often allows funders to participate more directly in the development of the product. Crowdfunding is also particularly attractive to funders who are family and friends of a creator: it helps to mediate the terms of their financial agreement and manage each group's expectations for the project.[15]
An individual who takes part in crowdfunding initiatives tends to have several distinct traits – innovative orientation, which stimulates the desire to try new modes of interacting with firms and other consumers; social identification with the content, cause, or project selected for funding, which sparks the desire to be a part of the initiative; and (monetary) exploitation, which motivates the individual to participate by expecting a payoff.[5]Crowdfunding platforms are motivated to generate income by drawing worthwhile projects and generous funders. These sites also seek widespread public attention for their projects and platform.[15]
Crowdfunding websites helped companies and individuals worldwide raise US$89 million from members of the public in 2010, $1.47 billion in 2011, and $2.66 billion in 2012 — $1.6 billion of the 2012 amount was raised in North America.[56]
Crowdfunding is expected to reach US$1 trillion in 2025.[57] A May 2014 report released by the United Kingdom-based The Crowdfunding Centre, titled "The State of the Crowdfunding Nation", presented data showing that during March 2014 more than US$60,000 was raised per hour via global crowdfunding initiatives, and 442 crowdfunding campaigns were launched globally each day.[31]
The future growth potential of crowdfunding platforms also depends on their venture capital financing volume. Between January 2017 and April 2020, 99 venture capital financing rounds for crowdfunding platforms took place globally, raising more than half a billion US dollars in total. Over the same period, the median amount per venture capital financing round for crowdfunding was $5 million in the U.S. and $1.5 million in Europe.[58]
In 2015, it was predicted that over 2,000 crowdfunding sites would be available to choose from in 2016.[59] As of 2021, there were 1,478 crowdfunding organizations in the US.[60] As of January 2021, Kickstarter had raised more than $5.6 billion across 197,425 projects.[61]
Crowdfunding platforms have differences in the services they provide and the type of projects they support.[5]
Curated crowdfunding platforms serve as "network orchestrators" by curating the offerings that are allowed on the platform. They create the necessary organizational systems and conditions for resource integration among other players to take place.[5] Relational mediators act as an intermediary between supply and demand, replacing traditional intermediaries such as record labels and venture capitalists. These platforms link new artists, designers, and project initiators with committed supporters who believe in the people behind the projects strongly enough to provide monetary support.[15]
In response to arbitrary crowdfunding curation on existing platforms, an open-source alternative called Selfstarter[62] emerged in late 2012 from the project Lockitron after it was rejected from Kickstarter.[63] While Selfstarter required the creators of the project to set up hosting and payment processing, it proved that projects could successfully crowdfund without middlemen taking a significant percentage of the money raised.
In the summer of 1885, crowdfunding averted a crisis that threatened the completion of the Statue of Liberty. Construction of the statue's pedestal stalled due to a lack of financing. Fundraising efforts for the project fell short of the necessary amount by more than a third. New York Governor Grover Cleveland refused to appropriate city funds for the project, and Congress could not agree on a funding package.
Recognizing the social and symbolic significance of the statue, publisher Joseph Pulitzer came to the rescue by launching a five-month fundraising campaign in his newspaper The World. The paper solicited contributions by publishing articles that appealed to the emotions of New Yorkers. Donations of all sizes poured in, ranging from $0.15 to $250. More than 160,000 people across America gave, including businessmen, waiters, children, and politicians. The paper chronicled each donation, published letters from contributors on the front page, and kept a running tally of funds raised.
The campaign raised over $100,000 (roughly $2 million today), allowing the city to complete construction of the pedestal. Pulitzer and The World simultaneously saved the Statue of Liberty and gave birth to crowdfunding in American politics.
Crowdfunding for Cairo University
The Egyptian national leader Mustafa Kamel launched an initiative for public subscription in favor of establishing the first Egyptian university, publishing an advertisement in the Al-Ahram newspaper in October 1906 that called on Egyptians to fulfill this debt to the nation without delay. Many people, including schoolchildren, rushed to donate, and patriots encouraged the subscription until donations exceeded 4,400 Egyptian pounds.
The National University opened on December 21, 1908, in a large ceremony in the hall of the Shura Council of Laws, in the presence of Khedive Abbas II and senior statesmen and notables. Its director was the politician and writer Ahmed Lutfi al-Sayyid, while the chairman of its board of directors was King Fuad I. In 1953, the National University changed its name to Cairo University.
Marillion started crowdfunding in 1997, when fans of the British rock band raised $60,000 (£39,000) via the internet to help finance a North American tour.[64][17] The Professional Contractors Group, a trade body representing freelancers in the UK, raised £100,000 over a two-week period in 1999[65] from some 2,000 freelancers threatened by a government measure known as IR35. In 2003, jazz composer Maria Schneider launched the first crowdfunding campaign on ArtistShare for a new recording.[66] The recording was funded by her fans and became the first recording in history to win a Grammy Award without being available in retail stores.
Oliver Twisted (Erik Estrada, Karen Black) was an early crowdfunded film.[67] Subscribers of The Blue Sheet formed The Florida Film Investment Co (FFI) in January 1995 and started selling shares of stock at $10 a share to fund the $80,000–$100,000 film. The movie was filmed in October 1996 and was distributed by RGH/Lion's Shares Pictures.[68]
In 2004, Electric Eel Shock, a Japanese rock band, raised £10,000 from 100 fans (the Samurai 100) by offering them a lifetime membership on the band's guest list.[69] Two years later, they became the fastest band to raise a US$50,000 budget on SellaBand.[70] Franny Armstrong later created a donation system for her feature film The Age of Stupid.[71] Over five years, from June 2004 to June 2009 (the release date), she raised £1,500,000.[72]
As of the beginning of 2025, the highest reported funding for a crowdfunded project to date is Star Citizen, an online space trading and combat video game being developed by Chris Roberts and Cloud Imperium Games. It has raised over $800 million to date, and while it has a devoted fan base, it has drawn criticism as a potential scam.[73]
On April 17, 2014, The Guardian published a list of "20 of the most significant projects" launched on the Kickstarter platform prior to the date of publication,[74] including musician Amanda Palmer, who raised US$1.2 million from 24,883 backers in June 2012 to make a new album and art book.[75]
Other campaigns include:
Kickstarter has been used to successfully revive or launch television and film projects that could not get funding elsewhere.[79]These are the current record holders for projects in the "film" category:
A number of private companies thrive on crowdfunding and offer services related to a number of platforms. Examples include large companies like BackerKit, which principally offers data analysis of campaigns, or Y Combinator, which acts as a startup accelerator and receives a significant number of its applicants from platforms such as Kickstarter and Indiegogo.[83] The Italian-American company Atellani USA was originally founded with the intent to market, accelerate, and invest in startups wanting to publicize their ideas via crowdfunding platforms like Kickstarter, often designing the startup's campaign and online material.
Crowdfunding is being explored as a potential funding mechanism for creative work such as blogging and journalism,[84] music, and independent film (see crowdfunded film),[85][86] and for funding startup companies.[87][88][89][90]
Community music labels are usually for-profit organizations where "fans assume the traditional financier role of a record label for artists they believe in by funding the recording process".[91] Since pioneering crowdfunding in the film industry, Spanner Films has published a "how-to" guide.[92] A financial-press article published in mid-September 2013 stated that "the niche for crowdfunding exists in financing films with budgets in the [US]$1 to $10 million range" and that crowdfunding campaigns are "much more likely to be successful if they tap into a significant pre-existing fan base and fulfill an existing gap in the market."[93] Innovative new platforms, such as RocketHub, have emerged that combine traditional funding for creative work with branded crowdsourcing, helping artists and entrepreneurs unite with brands "without the need for a middle man."[94]
A variety of crowdfunding platforms have emerged to allow ordinary web users to support specific philanthropic projects without the need for large amounts of money.[34] GlobalGiving allows individuals to browse through a selection of small projects proposed by nonprofit organizations worldwide, donating funds to projects of their choice. Microcredit crowdfunding platforms such as Kiva facilitate crowdfunding of loans managed by microcredit organizations in developing countries. The US-based nonprofit Zidisha applies a direct person-to-person lending model to microcredit lending for low-income small business owners in developing countries.[95] In 2017, Facebook initiated "Fundraisers", an internal plug-in function that allows its users to raise money for nonprofits.[96]
DonorsChoose.org, founded in 2000, allows public school teachers in the United States to request materials for their classrooms. Individuals can give money to teacher-proposed projects, and the organization fulfills and delivers supplies to schools. There are also a number of own-branded university crowdfunding websites, which enable students and staff to create projects and receive funding from alumni of the university or the general public. Several dedicated civic crowdfunding platforms have emerged in the US and the UK, some of which have led to the first direct involvement of governments in crowdfunding. In the UK, Spacehive is used by the Mayor of London and Manchester City Council to co-fund civic projects created by citizens.[97] Similarly, dedicated humanitarian crowdfunding initiatives are emerging, involving humanitarian organizations, volunteers, and supporters in building innovative crowdfunding solutions for the humanitarian community. International organizations such as the Office for the Coordination of Humanitarian Affairs (OCHA) have been researching and publishing on the topic.[98]
One crowdfunding project, iCancer, was used to support a Phase 1 trial of AdVince, an anti-cancer drug, in 2016.[99][100]
Research into the suitability of crowdfunding for civic investment in the UK highlights that the public sector has not fully realized the benefits of a crowdfunding approach.[101]
Real estate crowdfunding is the online pooling of capital from investors to fund mortgages secured by real estate, such as "fix and flip" redevelopment of distressed or abandoned properties, equity for commercial and residential projects, acquisition of pools of distressed mortgages, home-buyer down payments, and similar real estate related outlets. Investment via specialized online platforms in the US is generally completed under Title II of the JOBS Act and is limited to accredited investors. The platforms offer low minimum investments, often $100–$10,000.[102][103] There are over 75 real estate crowdfunding platforms in the United States.[104] The growth of real estate crowdfunding is a global trend: during 2014 and 2015, more than 150 platforms were created throughout the world, including in China, the Middle East, and France. In Europe, some compare this growing industry to that of e-commerce ten years earlier.[105] Examples of real estate crowdfunding platforms are EquityMultiple, Fundrise, Yieldstreet, CrowdStreet, RealtyMogul, and SmartCrowd, the first digital real estate crowdfunding platform of its kind in the Middle East.[106][107][108]
In Europe, the requirements placed on investors are not as high as in the United States, lowering the entry barrier to real estate investments in general.[109] Real estate crowdfunding can include various project types, from commercial to residential developments, planning gain opportunities, build-to-hold projects (such as social housing), and many more. A report from the Cambridge Centre for Alternative Finance addresses both real estate crowdfunding and peer-to-peer property lending in the UK.[110]
One of the challenges of posting new ideas on crowdfunding sites is that there may be little or no intellectual property (IP) protection provided by the sites themselves: once an idea is posted, it can be copied. As Slava Rubin, founder of IndieGoGo, said: "We get asked that all the time, 'How do you protect me from someone stealing my idea?' We're not liable for any of that stuff."[111] Inventor advocates, such as Simon Brown, founder of the UK-based United Innovation Association, counsel that ideas can be protected on crowdfunding sites through early filing of patent applications, use of copyright and trademark protection, as well as a new form of idea protection supported by the World Intellectual Property Organization called Creative Barcode.[112]
A number of platforms have also emerged that specialize in the crowdfunding of scientific projects, such as experiment.com and The Open Source Science Project.[113][114] In the scientific community, these new options for research funding are viewed ambivalently. Advocates of crowdfunding for science emphasize that it allows early-career scientists to apply for their own projects early on, that it forces scientists to communicate clearly and comprehensively to a broader public, that it may alleviate problems of established funding systems that are seen to fund conventional, mainstream projects, and that it gives the public a say in science funding.[115] Critics, in turn, worry about quality control on crowdfunding platforms: if non-scientists were allowed to make funding decisions, it would be more likely that "panda bear science" is funded, i.e. research with broad appeal but little scientific substance.[116]
Initial studies found that crowdfunding is used within science, mostly by young researchers to fund small parts of their projects, and with high success rates. At the same time, funding success seems to be strongly influenced by non-scientific factors like humor, visualizations, or the ease and security of payment.[117]
To fund online and print publications, journalists are enlisting the help of crowdfunding. Crowdfunding allows small start-ups and individual journalists to fund their work without the institutional help of major public broadcasters. Stories are publicly pitched using crowdfunding platforms such as Kickstarter, Indiegogo, or Spot.us. The funds collected from crowdsourcing may be put toward travel expenses or purchasing equipment. Crowdfunding in journalism may also be viewed as a way to allow audiences to participate in news production and in creating a participatory culture.[118] Though deciding which stories are published is a role that traditionally belongs to editors at more established publications, crowdfunding can give the public an opportunity to provide input in deciding which stories are reported: this is done by funding certain reporters and their pitches. Donating can be seen as an act that "bonds" reporters and their readers, because readers are expressing interest in their work, which can be "personally motivating" or "gratifying" for reporters.[119]
Spot.us, which was closed in February 2015, was a crowdfunding platform specifically meant for journalism.[118][120] The website allowed readers, individual donors, registered Spot.us reporters, or news organizations to fund or donate talent toward a pitch of their choosing. While funders are not normally involved in editorial control, Spot.us allowed donors or "community members" to become involved with the co-creation of a story, giving them the ability to edit articles, submit photographs, or share leads and information.[119] According to an analysis by the Public Insight Network, Spot.us was not sustainable for various reasons: many contributors were not returning donors, and projects were often funded by family and friends. The overall market for crowdfunded journalism may also be a factor; donations for journalism projects accounted for 0.13 percent of the $2.8 billion that was raised in 2013.[120]
Traditionally, journalists are not involved in advertising and marketing. Crowdfunding means that journalists are attracting funders while trying to remain independent, which may pose a conflict. Being directly involved with financial aspects can therefore call journalistic integrity and journalistic objectivity into question, in part because journalists may feel pressure or "a sense of responsibility" toward funders who support a particular project.[118] Crowdfunding can also blur the line between professional and non-professional journalism, because if enough interest is generated, anyone may have their work published.[121] Crowdfunding also enables freelance journalists to travel to sites to find new sources.[118]
There is some hope that crowdfunding has potential as a tool open for use by groups of people traditionally more marginalized. The World Bank published a report titled "Crowdfunding's Potential for the Developing World" which states that "While crowdfunding is still largely a developed world phenomenon, with the support of governments and development organizations it could become a useful tool in the developing world as well. Substantial reservoirs of entrepreneurial talent, activity, and capital lay dormant in many emerging economies ... Crowdfunding and crowdfund investing have several important roles to play in the developing world's entrepreneurial and venture finance ecosystem."[122]
As the popularity of crowdfunding expanded, the SEC, state governments, and Congress responded by enacting and refining many capital-raising exemptions to allow easier access to alternative funding sources. Initially, the Securities Act of 1933 banned companies from soliciting capital from the general public for private offerings. However, "President Obama signed the Jumpstart Our Small Businesses Act ('JOBS Act') into law on April 5, 2012, which removed the ban on general solicitation activities for issuers qualifying under a new exemption called 'Rule 506(c).'" A company can now broadly solicit and generally advertise an offering and still be compliant with the exemption's requirements if:
Another change was the amendment of SEC Rule 147. Section 3(a)(11) of the Securities Act allows for unlimited capital raising from investors in a single state through an intrastate exemption. However, the SEC created Rule 147 with a number of requirements to ensure compliance. For example, intrastate solicitation was allowed, but a single out-of-state offer could destroy the exemption. Additionally, the issuer was required to be incorporated and do business in the same state as the intrastate offering. With the expansion of interstate business activities because of the internet, it became difficult for businesses to comply with the exemption. Therefore, on October 26, 2016, the SEC adopted Rule 147(a), which removed many of the restrictions to modernize the rules. For example, companies now have to do business and have their principal place of business in the state where the offering is sold, and not necessarily where it is offered, as the prior rule required.[124]
As of 2024, 33 crowdfunding permits had been issued for financial institutions.[125]
Crowdfunding campaigns provide producers with several benefits, beyond the strict financial gains.[126]The following are the non-financial benefits of crowdfunding.
There are also financial benefits to the creator. For one, crowdfunding allows creators to attain low-cost capital. Traditionally, a creator would need to look to "personal savings, home equity loans, personal credit cards, friends and family members, angel investors, and venture capitalists." With crowdfunding, creators can find funders from around the world, sell both their product and equity, and benefit from increased information flow. Additionally, crowdfunding that supports pre-buying allows creators to obtain early feedback on the product.[15] Another potential positive effect is the propensity of groups to "produce an accurate aggregate prediction" about market outcomes, as identified by the author James Surowiecki in his book The Wisdom of Crowds, thereby placing financial backing behind ventures likely to succeed.
Proponents also identify a potential outcome of crowdfunding as an exponential increase in available venture capital. One report claims that if every American family gave one percent of their investable assets to crowdfunding, $300 billion (a 10X increase) would come into venture capital.[129] Proponents also cite as a benefit that companies receiving crowdfunding support retain control of their operations, as voting rights are not conveyed along with ownership when crowdfunding. As part of his response to the Amanda Palmer Kickstarter controversy, Steve Albini expressed his supportive views of crowdfunding for musicians, explaining: "I've said many times that I think they're part of the new way bands and their audience interact and they can be a fantastic resource, enabling bands to do things essentially in cooperation with their audience." Albini described the concept of crowdfunding as "pretty amazing".[130]
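The arithmetic implied by the report's claim can be made explicit. The two quantities below are back-calculated from the claim itself ("one percent of investable assets", "$300 billion", "a 10X increase"), not figures stated in the report.

```python
# Back-calculating what the claim implies: if 1% of all American families'
# investable assets equals $300 billion, and that is a 10X increase over
# existing venture capital, then the claim presupposes the figures below.
claimed_inflow = 300e9   # $300 billion
share_of_assets = 0.01   # one percent
multiple = 10            # "a 10X increase"

implied_investable_assets = claimed_inflow / share_of_assets  # ~$30 trillion
implied_vc_baseline = claimed_inflow / multiple               # ~$30 billion/year
print(implied_investable_assets, implied_vc_baseline)
```

The claim therefore presupposes roughly $30 trillion of household investable assets and a venture capital baseline of about $30 billion per year.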
Crowdfunding, while gaining popularity, also comes with a number of potential risks or barriers.[4]For the creator, as well as the investor, studies show that crowdfunding contains "high levels of risk, uncertainty, and information asymmetry."[15]
For crowdfunding of equity stock purchases, research in social psychology indicates that, as with all investments, people do not always perform due diligence to determine whether an investment is sound before investing, leading to investment decisions based on emotion rather than financial logic.[132] By using crowdfunding, creators also forgo potential support and value that a single angel investor or venture capitalist might offer. Likewise, crowdfunding requires that creators manage their investors, which can be time-consuming and financially burdensome as the number of investors in the crowd rises.[15] Crowdfunding draws a crowd: investors and other interested observers who follow the progress, or lack of progress, of a project. Sometimes it proves easier to raise the money for a project than to make the project a success, and managing communications with many possibly disappointed investors and supporters can be a substantial, and potentially diverting, task.[133]
Some of the most popular fundraising drives are for commercial companies that use the process to reach customers and at the same time market their products and services. This favors companies like microbreweries and specialist restaurants, in effect creating a "club" of people who are customers as well as investors. In the US in 2015, new rules from the SEC to regulate equity crowdfunding meant that larger businesses with more than 500 investors and more than $25 million in assets would have to file reports like a public company. The Wall Street Journal commented: "It is all the pain of an IPO without the benefits of the IPO."[134] These two trends may mean crowdfunding is most suited to small consumer-facing companies rather than tech start-ups.
There are several ways in which a well-regulated crowdfunding platform may provide the possibility of attractive returns for investors:
On crowdfunding platforms, the problem of information asymmetry is exacerbated by the reduced ability of the investor to conduct due diligence.[37] Early-stage investing is typically localized, as the costs of conducting due diligence before making investment decisions and the costs of monitoring after investing both rise with distance. This trend is not observed on crowdfunding platforms, however: these platforms are not geographically constrained and bring in investors from near and far.[36][136] On non-equity or reward-based platforms, investors try to mitigate this risk by using the amount of capital raised as a signal of performance or quality. On equity-based platforms, crowdfunding syndicates reduce information asymmetry through dual channels: through portfolio diversification and better due diligence, as in offline early-stage investing, but also by allowing lead investors with more information and better networks to lead crowds of backers in making investment decisions.[37][137][138]
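The portfolio-diversification channel can be illustrated with a toy simulation. The outcome distribution used here (most ventures fail, a few return many times the stake) is an assumption chosen for illustration, not data from any platform.

```python
# Toy simulation of why syndicate-style diversification matters in equity
# crowdfunding: startup outcomes are highly skewed, so spreading the same
# stake across many ventures narrows the range of portfolio results.
# All probabilities and multiples below are illustrative assumptions.
import random

random.seed(42)

def venture_multiple():
    """Assumed outcome distribution: 70% total loss, 25% modest exit, 5% big win."""
    r = random.random()
    if r < 0.70:
        return 0.0    # total loss
    elif r < 0.95:
        return 1.5    # modest exit
    else:
        return 20.0   # rare outsized win

def portfolio_multiple(n_ventures, trials=10_000):
    """Mean and worst-case return multiple for an equal-weight portfolio."""
    results = [sum(venture_multiple() for _ in range(n_ventures)) / n_ventures
               for _ in range(trials)]
    return sum(results) / trials, min(results)

for n in (1, 25):
    mean, worst = portfolio_multiple(n)
    print(f"{n:>2} ventures: mean multiple {mean:.2f}, worst case {worst:.2f}")
```

Both portfolios have the same expected multiple (about 1.4x under these assumptions), but the single-venture backer ends at zero in the worst case far more often, which is one reason following a diversified lead investor can be attractive.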
Crowdfunding platforms also carry the risk of money laundering.[139][140]
The rise of crowdfunding for medical expenses is considered, in large part, a symptom of an inadequate and failing healthcare system in countries such as the United States.[141][142]Healthcare through crowdfunding relies on perceived deservingness and worth, which reproduces unequal outcomes in access.[142]
Rob Solomon, the CEO of GoFundMe, has commented on this: "The system is terrible. It needs to be rethought and retooled. Politicians are failing us. Health care companies are failing us. Those are realities. I don't want to mince words here. We are facing a huge potential tragedy. We provide relief for a lot of people. But there are people who are not getting relief from us or from the institutions that are supposed to be there. We shouldn't be the solution to a complex set of systemic problems."[143]
There are ethical issues in medical crowdfunding. Firstly, there is a loss of patient privacy.[144]Crowdfunding campaigns are generally more financially successful if extensive personal information is disclosed to the public.[144]Secondly, the oversight regarding the veracity of claims is generally limited.[144][145]For instance, physicians are obliged to uphold the ethics of the medical profession, such as patient confidentiality, but this runs in conflict with dishonest crowdfunding efforts.[145]Thirdly, medical crowdfunding perpetuates inequalities—associated with variables such as gender, class, and race—in access to healthcare.[142][144]For instance, there is a socioeconomic gradient in medical fundraising, in which a highersocioeconomic statuscoincides with higher donation amounts, higher proportions of fundraising targets reached, higher numbers of donations received, and more shares on social media.[146]Finally, the use of medical crowdfunding might reduce the impetus to reform failing healthcare infrastructures.[144]
https://en.wikipedia.org/wiki/Crowdfunding
Crowdsourcinginvolves a large group of dispersed participants contributing or producinggoods or services—including ideas,votes,micro-tasks, and finances—for payment or as volunteers. Contemporary crowdsourcing often involvesdigital platformsto attract and divide work between participants to achieve a cumulative result. Crowdsourcing is not limited to online activity, however, and there are various historical examples of crowdsourcing. The word crowdsourcing is aportmanteauof "crowd" and "outsourcing".[1][2][3]In contrast to outsourcing, crowdsourcing usually involves less specific and more public groups of participants.[4][5][6]
Advantages of using crowdsourcing include lowered costs, improved speed, improved quality, increased flexibility, and/or increasedscalabilityof the work, as well as promotingdiversity.[7][8]Crowdsourcing methods include competitions, virtual labor markets, open online collaboration and data donation.[8][9][10][11]Some forms of crowdsourcing, such as in "idea competitions" or "innovation contests" provide ways for organizations to learn beyond the "base of minds" provided by their employees (e.g.Lego Ideas).[12][13][promotion?]Commercial platforms, such asAmazon Mechanical Turk, matchmicrotaskssubmitted by requesters to workers who perform them. Crowdsourcing is also used bynonprofit organizationsto developcommon goods, such asWikipedia.[14]
The termcrowdsourcingwas coined in 2006 by two editors atWired, Jeff Howe and Mark Robinson, to describe how businesses were using the Internet to "outsourcework to the crowd", which quickly led to the portmanteau "crowdsourcing".[15]TheOxford English Dictionarydates its first recorded use to 2006, in the writing of J. Howe.[16]The online dictionaryMerriam-Websterdefines it as: "the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people and especially from the online community rather than from traditional employees or suppliers."[17]
Daren C. Brabham defined crowdsourcing as an "online, distributed problem-solving and production model."[18]Kristen L. Guth and Brabham found that the performance of ideas offered in crowdsourcing platforms are affected not only by their quality, but also by the communication among users about the ideas, and presentation in the platform itself.[19]
Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public, and an open call for contributions to help solve the problem.[original research?]Members of the public submit solutions that are then owned by the entity who originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily with prizes or public recognition. In other cases, the only rewards may bepraiseor intellectual satisfaction. Crowdsourcing may produce solutions fromamateursorvolunteersworking in their spare time, from experts, or from small businesses.[15]
While the term "crowdsourcing" was popularized online to describe Internet-based activities,[18]some earlier projects can, in retrospect, be described as crowdsourcing.
Crowdsourcing has often been used in the past as a competition to discover a solution. The French government proposed several of these competitions, often rewarded withMontyon Prizes.[44]These included theLeblanc process, or the Alkali prize, where a reward was provided for separating the salt from the alkali, and theFourneyron's turbine, when the first hydraulic commercial turbine was developed.[45]
In response to a challenge from the French government,Nicolas Appertwon a prize for inventing a new way offood preservationthat involved sealing food in air-tight jars.[46]The British government provided a similar reward to find an easy way to determine a ship'slongitudeinthe Longitude Prize. During the Great Depression, out-of-work clerks tabulated higher mathematical functions in theMathematical Tables Projectas an outreach project.[47][unreliable source?]One of the largest crowdsourcing campaigns was a public design contest in 2010 hosted by the Indian government's finance ministry to create a symbol for theIndian rupee. Thousands of people sent in entries before the government zeroed in on the final symbol based on theDevanagariscript using the letter Ra.[48]
A number of motivations exist for businesses to use crowdsourcing to accomplish their tasks. These include the ability to offload peak demand, access cheap labor and information, generate better results, access a wider array of talent than what is present in one organization, and undertake problems that would have been too difficult to solve internally.[49]Crowdsourcing allows businesses to submit problems on which contributors can work—on topics such as science, manufacturing, biotech, and medicine—optionally with monetary rewards for successful solutions. Although crowdsourcing complicated tasks can be difficult, simple work tasks[specify]can be crowdsourced cheaply and effectively.[50]
Crowdsourcing also has the potential to be a problem-solving mechanism for government and nonprofit use.[51]Urban and transit planning are prime areas for crowdsourcing. For example, from 2008 to 2009, a crowdsourcing project for transit planning in Salt Lake City was created to test the public participation process.[52]Another notable application of crowdsourcing for governmentproblem-solvingisPeer-to-Patent, which was an initiative to improve patent quality in the United States through gathering public input in a structured, productive manner.[53]
Researchers have used crowdsourcing systems such as Amazon Mechanical Turk or CloudResearch to aid their research projects by crowdsourcing some aspects of the research process, such asdata collection, parsing, and evaluation to the public. Notable examples include using the crowd to create speech and language databases,[54][55]to conduct user studies,[56]and to run behavioral science surveys and experiments.[57]Crowdsourcing systems provided researchers with the ability to gather large amounts of data, and helped researchers to collect data from populations and demographics they may not have access to locally.[58][failed verification]
Artists have also used crowdsourcing systems. In a project called the Sheep Market,Aaron Koblinused Mechanical Turk to collect 10,000 drawings of sheep from contributors around the world.[59]ArtistSam Brownleveraged the crowd by asking visitors of his websiteexplodingdogto send him sentences to use as inspirations for his paintings.[60]Art curator Andrea Grover argues that individuals tend to be more open in crowdsourced projects because they are not being physically judged or scrutinized.[61]As with other types of uses, artists use crowdsourcing systems to generate and collect data. The crowd also can be used to provide inspiration and to collect financial support for an artist's work.[62]
Innavigation systems, crowdsourced data from 100 million drivers was used byINRIXto collect users' driving times to provide better GPS routing and real-time traffic updates.[63]
The use of crowdsourcing in medical and health research is increasing systematically. The process involves outsourcing tasks to, or gathering input from, a large and diverse group of people, often facilitated through digital platforms, to contribute to medical research, diagnostics, data analysis, promotion, and various healthcare-related initiatives. This approach provides a community-based method of improving medical services.
From funding individual medical cases and innovative devices to supporting research, community health initiatives, and crisis responses, crowdsourcing proves its versatile impact in addressing diverse healthcare challenges.[64]
In 2011,UNAIDSinitiated the participatory online policy project to better engage young people in decision-making processes related toAIDS.[65]The project acquired data from 3,497 participants across seventy-nine countries through online and offline forums. The outcomes generally emphasized the importance of youth perspectives in shaping strategies to effectively addressAIDS, which provided valuable insights for future community empowerment initiatives.
Another approach is sourcing results of clinical algorithms from the collective input of participants.[66]Researchers fromSPIEdeveloped a crowdsourcing tool to train individuals, especially middle and high school students in South Korea, to diagnosemalaria-infected red blood cells. Using a statistical framework, the platform combined expert diagnoses with those from minimally trained individuals, creating a gold standard library. The objective was to quickly train people to achieve high diagnostic accuracy without prior experience.
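The combination step described above can be illustrated with a small sketch. This is not the SPIE platform's actual statistical framework; the function name, labels, and reliability weights below are illustrative assumptions showing one simple way to fuse expert and novice diagnoses by weighted majority vote:

```python
from collections import defaultdict

def weighted_majority_label(votes, weights):
    """Aggregate crowd labels for one blood-cell image by weighted majority vote.

    votes:   dict mapping rater id -> label (e.g. "infected" / "healthy")
    weights: dict mapping rater id -> reliability weight (experts weighted higher)
    """
    tally = defaultdict(float)
    for rater, label in votes.items():
        tally[label] += weights.get(rater, 1.0)  # unknown raters default to weight 1
    return max(tally, key=tally.get)

votes = {"expert1": "infected", "student1": "healthy", "student2": "healthy"}
weights = {"expert1": 3.0, "student1": 1.0, "student2": 1.0}
print(weighted_majority_label(votes, weights))  # → infected
```

Weighting the expert more heavily lets one reliable diagnosis outvote a larger number of noisy ones; real systems typically estimate such weights from each rater's measured accuracy rather than fixing them by hand.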
Cancer medicinejournal conducted a review of studies published between January 2005 and June 2016 on crowdsourcing in cancer research, usingPubMed,CINAHL,Scopus,PsychINFO, andEmbase.[67]All of the studies strongly advocated for continuous efforts to refine and expand crowdsourcing applications in academic scholarship. The analysis highlighted the importance of interdisciplinary collaborations and widespread dissemination of knowledge, and the review underscored the need to fully harness crowdsourcing's potential to address challenges within cancer research.[67]
Crowdsourcing in astronomy was used in the early 19th century by astronomerDenison Olmsted. After being awakened in a late November night due to ameteor showertaking place, Olmsted noticed a pattern in the shooting stars. Olmsted wrote a brief report of this meteor shower in the local newspaper. "As the cause of 'Falling Stars' is not understood by meteorologists, it is desirable to collect all the facts attending this phenomenon, stated with as much precision as possible", Olmsted wrote to readers, in a report subsequently picked up and pooled to newspapers nationwide. Responses came pouring in from many states, along with scientists' observations sent to theAmerican Journal of Science and Arts.[68]These responses helped him to make a series of scientific breakthroughs including observing the fact that meteor showers are seen nationwide and fall from space under the influence of gravity. The responses also allowed him to approximate a velocity for the meteors.[69]
A more recent version of crowdsourcing in astronomy is NASA's photo organizing project,[70]which asked internet users to browse photos taken from space and try to identify the location the picture is documenting.[71]
Behavioral science
In the field of behavioral science, crowdsourcing is often used to gather data and insights onhuman behavioranddecision making. Researchers may create online surveys or experiments that are completed by a large number of participants, allowing them to collect a diverse and potentially large amount of data.[57]Crowdsourcing can also be used to gather real-time data on behavior, such as through the use of mobile apps that track and record users' activities and decision making.[72]The use of crowdsourcing in behavioral science has the potential to greatly increase the scope and efficiency of research, and has been used in studies on topics such as psychology research,[73]political attitudes,[74]and social media use.[75]
Energy system modelsrequire large and diversedatasets, increasingly so given the trend towards greater temporal and spatial resolution.[76]In response, there have been several initiatives to crowdsource this data. Launched in December 2009,OpenEIis acollaborativewebsiterun by the US government that providesopenenergy data.[77][78]While much of its information is from US government sources, the platform also seeks crowdsourced input from around the world.[79]ThesemanticwikianddatabaseEnipedia also publishes energy systems data using the concept of crowdsourced open information. Enipedia went live in March 2011.[80][81]: 184–188
Genealogicalresearch used crowdsourcing techniques long before personal computers were common. Beginning in 1942, members ofthe Church of Jesus Christ of Latter-day Saintsencouraged members to submit information about their ancestors. The submitted information was gathered together into a single collection. In 1969, to encourage more participation, the church started the three-generation program. In this program, church members were asked to prepare documented family group record forms for the first three generations. The program was later expanded to encourage members to research at least four generations and became known as the four-generation program.[82]
Institutes that have records of interest to genealogical research have used crowds of volunteers to create catalogs and indices to records.[citation needed]
Genetic genealogy research
Genetic genealogyis a combination of traditional genealogy withgenetics. The rise of personal DNA testing, after the turn of the century, by companies such asGene by Gene,FTDNA,GeneTree,23andMe, andAncestry.com, has led to public and semi public databases of DNA testing using crowdsourcing techniques.Citizen scienceprojects have included support, organization, and dissemination ofpersonal DNA (genetic) testing.Similar toamateur astronomy, citizen scientists encouraged by volunteer organizations like theInternational Society of Genetic Genealogy[83]have provided valuable information and research to the professional scientific community.[84]TheGenographic Project, which began in 2005, is a research project carried out by theNational Geographic Society's scientific team to reveal patterns of human migration using crowdsourcedDNAtesting and reporting of results.[85]
Another early example of crowdsourcing occurred in the field ofornithology. On 25 December 1900, Frank Chapman, an early officer of theNational Audubon Society, initiated a tradition dubbed the"Christmas Day Bird Census". The project called birders from across North America to count and record the number of birds in each species they witnessed on Christmas Day. The project was successful, and the records from 27 different contributors were compiled into one bird census, which tallied around 90 species of birds.[86]This large-scale collection of data constituted an early form of citizen science, the premise upon which crowdsourcing is based. In the 2012 census, more than 70,000 individuals participated across 2,369 bird count circles.[87]Christmas 2014 marked the National Audubon Society's 115th annualChristmas Bird Count.
TheEuropean-Mediterranean Seismological Centre (EMSC)has developed a seismic detection system by monitoring the traffic peaks on its website and analyzing keywords used on Twitter.[88]
Crowdsourcing is increasingly used in professional journalism. Journalists are able to organize crowdsourced information by fact checking the information, and then using the information they have gathered in their articles as they see fit.[citation needed]A daily newspaper in Sweden has successfully used crowdsourcing in investigating the home loan interest rates in the country in 2013–2014, which resulted in over 50,000 submissions.[89]A daily newspaper in Finland crowdsourced an investigation into stock short-selling in 2011–2012, and the crowdsourced information led to revelations of atax evasionsystem by a Finnish bank. The bank executive was fired and policy changes followed.[90]TalkingPointsMemoin the United States asked its readers to examine 3,000 emails concerning the firing of federal prosecutors in 2008. The British newspaperThe Guardiancrowdsourced the examination of hundreds of thousands of documents in 2009.[91]
Data donation is a crowdsourcing approach to gather digital data. It is used by researchers and organizations to gain access to data from online platforms, websites, search engines and apps and devices. Data donation projects usually rely on participants volunteering their authentic digital profile information. Examples include:
Crowdsourcing is used in large scale media, such as thecommunity notessystem of the X platform. Crowdsourcing on such platforms is thought to be effective in combating partisan misinformation on social media when certain conditions are met.[99][100]Success may depend on trust in fact-checking sources, the ability to present information that challenges previous beliefs without causing excessive dissonance, and having a sufficiently large and diverse crowd of participants. Effective crowdsourcing interventions must navigate politically polarized environments where trusted sources may be less inclined to provide dissonant opinions. By leveraging network analysis to connect users with neighboring communities outside their ideological echo chambers, crowdsourcing can provide an additional layer of content moderation.
Crowdsourcing public policy and the production of public services is also referred to ascitizen sourcing. While some scholars regard crowdsourcing for this purpose as a policy tool[101]or a definite means of co-production,[102]others question that and argue that crowdsourcing should be considered just as a technological enabler that simply increases the speed and ease of participation.[103]Crowdsourcing can also play a role indemocratization.[104]
The first conference focusing on Crowdsourcing for Politics and Policy took place atOxford University, under the auspices of the Oxford Internet Institute in 2014. Research has emerged since 2012[105]which focused on the use of crowdsourcing for policy purposes.[106][107]These include experimentally investigating the use of Virtual Labor Markets for policy assessment,[108]and assessing the potential for citizen involvement in process innovation for public administration.[109]
Governments across the world are increasingly using crowdsourcing for knowledge discovery and civic engagement.[citation needed]Iceland crowdsourced their constitution reform process in 2011, and Finland has crowdsourced several law reform processes to address their off-road traffic laws. The Finnish government allowed citizens to go on an online forum to discuss problems and possible resolutions regarding some off-road traffic laws.[citation needed]The crowdsourced information and resolutions would then be passed on to legislators to refer to when making a decision, allowing citizens to contribute to public policy in a more direct manner.[110][111]Palo Altocrowdsources feedback for its Comprehensive City Plan update in a process started in 2015.[112]The House of Representatives in Brazil has used crowdsourcing in policy-reforms.[113]
NASAused crowdsourcing to analyze large sets of images. As part of theOpen Government Initiativeof theObama Administration, theGeneral Services Administrationcollected and amalgamated suggestions for improving federal websites.[113]
For part of the Obama andTrump Administrations, theWe the Peoplesystem collected signatures on petitions, which were entitled to an official response from theWhite Houseonce a certain number had been reached. Several U.S. federal agencies raninducement prize contests, including NASA and theEnvironmental Protection Agency.[114][113]
Crowdsourcing has been used extensively for gathering language-related data.
For dictionary work, crowdsourcing was applied over a hundred years ago by theOxford English Dictionaryeditors using paper and postage. It has also been used for collecting examples ofproverbson a specific topic (e.g.religious pluralism) for a printed journal.[115]Crowdsourcing language-related data online has proven very effective and many dictionary compilation projects used crowdsourcing. It is used particularly for specialist topics and languages that are not well documented, such as for theOromo language.[116]Software programs have been developed for crowdsourced dictionaries, such asWeSay.[117]A slightly different form of crowdsourcing for language data was the online creation of scientific and mathematical terminology forAmerican Sign Language.[118]
In linguistics, crowdsourcing strategies have been applied to estimate word knowledge, vocabulary size, and word origin.[119]Implicit crowdsourcing on social media has also been used to approximate sociolinguistic data efficiently.Redditconversations in various location-based subreddits were analyzed for the presence of grammatical forms unique to a regional dialect. These were then used to map the extent of the speaker population. The results could roughly approximate large-scale surveys on the subject without engaging in field interviews.[120]
Mining publicly available social media conversations can be used as a form of implicit crowdsourcing to approximate the geographic extent of speaker dialects.[120]Proverb collectionis also being done via crowdsourcing on the Web, most notably for thePashto languageof Afghanistan and Pakistan.[121][122][123]Crowdsourcing has been extensively used to collect high-quality gold standards for creating automatic systems in natural language processing (e.g.named entity recognition,entity linking).[124]
Organizations often leverage crowdsourcing to gather ideas for new products as well as for the refinement of established products.[41]Lego allows users to work on new product designs while conducting requirements testing. Any user can provide a design for a product, and other users can vote on the product. Once the submitted product has received 10,000 votes, it will be formally reviewed in stages and, provided no impediments such as legal flaws are identified, go into production. The creator receives royalties from the net income.[125]Labelling new products as "customer-ideated" through crowdsourcing initiatives, as opposed to not specifying the source of design, leads to a substantial increase in the actual market performance of the products. In particular, merely highlighting the source of design to customers by attributing the product to crowdsourcing efforts from user communities can lead to a significant boost in product sales. Consumers perceive "customer-ideated" products as more effective in addressing their needs, leading to a quality inference. The design mode associated with crowdsourced ideas is considered superior in generating promising new products, contributing to the observed increase in market performance.[126]
Crowdsourcing is widely used by businesses to source feedback and suggestions on how to improve their products and services.[41]Homeowners can useAirbnbto list their accommodation or unused rooms. Owners set their own nightly, weekly, and monthly rates. The business, in turn, charges both guests and hosts a fee: guests pay a booking fee, usually between $9 and $15, every time they book a room,[127]and the host pays a service fee on the amount due. The company has 1,500 properties in 34,000 cities in more than 190 countries.[citation needed]
Crowdsourcing is frequently used in market research as a way to gather insights and opinions from a large number of consumers.[128]Companies may create online surveys or focus groups that are open to the general public, allowing them to gather a diverse range of perspectives on their products or services. This can be especially useful for companies seeking to understand the needs and preferences of a particular market segment or to gather feedback on the effectiveness of their marketing efforts. The use of crowdsourcing in market research allows companies to quickly and efficiently gather a large amount of data and insights that can inform their business decisions.[129]
Internet and digital technologies have massively expanded the opportunities for crowdsourcing. However, the effect of user communication and platform presentation can have a major bearing on the success of an online crowdsourcing project.[19]The crowdsourced problem can range from huge tasks (such as finding alien life or mapping earthquake zones) to very small ones (such as identifying images). Some examples of successful crowdsourcing themes are problems that bug people, things that make people feel good about themselves, projects that tap into niche knowledge of proud experts, and subjects that people find sympathetic.[145]
Crowdsourcing can either take an explicit or an implicit route:
In his 2013 book,Crowdsourcing, Daren C. Brabham puts forth a problem-based typology of crowdsourcing approaches:[147]
Ivo Blohm identifies four types of crowdsourcing platforms: microtasking, information pooling, broadcast search, and open collaboration. They differ in the diversity and aggregation of the contributions that are created: the information collected can be either homogeneous or heterogeneous, and its aggregation can be either selective or integrative.[definition needed][148]Some common categories of crowdsourcing that have been used effectively in the commercial world include crowdvoting, crowdsolving,crowdfunding,microwork,creative crowdsourcing,crowdsource workforce management, andinducement prize contests.[149]
In their conceptual review of crowdsourcing,Linus Dahlander, Lars Bo Jeppesen, and Henning Piezunka distinguish four steps in the crowdsourcing process: Define, Broadcast, Attract, and Select.[150]
Crowdvoting occurs when a website gathers a large group's opinions and judgments on a certain topic. Some crowdsourcing tools and platforms allow participants to rank each other's contributions, e.g. in answer to the question "What is one thing we can do to make Acme a great company?" One common method for ranking is "like" counting, where the contribution with the most "like" votes ranks first. This method is simple and easy to understand, but it privileges early contributions, which have more time to accumulate votes.[citation needed]In recent years, several crowdsourcing companies have begun to use pairwise comparisons backed by ranking algorithms. Ranking algorithms do not penalize late contributions.[citation needed]They also produce results quicker. Ranking algorithms have proven to be at least 10 times faster than manual stack ranking.[151]One drawback, however, is that ranking algorithms are more difficult to understand than vote counting.
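The source does not name the specific ranking algorithms these companies use, but an Elo-style rating updated from pairwise comparisons is one common approach that, unlike "like" counting, does not penalize late contributions. The sketch below is a minimal illustrative example; the function name and parameters are assumptions, not any particular vendor's method:

```python
def elo_rank(contributions, comparisons, k=32):
    """Rank contributions from pairwise 'winner beats loser' crowd judgments.

    contributions: list of contribution ids
    comparisons:   list of (winner, loser) pairs collected from voters
    k:             update step size (larger k reacts faster to each judgment)
    """
    rating = {c: 1000.0 for c in contributions}  # everyone starts equal
    for winner, loser in comparisons:
        # expected score of the winner under the logistic Elo model
        expected = 1.0 / (1.0 + 10 ** ((rating[loser] - rating[winner]) / 400))
        delta = k * (1.0 - expected)  # surprise wins move ratings more
        rating[winner] += delta
        rating[loser] -= delta
    return sorted(contributions, key=rating.get, reverse=True)

ideas = ["A", "B", "C"]
judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
print(elo_rank(ideas, judgments))  # → ['A', 'B', 'C']
```

Because each judgment only compares two items, a contribution submitted late can still rise to the top after a handful of comparisons, whereas under "like" counting it would have to catch up on accumulated votes.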
TheIowa Electronic Marketis a prediction market that gathers crowds' views on politics and tries to ensure accuracy by having participants pay money to buy and sell contracts based on political outcomes.[152]Some of the most famous examples have made use of social media channels: Domino's Pizza, Coca-Cola, Heineken, and Sam Adams have crowdsourced a new pizza, bottle design, beer, and song respectively.[153]A website calledThreadlessselected the T-shirts it sold by having users provide designs and vote on the ones they like, which are then printed and available for purchase.[18]
TheCalifornia Report Card(CRC), a program jointly launched in January 2014 by theCenter for Information Technology Research in the Interest of Society[154]and Lt. GovernorGavin Newsom, is an example of modern-day crowd voting. Participants access the CRC online and vote on six timely issues. Throughprincipal component analysis, the users are then placed into an online "café" in which they can present their own political opinions and grade the suggestions of other participants. This system aims to effectively involve the greater public in relevant political discussions and highlight the specific topics with which people are most concerned.
Crowdvoting's value in the movie industry was shown when in 2009 a crowd accurately predicted the success or failure of a movie based on its trailer,[155][156]a feat that was replicated in 2013 by Google.[157]
On Reddit, users collectively rate web content, discussions and comments as well as questions posed to persons of interest in "AMA" and AskScienceonline interviews.[cleanup needed]
In 2017,Project Fanchisepurchased a team in theIndoor Football Leagueand created theSalt Lake Screaming Eagles, a fan run team. Using a mobile app, the fans voted on the day-to-day operations of the team, the mascot name, signing of players and evenoffensiveplay callingduring games.[158]
Crowdfunding is the process of funding projects by a multitude of people contributing a small amount to attain a certain monetary goal, typically via the Internet.[159]Crowdfunding has been used for both commercial and charitable purposes.[160]The crowdfunding model that has been around the longest is rewards-based crowdfunding. Under this model, people can prepurchase products, buy experiences, or simply donate. While this funding may in some cases go towards helping a business, funders are not allowed to invest and become shareholders via rewards-based crowdfunding.[161]
Individuals, businesses, and entrepreneurs can showcase their businesses and projects by creating a profile, which typically includes a short video introducing their project, a list of rewards per donation, and illustrations through images.[citation needed]Funders make monetary contributions for numerous reasons:
As of 2012, equity crowdfunding in the US faced a regulatory dilemma: theSecurities and Exchange Commissionwas still refining its fundraising rules, with a deadline of 1 January 2013. The regulators were overwhelmed trying to regulate Dodd-Frank and all the other rules and regulations involving public companies and the way they traded. Advocates of regulation claimed that crowdfunding would open up the flood gates for fraud, called it the "wild west" of fundraising, and compared it to the 1980s days of penny stock "cold-call cowboys". The process allowed for up to $1 million to be raised without some of the regulations being involved. Under the then-current proposal, companies would have had exemptions available and been able to raise capital from a larger pool of persons, including under lower thresholds for investor criteria, whereas the old rules required that the person be an "accredited" investor. These investors are often recruited from social networks, and the funds can be acquired through an equity purchase, loan, donation, or preorder. The amounts collected have become quite high, with requests of over a million dollars for software such as Trampoline Systems, which used crowdfunding to finance the commercialization of its new software.[citation needed]
Web-based idea competitions or inducement prize contests often consist of generic ideas, cash prizes, and an Internet-based platform to facilitate easy idea generation and discussion. Examples include IBM's 2006 "Innovation Jam", which was attended by over 140,000 international participants and yielded around 46,000 ideas,[163][164]and theNetflix Prizein 2009, in which people were asked to come up with arecommendation algorithmmore accurate than Netflix's existing one. The grand prize of US$1,000,000 was awarded to a team whose algorithm beat Netflix's own at predicting ratings by 10.06%.[citation needed]
Another example of competition-based crowdsourcing is the 2009 DARPA balloon experiment, in which DARPA placed 10 balloon markers across the United States and challenged teams to compete to be the first to report the locations of all the balloons. Completing the challenge quickly required collaboration, and, in addition to the competitive motivation of the contest as a whole, the winning team (MIT, in less than nine hours) established its own "collaborapetitive" environment to generate participation.[165] A similar challenge was the Tag Challenge, funded by the US State Department, which required locating and photographing individuals in five cities in the US and Europe within 12 hours based only on a single photograph. The winning team managed to locate three of the suspects by mobilizing volunteers worldwide using an incentive scheme similar to the one used in the balloon challenge.[166]
Using open innovation platforms is an effective way to crowdsource people's thoughts and ideas for research and development. The company InnoCentive is a crowdsourcing platform for corporate research and development where difficult scientific problems are posted for crowds of solvers to discover the answer and win a cash prize that ranges from $10,000 to $100,000 per challenge.[18] InnoCentive, of Waltham, Massachusetts, and London, England, provides access to millions of scientific and technical experts from around the world. The company claims a success rate of 50% in providing successful solutions to previously unsolved scientific and technical problems. The X Prize Foundation creates and runs incentive competitions offering between $1 million and $30 million for solving challenges. Local Motors is another example of crowdsourcing: a community of 20,000 automotive engineers, designers, and enthusiasts that compete to build off-road rally trucks.[167]
Implicit crowdsourcing is less obvious because users do not necessarily know they are contributing, yet it can still be very effective for completing certain tasks.[citation needed] Rather than users actively participating in solving a problem or providing information, implicit crowdsourcing involves users performing another task entirely, from which a third party gains information on a different topic based on the users' actions.[18]
A good example of implicit crowdsourcing is the ESP game, in which users find words to describe Google images; the words are then used as metadata for the images. Another popular use of implicit crowdsourcing is reCAPTCHA, which asks people to solve CAPTCHAs to prove they are human and then serves CAPTCHAs taken from old books that cannot be deciphered by computers, thereby digitizing them for the web. Like many tasks solved using Mechanical Turk, CAPTCHAs are simple for humans but often very difficult for computers.[146]
Piggyback crowdsourcing is seen most frequently on websites such as Google that data-mine users' search histories and the websites they visit to discover keywords for ads, spelling corrections, and synonyms. In this way, users unintentionally help to refine existing systems, such as Google Ads.[56]
The crowd is an umbrella term for the people who contribute to crowdsourcing efforts. Though it is sometimes difficult to gather data about the demographics of the crowd as a whole, several studies have examined various specific online platforms; Amazon Mechanical Turk has received a great deal of attention in particular. A 2008 study by Ipeirotis found that users at that time were primarily American, young, female, and well-educated, with 40% earning more than $40,000 per year. In November 2009, Ross found a very different Mechanical Turk population, 36% of which was Indian. Two-thirds of Indian workers were male, and 66% had at least a bachelor's degree. Two-thirds had annual incomes of less than $10,000, with 27% sometimes or always depending on income from Mechanical Turk to make ends meet.[186] More recent studies have found that U.S. Mechanical Turk workers are approximately 58% female, and nearly 67% of workers are in their 20s and 30s.[57][187][188][189] Close to 80% are White, and 9% are Black. MTurk workers are less likely to be married or have children than the general population: in the US population over 18, 45% are unmarried, while the proportion of unmarried workers on MTurk is around 57%. Additionally, about 55% of MTurk workers do not have any children, significantly more than in the general population. Approximately 68% of U.S. workers are employed, compared to 60% in the general population. MTurk workers in the U.S. are also more likely to have a four-year college degree (35%) than the general population (27%). Politics within the U.S. sample of MTurk skew liberal, with 46% Democrats, 28% Republicans, and 26% "other". MTurk workers are also less religious than the U.S. population, with 41% religious, 20% spiritual, 21% agnostic, and 16% atheist.
The demographics of Microworkers.com differ from those of Mechanical Turk: the US and India together account for only 25% of workers, and 197 countries are represented among users, with Indonesia (18%) and Bangladesh (17%) contributing the largest shares. However, 28% of employers are from the US.[190]
Another study of the demographics of the crowd at iStockphoto found contributors who were largely white, middle- to upper-class, and highly educated, worked in so-called "white-collar" jobs, and had a high-speed Internet connection at home.[191] In a 30-day crowdsourcing diary study in Europe, the participants were predominantly highly educated women.[144]
Studies have also found that crowds are not simply collections of amateurs or hobbyists. Rather, crowd members are often professionally trained in a discipline relevant to a given crowdsourcing task and sometimes hold advanced degrees and many years of experience in the profession.[191][192][193][194] Claiming that crowds are amateurs, rather than professionals, is factually untrue and may lead to the marginalization of crowd labor rights.[195]
Gregory Saxton et al. studied the role of community users, among other elements, in a content analysis of 103 crowdsourcing organizations. They developed a taxonomy of nine crowdsourcing models (intermediary model, citizen media production, collaborative software development, digital goods sales, product design, peer-to-peer social financing, consumer report model, knowledge base building model, and collaborative science project model) with which to categorize the roles of community users (such as researcher, engineer, programmer, journalist, and graphic designer) and the products and services developed.[196]
Many researchers suggest that both intrinsic and extrinsic motivations cause people to contribute to crowdsourced tasks, and that these factors influence different types of contributors differently.[111][191][192][194][197][198][199][200][201] For example, people employed in a full-time position rate human capital advancement as less important than part-time workers do, while women rate social contact as more important than men do.[198]
Intrinsic motivations are broken down into two categories: enjoyment-based and community-based motivations. Enjoyment-based motivations refer to the fun and enjoyment contributors experience through their participation, including skill variety, task identity, task autonomy, direct feedback from the job, and treating the job as a pastime.[citation needed] Community-based motivations refer to motivations related to community participation, including community identification and social contact. In crowdsourced journalism, the motivation factors are intrinsic: the crowd is driven by the possibility of making a social impact, contributing to social change, and helping their peers.[197]
Extrinsic motivations are broken down into three categories: immediate payoffs, delayed payoffs, and social motivations. Immediate payoffs, through monetary payment, are the compensations received immediately on completing tasks. Delayed payoffs are benefits that can be used to generate future advantages, such as training skills and being noticed by potential employers. Social motivations are the rewards of behaving pro-socially,[202] such as the altruistic motivations of online volunteers. Chandler and Kapelner found that US users of Amazon Mechanical Turk were more likely to complete a task when told they were helping researchers identify tumor cells than when they were not told the purpose of the task. However, among those who completed the task, the quality of output did not depend on the framing.[203]
Motivation in crowdsourcing is often a mix of intrinsic and extrinsic factors.[204] In a crowdsourced law-making project, the crowd was motivated by both: intrinsic motivations included fulfilling civic duty, affecting the law for sociotropic reasons, and deliberating with and learning from peers, while extrinsic motivations included changing the law for financial gain or other benefits. Participation in crowdsourced policy-making was an act of grassroots advocacy, whether pursuing one's own interest or more altruistic goals, such as protecting nature.[111] Participants in online research studies report their motivation as both intrinsic enjoyment and monetary gain.[205][206][188]
Another form of social motivation is prestige or status. The International Children's Digital Library recruited volunteers to translate and review books. Because all translators receive public acknowledgment for their contributions, Kaufman and Schulz cite this as a reputation-based strategy to motivate individuals who want to be associated with institutions that have prestige. Mechanical Turk uses reputation as a motivator in a different sense, as a form of quality control: crowdworkers who frequently complete tasks in ways judged to be inadequate can be denied access to future tasks, whereas workers who pay close attention may be rewarded with access to higher-paying tasks or a place on an "Approved List" of workers. This system may incentivize higher-quality work.[207] However, it only works when requesters reject bad work, which many do not.[208]
Despite the potential global reach of IT applications online, recent research illustrates that differences in location[which?] affect participation outcomes in IT-mediated crowds.[209]
While there is much anecdotal evidence illustrating the potential of crowdsourcing and the benefits organizations have derived from it, there is also scientific evidence that crowdsourcing initiatives often fail.[210] At least six major topics cover the limitations of and controversies about crowdsourcing:
Crowdsourcing initiatives often fail to attract sufficient or beneficial contributions. The vast majority of crowdsourcing initiatives hardly attract any contributions; an analysis of thousands of organizations' crowdsourcing initiatives shows that only initiatives above the 90th percentile attract more than one contribution a month.[201] And while crowdsourcing initiatives may be effective in isolation, they may fail to attract sufficient contributions when faced with competition: Nagaraj and Piezunka (2024) show that OpenStreetMap struggled to attract contributions once Google Maps entered a country.
Crowdsourcing allows anyone to participate, admitting many unqualified participants and resulting in large quantities of unusable contributions.[211] Companies, or additional crowdworkers, then have to sort through the low-quality contributions. The task of sorting through crowdworkers' contributions, along with the necessary job of managing the crowd, requires companies to hire actual employees, thereby increasing management overhead.[212] Results are also susceptible to targeted, malicious work efforts. Since crowdworkers completing microtasks are paid per task, the financial incentive often causes workers to complete tasks quickly rather than well.[57] Verifying responses is time-consuming, so employers often depend on having multiple workers complete the same task to correct errors; however, having each task completed multiple times increases time and monetary costs.[213] Some companies, like CloudResearch, control data quality by repeatedly vetting crowdworkers to ensure they are paying attention and providing high-quality work.[208]
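The redundancy-based error correction described above, in which several workers answer the same task, can be sketched as a simple majority vote over the collected answers. This is only an illustrative sketch with hypothetical function names; production platforms typically use more sophisticated weighted or model-based aggregation.

```python
from collections import Counter

def majority_label(answers):
    """Aggregate redundant crowd answers for a single task by majority
    vote; ties are broken in favor of the answer seen first."""
    [(winner, _count)] = Counter(answers).most_common(1)
    return winner

# Three workers label the same image; one rushed answer is outvoted,
# at the cost of paying for the task three times.
labels = ["cat", "cat", "dog"]
print(majority_label(labels))  # -> cat
```

The trade-off mentioned in the text is visible here: each extra vote improves robustness to careless or malicious answers but multiplies the per-task cost.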
Crowdsourcing quality is also affected by task design. Lukyanenko et al.[214] argue that the prevailing practice of modeling crowdsourcing data-collection tasks in terms of fixed classes (options) unnecessarily restricts quality. Their results demonstrate that information accuracy depends on the classes used to model domains, with participants providing more accurate information when classifying phenomena at a more general level (which is typically less useful to sponsor organizations, and hence less common).[clarification needed] Further, greater overall accuracy is expected when participants can provide free-form data than in tasks in which they select from constrained choices. In behavioral science research, it is often recommended to include open-ended responses, in addition to other forms of attention checks, to assess data quality.[215][216]
Just as limiting, the crowd often lacks the skills or expertise needed to accomplish the desired task successfully. While this does not affect "simple" tasks such as image labeling, it is particularly problematic for more complex tasks, such as engineering design or product validation. A comparison between expert evaluations of business models and those of an anonymous online crowd showed that the anonymous crowd cannot evaluate business models to the same level as experts.[217] In these cases, it may be difficult or even impossible to find qualified people in the crowd, as their responses represent only a small fraction of the workers compared to consistent but incorrect crowd members.[218] However, if the task is of "intermediate" difficulty, estimating crowdworkers' skills and intentions and leveraging these estimates to infer true responses works well,[219] albeit at additional computational cost.[citation needed]
Crowdworkers are a nonrandom sample of the population. Many researchers use crowdsourcing to conduct studies quickly and cheaply, with larger sample sizes than would otherwise be achievable. However, due to limited Internet access, participation in less developed countries is relatively low. Participation in highly developed countries is similarly low, largely because the low pay is not a strong motivation for most users in those countries. These factors bias the population pool toward users in medium-developed countries, as measured by the human development index.[220] Participants in these countries sometimes masquerade as U.S. participants to gain access to certain tasks. This led to the "bot scare" on Amazon Mechanical Turk in 2018, when researchers thought bots were completing research surveys because of the lower quality of responses originating from medium-developed countries.[216][221]
The likelihood that a crowdsourced project will fail due to lack of monetary motivation or too few participants increases over the course of the project. Tasks that are not completed quickly may be forgotten, buried by filters and search procedures, resulting in a long-tail power-law distribution of completion times.[222] Additionally, low-paying online research studies have higher rates of attrition, with participants not completing a study once started.[58] Even when tasks are completed, crowdsourcing does not always produce quality results. When Facebook began its localization program in 2008, it encountered some criticism for the low quality of its crowdsourced translations.[223] One problem with crowdsourcing products is the lack of interaction between the crowd and the client: usually little information is known about the final product, and workers rarely interact with the final client during the process. This can decrease the quality of the product, as client interaction is considered a vital part of the design process.[224]
An additional cause of the decrease in product quality that can result from crowdsourcing is the lack of collaboration tools. In a typical workplace, coworkers are organized in such a way that they can work together and build upon each other's knowledge and ideas. Furthermore, the company often provides employees with the necessary information, procedures, and tools to fulfill their responsibilities. However, in crowdsourcing, crowd-workers are left to depend on their own knowledge and means to complete tasks.[212]
A crowdsourced project is usually expected to be unbiased, incorporating a large population of participants with diverse backgrounds. However, much crowdsourcing work is done by people who are paid or directly benefit from the outcome (e.g., most open source projects working on Linux). In many other cases, the end product is the outcome of a single person's endeavor, with that person creating the majority of the product while the crowd participates only in minor details.[225]
To turn an idea into reality, the first component needed is capital. Depending on the scope and complexity of the crowdsourced project, the amount of necessary capital can range from a few thousand dollars to hundreds of thousands, if not more. The capital-raising process can take from days to months depending on different variables, including the entrepreneur's network and the amount of initial self-generated capital.[citation needed]
The crowdsourcing process allows entrepreneurs to access a wide range of investors who can take different stakes in the project.[226] In effect, crowdsourcing simplifies the capital-raising process, allowing entrepreneurs to spend more time on the project itself and on reaching milestones rather than on getting it started. Overall, simplified access to capital can save time in starting projects and potentially increase their efficiency.[citation needed]
Others argue that easier access to capital through a large number of smaller investors can hurt the project and its creators. With a simplified capital-raising process involving more investors with smaller stakes, investors are more risk-seeking because they can take on an investment size with which they are comfortable.[226] Entrepreneurs thereby lose the experience of convincing investors who are wary of potential risks, because they no longer depend on a single investor for the survival of their project: instead of being forced to assess risks and persuade large institutional investors of why their project can succeed, entrepreneurs can simply replace wary investors with others willing to take on the risk.
Some translation companies and translation-tool consumers use crowdsourcing as a pretext for drastically cutting costs instead of hiring professional translators. This practice has been systematically denounced by IAPTI and other translator organizations.[227]
The raw number of ideas that get funded, and the quality of those ideas, are a major point of controversy in crowdsourcing.
Proponents argue that crowdsourcing is beneficial because it allows the formation of startups with niche ideas that would not survive on venture capitalist or angel funding, which are often the primary sources of startup investment. Many ideas are scrapped in their infancy due to insufficient support and lack of capital, but crowdsourcing allows such ideas to be started if an entrepreneur can find a community to take interest in the project.[228]
Crowdsourcing allows those who would benefit from a project to fund it and become part of it, which is one way for small niche ideas to get started.[229] However, as the number of projects grows, the number of failures also increases. Crowdsourcing assists the development of niche and high-risk projects due to a perceived need from a select few who seek the product. With high risk and small target markets, the pool of crowdsourced projects faces a greater possible loss of capital, lower returns, and lower levels of success.[230]
Because crowdworkers are considered independent contractors rather than employees, they are not guaranteed a minimum wage. In practice, workers using Amazon Mechanical Turk generally earn less than the minimum wage. In 2009, it was reported that United States Turk users earned an average of $2.30 per hour for tasks, while users in India earned an average of $1.58 per hour, below the minimum wage in the United States (but not in India).[186][231] In 2018, a survey of 2,676 Amazon Mechanical Turk workers performing 3.8 million tasks found that the median hourly wage was approximately $2 per hour, and only 4% of workers earned more than the federal minimum wage of $7.25 per hour.[232] Some researchers who have considered using Mechanical Turk to recruit participants for research studies have argued that the wage conditions might be unethical.[58][233] However, according to other research, workers on Amazon Mechanical Turk do not feel exploited and are ready to participate in crowdsourcing activities in the future.[234] A more recent study, using stratified random sampling to access a representative sample of Mechanical Turk workers, found that the U.S. MTurk population is financially similar to the general population.[188] Workers tend to participate in tasks as a form of paid leisure and to supplement their primary income, and only 7% view it as a full-time job. Overall, workers rated MTurk as less stressful than other jobs. Workers also earn more than previously reported, about $6.50 per hour. They see MTurk as part of the solution to their financial situation and report rare upsetting experiences. They also perceive requesters on MTurk as fairer and more honest than employers outside the platform.[188]
When Facebook began its localization program in 2008, it received criticism for using free labor in crowdsourcing the translation of site guidelines.[223]
Typically, no written contracts, nondisclosure agreements, or employee agreements are made with crowdworkers. For users of the Amazon Mechanical Turk, this means that employers decide whether users' work is acceptable and reserve the right to withhold pay if it does not meet their standards.[235]Critics say that crowdsourcing arrangements exploit individuals in the crowd, and a call has been made for crowds to organize for their labor rights.[236][195][237]
Collaboration between crowd members can also be difficult or even discouraged, especially in the context of competitive crowdsourcing. The crowdsourcing site InnoCentive allows organizations to solicit solutions to scientific and technological problems; only 10.6% of respondents reported working in a team on their submission.[192] Amazon Mechanical Turk workers collaborated with academics to create WeAreDynamo.org, a platform that allowed them to organize and run campaigns to better their work situation, but the site is no longer running.[238] Another platform run by Amazon Mechanical Turk workers and academics, Turkopticon, continues to operate and provides worker reviews of Amazon Mechanical Turk employers.[239]
America Online settled the case Hallissey et al. v. America Online, Inc. for $15 million in 2009, after unpaid moderators sued to be paid the minimum wage as employees under the U.S. Fair Labor Standards Act.
Besides insufficient compensation and other labor-related disputes, there have also been concerns regarding privacy violations, the hiring of vulnerable groups, breaches of anonymity, psychological damage, the encouragement of addictive behaviors, and more.[240] Many, but not all, of the issues related to crowdworkers overlap with concerns related to content moderators.
Source: https://en.wikipedia.org/wiki/Crowd_sourcing
In accounting and finance, flat interest rate mortgages and loans calculate interest based on the amount of money a borrower receives at the beginning of the loan. However, if repayment is scheduled to occur at regular intervals throughout the term, the average amount to which the borrower has access is lower, and so the effective or true rate of interest is higher. Only if the principal is available in full throughout the loan term does the flat rate equate to the true rate. This is the case in the example to the right, where the loan contract is for 400,000 Cambodian riels over 4 months. Interest is set at 16,000 riels (4%) a month, while the principal is due in a single payment at the end.
Loans with interest quoted using a flat rate originated before currency was invented and continued to feature regularly up to and beyond the 20th century within developed countries. More recently, they have also come to be used in the informal economy of developing countries, frequently adopted by microcredit institutions. One reason for the popularity of flat rates is their ease of use. For example, a loan of $1,200 can be structured with 12 monthly repayments of $100, plus interest, due on the same dates, of 1% ($12) a month, resulting in a total monthly payment of $112. However, the borrower only has access to $1,200 at the very beginning of the loan. Since $100 in principal is being repaid each month, the average amount to which the borrower has access during the loan term is approximately half the original amount, in fact just over $600. For this reason, as mentioned above, the true rate of interest is nearly double the quoted rate. "A general rule known by financial managers is that when flat interest is used, the APR is almost twice as much as the quoted interest rate."[1][2]
In order to show the true rate underlying a flat rate, it is necessary to use the declining-balance amortization schedule, dividing the total cost to the borrower by the average amount outstanding. In the first three examples on the right, the borrower is quoted 1% a month. These are loans of $1,200 each, amortized with level payments over 4, 12, and 24 months. In the 4-month example, the borrower makes four equal payments of $300 in principal and four equal payments of $12 (1% of $1,200) in interest. The total cost of this loan is the principal plus $48.00 in interest, whilst the average amount outstanding was approximately $600. This yields an annualized flat rate of 12% and an annualized effective, or true, rate of 19.05%. The true rate can also be calculated by iteration from the amortization schedule, using the compound interest formula.
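The iteration mentioned above can be sketched in a few lines: build the level-payment schedule for a flat-rate loan, then solve the present-value equation for the monthly internal rate of return by bisection. This is an illustrative sketch with hypothetical function names, not an authoritative APR calculation.

```python
def flat_rate_payments(principal, monthly_flat, months):
    """Level payments for a flat-rate loan: equal principal installments
    plus interest charged on the ORIGINAL principal every month."""
    payment = principal / months + principal * monthly_flat
    return [payment] * months

def true_monthly_rate(principal, payments):
    """Monthly internal rate of return: the rate at which the present
    value of the payments equals the amount originally borrowed.
    Found by bisection, since present value falls as the rate rises."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        pv = sum(p / (1 + mid) ** (t + 1) for t, p in enumerate(payments))
        if pv > principal:
            lo = mid  # discounting too weakly: the true rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# The 4-month example above: $1,200 at a 1% monthly flat rate.
schedule = flat_rate_payments(1200, 0.01, 4)   # four payments of $312
print(12 * true_monthly_rate(1200, schedule))  # ~0.1905, i.e. 19.05%
```

Running the same functions on the 12-month loan from the previous paragraph gives an annualized true rate of roughly 21.5%, consistent with the rule of thumb that the APR is nearly double the quoted flat rate of 12%.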
To keep quoted interest rates as low as possible, institutions also often call for one-time origination or administration fees. However, an origination fee as low as 4% of the total loan can have a large impact on the borrower's total costs. This is especially true for short-term loans, a typical characteristic of microcredit. As these fees represent an inherent cost of borrowing, they must also be added to the charge for interest in order to show the effective APR.
Flat interest rates have the following advantages:
Flat interest rates have the following disadvantages:
The less developed an economy, the less capacity the government may have to regulate informal lenders. As a result, Brigit Helms argues for an evolutionary approach to interest rates, in which they can be expected to gradually drop as competition increases and the government gains greater capacity to effectively enforce comparable interest rate disclosures on financial sector actors.[7]
F. W. Raiffeisen, as early as 1889, writing to the credit unions then emerging in Germany, campaigned against keeping the total charge for credit unchanged even when a loan is repaid early: “It is immoral to charge interest in advance, and also objectionable as a business method. Every member shall have the right at any time to pay back his loan. If interest has been charged for a full year in advance, the members who have made repayments ahead of time pay too much interest, unless the Credit Union makes a refund.”[8]
Separately, interest rate ceilings and the popular conflation of flat with true rates have led some institutions to replace or complement interest with transaction fees and other charges, sometimes circumventing disclosure norms consistent with APR. For these reasons, interest is no longer quoted by reference to the flat rate in certain developed countries (for example, in the US, see the Truth in Lending Act), whilst many jurisdictions insist that loans always quote the APR.
Nevertheless, loans originally quoted and engaged with a flat rate remain contractually valid and still feature widely in both developed and developing countries.
Source: https://en.wikipedia.org/wiki/Flat_rate_(finance)
Microcredit for water supply and sanitation is the application of microcredit to provide loans to small enterprises and households in order to increase access to an improved water source and sanitation in developing countries.
For background, most investments in water supply and sanitation infrastructure are financed by the public sector, but investment levels have been insufficient to achieve universal access. Commercial credit to public utilities has been limited by low tariffs and insufficient cost recovery. Microcredits are a complementary or alternative approach to allow the poor to gain access to water supply and sanitation.[1][2]
Funding is allocated either to small-scale independent water providers, who generate an income stream from selling water, or to households, to finance house connections, plumbing installations, or on-site sanitation such as latrines. Many microfinance institutions have only limited experience with financing investments in water supply and sanitation.[3] While there have been many pilot projects in both urban and rural areas, only a small number of these have been expanded.[4][5] A water connection can significantly lower a family's water expenditures if it previously had to rely on water vendors, allowing the cost savings to repay the credit. The time previously spent physically fetching water can be put to income-generating purposes, and investments in sanitation provide health benefits that can also translate into increased income.[6]
There are three broad types of microcredit products in the water sector:[3]
Microcredits can be targeted specifically at water and sanitation, or general-purpose microcredits may be used for this purpose. Such loans typically finance household water and sewerage connections, bathrooms, toilets, pit latrines, rainwater harvesting tanks, or water purifiers. The loans are generally US$30–250, with a tenure of less than three years.
Microfinance institutions, such as Grameen Bank, the Vietnam Bank for Social Policies, and numerous microfinance institutions in India and Kenya, offer credits to individuals for water and sanitation facilities. Non-governmental organisations (NGOs) that are not microfinance institutions, such as Dustha Shasthya Kendra (DSK) in Bangladesh or Community Integrated Development Initiatives in Uganda, also provide credits for water supply and sanitation. The potential market size is considered huge in both rural and urban areas, and some of these water and sanitation schemes have achieved a significant scale. Nevertheless, compared to the microfinance institutions' overall size, they still play a minor role.[3]
In 1999, all microfinance institutions in Bangladesh, and more recently in Vietnam, had reached only about 9 percent and 2.4 percent of rural households respectively.[citation needed][needs update] In either country, water and sanitation amount to less than two percent of the microfinance institutions' total portfolio.[citation needed] However, borrowers for water supply and sanitation comprised 30 percent of total borrowers for Grameen Bank and 10 percent of total borrowers from the Vietnam Bank for Social Policies.[citation needed] Likewise, the water and sanitation portfolio of the Indian microfinance institution SEWA Bank comprised 15 percent of all loans provided in the city of Ahmedabad over a period of five years.[citation needed]
The US-based NGO Water.org, through its WaterCredit initiative, had since 2003 supported microfinance institutions and NGOs in India, Bangladesh, Kenya and Uganda in providing microcredit for water supply and sanitation. As of 2011, it had helped its 13 partner organisations to extend 51,000 credits.[needs update] The organisation claimed a 97% repayment rate and stated that 90% of its borrowers were women.[7] WaterCredit did not subsidise interest rates and typically did not make microcredits directly. Instead, it connected microfinance institutions with water and sanitation NGOs to develop water and sanitation microcredits, including through market assessments and capacity-building. Only in exceptional cases did it provide guarantees, standing letters of credit or the initial capital to establish a revolving fund managed by an NGO not previously engaged in microcredit.[6]
Since 2003, Bank Rakyat Indonesia has financed water connections with the water utility PDAM through microcredits, with support from the USAID Environmental Services Program. According to an impact assessment conducted in 2005, the program helped the utility to increase its customer base by 40%, which reduced its costs per cubic meter of water sold by 42% and cut its non-revenue water from 56.5% in 2002 to 36% at the end of 2004.[8]
In 1999, the World Bank, in cooperation with the governments of Australia, Finland and Denmark, supported the creation of a Sanitation Revolving Fund with an initial working capital of US$3 million. The project was carried out in the cities of Danang, Haiphong, and Quang Ninh. The aim was to provide small loans (US$145) to low-income households for targeted sanitation investments such as septic tanks, urine-diverting/composting latrines or sewer connections. Participating households had to join a savings and credit group of 12 to 20 people, who were required to live near each other to ensure community control. The loans had a catalytic effect on household investment: with loans covering approximately two-thirds of investment costs, households had to find complementary sources of finance, typically from family and friends.
In contrast to a centralised, supply-driven approach, in which government institutions design a project with little community consultation and no capacity-building for the community, this approach was strictly demand-driven and thus required the Sanitation Revolving Fund to develop awareness-raising campaigns for sanitation. Managed by the microfinance-experienced Women's Union of Vietnam, the Sanitation Revolving Fund gave 200,000 households the opportunity to finance and build sanitation facilities over a period of seven years. With a leverage effect of up to 25 times the amount of public spending on household investment and repayment rates of almost 100 percent, the fund is seen as a best-practice example by its financiers. In 2009, scaling it up with further support from the World Bank and the Vietnam Bank for Social Policies was under consideration.[9][needs update]
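The arithmetic behind a revolving fund's leverage can be sketched with a toy model. The US$3 million seed, the US$145 loan size and the two-thirds cost share come from the description above; the number of annual lending cycles and the exact repayment rate are assumptions for illustration.

```python
# Toy model: how a revolving fund's fixed seed capital finances far more
# loans than a one-off grant when repayment rates are high and repaid
# principal is re-lent each cycle.

SEED_CAPITAL = 3_000_000      # US$3 million initial working capital
LOAN_SIZE = 145               # US$145 per household loan
LOAN_SHARE_OF_COST = 2 / 3    # loans covered roughly 2/3 of investment cost
REPAYMENT_RATE = 0.98         # "almost 100 percent" repayment (assumed value)
CYCLES = 7                    # assumed one lending cycle per year over 7 years

def revolving_loans(seed, loan_size, repayment_rate, cycles):
    """Total loans financed when repaid principal is re-lent each cycle."""
    capital = seed
    total_loans = 0
    for _ in range(cycles):
        loans_this_cycle = int(capital // loan_size)
        total_loans += loans_this_cycle
        capital = loans_this_cycle * loan_size * repayment_rate  # re-lent
    return total_loans

loans = revolving_loans(SEED_CAPITAL, LOAN_SIZE, REPAYMENT_RATE, CYCLES)
investment_per_household = LOAN_SIZE / LOAN_SHARE_OF_COST  # about US$217.50
total_investment = loans * investment_per_household
leverage = total_investment / SEED_CAPITAL

print(f"loans financed: {loans}")
print(f"household investment mobilised: ${total_investment:,.0f}")
print(f"leverage on seed capital: {leverage:.1f}x")
```

Even this simple model shows the seed capital financing several times as many loans as a one-off grant could, which is the mechanism behind the high leverage reported for the fund.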
Small and medium enterprise (SME) loans are used for investments by community groups, for private providers in greenfield contexts, or for rehabilitating existing water supply and sanitation infrastructure. Supplied by mature microfinance institutions, these loans are seen as suitable for other suppliers in the value chain, such as pit latrine emptiers and tanker suppliers. Given the right conditions, such as a solid policy environment and clear institutional relationships, there is market potential for small-scale water supply projects.
In comparison to retail loans at the household level, experience with loan products for SMEs is fairly limited, and these loan programs remain mostly at the pilot stage. However, the design of some recent projects using microcredits for community-based service providers in African countries such as Kenya and Togo shows sustainable expansion potential. In the case of Kenya's K-Rep Bank, the Water and Sanitation Program, which facilitated the project, is already exploring a country-wide scaling up.[citation needed]
Kenya has numerous community-managed small water enterprises. The Water and Sanitation Program (WSP) has launched an initiative to use microcredits to promote these enterprises. As part of this initiative, the commercial microfinance bank K-Rep Bank provided loans to 21 community-managed water projects. The Global Partnership on Output-based Aid (GPOBA) supported the programme by providing partial subsidies. Each project is pre-financed with a credit of up to 80 percent of the project costs (averaging US$80,000). After an independent verification process certifying successful completion, a part of the loan is refinanced by a 40 percent output-based aid subsidy. The remaining loan repayments have to be generated from water revenues. In addition, technical-assistance grants are provided to assist with project development.
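As a worked example of this financing structure, the split for a typical project can be computed. The text does not state the base of the 40 percent subsidy explicitly, so the sketch assumes it applies to total project cost; the US$100,000 project cost is chosen so that the 80 percent loan matches the US$80,000 average quoted above.

```python
# Hedged sketch of the K-Rep/GPOBA financing structure described above.
# Assumption: the 40% output-based subsidy is computed on total project cost.

project_cost = 100_000                  # so the 80% loan matches the US$80,000 average
loan = 0.80 * project_cost              # pre-financing credit (up to 80% of cost)
community_equity = project_cost - loan  # community's own contribution
oba_subsidy = 0.40 * project_cost       # paid after independent output verification
repaid_from_revenues = loan - oba_subsidy

print(f"loan: ${loan:,.0f}")
print(f"community equity: ${community_equity:,.0f}")
print(f"OBA subsidy: ${oba_subsidy:,.0f}")
print(f"to repay from water sales: ${repaid_from_revenues:,.0f}")
```

Under these assumptions, half of the US$80,000 loan is retired by the subsidy and the other half must be serviced from water revenues.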
In Togo, CREPA (Centre Régional pour l'Eau Potable et l'Assainissement à Faible Coût) encouraged the liberalisation of water services in 2001. As a consequence, six domestic microfinance institutions prepared microcredit schemes for shallow boreholes (US$3,000) or rainwater-harvesting tanks (US$1,000). The loans were originally aimed at households acting as small private providers, selling water in bulk or in buckets; however, the funds were disbursed directly to the private drilling companies. From 2001 to 2006, roughly 1,200 water points were built and have been used for small-business activities by the participating households.[10][11][needs update]
This type of credit has not been widely used.
|
https://en.wikipedia.org/wiki/Microcredit_for_water_supply_and_sanitation
|
A microgrant is a small sum of money, typically hundreds or thousands of dollars, distributed to an individual or organization with the intent of enabling the recipient to develop or sustain an income-generating enterprise. Microgrants often target individuals living in extreme poverty, on less than $1/day, for the purpose of creating a sustainable livelihood or microenterprise.[1] Recipients of microgrants can also be organizations or grassroots groups engaged in charitable activities.[2][3] While microfinance and other financial services are intended to serve the poor, many of the poorest are either too risk-averse to seek out a loan, or do not qualify for a microloan or other form of microcredit.
There are three primary types of microgrants: a small sum of money (~US$50–500) granted to an individual to start an income-generating project; a small grant (~US$2,000–10,000) to a community for an impact-oriented project; and a small grant to an individual for any cause they see fit.
The term microgrant can also refer to a grant that is low in value.[4]
Microgrants are available for individuals or small groups to start income-generating projects. Unlike microcredits, microgrants do not need to be repaid, so the project does not start in debt. While microfinance and other financial services are intended to serve the poor, many of the poorest are either too risk-averse to borrow or unaware of such offers.
A microgrant gives communities facing poverty funding for impact-oriented projects such as schools, health centers and farms, offering people an opportunity to solve their own local problems with financing that need not be paid back.
For example, Spark MicroGrants is known for such a community-based approach to microgranting. Spark pairs its microgrants with capacity-building facilitation to ensure that communities receiving the grants are well positioned to take them on.
|
https://en.wikipedia.org/wiki/Microgrant
|
M-PESA (M for mobile, pesa being Swahili for money) is a mobile phone-based money transfer, payments and micro-financing service, launched in 2007 by Vodafone and Safaricom, the largest mobile network operator in Kenya.[1] It has since expanded to Tanzania, Mozambique, the DRC, Lesotho, Ghana, Egypt, Afghanistan, South Africa and Ethiopia. Rollouts in India, Romania, and Albania were terminated amid low market uptake. M-PESA allows users to deposit, withdraw and transfer money, pay for goods and services (Lipa na M-PESA, Swahili for "Pay with M-PESA"), and access credit and savings, all with a mobile device.[2]
The service allows users to deposit money into an account stored on their cell phones, to send balances using PIN-secured SMS text messages to other users, including sellers of goods and services, and to redeem deposits for regular money. Users are charged a fee for sending and withdrawing money using the service.[3]
M-PESA is a branchless banking service; M-PESA customers can deposit and withdraw money from a network of agents that includes airtime resellers and retail outlets acting as banking agents.
M-PESA spread quickly, and by 2010 had become the most successful mobile-phone-based financial service in the developing world.[4]By 2012, a stock of about 17 million M-PESA accounts had been registered in Kenya. By June 2016, a total of 7 million M-PESA accounts had been opened in Tanzania by Vodacom. The service has been lauded for giving millions of people access to the formal financial system and for reducing crime in otherwise largely cash-based societies.[5]However, the near-monopolistic providers of the M-PESA service are sometimes criticized for the high cost that the service imposes on its often poor users.
Safaricom and Vodafone launched M-PESA, a mobile-based payment service targeting unbanked, pre-pay mobile subscribers in Kenya, on a pilot basis in October 2005.[6] It was started as a public/private-sector initiative after Vodafone won funds from the Financial Deepening Challenge Fund competition established by the UK government's Department for International Development to encourage private-sector companies to engage in innovative projects that deepen the provision of financial services in emerging economies.[7]
The initial obstacles in the pilot were gaining agents' trust, encouraging them to process cash withdrawals, and agent training.[8] However, once Vodafone introduced the ability to buy airtime using M-PESA, transaction volume increased rapidly: a 5% discount on any airtime purchased through M-PESA served as an effective incentive. By 1 March 2006, KSh50.7 million had been transferred through the system. The successful operation of the pilot was a key component in Vodafone and Safaricom's decision to take the product to full scale. The pilot confirmed the market need for the service; although it mainly revolved around facilitating loan repayments and disbursements for Faulu customers, it also tested features such as airtime purchase and national remittance. The full commercial launch took place in March 2007.
A snapshot of the market at the time showed that only a small percentage of people in Kenya used traditional banking services: bank income levels were low, bank fees were high, and most services were geographically out of reach for rural Kenyans.[9] Notably, mobile penetration was high throughout the country, making mobile payments a viable alternative to traditional banking channels. According to a survey done by CBS in 2005, Kenya then had over 5,970,600 people employed in the informal sector.
In 2002, researchers at Gamos and the Commonwealth Telecommunications Organisation, funded by the UK's Department for International Development (DFID), documented that in Uganda, Botswana and Ghana, people were using airtime as a proxy for money transfer.[10] Kenyans were transferring airtime to their relatives or friends, who were then using it or reselling it. Gamos researchers approached MCel[11] in Mozambique, and in 2004 MCel introduced the first authorised airtime credit swapping – a precursor step towards M-PESA.[12] The idea was discussed by the Commission for Africa,[13] and DFID introduced the researchers to Vodafone, who had been discussing supporting microfinance and back-office banking with mobile phones. S. Batchelor (Gamos) and N. Hughes (Vodafone CSR) discussed how a system of money transfer could be created in Kenya. DFID amended the terms of reference for its grant to Vodafone, and piloting began in 2005. Safaricom launched the new mobile phone-based payment and money transfer service as M-PESA.[4]
The initial work of developing the product was given to a product and technology development company known as Sagentia.[14] Development and second-line support responsibilities were transferred to IBM in September 2009, to which most of the original Sagentia team moved.[15] Following a three-year migration project to a new technology stack, IBM's responsibilities were transferred to Huawei in all markets as of 26 February 2017.[16]
The initial concept of M-PESA was to create a service which would allow microfinance borrowers to conveniently receive and repay loans using the network of Safaricom airtime resellers.[17] This would enable microfinance institutions (MFIs) to offer more competitive loan rates to their users, as costs are lower than when dealing in cash. The users of the service would gain through being able to track their finances more easily. When the service was piloted, customers adopted it for a variety of alternative uses, and complications arose with Faulu, the partnering MFI. In discussion with other parties, M-PESA was re-focused and launched with a different value proposition: sending remittances home across the country and making payments.[17]
M-PESA is operated by Safaricom and Vodacom, mobile network operators (MNOs) not classed as deposit-taking institutions such as banks. M-PESA customers can deposit and withdraw currency from a network of agents that includes airtime resellers and retail outlets acting as banking agents. The service enables its users to:
Partnerships with Kenyan banks offer expanded banking services like interest-bearing accounts, loans, and insurance.[22]
The user interface technology of M-PESA differs between Safaricom of Kenya and Vodacom of Tanzania, although the underlying platform is the same. While Safaricom uses SIM toolkit (STK) to provide handset menus for accessing the service, Vodacom relies mostly on USSD to provide users with menus, but also supports STK.[23]
Transaction charges depend on the amount of money being transferred and whether the payee is a registered user of the service. The actual cost is a fixed amount for a given range of transaction sizes; for example, Safaricom charges up to KSh66 (US$0.60) for a transaction to an unregistered user for transactions between KSh10 and KSh500 (US$0.92–US$4.56). For registered users the charge is KSh27 (US$0.25) for the same range, an effective rate of 5.4% to 27%. At the highest transfer bracket of KSh50,001–70,000, the fee for a transfer to a registered user is KSh110 (US$1), or 0.16–0.22%. The maximum amount that can be transferred to a non-registered user of the system is KSh35,000 (US$319.23), with a fee of KSh275 (US$2.51), or 0.8%. Cash withdrawal fees are also charged: KSh10 (US$0.09) for a withdrawal of KSh50–100 (10% to 20%), rising to KSh330 (US$3.01) for a withdrawal of KSh50,001–70,000 (0.47% to 0.66%).[24][25]
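The banded structure can be illustrated with a small lookup using two of the registered-user transfer bands quoted above. The real schedule has many more bands and changes over time; these figures are for illustration only.

```python
# Illustrative sketch of M-PESA's banded fee structure (KSh, registered-user
# transfers), using two example bands from the quoted figures.

FEE_BANDS = [
    (10, 500, 27),         # KSh27 flat fee for KSh10-500
    (50_001, 70_000, 110), # KSh110 flat fee for KSh50,001-70,000
]

def transfer_fee(amount):
    """Return (fee, effective_rate) for a transfer amount, or None if the
    amount falls outside the illustrative bands."""
    for lower, upper, fee in FEE_BANDS:
        if lower <= amount <= upper:
            return fee, fee / amount
    return None

fee, rate = transfer_fee(500)
print(f"KSh500 transfer: fee KSh{fee} ({rate:.1%})")
fee, rate = transfer_fee(70_000)
print(f"KSh70,000 transfer: fee KSh{fee} ({rate:.2%})")
```

Because the fee is flat within each band, the effective rate falls as the transfer amount rises towards the top of a band, which is why small transfers bear the highest percentage cost.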
In an article published in 2015, Anja Bengelstorff cites the Central Bank of Kenya in stating that CHF 1 billion was moved in fiscal year 2014, with a profit of CHF 268 million, close to 27% of the money moved.[26] In 2016, M-PESA moved KSh15 billion (about US$148 million) per day, with annual revenue of KSh41 billion. In 2017, KSh6,869 billion were moved according to a figure in Safaricom's own annual report, with a revenue of KSh55 billion. This would put Safaricom's revenue at below 1% of total money transferred.[27][28][full citation needed]
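The discrepancy between the two reported ratios can be checked directly from the figures quoted above (units as reported: CHF millions for 2014, KSh billions for 2017):

```python
# Sanity check of the revenue-to-throughput ratios quoted in the text.

chf_moved_2014 = 1_000          # CHF millions moved in fiscal year 2014
chf_profit_2014 = 268           # CHF millions profit
ratio_2014 = chf_profit_2014 / chf_moved_2014

ksh_moved_2017 = 6_869          # KSh billions moved in 2017
ksh_revenue_2017 = 55           # KSh billions revenue
ratio_2017 = ksh_revenue_2017 / ksh_moved_2017

print(f"2014: {ratio_2014:.1%} of money moved")   # 26.8%
print(f"2017: {ratio_2017:.1%} of money moved")   # 0.8%
```

The arithmetic confirms the two sources imply very different ratios (roughly 27% versus under 1%), which is why the figures should be treated with caution pending full citations.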
With the support of Financial Sector Deepening Kenya and the Bill & Melinda Gates Foundation, Tavneet Suri from the Massachusetts Institute of Technology and William Jack from Georgetown University have produced a series of papers extolling the benefits of M-PESA. In particular, their 2016 article published in Science has been influential in the international development community. The much-cited result of the paper was that "access to M-PESA increased per capita consumption levels and lifted 194,000 households, or 2% of Kenyan households, out of poverty."[29] Global development institutions focusing on the development potential of financial technology frequently cite M-PESA as a major success story in this respect, citing the poverty-reduction claim and including a reference to Suri and Jack's 2016 signature article. In a report on "Financing for Development", the United Nations writes: "The digitalization of finance offers new possibilities for greater financial inclusion and alignment with the 2030 Agenda for Sustainable Development and implementation of the Social Development Goals. In Kenya, the expansion of mobile money lifted two per cent of households in the country above the poverty line."[30]
However, these findings on the role of M-PESA in reducing poverty have been contested in a 2019 paper, arguing that "Suri and Jack’s work contains so many serious errors, omissions, logical inconsistencies and flawed methodologies that it is actually correct to say that they have helped to catalyse into existence a largely false narrative surrounding the power of the fin-tech industry to advance the cause of poverty reduction and sustainable development in Africa (and elsewhere)".[31]
M-PESA was first launched in March 2007 by the Kenyan mobile network operator Safaricom, in which Vodafone holds a minority stake (40%).[17] M-PESA quickly captured a significant market share for cash transfers and grew to 17 million subscribers by December 2011 in Kenya alone.[1]
The growth of the service forced formal banking institutions to take note of the new venture. In December 2008, a group of banks reportedly lobbied the Kenyan finance minister to audit M-PESA, in an effort to at least slow the growth of the service. This ploy failed, as the audit found that the service was robust.[32] At the time, the Banking Act did not provide a basis to regulate products offered by non-banks, of which M-PESA was one very successful example. As of November 2014, M-PESA transactions for the first 11 months of 2014 were valued at KSh2.1 trillion, a 28% increase from 2013 and almost half the value of the country's GDP.[citation needed]
On 19 November 2014, Safaricom launched a companion Android app, Safaricom M-Ledger,[33][non-primary source needed] for its M-PESA users. Initially available only on Android, the application is now also supported on iOS devices. It gives M-PESA users a historical view of all their transactions. Many other companies' business models rely on the M-PESA system in Kenya, such as M-Kopa and Sportpesa.[34]
On 23 February 2018, it was reported that the Google Play store had started taking payments for apps via Kenya's M-PESA service.[35] On 8 January 2019, Safaricom launched Fuliza, an M-PESA overdraft facility.[36]
M-PESA was launched in Tanzania by Vodacom in 2008, but its initial ability to attract customers fell short of expectations. In 2010, the International Finance Corporation released a report which explored many of these issues in greater depth and analyzed the strategic changes that Vodacom has implemented to improve its market position.[37] As of May 2013, M-PESA in Tanzania had five million subscribers.[38]
In 2008, Vodafone partnered with Roshan, Afghanistan's primary mobile operator, to provide M-Paisa, the local brand of the service.[39] When the service was launched, it was initially used to pay policemen's salaries, which were set to be competitive with what the Taliban were earning. Soon after the product was launched, the Afghan National Police found that under the previous cash model, 10% of their workforce were ghost police officers who did not exist; their salaries had been pocketed by others. When this was corrected in the new system, many police officers believed that they had received a raise or that there had been a mistake, as their salaries rose significantly. The National Police discovered that there had been so much corruption under the previous model that policemen had not known their true salary. The service was so successful that it was expanded to include limited merchant payments, peer-to-peer transfers, loan disbursements and payments.[40]
In September 2010, Vodacom and Nedbank announced the launch of the service in South Africa, where there were estimated to be more than 13 million "economically active" people without a bank account.[41] M-PESA was slow to gain a toehold in the South African market compared to Vodacom's projection that it would sign up 10 million users within three years: by May 2011, it had registered approximately 100,000 customers.[42] The gap between expectations for M-PESA's performance and its actual performance can be partly attributed to differences between the Kenyan and South African markets, including the banking regulations at the time of M-PESA's launch in each country.[43] According to MoneyWeb,[44] a South African investment website, "A tough regulatory environment with regards to customer registration and the acquisition of outlets also compounded the company's troubles, as the local regulations are more stringent in comparison to our African counterparts. Lack of education and product understanding also hindered efforts in the initial roll out of the product." In June 2011, Vodacom and Nedbank launched a campaign to re-position M-PESA, targeting the product at potential customers with a higher Living Standards Measure (LSM)[45] than those first targeted.[46]
Despite these efforts, as of March 2015, M-PESA still struggled to grow its customer base: South Africa lagged behind Tanzania and Kenya with only about 1 million subscribers. This is unsurprising, as South Africa's financial institutions are well known for being among the most mature and technologically innovative globally. According to Genesis Analytics, 70% of South Africans are "banked", meaning that they have at least one account with an established financial institution, whose own banking products compete directly with the M-PESA offering.[47]
M-PESA was launched in India[48] in a close partnership with ICICI Bank in November 2011.[49] Development for the bank had begun as early as 2008. Vodafone India had partnered with ICICI Bank,[50] and ICICI launched M-PESA on 18 April 2013.[51] Vodafone had planned to roll the service out throughout India.[52] Users needed to register for the service; registration was free, charges were levied per M-PESA transaction for money transfer services, and DTH and prepaid recharges could be done through M-PESA for free.[53][54]
M-PESA in India was shut down on 15 July 2019 due to regulatory curbs and stress in the sector,[55] with Vodafone surrendering its PPI licence on 1 October 2019.[56]
In March 2014, M-PESA expanded into Romania, with Vodafone mentioning that it might continue to expand elsewhere in Eastern Europe, where many people possess mobile phones but not traditional bank accounts. As of May 2014, however, it was considered unlikely that the service would expand into Western Europe anytime soon.[57] In December 2017, Vodafone closed its M-PESA product in Romania.[58]
In May 2015, M-PESA was launched in Albania. It was shut down on 14 July 2017.[59]
M-PESA expanded into Mozambique, Lesotho, and Egypt in May, June, and July 2013, respectively. A full listing of countries in which M-PESA currently operates can be found on M-PESA's website.[citation needed]
M-PESA sought to engage Kenyan regulators and keep them updated on the development process. M-PESA also reached out to international regulators, such as the UK's Financial Conduct Authority (FCA) and the payment card industry, to understand how best to protect client information and adhere to internationally recognized best practices.[60]
Know your customer (KYC) requirements impose obligations on prospective clients and on banks to collect identification documents of clients and then to have those documents verified by banks.[61] The Kenyan government issues national identity cards that M-PESA leveraged in its business processes to satisfy KYC requirements.[62]
M-PESA obtained a "special" license from regulators, despite regulators' concerns that non-branch banking could add to financial instability.
Safaricom released the new M-PESA platform dubbed M-PESA G2 to offer versatile integration capabilities for development partners.
Client-to-business and business-to-client disbursements are some of the features available through the API.
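As an illustration only, a business-to-client disbursement request might be assembled as below. The field names and values are hypothetical placeholders introduced for this sketch; they are not taken from the actual M-PESA G2 API specification.

```python
# Hypothetical sketch of assembling a B2C disbursement payload.
# All field names are illustrative assumptions, not the real G2 API schema.

import json

def build_b2c_request(shortcode, recipient_msisdn, amount, remarks):
    """Assemble an illustrative B2C disbursement payload (hypothetical fields)."""
    return {
        "InitiatorShortcode": shortcode,  # paying business (hypothetical field)
        "Recipient": recipient_msisdn,    # customer phone number (hypothetical field)
        "Amount": amount,                 # amount in KSh
        "Remarks": remarks,
    }

payload = build_b2c_request("600000", "254700000000", 1500, "Refund")
print(json.dumps(payload, indent=2))
```

In practice such a payload would be sent over an authenticated HTTPS channel to the operator's API gateway; consult the official developer documentation for the real endpoint and schema.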
The near-monopolistic providers of the M-PESA service are sometimes criticized for the high cost that the service imposes on its often poor users. The Bill and Melinda Gates Foundation warned in 2013 that lack of competition could drive up prices for customers of mobile money services and used M-PESA in Kenya as a negative example. According to the Foundation, a transfer of $1.50 cost $0.30 at the time, while the same provider charged only a tenth of this in neighboring Tanzania, where it was exposed to more competition.[63]A study sponsored by USAID found that poor uneducated customers, who often had bad vision, were a target of unfair practices within M-PESA. They had expensive subscriptions for ring-tones and similar unnecessary services pushed on them, with opaque pricing, and thus did not understand why their M-PESA deposits depleted so quickly. If they did, they were often unable to unsubscribe from those services without help. The authors concluded that it is not the marginalized people in Kenya who benefit from M-PESA, but mostly Safaricom.[64]A similar conclusion was reached by development economist Alan Gibson in a study commissioned byFinancial Sector Deepening Trust Kenya(FSD Kenya) on the occasion of the 10th anniversary of FSD Kenya in 2016.[65]He wrote that credit to business did not improve due to M-PESA and that credit to the agricultural sector even declined. He concluded in his otherwise very friendly survey that the financial sector benefitted handsomely from the expansion of M-PESA, while the living conditions of the people were not noticeably improved.
Milford Bateman et al. even conclude that M-PESA's expansion resulted in holding back economic development in Kenya. They diagnose serious weaknesses in the much cited paper by Suri and Jack, which had found positive effects on poverty, as M-PESA enabled female clients to move out of subsistence agriculture into micro-enterprise or small-scale trading activities. Alleged weaknesses include a failure to incorporate business failures and crowding out of competitors in the analysis. Bateman et al. call M-Pesa an extractive activity, by which large profits are created from taxing small-scale payments, which would be free if cash was used instead. As a large part of these profits are sent abroad to foreign shareholders of Safaricom, local spending power and demand are reduced, and with it the development potential for local enterprise.[66][67]
Kenya does not have a data protection law, which enables Safaricom to use sensitive data of its subscribers rather freely. A data scandal surfaced in 2019 when Safaricom was sued in court for the alleged breach of data privacy of an estimated 11.5 million subscribers who had used their Safaricom numbers for sports betting. The data was allegedly offered on the black market.[68]
|
https://en.wikipedia.org/wiki/M-Pesa
|
Project Enterprise was an American microfinance nonprofit organization in New York City providing entrepreneurs from underserved areas with loans, business training and networking opportunities. Operating on the Grameen Bank model of microlending, as of 2008[update] Project Enterprise (PE) had served more than 2,500 entrepreneurs in New York City, and provided microloans from $1,500 to $12,000.[2][3] The organization's website was closed in 2017.
Project Enterprise was started in 1997 as the only provider of business microloans in New York City that did not require prior business experience, credit history or collateral to provide market-rate financing for small businesses.[4][5] PE became a certified Community Development Financial Institution in 1998. Founding executive director Vanessa Rudin was replaced by Arva Rice in November 2003.
From 2004 to 2006, PE saw substantial growth, with increasing numbers of loans and total amounts lent. After conducting focus groups, PE developed new loan products, events and resources for entrepreneurs. It launched a networking event program, Big Connections, and an Access to Markets program to help entrepreneurs bring products and services to the marketplace.[6]
During the economic downturn, Project Enterprise saw an increase in demand and in 2008 had its best year since inception.[7]Mel Washington became the Executive Director on 1 September 2009.
In 2006, PE won the Association for Enterprise Opportunity's Innovation in Program Design Award for the Access to Markets initiative. In 2007, PE staff member Althea Burton was named the New York Small Business Administration's Home-Based Business Champion of the Year.[citation needed]
The organization's website was closed, and it appeared to cease operations, in 2017.
|
https://en.wikipedia.org/wiki/Project_Enterprise
|
Solidarity lending is a lending practice where small groups borrow collectively and group members encourage one another to repay. It is an important building block of microfinance.
Solidarity lending takes place through 'solidarity groups'. These groups are a distinctive banking distribution channel used primarily to deliver microcredit to poor people. Solidarity lending lowers the costs to a financial institution related to assessing, managing and collecting loans, and can eliminate the need for collateral. Since there is a fixed cost associated with each loan delivered, a bank that bundles individual loans together and permits a group to manage individual relationships can realize substantial savings in administrative and management costs.
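The fixed-cost argument can be made concrete with assumed figures: if each loan the bank administers carries the same fixed administrative cost, bundling five borrowers into one group loan divides that cost five ways. The US$25 cost and the group size of five are illustrative assumptions.

```python
# Illustrative sketch of why bundling loans into solidarity groups lowers
# per-borrower administrative cost (all figures assumed).

FIXED_COST_PER_LOAN = 25.0  # assumed administrative cost per loan, US$
GROUP_SIZE = 5              # assumed solidarity group size

individual_cost = FIXED_COST_PER_LOAN            # per borrower, lent individually
grouped_cost = FIXED_COST_PER_LOAN / GROUP_SIZE  # one bundled loan, five borrowers

print(f"per-borrower cost, individual lending: ${individual_cost:.2f}")
print(f"per-borrower cost, solidarity group:   ${grouped_cost:.2f}")
```

The same logic scales to portfolios of thousands of loans, which is why the savings for the lender can be substantial even though each loan is tiny.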
In many developing countries the legal system offers little, if any, support for the property rights of poor people. Laws related to secured transactions – a cornerstone of Western banking – may also be absent or unenforced. Instead, solidarity lending leverages various types of social capital, such as peer pressure, mutual support and a healthy culture of repayment. These characteristics make solidarity lending more useful in rural villages than in urban centres, where mobility is greater and social capital is weaker.
Efforts to replicate solidarity lending in developed countries have generally not succeeded. For example, the Calmeadow Foundation tested an analogous 'peer lending' model in three locations in Canada – rural Nova Scotia and urban Toronto and Vancouver – during the 1990s. It concluded that a variety of factors – including difficulties in reaching the target market, the high risk profile of clients, their general distaste for the joint liability requirement, and high overhead costs – made solidarity lending unviable without subsidies.[1] However, debates have continued about whether the required subsidies may be justified as an alternative to other subsidies targeted at the entrepreneurial poor, and VanCity Credit Union, which took over Calmeadow's Vancouver operations, continues to use peer lending.
Tapping social capital to lend money is not new to microfinance. Earlier precedents include the informal practices of ROSCAs and the bonds of association used in credit unions. In India, the practice of self-help group banking is inspired by similar principles.
However, solidarity groups are distinctly different from earlier approaches in several important ways.
First, solidarity groups are very small, typically involving five individuals who are allowed to choose one another but cannot be related. Five is often cited as an ideal size because it is:
Much evidence has also shown that social pressure is more effective among women than among men. The vast majority of loans using this methodology are delivered to women.
Learning from the failure of the Comilla Model of cooperative credit piloted by Akhtar Hameed Khan in the 1950s and '60s, Grameen Bank and many other microcredit institutions have also taken an assertive approach to targeting poor women and excluding non-poor individuals entirely.
A major reason for the prior failure of credit cooperatives in Bangladesh was that the groups were too big and consisted of people with varied economic backgrounds. These large groups did not work because the more affluent members captured the organizations.[2]
An early pioneer of solidarity lending, Dr. Muhammad Yunus of Grameen Bank in Bangladesh, describes the dynamics of lending through solidarity groups this way:
... Group membership not only creates support and protection but also smooths out the erratic behavior patterns of individual members, making each borrower more reliable in the process. Subtle and at times not-so-subtle peer pressure keeps each group member in line with the broader objectives of the credit program. … Because the group approves the loan request of each member, the group assumes moral responsibility for the loan. If any member of the group gets into trouble, the group usually comes forward to help.[3]
The Grameen approach uses solidarity groups as its primary building block. However, responsibility for delinquent loans is handled by the elected leaders of a larger, village-level group called a 'centre' composed of eight solidarity groups. Because all the members are from the same village and loan payments take place during the centre meeting, the principle of using social capital for leverage is not compromised; the only difference is that all the members of the centre are collectively responsible for unpaid loans.[4]
Many microcredit institutions use solidarity groups essentially as a form of joint liability. That is, they will take any action practical to collect a seriously delinquent loan not just from the individual member, but from any member of the solidarity group with the capability to repay it. But Yunus has always rejected this concept, arguing that whatever moral responsibility may pertain among group members, there is no formal or legal "... form of joint liability, i.e. group members are not responsible to pay on behalf of a defaulting member."[5]
Solidarity lending is widespread in microfinance today, particularly in Asia. In addition to Grameen Bank, major practitioners include SEWA, Bank Rakyat Indonesia, Accion International, FINCA, BRAC and SANASA. The Calmeadow Foundation was another important pioneer.
The Microbanking Bulletin tracks solidarity lending as a separate microfinance methodology. Of 446 microfinance institutions worldwide that it was tracking at the end of 2005, 39 lent only through this method, while another 205 used a mix of solidarity and individual lending. The average loan balance outstanding at solidarity lenders was $109 (19% of local gross national income), compared to $1,024 (61% of local gross national income) among individual lenders. This shows not only that solidarity lenders are meeting the needs of a significantly poorer market segment, but also that they are doing it in significantly poorer countries.[6]
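The "poorer countries" conclusion follows directly from the figures above: dividing each average loan balance by its share of local per-capita gross national income gives the implied per-capita GNI of the countries where each type of lender operates. A quick sketch of that arithmetic, using only the Bulletin figures quoted here:

```python
# Average outstanding loan balance and its share of local per-capita GNI,
# as reported by the Microbanking Bulletin at the end of 2005.
solidarity_balance, solidarity_share = 109, 0.19
individual_balance, individual_share = 1024, 0.61

# Implied local per-capita GNI = loan balance / share of GNI.
solidarity_gni = solidarity_balance / solidarity_share
individual_gni = individual_balance / individual_share

print(round(solidarity_gni), round(individual_gni))  # 574 1679
```

The implied per-capita GNI where solidarity lenders operate (about $574) is roughly a third of the figure for individual lenders (about $1,679), which is what supports the claim about significantly poorer countries.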
In Costa Rica, many companies enable their employees to organize Asociaciones Solidaristas (Solidarism Associations), which enable them to create savings funds, loans, and financial activities (for example, managing the company's coffee shop) with financial support from the company.[7]
Solidarity lending has clearly had a positive impact on many borrowers. Without it, many would not have borrowed at all, or would have been forced to rely on loan sharks. However, it has been the subject of much criticism. A recent survey of the empirical research concludes that the search for alternative approaches must continue, and highlights problems such as "borrowers growing frustrated at the cost of attending regular meetings, loan officers refusing to sanction good borrowers who happen to be in 'bad' groups, and constraints imposed by the diverging ambitions of group members."[8]
Efforts to ensure that all members of solidarity groups are equally poor may not always improve group performance. Greater socio-economic diversity "means that group members' incomes are less likely to vary together, and thus group members' ability to insure each other increases".[9] The solidarity lending approach, which excludes less-poor borrowers, was adopted in large part because of a view that the more inclusive cooperative 'bond of association' had failed in Bangladesh (see the Comilla Model). But the founder of Bangladesh's credit cooperatives, Akhter Hameed Khan, documented that the Model's practices contravened two fundamental credit union operating principles: independence from government intervention and local financial self-reliance.[10] The case that the 'inclusive' approach to organizational development used by the Comilla Model accounts for its failure has not been made.
While poverty-targeting has had many successes, social solidarity is not solely a tool for the lending institution – it can also be used by borrowers. A loan 'strike', if it gains the sympathy of a large number of borrowers, can lead to a rapid and highly destabilizing escalation in delinquencies. It was this type of circumstance that led, in 1998, to the rapid escalation of delinquency at Grameen Bank that prompted the redesign dubbed 'Grameen II'.[11]
A frequently reproduced photo – a group of women seated in rows on the ground before a male NGO officer who sits in a chair processing their payments – encapsulates another common critique. Solidarity groups may be composed entirely of women, but the staff who decide when and whether they receive financial services are often dominated by men.
https://en.wikipedia.org/wiki/Solidarity_lending
The Women's Development Bank (Spanish: Banco Nacional de la Mujer or Banmujer) was established in Venezuela in 2001 to remedy the political, economic, and social disadvantages faced by women. The bank offers both financial and non-financial services to women. Its first president was Nora Castañeda.
The bank provides small, low-interest loans, known as micro-credit loans, ranging from 500,000 to 1,000,000 bolívares (500 to 1,000 bolívares fuertes, US$260 to $520) per woman, for the establishment of business ventures. Loans are not granted to individuals, but rather to groups of five to ten women. In this manner, the bank is ideologically aligned with President Hugo Chávez by promoting community solidarity over individualism, which is associated with capitalism. The bank has provided over 40,000 such loans since its establishment. The bank also offers financial advice to women and serves as a consultant in the formation and development of business projects.
The Women's Development Bank also offers a number of non-financial services. The bank provides administrative training for aspiring female entrepreneurs, as well as workshops on personal development, self-esteem, family planning and health. The workshops encourage dialogue within the community and stimulate a greater involvement of women in politics.
The Bank is distinct from other banks in that it does not have branch offices; rather, it consists of a network of supporters who visit 149 impoverished and over-populated areas on a weekly basis, and offer the bank's services to women who otherwise would not have access to banking services. Bank members also make house calls.
The Bank attempts to promote self-sufficiency by minimizing the requirements to receive a loan. The bank offers direction to encourage the success of women's projects, but does not dictate how their businesses should be run. This presents a challenge to many marginalized women who are illiterate. In instances where women are illiterate or otherwise have difficulty in overseeing a business venture, a female family member or friend will oversee the project until the woman becomes literate. The Bank also directs women to Mission Robinson, a literacy campaign launched by Chávez's government.
https://en.wikipedia.org/wiki/The_Women%27s_Development_Bank
Oikocredit (in full Oikocredit, Ecumenical Development Cooperative Society U.A.[1]) is a cooperative society that offers loans or investment capital for microfinance institutions, cooperatives and small and medium-sized enterprises in developing countries. It is one of the world's largest private financiers of the microfinance sector. The idea for Oikocredit came from a 1968 meeting of the World Council of Churches. Following this, Oikocredit was established in 1975 in the Netherlands.[2]
As a co-operative, Oikocredit finances and invests in fair trade organisations, co-operatives, microfinance institutions, and small to medium-sized enterprises in many developing countries. In 2016 Oikocredit had offices in 31 countries.[2]
On 25 March 2010, Oikocredit announced that it had reached €1 billion in cumulative committed loans and investments since the organisation began its operations. In 2009, a year when financial institutions encountered difficult market conditions, Oikocredit achieved record high inflows: Oikocredit's capital inflow reached €62.9 million and its total assets grew by 13% to €537 million at the year end.[3]
Oikocredit's financial resources primarily derive from investments (£150 minimum, no maximum) contributed by 54,000 individuals,[2] institutions and faith-based organisations worldwide. In return, investors have received stable gross dividends, paid (or re-invested) annually since 2000, with no fixed notice period, together with annual social performance reports that monitor the impact of Oikocredit and its partner organisations in developing countries.
As of 2014, Oikocredit channeled €40.3m (£31.7m) of its investment capital into 85 fair trade partner organisations, many in coffee and cocoa enterprises in Latin America and Africa.[4] In the same year Oikocredit launched an agriculture unit led from Lima, Peru. The cooperative also diversified into renewable energy, recruiting a renewable energy financing expert to source deals in low-income countries.
On 16 November 2016, Alternative Bank Schweiz (ABS) launched a banking partnership with Oikocredit.[5]
"Oiko" is derived from the Greek word "οἶκος" or "oikos", meaning "the house, the place where the people live together", which is also the root of the word, "economy".
There are allegations that Oikocredit has failed to conduct adequate due diligence on its investments in Cambodia's microfinance sector since at least 2017, despite evidence of harms directly linked to those investments.[6] The Cambodian League for the Promotion and Defense of Human Rights (LICADHO), Equitable Cambodia (EC) and FIAN Germany filed a complaint against Oikocredit at NCP Netherlands.[7] In response, Oikocredit said that it had not identified any forced land sales and that customer protection is its top priority.[8]
https://en.wikipedia.org/wiki/Oikocredit
Governance is a broader concept than government and also includes the roles played by the community sector and the private sector in managing and planning countries, regions and cities.[1] Collaborative governance involves the government, community and private sectors communicating with each other and working together to achieve more than any one sector could achieve on its own. Ansell and Gash (2008) have explored the conditions required for effective collaborative governance. They say "The ultimate goal is to develop a contingency approach of collaboration that can highlight conditions under which collaborative governance will be more or less effective as an approach to policy making and public management".[2] Collaborative governance covers both the informal and formal relationships in problem solving and decision-making. Conventional government policy processes can be embedded in wider policy processes by facilitating collaboration between the public, private and community sectors.[3] Collaborative governance requires three things: support, leadership and a forum. The support identifies the policy problem that needs to be fixed. The leadership gathers the sectors into a forum. Then the members of the forum collaborate to develop policies, solutions and answers.[4]
There are many different forms of collaborative governance, such as Consensus Building and a Collaborative Network:
Over the past two decades new collaborative approaches to governing and managing have developed in a range of fields, including: urban and regional planning; public administration and law; natural resource management; and environmental management. Collaborative governance has emerged as a response to the failures of government policy implementation, to the high cost and politicization of regulation, and as an alternative to managerialism and adversarial approaches.[5] The field of public administration has changed its focus from bureaucracy to collaboration in the context of the network society. Public administrators have blurred the lines between the people, the private sector and the government. Although bureaucracies still remain, public administrators have begun to recognize that more can potentially be achieved by collaboration and networking.[6] Collaboration and partnerships are nothing new in the political realm; however, the wider use of this leadership style has gained momentum in recent years. In part, this is a response to neoliberalism, with its focus on the primacy of the free-market economy and the private sector.
Ansell and Gash (2008) define collaborative governance as follows:[7]
'A governing arrangement where one or more public agencies directly engage non-state stakeholders in a collective decision-making process that is formal, consensus-oriented, and deliberative and that aims to make or implement public policy or manage public programs or assets'.
This definition involves six criteria: (1) the forum is initiated by public agencies; (2) participants in the forum include non-state actors; (3) participants engage in decision making and are not merely consulted; (4) the forum is formally organized; (5) the forum aims to make decisions by consensus; and (6) the focus of collaboration is on public policy or public management.
Emerson, Nabatchi and Balogh (2012) have developed a less normative and less restrictive definition, as follows:[8]
'The processes and structures of public policy decision making and management that engage people constructively across the boundaries of public agencies, levels of government, and/or the public, private and civic spheres in order to carry out a public purpose that could not otherwise be accomplished.'
This framework definition is a broader analytic concept and does not limit collaborative governance to state-initiated arrangements and to engagement between government and non-government sectors. For example, the definition encompasses collaboration between governments at different levels and hybrid partnerships initiated by the private or community sectors.
The intent of collaborative governance is to improve the overall practice and effectiveness of public administration. The advantages of effective collaborative governance are that it enables a better and shared understanding of complex problems involving many stakeholders and allows these stakeholders to work together and agree on solutions. It can help policy makers identify and target problems and deliver action more effectively. Stakeholders that are involved in developing a solution are more inclined to accept directions given or decisions made. It can thus serve as a way to identify policy solutions that have greater traction in the community. Additionally, it can contribute new perspectives on issues and policy solutions and thus offer new ways to implement strategies for change. For public officials who work in administration and management, collaborative governance can serve as a way of genuinely allowing a wider array of ideas and suggestions in the policy process. It may also be used to test ideas and analyze responses before implementation. For those who are not involved in formal government, it allows them to better understand the inner workings of government and carry more influence in the decision making process. It also enables them to see beyond government institutions being merely a vehicle for service delivery. They are able to feel ownership and a closer relationship to the system, further empowering them to be agents within institutional decision making.[9] For both public and private sectors, a commitment to collaboration is likely to drive organizational change and affect resource reallocation. Other advantages include combining relevant skills and capacities, as well as allowing specialization. Overall, collaborative governance can lead to mutual learning and shared experiences, while also providing direction for institutional capacity building inside and outside agencies and organizations.[10]
The disadvantages of collaborative governance in relation to complex problems are that the process is time consuming, it may not reach agreement on solutions, and the relevant government agencies may not implement the agreed solutions. In a complex structure with many entities working together, individual roles can become unclear and confusing. Some individuals act largely in a personal capacity, while others may act on behalf of agencies or organizations. Powerful stakeholder groups may seek to manipulate the process. Stakeholders can also begin to feel 'stakeholder fatigue', a feeling they get when they are repeatedly consulted by different agencies on similar issues. This kind of dynamic can be burdensome and time consuming.[9] Structural issues also affect agendas and outcomes. Open structures with loose leadership and membership allow multiple participants to gain access to a fast-expanding agenda. Achieving goals in such a wide agenda becomes more difficult as an increasing number of players struggle to resolve differences and coordinate actions. Furthermore, challenges arise for implementation when representatives are allowed to come and go with no real obligations to other collaborators. Accountability of participating members, unequal or hidden agendas, trust between members, power imbalances, and language and cultural barriers are all issues that can arise in collaborative governance regimes. Critics argue that collaborative governance does not provide the institutional stability and consistency required, and therefore deters progress.[11] The work of Ansell and Gash (2008) and Emerson, Nabatchi and Balogh (2012) seeks to understand these issues and challenges and identify the social and process conditions required for effective collaborative governance.
Collaborative governance has been used to address many complex social, environmental and urban planning issues, including: flood crisis management and urban growth management in Australia; community visioning and planning in New Zealand; and public participation in the redesign of the Ground Zero site in New York.[12]
In the UK, the USA and countries across much of Western Europe, governments have attempted to shift the focus towards various forms of co-production with other agencies and sectors, and with citizens themselves, in order to increase civic participation.[13] The classic forms of hierarchical governance and representative democracy are seen as inefficient when it comes to engaging citizens and making them a part of the decision making process. Large projects and initiatives require not only involvement of and communication with citizens but also partnerships with other government and non-government agencies and, in some instances, international cooperation with foreign governments and organizations. For example, managing the growing number of official and non-official crossings of the US-Mexico border has required input from all levels of US and Mexican government, multiple government agencies (like the U.S. Forest Service and U.S. Border Patrol), land management, and other non-federal agencies for social affairs. All of these parties had to communicate and collaborate to address issues of border security and protecting natural resources. As a result, the U.S. Border Patrol and Forest Service successfully enacted the terms of the 2006 memorandum of understanding, creating inter-agency forums, increasing field coordination and joint operations, and constructing fences and other tactical infrastructure.[14]
Governing and managing large and growing metropolitan urban areas, covering numerous local governments and various levels of State and National governments, provides many governance challenges and opportunities.[15] Abbott has reviewed metropolitan planning in South East Queensland (SEQ), Australia, where collaborative governance arrangements between State and local governments and the regional community have evolved over a 20-year period, leading to positive outputs and outcomes.[16]
The positive outputs and outcomes of collaborative governance and metropolitan planning in SEQ have been extensive and broad, extending well beyond statutory regional land use planning.
These include: three endorsed non-statutory regional plans; two endorsed statutory regional plans; an infrastructure program linked to the State budget; regional sectoral plans for transport, water supply, natural resource management, etc.; new legislation and institutional arrangements for metropolitan governance; and capital works such as the SEQ busway network.
India
The Government of India launched the Integrated Child Development Services (ICDS) program in 1975 to ensure appropriate growth and development of all children, but implementation was weak. To improve outcomes in the city of Mumbai, the program partnered with a non-profit, the Society for Nutrition, Education and Health Action (SNEHA), to build a child nutrition program for the care and prevention of acute malnutrition. This partnership also included the Municipal Corporation of Greater Mumbai (MCGM) and its Nutritional Rehabilitation and Research Centre (NRRC) at Lokmanya Tilak Municipal General Hospital. The collaboration between SNEHA, a non-state actor, and ICDS and MCGM, state actors, led to what is considered the only large-scale successful program implementing community-based approaches to identify, treat, and prevent wasting in urban informal settlements of India.[17]
https://en.wikipedia.org/wiki/Collaborative_governance
Deliberative democracy or discursive democracy is a form of democracy in which deliberation is central to decision-making. Deliberative democracy seeks quality over quantity by limiting decision-makers to a smaller but more representative sample of the population that is given the time and resources to focus on one issue.[1]
It often adopts elements of both consensus decision-making and majority rule. Deliberative democracy differs from traditional democratic theory in that authentic deliberation, not mere voting, is the primary source of legitimacy for the law. Deliberative democracy is related to consultative democracy, in which public consultation with citizens is central to democratic processes. The distance between deliberative democracy and concepts like representative democracy or direct democracy is debated. While some practitioners and theorists use deliberative democracy to describe elected bodies whose members propose and enact legislation, Hélène Landemore and others increasingly use deliberative democracy to refer to decision-making by randomly selected lay citizens with equal power.[2]
Deliberative democracy has a long history of practice and theory traced back to ancient times, with an increase in academic attention in the 1990s and growing implementations since 2010. Joseph M. Bessette has been credited with coining the term in his 1980 work Deliberative Democracy: The Majority Principle in Republican Government.[3]
Deliberative democracy holds that, for a democratic decision to be legitimate, it must be preceded by authentic deliberation, not merely the aggregation of preferences that occurs in voting. Authentic deliberation is deliberation among decision-makers that is free from distortions of unequal political power, such as power a decision-maker obtains through economic wealth or the support of interest groups.[4][5][6]
The roots of deliberative democracy can be traced back to Aristotle and his notion of politics; however, the German philosopher Jürgen Habermas' work on communicative rationality and the public sphere is often identified as a major work in this area.[7]
Deliberative democracy can be practiced by decision-makers in both representative democracies and direct democracies.[8] In elitist deliberative democracy, principles of deliberative democracy apply to elite societal decision-making bodies, such as legislatures and courts; in populist deliberative democracy, principles of deliberative democracy apply to groups of lay citizens who are empowered to make decisions.[5] One purpose of populist deliberative democracy can be to use deliberation among a group of lay citizens to distill a more authentic public opinion about societal issues for other decision-makers to consider; devices such as the deliberative opinion poll have been designed to achieve this goal. Another purpose of populist deliberative democracy can, like direct democracy, result directly in binding law.[5][9] If political decisions are made by deliberation but not by the people themselves or their elected representatives, then there is no democratic element; this deliberative process is called elite deliberation.[10][11]
James Fearon and Portia Pedro believe deliberative processes most often generate ideal conditions of impartiality, rationality and knowledge of the relevant facts, resulting in more morally correct outcomes.[12][13][14] Former diplomat Carne Ross contends that the processes are more civil, collaborative, and evidence-based than the debates in traditional town hall meetings or in internet forums if citizens know their debates will impact society.[15] Some fear the influence of a skilled orator.[16][17] John Burnheim critiques representative democracy as requiring citizens to vote for a large package of policies and preferences bundled together, much of which a voter might not want. He argues that this does not translate voter preferences as well as deliberative groups, each of which is given the time and the ability to focus on one issue.[18]
James Fishkin, who has designed practical implementations of deliberative democracy through deliberative polling for over 15 years in various countries,[15] describes five characteristics essential for legitimate deliberation:
Studies by James Fishkin and others have concluded that deliberative democracy tends to produce outcomes which are superior to those in other forms of democracy.[20][21] Desirable outcomes in their research include less partisanship and more sympathy with opposing views; more respect for evidence-based reasoning rather than opinion; a greater commitment to the decisions taken by those involved; and a greater chance for widely shared consensus to emerge, thus promoting social cohesion between people from different backgrounds.[10][15] Fishkin cites extensive empirical support for the increase in public spiritedness that is often caused by participation in deliberation, and says theoretical support can be traced back to foundational democratic thinkers such as John Stuart Mill and Alexis de Tocqueville.[22][23]
Joshua Cohen, a student of John Rawls, argued that the five main features of deliberative democracy include:[24]
Cohen presents deliberative democracy as more than a theory of legitimacy, and forms a body of substantive rights around it based on achieving "ideal deliberation":[24]
In Democracy and Liberty, an essay published in 1998, Cohen updated his idea of pluralism to "reasonable pluralism" – the acceptance of different, incompatible worldviews and the importance of good faith deliberative efforts to ensure that, as far as possible, the holders of these views can live together on terms acceptable to all.[25]
Amy Gutmann and Dennis F. Thompson's definition captures the elements that are found in most conceptions of deliberative democracy. They define it as "a form of government in which free and equal citizens and their representatives justify decisions in a process in which they give one another reasons that are mutually acceptable and generally accessible, with the aim of reaching decisions that are binding on all at present but open to challenge in the future".[26]
They state that deliberative democracy has four requirements, which refer to the kind of reasons that citizens and their representatives are expected to give to one another:
For Bächtiger, Dryzek, Mansbridge and Warren, the ideal standards of "good deliberation" which deliberative democracy should strive towards have changed:[6]
Consensus-based decision making similar to deliberative democracy has been found in different degrees and variations throughout the world going back millennia.[27] The most discussed early example of deliberative democracy arose in Greece as Athenian democracy during the sixth century BC. Athenian democracy was both deliberative and largely direct: some decisions were made by representatives, but most were made by "the people" directly. Athenian democracy came to an end in 322 BC. Even some 18th-century leaders advocating for representative democracy mention the importance of deliberation among elected representatives.[28][29][30]
The deliberative element of democracy was not widely studied by academics until the late 20th century. According to Professor Stephen Tierney, perhaps the earliest notable example of academic interest in the deliberative aspects of democracy occurred in John Rawls' 1971 work A Theory of Justice.[31] Joseph M. Bessette has been credited with coining the term "deliberative democracy" in his 1980 work Deliberative Democracy: The Majority Principle in Republican Government,[32][33] and went on to elaborate and defend the notion in "The Mild Voice of Reason" (1994). In the 1990s, deliberative democracy began to attract substantial attention from political scientists.[33] According to Professor John Dryzek, early work on deliberative democracy was part of efforts to develop a theory of democratic legitimacy.[34] Theorists such as Carne Ross advocate deliberative democracy as a complete alternative to representative democracy. The more common view, held by contributors such as James Fishkin, is that direct deliberative democracy can be complementary to traditional representative democracy. Others contributing to the notion of deliberative democracy include Carlos Nino, Jon Elster, Roberto Gargarella, John Gastil, Jürgen Habermas, David Held, Joshua Cohen, Amy Gutmann, Noëlle McAfee, Rense Bos, Jane Mansbridge, Jose Luis Marti, Dennis Thompson, Benny Hjern, Hal Koch, Seyla Benhabib, Ethan Leib, Charles Sabel, Jeffrey K. Tulis, David Estlund, Mariah Zeisberg, Jeffrey L. McNairn, Iris Marion Young, Robert B. Talisse, and Hélène Landemore.[citation needed]
Although political theorists took the lead in the study of deliberative democracy, political scientists have in recent years begun to investigate its processes. One of the main challenges currently is to discover more about the actual conditions under which the ideals of deliberative democracy are more or less likely to be realized.[35]
Drawing on the work ofHannah Arendt, Shmuel Lederman laments the fact that "deliberation andagonismhave become almost two different schools of thought" that are discussed as "mutually exclusive conceptions of politics"[36]as seen in the works ofChantal Mouffe,[37]Ernesto Laclau, andWilliam E. Connolly. Giuseppe Ballacci argues that agonism and deliberation are not only compatible but mutually dependent:[38]"a properly understood agonism requires the use of deliberative skills but also that even a strongly deliberative politics could not be completely exempt from some of the consequences of agonism".
Most recently, scholarship has focused on the emergence of a 'systemic approach' to the study of deliberation. This suggests that the deliberative capacity of a democratic system needs to be understood through the interconnection of the variety of sites of deliberation which exist, rather than any single setting.[39] Some studies have conducted experiments to examine how deliberative democracy addresses the problems of sustainability and underrepresentation of future generations.[40] Although not always the case, participation in deliberation has been found to shift participants' opinions in favour of environmental positions.[41][42][43]
Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm.[44] He argues this would reduce sensationalism, political polarization and democratic backsliding.[45] Jamie Susskind likewise calls for deliberative groups to make these kinds of decisions.[46] Meta commissioned a representative deliberative process in 2022 to advise the company on how to deal with climate misinformation on its platforms.[47]
The OECD has documented hundreds of examples and finds their use increasing since 2010.[48][49] For example, a representative sample of 4,000 lay citizens used a 'Citizens' congress' to coalesce around a plan for how to rebuild New Orleans after Hurricane Katrina.[50][15]
https://en.wikipedia.org/wiki/Deliberative_democracy
In governance, sortition is the selection of public officials or jurors at random, i.e. by lottery, in order to obtain a representative sample.[1][2][3][4]
In ancient Athenian democracy, sortition was the traditional and primary method for appointing political officials, and its use was regarded as a principal characteristic of democracy.[5][6] Sortition is often classified as a method for both direct democracy and deliberative democracy.
Today sortition is commonly used to select prospective jurors in common-law systems. What has changed in recent years is the increased number of citizen groups with political advisory power,[7][8] along with calls for making sortition more consequential than elections, as it was in Athens, Venice, and Florence.[9][10][11][12]
Athenian democracy developed in the 6th century BC out of what was then called isonomia (equality of law and political rights). Sortition was then the principal way of achieving this fairness. It was utilized to pick most[13][page needed] of the magistrates for their governing committees, and for their juries (typically of 501 men).
Most Athenians believed sortition, not elections, to be democratic[13][page needed] and used complex procedures with purpose-built allotment machines (kleroteria) to avoid the corrupt practices used by oligarchs to buy their way into office. According to the author Mogens Herman Hansen, the citizen's court was superior to the assembly because the allotted members swore an oath which ordinary citizens in the assembly did not; therefore, the court could annul the decisions of the assembly. Most Greek writers who mention democracy (including Aristotle,[13][page needed][Note 1][Note 2] Plato,[Note 3] Herodotus,[Note 4] and Pericles[Note 5]) emphasize the role of selection by lot, or state outright that being allotted is more democratic than elections (which were seen as oligarchic). Socrates[Note 6] and Isocrates,[Note 7] however, questioned whether randomly-selected decision-makers had enough expertise.
Past scholarship maintained that sortition had roots in the use of chance to divine the will of the gods, but this view is no longer common among scholars.[14][page needed]In Ancient Greek mythology, Zeus, Poseidon, and Hades used sortition to determine who ruled over which domain. Zeus got the sky, Poseidon the sea, and Hades the underworld.[15]
In Athenian democracy, to be eligible to be chosen by lot, citizens self-selected into the available pool, then into lotteries in the kleroteria machines. The magistracies assigned by lot generally had terms of service of one year. A citizen could not hold any particular magistracy more than once in his lifetime, but could hold other magistracies. All male citizens over 30 years of age, who were not disenfranchised by atimia, were eligible. Those selected through lot underwent an examination called dokimasia to ensure citizenship and consider life, character, and, at times, property; capacity for a post was assumed. Rarely were selected citizens discarded.[14][page needed] Magistrates, once in place, were subjected to constant monitoring by the Assembly. Magistrates appointed by lot had to render account of their time in office upon their leave, called euthynai. However, any citizen could request the suspension of a magistrate with due reason.
A kleroterion was used to select eligible and willing citizens to serve jury duty. This bolstered the initial Athenian system of democracy by getting new and different jury members from each tribe to avoid corruption.[citation needed] James Wycliffe Headlam explains that the Athenian Council (500 administrators randomly selected) would commit occasional mistakes such as levying taxes that were too high. Headlam found minor instances of corruption but deemed systematic oppression and organized fraud impossible due to widely (and randomly) distributed power combined with checks and balances.[16] Furthermore, power did not tend to go to those who sought it. The Athenians used an intricate machine, a kleroterion, to allot officers. Headlam found the Athenians largely trusted the system of random selection, regarding it as the most natural and the simplest way of appointment.[17] While sortition was used for most positions, elections were sometimes used for positions such as military commanders (strategos).[18]
The brevia was used in the city states of Lombardy during the 12th and 13th centuries and in Venice until the late 18th century.[19] Men, who were chosen randomly, swore an oath that they were not acting under bribes, and then they elected members of the council. Voter and candidate eligibility probably included property owners, councilors, guild members, and perhaps, at times, artisans. The Doge of Venice was determined through a complex process of nomination, voting and sortition.
Lot was used in the Venetian system only in order to select members of the committees that served to nominate candidates for the Great Council. A combination of election and lot was used in this multi-stage process. Lot was not used alone to select magistrates, unlike in Florence and Athens. The use of lot to select nominators made it more difficult for political sects to exert power, and discouraged campaigning.[14][page needed] By reducing intrigue and power moves within the Great Council, lot maintained cohesiveness among the Venetian nobility, contributing to the stability of this republic. Top magistracies generally still remained in the control of elite families.[20]
Scrutiny was used in Florence for over a century starting in 1328.[19] Nominations and voting together created a pool of candidates from different sectors of the city. The names of these men were deposited into a sack, and a lottery draw determined who would get to be a magistrate. The scrutiny was gradually opened up to minor guilds, reaching the greatest level of Renaissance citizen participation in 1378–1382.
In Florence, lot was used to select magistrates and members of the Signoria during republican periods. Florence utilized a combination of lot and scrutiny by the people, set forth by the ordinances of 1328.[14][page needed] In 1494, Florence founded a Great Council in the model of Venice. The nominatori were thereafter chosen by lot from among the members of the Great Council, indicating an increase in aristocratic power.[21]
During the Age of Enlightenment, many of the political ideals originally championed by the democratic city-states of ancient Greece were revisited. Yet the use of sortition as a means of selecting the members of government, while receiving praise from notable Enlightenment thinkers, received almost no discussion during the formation of the American and French republics.
Montesquieu's book The Spirit of Laws provides one of the most cited discussions of the concept in Enlightenment political writing. In it, he argues that sortition is natural to democracy, just as elections are to aristocracy.[22] He echoes the philosophy of much earlier thinkers such as Aristotle, who regarded elections as aristocratic.[14][page needed] Montesquieu qualifies his support by saying that there should also be some mechanisms to ensure the pool of selection is competent and not corrupt.[23] Rousseau also found that a mixed model of sortition and election provided a healthier path for democracy than one or the other.[24] Harrington likewise found the Venetian model of sortition compelling, recommending it for his ideal republic of Oceana.[25] Edmund Burke, in contrast, worried that those randomly selected to serve would be less effective and productive than self-selected politicians.[26][Note 8]
Bernard Manin, a French political theorist, was astonished to find so little consideration of sortition in the early years of representative government. He wonders if perhaps the choosing of rulers by lot may have been viewed as impractical on such a large scale as the modern state, or if elections were thought to give greater political consent than sortition.[14][page needed]
However, David Van Reybrouck disagrees with Manin's theories on the lack of consideration of sortition. He suggests that the relatively limited knowledge about Athenian democracy played a major role, with the first thorough examination coming only in 1891 with Election by Lot at Athens. He also argues that wealthy Enlightenment figures preferred to retain more power by holding elections, with most not even offering excuses on the basis of practicality but plainly saying they preferred to retain significant elite power,[27] citing commentators of 18th-century France and the United States suggesting that they simply dislodged a hereditary aristocracy to replace it with an elected aristocracy.[28]
Because financial gain could be achieved through the position of mayor, some parts of Switzerland used random selection during the years between 1640 and 1837 to prevent corruption.[29]
Before the random selection can be done, the pool of candidates must be defined. Systems vary in how this is done: some allot from eligible volunteers; some from candidates screened by education, experience, or a passing grade on a test; some from candidates elected by a group chosen in a previous round of random selection; and some from the membership or population at large. A multi-stage process in which random selection alternates with other screening methods can also be used, as in the Venetian system.
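The multi-stage variant described above can be sketched in a few lines of Python. This is purely an illustration of the mechanism, not a reconstruction of any historical procedure; the function names and the screening rule are hypothetical.

```python
import random

def allot(pool, n, rng=random):
    """Draw n members uniformly at random from an eligibility pool."""
    return rng.sample(pool, n)

def two_stage_allotment(volunteers, nominator_count, seat_count, screen):
    """Venetian-style two-stage process (illustrative): randomly draw
    nominators, who then screen candidates; final seats are drawn by
    lot from the screened short-list."""
    nominators = allot(volunteers, nominator_count)
    shortlist = [p for p in volunteers if screen(nominators, p)]
    return allot(shortlist, seat_count)
```

Alternating lot with screening in this way preserves the anti-campaigning property of the lottery while still applying a competence or eligibility filter between stages.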
David Chaum proposed selecting a random sample of eligible voters to study and vote on a public policy,[30][31] while deliberative opinion polling invites a random sample to deliberate together before voting on a policy.[30]
Andranik Tangian critiques electoral politics as over-representing politically active people and groups in a society.[32][18] Cognitive diversity (or wisdom of the crowd) utilizes a variety of perspectives and cognitive skills to find better solutions.[33] According to numerous scholars such as Page and Landemore,[34] this diversity is more important to creating successful ideas than the average ability level of a group. Page argues that a randomly selected group of persons of average intelligence performs better than a collection of the best individual problem solvers.[35] This "diversity trumps ability theorem"[36] is central to the arguments for sortition.[34]
Some argue that randomly allocating decision-making is more efficient than representative democracy through elections.[37][38] John Burnheim critiques representative democracy as requiring citizens to vote for a large package of policies and preferences bundled together in one representative or party, much of which a voter might not want. He argues that this does not translate voter preferences as well as sortition, where a group of people have the time and the ability to focus on a single issue.[39] By allowing decision-makers to focus on positive-sum endeavors rather than zero-sum elections, it could help to lessen political polarization[38][40] and the influence of money and interest groups in politics.[28] Some studies show an overrepresentation of psychopathic and narcissistic traits in elected officials, a problem sortition avoids by not selecting for people who seek power.[41][42]
Burnheim also notes the importance of legitimacy for the effectiveness of the practice.[43] Legitimacy depends on success in achieving representativeness; if that is not met, the use cases of sortition could be limited to consultative or political agenda-setting bodies.[44] Oliver Dowlen points to the egalitarian nature of sortition: all citizens have an equal chance of entering office, irrespective of the societal biases that appear in elected bodies, which can make allotted bodies more representative.[45][46] To bolster legitimacy, other sortition bodies have been used and proposed to set the rules to improve accountability without the need for elections.[47] The introduction of a variable percentage of randomly selected independent legislators in a parliament can increase the global efficiency of a legislature, in terms of both the number of laws passed and the average social welfare obtained[48] (this work is consistent with a 2010 paper on how the adoption of random strategies can improve the efficiency of hierarchical organizations[49]).[50]
As participants grow in competence by contributing to deliberation, they also become more engaged and interested in civic affairs.[51]Most societies have some type of citizenship education, but sortition-based committees allow ordinary people to develop their own democratic capacities through direct participation.[52]
Sortition is most commonly used to form deliberative mini-publics like citizens' assemblies (or the smaller citizen juries).[53] The OECD has counted almost 600 examples of citizens' assemblies with members selected by lottery for public decision making.[2]
Sortition is commonly used in selecting juries in Anglo-Saxon[54] legal systems and in small groups (e.g., picking a school class monitor by drawing straws). In public decision-making, individuals are often determined by allotment if other forms of selection such as election fail to achieve a result. Examples include certain hung elections and certain votes in the UK Parliament. Some contemporary thinkers like David Van Reybrouck have advocated a greater use of selection by lot in today's political systems.
Sortition is also used in military conscription, as one method of awarding US green cards, and in placing students into some schools, university classes, and university residences.[55][56]
Sortition also has potential for helping large associations to govern themselves democratically without the use of elections. Co-ops, employee-owned businesses, housing associations, Internet platforms, student governments, and other large membership organizations often find elections problematic, since their members generally do not know many other members yet seek to run the organization democratically.[57][58] Examples include the Samaritan Ministries Health Plan using a panel of 13 randomly selected members to resolve select disputes[59] and the New Zealand Health Research Council awarding funding at random to applicants considered equally qualified.[60]
A citizens' assembly is a group of people selected by lottery from the general population to deliberate on important public questions so as to exert an influence.[61][62][63][64] Other names and variations of deliberative mini-publics include citizens' jury, citizens' panel, people's panel, people's jury, policy jury, consensus conference and citizens' convention.[65][66][67][68]
A citizens' assembly uses elements of a jury to create public policy.[69] Its members form a representative cross-section of the public, and are provided with time, resources and a broad range of viewpoints to learn deeply about an issue. Through skilled facilitation, the assembly members weigh trade-offs and work to find common ground on a shared set of recommendations. Citizens' assemblies can be more representative and deliberative than public engagement, polls, legislatures or ballot initiatives.[70][71] They seek quality of participation over quantity. They also have added advantages on issues where politicians have a conflict of interest, such as initiatives that will not show benefits before the next election or decisions that impact the types of income politicians can receive. They are also particularly well-suited to complex issues with trade-offs and values-driven dilemmas.[72]
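In practice, the "representative cross-section" is usually achieved with a stratified lottery: random draws within demographic quotas rather than one draw from the whole pool. A minimal sketch, assuming quotas are given per stratum and each stratum has enough candidates (all names here are illustrative, not from any specific assembly's procedure):

```python
import random

def stratified_lottery(candidates, quotas, key, rng=random):
    """Select a panel matching demographic quotas. `candidates` is a list
    of records, `quotas` maps each stratum (e.g. a region or age band)
    to a seat count, and `key` extracts a candidate's stratum."""
    panel = []
    for stratum, seats in quotas.items():
        eligible = [c for c in candidates if key(c) == stratum]
        panel.extend(rng.sample(eligible, seats))  # uniform draw within the stratum
    return panel
```

Within each stratum the draw remains uniformly random, so every eligible citizen in a stratum has the same chance of a seat; the quotas only control the panel's overall composition.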
Political scientist Robert A. Dahl suggests that an advanced democratic state could form groups which he calls minipopuli. Each group would consist of perhaps a thousand citizens randomly selected, and would either set an agenda of issues or deal with a particular major issue. It would hold hearings, commission research, and engage in debate and discussion. Dahl suggests that the minipopuli should supplement, rather than replace, legislative bodies.[77] Claudia Chwalisz has also advocated for using citizens' assemblies selected by sortition to inform policymaking on an ongoing basis.[78][79][80][81]
John Burnheim envisioned a political system in which many small citizens' juries would deliberate and make decisions about public policies.[84] His proposal included the dissolution of the state and of bureaucracies. The term demarchy was coined by Burnheim and is now sometimes used to refer to any political system in which sortition plays a central role.[85][86] While Burnheim preferred using only volunteers,[87] Christopher Frey uses the German term Lottokratie and recommends testing lottocracy in town councils. Lottocracy, according to Frey, will improve the direct involvement of each citizen and minimize the systematic errors caused by political parties in Europe.[88] Influenced by Burnheim, Marxist economists Paul Cockshott and Allin Cottrell propose that, to avoid the formation of a new social elite in a post-capitalist society, citizens' committees chosen by lot (or partially chosen by lot) should make major decisions.[89]
Michael Donovan proposes that the share of voters who do not turn out have their representatives chosen by sortition. For example, with 60% voter turnout, randomly chosen legislators would make up 40% of the overall parliament.[90] A number of proposals for an entire legislative body to be chosen by sortition have been made for the United States,[91] Canada,[92][93] the United Kingdom,[94][95] Denmark,[96] and France.[97][98]
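Donovan's scheme reduces to simple arithmetic: the abstaining share of the electorate maps directly to the allotted share of seats. A one-line sketch (the function name is mine, not the proposal's, and rounding policy is an assumption):

```python
def seats_by_sortition(total_seats, turnout):
    """Seats filled by lot under the turnout-based proposal: 60% turnout
    leaves 40% of seats for randomly chosen members (rounded to whole
    seats)."""
    return round(total_seats * (1 - turnout))
```

So a 150-seat chamber with 60% turnout would seat 60 randomly selected members alongside 90 elected ones.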
Étienne Chouard advocates strongly that those seeking power (elected officials) should not write the rules, making sortition the best choice for creating constitutions and other rules around the allocation of power within a democracy.[99] He and others propose replacing elections with bodies that use sortition to decide on key issues.[100][101][26]
Simon Threlkeld proposed a wide range of public officials be chosen by randomly sampled juries, rather than by politicians or popular election.[102]
https://en.wikipedia.org/wiki/Demarchy
Green politics, or ecopolitics, is a political ideology that aims to foster an ecologically sustainable society often, but not always, rooted in environmentalism, nonviolence, social justice and grassroots democracy.[1][2][3] It began taking shape in the Western world in the 1970s; since then, green parties have developed and established themselves in many countries around the globe and have achieved some electoral success.
The political term green was used initially in relation to die Grünen (German for "the Greens"),[4][5] a green party formed in the late 1970s.[6] The term political ecology is sometimes used in academic circles, but it has come to represent an interdisciplinary field of study, as the academic discipline offers wide-ranging studies integrating ecological social sciences with political economy in topics such as degradation and marginalization, environmental conflict, conservation and control, and environmental identities and social movements.[7][8]
Supporters of green politics share many ideas with the conservation, environmental, feminist and peace movements. In addition to democracy and ecological issues, green politics is concerned with civil liberties, social justice, nonviolence, sometimes variants of localism, and tends to support social progressivism.[9] Green party platforms are largely considered left on the political spectrum. The green ideology has connections with various other ecocentric political ideologies, including ecofeminism, eco-socialism, degrowth and green anarchism, but to what extent these can be seen as forms of green politics is a matter of debate.[10] As the left-wing green political philosophy developed, there also came into separate existence opposite movements on the right wing that include ecological components, such as eco-capitalism and green conservatism.
Adherents to green politics tend to consider it to be part of a higher worldview and not simply a political ideology. Green politics draws its ethical stance from a variety of sources, from the values of indigenous peoples to the ethics of Mahatma Gandhi, Baruch Spinoza, and Jakob von Uexküll.[11][12] These people influenced green thought in their advocacy of long-term seventh-generation foresight, and on the personal responsibility of every individual to make moral choices.
Unease about adverse consequences of human actions on nature predates the modern concept of environmentalism. Social commentators as far apart as ancient Rome and China complained of air, water and noise pollution.[13]
The philosophical roots of environmentalism can be traced back to Enlightenment thinkers such as Rousseau in France, and later the author and naturalist Thoreau in America.[14] Organised environmentalism began in late 19th-century Europe and the United States, as a reaction to the Industrial Revolution with its emphasis on unbridled economic expansion.[15]
"Green politics" first began as conservation and preservation movements, such as the Sierra Club, founded in San Francisco in 1892.
Left-green platforms of the form that make up the green parties today draw terminology from the science of ecology, and policy from environmentalism, deep ecology, feminism, pacifism, anarchism, libertarian socialism, libertarian possibilism,[16] social democracy, eco-socialism, and/or social ecology or green libertarianism. In the 1970s, as these movements grew in influence, green politics arose as a new philosophy which synthesized their goals. The Green Party political movement should not be confused with the fact that, according to a minority of authors, some far-right and fascist parties have on occasion tied nationalism into a sort of green politics that promotes environmentalism as a form of pride in the "motherland".[17][18][19]
In June 1970, a Dutch group called Kabouters won 5 of the 45 seats on the Amsterdam Gemeenteraad (City Council), as well as two seats each on councils in The Hague and Leeuwarden and one seat apiece in Arnhem, Alkmaar and Leiden. The Kabouters were an outgrowth of Provo's environmental White Plans and they proposed "Groene Plannen" ("Green Plans").[20]
The first political party to be created with its basis in environmental issues was the United Tasmania Group, founded in Australia in March 1972 to fight against deforestation and the creation of a dam that would damage Lake Pedder; whilst it only gained three percent in state elections, it inspired the creation of Green parties all over the world.[21] In May 1972, a meeting at Victoria University of Wellington launched the Values Party, the world's first countrywide green party to contest Parliamentary seats nationally.[22] In November 1972, Europe's first green party, PEOPLE in the UK, came into existence.[23]
The German Greens were not the first Green Party in Europe to have members elected nationally, but the impression was created that they had been, because they attracted the most media attention. They contested their first national election in the 1980 federal election. They started as a provisional coalition of civic groups and political campaigns which, together, felt their interests were not expressed by the conventional parties. After contesting the 1979 European elections they held a conference which identified the Four Pillars of the Green Party, which all groups in the original alliance could agree on as the basis of a common party platform, welding these groups together as a single party. This statement of principles has since been utilised by many Green Parties around the world. It was this party that first coined the term "Green" ("Grün" in German) and adopted the sunflower symbol. The term "Green" was coined by one of the founders of the German Green Party, Petra Kelly, after she visited Australia and saw the actions of the Builders Labourers Federation and their green ban actions.[24] In the 1983 federal election, the Greens won 27 seats in the Bundestag.
The first Canadian foray into green politics took place in the Maritimes when 11 independent candidates (including one in Montreal and one in Toronto) ran in the 1980 federal election under the banner of the Small Party. Inspired by Schumacher's Small is Beautiful, the Small Party candidates ran for the expressed purpose of putting forward an anti-nuclear platform in that election. It was not registered as an official party, but some participants in that effort went on to form the Green Party of Canada in 1983 (the Ontario Greens and British Columbia Greens were also formed that year). Green Party of Canada leader Elizabeth May was the instigator and one of the candidates of the Small Party, and she was eventually elected as a member of the Green Party in the 2011 Canadian federal election.[25]
In Finland, the Green League became the first European Green Party to form part of a state-level Cabinet in 1995. The German Greens followed, forming a government with the Social Democratic Party of Germany (the "Red-Green Alliance") from 1998 to 2005. In 2001, they reached an agreement to end reliance on nuclear power in Germany, and agreed to remain in coalition and support the German government of Chancellor Gerhard Schröder in the 2001 Afghan War. This put them at odds with many Greens worldwide, but demonstrated that they were capable of difficult political tradeoffs.
In Latvia, Indulis Emsis, leader of the Green Party and part of the Union of Greens and Farmers, an alliance of a Nordic agrarian party and the Green Party, was Prime Minister of Latvia for ten months in 2004, making him the first Green politician to lead a country in the history of the world. In 2015, Emsis' party colleague, Raimonds Vējonis, was elected President of Latvia by the Latvian parliament. Vējonis became the first green head of state worldwide.
In the German state of Baden-Württemberg, the Green Party became the leader of the coalition with the Social Democrats after finishing second in the 2011 Baden-Württemberg state election. In the following state election in 2016, the Green Party became the strongest party for the first time in a German Landtag.
In 2016, the former leader of the Austrian Greens (1997 to 2008), Alexander Van der Bellen, officially running as an independent, won the 2016 Austrian presidential election, making him the second green head of state worldwide and the first directly elected by popular vote. Van der Bellen placed second in the election's first round with 21.3% of the vote, the best result for the Austrian Greens in their history. He won the second-round run-off against the far-right Freedom Party's Norbert Hofer with 53.8% of the votes, making him the first president of Austria who was not backed by either the People's Party or the Social Democratic Party.
According to Derek Wall, a prominent British green proponent, there are four pillars that define green politics:[2]
In 1984, the Green Committees of Correspondence in the United States expanded the Four Pillars into Ten Key Values,[26] which further included:
In 2001, the Global Greens were organized as an international green movement. The Global Greens Charter identified six guiding principles:
Green economics focuses on the importance of the health of the biosphere to human well-being. Consequently, most Greens distrust conventional capitalism, as it tends to emphasize economic growth while ignoring ecological health; the "full cost" of economic growth often includes damage to the biosphere, which is unacceptable according to green politics. Green economics considers such growth to be "uneconomic growth": material increase that nonetheless lowers the overall quality of life. Green economics inherently takes a longer-term perspective than conventional economics, because such a loss in quality of life is often delayed. According to green economics, the present generation should not borrow from future generations, but rather attempt to achieve what Tim Jackson calls "prosperity without growth".
Some Greens[which?] refer to productivism, consumerism and scientism[citation needed] as "grey", as contrasted with "green", economic views. "Grey" approaches focus on behavioral changes.[27]
Therefore, adherents to green politics advocate economic policies designed to safeguard the environment. Greens want governments to stop subsidizing companies that waste resources or pollute the natural world, subsidies that Greens refer to as "dirty subsidies". Some currents of green politics place automobile and agribusiness subsidies in this category, as they may harm human health. Instead, Greens look to a green tax shift that is seen to encourage both producers and consumers to make ecologically friendly choices.
Many aspects of green economics could be considered anti-globalist. According to many left-wing greens, economic globalization is a threat to well-being that will replace natural environments and local cultures with a single trade economy, termed the global economic monoculture.[citation needed] This is not a universal policy of greens, as green liberals and green conservatives support a regulated free market economy with additional measures to advance sustainable development.
Since green economics emphasizes biospheric health and biodiversity, an issue outside the traditional left-right spectrum, different currents within green politics incorporate ideas from socialism and capitalism. Greens on the Left are often identified as eco-socialists, who merge ecology and environmentalism with socialism and Marxism and blame the capitalist system for environmental degradation, social injustice, inequality and conflict. Eco-capitalists, on the other hand, believe that the free market system, with some modification, is capable of addressing ecological problems. This belief is documented in the business experiences of eco-capitalists in the book The Gort Cloud, which describes the gort cloud as the green community that supports eco-friendly businesses.
Since the beginning, green politics has emphasized local, grassroots-level political activity and decision-making. According to its adherents, it is crucial that citizens play a direct role in the decisions that influence their lives and their environment. Therefore, green politics seeks to increase the role of deliberative democracy,[28] based on direct citizen involvement and consensus decision making, wherever it is feasible.
Green politics also encourages political action on the individual level, such as ethical consumerism, or buying things that are made according to environmentally ethical standards. Indeed, many green parties emphasize individual and grassroots action at the local and regional levels over electoral politics. Historically, green parties have grown at the local level, gradually gaining influence and spreading to regional or provincial politics, only entering the national arena when there is a strong network of local support.
In addition, many greens believe that governments should not levy taxes against strictly local production and trade. Some Greens advocate new ways of organizing authority to increase local control, includingurban secession,bioregional democracy, and co-operative/local stakeholder ownership.
Although Greens in the United States "call for an end to the 'War on Drugs'" and "for the decriminalization of victimless crimes", they also call for developing "a firm approach to law enforcement that directly addresses violent crime, including trafficking in hard drugs".[31]
In Europe, some green parties have tended to support the creation of a democratic federal Europe, while others have opposed European integration.[citation needed]
In the spirit of nonviolence, green politics opposes the war on terrorism and the curtailment of civil rights, focusing instead on nurturing deliberative democracy in war-torn regions and the construction of a civil society with an increased role for women.[citation needed]
In keeping with their commitment to the preservation of diversity, greens are often committed to the maintenance and protection of indigenous communities, languages, and traditions. An example of this is the Irish Green Party's commitment to the preservation of the Irish language.[32] Some of the green movement has focused on divesting from fossil fuels. Academics Stand Against Poverty states that "it is paradoxical for universities to remain invested in fossil fuel companies". Thomas Pogge says that the fossil fuel divestment movement can increase political pressure at events like the international climate change conference (COP).[33] Alex Epstein of Forbes notes that it is hypocritical to ask for divestment without a boycott and that a boycott would be more effective.[34] Institutions leading by example in the academic area include Stanford University, Syracuse University, Sterling College and more than 20 others. A number of cities, counties and religious institutions have also joined the movement to divest.[35][36]
Green politics mostly opposes nuclear fission power and the buildup of persistent organic pollutants, supporting adherence to the precautionary principle, by which technologies are rejected unless they can be proven not to cause significant harm to the health of living things or the biosphere.[37]
Green platforms generally favor tariffs on fossil fuels, restricting genetically modified organisms, and protections for ecoregions or communities.[citation needed]
The Green Party supports the phasing out of nuclear power, coal, and incineration of waste.[38] However, the Green Party in Finland has reversed its previous anti-nuclear stance, stating that addressing global warming in the next 20 years is impossible without expanding nuclear power.[39] These officials have proposed using nuclear-generated heat to heat buildings, replacing the use of coal and biomass to reach zero-emission outputs by 2040.
Green ideology emphasizes participatory democracy and the principle of "thinking globally, acting locally." As such, the ideal Green Party is thought to grow from the bottom up, from neighborhood to municipal to (eco-)regional to national levels. The goal is to rule by a consensus decision making process.
Strong local coalitions are considered a prerequisite to higher-level electoral breakthroughs. Historically, the growth of Green parties has been sparked by a single issue where Greens can appeal to ordinary citizens' concerns. In Germany, for example, the Greens' early opposition to nuclear power won them their first successes in the federal elections.[41]
There is a growing level of global cooperation between Green parties, and global gatherings of Green parties now take place. The first Planetary Meeting of Greens was held 30–31 May 1992, in Rio de Janeiro, immediately preceding the United Nations Conference on Environment and Development held there. More than 200 Greens from 28 nations attended. The first formal Global Greens Gathering took place in Canberra in 2001, with more than 800 Greens from 72 countries in attendance. The second Global Green Congress was held in São Paulo, Brazil, in May 2008, when 75 parties were represented.
Global Green networking dates back to 1990. Following the Planetary Meeting of Greens in Rio de Janeiro, a Global Green Steering Committee was created, consisting of two seats for each continent. In 1993 this Global Steering Committee met in Mexico City and authorized the creation of a Global Green Network, including a Global Green Calendar, Global Green Bulletin, and Global Green Directory. The Directory was issued in several editions in the following years. In 1996, 69 Green parties from around the world signed a common declaration opposing French nuclear testing in the South Pacific, the first statement by global Greens on a current issue. A second statement was issued in December 1997, concerning the Kyoto climate change treaty.[42]
At the 2001 Canberra Global Gathering, delegates for Green parties from 72 countries decided upon a Global Greens Charter which proposes six key principles. Over time, each Green party can discuss this and organize itself to approve it, some by using it in the local press, some by translating it for their website, some by incorporating it into their manifesto, some by incorporating it into their constitution.[43] This process is taking place gradually, with online dialogue enabling parties to say how far along they are with this process.[44]
The Gatherings also agree on organizational matters. The first Gathering voted unanimously to set up the Global Green Network (GGN). The GGN is composed of three representatives from each Green party. A companion organization was set up by the same resolution: Global Green Coordination (GGC). This is composed of three representatives from each Federation (Africa, Europe, The Americas, Asia/Pacific; see below). Discussion of the planned organization took place in several Green parties prior to the Canberra meeting.[45] The GGC communicates chiefly by email. Any agreement by it has to be by unanimity of its members. It may identify possible global campaigns to propose to Green parties worldwide. The GGC may endorse statements by individual Green parties. For example, it endorsed a statement by the US Green Party on the Israel–Palestine conflict.[46]
Thirdly, Global Green Gatherings are an opportunity for informal networking, from which joint campaigning may arise. One example is the campaign to protect the New Caledonian coral reef by getting it nominated for World Heritage status: a joint campaign by the New Caledonia Green Party, New Caledonian indigenous leaders, the French Green Party, and the Australian Greens.[47] Another example concerns Ingrid Betancourt, the leader of the Green Party in Colombia, the Green Oxygen Party (Partido Verde Oxigeno). Betancourt and the party's campaign manager, Claire Rojas, were kidnapped by a hard-line faction of FARC on 7 March 2002, while travelling in FARC-controlled territory. Betancourt had spoken at the Canberra Gathering, making many friends. As a result, Green parties all over the world have organized, pressing their governments to bring pressure to bear. For example, Green parties in African countries, Austria, Canada, Brazil, Peru, Mexico, France, Scotland, Sweden and other countries have launched campaigns calling for Betancourt's release. Bob Brown, the leader of the Australian Greens, went to Colombia, as did an envoy from the European Federation, Alain Lipietz, who issued a report.[48] The four Federations of Green parties issued a message to FARC.[49] Betancourt was rescued by the Colombian military in Operation Jaque in 2008.
Separately from the Global Green Gatherings, Global Green Meetings take place. For instance, one took place on the fringe of the World Summit on Sustainable Development in Johannesburg. Green parties attended from Australia, Taiwan, Korea, South Africa, Mauritius, Uganda, Cameroon, the Republic of Cyprus, Italy, France, Belgium, Germany, Finland, Sweden, Norway, the US, Mexico and Chile.
The Global Green Meeting discussed the situation of Green parties on the African continent; heard a report from Mike Feinstein, former mayor of Santa Monica, about setting up a website for the GGN; discussed procedures for the better working of the GGC; and decided on two topics on which the Global Greens could issue statements in the near future: Iraq and the 2003 WTO meeting in Cancun.
Affiliated members in Asia, Pacific and Oceania form the Asia-Pacific Green Network.
The member parties of the Global Greens are organised into four continental federations.
The European Federation of Green Parties formed itself as the European Green Party on 22 February 2004, in the run-up to European Parliament elections in June 2004, a further step in trans-national integration.
Green movements are calling for social change to reduce the misuse of natural resources. These include grassroots non-governmental organizations like Greenpeace as well as green parties.
|
https://en.wikipedia.org/wiki/Green_politics
|
Takis Fotopoulos (Greek: Τάκης Φωτόπουλος; born 14 October 1940) is a Greek political philosopher, economist and writer who founded the Inclusive Democracy movement, aiming at a synthesis of classical democracy with libertarian socialism[1] and the radical currents in the new social movements. He is an academic, and has written many books and over 900 articles. He is the editor of The International Journal of Inclusive Democracy (which succeeded Democracy & Nature) and the author of Towards An Inclusive Democracy (1997), in which the foundations of the Inclusive Democracy project were set.[2] His latest book is The New World Order in Action: Volume 1: Globalization, the Brexit Revolution and the "Left" – Towards a Democratic Community of Sovereign Nations (December 2016). Fotopoulos lives in London.[3]
Fotopoulos was born on the Greek island of Chios and his family moved to Athens soon afterwards. After graduating from the University of Athens with degrees in Economics and Political Science and in Law, he moved to London in 1966 for postgraduate study at the London School of Economics on a Varvaressos scholarship from Athens University. He was a student syndicalist and activist in Athens[a] and then a political activist in London, taking an active part in the 1968 student protests there, and in organisations of the revolutionary Greek Left during the struggle against the Greek military junta of 1967–1974. During this period, he was a member of the Greek group called Revolutionary Socialist Groups in London, which published the newspaper Μαμή ("Midwife", from the Marxian dictum, "violence is the midwife of revolution"), for which he wrote several articles.[4] Fotopoulos married Sia Mamareli (a former lawyer) in 1966; the couple have a son, Costas (born in 1974), who is a composer and pianist.
Fotopoulos was a Senior Lecturer in Economics at the Polytechnic of North London from 1969 to 1989, when he began editing the journal Society & Nature, later Democracy & Nature and subsequently the online International Journal of Inclusive Democracy.[2][3] He was also a columnist for Eleftherotypia,[5] the second-biggest newspaper in Greece.[6]
Fotopoulos developed the political project of Inclusive Democracy (ID) in 1997 (an exposition can be found in Towards An Inclusive Democracy). The first issue of Society & Nature declared that:
our ambition is to initiate an urgently needed dialogue on the crucial question of developing a new liberatory social project, at a moment in History when the Left has abandoned this traditional role.[7]
It specified that the new project should be seen as the outcome of a synthesis of the democratic, libertarian socialist and radical Green traditions.[8] Since then, a dialogue has followed in the pages of the journal, in which supporters of the autonomy project like Cornelius Castoriadis, social ecology supporters including its founder Murray Bookchin, and Green activists and academics like Steven Best have taken part.
The starting point of Fotopoulos' work is that the world faces a multi-dimensional crisis (economic, ecological, social, cultural and political) caused by the concentration of power in elites, as a result of the market economy, representative democracy and related forms of hierarchical structure. An inclusive democracy, which involves the equal distribution of power at all levels, is seen not as a utopia (in the negative sense of the word) or a "vision" but as perhaps the only way out of the present crisis, with trends towards its creation manifesting themselves today in many parts of the world. Fotopoulos is in favor of market abolitionism, although he would not identify himself as a market abolitionist as such, because he considers market abolition one aspect of an inclusive democracy, referring only to its economic democracy component. He maintains that "modern hierarchical society," which for him includes both the capitalist market economy and "socialist" statism, is highly oriented toward economic growth, which has glaring environmental contradictions. Fotopoulos proposes a model of economic democracy for a stateless, marketless and moneyless economy, but he considers the economic democracy component equally significant to the other components of ID, i.e. political or direct democracy, economic democracy, ecological democracy and democracy in the social realm. Fotopoulos' work has been critically assessed by important activists, theorists and scholars.[1][9][10][11][12][13][14]
|
https://en.wikipedia.org/wiki/Inclusive_Democracy
|
Open-source governance (also known as open governance and open politics) is a political philosophy which advocates the application of the philosophies of the open-source and open-content movements to democratic principles, enabling any interested citizen to add to the creation of policy, as with a wiki document. Legislation is democratically opened to the general citizenry, employing their collective wisdom to benefit the decision-making process and improve democracy.[1]
Theories on how to constrain, limit or enable this participation vary. Accordingly, there is no one dominant theory of how to go about authoring legislation with this approach. A wide array of projects and movements are working on building open-source governance systems.[2]
Many left-libertarian and radical centrist organizations around the globe have begun advocating open-source governance and its related political ideas as a reformist alternative to current governance systems. Often, these groups have their origins in decentralized structures such as the Internet and place particular importance on the need for anonymity to protect an individual's right to free speech in democratic systems. Opinions vary, however, not least because the principles behind open-source government are still very loosely defined.[3]
In practice, several applications have evolved and been used by democratic institutions.[4]
Some models are significantly more sophisticated than a plain wiki, incorporating semantic tags, levels of control or scoring to mediate disputes. However, this always risks empowering a clique of moderators beyond their trust position within the democratic entity – a parallel to the common wiki problem of official vandalism by persons entrusted with power by owners or publishers (so-called "sysop vandalism" or "administrative censorship").
Some advocates of these approaches, by analogy to software code, argue[citation needed] for a "central codebase" in the form of a set of policies that are maintained in a public registry and are infinitely reproducible. "Distributions" of this policy-base are released (periodically or dynamically) for use in localities, which can apply "patches" to customize them for their own use. Localities are also able to cease subscribing to the central policy-base and "fork" it or adopt someone else's policy-base. In effect, the government stems from emergent cooperation and self-correction among members of a community. As the policies are put into practice in a number of localities, problems and issues are identified and solved, and where appropriate communicated back to the core.
These goals were often cited during the Green Party of Canada's experiments with open-political-platform development.[citation needed] As one of over a hundred national Green party entities worldwide, with the ability to co-ordinate policy among provincial and municipal equivalents within Canada, it was in a good position to maintain just such a central repository of policy, despite being legally separate from those other entities.
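The codebase analogy above can be made concrete with a small sketch. This is purely illustrative: the class and policy names are hypothetical, and no real open-source-governance system is implied. It models a central policy registry whose "distributions" localities can subscribe to, "patch" locally, or "fork" into an independent base.

```python
# Hypothetical model of the "policy as codebase" analogy: a central
# registry, local patches, and forking. All names are illustrative.

class PolicyBase:
    def __init__(self, policies):
        # policies: mapping of policy name -> policy text
        self.policies = dict(policies)

    def release(self):
        # a "distribution": an independent copy localities can customize
        return dict(self.policies)


class Locality:
    def __init__(self, name, central):
        self.name = name
        self.central = central   # the subscribed central policy-base
        self.patches = {}        # local overrides ("patches")

    def patch(self, policy, text):
        self.patches[policy] = text

    def effective_policy(self):
        # latest central distribution with local patches applied on top
        dist = self.central.release()
        dist.update(self.patches)
        return dist

    def fork(self):
        # cease subscribing: freeze the current effective policy
        # as a new, independent policy-base
        return PolicyBase(self.effective_policy())


central = PolicyBase({"recycling": "weekly pickup", "zoning": "mixed use"})
town = Locality("Exampleville", central)
town.patch("recycling", "biweekly pickup")

print(town.effective_policy()["recycling"])  # biweekly pickup (local patch)
print(town.effective_policy()["zoning"])     # mixed use (tracks central)
```

Unpatched policies continue to track the central base, so central updates propagate automatically, while a fork freezes the locality's current rules and stops following further central changes.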
Open-source governance differs from previous open-government initiatives in its broader emphasis on collaborative processes. After all, "simply publishing snapshots of government information is not enough to make it open."
The "Imagine Halifax" (IH) project was designed to create a citizens' forum for elections in Halifax, Nova Scotia in fall 2004. Founded by Angela Bischoff, the widow of Tooker Gomberg, a notable advocate of combining direct action with open politics methods, IH brought a few dozen activists together to compile a platform (using live meetings, email and seedwiki follow-up). When it became clear that candidates could not all endorse all elements of the platform, it was turned into questions for candidates in the election. The best ideas from candidates were combined with the best from activists – the final scores reflected a combination of convergence and originality. In contrast to most such questionnaires, it was easier for candidates to excel by contributing original thought than by simply agreeing. One high scorer, Andrew Younger, had not been involved with the project originally but was elected and appeared on TV with project leader Martin Willison. The project had not only changed its original goal from a partisan platform to a citizen questionnaire, but it had recruited a previously uninvolved candidate to its cause during the election. A key output of this effort was a glossary of about 100 keywords relevant to municipal laws.
The 2004–05 Green Party of Canada Living Platform was a much more planned and designed effort at open politics. As it prepared for an electoral breakthrough in the 2004 federal election, the Green Party of Canada began to compile citizen, member and expert opinions in preparation of its platform. During the election, it gathered input even from Internet trolls, including supporters of other parties, with no major problems: anonymity was respected and, if they were within the terms of use, comments remained intact. Despite, or perhaps because of, its early success, it was derailed by Jim Harris, the party's leader, when he discovered that it was a threat to his status as a party boss.[citation needed] The Living Platform split off as another service entirely out of GPC control and eventually evolved into OpenPolitics.ca[11] and a service to promote wiki usage among citizens and political groups.
The Liberal Party of Canada also attempted a deep policy renewal effort in conjunction with its leadership race in 2006.[12][13] While candidates in that race, notably Carolyn Bennett, Stéphane Dion and Michael Ignatieff, all made efforts to facilitate web-threaded policy-driven conversations between supporters, all failed to create lateral relationships and thus also failed to contribute much to the policy renewal effort.
Numerous very different projects related to open-source governance collaborate under the umbrella of the Metagovernment project,[14] which uses the term "collaborative governance";[15] most of these projects are building platforms of open-source governance.
Future Melbourne is a wiki-based collaborative environment for developing Melbourne's 10-year plan. During public consultation periods, it enables the public to edit the plan with the same editing rights as city personnel and councilors.[21]
The New Zealand Police Act Review was a wiki used to solicit public commentary during the public consultation period of the acts review.[22]
At linux.conf.au on January 14, 2015, in Auckland, New Zealand, Australian Audrey Lobo-Pulo presented Evaluating Government Policies Using Open Source Models, advocating for government policy-related knowledge, data and analysis to be freely available to everyone to use, modify and distribute without restriction – "a parallel universe where public policy development and analysis is a dynamic, collaborative effort between government and its citizens". Lobo-Pulo reported that the motivation for her work was personal uncertainty about the nature and accuracy of the models, estimates and assumptions used to prepare policies released with the 2014 Australian Federal Government Budget, and whether and to what extent their real-world impact is assessed following implementation.[23] A white paper on "Evaluating Government Policies using Open Source Models" was released on September 10, 2015.[24]
The open-politics theory, a narrow application of open-source governance, combines aspects of the free software and open-content movements, promoting decision-making methods claimed to be more open, less antagonistic, and more capable of determining what is in the public interest with respect to public policy issues. It takes special care, for instance, to deal with equity differences, geographic constraints, defamation versus free political speech, accountability to persons affected by decisions, and the actual standing law and institutions of a jurisdiction. There is also far more focus on compiling actual positions taken by real entities than on developing theoretical "best" answers or "solutions". One example, DiscourseDB, simply lists articles pro and con a given position without organizing their argument or evidence in any way.
While some interpret it as an example of "open-source politics", open politics is not a top-down theory but a set of best practices from citizen journalism, participatory democracy and deliberative democracy, informed by e-democracy and netroots experiments, applying argumentation frameworks for issue-based argument as they evolved in academic and military use from the 1980s to the present. Some variants of it draw on the theory of scientific method and market methods, including prediction markets and anticipatory democracy.
Its advocates often engage in legal lobbying and advocacy to directly change laws in the way of the broader application of the technology, e.g. opposing political libel cases in Canada, fighting libel chill generally, and calling for clarification of privacy and human rights law, especially as they relate to citizen journalism. They are less focused on tools, although the Semantic MediaWiki and TikiWiki platforms seem to be generally favored above all others.
|
https://en.wikipedia.org/wiki/Open_source_governance
|
Participatory action research (PAR) is an approach to action research emphasizing participation and action by members of communities affected by that research. It seeks to understand the world by trying to change it, collaboratively and following reflection. PAR emphasizes collective inquiry and experimentation grounded in experience and social history. Within a PAR process, "communities of inquiry and action evolve and address questions and issues that are significant for those who participate as co-researchers".[1] PAR contrasts with mainstream research methods, which emphasize controlled experimentation, statistical analysis, and reproducibility of findings.
PAR practitioners make a concerted effort to integrate three basic aspects of their work: participation (life in society and democracy), action (engagement with experience and history), and research (soundness in thought and the growth of knowledge).[2]"Action unites, organically, with research" and collective processes of self-investigation.[3]The way each component is actually understood and the relative emphasis it receives varies nonetheless from one PAR theory and practice to another. This means that PAR is not a monolithic body of ideas and methods but rather a pluralistic orientation to knowledge making and social change.[4][5][6]
In the UK and North America, the work of Kurt Lewin and the Tavistock Institute in the 1940s has been influential. However, alternative traditions of PAR begin with processes that include more bottom-up organising and popular education than were envisaged by Lewin.
PAR has multiple progenitors and resists definition. It is a broad tradition of collective self-experimentation backed up by evidential reasoning, fact-finding and learning. All formulations of PAR have in common the idea that research and action must be done 'with' people and not 'on' or 'for' people.[1][2][7][8][9][10][11][12][13] It counters scientism by promoting the grounding of knowledge in human agency and social history (as in much of political economy). Inquiry based on PAR principles makes sense of the world through collective efforts to transform it, as opposed to simply observing and studying human behaviour and people's views about reality, in the hope that meaningful change will eventually emerge.
PAR draws on a wide range of influences, both among those with professional training and those who draw on their life experience and those of their ancestors. Many draw on the work of Paulo Freire,[14] new thinking on adult education research,[15] the Civil Rights Movement,[16] South Asian social movements such as the Bhumi Sena,[3][17] and key initiatives such as the Participatory Research Network created in 1978 and based in New Delhi. "It has benefited from an interdisciplinary development drawing its theoretical strength from adult education, sociology, political economy, community psychology, community development, feminist studies, critical psychology, organizational development and more".[18] The Colombian sociologist Orlando Fals Borda and others organized the first explicitly PAR conference in Cartagena, Colombia in 1977.[19] Based on his research with peasant groups in rural Boyaca and with other underserved groups, Fals Borda called for the 'community action' component to be incorporated into the research plans of traditionally trained researchers. His recommendations to researchers committed to the struggle for justice and greater democracy in all spheres, including the business of science, are useful for all researchers and echo the teaching from many schools of research.
PAR can be thought of as a guiding paradigm to influence and democratize the creation of knowledge and to ground it in real community needs and learning. Knowledge production controlled by elites can sometimes further oppress marginalized populations. PAR can be a way of overcoming the ineffectiveness and elitism of conventional schooling and science, and the negative effects of market forces and industry on the workplace, community life and sustainable livelihoods.[21][22]
Fundamentally, PAR pushes against the notion that experiential distance is required for objectivity in scientific and sociological research. Instead, PAR values embodied knowledge beyond "gated communities" of scholarship, bridging academia and social movements such that research and advocacy — often thought to be mutually exclusive — become intertwined.[23]Rather than be confined by academia, participatory settings are believed to have "social value," confronting epistemological gaps that may deepen ruts of inequality and injustice.[24]
These principles and the ongoing evolution of PAR have had a lasting legacy in fields ranging from problem solving in the workplace to community development and sustainable livelihoods, education, public health, feminist research, civic engagement and criminal justice. It is important to note that these contributions are subject to many tensions and debates on key issues such as the role ofclinical psychology,critical social thinkingand the pragmatic concerns oforganizational learningin PAR theory and practice. Labels used to define each approach (PAR, critical PAR, action research, psychosociology, sociotechnical analysis, etc.) reflect these tensions and point to major differences that may outweigh the similarities. While a common denominator, the combination of participation, action and research reflects the fragile unity of traditions whose diverse ideological and organizational contexts kept them separate and largely ignorant of one another for several decades.[21][22]
The following review focuses on traditions that incorporate the three pillars of PAR. Closely related approaches that overlap but do not bring the three components together are left out.Applied research, for instance, is not necessarily committed to participatory principles and may be initiated and controlled mostly by experts, with the implication that 'human subjects' are not invited to play a key role in science building and the framing of the research questions. As in mainstream science, this process "regards people as sources of information, as having bits of isolated knowledge, but they are neither expected nor apparently assumed able to analyze a given social reality".[15]PAR also differs from participatory inquiry or collaborative research, contributions to knowledge that may not involve direct engagement with transformative action and social history. PAR, in contrast, has evolved from the work of activists more concerned with empowering marginalized peoples than with generating academic knowledge for its own sake.[25][26][27]Lastly, given its commitment to the research process, PAR overlaps but is not synonymous withaction learning, action reflection learning (ARL),participatory developmentandcommunity development—recognized forms of problem solving and capacity building that may be carried out with no immediate concern for research and the advancement of knowledge.[28]
Action researchin the workplace took its initial inspiration from Lewin's work on organizational development (andDewey's emphasis on learning from experience). Lewin's seminal contribution involves a flexible, scientific approach to planned change that proceeds through a spiral of steps, each of which is composed of 'a circle of planning, action, and fact-finding about the result of the action', towards an organizational 'climate' of democratic leadership and responsible participation that promotes critical self-inquiry and collaborative work.[29]These steps inform Lewin's work with basic skill training groups,T-groupswhere community leaders and group facilitators use feedback, problem solving, role play and cognitive aids (lectures, handouts, film) to gain insights into themselves, others and groups with a view to 'unfreezing' and changing their mindsets, attitudes and behaviours.
Lewin's understanding of action-research coincides with key ideas and practices developed at the influentialTavistock Institute(created in 1947)) in the UK and National Training Laboratories (NTL) in the US. An important offshoot of Tavistock thinking and practise is thesociotechnical systemsperspective on workplace dynamics, guided by the idea that greater productivity or efficiency does not hinge on improved technology alone. Improvements in organizational life call instead for the interaction and 'joint optimization' of the social and technical components of workplace activity. In this perspective, the best match between the social and technical factors of organized work lies in principles of 'responsible group autonomy' andindustrial democracy, as opposed to deskilling and top-down bureaucracy guided byTaylor's scientific management and linear chain of command.[30][31][32][33][34][35][36]
NTL played a central role in the evolution of experiential learning and the application of behavioral science to improving organizations. Process consultation, team building, conflict management, and workplace group democracy and autonomy have become recurrent themes in the prolific body of literature and practice known asorganizational development(OD).[37][38]As with 'action science',[39][40][41][42]OD is a response to calls for planned change and 'rational social management' involving a normativehuman relations movementand approach to worklife in capital-dominated economies.[43]Its principal goal is to enhance an organization's performance and the worklife experience, with the assistance of a consultant, a change agent or catalyst that helps the sponsoring organization define and solve its own problems, introduce new forms of leadership[44]and change organizational culture and learning.[45][46]Diagnostic and capacity-building activities are informed, to varying degrees, by psychology, the behavioural sciences, organizational studies, or theories of leadership and social innovation.[47][48]Appreciative Inquiry(AI), for instance, is an offshoot of PAR based onpositive psychology.[49]Rigorous data gathering or fact-finding methods may be used to support the inquiry process and group thinking and planning. On the whole, however, science tends to be a means, not an end. Workplace and organizational learning interventions are first and foremost problem-based, action-oriented and client-centred.
Tavistock broke new ground in other ways, by meshing general medicine and psychiatry with Freudian and Jungian psychology and the social sciences to help the British army face various human resource problems. This gave rise to a field of scholarly research and professional intervention loosely known as psychosociology, particularly influential in France (CIRFIP). Several schools of thought and 'social clinical' practise belong to this tradition, all of which are critical of the experimental and expert mindset of social psychology.[50] Most formulations of psychosociology share with OD a commitment to the relative autonomy and active participation of individuals and groups coping with problems of self-realization and goal effectiveness within larger organizations and institutions. In addition to this humanistic and democratic agenda, psychosociology uses concepts of psychoanalytic inspiration to address interpersonal relations and the interplay between self and group. It acknowledges the role of the unconscious in social behaviour and collective representations and the inevitable expression of transference and countertransference—language and behaviour that redirect unspoken feelings and anxieties to other people or physical objects taking part in the action inquiry.[2]
The works of Balint,[51] Jaques,[52] and Bion[53] are turning points in the formative years of psychosociology. Commonly cited authors in France include Amado,[54] Barus-Michel,[55][56] Dubost,[57] Enriquez,[58] Lévy,[59][60] Gaujelac,[61] and Giust-Desprairies.[62] Different schools of thought and practice include Mendel's action research framed in a 'sociopsychoanalytic' perspective[63][64] and Dejours's psychodynamics of work, with its emphasis on work-induced suffering and defence mechanisms.[65] Lapassade and Lourau's 'socianalytic' interventions focus rather on institutions viewed as systems that dismantle and recompose norms and rules of social interaction over time, a perspective that builds on the principles of institutional analysis and psychotherapy.[66][67][68][69][70] Anzieu and Martin's[71] work on group psychoanalysis and theory of the collective 'skin-ego' is generally considered as the most faithful to the Freudian tradition. Key differences between these schools and the methods they use stem from the weight they assign to the analyst's expertise in making sense of group behaviour and views and also the social aspects of group behaviour and affect. Another issue is the extent to which the intervention is critical of broader institutional and social systems. The use of psychoanalytic concepts and the relative weight of effort dedicated to research, training and action also vary.[2]
PAR emerged in the postwar years as an important contribution to intervention and self-transformation within groups, organizations and communities. It has left a singular mark on the field of rural and community development, especially in the Global South. Tools and concepts for doing research with people, including "barefoot scientists" and grassroots "organic intellectuals" (see Gramsci), are now promoted and implemented by many international development agencies, researchers, consultants, civil society and local community organizations around the world. This has resulted in countless experiments in diagnostic assessment, scenario planning[72] and project evaluation in areas ranging from fisheries[73] and mining[74] to forestry,[75] plant breeding,[76] agriculture,[77] farming systems research and extension,[7][78][79] watershed management,[80] resource mapping,[10][81][82] environmental conflict and natural resource management,[2][83][84][85] land rights,[86] appropriate technology,[87][88] local economic development,[89][90] communication,[91][92] tourism,[93] leadership for sustainability,[94] biodiversity[95][96] and climate change.[97] This prolific literature includes the many insights and methodological creativity of participatory monitoring, participatory rural appraisal (PRA) and participatory learning and action (PLA)[98][99][100] and all action-oriented studies of local, indigenous or traditional knowledge.[101]
On the whole, PAR applications in these fields are committed to problem solving and adaptation to nature at the household or community level, using friendly methods of scientific thinking and experimentation adapted to support rural participation and sustainable livelihoods.
In education, PAR practitioners inspired by the ideas of critical pedagogy and adult education are firmly committed to the politics of emancipatory action formulated by Freire,[25] with a focus on dialogical reflection and action as means to overcome relations of domination and subordination between oppressors and the oppressed, colonizers and the colonized. The approach implies that "the silenced are not just incidental to the curiosity of the researcher but are the masters of inquiry into the underlying causes of the events in their world".[14] Although a researcher and a sociologist, Fals Borda also had a profound distrust of conventional academia and great confidence in popular knowledge, sentiments that have had a lasting impact on the history of PAR, particularly in the fields of development,[27] literacy,[102][103] counterhegemonic education as well as youth engagement on issues ranging from violence to criminality, racial or sexual discrimination, educational justice, healthcare and the environment.[104][105][106] When youth are included as research partners in the PAR process, it is referred to as Youth Participatory Action Research, or YPAR.[107]
Community-based participatory research and service-learning are more recent attempts to reconnect academic interests with education and community development.[108][109][110][111][112][113] The Global Alliance on Community-Engaged Research is a promising effort to "use knowledge and community-university partnership strategies for democratic social and environmental change and justice, particularly among the most vulnerable people and places of the world." It calls for the active involvement of community members and researchers in all phases of the action inquiry process, from defining relevant research questions and topics to designing and implementing the investigation, sharing the available resources, acknowledging community-based expertise, and making the results accessible and understandable to community members and the broader public. Service learning or education is a closely related endeavour designed to encourage students to actively apply knowledge and skills to local situations, in response to local needs and with the active involvement of community members.[114][115][116] Many online or printed guides now show how students and faculty can engage in community-based participatory research and meet academic standards at the same time.[117][118][119][120][121][122][123][124][125][126][127][128][129]
Collaborative research in education is community-based research where pre-university teachers are the community and scientific knowledge is built on top of teachers' own interpretation of their experience and reality, with or without immediate engagement in transformative action.[130][131][132][133][134][135]
PAR has made important inroads in the field of public health, in areas such as disaster relief, community-based rehabilitation, public health genomics, accident prevention, hospital care and drug prevention.[136][2]: ch 10, 15[137][138][139][140][141][142][143]
Because of its link to radical democratic struggles of the Civil Rights Movement and other social movements in South Asia and Latin America (see above), PAR is seen by some established elites as a threat to their authority. An international alliance of university-based participatory researchers, the ICPHR, omits the word "Action", preferring the less controversial term "participatory research".
Photovoice is one of the strategies used in PAR and is especially useful in the public health domain. In keeping with the purpose of PAR, which is to benefit communities, Photovoice enables this through the medium of photography. Photovoice's primary goal is to help community issues and problems reach policy makers.[144]
Participatory programs within the workplace involve employees at all levels of a workplace organization, from management to front-line staff, in the design and implementation of health and safety interventions.[145] Some research has shown that interventions are most successful when front-line employees have a fundamental role in designing them.[145] Success through participatory programs may be due to a number of factors, including better identification of potential barriers and facilitators, greater willingness to accept interventions than those imposed strictly from upper management, and enhanced buy-in to intervention design, resulting in greater sustainability through promotion and acceptance.[145][146] When designing an intervention, employees are able to factor lifestyle and other behavioral influences into solution activities that go beyond the immediate workplace.[146]
Feminist research and women's development theory[147] also contributed to rethinking the role of scholarship in challenging existing regimes of power, using qualitative and interpretive methods that emphasize subjectivity and self-inquiry rather than the quantitative approach of mainstream science.[140][148][149][150][151][152][153] As did most research in the 1970s and 1980s, PAR remained androcentric. In 1987, Patricia Maguire critiqued this male-centered participatory research, arguing that "rarely have feminist and participatory action researchers acknowledged each other with mutually important contributions to the journey."[154] Given that PAR aims to give equitable opportunity for diverse and marginalized voices to be heard, engaging gender minorities is an integral pillar in PAR's tenets.[155] In addition to gender minorities, PAR must consider points of intersecting oppressions individuals may experience.[155] After Maguire published Traveling Companions: Feminism, Teaching, And Action Research, PAR began to extend not only toward feminism but also toward intersectionality through Black Feminist Thought and Critical Race Theory (CRT).[155] Today, applying an intersectional feminist lens to PAR is crucial to recognize the social categories, such as race, class, ability, gender, and sexuality, that construct individuals' power relations and lived experiences.[156][157] PAR seeks to recognize the deeply complex condition of human living. Therefore, framing PAR's qualitative study methodologies through an intersectional feminist lens mobilizes all experiences – regardless of various social categories and oppressions – as legitimate sources of knowledge.[158]
Neurodiversity has contributed to scholarship by including neurodivergent populations within research, asking neurodivergent adults to get involved in discussing the various stages of the scientific methodology, which allows them to provide a better understanding of the research priorities within these communities.[159][160] This research can challenge ableist structures within academia that rest on general assumptions (e.g. that neurodivergence is inferior to neurotypicality),[161] promote neurodivergent individuals as active collaborators, thus involving them in knowledge generation,[162] and ensure that theories of human cognition account for strengths and weaknesses, together with lived experiences.[161][163] Additional benefits include co-production and mutuality practices in research, the promotion of wider epistemic justice, equality in knowledge production, greater relevance of research to lived experience, and greater translational potential of research findings.[164][165][166]
Novel approaches to PAR in the public sphere help scale up the engaged inquiry process beyond small group dynamics. Touraine and others thus propose a 'sociology of intervention' involving the creation of artificial spaces for movement activists and non-activists to debate issues of public concern.[167][168][169] Citizen science is another recent move to expand the scope of PAR, to include broader 'communities of interest' and citizens committed to enhancing knowledge in particular fields. In this approach to collaborative inquiry, research is actively assisted by volunteers who form an active public or network of contributing individuals.[170][171] Efforts to promote public participation in the works of science owe a lot to the revolution in information and communications technology (ICT). Web 2.0 applications support virtual community interactivity and the development of user-driven content and social media, without restricted access or controlled implementation. They extend principles of open-source governance to democratic institutions, allowing citizens to actively engage in wiki-based processes of virtual journalism, public debate and policy development.[172] Although few and far between, experiments in open politics can thus make use of ICT and the mechanics of e-democracy to facilitate communications on a large scale, towards achieving decisions that best serve the public interest.
In the same spirit, discursive or deliberative democracy calls for public discussion, transparency and pluralism in political decision-making, lawmaking and institutional life.[173][174][175][176] Fact-finding and the outputs of science are made accessible to participants and may be subject to extensive media coverage, scientific peer review, deliberative opinion polling and adversarial presentations of competing arguments and predictive claims.[177] The methodology of the citizens' jury is interesting in this regard. It involves people selected at random from a local or national population who are provided opportunities to question 'witnesses' and collectively form a 'judgment' on the issue at hand.[178]
ICTs, open politics and deliberative democracy usher in new strategies to engage governments, scientists, civil society organizations and interested citizens in policy-related discussions of science and technology. These trends represent an invitation to explore novel ways of doing PAR on a broader scale.[2]
Compared to other fields, PAR frameworks in criminal justice are relatively new. But growing support for community-based alternatives to the criminal justice system has sparked interest in PAR in criminological settings.[24] Participatory action research in criminal justice includes system-impacted people themselves in research and advocacy conducted by academics or other experts. Because system-impacted people hold experiential knowledge of the conditions and practices of the justice system, they may be able to more effectively expose and articulate problems with that system.[179] Many people who have been incarcerated are also able to share with researchers facets of the justice system that are invisible to the outside world or are difficult to understand without first-hand experience. Proponents of PAR in criminal justice believe that including those most impacted by the justice system in research is crucial because the presence of these individuals precludes the possibility of misunderstanding or compounding the harms of the justice system in that research.[23]
Participants in PAR may also hold knowledge or education in more traditional academic fields, like law, policy or government that can inform criminological research. But PAR in criminology bridges the epistemological gap between knowledge gained through academia and through lived experience, connecting research to justice reform.[23][24]
Given the often delicate power balances between researchers and participants in PAR, there have been calls for a code of ethics to guide the relationship between researchers and participants in a variety of PAR fields. Norms in research ethics involving humans include respect for the autonomy of individuals and groups to deliberate about a decision and act on it. This principle is usually expressed through the free, informed and ongoing consent of those participating in research (or those representing them in the case of persons lacking the capacity to decide). Another mainstream principle is the welfare of participants, who should not be exposed to any unfavourable balance of benefits and risks through participation in research aimed at the advancement of knowledge, especially risks that are serious and probable. Since privacy is a factor that contributes to people's welfare, confidentiality obtained through the collection and use of data that are anonymous (e.g. survey data) or anonymized tends to be the norm. Finally, the principle of justice—equal treatment and concern for fairness and equity—calls for measures of appropriate inclusion and mechanisms to address conflicts of interests.
While the choice of appropriate norms of ethical conduct is rarely an either/or question, PAR implies a different understanding of what consent, welfare and justice entail. For one thing, the people involved are not mere 'subjects' or 'participants'. They act instead as key partners in an inquiry process that may take place outside the walls of academic or corporate science. As Canada's Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans suggests, PAR requires that the terms and conditions of the collaborative process be set out in a research agreement or protocol based on mutual understanding of the project goals and objectives between the parties, subject to preliminary discussions and negotiations.[180] Unlike individual consent forms, these terms of reference (ToR) may acknowledge collective rights, interests and mutual obligations. While they are legalistic in their genesis, they are usually based on interpersonal relationships and a history of trust rather than the language of legal forms and contracts.
Another implication of PAR ethics is that partners must protect themselves and each other against potential risks, by mitigating the negative consequences of their collaborative work and pursuing the welfare of all parties concerned.[181]This does not preclude battles against dominant interests. Given their commitment to social justice and transformative action, some PAR projects may be critical of existing social structures and struggle against the policies and interests of individuals, groups and institutions accountable for their actions, creating circumstances of danger. Public-facing action can also be dangerous for some marginalized populations, such as survivors of domestic violence.[24]
In some fields of PAR it is believed that an ethics of participation should go beyond avoidance of harm.[24] For participatory settings that engage with marginalized or oppressed populations, including criminal justice, PAR can be mobilized to actively support individuals. An "ethic of empowerment" encourages researchers to consider participants as standing on equal epistemological footing, with equal say in research decisions.[24] Within this ethical framework, PAR doesn't just effect change in the world but also directly improves the lives of the research participants. An "ethic of empowerment" may require a systemic shift in the way researchers view and talk about oppressed communities — often as degenerate or helpless.[24] If not practiced in a way that actively considers the knowledge of participants, PAR can become manipulative. Participatory settings in which participants are tokenized or serve only as sources of information without joint power in decision-making processes can exploit rather than empower.
By definition, PAR is always a step into the unknown, raising new questions and creating new risks over time. Given its emergent properties and responsiveness to social context and needs, PAR cannot limit discussions and decisions about ethics to the design and proposal phase. Norms of ethical conduct and their implications may have to be revisited as the project unfolds.[2]: Chapter 8 This has implications, both in resources and practice, for the ability to subject the research to true ethical oversight in the way that traditional research has come to be regulated.
PAR offers a long history of experimentation with evidence-based and people-based inquiry, a groundbreaking alternative to mainstream positive science. As with positivism, the approach creates many challenges[182] as well as debates on what counts as participation, action and research. Differences in theoretical commitments (Lewinian, Habermasian, Freirean, psychoanalytic, feminist, etc.) and methodological inclinations (quantitative, qualitative, mixed) are numerous and profound.[2][183][184][185][186][187][188][189] This is not necessarily a problem, given the pluralistic value system built into PAR. Ways to better answer questions pertaining to PAR's relationship with science and social history are nonetheless key to its future.
One critical question concerns the problem-solving orientation of engaged inquiry—the rational means-ends focus of most PAR experiments as they affect organizational performance or material livelihoods, for instance. In the clinical perspective of French psychosociology, a pragmatic orientation to inquiry neglects forms of understanding and consciousness that are not strictly instrumental and rational.[190] PAR must pay equal attention to the interconnections of self-awareness, the unconscious and life in society.
Another issue, more widely debated, is scale—how to address broad-based systems of power and issues of complexity, especially those of another development on a global scale.[2][191][192] How can PAR develop a macro-orientation to democratic dialogue[193] and meet challenges of the 21st century, by joining movements to support justice and solidarity on both local and global scales? By keeping things closely tied to local group dynamics, PAR runs the risk of substituting small-scale participation for genuine democracy and fails to develop strategies for social transformation on all levels.[194] Given its political implications,[98] community-based action research and its consensus ethos have been known to fall prey to powerful stakeholders and serve as Trojan horses to bring global and environmental restructuring processes directly to local settings, bypassing legitimate institutional buffers and obscuring diverging interests and the exercise of power during the process. Cooptation can lead to highly manipulated outcomes.[195][196][197][198] Against this criticism, others argue that, given the right circumstances, it is possible to build institutional arrangements for joint learning and action across regional and national borders that can have impacts on citizen action, national policies and global discourses.[199][200]
The role of science and scholarship in PAR is another source of difference.[201] In the Lewinian tradition, "there is nothing so practical as a good theory".[202][203] Accordingly, the scientific logic of developing theory, forming and testing hypotheses, gathering measurable data and interpreting the results plays a central role. While more clinically oriented, psychosociology in France also emphasizes the distinctive role of formal research and academic work, beyond problem solving in specific contexts.[204] Many PAR practitioners critical of mainstream science and its overemphasis on quantitative data also point out that research based on qualitative methods may be theoretically informed and rigorous in its own way.[124] In other traditions, however, PAR keeps great distance from both academic and corporate science. Given their emphasis on pluralism and living knowledge, many practitioners of grassroots inquiry are critical of grand theory and advanced methods for collaborative inquiry, to the point of abandoning the word "research" altogether, as in participatory action learning. Others equate research with any involvement in reflexive practice aimed at assessing problems and evaluating project or program results against group expectations. As a result, inquiry methods tend to be soft and theory remains absent or underdeveloped. Practical and theoretical efforts to overcome this ambivalence towards scholarly activity are nonetheless emerging.[1][2]
https://en.wikipedia.org/wiki/Participatory_action_research
Participatory culture, an opposing concept to consumer culture, is a culture in which private individuals (the public) do not act as consumers only, but also as contributors or producers (prosumers).[1] The term is most often applied to the production or creation of some type of published media.
Recent advances in technologies (mostly personal computers and the Internet) have enabled private persons to create and publish such media, usually through the Internet.[2] Since technology now enables new forms of expression and engagement in public discourse, participatory culture not only supports individual creation but also informal relationships that pair novices with experts.[3] This new culture, as it relates to the Internet, has been described as Web 2.0.[4] In participatory culture, "young people creatively respond to a plethora of electronic signals and cultural commodities in ways that surprise their makers, finding meanings and identities never meant to be there and defying simple nostrums that bewail the manipulation or passivity of 'consumers'."[2]
The increasing access to the Internet has come to play an integral part in the expansion of participatory culture because it increasingly enables people to work collaboratively, generate and disseminate news, ideas, and creative works, and connect with people who share similar goals and interests (see affinity groups). The potential of participatory culture for civic engagement and creative expression has been investigated by media scholar Henry Jenkins. In 2009, Jenkins and co-authors Ravi Purushotma, Katie Clinton, Margaret Weigel and Alice Robison authored a white paper entitled Confronting the Challenges of Participatory Culture: Media Education for the 21st Century.[5] This paper describes a participatory culture as one:
Participatory culture has been around longer than the Internet. The emergence of the Amateur Press Association in the middle of the 19th century is an example of historical participatory culture; at that time, young people were hand typing and printing their own publications. These publications were mailed throughout a network of people and resemble what are now called social networks. The evolution from zines, radio shows, group projects, and gossip to blogs, podcasts, wikis, and social networks has impacted society greatly. With web services such as eBay, Blogger, Wikipedia, Photobucket, Facebook, and YouTube, it is no wonder that culture has become more participatory. The implications of the gradual shift from production to produsage are profound and will affect the very core of culture, economy, society, and democracy.[6]
Forms of participatory culture can be manifested in affiliations, expressions, collaborative problem solving, and circulations. Affiliations include both formal and informal memberships in online communities such as discussion boards or social media. Expression refers to the types of media that could be created. This may manifest as memes, fanfiction, or other forms of mash-ups. When individuals and groups work together on a particular form of media or media product, like a wiki, then they engage in collaborative problem solving. Finally, circulation refers to the means through which the communication may be spread. This could include blogs, vlogs, podcasts, and even some forms of social media.[3] Some of the most popular apps that involve participation include Facebook, Snapchat, Instagram, Tinder, LinkedIn, Twitter, and TikTok.
Fanfiction creators were one of the first communities to show that the public could participate in pop culture,[7] changing, growing, and altering TV show storylines during their run times, as well as strengthening a series’ popularity after the last episode aired. Some fan fiction creators develop theories and speculation, while others create ‘new’ material outside of the confines of the original content. Fans expand on the original story, putting the characters through different adventures, romances, and sexualities. These communities are composed of audiences and readers from around the world, of different ages and backgrounds, coming together to develop theories and possibilities about current TV shows, books and films, or to expand and continue the stories of TV shows, books, and movies that have come to a close.[8]
As technology continues to enable new avenues for communication, collaboration, and circulation of ideas, it has also given rise to new opportunities for consumers to create their own content. Barriers like time and money are beginning to become less significant to large groups of consumers. For example, the creation of movies once required large amounts of expensive equipment, but now movie clips can be made with equipment that is affordable to a growing number of people. The ease with which consumers create new material has also grown. Extensive knowledge of computer programming is no longer necessary to create content on the internet. Media sharing over the Internet acts as a platform to invite users to participate and create communities that share similar interests through duplicated sources, original content, and re-purposed material.
People no longer blindly absorb and consume what large media corporations distribute.[9] Today a great many people are consumers who also produce their own content ("prosumers").[10] Participatory culture attracts so much interest partly because of the sheer number of social media platforms available to participate in and contribute to. The leading platforms in the social media industry[11] are the reason people have such an opportunity to participate in media creation. Today, millions of people across the world have the ability to post, quote, film, or create whatever they want.[12] With the aid of these platforms, the ability to reach a global audience has never been easier.[13]
Social media have become a huge factor in politics and civics, not just in elections but in gaining funds, spreading information, building legislation and petition support, and other political activities.[14] Social media make it easier for the public to make an impact and participate in politics. One study showed a connection between Facebook messages among friends and political expression, voting, and information seeking in the 2012 United States presidential election.[15] Social media mobilize people easily and effectively, and do the same for the circulation of information. This can accomplish political goals such as gaining support for legislation, but social media can also greatly influence elections. The impact social media can have on elections was shown in the 2016 United States presidential election, when hundreds of fake news stories about candidates were shared on Facebook tens of millions of times. Some people do not recognize fake news and vote based on false information.[16]
Hardware has increased the individual's ability to submit content to the internet so that it may reach a wide audience, and numerous internet sites have also increased access. Websites like Flickr, Wikipedia, and Facebook encourage the submission of content to the Internet. They increase the ease with which a user may post content by allowing them to submit information even if they only have a web browser. The need for additional software is eliminated. These websites also serve to create online communities for the production of content. These communities and their web services have been labelled as part of Web 2.0.[17]
The relationship between Web 2.0 tools and participatory culture is more than just material, however. As the mindsets and skillsets of participatory practices have been increasingly taken up, people are increasingly likely to exploit new tools and technology in 2.0 ways. One example is the use of cellphone technology to engage "smart mobs" for political change worldwide. In countries where cellphone usage exceeds use of any other form of digital technology, passing information via mobile phone has helped bring about significant political and social change. Notable examples include the so-called "Orange Revolution" in Ukraine,[18] the overthrow of Philippine President Joseph Estrada,[19] and regular political protests worldwide.[20]
Participatory media allow people to create, connect, and share their content or build friendships in several ways. YouTube encourages people to create and upload their content to share it around the world, creating an environment for content creators new and old. Discord allows people, primarily gamers, to connect with each other around the world and acts as a live chat room. Twitch is a streaming media website where content creators can "go live" for viewers all around the world. Often, these participatory sites host community events, such as charity events or memorial streams for someone important to the Twitch community.
The smartphone is one example that combines the elements of interactivity, identity, and mobility. The mobility of the smartphone demonstrates that media is no longer bound by time and space and can be used in any context. Technology continues to progress in this direction as it becomes more user-driven and less restricted to schedules and locations: for example, the progression of movies from theaters to private home viewing, to now the smartphone that can be watched anytime and anywhere. The smartphone also enhances the participatory culture by increased levels of interactivity. Instead of merely watching, users are actively involved in making decisions, navigating pages, contributing their own content and choosing what links to follow. This goes beyond the "keyboard" level of interactivity, where a person presses a key and the expected letter appears, and becomes rather a dynamic activity with continually new options and changing settings, without a set formula to follow. The consumer role shifts from a passive receiver to an active contributor. The smartphone epitomizes this by the endless choices and ways to get personally involved with multiple media at the same time, in a nonlinear way.[21]
The smartphone also contributes to participatory culture because of how it changes the perception of identity. A user can hide behind an avatar, false profile, or simply an idealized self when interacting with others online. There is no accountability to be who one says one is. The ability to slide in and out of roles changes the effect of media on culture, and on the users themselves.[22]Now people are active participants in media and culture not only as themselves, but also as their imagined selves.
In Vincent Miller's Understanding Digital Culture, he makes the argument that the lines between producer and consumer have become blurry. Producers are those who create content and cultural objects, and consumers are the audience or purchasers of those objects. Referring to Axel Bruns' idea of the "prosumer," Miller argues, "With the advent of convergent new media and the plethora of choice in sources for information, as well as the increased capacity for individuals to produce content themselves, this shift away from producer hegemony to audience or consumer power would seem to have accelerated, thus eroding the producer-consumer distinction" (p. 87). The "prosumer" is the end result of an increasingly used strategy that encourages feedback between producers and consumers (prosumers), "which allows for more consumer influence over the production of goods."[23]
Bruns (2008) refers to produsage, therefore, as a community collaboration that participants can access in order to share "content, contributions, and tasks throughout the networked community" (p. 14). This is similar to how Wikipedia allows users to write, edit, and ultimately use content. Produsers are active participants who are empowered by their participation as network builders. Bruns (2008) describes the empowerment of users as different from the typical "top-down mediated spaces of the traditional mediaspheres" (p. 14). Produsage occurs when the users are the producers and vice versa, essentially eliminating the need for these "top-down" interventions. The collaboration of each participant is based on a principle of inclusivity; each member contributes valuable information for another user to use, add to, or change. In a community of learners, collaboration through produsage can provide access to content for every participant, not just those with some kind of authority. Every participant has authority.
This leads to Bruns' (2008) idea of "equipotentiality: the assumption that while the skills and abilities of all the participants in the produsage project are not equal, they have an equal ability to make a worthy contribution to the project" (p. 25). Because there are no more distinctions between producers and consumers, every participant has an equal chance to participate meaningfully in produsage.[24]
In July 2020, an academic description reported on the nature and rise of the "robot prosumer", derived from modern-day technology and related participatory culture, which, in turn, was substantially predicted earlier by Frederik Pohl and other science fiction writers.[25][26][27]
An important contribution has been made by media theorist Mirko Tobias Schäfer, who distinguishes explicit and implicit participation (2011). Explicit participation describes the conscious and active engagement of users in fan communities or of developers in creative processes. Implicit participation is more subtle and often unfolds without the user's knowledge. In her book The Culture of Connectivity, José van Dijck emphasizes the importance of recognizing this distinction in order to thoroughly analyze user agency as a techno-cultural construct (2013).
Van Dijck (2013) outlines the various ways in which explicit participation can be conceptualized. The first is the statistical conception of user demographics. Websites may “publish facts and figures about their user intensity (e.g., unique monthly users), their national and global user diversity, and relevant demographic facts” (p. 33). For instance, Facebook publishes user demographic data such as gender, age, income, education level and more.[28]Explicit participation can also take place on the research end, where an experimental subject interacts with a platform for research purposes. Van Dijck (2013) references Leon et al. (2011), giving an example of an experimental study where “a number of users may be selected to perform tasks so researchers can observe their ability to control privacy settings” (p. 33). Lastly, explicit participation may inform ethnographic data through observational studies, or qualitative interview-based research concerning user habits.[29]
Implicit participation is achieved by implementing user activities into user interfaces and back-end design. Schäfer argues that the success of popular Web 2.0 and social media applications thrives on implicit participation. The notion of implicit participation expands theories of participatory culture as formulated by Henry Jenkins and Axel Bruns, who both focus most prominently on explicit participation (p. 44). Considering implicit participation therefore allows for a more accurate analysis of the role of technology in co-shaping user interactions and user-generated content (pp. 51–52).[30]
The term "textual poachers" was originated by de Certeau and has been popularized by Jenkins.[31]Jenkins uses this term to describe how some fans go through content like their favourite movie and engage with the parts that they are interested in, unlike audiences who watch the show more passively and move on to the next thing.[32]Jenkins takes a stand against the stereotypical portrayal of fans as obsessive nerds who are out of touch with reality. He demonstrates that fans are pro-active constructors of an alternative culture using elements "poached" and reworked from the mass media.[32]Specifically, fans use what they have poached to become producers themselves, creating new cultural materials in a variety of analytical and creative formats from "meta" essays to fanfiction, comics, music, and more.[33]In this way, fans become active participants in the construction and circulation of textual meanings. Fans usually interact with each other through fan groups, fanzines, social events, and even in the case of Trekkers (fans of Star Trek) interact with each other through annual conferences.[34]
In a participatory culture, fans are actively involved in production, and may also influence producer decisions within the medium. Fans do not only interact with each other but also try to interact with media producers to express their opinions,[34]for example about what the ending between two characters in a TV show should be. Therefore, fans are both readers and producers of culture. Participatory culture transforms the media consumption experience into the production of new texts, in fact, the production of new cultures and new communities. The result is an autonomous, self-sufficient fan culture.[35]
Participatory culture lacks representation of women, which has created a misrepresentation of women online. This, in turn, makes it difficult for women to represent themselves with authenticity and deters female participation in participatory culture. The content viewed on the internet in participatory situations is biased because of the overrepresentation of male-generated information, and the ideologies created by the male presence in media create a submissive role for female users, as they unconsciously accept patriarchal ideologies as reality. With males in the dominant positions, "media industries [engage]… existing technologies to break up and reformulate media texts for reasons of their own".[36]
Design intent from the male perspective is a main issue deterring accurate female representation. Females active in participatory culture are at a disadvantage because the content they are viewing is not designed with their participation in mind. Instead of producing male-biased content, "feminist interaction design should seek to bring about political emancipation… it should also force designers to question their own position to assert what an "improved society" is and how to achieve it".[37]The current interactions and interfaces of participatory culture fail to "challenge the hegemonic dominance, legitimacy and appropriateness of positivist epistemologies; theorize from the margins; and problematize gender".[38]Men typically are more involved in the technology industry, as "relatively fewer women work in the industry that designs technology now... only in the areas of HCI/usability is the gender balance of workforce anything like equal".[38]Since technology and design are at the crux of the creation of participatory culture, "much can – and should – be said about who does what, and it is fair to raise the question of whether an industry of men can design for women".[38]"Although the members of the group are not directly teaching or perhaps even indicating the object of… representation, their activities inevitably lead to the exposure of the other individual to that object and this leads to that individual acquiring the same narrow… representations as the other group members have. Social learning of this type (another, similar process is known aslocal enhancement) has been shown to lead to relatively stable social transmission of behavior over time".[36]Local enhancement is the driving mechanism that influences the audience to embody and recreate the messages produced in media.
Statistically, men are actively engaging in the production of these problematic representations, whereas women are not contributing to the portrayal of women's experiences because of the local enhancement that takes place on the web. There is no exact figure for the percentage of female contributors; numerous surveys in 2011 fluctuated slightly in their numbers, but none seemed to surpass 15 percent.[39]This shows a large gender disparity among online users contributing to Wikipedia content. Bias arises as the content presented in Wikipedia seems to be more male oriented.[40]
Participatory culture has been hailed by some as a way to reform communication and enhance the quality ofmedia. According to media scholar Henry Jenkins, one result of the emergence of participatory cultures is an increase in the number of media resources available, giving rise to increased competition between media outlets. Producers of media are forced to pay more attention to the needs of consumers who can turn to other sources for information.[41]
Howard Rheingoldand others have argued that the emergence of participatory cultures will enable deep social change. Until as recently as the end of the 20th century, Rheingold argues, a handful of generally privileged, generally wealthy people controlled nearly all forms of mass communication—newspapers, television, magazines, books and encyclopedias. Today, however, tools for media production and dissemination are readily available and allow for what Rheingold labels "participatory media."[42]
As participation becomes easier, the diversity of voices that can be heard also increases. At one time only a few mass media giants controlled most of the information that flowed into the homes of the public, but with the advance of technology even a single person has the ability to spread information around the world. The diversification of media has benefits because, where the control of media becomes concentrated, those in control have the ability to influence the opinions and information that flow into the public domain.[43]Media concentration provides opportunity for corruption, but as information becomes accessible from more and more places it becomes increasingly difficult to bend the flow of information to the will of an agenda. Participatory culture is also seen as a more democratic form of communication because it stimulates the audience to take an active part, as they can help shape the flow of ideas across media formats.[43]The democratic tendency lent to communication by participatory culture allows new models of production that are not based on a hierarchical standard. In the face of increased participation, the traditional hierarchies will not disappear, but "Community, collaboration, and self-organization" can become the foundation of corporations as powerful alternatives.[44]Although there may be no real hierarchy evident in many collaborative websites, their ability to form large pools of collective intelligence is not compromised.
Participatory culture civics organizations mobilize participatory cultures towards political action. They build on participatory cultures and organize such communities toward civic and political goals.[45]Examples include the Harry Potter Alliance, Invisible Children, Inc., and Nerdfighters, which each leverage shared cultural interests to connect and organize members towards explicit political goals. These groups run campaigns by informing, connecting, and eventually organizing their members through new media platforms. Neta Kligler-Vilenchik identified three mechanisms used to translate cultural interests into political outcomes:[46]
Social and participatory media allow for—and, indeed, call for—a shift in how we approach teaching and learning in the classroom. The increased availability of the Internet in classrooms allows for greater access to information. For example, it is no longer necessary for relevant knowledge to be contained in some combination of the teacher and textbooks; today, knowledge can be more de-centralized and made available for all learners to access. The teacher, then, can help facilitate efficient and effective means of accessing, interpreting, and making use of that knowledge.[47]
Jenkins believes that participatory culture can play a role in the education of young people as a new form of implicit curriculum.[48]He finds a growing body of academic research showing the potential benefits of participatory cultures, both formal and informal, for the education of young people. These include peer-to-peer learning opportunities, awareness of intellectual property and multiculturalism, cultural expression, the development of skills valued in the modern workplace, and a more empowered conception of citizenship.[48]
Rachael Sullivan discusses how some online platforms can be a challenge. In her book review, she focuses on Reddit and content that can be offensive and inappropriate.[49]Memes, GIFs, and other content that users create can be negative and used primarily for trolling. Reddit allows any user in the community to post without restrictions or barriers, regardless of whether the content is positive or negative. This creates the potential for backlash against Reddit, as it does not restrict content that could be considered offensive or pejorative, which can reflect negatively on the community as a whole. On the other hand, Reddit would likely face similar backlash for restricting what others consider their right to free speech, although free-speech protections pertain only to government restrictions, not private companies.
YouTube has been the starting point for many up-and-coming pop stars; both Justin Bieber and One Direction can credit their presence on YouTube as the catalyst for their respective careers. Other users have gained fame or notoriety by expounding on how simple it can be to become a popular YouTuber. One such example is the user Charlie, whose library consists solely of videos like "How to Get Featured on YouTube" and nothing else. YouTube offers the younger generation the opportunity to test out their content while gaining feedback via likes, dislikes, and comments to find out where they need to improve.
All people want to be a consumer in some situations and an active contributor in others. Being a consumer or an active contributor is not an attribute of a person, but of a context.[50]The important criterion to take into account is personal meaningfulness. Participatory cultures empower humans to be active contributors in personally meaningful activities. The drawback of such cultures is that they may force humans to cope with the burden of being an active contributor in personally irrelevant activities.
This trade-off can be illustrated with the potential and drawbacks of "Do-It-Yourself Societies": the trend began with self-service restaurants and self-service gas stations a few decades ago and has greatly accelerated over the last ten years. Through modern tools (including electronic commerce supported by the Web), humans are empowered to do many tasks themselves that were previously done by skilled domain workers serving as agents and intermediaries. While this shift provides power, freedom, and control to customers (e.g., banking can be done at any time of day with ATMs, and from any location with the Web), it has also led to some less desirable consequences. People may consider some of these tasks not very meaningful personally and would therefore be more than content with a consumer role. Aside from simple tasks that require little or no learning effort, customers lack the experience the professionals have acquired and maintained through daily use of systems, and the broad background knowledge needed to do these tasks efficiently and effectively. The tools used to do these tasks — banking, travel reservations, buying airline tickets, checking out groceries at the supermarket — are core technologies for the professionals, but occasional technologies for the customers. This puts a new, substantial burden on customers rather than having skilled domain workers do these tasks.[50]
Significantly, too, as businesses increasingly recruit participatory practices and resources to market goods and services, consumers who are comfortable working within participatory media are at a distinct advantage over those who are less comfortable. Not only do consumers who are resistant to making use of the affordances of participatory culture have decreased access to knowledge, goods, and services, but they are less likely to take advantage of the increased leverage inherent in engaging with businesses as a prosumer.[50]
This category is linked to the issue of the digital divide, the concern with providing access to technology for all learners. The movement to break down the digital divide has included efforts to bring computers into classrooms, libraries, and other public places. These efforts have been largely successful, but as Jenkins et al. argue, the concern is now with the quality of access to available technologies. They explain:
What a person can accomplish with an outdated machine in a public library with mandatory filtering software and no opportunity for storage or transmission pales in comparison to what [a] person can accomplish with a home computer with unfettered Internet access, high band-width, and continuous connectivity.(Current legislation to block access to social networking software in schools and public libraries will further widen the participation gap.) The school system's inability to close this participation gap has negative consequences for everyone involved. On the one hand, those youth who are most advanced in media literacies are often stripped of their technologies and robbed of their best techniques for learning in an effort to ensure a uniform experience for all in the classroom. On the other hand, many youth who have had no exposure to these new kinds of participatory cultures outside school find themselves struggling to keep up with their peers. (Jenkins et al. pg. 15)
Passing out the technology free of charge is not enough to ensure youth and adults learn how to use the tools effectively. Most American youths now have at least minimal access to networked computers, be it at school or in public libraries, but "children who have access to home computers demonstrate more positive attitudes towards computers, show more enthusiasm, and report more enthusiasm and ease when using computers than those who do not" (Wartella, O'Keefe, and Scantlin 2000, p. 8). As the children with more access to computers gain more comfort in using them, the less tech-savvy students get pushed aside. It is more than a simple binary at work here, as working-class youths may still have access to some technologies (e.g. gaming consoles) while other forms remain unattainable. This inequality allows certain skills, such as play, to develop in some children, while others, such as the ability to produce and distribute self-created media, remain unavailable.[3]
In a participatory culture, one of the key challenges encountered is the participation gap. This comes into play with the integration of media into society. Some of the largest challenges regarding the participation gap concern education, learning, accessibility, and privacy. All of these factors are major setbacks when it comes to the relatively new phenomenon of youth participating in today's popular forms of media.
Education is one realm where the participation gap is very prominent. Today's education system heavily focuses on integrating media into its curriculum, and classrooms increasingly use computers and technology as learning aids. While this is beneficial for students and teachers, enhancing learning environments and giving access to a wealth of information, it also presents many problems. The participation gap leaves many schools, along with their teachers and students, at a disadvantage as they struggle to use current technology in their curriculum. Many schools do not have the funding to invest in computers or new technologies for their academic programs. Unable to afford computers, cameras, and interactive learning tools, their students cannot access the tools that wealthier schools have.
Another challenge is that as new technology is integrated into schools and academics, people need to be taught how to use these instruments. Teaching both students and adults how to use new media technologies is essential so that they can actively participate as their peers do. Additionally, teaching children how to navigate the information available on new media technologies is very important, given how much content is available on the internet. For beginners this can be overwhelming, and teaching kids as well as adults how to identify pertinent, reliable, and viable information will help them improve how they use media technologies.
One huge aspect of the participation gap is access. Access to the Internet and computers is a luxury in some households, yet such access is often taken for granted by the education system and many other institutions. Almost everything we do today is based online, from banking and shopping to homework and ordering food. Those who cannot access these services are automatically put at a severe disadvantage: they cannot participate in activities that their peers do and may suffer both academically and socially.
The last feature of the participation gap is privacy concerns. People put everything on the Internet these days, from pictures to personal information, and it is important to question how this content will be used. Who owns the content? Where does it go, and where is it stored? For example, the controversy over Facebook's ownership of and rights to users' content has been a hot-button issue over the past few years. It is disconcerting for many people to find that content they have posted to a particular website is no longer under their control, but may be retained and used by the website in the future.
All of the above-mentioned issues are key factors in the participation gap. They play a large role in the challenges we face as new media technology is incorporated into everyday life. These challenges affect how many populations interact with the changing media in society and unfortunately leave many at a disadvantage. This divide between users of new media and those who are unable to access these technologies is also referred to as the digital divide. It leaves low-income families and children at a severe disadvantage that affects them in the present as well as the future. Students, for example, are strongly affected because without access to the Internet or a computer they are unable to complete homework and projects and will consequently struggle in school. Poor grades can lead to frustration with academia and may furthermore lead to delinquent behavior, low-income jobs, decreased chances of pursuing higher education, and poor job skills.
Increased facility with technology does not necessarily lead to increased ability to interpret how technology exerts its own pressure on us. Indeed, with increased access to information, the ability to interpret the viability of that information becomes increasingly difficult.[51]It is crucial, then, to find ways to help young learners develop tactics for engaging critically with the tools and resources they use.
This is identified as a "breakdown of traditional forms of professional training and socialization that might prepare young people for their increasingly public roles as media makers and community participants" (Jenkins et al. pg. 5). For example, throughout most of the last half of the 20th century learners who wanted to become journalists would generally engage in a formal apprenticeship through journalism classes and work on a high school newspaper. This work would be guided by a teacher who was an expert in the rules and norms of journalism and who would confer that knowledge to student-apprentices. With increasing access to Web 2.0 tools, however, anybody can be a journalist of sorts, with or without an apprenticeship to the discipline. A key goal in media education, then, must be to find ways to help learners develop techniques for active reflection on the choices they make—and contributions they offer—as members of a participatory culture.
As teachers, administrators, and policymakers consider the role of new media and participatory practices in the school environment, they will need to find ways to address the multiple challenges. Challenges include finding ways to work with the decentralization of knowledge inherent in online spaces; developing policies with respect to filtering software that protects learners and schools without limiting students' access to sites that enable participation; and considering the role of assessment in classrooms that embrace participatory practices.
Cultures are substantially defined by their media and their tools for thinking, working, learning, and collaborating. Unfortunately, a large number of new media are designed to treat humans only as consumers; and people, particularly young people in educational institutions, form mindsets based on their exposure to specific media.
The current mindset about learning, teaching, and education is dominated by a view in which teaching is often fitted "into a mold in which a single, presumably omniscient teacher explicitly tells or shows presumably unknowing learners something they presumably know nothing about".[52]A critical challenge is a reformulation and reconceptualization of this impoverished and misleading conception. Learning should not take place in a separate phase and in a separate place, but should be integrated into people's lives allowing them to construct solutions to their own problems. As they experience breakdowns in doing so, they should be able to learn on demand by gaining access to directly relevant information. The direct usefulness of new knowledge for actual problem situations greatly improves the motivation to learn the new material because the time and effort invested in learning are immediately worthwhile for the task at hand — not merely for some putative long-term gain.
In order to create the active contributor mindsets serving as the foundation of participatory cultures, learning cannot be restricted to finding knowledge that is "out there". Rather than serving as the "reproductive organ of a consumer society",[53] educational institutions must cultivate the development of an active contributor mindset by creating habits, tools and skills that help people become empowered and willing to actively contribute to the design of their lives and communities.
Beyond supporting contributions from individual designers, educational institutions need to build a culture and mindset of sharing, supported by effective technologies and sustained by personal motivation to occasionally work for the benefit of groups and communities. This includes finding ways for people to see work done for the benefits of others being "on-task", rather than as extra work for which there is no recognition and no reward.
Jenkins et al. believe that conversation surrounding the digital divide should focus on opportunities to participate and to develop the cultural competencies and social skills required to take part, rather than getting stuck on the question of technological access. As institutions, schools have been slow on the uptake of participatory culture. Instead, afterschool programs currently devote more attention to the development of new media literacies: a set of cultural competencies and social skills that young people need in the new media landscape. Participatory culture shifts this literacy from the individual level to community involvement. Networking and collaboration develop social skills that are vital to the new literacies. Although new, these skills build on an existing foundation of traditional literacy, research skills, technical skills, and critical analysis skills taught in the classroom.
Metadesign is "design for designers".[54]It represents an emerging conceptual framework aimed at defining and creating social and technical infrastructures in which participatory cultures can come alive and new forms of collaborative design can take place. It extends the traditional notion of system design beyond the original development of a system, allowing users to become co-designers and co-developers. It is grounded in the basic assumption that future uses and problems cannot be completely anticipated at design time, when a system is developed. At use time, users will discover mismatches between their needs and the support that an existing system can provide. These mismatches will lead to breakdowns that serve as potential sources of new insights, new knowledge, and new understanding.
Meta-design supports participatory cultures in several ways.
|
https://en.wikipedia.org/wiki/Participatory_culture
|
Participatory justice, broadly speaking, refers to the direct participation of those most affected by a particular decision in the decision-making process itself: this could refer to decisions made in a court of law or by policymakers. Popular participation has been called "the ethical seal of a democratic society" by Friedhelm Hengsbach, a professor of Christian Social Science and Economic and Social Ethics at the Philosophical-Theological College Sankt Georgen in Frankfurt,[1] and "the politics of the future" by Gene Stephens, professor of criminology at the University of South Carolina.[2] It is about people and relationships.[3]
Various authors have claimed that examples of participatory justice date back to civilizations as old as those of the Canadian Aboriginals and the ancient Athenians, even if the terminology was not in use then.[4][5][6][7] In the society of Canadian Aboriginals, citizens were given the opportunity to give their own account of a dispute in public and determine the proper course of action, which sometimes involved issuing a public apology.[6][7] Elders were viewed as authorities due to their unique knowledge of the circumstances of community members.[6][7] In ancient Athens, large popular courts, made up of 200 to 1000 randomly selected male citizens, shared in both functions of forming and applying the law.[4][5] The term "participatory justice" itself, however, was first used by Bellevue, Washington-based attorney Claire Sherman Thomas in 1984 to describe the process by which people act as responsible participants in the lawmaking process, thereby contributing to causes of social justice.[8] In 1986, Gene Stephens first used the term to describe an alternative to the adversarial model of justice used in court.[2]
Both definitions of participatory justice relate to the concept of participatory democracy, which shares similar aspirations: to provide the government with democratic legitimacy and make for a more inclusive, transparent, equal society, by allowing citizens to participate directly in political decision-making and lawmaking processes that affect their lives.[6][7][8][9]
In rare cases, it also refers to the use of the Internet or a television reality show to catch a perpetrator.[10]
Participatory justice can refer to the use of alternative dispute resolution, such as mediation, conciliation, and arbitration, in criminal and civil courts, instead of, or before, going to court.[2][11] It is sometimes called "community dispute resolution".[12] Non-governmental organizations (NGOs) may get involved in the administration of criminal justice.[12][13] According to the National Advisory Commission of Criminal Justice Standards and Goals, delays in sentencing and lack of protection of the rights of the accused contribute to attitudes of legal cynicism.[2] Many citizens believe that the guilty are freed while the innocent, and often the black and poor, are harassed.[2] The participatory justice model, in turn, attempts to restore public confidence in the legal system.
Whereas the adversarial/dispositional system is often slow, expensive, and inconsistent, the participatory justice model offers a cheaper, more efficient route to resolution.[2][12] Rather than rely on expensive attorneys and expert witnesses, the model relies on volunteers from the community, who are trained in mediation and counseling techniques.[2] Resolution is often achieved more quickly because, once a consent agreement is reached and implemented by all parties involved, there is no possibility of re-litigation.[2] In the participatory justice model, cooperation is valued over competition, and reconciliation over winner-take-all. The need to protect the public and to respect the rights of ordinary citizens to a free but secure society are both considered.[2] This in turn helps preserve positive relationships between the parties involved.[12] In modern-day Canada, for instance, community members are involved in almost every step of the judicial process, even before people are arrested and sent to court; community organizations establish working partnerships with police to focus attention on growing social problems, like child abandonment or housing code violations, and to prevent crime.[9]
Not only does the participatory justice model promote inclusion, according to several authors, but also socioeconomic equality. The adversarial/dispositional system requires enforcing laws that often represent the will of those with the most educational and monetary resources.[2] As Stephens points out, most people who are perpetrators in a particular incident, whether civil or criminal, had also been victims at some point, so every person's circumstances should be taken into consideration.[2] Stahn mentions the importance of consulting victims at the reparation stage to determine whether they really believe the person who committed the crime against them is deserving of incarceration.[14]
Finally, participatory justice serves as a crucial check on state power that legitimizes the rule of law itself. As long as citizens believe in their ability to contribute to the lawmaking and evaluation process, public consensus supports the rule of law.[2][8] Without consensus, the government must rely on the letter of the law and the threat of prosecution to maintain order; the government might resort to censorship and surveillance.[8] The law becomes, "instead of a vehicle of justice, the instrument of a bureaucratic, institutionalized, dehumanized government."[8] Therefore, by reducing legal cynicism in communities, participatory justice effectively decreases the likelihood that the state will respond to this cynicism through overly punitive justice.[8]
Once used primarily in Scandinavia, Asia, and Africa, participatory justice has been "exported" to the United States and Canada.[3][6][7][17] It is used in a variety of cases, including between "Landlords and Tenants, Neighbours, Parents and Children, Families and Schools, Consumers and Merchants ... [and] victims of crime and offenders."[12] For war-torn countries, participatory justice can promote coexistence and reconciliation through an emphasis on universal participation.[2][14]
An online and self-financed form of participatory justice, called the crowdjury system, has been promoted as an improved way of managing trials in the future.[5] Witnesses to a crime can upload evidence online into a secure vault.[5] The data can then be organized into useful knowledge by groups of 9 to 12 self-selected volunteers with relevant expertise.[5] If a defendant pleads guilty, they can propose a form of restoration as a way to avoid harsher punishment; if they do not plead guilty, an online trial is held before a large, randomly selected jury.[5] Participants in the evidence review process receive monetary compensation in Bitcoin or altcoins. According to the crowdjury's proponents, this would help the government cut costs and create a more transparent judicial process.[5]
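As a rough illustration of the selection steps just described, the sketch below models forming a 9-to-12-person review panel and a large random jury. All function names, pool sizes, and the jury size are illustrative assumptions, not part of any real crowdjury implementation.

```python
import random

# Hypothetical sketch of the crowdjury selection steps described above.
# Names and sizes are illustrative assumptions only.

def select_review_panel(volunteers, rng):
    """Form a review panel of 9 to 12 self-selected volunteers."""
    size = min(len(volunteers), rng.randint(9, 12))
    return rng.sample(volunteers, size)

def select_jury(citizen_pool, size, rng):
    """Randomly select a large jury from the citizen pool for an online trial."""
    return rng.sample(citizen_pool, size)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
volunteers = [f"volunteer-{i}" for i in range(30)]
citizens = [f"citizen-{i}" for i in range(5000)]

panel = select_review_panel(volunteers, rng)
jury = select_jury(citizens, 500, rng)
print(len(panel), len(jury))
```

`random.sample` draws without replacement, which matches the requirement that each panelist or juror appears only once.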
Critics of the participatory justice model argue that it is often used to humiliate a particular party.[10] Inkiko-Gacaca, a system of community courts established in 2002 to respond to the large number of suspected perpetrators imprisoned after the 1994 Rwandan genocide, is a famous example.[18][19] Meant to achieve lasting peace through the promotion of restorative justice, Gacaca, according to several authors, has only become more retributive and coercive.[18][19][20] Through the process, Tutsi genocide survivors allegedly impose guilt on the Hutu, asking them to confess their deeds, express apologies to all victims and kin, and repay them tangibly, through public shaming.[19][20] The participatory justice model has also been critiqued for its lack of checks and balances and the lack of participation of professional experts.[21] Because the negotiators are usually not trained in the collection of evidence and are not privy to the criminal background of the alleged offender, the resolution may be reached without full facts and knowledge. Furthermore, the offender's motivation is difficult to assess if the alternative is more formal punishment.
Participatory justice can also refer to the rights of individuals and groups to actively participate in policy-making and engage in debates about social justice.[22]In a participatory justice model, rule makers rely on the participation of affected interests rather than on administrators, politicians, and the general population. This often leads to the redistribution of resources and recognition of those whose voices have historically been excluded, due in part to a lack of financial and educational resources to contribute.[22]
The Negotiated Rulemaking Act made it a priority to ensure that the people most affected by a particular issue, particularly poor people, would be able to take part in the negotiation process; the government provides agency funding to defray the costs of participation in rulemaking.[23] Giving marginalized groups the chance to participate in the decision-making process can help ensure they participate in the community more generally as well. For example, during the drafting of the United Nations Convention on the Rights of Persons with Disabilities (CRPD), disabled people's organizations (DPOs) were engaged in and consulted on a comprehensive program that would enable the disabled to participate in the civil, political, economic, social, and cultural life of the community.[24] Also, within the CRPD, states were encouraged to involve DPOs when preparing reports for the body meant to monitor the implementation of the program.[24]
Arguments supporting various participatory justice models in the U.S. have also cited the Equal Protection Clause of the 14th Amendment, the protection of individual legal rights, the upholding of autonomy, integrationism, and democratic principles.[23] Participatory justice models are seen as a way to fight against a paternalistic approach of government in which legislators choose for citizens without their input.[24][23] When affected individuals can participate in the policymaking process, they come to be viewed as subjects rather than objects.[24]
Consensus rule is more administratively efficient in the long run because it avoids lengthy post-enactment litigation. The legislature or administrative body using the participatory justice model also gains legitimacy, since it implies accountability. Participatory justice models have long been used by environmental justice movements. Oftentimes, participation was originally denied not because of institutional or political failure, but because those in question were not recognized as falling within the domain of justice.[25] Young argues that participatory justice, rather than distributive justice, was the primary demand of communities like Afton, North Carolina. People objected that they were being subjected to risks and exposure without their consent and without mechanisms to articulate their opposition. The unfortunate reality is that the people who live in countries that will be destroyed first by rising sea levels will not be included in those decisions when they are made.
One common criticism of participatory justice models is that they might reduce efficiency, as in the environmental justice model discussed above.[26] Incorporating the voices of all affected interests is a difficult and long process, especially when the issue being decided is significantly controversial.[23][26] Another disadvantage is that, even when a negotiating body does include affected interests, it might be difficult for all interests to be equally represented.[23] This problem, however, can be mitigated by providing participants with negotiation skills, as well as by developing relevant information and paying the expenses involved in participation, as in the participatory justice model employed in the SSA's representative payment program. A further disadvantage of a participatory justice model is the inexperience of those participating. The participants may not have as much respect for the wide range of legal and ethical considerations that must be made when writing policy proposals.[4][5] For this reason, some critics argue that policy experts should mediate the conversations on various policies, especially since modern laws are much more complex than those in places like ancient Athens, where laws were inscribed on panels all over the city and set up in the agora.[4][8]
|
https://en.wikipedia.org/wiki/Participatory_justice
|
A business incubator is an organization that helps startup companies and individual entrepreneurs develop their businesses by providing a full-scale range of services, starting with management training and office space, and ending with venture capital financing.[1] The National Business Incubation Association (NBIA) defines business incubators as a catalyst tool for either regional or national economic development. NBIA categorizes its members' incubators into five types: academic institutions; non-profit development corporations; for-profit property development ventures; venture capital firms; and a combination of the above.[2]
Business incubators differ from research and technology parks in their dedication to startup and early-stage companies. Research and technology parks, on the other hand, tend to be large-scale projects that house everything from corporate, government, or university labs to very small companies. Most research and technology parks do not offer business assistance services, which are the hallmark of a business incubation program. However, many research and technology parks house incubation programs.[3]
Incubators also differ from the U.S. Small Business Administration's Small Business Development Centers (and similar business support programs) in that they serve only selected clients. Congress created the Small Business Administration in the Small Business Act of July 30, 1953. Its purpose is to "aid, counsel, assist and protect, insofar as is possible, the interests of small business concerns." In addition, the charter ensures that small businesses receive a "fair proportion" of any government contracts and sales of surplus property.[4] SBDCs work with any small business at any stage of development, not only with startup companies. Many business incubation programs partner with their local SBDC to create a "one-stop shop" for entrepreneurial support.[5]
Within European Union countries, there are various EU- and state-funded programs that offer support in the form of consulting, mentoring, prototype creation, and other services, along with co-funding for them.[6]
In India, business incubators are promoted in a varied fashion: as technology business incubators (TBIs) and as startup incubators. The former deal with technology businesses (mostly consultancy and the promotion of technology-related businesses), while the latter deal with promoting startups (with more emphasis on establishing new companies, scaling businesses, prototyping, patenting, and so forth).[7][8][9][10][11]
The first business incubator was the Batavia Industrial Center, which opened in 1959 in Batavia, New York.[12] Two years earlier, Massey-Harris had announced the closure of its Batavia farm machinery factory, resulting in a giant vacant building and a local unemployment rate of 18 percent.[12] The Mancuso family, the dominant business family in that area of Western New York, was desperate to resuscitate the regional economy, whose imminent collapse threatened to bring down their various business enterprises.[12] They bought the former harvester factory and placed Joseph Mancuso in charge of finding commercial tenants.[12] It soon became clear that large corporations preferred to build new factories from scratch rather than shoehorn them into someone else's 80-year-old building, forcing Mancuso to subdivide the vast space and lease smaller spaces to smaller tenants.[12]
In Mancuso's frantic search for tenants, he offered creative incentives to anyone willing to sign a lease, such as "short-term leases, shared office supplies and equipment, business advice, and secretarial services", as well as assistance in linking up with local banks to secure financing.[12] One tenant was a nearby chicken hatchery in need of space to house additional chicken coops, which explains the origin of the term "business incubator".[12] In 1963, while giving a reporter a tour of the various tenants in the Batavia Industrial Center, Mancuso pointed out the coops and remarked, "These guys are incubating chickens...I guess we're incubating businesses".[12]
Business incubation expanded across the U.S. in the 1980s and spread to the UK and Europe through various related forms (e.g. innovation centres, pépinières d'entreprises, technopoles/science parks).
The U.S.-based International Business Innovation Association estimates that there are about 7,000 incubators worldwide. A study funded by the European Commission in 2002 identified around 900 incubation environments in Western Europe.[13] As of October 2006, there were more than 1,400 incubators in North America, up from only 12 in 1980. Her Majesty's Treasury identified around 25 incubation environments in the UK in 1997; by 2005, UKBI identified around 270 incubation environments across the country. In 2005 alone, North American incubation programs assisted more than 27,000 companies that provided employment for more than 100,000 workers and generated annual revenues of $17 billion.[14]
Incubation activity has not been limited to developed countries; incubation environments are now being implemented in developing countries, raising interest in financial support from organizations such as UNIDO and the World Bank.
The first high-tech incubator located in Silicon Valley was Catalyst Technologies, started by Nolan Bushnell after he left Atari. "My idea was that I would fund [the businesses] with a key," says Bushnell. "And the key would fit a lock in a building. In the building would be a desk and chair, and down the hall would be a Xerox machine. They would sign their name 35 times and the company would be incorporated." All the details would be handled: "They'd have a health care plan, their payroll system would be in place, and the books would be set up. So in 15 minutes, they would be in business working on the project."[15]
Since startup companies lack many resources, experience, and networks, incubators provide services that help them get through the initial hurdles of starting a business. These hurdles include space, funding, legal, accounting, computer services, and other prerequisites to running the business.
According to the Small Business Administration's website, its mission is to provide small businesses with four main services. These services are:
Among the most common incubator services are:[14]
There are a number of business incubators that have focused on particular industries or on a particular business model, earning them their own name.
More than half of all business incubation programs are "mixed-use" projects, meaning they work with clients from a variety of industries. Technology incubators account for 39% of incubation programs.[14]
One example of a specialized type of incubator is a bio incubator. Bioincubators specialize in supporting life science-based startup companies. Entrepreneurs with feasible projects in the life sciences are selected and admitted to these programs.
Unlike many business assistance programs, business incubators do not serve any and all companies. Entrepreneurs who wish to enter a business incubation program must apply for admission. Acceptance criteria vary from program to program, but in general only those with feasible business ideas and a workable business plan are admitted.[19] It is this factor that makes it difficult to compare the success rates of incubated companies against general business survival statistics.[20]
Although most incubators offer their clients office space and shared administrative services, the heart of a true business incubation program is the services it provides to startup companies. More than half of incubation programs surveyed by the National Business Incubation Association[21] in 2006 reported that they also served affiliate or virtual clients.[14] These companies do not reside in the incubator facility. Affiliate clients may be home-based businesses or early-stage companies that have their own premises but can benefit from incubator services. Virtual clients may be too remote from an incubation facility to participate on site, and so receive counseling and other assistance electronically.
The amount of time a company spends in an incubation program can vary widely depending on a number of factors, including the type of business and the entrepreneur's level of business expertise. Life science and other firms with long research and development cycles require more time in an incubation program than manufacturing or service companies that can immediately produce and bring a product or service to market. On average, incubator clients spend 33 months in a program.[14] Many incubation programs set graduation requirements by development benchmarks, such as company revenues or staffing levels, rather than by time.
Business incubation has been identified as a means of meeting a variety of economic and socioeconomic policy needs, which may include job creation, fostering a community's entrepreneurial climate, technology commercialization, diversifying local economies, building or accelerating growth of local industry clusters, business creation and retention, encouraging minority entrepreneurship, identifying potential spin-in or spin-out business opportunities, and community revitalization.[14]
About one-third of business incubation programs are sponsored by economic development organizations. Government entities (such as cities or counties) account for 21% of program sponsors. Another 20% are sponsored by academic institutions, including two- and four-year colleges, universities, and technical colleges.[14] In many countries, incubation programs are funded by regional or national governments as part of an overall economic development strategy. In the United States, however, most incubation programs are independent, community-based and resourced projects. The U.S. Economic Development Administration is a frequent source of funds for developing incubation programs, but once a program is open and operational it typically receives no federal funding; few states offer centralized incubator funding. Rents and/or client fees account for 59% of incubator revenues, followed by service contracts or grants (18%) and cash operating subsidies (15%).[14]
As part of a major effort to address the ongoing economic crisis in the US, legislation was introduced to "reconstitute Project Socrates". The updated version of Socrates supports incubators by providing users with technology-based facts about the marketplace, competitor maneuvers, potential partners, and technology paths to achieve competitive advantage. Michael Sekora, the original creator and director of Socrates, says that a key purpose of Socrates is to assist government economic planners in addressing the economic and socioeconomic issues (see above) with unprecedented speed, efficiency, and agility.[22]
Many for-profit or "private" incubation programs were launched in the late 1990s by investors and other for-profit operators seeking to hatch businesses quickly and bring in big payoffs. At the time, NBIA estimated that nearly 30% of all incubation programs were for-profit ventures. In the wake of the dot-com bust, however, many of those programs closed. In NBIA's 2002 State of the Business Incubation survey, only 16% of responding incubators were for-profit programs. By the 2006 SOI, just 6% of respondents were for-profit.[14]
Although some incubation programs (regardless of nonprofit or for-profit status) take equity in client companies, most do not. Only 25% of incubation programs report that they take equity in some or all of their clients.[14]
Incubators often aggregate themselves into networks which are used to share good practices and new methodologies.
Europe's European Business and Innovation Centre Network ("EBN")[23] association federates more than 250 European Business and Innovation Centres (EU|BICs) throughout Europe. France has its own national network of technopoles, pre-incubators, and EU|BICs, called RETIS Innovation. This network focuses on internationalizing startups.[citation needed]
Of 1000 incubators across Europe, 500 are situated in Germany. Many of them are organized federally within the ADT (Arbeitsgemeinschaft Deutscher Innovations-, Technologie-, und Gründerzentren e.V.).[24]
San Francisco and Silicon Valley are home to 'founder houses.'[25] These involve a collective of founders sharing an apartment or house while working to get their companies off the ground. Similar to tech/hacker houses in the same area, the founders collaborate to promote one another's success while enjoying the financial benefits of co-living in one of the most expensive regions of the country.[26] These collectives are typically located in San Francisco or near Stanford University's campus.[27] Many of the founders have dropped out of Stanford University to pursue their careers; in fact, there is more than a 1 in 10 chance that a billion-dollar startup has one or more founders who attended Stanford.[28] In addition to the financial incentives of co-living, founders share investor recommendations, funding strategies, VC contacts, and other elements critical to a startup company's success in its early days.[29] These set-ups allow for largely virtual work, eliminating the burden on new founders to find a physical space for their company.[29] Due to the collaborative nature of these spaces, residents whose companies have failed often pivot to taking a high-ranking position at a roommate's company.[25] Collectives such as these build on a legacy set forth by Mark Zuckerberg and Facebook: the hacker's den rented by Zuckerberg that ultimately gave rise to a tech supergiant was documented in the 2010 film The Social Network.[30][31]
|
https://en.wikipedia.org/wiki/Public_incubator
|
Public participation, also known as citizen participation or patient and public involvement, is the inclusion of the public in the activities of any organization or project. Public participation is similar to, but more inclusive than, stakeholder engagement.
Generally, public participation seeks and facilitates the involvement of those potentially affected by or interested in a decision. This can be in relation to individuals, governments, institutions, companies or any other entities that affect public interests. The principle of public participation holds that those who are affected by a decision have a right to be involved in the decision-making process. Public participation implies that the public's contribution will influence the decision.[1][2] Public participation may be regarded as a form of empowerment and as a vital part of democratic governance.[2] In the context of knowledge management, the establishment of ongoing participatory processes is seen by some as the facilitator of collective intelligence and inclusiveness, shaped by the desire for the participation of the whole community or society.[2]
Public participation is part of "people centred" or "human centric" principles, which have emerged in Western culture over the last thirty years and have had some bearing on education, business, public policy, and international relief and development programs. Public participation is advanced by the humanist movements, and may be advanced as part of a "people first" paradigm shift. In this respect, public participation may challenge the concept that "big is better" and the logic of centralized hierarchies, advancing alternative concepts of "more heads are better than one" and arguing that public participation can sustain productive and durable change.[3]
Some legal and other frameworks have developed a human rights approach to public participation. For example, the right to public participation in economic and human development was enshrined in the 1990 African Charter for Popular Participation in Development and Transformation.[4] Similarly, major environmental and sustainability mechanisms have enshrined a right to public participation, such as the Rio Declaration.[5]
In recent years, loss of public trust in authorities and politicians has become a widespread concern in many democratic societies. The relationship between citizens and local governments has weakened over the past two decades due to shortcomings in public service delivery.[10] Public participation is regarded as one potential solution to the crisis in public trust and governance, particularly in the UK, Europe, and other democracies. Establishing direct citizen participation can increase the effectiveness, legitimacy, and social justice of governance.[11] The idea is that the public should be involved more fully in the policy process: authorities should seek public views and participation, instead of treating the public as simply passive recipients of policy decisions.
The underlying assumption of political theorists, social commentators, and even politicians is that public participation increases public trust in authorities, improves citizens' political efficacy, enhances democratic ideals, and even improves the quality of policy decisions. However, the assumed benefits of public participation in restoring public trust have yet to be confirmed.[12][13][14] Citizen participation is only sustained if citizens support it and if their involvement is actively supported by the governing body.
Public participation may also be viewed as enhancing accountability. The argument is that public participation can be a means for participating communities to hold public authorities accountable for implementation.[15] In the United Kingdom, for example, volunteer citizens on the Independent Monitoring Board report on the fair and humane detention of prisoners and detainees.[16]
Many community organizations are composed of affluent middle-class citizens with the privilege and the time to participate.[11] It is well documented that low-income citizens face difficulty organizing themselves and engaging in public issues.[17] Obstacles such as finding affordable childcare, getting time off work, and limited access to education in public matters exacerbate the lack of participation by low-income citizens.[11] To foster greater participation by all social groups, privileged classes in the vanguard work to bring in low-income citizens through collaboration. These organizations create an incentive for participation through accessible language and friendly environments.[18] This allows for an atmosphere of consensus between middle- and lower-income citizens.
Why is the public motivated to participate in policy making in the first place? A study by Christopher M. Weible[19] argues, through a stakeholder example, that individuals are motivated by their belief systems. More specifically, people are "motivated to convert their beliefs into policy" (Weible, 2007). Weible organizes these beliefs into a "three-tier hierarchical belief system" (Weible, 2007). The first tier consists of a person's unchanging fundamental beliefs. The middle tier is composed of core beliefs regarding policy and is more "pliable than deep core beliefs" (Weible, 2007), which are found in tier one. The final tier consists of secondary beliefs.
Weible's study suggests that the public has an intrinsic desire to participate in policy making to some degree. That being said, how effective is public participation in the sphere of policy making? A study by Milena I. Neshkova and Hai Guo[20] illuminates the effectiveness of public participation by analyzing data from U.S. state transportation agencies. The authors measured the "effect of public participation on organizational performance" (Neshkova and Guo, 2012), where organizational performance refers to that of the state transportation agencies. The researchers concluded that "public participation is, in fact, associated with enhanced organizational performance" (Neshkova and Guo, 2012): bureaucratic agencies in general become more effective when they include the public in their decision making. Together, these two studies show that the public not only has an interest in policy making but is driven by its beliefs, and that public participation is an effective tool when lawmakers or bureaucratic agencies make policies or laws.
Evaluating methods of public participation and involvement across multiple disciplines and languages has been an ongoing challenge, making it difficult to assess effectiveness.[21][22]Some novel tools for reporting involvement, engagement and participation across disciplines using standardised terminology have been developed. For example, a beta version of Standardised Data on Initiatives (STARDIT) was published in 2022, using Wikidata to encourage consistent terminology across languages to describe the tasks of involvement, the methods, communication modes and any impacts or outcomes from involvement.[22][23]STARDIT has already been used by a number of organisations to report initiatives, including Cochrane, Australian Genomics' 'Guidelines For Community Involvement In Genomics Research',[24]NIHR-funded research projects, La Trobe University's Academic and Research Collaborative in Health (ARCH),[25]citizen science projects and the Wiki Journals.[22][23]
Participatory budgeting is a process of democratic deliberation and decision-making, in which ordinary city residents decide how to allocate part of a municipal or public budget. Participatory budgeting is usually characterized by several basic design features: identification of spending priorities by community members, election of budget delegates to represent different communities, facilitation and technical assistance by public employees, local and higher level assemblies to deliberate and vote on spending priorities, and the implementation of local direct-impact community projects.
Participatory budgeting may be used by towns and cities around the world, and has been widely publicised in Porto Alegre, Brazil, where the first full participatory budgeting process was developed starting in 1989.
In economic development theory, there is a school of participatory development. The desire to increase public participation in humanitarian aid and development has led to the establishment of numerous context-specific, formal methodologies, matrices, pedagogies and ad hoc approaches. These include conscientization and praxis; participatory action research (PAR), rapid rural appraisal (RRA) and participatory rural appraisal (PRA); appreciation influence control analysis (AIC); "open space" approaches; Objectives Oriented Project Planning (ZOPP); vulnerability analysis and capacity analysis.[3]
In recent years, public participation has come to be seen as a vital part of addressing environmental problems and bringing about sustainable development. In this context, the limits of relying solely on a technocratic, bureaucratic monopoly over decision making have been noted, and it is argued that public participation allows governments to adopt policies and enact laws that are relevant to communities and take their needs into account.[15]
Public participation is recognised as an environmental principle (see Environmental Principles and Policies) and has been enshrined in the Rio Declaration. A number of arguments have emerged in favor of a more participatory approach, which stress that public participation is a crucial element of environmental governance that contributes to better decision making. It is recognised that environmental problems cannot be solved by government alone.[26]Participation in environmental decision-making effectively links the public to environmental governance. By involving the public, who are at the root of both the causes and the solutions of environmental problems, in environmental discussions, transparency and accountability are more likely to be achieved, thus securing the democratic legitimacy of decision-making on which good environmental governance depends.[27][28]Arguably, strong public participation in environmental governance could increase commitment among stakeholders, which strengthens compliance with and enforcement of environmental laws. GIS can provide a valuable tool for such work (see GIS and environmental governance).
In addition, some proponents argue that the right to participate in environmental decision-making is a procedural right that "can be seen as part of the fundamental right to environmental protection".[29]From this ethical perspective, environmental governance is expected to operate within a framework coinciding with the "constitutional principle of fairness (inclusive of equality)", which inevitably requires the fulfillment of "environmental rights" and ultimately calls for the engagement of the public.[29]Further, in the context of the considerable scientific uncertainties surrounding environmental issues, public participation helps to counter such uncertainties and bridges the gap between scientifically defined environmental problems and the experiences and values of stakeholders.[27][30]Through the joint efforts of government and scientists in collaboration with the public, better governance of the environment is expected to be achieved by making the most appropriate decision possible.
Although broad agreement exists, the notion of public participation in environmental decision-making has been subject to sustained critique concerning the real outcomes of participatory environmental governance. Critics argue that public participation tends to focus on reaching a consensus between actors who share the same values and seek the same outcomes. However, the uncertain nature of many environmental issues undermines the validity of public participation, given that in many cases the actors who come to the table hold very different perceptions of the problem and its solution, which are unlikely to be welded into a consensus due to the incommensurability of different positions.[31]This runs the risk of expert bias, which generates further exclusion, as those who are antagonistic to the consensus are marginalised in the environmental decision-making process, violating the assumed advantage of the participatory approach to produce democratic environmental decisions. This raises the further question of whether consensus should be the measure of a successful outcome of participation.[32]As Davies suggests, participative democracy cannot guarantee substantive environmental benefits 'if there are competing views of what the environment should be like and what it is valuable for'.[33]Consequently, who should be involved at which points in the process of environmental decision-making, and what the goal of this kind of participation is, become central to the debates on public participation as a key issue in environmental governance.[27]
Around the globe, experts work closely with local communities. Local communities are crucial stakeholders for heritage.[34]
Consultation with local communities is acknowledged formally in cultural management processes.[35]It is necessary for defining the significance of a cultural place or site; otherwise, one runs the risk of overlooking many values by focusing on "experts'" views.[36]This was the case in heritage management until the end of the 20th century. A paradigm shift started with the Burra Charter by ICOMOS Australia in 1979[37]and was later developed by the work of the GCI around 2000.[38][39]Today, so-called "value-led conservation" is at the base of heritage management for World Heritage sites: establishing stakeholders and their associated values is a fundamental step in creating a management plan for such sites.
The concept of stakeholders has widened to include local communities.
Various levels of local government, research institutions, enterprises, charitable organisations, and communities are all important parties. Activities such as knowledge exchange, education, consultation, exhibitions, academic events, and publicity campaigns are all effective means of local participation.
For instance, local charities in Homs, Syria have been undertaking several projects with local communities to protect their heritage.[40]
A conservation programme in Dangeil, Sudan, has used social and economic relationships with the community to make the project sustainable over the long term.[41]
In Australia, Indigenous communities increasingly have stewardship of conservation and management programs to care for, monitor and maintain their cultural heritage places and landscapes, particularly those containing rock art.[42]
In some countries public participation has become a central principle of public policymaking within democratic bodies: policies are rendered legitimate when citizens have the opportunity to influence the politicians and parties involved.[11]In the UK and Canada it has been observed that all levels of government have started to build citizen and stakeholder engagement into their policy-making processes. Situating citizens as active actors in policy-making can work to offset government failures by allowing for reform that better reflects the needs of citizens.[10]By incorporating citizens, policies will reflect everyday needs and realities rather than the machinations of politicians and political parties.[43]This may involve large-scale consultations, focus group research, online discussion forums, or deliberative citizens' juries. There are many different public participation mechanisms, although these often share common features (for a list of over 100, and a typology of mechanisms, see Rowe and Frewer, 2005).[44]
Public participation is viewed as a tool intended to inform the planning, organising or funding of activities. Public participation may also be used to measure attainable objectives, evaluate impact, and identify lessons for future practice.[45]In Brazil's housing councils, mandated in 2005, citizen engagement in policy drafting increased the effectiveness and responsiveness of government public service delivery.[43]All modern constitutions and fundamental laws contain and declare the concept and principle of popular sovereignty, which essentially means that the people are the ultimate source of public power or government authority. The concept of popular sovereignty holds simply that in a society organized for political action, the will of the people as a whole is the only right standard of political action. It can be regarded as an important element in the system of checks and balances and in representative democracy. Therefore, the people are implicitly entitled even to participate directly in the process of public policy and law making.[46]
In the United States, public participation in administrative rulemaking refers to the process by which proposed rules are subject to public comment for a specified period of time. Public participation is typically mandatory for rules promulgated by executive agencies of the US government. Statutes or agency policies may mandate public hearings during this period.[47]
Citizen science is a term commonly used to describe the participation of non-scientists in scientific research. Greater inclusion of non-professional scientists in policy research and the "democratization of policy research" can be important.[48]Several benefits are claimed, including having citizens involved not just in the contribution of data but also in the framing and development of the research itself. The key to success in applying citizen science to policy development is data that are "suitable, robust, and of a known quality for evidence-based policy making".[49]Barriers to applying citizen science to policy development include a lack of fit between the data collected and the policy in question, and skepticism regarding data collected by non-experts.[49]
Public participation can be a method of capturing community activity into regimes of power and control although it has also been noted that capture and empowerment can co-exist.[50]
|
https://en.wikipedia.org/wiki/Public_participation
|
The public sphere (German: Öffentlichkeit) is an area in social life where individuals can come together to freely discuss and identify societal problems, and through that discussion influence political action. A "public" is "of or concerning the people as a whole." Such a discussion is called public debate and is defined as the expression of views on matters that are of concern to the public—often, but not always, with opposing or diverging views being expressed by participants in the discussion.[1]Public debate takes place mostly through the mass media, but also at meetings or through social media, academic publications and government policy documents.[2]
The term was originally coined by German philosopher Jürgen Habermas, who defined the public sphere as "made up of private people gathered together as a public and articulating the needs of society with the state".[3]Communication scholar Gerard A. Hauser defines it as "a discursive space in which individuals and groups associate to discuss matters of mutual interest and, where possible, to reach a common judgment about them".[4]The public sphere can be seen as "a theater in modern societies in which political participation is enacted through the medium of talk"[5]and "a realm of social life in which public opinion can be formed".[6]
Describing the emergence of the public sphere in the 18th century, Habermas noted that the public realm, or sphere, originally was "coextensive with public authority",[7]while "the private sphere comprised civil society in the narrower sense, that is to say, the realm of commodity exchange and of social labor".[8]Whereas the "sphere of public authority" dealt with the state, or realm of the police, and the ruling class,[8]or the feudal authorities (church, princes and nobility), the "authentic 'public sphere'", in a political sense, arose at that time from within the private realm, specifically, in connection with literary activities, the world of letters.[9]This new public sphere spanned the public and the private realms, and "through the vehicle of public opinion it put the state in touch with the needs of society".[10]"This area is conceptually distinct from the state: it [is] a site for the production and circulation of discourses that can in principle be critical of the state."[11]The public sphere "is also distinct from the official economy; it is not an arena of market relations but rather one of discursive relations, a theater for debating and deliberating rather than for buying and selling".[11]These distinctions between "state apparatuses, economic markets, and democratic associations... are essential to democratic theory".[11]The people themselves came to see the public sphere as a regulatory institution against the authority of the state.[12]The study of the public sphere centers on the idea of participatory democracy, and how public opinion becomes political action.
The ideology of the public sphere theory is that the government's laws and policies should be steered by the public sphere and that the only legitimate governments are those that listen to the public sphere.[13]"Democratic governance rests on the capacity of and opportunity for citizens to engage in enlightened debate".[14]Much of the debate over the public sphere involves what is the basic theoretical structure of the public sphere, how information is deliberated in the public sphere, and what influence the public sphere has over society.
Jürgen Habermas claims "We call events and occasions 'public' when they are open to all, in contrast to closed or exclusive affairs".[15]This 'public sphere' is a "realm of our social life in which something approaching public opinion can be formed. Access is guaranteed to all citizens".[16]
This notion of the public becomes evident in terms such as public health, public education, public opinion or public ownership. They are opposed to the notions of private health, private education, private opinion, and private ownership. The notion of the public is intrinsically connected to the notion of the private.
Habermas[17]stresses that the notion of the public is related to the notion of the common. For Hannah Arendt,[18]the public sphere is therefore "the common world" that "gathers us together and yet prevents our falling over each other".
Habermas defines the public sphere as a "society engaged in critical public debate".[19]
According to Habermas, the conditions of the public sphere are:[20][16]
Most contemporary conceptualizations of the public sphere are based on the ideas expressed in Jürgen Habermas' book The Structural Transformation of the Public Sphere – An Inquiry into a Category of Bourgeois Society, which is a translation of his Habilitationsschrift, Strukturwandel der Öffentlichkeit: Untersuchungen zu einer Kategorie der bürgerlichen Gesellschaft.[21]The German term Öffentlichkeit (public sphere) encompasses a variety of meanings and it implies a spatial concept, the social sites or arenas where meanings are articulated, distributed, and negotiated, as well as the collective body constituted by, and in this process, "the public".[22]The work is still considered the foundation of contemporary public sphere theories, and most theorists cite it when discussing their own theories.
Thebourgeoispublic sphere may be conceived above all as the sphere of private people come together as a public; they soon claimed the public sphere regulated from above against the public authorities themselves, to engage them in a debate over the general rules governing relations in the basically privatized but publicly relevant sphere of commodity exchange and social labor.[23]
Through this work, he gave a historical-sociological account of the creation, brief flourishing, and demise of a "bourgeois" public sphere based on rational-critical debate and discussion:[24]Habermas stipulates that, due to specific historical circumstances, a new civic society emerged in the eighteenth century. Driven by a need for open commercial arenas where news and matters of common concern could be freely exchanged and discussed—accompanied by growing rates of literacy, accessibility to literature, and a new kind of critical journalism—a separate domain from ruling authorities started to evolve across Europe. "In its clash with the arcane and bureaucratic practices of the absolutist state, the emergent bourgeoisie gradually replaced a public sphere in which the ruler's power was merely represented before the people with a sphere in which state authority was publicly monitored through informed and critical discourse by the people".[25]
In his historical analysis, Habermas points out three so-called "institutional criteria" as preconditions for the emergence of the new public sphere. The discursive arenas, such as Britain's coffee houses, France's salons, and Germany's Tischgesellschaften, "may have differed in the size and compositions of their publics, the style of their proceedings, the climate of their debates, and their topical orientations", but "they all organized discussion among people that tended to be ongoing; hence they had a number of institutional criteria in common":[26]
Habermas argued that bourgeois society cultivated and upheld these criteria. The public sphere was well established in various locations, including coffee shops and salons, areas of society where various people could gather and discuss matters that concerned them.[27]The coffee houses in London society at this time became the centers of art and literary criticism, which gradually widened to include even economic and political disputes as matters of discussion. In French salons, as Habermas says, "opinion became emancipated from the bonds of economic dependence".[7]Any new work, book or musical composition had to get its legitimacy in these places. They not only provided a forum for self-expression but in fact became platforms for airing one's opinions and agendas for public discussion.
The emergence of a bourgeois public sphere was particularly supported by 18th-century liberal democracy, which made resources available to this new political class to establish a network of institutions such as publishing enterprises, newspapers and discussion forums; the democratic press was the main tool for this. The key feature of this public sphere was its separation from the power of both the church and the government, owing to its access to a variety of resources, both economic and social.
As Habermas argues, in due course this sphere of rational and universalistic politics, free from both the economy and the State, was destroyed by the same forces that initially established it. This collapse was due to the consumeristic drive that infiltrated society, so citizens became more concerned about consumption than political action. Furthermore, the growth of the capitalistic economy led to an uneven distribution of wealth, thus widening economic polarity. Suddenly the media became a tool of political forces and a medium for advertising rather than the medium from which the public got its information on political matters. This resulted in limiting access to the public sphere, and political control of the public sphere was inevitable for the modern capitalistic forces to operate and thrive in the competitive economy.
Therewith emerged a new sort of influence, i.e., media power, which, used for purposes of manipulation, once and for all took care of the innocence of the principle of publicity. The public sphere, simultaneously restructured and dominated by the mass media, developed into an arena infiltrated by power in which, by means of topic selection and topical contributions, a battle is fought not only over influence but over the control of communication flows that affect behavior while their strategic intentions are kept hidden as much as possible.[28]
Although Structural Transformation was (and is) one of the most influential works in contemporary German philosophy and political science, it took 27 years until an English version appeared on the market in 1989. Based on a conference held on the occasion of the English translation, which Habermas himself attended, Craig Calhoun (1992) edited Habermas and the Public Sphere[29][30]– a thorough dissection of Habermas' bourgeois public sphere by scholars from various academic disciplines. The core criticism at the conference was directed towards the above-stated "institutional criteria":
Nancy Fraser identified the fact that marginalized groups are excluded from a universal public sphere, and thus it was impossible to claim that one group would, in fact, be inclusive. However, she claimed that marginalized groups formed their own public spheres, and termed this concept a subaltern counter public or counter-public.
Fraser worked from Habermas' basic theory because she saw it to be "an indispensable resource" but questioned the actual structure and attempted to address her concerns.[11]She made the observation that "Habermas stops short of developing a new, post-bourgeois model of the public sphere".[32]Fraser attempted to evaluate Habermas' bourgeois public sphere, discuss some assumptions within his model, and offer a modern conception of the public sphere.[32]
In the historical reevaluation of the bourgeois public sphere, Fraser argues that rather than opening up the political realm to everyone, the bourgeois public sphere shifted political power from "a repressive mode of domination to a hegemonic one".[33]Rather than rule by power, there was now rule by the majority ideology. To deal with this hegemonic domination, Fraser argues that repressed groups form "subaltern counter-publics" that are "parallel discursive arenas where members of subordinated social groups invent and circulate counterdiscourses to formulate oppositional interpretations of their identities, interests, and needs".[34]
Benhabib notes that in Habermas' idea of the public sphere, the distinction between public and private issues separates issues that normally affect women (issues of "reproduction, nurture and care for the young, the sick, and the elderly")[35]into the private realm and out of the discussion in the public sphere. She argues that if the public sphere is to be open to any discussion that affects the population, there cannot be distinctions between "what is" and "what is not" discussed.[36]Benhabib argues for feminists to counter the popular public discourse in their own counter public.
The public sphere was long regarded as men's domain whereas women were supposed to inhabit the private domestic sphere.[37][38][39]A distinct ideology that prescribed separate spheres for women and men emerged during the Industrial Revolution.[40][41]
The concept of heteronormativity is used to describe the way in which those who fall outside of the basic male/female dichotomy of gender or whose sexual orientations are other than heterosexual cannot meaningfully claim their identities, causing a disconnect between their public selves and their private selves. Michael Warner made the observation that the idea of an inclusive public sphere makes the assumption that we are all the same without judgments about our fellows. He argues that we must achieve some sort of disembodied state in order to participate in a universal public sphere without being judged. His observations point to a homosexual counter public, and offer the idea that homosexuals must otherwise remain "closeted" in order to participate in the larger public discourse.[42]
Gerard Hauser proposed a different direction for the public sphere than previous models. He foregrounds the rhetorical nature of public spheres, suggesting that public spheres form around "the ongoing dialogue on public issues" rather than the identity of the group engaged in the discourse.[44]
Rather than arguing for an all-inclusive public sphere, or the analysis of tension between public spheres, he suggested that publics were formed by active members of society around issues.[45]They are a group of interested individuals who engage in vernacular discourse about a specific issue.[46]"Publics may be repressed, distorted, or responsible, but any evaluation of their actual state requires that we inspect the rhetorical environment as well as the rhetorical act out of which they evolved, for these are the conditions that constitute their individual character".[47]These people formed rhetorical public spheres that were based in discourse, not necessarily orderly discourse but any interactions whereby the interested public engages each other.[46]This interaction can take the form of institutional actors as well as the basic "street rhetoric" that "open[s] a dialogue between competing factions."[43]The spheres themselves formed around the issues that were being deliberated. The discussion itself would reproduce itself across the spectrum of interested publics "even though we lack personal acquaintance with all but a few of its participants and are seldom in contexts where we and they directly interact, we join these exchanges because they are discussing the same matters".[48]In order to communicate within the public sphere, "those who enter any given arena must share a reference world for their discourse to produce awareness for shared interests and public opinions about them".[49]This world consists of common meanings and cultural norms from which interaction can take place.[50]
The rhetorical public sphere has several primary features:
The rhetorical public sphere was characterized by five rhetorical norms from which it can be gauged and criticized. How well the public sphere adheres to these norms determine the effectiveness of the public sphere under the rhetorical model. Those norms are:
In all this Hauser believes a public sphere is a "discursive space in which strangers discuss issues they perceive to be of consequence for them and their group. Its rhetorical exchanges are the bases for shared awareness of common issues, shared interests, tendencies of extent and strength of difference and agreement, and self-constitution as a public whose opinions bear on the organization of society."[48]
Hauser's conception of the public sphere as a medium in which public opinion is formed can be pictured as a lava lamp: just as the lamp's contents change, with its lava separating and forming new shapes, the public sphere continually creates opportunities for discourse that address public opinion, thereby forming new discussions of rhetoric. The lava that holds the public arguments together is the public conversation itself.
Habermas argues that the public sphere requires "specific means for transmitting information and influencing those who receive it".[16]
Habermas' argument shows that the media are of particular importance for constituting and maintaining a public sphere. Discussions about the media have therefore been of particular importance in public sphere theory.
According to Habermas, there are two types of actors without whom no political public sphere could be put to work: professionals in the media system and politicians.[53]For Habermas, there are five types of actors who make their appearance on the virtual stage of an established public sphere:
(a) Lobbyists who represent special interest groups;
(b) Advocates who either represent general interest groups or substitute for a lack of representation of marginalized groups that are unable to voice their interests effectively;
(c) Experts who are credited with professional or scientific knowledge in some specialized area and are invited to give advice;
(d) Moral entrepreneurs who generate public attention for supposedly neglected issues;
(e) Intellectuals who have gained, unlike advocates or moral entrepreneurs, a perceived personal reputation in some field (e.g., as writers or academics) and who engage, unlike experts and lobbyists, spontaneously in public discourse with the declared intention of promoting general interests.[54]
Libraries have been inextricably tied to educational institutions in the modern era, having developed within democratic societies. Libraries took on aspects of the public sphere (as did classrooms), even as public spheres transformed in the macro sense. These contextual conditions led to a fundamental conservative rethinking of civil society institutions like schools and libraries.[55]
Habermas argues that under certain conditions, the media act to facilitate discourse in a public sphere.[56]The rise of the Internet has brought about a resurgence of scholars applying theories of the public sphere to Internet technologies.[57]
For example, a study by S. Edgerly et al.[58]focused on the ability of YouTube to serve as an online public sphere. The researchers examined a large sample of video comments, using California Proposition 8 (2008) as an example. The authors note that some scholars think the online public sphere is a space where a wide range of voices can be expressed due to the "low barrier of entry"[59]and interactivity; however, they also point to a number of limitations. Edgerly et al. say that this affirmative discourse presupposes that YouTube can be an influential player in the political process and can serve as an influential force to politically mobilize young people. YouTube has allowed anyone to obtain whatever political knowledge they wish, and has allowed people to broadcast themselves to a large public sphere, where viewers can form their own opinions and discuss them in the comments; at the same time, the authors mention critiques that YouTube is built around the popularity of videos with sensationalist content. The research by Edgerly et al.[60]found that the analyzed YouTube comments were diverse, which they argue is a possible indicator that YouTube provides space for public discussion. They also found that the style of YouTube videos influences the nature of the commentary, and that a video's ideological stance influenced the language of the comments. The findings of the work suggest that YouTube is a public sphere platform.
Additional work by S. Buckley[61]reflected on the role that news content, specifically US cable news, plays in the formation of the public sphere. His research analysed a total of 1,239 videos uploaded by five news organisations and investigated the link between content and user engagement. Through both content and sentiment analysis, it was found that the sentiment of the language used in video titles had an impact on the public, with negatively sentimented titles generating more user engagement. Buckley suggested that, given the emotionality present in news content and the ongoing process of media hybridization, a new conceptual framework of the public sphere needs to be developed, one that acknowledges both thoughtful discussion and overt expressions of feeling.
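The kind of title-level sentiment scoring used in such content analyses can be illustrated with a minimal lexicon-based sketch. The word lists, scores, and example titles below are invented for illustration; this is not Buckley's actual method or data:

```python
# Toy lexicon-based sentiment scoring of video titles (hypothetical
# word lists, NOT the lexicon used in the study).
POSITIVE = {"win", "hope", "success", "progress", "peace"}
NEGATIVE = {"crisis", "attack", "fear", "failure", "scandal"}

def title_sentiment(title: str) -> int:
    """Crude score: +1 per positive word, -1 per negative word."""
    words = title.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

titles = [
    "Peace talks show real progress",
    "Scandal deepens as crisis grows",
]
print([title_sentiment(t) for t in titles])  # [2, -2]
```

A real study would use a validated sentiment lexicon or classifier and then correlate each title's score with engagement metrics (views, comments, likes) across the corpus; the sketch only shows the scoring step.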
Some, like Colin Sparks, argue that a new global public sphere ought to be created in the wake of increasing globalization and of global institutions that operate at the supranational level.[62]The key question for him, however, is whether any media exist with the size and access to fulfil this role. The traditional media, he notes, come close to the public sphere in this true sense; nevertheless, limitations are imposed by the market and by concentration of ownership. At present, the global media fail to constitute the basis of a public sphere for at least three reasons. Similarly, he notes that the internet, for all its potential, does not meet the criteria for a public sphere, and that unless these limitations are "overcome, there will be no sign of a global public sphere".[63]
German scholars Jürgen Gerhards and Mike S. Schäfer conducted a study in 2009 to establish whether the Internet offers a better and broader communication environment than quality newspapers. They analyzed how the issue of human genome research was portrayed between 1999 and 2001 in popular quality newspapers in both Germany and the United States, in comparison with the way it appeared on search engines at the time of their research. Their intention was to analyze which actors and which sorts of opinions the subject generated in print and on the Internet, and to verify whether the online space proved to be a more democratic public sphere, with a wider range of sources and views. Gerhards and Schäfer say they found "only minimal evidence to support the idea that the internet is a better communication space as compared to print media".[64]"In both media, communication is dominated by (bio- and natural) scientific actors; popular inclusion does not occur".[64]The scholars argue that search algorithms select sources of information based on the popularity of their links: "Their gatekeeping, in contrast to the old mass media, relies mainly on technical characteristics of websites".[64]For Gerhards and Schäfer the Internet is not an alternative public sphere, because less prominent voices end up being silenced by the search engines' algorithms. "Search engines might actually silence societal debate by giving more space to established actors and institutions".[65]Astroturfing lends further support to this view: The Guardian columnist George Monbiot said that astroturfing software "has the potential to destroy the internet as a forum for constructive debate. It jeopardizes the notion of online democracy".[66]
There has been an academic debate about how social media affects the public sphere. The sociologists Brian Loader and Dan Mercea give an overview of this discussion.[67]They argue that social media offers increasing opportunities for political communication and enables democratic capacities for political discussion within the virtual public sphere. The effect would be that citizens could challenge the political and economic power of governments and corporations. Additionally, the Internet gives rise to new forms of political participation and new information sources for users, which can be used, for example, in online campaigns.
However, the two authors point out that social media's dominant uses are entertainment, consumerism, and content sharing among friends. Loader and Mercea point out that "individual preferences reveal an unequal spread of social ties with a few giant nodes such as Google, Yahoo, Facebook and YouTube attracting the majority of users".[68]They also stress that some critics have voiced the concern that there is a lack of seriousness in political communication on social media platforms. Moreover, lines between professional media coverage and user-generated content would blur on social media.
The authors conclude that social media provides new opportunities for political participation; however, they warn users of the risks of accessing unreliable sources. The Internet impacts the virtual public sphere in many ways, but is not a free utopian platform as some observers argued at the beginning of its history.[69][70]
John Thompson criticises Habermas's traditional idea of the public sphere, as it is centred mainly on face-to-face interactions. Thompson argues instead that modern society is characterized by a new form of "mediated publicness".[71]
This mediated publicness has altered the power relations in a way in which not only the many are visible to the few but the few can also now see the many:
"Whereas the Panopticon renders many people visible to a few and enables power to be exercised over the many by subjecting them to a state of permanent visibility, the development of communication media provides a means by which many people can gather information about a few and, at the same time, a few can appear before many; thanks to the media, it is primarily those who exercise power, rather than those over whom power is exercised, who are subjected to a certain kind of visibility".[72]
However, Thompson also acknowledges that "media and visibility is a double-edged sword",[73]meaning that even though media can be used to project an improved image (by managing visibility), individuals are not in full control of their self-presentation. Mistakes, gaffes and scandals are now recorded and are therefore harder to deny, as they can be replayed by the media.
Examples of the public service model include the BBC in Britain, and the ABC and SBS in Australia. The political function and effect of modes of public communication have traditionally been understood through the dichotomy between the Hegelian State and civil society. The dominant theory of this mode is the liberal theory of the free press. However, the public service, state-regulated model, whether publicly or privately funded, has always been seen not as a positive good but as an unfortunate necessity imposed by the technical limitations of frequency scarcity.
According to Habermas's concept of the public sphere,[74]the strength of this concept is that it identifies and stresses the importance for democratic politics of a sphere distinct from the economy and the State. On the other hand, this concept challenges the liberal free press tradition from the grounds of its materiality, and it challenges the Marxist critique of that tradition from the grounds of the specificity of politics as well.
From Garnham's critique,[75]three great virtues of Habermas's public sphere are mentioned. Firstly, it focuses on the indissoluble link between the institutions and practices of mass public communication and the institutions and practices of democratic politics. The second virtue of Habermas's approach is its concentration on the necessary material resource base for any public sphere. Its third virtue is to escape from the simple dichotomy of free market versus state control that dominates so much thinking about media policy.
Oskar Negt and Alexander Kluge took a non-liberal view of public spheres, arguing that Habermas's reflections on the bourgeois public sphere should be supplemented with reflections on the proletarian public spheres and the public spheres of production.[22]
The distinction between bourgeois and proletarian public spheres is not mainly a distinction between classes. The proletarian public sphere is instead to be conceived of as the "excluded": the vague, unarticulated impulses of resistance or resentment. It carries the subjective feelings, the egocentric malaise with the common public narrative, and the interests that are not socially valorized.
The bourgeois and proletarian public spheres are mutually defining: the proletarian public sphere carries the "left-overs" of the bourgeois public sphere, while the bourgeois public sphere is built upon the productive forces of the underlying resentment.
Negt and Kluge furthermore point out the necessity of considering a third dimension: the public spheres of production. These collect the impulses of resentment and instrumentalize them in the productive spheres. The public spheres of production are wholly instrumental and, unlike the bourgeois and proletarian spheres, have no critical impulse. The interests incorporated in the public spheres of production are given capitalist shape, and questions of their legitimacy are thus neutralized.[77]
By the end of the 20th century, discussions about public spheres took a new biopolitical twist. Traditionally, the public spheres had been considered in terms of how free agents transgress the private spheres. Michael Hardt and Antonio Negri, drawing on the late Michel Foucault's writings on biopolitics, have suggested that we reconsider the very distinction between public and private spheres.[78]They argue that the traditional distinction is founded on a certain (capitalist) account of property that presupposes clear-cut separations between interests. This account of property is, according to Hardt and Negri, based upon a scarcity economy, which is characterized by the impossibility of sharing goods: if agent A eats the bread, agent B cannot have it. The interests of agents are thus, in general, clearly separated.
However, with the evolving shift in the economy towards an informational materiality, in which value is based upon informational significance, or the narratives surrounding products, the clear-cut subjective separation is no longer obvious. Hardt and Negri see open source approaches as examples of new ways of co-operation that illustrate how economic value is founded not upon exclusive possession but upon collective potentialities.[79]Informational materiality is characterized by gaining value only through being shared. Hardt and Negri thus suggest that the commons become the focal point of analyses of public relations. The point is that with this shift it becomes possible to analyse how the very distinction between the private and the public is evolving.[80]
https://en.wikipedia.org/wiki/Public_sphere
Radical democracy is a type of democracy that advocates the radical extension of equality and liberty.[1]Radical democracy is concerned with a radical extension of equality and freedom, following the idea that democracy is an unfinished, inclusive, continuous and reflexive process.[1]
Within radical democracy there are three distinct strands, as articulated by Lincoln Dahlberg.[1]These strands can be labeled as agonistic, deliberative and autonomist.
The first and most noted strand of radical democracy is the agonistic perspective, which is associated with the work of Laclau and Mouffe. Radical democracy was articulated by Ernesto Laclau and Chantal Mouffe in their book Hegemony and Socialist Strategy: Towards a Radical Democratic Politics, written in 1985. They argue that social movements which attempt to create social and political change need a strategy which challenges neoliberal and neoconservative concepts of democracy.[2]This strategy is to expand the liberal definition of democracy, based on freedom and equality, to include difference.[2]
According to Laclau and Mouffe, "radical democracy" means "the root of democracy".[3]Laclau and Mouffe claim that liberal democracy and deliberative democracy, in their attempts to build consensus, oppress differing opinions, races, classes, genders, and worldviews.[2]In the world, in a country, and in a social movement there are many (a plurality of) differences which resist consensus. Radical democracy is not only accepting of difference, dissent and antagonisms, but is dependent on them.[2]Laclau and Mouffe argue from the assumption that oppressive power relations exist in society and that those relations should be made visible, re-negotiated and altered.[4]By building democracy around difference and dissent, oppressive power relations existing in societies are able to come to the forefront so that they can be challenged.[2]
The second strand, deliberative, is mostly associated with the work of Jürgen Habermas. This strand of radical democracy is opposed to the agonistic perspective of Laclau and Mouffe. Habermas argues that political problems surrounding the organization of life can be resolved by deliberation,[5]that is, by people coming together and deliberating on the best possible solution. In contrast with the agonistic perspective, this type of radical democracy is based on consensus and communicative means: there is a reflexive critical process of coming to the best solution.[5]Equality and freedom are at the root of Habermas's deliberative theory. Deliberation is established through institutions that can ensure free and equal participation of all.[5]Habermas is aware that different cultures, world-views and ethics can lead to difficulties in the deliberative process. Despite this, he argues that communicative reason can create a bridge between opposing views and interests.[5]
The third strand of radical democracy is the autonomist strand, which is associated with left-communist and post-Marxist ideas. The difference between this type of radical democracy and the two noted above is its focus on "the community".[1]The community is seen as the pure constituted power, instead of the deliberative rational individuals or the agonistic groups of the first two strands. The community resembles a "plural multitude" (of people) instead of the working class of traditional Marxist theory.[1]This plural multitude is the pure constituted power and reclaims this power by searching for and creating mutual understandings within the community.[1]This strand challenges the traditional thinking about equality and freedom in liberal democracies by stating that individual equality can be found in the singularities within the multitude, that equality overall is created by an all-inclusive multitude, and that freedom is created by restoring the multitude to its pure constituted power.[1]This strand of radical democracy is often a term used to refer to the post-Marxist perspectives of Italian radicalism – for example Paolo Virno.
Laclau and Mouffe have argued for a radical agonistic democracy, in which different opinions and worldviews are not oppressed by the search for consensus in liberal and deliberative democracy. Because this agonistic perspective has been the most influential in the academic literature, it has attracted most of the criticism of the idea of radical democracy. Brockelman, for example, argues that the theory of radical democracy is a Utopian idea.[15]Political theory, he argues, should not be used to offer a vision of a desirable society. In the same vein, it is argued that radical democracy might be useful at the local level but does not offer a realistic perception of decision-making at the national level.[16]For example, people might know what they want to see changed in their town and feel the urge to participate in the decision-making process of future local policy. Developing an opinion about issues at the local level often does not require specific skills or education. Deliberation in order to combat the problem of groupthink, in which the view of the majority dominates over the view of the minority, can be useful in this setting. However, people might not be skilled enough, or willing, to decide about national or international problems. A radical democracy approach for overcoming the flaws of democracy is, it is argued, not suitable at levels higher than the local one.
Habermas and Rawls have argued for radical deliberative democracy, where consensus and communicative means are at the root of politics. However, some scholars identify multiple tensions between participation and deliberation. Three of these tensions are identified by Joshua Cohen, a student of the philosopher John Rawls.[17]
However, the concept of radical democracy is seen in some circles as colonial in nature due to its reliance on a western notion of democracy.[18]It is argued that liberal democracy is viewed by the West as the only legitimate form of governance.[19]
Since Laclau and Mouffe argued for a radical democracy, many other theorists and practitioners have adapted and changed the term.[2]For example, bell hooks and Henry Giroux have both written about the application of radical democracy in education. In hooks's book Teaching to Transgress: Education as the Practice of Freedom, she argues for an education where educators teach students to go beyond the limits imposed by racial, sexual and class boundaries in order to "achieve the gift of freedom".[20]Paulo Freire's work, although initiated decades before Laclau and Mouffe, can also be read through similar lenses.[21][22][23]Theorists such as Paul Chatterton and Richard J. F. Day have written about the importance of radical democracy within some of the autonomous movements in Latin America (namely the EZLN – Zapatista Army of National Liberation in Mexico, the MST – Landless Workers' Movement in Brazil, and the Piquetero – Unemployed Workers' Movement in Argentina), although the term radical democracy is used differently in these contexts.[24][25]
With the rise of the internet in the years after the development of the various strands of radical democracy theory, increasing attention has been paid to the relationship between the internet and the theory. The internet is regarded as an important aspect of radical democracy, as it provides a means of communication that is central to every approach to the theory.
The internet is believed to reinforce both the theory of radical democracy and the actual possibility of radical democracy in three distinct ways.[26]
Among these is the concept of a radical public sphere, where voice in the political debate is given to otherwise oppressed or marginalized groups.[27]Approached from radical democracy theory, the expression of such views on the internet can be understood as online activism. In current liberal representative democracies, certain voices and interests are always favored above others. Through online activism, excluded opinions and views can still be articulated. In this way, activists contribute to the ideal of a heterogeneity of positions. However, the digital age does not necessarily contribute to the notion of radical democracy: social media platforms have the ability to shut down certain, often radical, voices, which is counterproductive to radical democracy.[28]
https://en.wikipedia.org/wiki/Radical_democracy
Radical transparency is a phrase used across fields of governance, politics, software design and business to describe actions and approaches that radically increase the openness of organizational process and data. Its usage was originally understood as an approach or act that uses abundant networked information to access previously confidential organizational process or outcome data.[1][2]
Modern usage of the term radical transparency coincided with increased public use of information and communications technologies, including the Internet. Kevin Kelly argued in 1994 that, "in the network era, openness wins, central control is lost."[3]: p.116 David Brin's writing on The Transparent Society re-imagined the societal consequences of radical transparency, remixing Orwell's 1984. However, the explicit political argument for "radical transparency"[1]was first made in a 2001 Foreign Affairs article on information and communication technology driving economic growth in developing regions. In 2006 Wired's Chris Anderson blogged on the shift from secrecy to transparency that blogging culture had made in corporate communications, and highlighted the next step as a shift to 'radical transparency' where the "whole product development process [is] laid bare, and opened to customer input."[4]By 2008 the term was being used to describe the WikiLeaks platform, which radically decentralized the power, voices and visibility of governance knowledge that was previously secret.[5]: p.58
Radical corporate transparency, as a philosophical concept, would involve removing all barriers to free and easy public access to corporate, political and personal (treating persons as corporations) information, and the development of laws, rules, social connivance and processes that facilitate and protect such an outcome.[6]
Using these methods to 'hold corporations accountable for the benefit of everyone' was emphasised in Tapscott and Ticoll's 2003 book The Naked Corporation.[7]Radical transparency has also been explained by Dan Goleman as a management approach in which, ideally, all decision making is carried out publicly.[8]Specific to this approach is the potential for new technologies to reveal the eco-impact of purchased products, steering consumers toward informed decisions and companies toward reformed business practices.
In traditional public relations management, damage control involved the suppression of public information. But, as observed by Clive Thompson in Wired, the Internet has created a force towards transparency:
"[H]ere's the interesting paradox: The reputation economy creates an incentive to be more open, not less. Since Internet commentary is inescapable, the only way to influence it is to be part of it. Being transparent, opening up, posting interesting material frequently and often is the only way to amass positive links to yourself and thus to directly influence your Googleable reputation. Putting out more evasion or PR puffery won't work, because people will either ignore it and not link to it – or worse, pick the spin apart and enshrine those criticisms high on your Google list of life."[9]Mark Zuckerberg has opined that "more transparency should make for a more tolerant society in which people eventually accept that everybody sometimes does bad or embarrassing things."[10]
Heemsbergen[11]argues that radical political transparency consists of actors outside the structures of government using new media forms to disclose secrets to the public in ways that were previously unavailable and that create new expectations around how information should be used to govern. A prominent example of these evolutions of democracy was the creation of Hansard in parliaments of the Westminster system, which started in pirate markets of pamphleteers illegally sharing the 'secrets' of what was said in the British Parliament.[12]Hansard is now institutionalised in many parliaments, with full records of parliamentary discussions recorded and published, while the texts of proposed laws and final laws are all, in principle, public documents.
Since the late 1990s, many national parliaments have decided to publish all parliamentary debates and laws on the Internet. However, the initial texts of proposed laws and the discussions and negotiations regarding them generally occur in parliamentary commissions, which are rarely transparent, and among political parties, which are very rarely transparent. Moreover, given the logical and linguistic complexity of typical national laws, public participation is difficult despite the radical transparency at the formal parliamentary level.
Radical transparency has also been suggested in the context of government finance and public economics.[13]In Missed Information,[14]Sarokin and Schulkin take the concept even further, advocating for hypertransparency of government decision-making: a situation in which all internal records, emails, meeting minutes and other internal information are proactively available to the public. Hypertransparency reverses the current Freedom of Information model of access only upon request, instead making all information available by default unless withheld under limited exemptions such as personal information or national security.
A radically transparent approach is also emerging within education. Open educational resources (OER) are freely accessible, usually openly licensed documents and media that are useful for teaching, learning, educational, assessment and research purposes. Although some people consider the use of an open format to be an essential characteristic of OER, this is not a universally acknowledged requirement. In addition, online course activities are also becoming increasingly accessible to others.[15]One example is the new and popular massive open online courses (MOOCs).
https://en.wikipedia.org/wiki/Radical_transparency
Sociocracy is a theory of governance that seeks to create psychologically safe environments and productive organizations. It draws on the use of consent, rather than majority voting, in discussion and decision-making by people who have a shared goal or work process.[1][2][3]
The Sociocratic Circle-Organization Method was developed by the Dutch electrical engineer and entrepreneur Gerard Endenburg and is inspired by the work of activists and educators Betty Cadbury and Kees Boeke, to which Endenburg was exposed at a young age while studying at a school led by Boeke.[2]
Sociocracy has informed and inspired similar organizational forms and methods, including Holacracy and the self-organizing team approach developed by Buurtzorg.[4][5][6]
The word 'sociocracy' is derived from the Latin socius, meaning companions, colleagues, or associates, and the Greek cratia, which refers to the ruling class, as in aristocracy, plutocracy, democracy, and meritocracy.[7]
It was coined in 1851 by the French philosopher Auguste Comte[8]as a parallel to sociology, the science that studies how people organize themselves into social systems. Comte believed that a government led by sociologists would use scientific methods to meet the needs of all the people, not just the ruling class.[9]The American sociologist Lester Frank Ward, in an 1881 paper for the Penn Monthly, was an active advocate of a sociocracy to replace the political competition created by majority vote.[10]
Ward expanded his concept of sociocracy in Dynamic Sociology (1883) and The Psychic Factors of Civilization (1892). Ward believed that a well-educated public was essential for effective government, and foresaw a time when the emotional and partisan nature of contemporary politics would yield to a more effective, dispassionate, and scientific discussion of issues and problems. Democracy would thus eventually evolve into a more advanced form of government, sociocracy.[11]
The Dutch pacifist, educator, and peace worker Kees Boeke and his wife, the English peace activist Beatrice Boeke-Cadbury, updated and expanded Ward's ideas by implementing the first sociocratic organizational structure in a school in Bilthoven, Netherlands, that they co-founded in 1926. The school still exists: the Children's Community Workshop (Werkplaats Kindergemeenschap). Boeke saw sociocracy (in Dutch: sociocratie) as a form of governance or management that presumes equality of individuals and is based on consensus. This equality is not expressed with the 'one man, one vote' law of democracy but rather by a group of individuals reasoning together until a decision is reached that is satisfactory to each of them.
To make sociocratic ideals operational, Boeke used consensus decision-making based on the Quaker business method, which he described as one of the first sociocratic organizations. Another was his school of approximately 400 students and teachers, in which decisions were made by everyone working together in weekly "talkovers" to find a mutually acceptable solution; the individuals in each group would then agree to abide by the decision. "Only when common agreement is reached can any action be taken, quite a different atmosphere is created from that arising from majority rule." Boeke defined three "fundamental rules": (1) the interests of all members must be considered, and the individual must respect the interests of the whole; (2) no action may be taken without a solution that everyone can accept; and (3) all members must accept these decisions when unanimously made. If a group could not make a decision, the decision would be made by a "higher level" of representatives chosen by each group. The size of a decision-making group should be limited to 40, with smaller committees of 5–6 making "detailed decisions". For larger groups, a structure of representatives is chosen by these groups to make decisions.[12]
This model placed a high importance on the role of trust. For the process to be effective, members of each group must trust each other, and it is claimed that this trust will be built over time as long as this method of decision-making is used. When applied to civic governance, people "would be forced to take an interest in those who live close by". Only when people had learned to apply this method in their neighborhoods could the next higher level of sociocratic governance be established. Eventually representatives would be elected from the highest local levels to establish a "World Meeting to govern and order the world."[12]
"Everything depends on a new spirit breaking through among men. May it be that, after the many centuries of fear, suspicion and hate, more and more a spirit of reconciliation and mutual trust will spread abroad. The constant practice of the art of sociocracy and of the education necessary for it seem to be the best way in which to further this spirit, upon which the real solution of all world problems depends."
—Kees Boeke,"Sociocracy: Democracy as it might be". (May 1945)[12]
In the late 1960s and early 1970s Gerard Endenburg, an electrical engineer and former student of Boeke's, further developed and applied Boeke's principles in the electrical engineering company he first managed for his parents and then owned. Endenburg sought to replicate in this company the culture of cooperation and harmony he had experienced in Boeke's school.[2]He also recognized that, in a company with a diverse and changing workforce, he could not wait for workers to trust each other before they could make collective decisions. To solve this problem, Endenburg integrated his understanding of physics, cybernetics, and systems thinking to further develop the social, political, and educational theories of Comte, Ward, and Boeke.[13]
After years of experimentation and application, Endenburg developed a formal organizational method called the Sociocratic Circle Organizing Method (Sociocratische Kringorganisatie Methode). It was based on a "circular causal feedback process", now commonly called the circular process and feedback loops.[2]This method uses a hierarchy of circles corresponding to units or departments of an organization, but it is a circular hierarchy—the links between each circle combine to form feedback loops up and down the organization.[13]
All policy decisions, those about allocation of resources and that constrain operational decisions, require the consent of all members of a circle. Day-to-day operational decisions are made by the operations leader within the policies established in circle meetings. Policy decisions affecting more than one circle's domain are made by a higher circle formed by representatives from each circle. This structure of linked circles that make decisions by consent maintains the efficiency of a hierarchy while preserving the equivalence of the circles and their members.[13]
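The double-linked circle structure described above can be illustrated with a small data model. This is a hypothetical sketch for illustration only, not part of any sociocratic standard or software; the class, method and member names are invented:

```python
# Hypothetical sketch of Endenburg's circular hierarchy: each circle is
# linked to the next higher circle by two people, its operational leader
# (appointed from above) and an elected representative (chosen from
# below), forming a feedback loop between the two circles.

class Circle:
    def __init__(self, name):
        self.name = name
        self.members = set()
        self.higher = None  # next higher circle, if any

    def double_link(self, lower, leader, representative):
        """Link the `lower` circle to this one via two full members."""
        lower.higher = self
        # The operational leader and the elected representative are full
        # members of BOTH circles, so information flows up and down.
        for person in (leader, representative):
            lower.members.add(person)
            self.members.add(person)

general = Circle("general management circle")
production = Circle("production circle")
general.double_link(production, leader="Ann", representative="Ben")

# Both linking members take part in the decision-making of both circles:
print(sorted(general.members & production.members))  # ['Ann', 'Ben']
```

The key property the sketch captures is that the link is double: removing either linking member breaks one direction of the feedback loop between the circles.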
In the 1980s, Endenburg and his colleague Annewiek Reijmer founded the Sociocratisch Centrum (Sociocratic Center) in Rotterdam.[14]
Endenburg's policy decision-making method was originally published as being based on four essential principles, in order to emphasize that the process of selecting people for roles and responsibilities was likewise subject to the consent process. As explained below, however, it is now taught as a method of three principles, as Endenburg had originally developed it.[15]
Decisions are made when there are no remaining "paramount objections", that is, when there is informed consent from all participants. Objections must be reasoned and argued, and based on the ability of the objector to work productively toward the goals of the organization. All policy decisions are made by consent, although the group may consent to use another decision-making method. Within these policies, day-to-day operational decisions are normally made in the traditional way. Generally, objections are highly valued as a way to hear every stakeholder's concern, a process sometimes called "objection harvesting".[16]It is emphasized that focusing on objections first leads to more efficient decision-making.[17]
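The consent process described above can be loosely modeled as a loop that harvests objections and amends the proposal until no paramount objection remains. This is purely an illustrative sketch under invented names (`consent_round`, `decide_by_consent`, the budget example); real sociocratic meetings are facilitated conversations, not algorithms:

```python
# Illustrative model of consent decision-making with "objection
# harvesting": a proposal passes only when no member raises a reasoned
# (paramount) objection; objections are treated as input for amending
# the proposal rather than as vetoes.

def consent_round(proposal, members):
    """Collect every member's objections to the current proposal."""
    objections = []
    for member in members:
        objections.extend(member(proposal))
    return objections

def decide_by_consent(proposal, members, amend, max_rounds=10):
    for _ in range(max_rounds):
        objections = consent_round(proposal, members)
        if not objections:
            return proposal  # informed consent reached
        proposal = amend(proposal, objections)  # integrate objections
    raise RuntimeError("no consent reached; escalate to the next higher circle")

# Toy example: each member objects while the budget exceeds their limit.
def member_with_limit(limit):
    return lambda p: [] if p["budget"] <= limit else [f"budget over {limit}"]

members = [member_with_limit(100), member_with_limit(80)]
amend = lambda p, objs: {"budget": p["budget"] - 10}
decision = decide_by_consent({"budget": 100}, members, amend)
print(decision)  # {'budget': 80}
```

Note how the loop differs from majority voting: a single reasoned objection blocks the decision, but only until the proposal is amended to address it.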
The sociocratic organization is composed of a hierarchy of semi-autonomouscircles. This hierarchy, however, does not constitute a power structure asautocratichierarchies do, instead resembling ahorizontal association, since the domain of each circle is strictly bounded by a group decision. Each circle has the responsibility to execute, measure, and control its own processes in achieving its goals. It governs a specific domain of responsibility within the policies of the larger organization. Circles are also responsible for their owndevelopmentand for each member's development. Often called "integral education," the circle and its members are expected to determine what they need to know to remain competitive in their field and to reach the goals of their circle.
Individuals acting as links function as full members in the decision-making of both their own circles and the next higher circle. A circle's operational leader is by definition a member of the next higher circle andrepresents the larger organizationin the decision-making of the circle they lead. Each circle also elects a representative to represent the circles' interests in the next higher circle. These links form a feedback loop between circles.
At the highest level of the organization, there is a “top circle”, analogous to aboard of directors, except that it works within the policies of the circle structure rather than ruling over it. The members of the top circle include external experts that connect the organization to its environment. Typically these members have expertise in law, government, finance, community, and the organization's mission. In a corporation, it might also include a representative selected by the shareholders. The top circle also includes theCEOand at least one representative of the general management circle. Each of these circle members participates fully in decision-making in the top circle.
This fourth principle extends principle 1. Individuals are elected to roles and responsibilities in open discussion using the same consent criteria used for other policy decisions. Members of the circle nominate themselves or other members of the circle and present reasons for their choice. After discussion, people can (and often do) change their nominations, and the discussion leader will suggest the election of the person for whom there are the strongest arguments. Circle members may object and there is further discussion. For a role that many people might fill, this discussion may continue for several rounds. When fewer people are qualified for the task, this process will quickly converge. The circle may also decide to choose someone who is not a current member of the circle.
In the first formulations of the Sociocratic Circle-Organizing Method, Endenburg had three principles and regarded the fourth, elections by consent, not as a separate principle but as a method for making decisions by consent when there are several choices. He considered it part of the first principle, that consent governs policy decisions, but many people failed to see that electing people to roles and responsibilities is an allocation of resources and thus a policy decision. To emphasize the importance of making these decisions by consent in circle meetings, Endenburg separated it out as a fourth principle.
Sociocracy makes a distinction between consent and consensus in order to emphasize that circle decisions are not expected to produce a "consensus" in the sense of full agreement. In sociocracy, consent is defined as "no objections", and objections are based on one's ability to work toward the aims of the organization. Members discussing an idea in consent-based governance commonly ask themselves whether it is "good enough for now, safe enough to try".[16] If not, then there is an objection, which leads to a search for an acceptable adaptation of the original proposal to gain consent.
Sociocratisch Centrum co-founder Reijmer has summarized the difference as follows:[2] "By consensus, I must convince you that I'm right; by consent, you ask whether you can live with the decision".
|
https://en.wikipedia.org/wiki/Sociocracy
|
A workers' council, also called a labour council,[1] is a type of council in a workplace or a locality made up of workers, or of temporary and instantly revocable delegates elected by the workers in a locality's workplaces.[2] In such a system of political and economic organization, the workers themselves are able to exercise decision-making power. Furthermore, the workers within each council decide what their agenda is and what their needs are. The council communist Anton Pannekoek describes shop committees and sectional assemblies as the basis for workers' management of the industrial system.[3] A variation is a soldiers' council, in which soldiers direct a mutiny. Workers and soldiers have also operated councils in conjunction (such as the 1918 German Arbeiter- und Soldatenrat). Workers' councils may in turn elect delegates to central committees, such as the Congress of Soviets.
Supporters of workers' councils (such as council communists,[4] libertarian socialists,[5] Leninists,[6] anarchists,[7] and Marxists[8]) argue that they are the most natural form of working-class organization, and believe that workers' councils are necessary for the organization of a proletarian revolution and the implementation of an anarchist or communist society.
The Paris Commune of 1871 became a model for how future workers' councils would be organised for revolution and socialist governance. Workers' councils played a significant role in the communist revolutions of the 20th century. This was most notable in the lands of the Russian Empire (including Congress Poland and Latvia) in 1905, where, because trade unions were repressed, the workers' councils (soviets) acted as labor committees that coordinated strike activities throughout the cities. During the Revolutions of 1917–1923, councils of socialist workers were able to exercise political authority. In the workers' councils organized as part of the 1918 German revolution, factory organizations such as the General Workers' Union of Germany formed the basis for region-wide councils.
Anarchists advocate for a stateless society based on horizontal social organisation through voluntary federations of communes, with workers' councils and voluntary associations acting as the basic units of such societies. Early conceptions of this theory came from the writings of the French anarchist philosopher Pierre-Joseph Proudhon, whose theory of mutualism envisioned a society organised through workers' councils, cooperatives, and other types of workers' associations.[9][10]
At the First International, followers of Proudhon and the collectivists led by Mikhail Bakunin endorsed the use of workers' councils both as a means of organising class struggle and as the structural basis of a future anarchist society.[11] Writing for the French anarchist journal The New Times [fr], the Russian theorist Peter Kropotkin praised the workers of Russia for using this form of organisation during the Revolution of 1905.[12]
Modern anarchists, such as proponents of participatory economics, advocate the use of workers' councils as a means of participatory urban planning as well as decentralised planning of the economy.[13]
Council communism advocates a system of workers' councils (council democracy) to coordinate class struggle. Karl Marx described in The Civil War in France the Paris Commune as a system with indirect elections, in which district assemblies select delegates, recallable at any time, to a higher assembly.[14] Council communists, such as the Dutch-German current of left communists, believe that by their nature workers' councils do away with the bureaucratic form of the state and instead give power directly to workers. Council communists view this organization of a revolutionary government as an anti-authoritarian approach to the dictatorship of the proletariat.[15]
The council communists in the Communist Workers' Party of Germany advocated organizing "on the basis of places of work, not trades, and to establish a National Federation of Works Committees."[16] The Central Workers' Council of Greater Budapest occupied this role in the Hungarian Revolution of 1956 between late October and early January 1957, where it grew out of local factory committees.[17]
Rosa Luxemburg was a vocal proponent of radical socialist democracy and advocated for the revolution to be led by workers' and soldiers' councils.[18] She was also openly critical of the actions of the Bolsheviks in the Russian Revolution, arguing that their approach was anti-democratic and totalitarian.[19]
The Marxist revolutionary Vladimir Lenin proposed that the dictatorship of the proletariat should take the form of a soviet republic with democratic centralism.[20] He proposed that the socialist revolution should be led by a revolutionary party, which should seize state power and establish a socialist state based on soviet democracy. Lenin's model for the dictatorship of the proletariat is based on that of the Paris Commune; it is meant to fulfil the task of suppressing the bourgeoisie and other counter-revolutionary forces, and to "wither away" once the counter-revolution is fully suppressed and the state institutions begin to "lose their political character".[6]
Some academics and socialists have disputed the commitment Vladimir Lenin and Leon Trotsky had toward workers' councils after the Russian Revolution of 1917, noting that workers' councils "were never meant to become a permanent political form of self-governance" and were therefore sidelined by the Communist Party.[5][21][22][23] Some socialists have cited this as an example of the Bolsheviks' betrayal of socialist principles,[5] while others have defended it as necessary, given the social conditions at the time, to maintain and advance the Revolution.[24]
At several times, both in late modern and in recent history, socialists and communists have organized workers' councils during periods of unrest. Examples include:
The Paris Commune of 1871 (La Commune de Paris) was a revolutionary government that seized control of the city of Paris and governed it for two months based on socialist principles, through the combined efforts of social democrats, anarchists, Blanquists, and Jacobins.[25] The commune was headed by the Commune Council (French: conseil de la Commune),[26] which was composed of delegates each subject to immediate recall by their electors. The events of this period have been a significant influence on the development of Marxist and anarchist political theory and revolutionary praxis. Friedrich Engels named the Paris Commune as the first example of a dictatorship of the proletariat.[27]
The Russian Revolution of 1905 saw the spontaneous emergence of workers' councils (known locally as soviets) in the Russian Empire.[29] Trotsky assumed a central role in the 1905 revolution[30][31] and served as Chairman of the Petersburg Soviet of Workers' Delegates, in which capacity he wrote several proclamations on behalf of workers urging improved economic conditions, political rights, and the use of strike action against the Tsarist regime.[32]
Councils such as the Petrograd Soviet were formed by striking workers to coordinate the revolution, exercising political power in the absence of the Tsar's governance.[35]
Despite Lenin's declarations that "the workers must demand the immediate establishment of genuine control, to be exercised by the workers themselves", on May 30 the Menshevik minister of labor, Matvey Skobelev, pledged not to give control of industry to the workers but instead to the state: "The transfer of enterprises into the hands of the people will not at the present time assist the revolution [...] The regulation and control of industry is not a matter for a particular class. It is a task for the state. Upon the individual class, especially the working class, lies the responsibility for helping the state in its organizational work."[36][37] Council communists criticize the Bolsheviks for superseding the soviet democracy formed by the councils and creating a bureaucratic system of state capitalism.
During the Russian Revolution, the Revolutionary Insurgent Army of Ukraine led by Nestor Makhno established a stateless territory in eastern Ukraine on the principles of anarchist communism. The Makhnovists established a system of free soviets (vilni rady), which allowed workers, peasants, and militants to self-govern their communities through workers' self-management and to send delegates to the Regional Congress of Peasants, Workers and Insurgents.[38]
During the Irish War of Independence and Irish Civil War, a number of workers' councils were set up for varying lengths of time between 1919 and 1923. See: Irish soviets.[39]
The Spanish Revolution of 1936 saw the creation of anarchist communes across much of Spain. These communes operated under the principle "from each according to his ability, to each according to his needs". Decision-making in the communes was conducted through workers' councils (comités trabajadores).[40]
Algeria, in the aftermath of the Algerian War, saw the widespread practice of workers' self-management. This was subsequently suppressed by conservative forces in the country.[33][41]
During the May 1968 events in France, "[t]he largest general strike that ever stopped the economy of an advanced industrial country, and the first wildcat general strike in history",[42] the Situationists, opposing the unions and the French Communist Party, which were starting to side with the de Gaulle government to contain the revolt, called for the formation of workers' councils (comités d'entreprise) to take control of the cities, expelling union leaders and left-wing bureaucrats, in order to keep power in the hands of the workers through direct democracy.[42]
|
https://en.wikipedia.org/wiki/Workers%27_council
|
Biodiversity is the variability of life on Earth. It can be measured at various levels, for example genetic variability, species diversity, ecosystem diversity and phylogenetic diversity.[1] Diversity is not distributed evenly on Earth. It is greater in the tropics as a result of the warm climate and high primary productivity in the region near the equator. Tropical forest ecosystems cover less than one-fifth of Earth's terrestrial area and contain about 50% of the world's species.[2] There are latitudinal gradients in species diversity for both marine and terrestrial taxa.[3]
Since life began on Earth, six major mass extinctions and several minor events have led to large and sudden drops in biodiversity. The Phanerozoic aeon (the last 540 million years) marked a rapid growth in biodiversity via the Cambrian explosion, a period in which the majority of multicellular phyla first appeared. The next 400 million years included repeated, massive biodiversity losses, events that have been classified as mass extinction events. In the Carboniferous, rainforest collapse may have led to a great loss of plant and animal life. The Permian–Triassic extinction event, 251 million years ago, was the worst; vertebrate recovery took 30 million years.
Human activities have led to an ongoing biodiversity loss and an accompanying loss of genetic diversity. This process is often referred to as the Holocene extinction, or sixth mass extinction. For example, it was estimated in 2007 that up to 30% of all species will be extinct by 2050.[4] Destroying habitats for farming is a key reason why biodiversity is decreasing today. Climate change also plays a role,[5][6] as can be seen, for example, in the effects of climate change on biomes. This anthropogenic extinction may have started toward the end of the Pleistocene, as some studies suggest that the megafaunal extinction event around the end of the last ice age partly resulted from overhunting.[7]
Biologists most often define biodiversity as the "totality of genes, species and ecosystems of a region".[8][9] An advantage of this definition is that it presents a unified view of the traditional types of biological variety previously identified:
Biodiversity is most commonly used to replace the more clearly defined and long-established terms species diversity and species richness.[13] However, there is no concrete definition of biodiversity, as its definition continues to be reimagined and redefined. To give two examples, the Food and Agriculture Organization of the United Nations (FAO) defined biodiversity in 2019 as "the variability that exists among living organisms (both within and between species) and the ecosystems of which they are part", and the World Health Organization updated its website's definition of biodiversity to "variability among living organisms from all sources".[14] Both definitions, although broad, convey the current understanding of what is meant by the term.
According to estimates by Mora et al. (2011), there are approximately 8.7 million terrestrial species and 2.2 million oceanic species. The authors note that these estimates are strongest for eukaryotic organisms and likely represent the lower bound of prokaryote diversity.[15]Other estimates include:
Since the rate of extinction has increased, many extant species may become extinct before they are described.[27] Not surprisingly, in the Animalia the most studied groups are birds and mammals, whereas fishes and arthropods are the least studied animal groups.[28]
During the last century, decreases in biodiversity have been increasingly observed. It was estimated in 2007 that up to 30% of all species will be extinct by 2050.[4] Of these, about one-eighth of known plant species are threatened with extinction.[32] Estimates reach as high as 140,000 species lost per year (based on species–area theory).[33] This figure indicates unsustainable ecological practices, because few species emerge each year.[34] The rate of species loss is greater now than at any time in human history, with extinctions occurring at rates hundreds of times higher than background extinction rates,[32][35][36] and it is expected to grow further in the coming years.[36][37][38] As of 2012, some studies suggest that 25% of all mammal species could be extinct within 20 years.[39]
In absolute terms, the planet has lost 58% of its biodiversity since 1970, according to a 2016 study by the World Wildlife Fund.[40] The Living Planet Report 2014 claims that "the number of mammals, birds, reptiles, amphibians, and fish across the globe is, on average, about half the size it was 40 years ago". Broken down, terrestrial wildlife declined by 39%, marine wildlife by 39%, and freshwater wildlife by 76%. Biodiversity took the biggest hit in Latin America, plummeting 83 percent. High-income countries showed a 10% increase in biodiversity, which was canceled out by losses in low-income countries. This is despite the fact that high-income countries use five times the ecological resources of low-income countries, which was explained as the result of a process whereby wealthy nations outsource resource depletion to poorer nations, which suffer the greatest ecosystem losses.[41]
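The species–area figure cited above rests on the classical species–area relationship. As a minimal sketch of the underlying arithmetic (the exponent value z = 0.25 below is a commonly assumed illustrative figure, not taken from the cited study):

```latex
% Species–area relationship: S species supported by habitat area A,
% with c a constant and z an empirical exponent (often ~0.15–0.35).
S = c A^{z}
% Fraction of species persisting when habitat shrinks from A_0 to A_1:
\frac{S_{1}}{S_{0}} = \left(\frac{A_{1}}{A_{0}}\right)^{z}
% Illustrative example: losing 90\% of habitat with z = 0.25 gives
% (0.1)^{0.25} \approx 0.56, i.e. roughly 44\% of species eventually lost.
```

Applying such persistence fractions to estimated habitat-loss rates and total species counts is how per-year extinction estimates of this kind are typically derived.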
A 2017 study published in PLOS One found that the biomass of insect life in Germany had declined by three-quarters over the preceding 25 years.[42] Dave Goulson of Sussex University stated that the study suggested that humans "appear to be making vast tracts of land inhospitable to most forms of life, and are currently on course for ecological Armageddon. If we lose the insects then everything is going to collapse."[43]
In 2020 the World Wildlife Fund published a report saying that "biodiversity is being destroyed at a rate unprecedented in human history". The report claims that 68% of the populations of the examined species were destroyed between 1970 and 2016.[44]
As of 2023, of 70,000 monitored species, around 48% are experiencing population declines from human activity, whereas only 3% have increasing populations.[45][46][47]
Rates of decline in biodiversity in the current sixth mass extinction match or exceed rates of loss in the five previous mass extinction events in the fossil record.[57] Biodiversity loss is in fact "one of the most critical manifestations of the Anthropocene" (since around the 1950s); the continued decline of biodiversity constitutes "an unprecedented threat" to the continued existence of human civilization.[58] The reduction is caused primarily by human impacts, particularly habitat destruction.
Since the Stone Age, species loss has accelerated above the average basal rate, driven by human activity. Estimates put species losses at a rate 100–10,000 times as fast as is typical in the fossil record.[59]
Loss of biodiversity results in the loss of natural capital that supplies ecosystem goods and services. Species today are being wiped out at a rate 100 to 1,000 times higher than baseline, and the rate of extinctions is increasing. This process destroys the resilience and adaptability of life on Earth.[60]
In 2006, many species were formally classified as rare, endangered or threatened; moreover, scientists have estimated that millions more species are at risk that have not been formally recognized. About 40 percent of the 40,177 species assessed using the IUCN Red List criteria are now listed as threatened with extinction—a total of 16,119.[61] As of late 2022, 9,251 species were listed by the IUCN as critically endangered.[62]
Numerous scientists and the IPBES Global Assessment Report on Biodiversity and Ecosystem Services assert that human population growth and overconsumption are the primary factors in this decline.[63][64][65][66][67] However, other scientists have criticized this finding and say that loss of habitat caused by "the growth of commodities for export" is the main driver.[68] A 2025 study found that human activities are responsible for biodiversity loss across all species and ecosystems.[69]
Some studies have, however, pointed out that habitat destruction for the expansion of agriculture and the overexploitation of wildlife are the more significant drivers of contemporary biodiversity loss, not climate change.[5][6]
Biodiversity is not evenly distributed; rather, it varies greatly across the globe as well as within regions and seasons. Among other factors, the diversity of all living things (biota) depends on temperature, precipitation, altitude, soils, geography and interactions with other species.[70] The study of the spatial distribution of organisms, species and ecosystems is the science of biogeography.[71][72]
Diversity consistently measures higher in the tropics and in other localized regions such as the Cape Floristic Region, and lower in polar regions generally. Rain forests that have had wet climates for a long time, such as Yasuní National Park in Ecuador, have particularly high biodiversity.[73][74]
Local biodiversity directly affects daily life, shaping the availability of fresh water, food choices, and fuel sources for humans. Regional biodiversity includes habitats and ecosystems that synergize and either overlap or differ on a regional scale. National biodiversity determines the ability of a country to thrive according to its habitats and ecosystems on a national scale; within a country, endangered species are supported first at the national level and then internationally. Ecotourism may be used to support the economy, encouraging tourists to continue to visit and support the species and ecosystems they visit while enjoying the amenities provided. International biodiversity affects global livelihoods, food systems, and health. Pollution, overconsumption, and climate change can devastate international biodiversity, and nature-based solutions are a critical tool for a global resolution. Many species are in danger of becoming extinct and need world leaders to be proactive with the Kunming-Montreal Global Biodiversity Framework.
Terrestrial biodiversity is thought to be up to 25 times greater than ocean biodiversity.[75] Forests harbour most of Earth's terrestrial biodiversity. The conservation of the world's biodiversity is thus utterly dependent on the way in which we interact with and use the world's forests.[76] A new method used in 2011 put the total number of species on Earth at 8.7 million, of which 2.1 million were estimated to live in the ocean.[77] However, this estimate seems to under-represent the diversity of microorganisms.[78] Forests provide habitats for 80 percent of amphibian species, 75 percent of bird species and 68 percent of mammal species. About 60 percent of all vascular plants are found in tropical forests. Mangroves provide breeding grounds and nurseries for numerous species of fish and shellfish and help trap sediments that might otherwise adversely affect seagrass beds and coral reefs, which are habitats for many more marine species.[76] Forests span around 4 billion acres (nearly a third of the Earth's land mass) and are home to approximately 80% of the world's biodiversity. About 1 billion hectares are covered by primary forests. Over 700 million hectares of the world's woods are officially protected.[79][80]
The biodiversity of forests varies considerably according to factors such as forest type, geography, climate and soils, in addition to human use.[76] Most forest habitats in temperate regions support relatively few animal and plant species, and species that tend to have large geographical distributions, while the montane forests of Africa, South America and Southeast Asia and the lowland forests of Australia, coastal Brazil, the Caribbean islands, Central America and insular Southeast Asia have many species with small geographical distributions.[76] Areas with dense human populations and intense agricultural land use, such as Europe, parts of Bangladesh, China, India and North America, are less intact in terms of their biodiversity. Northern Africa, southern Australia, coastal Brazil, Madagascar and South Africa are also identified as areas with striking losses in biodiversity intactness.[76] European forests in EU and non-EU nations comprise more than 30% of Europe's land mass (around 227 million hectares), representing an almost 10% growth since 1990.[81][82]
Generally, there is an increase in biodiversity from the poles to the tropics. Thus localities at lower latitudes have more species than localities at higher latitudes. This is often referred to as the latitudinal gradient in species diversity. Several ecological factors may contribute to the gradient, but the ultimate factor behind many of them is the greater mean temperature at the equator compared to that at the poles.[83]
Even though terrestrial biodiversity declines from the equator to the poles,[3] some studies claim that this characteristic is unverified in aquatic ecosystems, especially in marine ecosystems.[84] The latitudinal distribution of parasites does not appear to follow this rule.[71] Also, in terrestrial ecosystems soil bacterial diversity has been shown to be highest in temperate climatic zones,[85] which has been attributed to carbon inputs and habitat connectivity.[86]
In 2016, an alternative hypothesis ("fractal biodiversity") was proposed to explain the biodiversity latitudinal gradient.[87] In this study, the species pool size and the fractal nature of ecosystems were combined to clarify some general patterns of this gradient. This hypothesis considers temperature, moisture, and net primary production (NPP) as the main variables of an ecosystem niche and as the axes of the ecological hypervolume. In this way, it is possible to build fractal hypervolumes, whose fractal dimension rises to three moving towards the equator.[88]
A biodiversity hotspot is a region with a high level of endemic species that has experienced great habitat loss.[89] The term hotspot was introduced in 1988 by Norman Myers.[90][91][92][93] While hotspots are spread all over the world, the majority are forest areas and most are located in the tropics.[94]
Brazil's Atlantic Forest is considered one such hotspot, containing roughly 20,000 plant species, 1,350 vertebrates and millions of insects, about half of which occur nowhere else.[95][96] The island of Madagascar and India are also particularly notable. Colombia is characterized by high biodiversity, with the highest rate of species per unit area worldwide and the largest number of endemics (species that are found naturally nowhere else) of any country. About 10% of the species of the Earth can be found in Colombia, including over 1,900 species of bird, more than in Europe and North America combined; Colombia has 10% of the world's mammal species, 14% of its amphibian species and 18% of its bird species.[97] Madagascar's dry deciduous forests and lowland rainforests possess a high ratio of endemism.[98][99] Since the island separated from mainland Africa 66 million years ago, many species and ecosystems have evolved independently.[100] Indonesia's 17,000 islands cover 735,355 square miles (1,904,560 km2) and contain 10% of the world's flowering plants, 12% of its mammals and 17% of its reptiles, amphibians and birds—along with nearly 240 million people.[101] Many regions of high biodiversity and/or endemism arise from specialized habitats which require unusual adaptations, for example alpine environments in high mountains, or Northern European peat bogs.[99]
Accurately measuring differences in biodiversity can be difficult. Selection bias amongst researchers may contribute to biased empirical research for modern estimates of biodiversity. In 1768, Rev. Gilbert White succinctly observed of his Selborne, Hampshire: "all nature is so full, that that district produces the most variety which is the most examined."[102]
Biodiversity is the result of 3.5 billion years of evolution.[103] The origin of life has not been established by science; however, some evidence suggests that life may already have been well established only a few hundred million years after the formation of the Earth. Until approximately 2.5 billion years ago, all life consisted of microorganisms – archaea, bacteria, and single-celled protozoans and protists.[78]
Biodiversity grew fast during the Phanerozoic (the last 540 million years), especially during the so-called Cambrian explosion—a period during which nearly every phylum of multicellular organisms first appeared.[105] However, recent studies suggest that this diversification started earlier, at least in the Ediacaran, and that it continued in the Ordovician.[106] Over the next 400 million years or so, invertebrate diversity showed little overall trend, while vertebrate diversity shows an overall exponential trend.[10] This dramatic rise in diversity was marked by periodic, massive losses of diversity classified as mass extinction events.[10] A significant loss occurred among anamniotic limbed vertebrates when rainforests collapsed in the Carboniferous,[107] but amniotes seem to have been little affected by this event; their diversification slowed down later, around the Asselian/Sakmarian boundary, in the early Cisuralian (Early Permian), about 293 Ma ago.[108] The worst was the Permian–Triassic extinction event, 251 million years ago.[109][110] Vertebrates took 30 million years to recover from this event.[111]
The most recent major mass extinction event, the Cretaceous–Paleogene extinction event, occurred 66 million years ago. This period has attracted more attention than others because it resulted in the extinction of the non-avian dinosaurs, which were represented by many lineages at the end of the Maastrichtian, just before that extinction event. However, many other taxa were affected by this crisis, including marine taxa such as ammonites, which also became extinct around that time.[112]
The biodiversity of the past is called paleobiodiversity. The fossil record suggests that the last few million years featured the greatest biodiversity in history.[10] However, not all scientists support this view, since there is uncertainty as to how strongly the fossil record is biased by the greater availability and preservation of recent geologic sections.[113] Some scientists believe that, corrected for sampling artifacts, modern biodiversity may not be much different from biodiversity 300 million years ago,[105] whereas others consider the fossil record reasonably reflective of the diversification of life.[114][10] Estimates of the present global macroscopic species diversity vary from 2 million to 100 million, with a best estimate of somewhere near 9 million,[77] the vast majority of them arthropods.[115] Diversity appears to increase continually in the absence of natural selection.[116]
The existence of a global carrying capacity, limiting the amount of life that can live at once, is debated, as is the question of whether such a limit would also cap the number of species. While records of life in the sea show a logistic pattern of growth, life on land (insects, plants and tetrapods) shows an exponential rise in diversity.[10] As one author states, "Tetrapods have not yet invaded 64 percent of potentially habitable modes and it could be that without human influence the ecological and taxonomic diversity of tetrapods would continue to increase exponentially until most or all of the available eco-space is filled."[10]
It also appears that diversity continues to increase over time, especially after mass extinctions.[117]
On the other hand, changes through the Phanerozoic correlate much better with the hyperbolic model (widely used in population biology, demography and macrosociology, as well as fossil biodiversity) than with exponential and logistic models. The latter models imply that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants) and/or a negative feedback arising from resource limitation. The hyperbolic model implies a second-order positive feedback.[118] Differences in the strength of the second-order feedback due to different intensities of interspecific competition might explain the faster rediversification of ammonoids in comparison to bivalves after the end-Permian extinction.[118] The hyperbolic pattern of world population growth arises from a second-order positive feedback between the population size and the rate of technological growth.[119] The hyperbolic character of biodiversity growth can similarly be accounted for by a feedback between diversity and community structure complexity.[119][120] The similarity between the curves of biodiversity and human population probably comes from the fact that both are derived from the interference of the hyperbolic trend with cyclical and stochastic dynamics.[119][120]
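The qualitative difference between the three growth models can be sketched numerically. The snippet below is a minimal illustration; all parameter values are invented for the sketch and are not fitted to fossil or demographic data.

```python
import math

def exponential(t, n0=1.0, r=0.05):
    # First-order positive feedback: dN/dt = r*N
    return n0 * math.exp(r * t)

def logistic(t, n0=1.0, r=0.05, k=100.0):
    # Positive feedback capped by resource limitation: dN/dt = r*N*(1 - N/K)
    return k / (1 + (k / n0 - 1) * math.exp(-r * t))

def hyperbolic(t, n0=1.0, t_c=200.0):
    # Second-order positive feedback: dN/dt = N**2 / (n0 * t_c).
    # The solution N(t) = n0*t_c / (t_c - t) diverges as t approaches t_c.
    return n0 * t_c / (t_c - t)

# Logistic growth saturates at K, exponential growth is finite at every t,
# and hyperbolic growth reaches a singularity in finite time.
for t in (0, 50, 100, 150):
    print(t, round(exponential(t), 1), round(logistic(t), 1), round(hyperbolic(t), 1))
```

The contrast in feedback order is the point: in the logistic and exponential models the growth rate scales with N, while in the hyperbolic model it scales with N squared, which is what produces the finite-time blow-up.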
Most biologists agree, however, that the period since human emergence is part of a new mass extinction, named the Holocene extinction event, caused primarily by the impact humans are having on the environment.[121] It has been argued that the present rate of extinction is sufficient to eliminate most species on Earth within 100 years.[122]
New species are regularly discovered (on average between 5,000 and 10,000 new species each year, most of them insects), and many, though discovered, are not yet classified (estimates are that nearly 90% of all arthropods are not yet classified).[115] Most terrestrial diversity is found in tropical forests and, in general, the land has more species than the ocean; some 8.7 million species may exist on Earth, of which some 2.1 million live in the ocean.[77]
It is estimated that 5 to 50 billion species have existed on the planet.[123] Assuming that there may be a maximum of about 50 million species currently alive,[124] it stands to reason that greater than 99% of the planet's species went extinct prior to the evolution of humans.[125] Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86% have not yet been described.[126] However, a May 2016 scientific report estimates that 1 trillion species are currently on Earth, with only one-thousandth of one percent described.[127] The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10³⁷ and weighs 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as four trillion tons of carbon.[128] In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.[129]
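The "greater than 99% extinct" figure follows from simple arithmetic on the numbers cited above; the short calculation below just restates those cited estimates.

```python
# Arithmetic behind the ">99% extinct" claim, using the figures cited above.
ever_existed = (5e9, 50e9)   # cited range of species that have ever existed
alive_now = 50e6             # cited upper bound on species alive today

for total in ever_existed:
    extinct_fraction = 1 - alive_now / total
    print(f"{extinct_fraction:.1%}")  # prints 99.0% then 99.9%
```

Even the most conservative pairing of the cited estimates (5 billion ever existed, 50 million alive) yields 99% extinct; the other end of the range gives 99.9%.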
The age of Earth is about 4.54 billion years.[130][131][132] The earliest undisputed evidence of life dates at least from 3.7 billion years ago, during the Eoarchean era, after a geological crust started to solidify following the earlier molten Hadean eon.[133][134][135] There are microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old meta-sedimentary rocks discovered in Western Greenland.[136][137] More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. According to one of the researchers, "If life arose relatively quickly on Earth...then it could be common in the universe."[138]
There have been many claims about biodiversity's effect on ecosystem services, especially provisioning and regulating services.[139] Some of those claims have been validated, some are incorrect and some lack enough evidence to draw definitive conclusions.[139]
Ecosystem services have been grouped into three types: provisioning, regulating and cultural services.[139]
Experiments with controlled environments have shown that humans cannot easily build ecosystems to support human needs;[141] for example, insect pollination cannot be mimicked, though there have been attempts to create artificial pollinators using unmanned aerial vehicles.[142] The economic activity of pollination alone represented between $2.1 and $14.6 billion in 2003.[143] Other sources have reported somewhat conflicting results: in 1997, Robert Costanza and his colleagues estimated the global value of ecosystem services (not captured in traditional markets) at an average of $33 trillion annually.[144]
With regard to provisioning services, greater species diversity has the following benefits:
With regard to regulating services, greater species diversity has the following benefits:
Greater species diversity
Agricultural diversity can be divided into two categories. Intraspecific diversity includes the genetic variation within a single species, such as the potato (Solanum tuberosum), which comprises many different forms and types (e.g., in the U.S., russet potatoes, new potatoes and purple potatoes are all different, but all part of the same species, S. tuberosum). The other category, interspecific diversity, refers to the number and types of different species.
Agricultural diversity can also be divided by whether it is 'planned' diversity or 'associated' diversity. This is a functional classification that we impose and not an intrinsic feature of life or diversity. Planned diversity includes the crops which a farmer has encouraged, planted or raised (e.g. crops, covers, symbionts, and livestock, among others), which can be contrasted with the associated diversity that arrives among the crops, uninvited (e.g. herbivores, weed species and pathogens, among others).[153]
Associated biodiversity can be damaging or beneficial. Beneficial associated biodiversity includes, for instance, wild pollinators such as wild bees and syrphid flies that pollinate crops,[154] as well as natural enemies and antagonists of pests and pathogens. Beneficial associated biodiversity occurs abundantly in crop fields and provides multiple ecosystem services such as pest control, nutrient cycling and pollination that support crop production.[155]
Although about 80 percent of humans' food supply comes from just 20 kinds of plants,[156] humans use at least 40,000 species.[157] Earth's surviving biodiversity provides resources for increasing the range of food and other products suitable for human use, although the present extinction rate shrinks that potential.[122]
Biodiversity's relevance to human health is becoming an international political issue, as scientific evidence builds on the global health implications of biodiversity loss.[158][159][160] This issue is closely linked with the issue of climate change,[161] as many of the anticipated health risks of climate change are associated with changes in biodiversity (e.g. changes in populations and distribution of disease vectors, scarcity of fresh water, impacts on agricultural biodiversity and food resources, etc.). This is because the species most likely to disappear are those that buffer against infectious disease transmission, while surviving species tend to be the ones that increase disease transmission, such as that of West Nile virus, Lyme disease and hantavirus, according to a study co-authored by Felicia Keesing, an ecologist at Bard College, and Drew Harvell, associate director for Environment of the Atkinson Center for a Sustainable Future (ACSF) at Cornell University.[162]
Some of the health issues influenced by biodiversity include dietary health and nutrition security, infectious disease, medical science and medicinal resources, and social and psychological health.[163] Biodiversity is also known to have an important role in reducing disaster risk, including that from rising sea levels. For example, wetland ecosystems along coastal communities serve as excellent water filtration and storage systems, and ultimately create a buffer region between the ocean and mainland neighborhoods that prevents water from reaching these communities under climate change pressures or storm surges. Diverse species and organisms around the world offer similar protective services that support human survival.[164]
Biodiversity provides critical support for drug discovery and the availability of medicinal resources.[165][166] A significant proportion of drugs are derived, directly or indirectly, from biological sources: at least 50% of the pharmaceutical compounds on the US market are derived from plants, animals and microorganisms, while about 80% of the world population depends on medicines from nature (used in either modern or traditional medical practice) for primary healthcare.[159] Only a tiny fraction of wild species has been investigated for medical potential.
Marine ecosystems are particularly important, especially their chemical and physical properties, which have paved the way for numerous pharmaceutical achievements; the immense diversity of marine organisms has led to scientific discoveries, including medical treatments for cancer, bacterial and viral infections, AIDS, etc.[167] This process of bioprospecting can increase biodiversity loss, as well as violating the laws of the communities and states from which the resources are taken.[168][169][170]
According to the Boston Consulting Group, in 2021, the economic value that biodiversity provides society comes down to four definable terms: regulation, culture, habitat, and provisioning. In short, biodiversity helps maintain habitats and species functions that provide considerable amounts of resources benefiting the economy.[171]
Biodiversity's economic resources are worth around $150 trillion annually, roughly twice the world's GDP. The loss of biodiversity is already harming the world's GDP, at an estimated cost of $5 trillion annually.[171]
Business supply chains rely heavily on ecosystems remaining relatively well maintained and nurtured. A disruption to these supply chains would negatively affect many businesses, ultimately costing them more than they gain.[172]
Philosophically it could be argued that biodiversity has intrinsic aesthetic and spiritual value to mankind in and of itself. This idea can be used as a counterweight to the notion that tropical forests and other ecological realms are only worthy of conservation because of the services they provide.[173]
Biodiversity also affords many non-material benefits including spiritual and aesthetic values, knowledge systems and education.[59]
Less than 1% of all species that have been described have been studied beyond noting their existence.[180] The vast majority of Earth's species are microbial, yet contemporary biodiversity research remains "firmly fixated on the visible [macroscopic] world".[181] For example, microbial life is metabolically and environmentally more diverse than multicellular life (see e.g., extremophile). "On the tree of life, based on analyses of small-subunit ribosomal RNA, visible life consists of barely noticeable twigs. The inverse relationship of size and population recurs higher on the evolutionary ladder—to a first approximation, all multicellular species on Earth are insects".[182] Insect extinction rates are high—supporting the Holocene extinction hypothesis.[183][55]
Biodiversity naturally varies due to seasonal shifts. Spring's arrival enhances biodiversity as numerous species breed and feed, while winter's onset temporarily reduces it as some insects perish and migrating animals leave. Additionally, the seasonal fluctuation in plant and invertebrate populations influences biodiversity.[184]
Barriers such as large rivers, seas, oceans, mountains and deserts encourage diversity by enabling independent evolution on either side of the barrier, via the process of allopatric speciation. The term invasive species is applied to species that breach the natural barriers that would normally keep them constrained. Without barriers, such species occupy new territory, often supplanting native species by occupying their niches, or by using resources that would normally sustain native species.
Species are increasingly being moved by humans (on purpose and accidentally). Some studies say that diverse ecosystems are more resilient and resist invasive plants and animals.[185]Many studies cite effects of invasive species on natives,[186]but not extinctions.
Invasive species seem to increase local (alpha) diversity, which decreases the turnover of diversity (beta diversity). Overall gamma diversity may be lowered because species are going extinct because of other causes,[187] but even some of the most insidious invaders (e.g. Dutch elm disease, emerald ash borer and chestnut blight in North America) have not caused their host species to become extinct. Extirpation, population decline and homogenization of regional biodiversity are much more common. Human activities have frequently been the cause of invasive species circumventing their barriers,[188] by introducing them for food and other purposes. Human activities therefore allow species to migrate to new areas (and thus become invasive) on time scales much shorter than those historically required for a species to extend its range.
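The relationship between alpha, beta and gamma diversity described above can be made concrete with Whittaker's multiplicative formulation (beta = gamma / alpha). The two-site species lists below are invented for illustration.

```python
# Invented two-site example of alpha, beta and gamma diversity.
sites = {
    "site_a": {"oak", "maple", "fern"},
    "site_b": {"oak", "pine", "moss"},
}

def diversity(sites):
    alpha = sum(len(s) for s in sites.values()) / len(sites)  # mean local richness
    gamma = len(set().union(*sites.values()))                 # regional richness
    beta = gamma / alpha                                      # between-site turnover
    return alpha, gamma, beta

before = diversity(sites)   # (3.0, 5, 5/3)

# The same invader establishing at every site raises local (alpha) richness
# but lowers beta diversity: the sites become more similar (homogenization).
for species in sites.values():
    species.add("kudzu")
after = diversity(sites)    # (4.0, 6, 1.5)

print(before, after)
```

This is exactly the pattern the text describes: a widespread invader adds a species to every local list, so alpha rises, while the shrinking difference between sites drags beta down.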
At present, several countries have already imported so many exotic species, particularly agricultural and ornamental plants, that their indigenous fauna and flora may be outnumbered. For example, the introduction of kudzu from Southeast Asia to Canada and the United States has threatened biodiversity in certain areas.[189] Another example is pines, which have invaded forests, shrublands and grasslands in the southern hemisphere.[190]
Endemic species can be threatened with extinction[191] through the process of genetic pollution, i.e. uncontrolled hybridization, introgression and genetic swamping. Genetic pollution leads to homogenization or replacement of local genomes as a result of either a numerical and/or fitness advantage of an introduced species.[192]
Hybridization and introgression are side-effects of introduction and invasion. These phenomena can be especially detrimental to rare species that come into contact with more abundant ones. The abundant species can interbreed with the rare species, swamping its gene pool. This problem is not always apparent from morphological (outward appearance) observations alone. Some degree of gene flow is normal adaptation, and not all gene and genotype constellations can be preserved. However, hybridization with or without introgression may nevertheless threaten a rare species' existence.[193][194]
Conservation biology matured in the mid-20th century as ecologists, naturalists and other scientists began to research and address issues pertaining to global biodiversity declines.[196][197][198]
The conservation ethic advocates management of natural resources for the purpose of sustaining biodiversity in species, ecosystems, the evolutionary process, and human culture and society.[51][196][198][199][200]
Conservation biology is reforming around strategic plans to protect biodiversity.[196][201][202][203] Preserving global biodiversity is a priority in strategic conservation plans that are designed to engage public policy and concerns affecting local, regional and global scales of communities, ecosystems and cultures.[204] Action plans identify ways of sustaining human well-being, employing natural capital, macroeconomic policies including economic incentives, and ecosystem services.[205][206]
In EU Directive 1999/22/EC, zoos are described as having a role in the preservation of the biodiversity of wild animals by conducting research or participating in breeding programs.[207]
Removal of exotic species will allow the species that they have negatively impacted to recover their ecological niches. Exotic species that have become pests can be identified taxonomically (e.g., with the Digital Automated Identification SYstem (DAISY), using the barcode of life).[208][209] Removal is practical only given large groups of individuals, due to the economic cost.
As sustainable populations of the remaining native species in an area become assured, "missing" species that are candidates for reintroduction can be identified using databases such as the Encyclopedia of Life and the Global Biodiversity Information Facility.
Protected areas, including forest reserves and biosphere reserves, serve many functions, including affording protection to wild animals and their habitat.[213] Protected areas have been set up all over the world with the specific aim of protecting and conserving plants and animals. Some scientists have called on the global community to designate 30 percent of the planet as protected areas by 2030, and 50 percent by 2050, in order to mitigate biodiversity loss from anthropogenic causes.[214][215] The target of protecting 30% of the area of the planet by the year 2030 (30 by 30) was adopted by almost 200 countries at the 2022 United Nations Biodiversity Conference. At the moment of adoption (December 2022), 17% of land territory and 10% of ocean territory were protected.[216] In a study published 4 September 2020 in Science Advances, researchers mapped out regions that can help meet critical conservation and climate goals.[217]
Protected areas safeguard nature and cultural resources and contribute to livelihoods, particularly at the local level. There are over 238,563 designated protected areas worldwide, equivalent to 14.9 percent of the Earth's land surface, varying in their extent, level of protection, and type of management (IUCN, 2018).[218]
The benefits of protected areas extend beyond their immediate environment and time. In addition to conserving nature, protected areas are crucial for securing the long-term delivery of ecosystem services. They provide numerous benefits including the conservation of genetic resources for food and agriculture, the provision of medicine and health benefits, the provision of water, recreation and tourism, and for acting as a buffer against disaster. Increasingly, there is acknowledgement of the wider socioeconomic values of these natural ecosystems and of the ecosystem services they can provide.[219]
A national park is a large natural or near-natural area set aside to protect large-scale ecological processes, which also provides a foundation for environmentally and culturally compatible spiritual, scientific, educational, recreational and visitor opportunities. These areas are selected by governments or private organizations to protect natural biodiversity along with its underlying ecological structure and supporting environmental processes, and to promote education and recreation. The International Union for Conservation of Nature (IUCN) and its World Commission on Protected Areas (WCPA) have defined "national park" as their Category II type of protected area.[220]
Wildlife sanctuaries are areas that either shelter animals unable to live in the wild on their own, or serve as temporary rehabilitation centers where wildlife can recover their overall health and wellbeing.[221]
Both of these serve as places in which biodiversity can be preserved rather than harmed. According to an article published on the National Park Service website, national parks aim their resources at maintaining animal and habitat integrity through conservation and preservation of their ecosystems. Along with educating the general public about wildlife, increasing biodiversity is one of the many goals national parks focus on.[222]
Forest protected areas are a subset of all protected areas in which a significant portion of the area is forest.[76]This may be the whole or only a part of the protected area.[76]Globally, 18 percent of the world's forest area, or more than 700 million hectares, fall within legally established protected areas such as national parks, conservation areas and game reserves.[76]
There are an estimated 726 million hectares of forest in protected areas worldwide. Of the six major world regions, South America has the highest share of forests in protected areas, at 31 percent.[223] These forests play a vital role in harboring more than 45,000 floral and 81,000 faunal species, of which 5,150 floral and 1,837 faunal species are endemic.[224] In addition, there are 60,065 different tree species in the world.[225] Plant and animal species confined to a specific geographical area are called endemic species.[226]
In forest reserves, rights to activities like hunting and grazing are sometimes given to communities living on the fringes of the forest, who sustain their livelihood partially or wholly from forest resources or products.
Approximately 50 million hectares (or 24%) of European forest land is protected for biodiversity and landscape protection. Forests allocated for soil, water, and other ecosystem services encompass around 72 million hectares (32% of European forest area).[227][228]
In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services, the Global Assessment Report on Biodiversity and Ecosystem Services, was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). It stated that "the state of nature has deteriorated at an unprecedented and accelerating rate". To fix the problem, humanity will need a transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management.[229][230]
The concept of nature-positive is playing a role in mainstreaming the goals of the Global Biodiversity Framework (GBF) for biodiversity.[231] The aim of mainstreaming is to embed biodiversity considerations into public and private practice to conserve and sustainably use biodiversity on global and local levels.[232] The concept of nature-positive refers to the societal goal to halt and reverse biodiversity loss, measured from a baseline of 2020 levels, and to achieve full so-called "nature recovery" by 2050.[233]
Citizen science, also known as public participation in scientific research, has been widely used in environmental sciences and is particularly popular in a biodiversity-related context. It has been used to enable scientists to involve the general public in biodiversity research, thereby enabling the scientists to collect data that they would otherwise not have been able to obtain.[234]
Volunteer observers have made significant contributions to on-the-ground knowledge about biodiversity, and recent improvements in technology have helped increase the flow and quality of occurrences from citizen sources. A 2016 study published in Biological Conservation[235] registers the massive contributions that citizen scientists already make to data mediated by the Global Biodiversity Information Facility (GBIF). Despite some limitations of the dataset-level analysis, it is clear that nearly half of all occurrence records shared through the GBIF network come from datasets with significant volunteer contributions. Recording and sharing observations are enabled by several global-scale platforms, including iNaturalist and eBird.[236][237]
Global agreements, such as the Convention on Biological Diversity, give "sovereign national rights over biological resources" (not property).[238] The agreements commit countries to "conserve biodiversity", "develop resources for sustainability" and "share the benefits" resulting from their use. Biodiverse countries that allow bioprospecting or collection of natural products expect a share of the benefits rather than allowing the individual or institution that discovers/exploits the resource to capture them privately. Bioprospecting can become a type of biopiracy when such principles are not respected.[239]
Sovereignty principles can rely upon what are better known as Access and Benefit Sharing Agreements (ABAs).[240] The Convention on Biological Diversity implies informed consent between the source country and the collector, to establish which resource will be used and for what, and to settle on a fair agreement on benefit sharing.
On 19 December 2022, during the 2022 United Nations Biodiversity Conference, every country on Earth, with the exception of the United States and the Holy See, signed onto the agreement, which includes protecting 30% of land and oceans by 2030 (30 by 30) and 22 other targets intended to reduce biodiversity loss.[216][241][242] The agreement also includes restoring 30% of Earth's degraded ecosystems and increasing funding for biodiversity issues.[243]
In May 2020, the European Union published its Biodiversity Strategy for 2030. The biodiversity strategy is an essential part of the climate change mitigation strategy of the European Union. Of the 25% of the European budget that will go to fighting climate change, a large part will go to restoring biodiversity[203] and nature-based solutions.
The EU Biodiversity Strategy for 2030 includes the following targets:
Approximately half of global GDP depends on nature. In Europe, many parts of the economy that generate trillions of euros per year depend on nature. The benefits of Natura 2000 alone in Europe are €200–300 billion per year.[245]
Biodiversity is taken into account in some political and judicial decisions:
Uniform approval for use of biodiversity as a legal standard has not been achieved, however. Bosselman argues that biodiversity should not be used as a legal standard, claiming that the remaining areas of scientific uncertainty cause unacceptable administrative waste and increase litigation without promoting preservation goals.[248]
India passed the Biological Diversity Act in 2002 for the conservation of biological diversity in India. The Act also provides mechanisms for equitable sharing of benefits from the use of traditional biological resources and knowledge.
https://en.wikipedia.org/wiki/Biodiversity
A bioregion is a geographical area, on land or at sea, defined not by administrative boundaries but by distinct characteristics such as plant and animal species, ecological systems, soils and landforms, human settlements, and topographic features such as watersheds.[1][2][3][4][5][6][7][8] The idea of bioregions was adopted and popularized in the mid-1970s by a school of philosophy called bioregionalism, which includes the concept that human culture, in practice, can influence bioregional definitions due to its effect on non-cultural factors.[9] Bioregions are part of a nested series of ecological scales, generally starting with local watersheds, growing into larger river systems, then Level III or IV ecoregions (or regional ecosystems), bioregions, then biogeographical realms, followed by the continental scale and ultimately the biosphere.[10][11]
Within the life sciences, there are numerous methods used to define the physical limits of a bioregion based on the spatial extent of mapped ecological phenomena—from species distributions and hydrological systems (i.e. watersheds) to topographic features (e.g. landforms) and climate zones (e.g. the Köppen classification). Bioregions also provide an effective framework in the field of environmental history, which seeks to use "river systems, ecozones, or mountain ranges as the basis for understanding the place of human history within a clearly delineated environmental context".[12] A bioregion can also have a distinct cultural identity,[13][8] defined, for example, by Indigenous Peoples whose historical, mythological and biocultural connections to their lands and waters shape an understanding of place and territorial extent.[14] Within the context of bioregionalism, bioregions can be socially constructed by modern-day communities for the purposes of better understanding a place, "with the aim to live in that place sustainably and respectfully."[15]
Bioregions have practical applications in the study of biology, biocultural anthropology, biogeography, biodiversity, bioeconomics, bioregionalism, Bioregional Financing Facilities, bioregional mapping, community health, ecology, environmental history, environmental science, foodsheds, geography, natural resource management, urban ecology, and urban planning.[16][17] References to the term "bioregion" in scholarly literature have grown exponentially since the introduction of the term—from a single research paper in 1971 to approximately 65,000 journal articles and books published to date.[18] Governments and multilateral institutions have utilized bioregions in mapping ecosystem services and tracking progress towards conservation objectives, such as ecosystem representation.[19]
The first confirmed use of the term "bioregion" in academic literature was by E. Jarowski, a marine biologist studying the blue crab populations of Louisiana, in 1971. The author used the term sensu stricto to refer to a "biological region"—the area within which a crab can be provided with all the resources needed throughout its entire life cycle.[20] The term was quickly adopted by other biologists, but eventually took on a broader set of definitions to encompass a range of macro-ecological phenomena.
The term bioregion as it relates to bioregionalism is credited to Allen Van Newkirk, a Canadian poet and biogeographer.[21][22][23] In this field, the idea of the bioregion probably goes back much earlier than published material suggests, having been floated in early small-press zines by Newkirk and in conversation.[24] Newkirk had met Peter Berg (another early scholar of bioregionalism) in San Francisco in 1969 and again in Nova Scotia in 1971, where he shared the idea with Berg. He would go on to found the Institute for Bioregional Research and issue a series of short papers using the term bioregion as early as 1970. Peter Berg, who would go on to found the Planet Drum Foundation and become a leading proponent of bioregions, learned of the term in 1971 while he and Judy Goldhaft were staying with Allen Van Newkirk, before Berg attended the first United Nations Conference on the Human Environment in Stockholm in June 1972.[25][26] The Planet Drum Foundation published its first Bioregional Bundle that year, which also included a definition of a bioregion.[27][28] Helping refine this definition, author Kirkpatrick Sale wrote in 1974 that "a bioregion is a part of the earth's surface whose rough boundaries are determined by natural rather than human dictates, distinguishable from other areas by attributes of flora, fauna, water, climate, soils and landforms, and human settlements and cultures those attributes give rise to."[13]
Several other marine biology papers picked up the term,[29][30][23] and in 1974 the International Union for Conservation of Nature (IUCN) published its first global-scale biogeographical map, entitled "Biotic Provinces of the World".[31] However, in their 1977 article "Reinhabiting California", Raymond Dasmann, a director at the IUCN and founder of the Man and Biosphere project, and Peter Berg pushed back against global bodies that were attempting to use the term bioregion in a strictly ecological sense, which separated humans from the ecosystems they lived in, specifically stating that the Biotic Provinces of the World map was not a map of bioregions.
"Reinhabitation involves developing a bioregional identity, something most North Americans have lost or have never possessed. We define bioregion in a sense different from the biotic provinces of Raymond Dasmann (1973) or the biogeographical province of Miklos Udvardy. The term refers both to geographical terrain and a terrain of consciousness—to a place and the ideas that have developed about how to live in that place. Within a bioregion, the conditions that influence life are similar, and these, in turn, have influenced human occupancy."
This article defined bioregions as distinct from the biogeographical and biotic provinces that ecologists and geographers had been developing, adding a human and cultural lens to the strictly ecological idea.[32][33][34]
In 1975 A. Van Newkirk published a paper entitled "Bioregions: Towards Bioregional Strategy for Human Cultures", in which he advocated for the incorporation of human activity ("occupying populations of the culture-bearing animal") within bioregional definitions.[23]
Bioregion as a term comes from the Greek bios (life) and the French région (region), itself from the Latin regio (territory) and earlier regere (to rule or govern). Etymologically, bioregion means "life territory" or "place-of-life".[35][36]
Bioregions became a foundational concept within the philosophical system called bioregionalism. A key difference between ecoregions and biogeography on the one hand and the term bioregion on the other is that while ecoregions are based on general biophysical and ecosystem data, human settlement and cultural patterns play a key role in how a bioregion is defined.[37][38]A bioregion is defined along watershed and hydrological boundaries, and uses a combination of bioregional layers, beginning with the oldest "hard" lines: geology, topography, tectonics, wind, fracture zones and continental divides; working through the "soft" lines: living systems such as soil, ecosystems, climate, marine life, and flora and fauna; and lastly the "human" lines: human geography, energy, transportation, agriculture, food, music, language, history, Indigenous cultures, and ways of living within the context set by a place and its limits, to determine the final edges and boundaries.[39][40][13]This is summed up well by David McCloskey, author of the Cascadia Bioregion map: "A bioregion may be analyzed on physical, biological, and cultural levels. First, we map the landforms, geology, climate, and hydrology and how these environmental factors work together to create a common template for life in that particular place. Second, we map flora and fauna, especially the characteristic vegetative communities, and link them to their habitats. Third, we look at native peoples, western settlement, and current land-use patterns and problems, in interaction with the first two levels."[41]
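The layered approach described above, moving from "hard" lines through "soft" lines to "human" lines, can be illustrated with a minimal sketch. The layer names and their ordering follow the text; the data structure itself is a hypothetical illustration, not an established GIS schema.

```python
# Illustrative sketch of the layer ordering used to delineate a bioregion,
# from the oldest "hard" lines, to "soft" living systems, to "human" lines.
BIOREGIONAL_LAYERS = [
    ("hard", ["geology", "topography", "tectonics", "wind",
              "fracture zones", "continental divides"]),
    ("soft", ["soil", "ecosystems", "climate", "marine life",
              "flora", "fauna"]),
    ("human", ["geography", "energy", "transportation", "agriculture",
               "food", "music", "language", "history",
               "indigenous cultures"]),
]

def layer_order(category: str) -> int:
    """Return the precedence of a layer category (hard lines first)."""
    return ["hard", "soft", "human"].index(category)

# Layers would be consulted in this order when drawing boundaries:
for category, layers in sorted(BIOREGIONAL_LAYERS,
                               key=lambda pair: layer_order(pair[0])):
    print(category, "->", ", ".join(layers))
```

The point of the ordering is that the oldest, most stable features constrain the boundaries first, with living systems and human patterns refining them afterwards.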
A bioregion is defined as the largest physical boundary within which connections based on that place will make sense. The basic units of a bioregion are watersheds and hydrological basins, and a bioregion will always maintain the natural continuity and full extent of a watershed. While a bioregion may stretch across many watersheds, it will never divide or separate a water basin.[42]As conceived by Van Newkirk, bioregionalism is presented as a technical process of identifying "biogeographically interpreted culture areas called bioregions". Within these territories, resident human populations would "restore plant and animal diversity," "aid in the conservation and restoration of wild eco-systems," and "discover regional models for new and relatively non-arbitrary scales of human activity in relation to the biological realities of the natural landscape".[21]His first article in a mainstream publication, "Bioregions: Towards Bioregional Strategy", appeared in Environmental Conservation in 1975.[21][23]In the article, Van Newkirk defines a bioregion as:
"Bioregions are tentatively defined as biologically significant areas of the Earth's surface which can be mapped and discussed as distinct existing patterns of plant, animal, and habitat distributions as related to range patterns and… deformations, attributed to one or more successive occupying populations of the culture-bearing animal (aka humans)... Towards this end a group of projects relating to bioregions or themes of applied human biogeography is envisaged."[43]
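The watershed rule stated earlier, that a bioregion may span several watersheds but will never divide a water basin, can be expressed as a simple consistency check. The watershed names below are hypothetical, and the function is only a sketch of the constraint, not part of any published mapping method.

```python
# Sketch: verify that no watershed is split across bioregion boundaries.
def respects_watershed_rule(assignment: dict, watershed_parts: dict) -> bool:
    """assignment maps each sub-basin to a bioregion; watershed_parts maps
    each watershed to its sub-basins. The rule holds only if every
    watershed's sub-basins all fall in the same bioregion."""
    for watershed, parts in watershed_parts.items():
        regions = {assignment[p] for p in parts}
        if len(regions) > 1:
            return False  # this watershed is divided: rule violated
    return True

# Hypothetical example: a basin kept whole within one bioregion passes,
# while one split between two bioregions fails.
parts = {"Columbia": ["upper_columbia", "snake", "lower_columbia"]}
ok = respects_watershed_rule(
    {"upper_columbia": "Cascadia", "snake": "Cascadia",
     "lower_columbia": "Cascadia"}, parts)
bad = respects_watershed_rule(
    {"upper_columbia": "Cascadia", "snake": "Rockies",
     "lower_columbia": "Cascadia"}, parts)
```

The check captures the asymmetry in the rule: a bioregion may aggregate whole watersheds freely, but a single watershed assigned to two bioregions violates the definition.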
For Van Newkirk, the term "bioregion" was a way to combine human culture with earlier work on biotic provinces. He called this new field "regional human biogeography" and was the first to use terms such as "bioregional strategies" and "bioregional framework" for adapting human cultures to a place.[22]This idea was carried forward and developed by ecologist Raymond Dasmann and Peter Berg in their 1977 co-authored article "Reinhabiting California", which rebuked earlier efforts by ecologists to define bioregions solely through biotic provinces and biogeography, excluding humans from the definition.[44][45][46][47]
Peter Berg and Judy Goldhaft founded the Planet Drum Foundation in 1973,[48][43]located in San Francisco, which celebrated its 50th anniversary in 2023.[49]Planet Drum, on its website, defines a bioregion as:
A bioregion is a geographical area with coherent and interconnected plant and animal communities, and other natural characteristics (often defined by a watershed) plus the cultural values that humans have developed for living in harmony with these natural systems. Because it is a cultural idea, the description of a specific bioregion uses information from both the natural sciences and other sources.
Each bioregion is a whole “life-place” with unique requirements for human inhabitation so that it will not be disrupted and injured. People are counted as an integral aspect of a place’s life.[48]
Peter Berg defined a bioregion at the Symposium on Biodiversity of Northwestern California, October 28–30, 1991:
A bioregion can be determined initially by the use of climatology, physiography, animal and plant geography, natural history and other descriptive natural sciences. The final boundaries of a bioregion are best described by the people who have lived within it, through human recognition of the realities of living-in-place. All life on the planet is interconnected in a few obvious ways, and in many more that remain barely explored. But there is a distinct resonance among living things and the factors which influence them that occurs specifically within each separate place on the planet. Discovering and describing that resonance is a way to describe a bioregion.[50][51]
Thomas Berry, an educator, environmentalist, activist, and priest, who authored the United Nations World Charter for Nature and was a historian of the Hudson River Valley, was also deeply rooted in the bioregional movement, helping bioregionalism spread to the east coast of North America.[52]He defined a bioregion as:
A bioregion is simply an identifiable geographic area whose life systems are self-contained, self-sustaining and self-renewing. A bioregion, you might say, is a basic unit within the natural system of the earth. Another way to define a bioregion is in terms of watersheds. Bioregions must develop human populations that accord with their natural context. The human is not exempt from being part of the basic inventory in a bioregion.
Kirkpatrick Sale, another early pioneer of the idea of bioregions, defined it in his book Dwellers in the Land: The Bioregional Vision:
A bioregion is a part of the earth's surface whose rough boundaries are determined by natural rather than human dictates, distinguishable from other areas by attributes of flora, fauna, water, climate, soils and land-forms, and human settlements and cultures those attributes give rise to. The borders between such areas are usually not rigid – nature works with more flexibility and fluidity than that – but the general contours of the regions themselves are not hard to identify, and indeed will probably be felt, understood, sensed or in some way known to many inhabitants, and particularly those still rooted in the land.[13]
One of the other early proponents of bioregionalism who helped define what a bioregion is was the American biologist and environmental scientist Raymond F. Dasmann. Dasmann studied at UC Berkeley under the wildlife biologist A. Starker Leopold and earned his Ph.D. in zoology in 1954. He began his academic career at Humboldt State University, where he was a professor of natural resources from 1954 until 1965. During the 1960s, he worked at the Conservation Foundation in Washington, D.C., as Director of International Programs and was also a consultant on the development of the 1972 Stockholm Conference on the Human Environment. In the 1970s he worked with UNESCO, where he initiated the Man and the Biosphere Programme (MAB), an international research and conservation program. During the same period he was Senior Ecologist for the International Union for Conservation of Nature in Switzerland, initiating global conservation programs which earned him the highest honors awarded by The Wildlife Society and the Smithsonian Institution.[53][54]
Working with Peter Berg, and contemporary with Allen Van Newkirk, Dasmann was one of the pioneers in developing the definition of the term "bioregion", as well as the conservation concepts of "eco-development" and "biological diversity," and he identified the crucial importance of recognizing indigenous peoples and their cultures in efforts to conserve natural landscapes.[33]
Because it is a cultural idea, the description of a specific bioregion is drawn using information not only from the natural sciences but also from many other sources. It is a geographic terrain and a terrain of consciousness.[55]Anthropological studies, historical accounts, social developments, customs, traditions, and arts can all play a part. Bioregionalism utilizes them to accomplish three main goals:
The latter is accomplished through proactive projects, employment and education, as well as by engaging in protests against the destruction of natural elements in a life-place.[57]
Bioregional goals play out in a spectrum of different ways for different places. In North America, for example, restoring native prairie grasses is a basic ecosystem-rebuilding activity for reinhabitants of the Kansas Area Watershed Bioregion in the Midwest, whereas bringing back salmon runs has a high priority for Shasta Bioregion in northern California. Using geothermal and wind as a renewable energy source fits Cascadia Bioregion in the rainy Pacific Northwest. Less cloudy skies in the Southwest's sparsely vegetated Sonoran Desert Bioregion make direct solar energy a more plentiful alternative there. Education about local natural characteristics and conditions varies diversely from place to place, along with bioregionally significant social and political issues.[56]
An important part of bioregionalism is bioregional mapping. Instructions for how to map a bioregion were first laid out in the book Mapping for Local Empowerment, written by Douglas Aberley of the University of British Columbia in 1993,[58][59]followed by the mapping handbook Giving the Land a Voice in 1994.[60]This grew from the Tsleil-Waututh, Nisga'a, Tsilhqotʼin, and Wetʼsuwetʼen First Nations, who used bioregional mapping to create some of the first bioregional atlases as part of court cases defending their sovereignty in the 1980s and 1990s, one such example being Tsilhqotʼin Nation v British Columbia.[58][61]
In these resources there are two types of maps: bioregional maps and maps of bioregions, both of which include physical, ecological, and human lines.[62]A bioregional map can be at any scale and is a community, participatory process to map what people care about. Bioregional maps and atlases can be considered tools and jumping-off points for guiding the regenerative activities of a community. Mapping a bioregion is a specific type of bioregional map in which many layers are brought together to map a "whole life place", considered an "optimal zone of interconnection for a species to thrive" (for humans, or for a specific species such as salmon); it uses many different layers to see which boundaries "emerge" and make sense as frameworks of stewardship.[63]
A good example of this is the Salmon Nation bioregion, covering the Pacific Northwest and the northwest rim of the Pacific Ocean as defined through the historic and current range of the salmon, as well as the people and ecosystems which have evolved over millennia to depend on them.[64][65]This style of bioregional mapping can also be found in the work of Henry David Thoreau, who, when hired to make maps by the United States government, chose instead to create maps "that charts and delineates the local ecology and its natural history as well as its intersection with a human community".[66]
This type of mapping is consistent with both indigenous and western worldviews.[61]
This is put well by Douglas Aberley and Chief Michael George, who note:
"Once the bioregional map atlas is completed it becomes the common foundation of knowledge from which planning scenarios can be prepared, and decisions ultimately made. Complex information that is otherwise difficult to present is clearly depicted. The community learns about itself in the process of making decisions about its future."[61]
Sheila Harrington, in the introduction to Islands of the Salish Sea: A Community Atlas, goes one step further, noting that:
“The atlas should be used as a jumping off place for decision making about the future. From the holistic image of place that the maps collectively communicate, what actions could be adopted to achieve sustainable prosperity? What priorities emerge from a survey of damaged lands and unsolved social ills? What underutilized potentials can be put to work to help achieve sustainability? The atlas can become a focus for discussions setting a proactive plan for positive change.”[67][68]
Mapping a Bioregion consists of:
Your final map will generally help demarcate a bioregion, or life place.[72][59][69]
While references to bioregions (or biogeographical regions) have become increasingly common in scholarly literature related to life sciences, "there is little agreement on how to best classify and name such regions, with several conceptually related terms being used, often interchangeably."[74]Bioregions can take many forms and operate at many scales – from very small ecosystems or 'biotopes' to ecoregions (which can be nested at different scales) to continent-scale distributions of plants and animals, like biomes or realms. All of them, technically, can be considered types of bioregions sensu lato and are often referred to as such in academic literature.
In 2014, J. Morrone documented the history of 13 biogeographical concepts in "On Biotas and their names".[75]A recent review of the scholarly literature finds 20 distinct biotic methods for defining bioregions, based on populations of specific plant and animal species or on species assemblages. These range from global and continental scales, to sub-continental and regional scales, to sub-regional and local scales.
In addition, 5 abiotic methods have been utilized to inform the delineation of biogeographical extents.[104]
Ecoregions are one of the primary building blocks of bioregions, which are made up of "clusters of biotically related ecoregions".[110]
An ecoregion (ecological region) is an ecologically and geographically defined area that is smaller than a bioregion, which in turn is smaller than a biogeographic realm.[11]Ecoregions cover relatively large areas of land or water and contain characteristic, geographically distinct assemblages of natural communities and species.[111]They can be characterized by geology, physiography, vegetation, climate, hydrology, terrestrial and aquatic fauna, and soils, and may or may not include the impacts of human activity (e.g. land use patterns, vegetation changes, etc.). The biodiversity of flora, fauna and ecosystems that characterize an ecoregion tends to be distinct from that of other ecoregions.[112]
The phrase "ecological region" was widely used throughout the 20th century by biologists and zoologists to define specific geographic areas in research. In the early 1970s the term 'ecoregion' (short for ecological region) was introduced, and R.G. Bailey published the first comprehensive map of U.S. ecoregions in 1976.[113]The term was used widely in scholarly literature in the 1980s and 1990s, and in 2001 scientists at the U.S. conservation organization World Wildlife Fund (WWF) codified and published the first global-scale map of Terrestrial Ecoregions of the World (TEOW), led by D. Olson, E. Dinerstein, E. Wikramanayake, and N. Burgess.[114]While the two approaches are related, the Bailey ecoregions (nested in four levels) give more importance to ecological criteria and climate zones, while the WWF ecoregions give more importance to biogeography, that is, the distribution of distinct species assemblages.[115]
Ecoregions can change gradually and have soft transition areas known as ecotones. Because of this, there can be some variation in how ecoregions are defined. The US Environmental Protection Agency uses a four-level classification system, which lists 12 Level I ecoregions and 187 Level III ecoregions in North America,[116]while in another study, on the biodiversity of the Klamath-Siskiyou ecoregion, researchers found that North America contains 116 ecoregions nested within 10 major habitat types.[110]
The TEOW framework originally delineated 867 terrestrial ecoregions nested into 14 major biomes, contained within the world's 8 major biogeographical realms. Subsequent regional papers by the co-authors covering Africa, the Indo-Pacific, and Latin America differentiate between ecoregions and bioregions, referring to the latter as "geographic clusters of ecoregions that may span several habitat types, but have strong biogeographic affinities, particularly at taxonomic levels higher than the species level (genus, family)".[117][118][119]In 2007, a comparable set of Marine Ecoregions of the World (MEOW) was published, led by M. Spalding,[120]and in 2008 a set of Freshwater Ecoregions of the World (FEOW) was published, led by R. Abell.[121]
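The nesting described in this section, with ecoregions grouped into biomes within biogeographical realms, and bioregions defined as biogeographic clusters of ecoregions, can be sketched as a simple hierarchy. The counts follow the TEOW figures cited above, while the example names are a small hypothetical subset, not the full dataset.

```python
# Sketch of the TEOW nesting: realm > biome > ecoregion. A "bioregion" in
# this framework is a biogeographic cluster of ecoregions, possibly
# spanning several biomes within a realm.
hierarchy = {
    "Nearctic": {                       # one of the 8 biogeographical realms
        "Temperate Conifer Forests": [  # one of the 14 biomes
            "Central Pacific coastal forests",
            "Klamath-Siskiyou forests",
        ],
    },
}

def ecoregions_in_realm(h: dict, realm: str) -> list:
    """Flatten all ecoregions nested under a realm, across its biomes."""
    return [eco for biome in h.get(realm, {}).values() for eco in biome]

print(ecoregions_in_realm(hierarchy, "Nearctic"))
```

The structural point is that an ecoregion belongs to exactly one biome and one realm, whereas a bioregion is a higher-level grouping cutting across biomes by biogeographic affinity.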
In 2017, an updated version of the terrestrial ecoregions dataset was released in the paper "An Ecoregion-Based Approach to Protecting Half the Terrestrial Realm", led by E. Dinerstein with 48 co-authors.[87]Using recent advances in satellite imagery, the ecoregion perimeters were refined and the total number reduced to 846 (and later 844); the dataset can be explored in a web application developed by Resolve and Google Earth Engine.[122]For conservation practitioners and organizations monitoring progress towards the goals of the United Nations Convention on Biological Diversity (CBD), in particular the goal of ecosystem representation in Protected Area networks, the most widely used bioregional delineations include the Resolve Ecoregions and the IUCN Global Ecosystem Typology.
In bioregionalism, an ecoregion can also use geography, ecology, and culture as part of its definition.[41]
One example of a bioregion is the Cascadia Bioregion, located along the northwestern rim of North America. The Cascadia Bioregion contains 75 distinct ecoregions and extends for more than 2,500 miles (4,000 km) from the Copper River in southern Alaska to Cape Mendocino, approximately 200 miles north of San Francisco, and east as far as the Yellowstone Caldera and the continental divide.[123]
The Cascadia Bioregion encompasses all of the state of Washington, all but the southeastern corner of Idaho, and portions of Oregon, California, Nevada, Utah, Wyoming, Montana, Alaska, Yukon, and British Columbia. Bioregions are geographically based areas defined by land or soil composition, watershed, climate, flora, and fauna. The Cascadia Bioregion stretches along the entire watershed of the Columbia River (as far as the Continental Divide), as well as the Cascade Range from Northern California well into Canada. It is also considered to include the associated ocean and seas and their ecosystems out to the continental slope. The delineation of a bioregion has environmental stewardship as its primary goal, with the belief that political boundaries should match ecological and cultural boundaries.[124]
The name "Cascadia" was first applied to the whole geologic region by Bates McKee in his 1972 geology textbook Cascadia: The Geologic Evolution of the Pacific Northwest. Later the name was adopted by David McCloskey, a Seattle University sociology professor, to describe it as a bioregion. McCloskey describes Cascadia as "a land of falling waters." He notes the blending of the natural integrity and the sociocultural unity that gives Cascadia its definition.[125]
|
https://en.wikipedia.org/wiki/Bioregion
|
Conservation biology is the study of the conservation of nature and of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions.[1][2][3]It is an interdisciplinary subject drawing on natural and social sciences, and on the practice of natural resource management.[4][5][page needed][6][7]
The conservation ethic is based on the findings of conservation biology.
The term conservation biology and its conception as a new field originated with the convening of "The First International Conference on Research in Conservation Biology", held at the University of California, San Diego in La Jolla, California, in 1978, led by American biologists Bruce A. Wilcox and Michael E. Soulé with a group of leading university and zoo researchers and conservationists including Kurt Benirschke, Sir Otto Frankel, Thomas Lovejoy, and Jared Diamond. The meeting was prompted by concern over tropical deforestation, disappearing species, and eroding genetic diversity within species.[8]The conference and the proceedings that resulted[2]sought to initiate the bridging of the gap between theory in ecology and evolutionary genetics on the one hand and conservation policy and practice on the other.[9]
Conservation biology and the concept of biological diversity (biodiversity) emerged together, helping crystallize the modern era of conservation science and policy.[10]The inherently multidisciplinary basis of conservation biology has led to new subdisciplines including conservation social science, conservation behavior and conservation physiology.[11]It stimulated further development of conservation genetics, which Otto Frankel had originated and which is now often considered a subdiscipline as well.[12]
The rapid decline of established biological systems around the world means that conservation biology is often referred to as a "discipline with a deadline".[13]Conservation biology is tied closely to ecology in researching the population ecology (dispersal, migration, demographics, effective population size, inbreeding depression, and minimum population viability) of rare or endangered species.[14][15]Conservation biology is concerned with phenomena that affect the maintenance, loss, and restoration of biodiversity and the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity.[5][6][7][15]The concern stems from estimates suggesting that up to 50% of all species on the planet will disappear within the next 50 years,[16]which would increase poverty and starvation and reset the course of evolution on this planet.[17][18]Researchers acknowledge that projections are difficult, given the unknown potential impacts of many variables, including species introduction to new biogeographical settings and a non-analog climate.[19]
Conservation biologists research and educate on the trends and processes of biodiversity loss, species extinctions, and the negative effect these are having on our capabilities to sustain the well-being of human society. Conservation biologists work in the field and office, in government, universities, non-profit organizations and industry. The topics of their research are diverse, because this is an interdisciplinary network with professional alliances in the biological as well as social sciences. Those dedicated to the cause and profession advocate for a global response to the current biodiversity crisis based on morals, ethics, and scientific reason. Organizations and citizens are responding to the biodiversity crisis through conservation action plans that direct research, monitoring, and education programs that engage concerns at local through global scales.[4][5][6][7]There is increasing recognition that conservation is not just about what is achieved but how it is done.[20]
The conservation of natural resources is the fundamental problem. Unless we solve that problem, it will avail us little to solve all others.
Conscious efforts to conserve and protect global biodiversity are a recent phenomenon.[7][22]Natural resource conservation, however, has a history that extends prior to the age of conservation. Resource ethics grew out of necessity through direct relations with nature. Regulation or communal restraint became necessary to prevent selfish motives from taking more than could be locally sustained, thereby compromising the long-term supply for the rest of the community.[7]This social dilemma with respect to natural resource management is often called the "Tragedy of the Commons".[23][24]
From this principle, conservation biologists can trace communal resource-based ethics throughout cultures as a solution to communal resource conflict.[7]For example, the Alaskan Tlingit peoples and the Haida of the Pacific Northwest had resource boundaries, rules, and restrictions among clans with respect to the fishing of sockeye salmon. These rules were guided by clan elders who knew lifelong details of each river and stream they managed.[7][25]There are numerous examples in history where cultures have followed rules, rituals, and organized practice with respect to communal natural resource management.[26][27]
The Mauryan emperor Ashoka, around 250 BC, issued edicts restricting the slaughter of animals and certain kinds of birds, and also opened veterinary clinics.[citation needed]
Conservation ethics are also found in early religious and philosophical writings. There are examples in the Tao, Shinto, Hindu, Islamic and Buddhist traditions.[7][28]In Greek philosophy, Plato lamented pastureland degradation: "What is left now is, so to say, the skeleton of a body wasted by disease; the rich, soft soil has been carried off and only the bare framework of the district left."[29]In the Bible, through Moses, God commanded to let the land rest from cultivation every seventh year.[7][30]Before the 18th century, however, much of European culture considered it a pagan view to admire nature. Wilderness was denigrated while agricultural development was praised.[31]However, as early as AD 680 a wildlife sanctuary was founded on the Farne Islands by St Cuthbert in response to his religious beliefs.[7]
Natural history was a major preoccupation in the 18th century, with grand expeditions and the opening of popular public displays in Europe and North America. By 1900 there were 150 natural history museums in Germany, 250 in Great Britain, 250 in the United States, and 300 in France.[32]Preservationist or conservationist sentiments are a development of the late 18th to early 20th centuries.
Before Charles Darwin set sail on HMS Beagle, most people in the world, including Darwin, believed in special creation and that all species were unchanging.[33]Georges-Louis Leclerc was one of the first naturalists to question this belief, proposing in his 44-volume natural history that species evolve due to environmental influences.[33]Erasmus Darwin was another naturalist who suggested that species evolved, noting that some species have vestigial structures: anatomical structures that have no apparent function in the species currently but would have been useful for the species' ancestors.[33]The thinking of these 18th-century naturalists helped to change the mindset of the early 19th-century naturalists.
By the early 19th century biogeography was ignited through the efforts of Alexander von Humboldt, Charles Lyell and Charles Darwin.[34]The 19th-century fascination with natural history engendered a fervor to be the first to collect rare specimens before they were driven extinct by other such collectors.[31][32]Although the work of many 18th- and 19th-century naturalists was to inspire nature enthusiasts and conservation organizations, their writings, by modern standards, showed insensitivity towards conservation, as they would kill hundreds of specimens for their collections.[32]
The modern roots of conservation biology can be found in the late 18th-century Enlightenment period, particularly in England and Scotland.[31][35]Thinkers including Lord Monboddo described the importance of "preserving nature"; much of this early emphasis had its origins in Christian theology.[35]
Scientific conservation principles were first practically applied to the forests of British India. The conservation ethic that began to evolve included three core principles: that human activity damaged the environment, that there was a civic duty to maintain the environment for future generations, and that scientific, empirically based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing many medico-topographical reports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments.[36]
The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state conservation management of forests in the world.[37]Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation program in the world in 1855, a model that soon spread to other colonies, as well as the United States,[38][39][40]where Yellowstone National Park was opened in 1872 as the world's first national park.[41][page needed]
The term conservation came into widespread use in the late 19th century and referred to the management, mainly for economic reasons, of such natural resources as timber, fish, game, topsoil, pastureland, and minerals. In addition it referred to the preservation of forests (forestry), wildlife (wildlife refuges), parkland, wilderness, and watersheds. This period also saw the passage of the first conservation legislation and the establishment of the first nature conservation societies. The Sea Birds Preservation Act of 1869 was passed in Britain as the first nature protection law in the world[42]after extensive lobbying from the Association for the Protection of Seabirds[43]and the respected ornithologist Alfred Newton.[44]Newton was also instrumental in the passage of the first Game laws from 1872, which protected animals during their breeding season so as to prevent the stock from being brought close to extinction.[45]
One of the first conservation societies was the Royal Society for the Protection of Birds, founded in 1889 in Manchester[46]as a protest group campaigning against the use of great crested grebe and kittiwake skins and feathers in fur clothing. Originally known as "the Plumage League",[47]the group gained popularity and eventually amalgamated with the Fur and Feather League in Croydon to form the RSPB.[48]The National Trust formed in 1895 with the manifesto to "...promote the permanent preservation, for the benefit of the nation, of lands, ... to preserve (so far practicable) their natural aspect." In May 1912, a month after the Titanic sank, banker and expert naturalist Charles Rothschild held a meeting at the Natural History Museum in London to discuss his idea for a new organisation to save the best places for wildlife in the British Isles. This meeting led to the formation of the Society for the Promotion of Nature Reserves, which later became the Wildlife Trusts.[citation needed]
In theUnited States, theForest Reserve Act of 1891gave the President power to set aside forest reserves from the land in the public domain.John Muirfounded theSierra Clubin 1892, and theNew York Zoological Societywas set up in 1895. A series ofnational forests and preserveswere established byTheodore Rooseveltfrom 1901 to 1909.[50][51]The 1916 National Parks Act included a 'use without impairment' clause sought by John Muir, which eventually resulted in the removal of a proposal to build a dam inDinosaur National Monumentin 1959.[52]
In the 20th century,Canadiancivil servants, includingCharles Gordon Hewitt[53]andJames Harkin, spearheaded the movement towardwildlife conservation.[54][page needed]
In the 21st century, professional conservation officers have begun to collaborate withindigenouscommunities for protecting wildlife in Canada.[55]Some conservation efforts have yet to fully take hold due to ecological neglect.[56][57][58]For example, in the USA, 21st-centurybowfishingof native fishes, which amounts to killing wild animals for recreation and disposing of them immediately afterwards, remains unregulated and unmanaged.[49]
In the mid-20th century, efforts arose to target individual species for conservation, notably efforts inbig catconservation inSouth Americaled by the New York Zoological Society.[59]In the early 20th century the New York Zoological Society was instrumental in developing concepts of establishing preserves for particular species and conducting the necessary conservation studies to determine the suitability of locations that are most appropriate as conservation priorities; the work of Henry Fairfield Osborn Jr.,Carl E. Akeley,Archie Carrand his son Archie Carr III is notable in this era.[60][61][62]Akeley for example, having led expeditions to theVirunga Mountainsand observed themountain gorillain the wild, became convinced that the species and the area were conservation priorities. He was instrumental in persuadingAlbert I of Belgiumto act in defense of themountain gorillaand establishAlbert National Park(since renamedVirunga National Park) in what is nowDemocratic Republic of Congo.[63]
By the 1970s, led primarily by work in the United States under theEndangered Species Act,[64]along with theSpecies at Risk Act(SARA) of Canada andBiodiversity Action Plansdeveloped inAustralia,Sweden, and theUnited Kingdom, hundreds of species-specific protection plans ensued. Notably, the United Nations acted to conserve sites of outstanding cultural or natural importance to the common heritage of mankind. The programme was adopted by the General Conference ofUNESCOin 1972. As of 2006, a total of 830 sites are listed: 644 cultural, 162 natural. The first country to pursue aggressive biological conservation through national legislation was the United States, which passed back-to-back legislation in the Endangered Species Act[65](1966) andNational Environmental Policy Act(1970),[66]which together injected major funding and protection measures into large-scale habitat protection and threatened-species research. Other conservation developments, however, have taken hold throughout the world. India, for example, passed theWildlife Protection Act of 1972.[67]
In 1980, a significant development was the emergence of theurban conservationmovement. A local organization was established inBirmingham, UK, a development followed in rapid succession by others in cities across the UK and then overseas. Although perceived as agrassroots movement, its early development was driven by academic research into urban wildlife. Initially perceived as radical, the movement's view of conservation as inextricably linked with other human activity has now become mainstream in conservation thought. Considerable research effort is now directed at urban conservation biology. TheSociety for Conservation Biologyoriginated in 1985.[7]: 2
By 1992, most of the countries of the world had become committed to the principles of conservation of biological diversity with theConvention on Biological Diversity;[68]subsequently many countries began programmes ofBiodiversity Action Plansto identify and conserve threatened species within their borders, as well as protect associated habitats. The late 1990s saw increasing professionalism in the sector, with the maturing of organisations such as theInstitute of Ecology and Environmental Managementand theSociety for the Environment.
Since 2000, the concept oflandscape scale conservationhas risen to prominence, with less emphasis being given to single-species or even single-habitat focused actions. Instead, an ecosystem approach is advocated by most mainstream conservationists, although concerns have been expressed by those working to protect some high-profile species.
Ecology has clarified the workings of thebiosphere; i.e., the complex interrelationships among humans, other species, and the physical environment. Theburgeoning human populationand associatedagriculture,industry, and the ensuing pollution, have demonstrated how easily ecological relationships can be disrupted.[69]
The last word in ignorance is the man who says of an animal or plant: "What good is it?" If the land mechanism as a whole is good, then every part is good, whether we understand it or not. If the biota, in the course of aeons, has built something we like but do not understand, then who but a fool would discard seemingly useless parts? To keep every cog and wheel is the first precaution of intelligent tinkering.
Extinction rates are measured in a variety of ways. Conservation biologists measure and applystatistical measuresoffossil records,[1][70]rates ofhabitat loss, and a multitude of other variables such asloss of biodiversityas a function of the rate of habitat loss and site occupancy[71]to obtain such estimates.[72]The Theory of Island Biogeography[73]is possibly the most significant contribution toward the scientific understanding of both the process and how to measure the rate of species extinction. The currentbackground extinction rateis estimated to be one species every few years.[74]Actual extinction rates are estimated to be orders of magnitudes higher.[75]While this is important, it's worth noting that there are no models in existence that account for the complexity of unpredictable factors like species movement, a non-analog climate, changing species interactions, evolutionary rates on finer time scales, and many other stochastic variables.[76][19]
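The species-area relationship underlying island biogeography is one common way such estimates are produced: species richness scales roughly as a power of habitat area, so a habitat-loss figure can be translated into an expected species loss. A minimal sketch, where the exponent z and the 90%-loss scenario are purely illustrative assumptions, not values from the sources cited above:

```python
def fraction_species_remaining(area_fraction, z=0.25):
    """Species-area relationship S = c * A**z: the expected fraction of
    species persisting when habitat shrinks to area_fraction of its
    original extent. z is empirical; ~0.15-0.35 is a commonly cited range."""
    return area_fraction ** z

# Illustrative scenario: 90% of a habitat is lost, so 10% of the area remains.
remaining = fraction_species_remaining(0.10)
print(f"{remaining:.0%} of species expected to persist")  # → 56%
```

As the text cautions, such first-order models ignore species movement, changing interactions, and other stochastic factors, so they bound rather than predict actual losses.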
The measure of ongoing species loss is made more complex by the fact that most of the Earth's species have not been described or evaluated. Estimates vary greatly, from how many species actually exist (estimated range: 3,600,000–111,700,000)[77]to how many have received aspecies binomial(estimated range: 1.5–8 million).[77]Less than 1% of all described species have been studied beyond simply noting their existence.[77]From these figures, the IUCN reports that 23% ofvertebrates, 5% ofinvertebratesand 70% of plants that have been evaluated are designated asendangeredorthreatened.[78][79]Better knowledge is being constructed byThe Plant Listfor actual numbers of species.
Systematic conservation planning is an effective way to seek and identify efficient and effective types of reserve design to capture or sustain the highest priority biodiversity values and to work with communities in support of local ecosystems. Margules and Pressey identify six interlinked stages in the systematic planning approach:[80]
Conservation biologists regularly prepare detailed conservation plans forgrant proposalsor to effectively coordinate their plan of action and to identify best management practices (e.g.[81]). Systematic strategies generally employ the services ofGeographic Information Systemsto assist in the decision-making process. TheSLOSS debateis often considered in planning.
Conservation physiology was defined bySteven J. Cookeand colleagues as:[11]
An integrative scientific discipline applying physiological concepts, tools, and knowledge to characterizing biological diversity and its ecological implications; understanding and predicting how organisms, populations, and ecosystems respond to environmental change and stressors; and solving conservation problems across the broad range of taxa (i.e. including microbes, plants, and animals). Physiology is considered in the broadest possible terms to include functional and mechanistic responses at all scales, and conservation includes the development and refinement of strategies to rebuild populations, restore ecosystems, inform conservation policy, generate decision-support tools, and manage natural resources.
Conservation physiology is particularly relevant to practitioners in that it has the potential to generate cause-and-effect relationships and reveal the factors that contribute to population declines.
TheSociety for Conservation Biologyis a global community of conservation professionals dedicated to advancing the science and practice of conserving biodiversity. Conservation biology as a discipline reaches beyond biology, into subjects such asphilosophy,law,economics,humanities,arts,anthropology, andeducation.[5][6]Within biology,conservation geneticsandevolutionare immense fields unto themselves, but these disciplines are of prime importance to the practice and profession of conservation biology.
Conservationists introducebiaswhen they support policies using qualitative description, such ashabitatdegradation orhealthyecosystems. Conservation biologists advocate for reasoned and sensible management of natural resources and do so with a disclosed combination ofscience,reason,logic, andvaluesin their conservation management plans.[5]This sort of advocacy is similar to the medical profession advocating for healthy lifestyle options: both are beneficial to human well-being yet remain scientific in their approach.
There is a movement in conservation biology suggesting a new form of leadership is needed to mobilize conservation biology into a more effective discipline that is able to communicate the full scope of the problem to society at large.[82]The movement proposes an adaptive leadership approach that parallels anadaptive managementapproach. The concept is based on a new philosophy or leadership theory steering away from historical notions of power, authority, and dominance. Adaptive conservation leadership is reflective and more equitable as it applies to any member of society who can mobilize others toward meaningful change using communication techniques that are inspiring, purposeful, and collegial. Adaptive conservation leadership and mentoring programs are being implemented by conservation biologists through organizations such as the Aldo Leopold Leadership Program.[83]
Conservation may be classified as eitherin-situ conservation, which is protecting an endangered species in its naturalhabitat, orex-situ conservation, which occurs outside the natural habitat.[84]In-situ conservation involves protecting or restoring the habitat. Ex-situ conservation, on the other hand, involves protection outside of an organism's natural habitat, such as on reservations or ingene banks, in circumstances where viable populations may not be present in the natural habitat.[84]
The conservation of habitats such as forests, water, and soil in their natural state is crucial for any species that depends on them to thrive. Creating an entirely new environment that resembles the original habitat of wild animals is less effective than preserving the original habitat itself. A reforestation campaign in Nepal has helped increase the density and area covered by original forests, which proved better than creating an entirely new environment after the original was lost. Recent research indicates that old forests store more carbon than young ones, so it is especially important to protect them. The reforestation campaign launched by Himalayan Adventure Therapy in Nepal periodically visits old forests that are vulnerable to losses in density and area from unplanned urbanization. Campaigners then plant saplings of the same tree families as the existing forest in areas where old forest has been lost, as well as in barren areas connected to the forest, maintaining the forest's density and extent.
Also, non-interference may be used, which is termed apreservationistmethod. Preservationists advocate for giving areas of nature and species a protected existence that halts interference from humans.[5]In this regard, conservationists differ from preservationists in the social dimension, as conservation biology engages society and seeks equitable solutions for both society and ecosystems. Some preservationists emphasize the potential of biodiversity in a world without humans.
Ecological monitoring is the systematic collection of data relevant to theecologyof a species or habitat at repeating intervals with defined methods.[85]Long-term monitoring for environmental and ecological metrics is an important part of any successful conservation initiative. Unfortunately, long-term data for manyspeciesandhabitatsis often unavailable.[86]A lack of historical data on speciespopulations, habitats, and ecosystems means that any current or future conservation work will have to make assumptions to determine if the work is having any effect on the population or ecosystem health. Ecological monitoring can provide early warning signals of deleterious effects (from human activities or natural changes in an environment) on an ecosystem and its species.[85]In order for signs of negative trends inecosystemor species health to be detected, monitoring methods must be carried out at appropriate time intervals, and the metric must be able to capture the trend of the population or habitat as a whole.
Long-term monitoring can include the continued measurement of many biological, ecological, and environmental metrics, including annual breeding success, population size estimates, water quality,biodiversity(which can be measured in many ways, e.g. theShannon Index), and many other metrics. When determining which metrics to monitor for a conservation project, it is important to understand how an ecosystem functions and what role different species and abiotic factors have within the system.[87]It is important to have a precise reason for why ecological monitoring is implemented; within the context of conservation, this reasoning is often to track changes before, during, or after conservation measures are put in place to help a species or habitat recover from degradation and/or maintain integrity.[85]
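As a concrete example of one such metric, the Shannon Index summarizes both species richness and evenness in a single number. A minimal sketch in Python, where the survey counts are hypothetical:

```python
import math

def shannon_index(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)), where p_i is the
    proportion of individuals belonging to species i."""
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

# Hypothetical counts for four species recorded at one monitoring site
site_counts = [40, 30, 20, 10]
print(round(shannon_index(site_counts), 2))  # → 1.28
```

Tracking H' at the same site over repeated surveys is one way to detect the negative trends described above, since a falling index signals lost species or growing dominance by a few.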
Another benefit of ecological monitoring is the hard evidence it provides scientists to use for advising policy makers and funding bodies about conservation efforts. Not only is ecological monitoring data important for convincing politicians, funders, and the public why a conservation program is important to implement, but also to keep them convinced that a program should be continued to be supported.[86]
There is much debate on how conservation resources can be used most efficiently; even within ecological monitoring, there is debate over which metrics money, time, and personnel should be dedicated to for the best chance of making a positive impact. One general discussion topic is whether monitoring should happen where there is littlehuman impact(to understand a system that has not been degraded by humans), where there is human impact (so the effects from humans can be investigated), or where there are data deserts and little is known about the habitats' and communities' responses to humanperturbations.[85]
The concept ofbioindicators/indicator speciescan be applied to ecological monitoring as a way to investigate howpollutionis affecting an ecosystem.[88]Species likeamphibiansandbirdsare highly susceptible to pollutants in their environment due to their behaviours and physiological features that cause them to absorb pollutants at a faster rate than other species. Amphibians spend parts of their time in the water and on land, making them susceptible to changes in both environments.[89]They also have very permeable skin that allows them to breathe and take in water, which means they also absorb any airborne or water-soluble pollutants. Birds often cover a wide range of habitat types annually, and also generally revisit the same nesting site each year. This makes it easier for researchers to track ecological effects at both an individual and a population level for the species.[90]
Many conservation researchers believe that having a long-term ecological monitoring program should be a priority for conservation projects, protected areas, and regions where environmental harm mitigation is used.[91]
Conservation biologists areinterdisciplinaryresearchers that practice ethics in the biological and social sciences. Chan states[92]that conservationists must advocate for biodiversity and can do so in a scientifically ethical manner by not promoting simultaneous advocacy against other competing values.
A conservationist may be inspired by theresource conservation ethic,[7]: 15which seeks to identify what measures will deliver "the greatest good for the greatest number of people for the longest time."[5]: 13In contrast, some conservation biologists argue that nature has anintrinsic valuethat is independent ofanthropocentricusefulness orutilitarianism.[7]: 3, 12, 16–17Aldo Leopoldwas a classical thinker and writer on such conservation ethics whose philosophy, ethics and writings are still valued and revisited by modern conservation biologists.[7]: 16–17
TheInternational Union for Conservation of Nature(IUCN) has organized a global assortment of scientists and research stations across the planet to monitor the changing state of nature in an effort to tackle the extinction crisis. The IUCN provides annual updates on the status of species conservation through its Red List.[93]TheIUCN Red Listserves as an international conservation tool to identify those species most in need of conservation attention and to provide a global index on the status of biodiversity.[94]More than the dramatic rates of species loss, however, conservation scientists note that thesixth mass extinctionis a biodiversity crisis requiring far more action than a priority focus onrare,endemicorendangered species. Concern for biodiversity loss covers a broader conservation mandate that looks atecological processes, such as migration, and a holistic examination of biodiversity at levels beyond the species, including genetic, population and ecosystem diversity.[95]Extensive, systematic, and rapid rates of biodiversity loss threaten the sustained well-being of humanity by limiting the supply of ecosystem services that are otherwise regenerated by the complex and evolving holistic network of genetic and ecosystem diversity. While theconservation statusof species is employed extensively in conservation management,[94]some scientists highlight that it is the common species that are the primary source of exploitation and habitat alteration by humanity. Moreover, common species are often undervalued despite their role as the primary source of ecosystem services.[96][97]
While most in the community of conservation science "stress the importance" ofsustaining biodiversity,[98]there is debate on how to prioritize genes, species, or ecosystems, which are all components of biodiversity (e.g. Bowen, 1999). While the predominant approach to date has been to focus efforts on endangered species by conservingbiodiversity hotspots, some scientists[99]and conservation organizations, such as theNature Conservancy, argue that it is more cost-effective, logical, and socially relevant to invest inbiodiversity coldspots.[100]Discovering, naming, and mapping out the distribution of every species, they argue, is an ill-advised conservation venture. They reason it is better to understand the significance of the ecological roles of species.[95]
Biodiversity hotspots and coldspots are a way of recognizing that the spatial concentration of genes, species, and ecosystems is not uniformly distributed on the Earth's surface.[101]For example, "... 44% of all species of vascular plants and 35% of all species in four vertebrate groups are confined to 25 hotspots comprising only 1.4% of the land surface of the Earth."[102]
Those arguing in favor of setting priorities for coldspots point out that there are other measures to consider beyond biodiversity. They point out that emphasizing hotspots downplays the importance of the social and ecological connections to vast areas of the Earth's ecosystems wherebiomass, not biodiversity, reigns supreme.[103]It is estimated that 36% of the Earth's surface, encompassing 38.9% of the world's vertebrates, lacks the endemic species needed to qualify as a biodiversity hotspot.[104]Moreover, measures show that maximizing protections for biodiversity does not capture ecosystem services any better than targeting randomly chosen regions.[105]Population-level biodiversity (mostly in coldspots) is disappearing at a rate ten times that at the species level.[99][106]The level of importance in addressing biomass versus endemism as a concern for conservation biology is highlighted in literature measuring the level of threat to global ecosystem carbon stocks that do not necessarily reside in areas of endemism.[107][108]A hotspot priority approach[109]would not invest so heavily in places such assteppes, theSerengeti, theArctic, ortaiga. These areas contribute a great abundance of population-level (not species-level) biodiversity[106]andecosystem services, including cultural value and planetarynutrient cycling.[100]
Those in favor of the hotspot approach point out that species are irreplaceable components of the global ecosystem, they are concentrated in places that are most threatened, and should therefore receive maximal strategic protections.[110]This is a hotspot approach because the priority is set to target species level concerns over population level or biomass.[106][failed verification]Species richness and genetic biodiversity contributes to and engenders ecosystem stability, ecosystem processes, evolutionaryadaptability, and biomass.[111]Both sides agree, however, that conserving biodiversity is necessary to reduce the extinction rate and identify an inherent value in nature; the debate hinges on how to prioritize limited conservation resources in the most cost-effective way.
Conservation biologists have started to collaborate with leading globaleconomiststo determine how to measure thewealthandservicesof nature and to make these values apparent inglobal market transactions.[112]This system of accounting is callednatural capitaland would, for example, register the value of an ecosystem before it is cleared to make way for development.[113]TheWWFpublishes itsLiving Planet Reportand provides a global index of biodiversity by monitoring approximately 5,000 populations in 1,686 species of vertebrate (mammals, birds, fish, reptiles, and amphibians) and report on the trends in much the same way that the stock market is tracked.[114]
This method of measuring the global economic benefit of nature has been endorsed by theG8+5leaders and theEuropean Commission.[112]Nature sustains manyecosystem services[115]that benefit humanity.[116]Many of the Earth's ecosystem services arepublic goodswithout amarketand therefore nopriceorvalue.[112]When thestock marketregisters a financial crisis, traders onWall Streetare not in the business of trading stocks for much of the planet's living natural capital stored in ecosystems. There is no natural stock market with investment portfolios into sea horses, amphibians, insects, and other creatures that provide a sustainable supply of ecosystem services that are valuable to society.[116]The ecological footprint of society has exceeded the bio-regenerative capacity limits of the planet's ecosystems by about 30 percent, which is the same percentage of vertebrate populations that have registered decline from 1970 through 2005.[114]
The ecological credit crunch is a global challenge. TheLiving Planet Report 2008tells us that more than three-quarters of the world's people live in nations that are ecological debtors – their national consumption has outstripped their country's biocapacity. Thus, most of us are propping up our current lifestyles, and our economic growth, by drawing (and increasingly overdrawing) upon the ecological capital of other parts of the world.
The inherentnatural economyplays an essential role in sustaining humanity,[117]including the regulation of globalatmospheric chemistry,pollinating crops,pest control,[118]cycling soil nutrients, purifying ourwater supply,[119]supplying medicines and health benefits,[120]and unquantifiable quality-of-life improvements. There is a relationship, acorrelation, between markets andnatural capital, andsocial income inequityand biodiversity loss. This means that there are greater rates of biodiversity loss in places where the inequity of wealth is greatest.[121]An example is the Perdido Key beach mouse, an endangered species whose demise began with continued development along beaches. These mice live in sand dunes and play an important role in that ecosystem: by eating dune grass and spreading its seeds across the beach, they help the grass grow throughout the dunes. Sand dunes may not seem important, but they act as a barrier against storms coming from the ocean, such as hurricanes.[122][123]
Although a direct market comparison ofnatural capitalis likely insufficient in terms ofhuman value, one measure of ecosystem services suggests the contribution amounts to trillions of dollars yearly.[124][125][126][127]For example, one segment ofNorth Americanforests has been assigned an annual value of 250 billion dollars;[128]as another example,honey beepollination is estimated to provide between 10 and 18 billion dollars of value yearly.[129]The value of ecosystem services on oneNew Zealandisland has been imputed to be as great as theGDPof that region.[130]This planetary wealth is being lost at an incredible rate as the demands of human society exceed the bio-regenerative capacity of the Earth. While biodiversity and ecosystems are resilient, the danger of losing them is that humans cannot recreate many ecosystem functions throughtechnological innovation.
Some species, calledkeystone species, form a central supporting hub unique to their ecosystem.[131]The loss of such a species results in a collapse in ecosystem function, as well as the loss of coexisting species.[5]Keystone species are usually predators due to their ability to control the population of prey in their ecosystem.[131]The importance of a keystone species was shown by the extinction of theSteller's sea cow(Hydrodamalis gigas) through its interaction withsea otters,sea urchins, andkelp.Kelp bedsgrow and form nurseries in shallow waters to shelter creatures that support thefood chain. Sea urchins feed on kelp, while sea otters feed on sea urchins. With the rapid decline of sea otters due tooverhunting, sea urchin populationsgrazed unrestrictedon the kelp beds and theecosystem collapsed. Left unchecked, the urchins destroyed the shallow water kelp communities that supported the Steller's sea cow's diet and hastened their demise.[132]The sea otter was thought to be a keystone species because the coexistence of many ecological associates in the kelp beds relied upon otters for their survival. However, this was later questioned by Turvey and Risley,[133]who showed that hunting alone would have driven the Steller's sea cow extinct.
Anindicator specieshas a narrow set of ecological requirements, therefore they become useful targets for observing the health of an ecosystem. Some animals, such asamphibianswith their semi-permeable skin and linkages towetlands, have an acute sensitivity to environmental harm and thus may serve as aminer's canary. Indicator species are monitored in an effort to captureenvironmental degradationthrough pollution or some other link to proximate human activities.[5]Monitoring an indicator species is a measure to determine if there is a significant environmental impact that can serve to advise or modify practice, such as through different forestsilviculturetreatments and management scenarios, or to measure the degree of harm that apesticidemay impart on the health of an ecosystem.
Government regulators, consultants, orNGOsregularly monitor indicator species, however, there are limitations coupled with many practical considerations that must be followed for the approach to be effective.[134]It is generally recommended that multiple indicators (genes, populations, species, communities, and landscape) be monitored for effective conservation measurement that prevents harm to the complex, and often unpredictable, response from ecosystem dynamics (Noss, 1997[135]: 88–89).
An example of anumbrella speciesis themonarch butterfly, because of its lengthymigrationsandaestheticvalue. The monarch migrates across North America, covering multiple ecosystems and so requires a large area to exist. Any protections afforded to the monarch butterfly will at the same time umbrella many other species and habitats. An umbrella species is often used asflagship species, which are species, such as thegiant panda, theblue whale, thetiger, themountain gorillaand the monarch butterfly, that capture the public's attention and attract support for conservation measures.[5]Paradoxically, however, conservation bias towards flagship species sometimes threatens other species of chief concern.[136]
Conservation biologists study trends and processes from thepaleontologicalpast to theecologicalpresent as they gain an understanding of the context related tospecies extinction.[1]It is generally accepted that there have been five major global mass extinctions that register in Earth's history. These include theOrdovician(440 mya),Devonian(370 mya),Permian–Triassic(245 mya),Triassic–Jurassic(200 mya), andCretaceous–Paleogene(66 mya) extinction events. Within the last 10,000 years, human influence over the Earth's ecosystems has been so extensive that scientists have difficulty estimating the number of species lost;[137]that is to say, the rates ofdeforestation,reef destruction,wetland drainingand other human acts are proceeding much faster than human assessment of species. The latestLiving Planet Reportby theWorld Wide Fund for Natureestimates that we have exceeded the bio-regenerative capacity of the planet, requiring 1.6 Earths to support the demands placed on our natural resources.[138]
Conservation biologists are dealing with and have publishedevidencefrom all corners of the planet indicating that humanity may be causing the sixth and fastest planetaryextinction event.[139][140][141]It has been suggested that an unprecedented number of species is becoming extinct in what is known as theHolocene extinction event.[142]The global extinction rate may be approximately 1,000 times higher than the natural background extinction rate.[143]It is estimated that two-thirds of allmammalgeneraand one-half of all mammalspeciesweighing at least 44 kilograms (97 lb) have gone extinct in the last 50,000 years.[133][144][145][146]The Global Amphibian Assessment[147]reports thatamphibians are decliningon a global scale faster than any othervertebrategroup, with over 32% of all surviving species being threatened with extinction. The surviving populations are in continual decline in 43% of those that are threatened. Since the mid-1980s, actual extinction rates have exceeded the rates measured from thefossil recordby a factor of 211.[148]However, "The current amphibian extinction rate may range from 25,039 to 45,474 times the background extinction rate for amphibians."[148]The global extinction trend occurs in every majorvertebrategroup that is being monitored. For example, 23% of allmammalsand 12% of allbirdsareRed Listedby theInternational Union for Conservation of Nature(IUCN), meaning they too are threatened with extinction.
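Multipliers like "211 times" or "25,039 to 45,474 times" are typically obtained by expressing observed extinctions in extinctions per million species-years (E/MSY) and dividing by an assumed background rate. A minimal sketch; the counts, time span, and 1 E/MSY background below are illustrative assumptions, not figures from the Global Amphibian Assessment:

```python
def extinction_multiplier(extinctions, species, years, background_e_msy=1.0):
    """Ratio of an observed extinction rate to a background rate, both
    expressed in extinctions per million species-years (E/MSY)."""
    observed_e_msy = extinctions / (species * years) * 1_000_000
    return observed_e_msy / background_e_msy

# Illustrative: 34 documented extinctions among 7,000 monitored species
# over 34 years, against an assumed background of 1 E/MSY.
print(round(extinction_multiplier(34, 7000, 34), 1))  # → 142.9
```

The result is highly sensitive to the assumed background rate, which is one reason published multipliers span such wide ranges.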
Although extinction is natural, the current decline in species is happening at a rate that evolution simply cannot match, leading to the greatest continual mass extinction in Earth's history.[149] Humans have come to dominate the planet, and their high consumption of resources, along with the pollution generated, is affecting the environments in which other species live.[149][150] Humans are working to protect a wide variety of species, such as the Hawaiian crow and the whooping crane of Texas.[151] People can also act to preserve species by advocating and voting for global and national policies that improve the climate, under the concepts of climate mitigation and climate restoration. The Earth's oceans demand particular attention, as climate change continues to alter pH levels, making the water increasingly inhospitable to shelled organisms, whose shells dissolve as a result.[143]
Global assessments of the world's coral reefs continue to report drastic and rapid rates of decline. By 2000, 27% of the world's coral reef ecosystems had effectively collapsed. The largest period of decline occurred in a dramatic "bleaching" event in 1998, in which approximately 16% of all the coral reefs in the world disappeared in less than a year. Coral bleaching is caused by a mixture of environmental stresses, including increases in ocean temperature and acidity, causing both the release of symbiotic algae and the death of corals.[152] Decline and extinction risk in coral reef biodiversity have risen dramatically in the past ten years. The loss of coral reefs, which are predicted to go extinct in the next century, threatens the balance of global biodiversity, will have huge economic impacts, and endangers food security for hundreds of millions of people.[153] Conservation biology plays an important role in international agreements covering the world's oceans[152] and other issues pertaining to biodiversity.
These predictions will undoubtedly appear extreme, but it is difficult to imagine how such changes will not come to pass without fundamental changes in human behavior.
The oceans are threatened by acidification due to rising CO2 levels. This is a particularly serious threat to societies that rely heavily on oceanic natural resources. A concern is that the majority of all marine species will not be able to evolve or acclimate in response to the changes in ocean chemistry.[154]
The prospect of averting mass extinction seems unlikely when "90% of all of the large (average approximately ≥50 kg), open ocean tuna, billfishes, and sharks in the ocean"[18] are reportedly gone. Given the scientific review of current trends, the ocean is predicted to have few surviving multicellular organisms, with only microbes left to dominate marine ecosystems.[18]
Serious concerns are also being raised about taxonomic groups that do not receive the same degree of social attention or attract the same funding as the vertebrates. These include fungal (including lichen-forming species),[155] invertebrate (particularly insect[16][156][157]), and plant communities,[158] in which the vast majority of biodiversity is represented. Conservation of fungi and conservation of insects, in particular, are both of pivotal importance for conservation biology. As mycorrhizal symbionts, and as decomposers and recyclers, fungi are essential for the sustainability of forests.[155] The value of insects in the biosphere is enormous because they outnumber all other living groups in species richness. The greatest bulk of biomass on land is found in plants, which are sustained by their relationships with insects. This great ecological value of insects is countered by a society that often reacts negatively toward these aesthetically 'unpleasant' creatures.[159][160]
One area of concern in the insect world that has caught the public eye is the mysterious case of the missing honey bees (Apis mellifera). Honey bees provide an indispensable ecological service through their pollination of a huge variety of agricultural crops, and their honey and wax are used widely throughout the world.[161] The sudden disappearance of bees leaving empty hives, or colony collapse disorder (CCD), is not uncommon. However, in a 16-month period from 2006 through 2007, 29% of 577 beekeepers across the United States reported CCD losses in up to 76% of their colonies. This sudden demographic loss in bee numbers is placing a strain on the agricultural sector, and the cause behind the massive declines is puzzling scientists. Pests, pesticides, and global warming are all being considered as possible causes.[162][163]
Another case linking conservation biology to insects, forests, and climate change is the mountain pine beetle (Dendroctonus ponderosae) epidemic of British Columbia, Canada, which has infested 470,000 km2 (180,000 sq mi) of forested land since 1999.[107] An action plan has been prepared by the Government of British Columbia to address this problem.[164][165]
This impact [pine beetle epidemic] converted the forest from a small net carbon sink to a large net carbon source both during and immediately after the outbreak. In the worst year, the impacts resulting from the beetle outbreak in British Columbia were equivalent to 75% of the average annual direct forest fire emissions from all of Canada during 1959–1999.
A large proportion of parasite species are threatened by extinction. A few of them are being eradicated as pests of humans or domestic animals; however, most of them are harmless. Parasites also make up a significant share of global biodiversity, given that they constitute a large proportion of all species on Earth,[166] making them of increasing conservation interest. Threats include the decline or fragmentation of host populations[167] and the extinction of host species. Parasites are intricately woven into ecosystems and food webs, thereby occupying valuable roles in ecosystem structure and function.[168][166]
Many threats to biodiversity exist today. The acronym H.I.P.P.O. summarizes the top present-day threats: Habitat loss, Invasive species, Pollution, human Population, and Overharvesting.[169] The primary threats to biodiversity are habitat destruction (such as deforestation, agricultural expansion, and urban development) and overexploitation (such as the wildlife trade).[137][170][171][172][173][174][175][176][177] Habitat fragmentation also poses challenges, because the global network of protected areas covers only 11.5% of the Earth's surface.[178] A significant consequence of fragmentation and the lack of linked protected areas is the reduction of animal migration on a global scale.[179] Considering that billions of tonnes of biomass are responsible for nutrient cycling across the Earth, the reduction of migration is a serious matter for conservation biology.[180][181]
Human activities are associated directly or indirectly with nearly every aspect of the current extinction spasm.
However, human activities need not necessarily cause irreparable harm to the biosphere. With conservation management and planning for biodiversity at all levels, from genes to ecosystems, there are examples where humans coexist sustainably with nature.[182] Even with the current threats to biodiversity, there are ways to improve the present condition and start anew.
Many of the threats to biodiversity, including disease and climate change, are reaching inside the borders of protected areas, leaving them 'not-so-protected' (e.g., Yellowstone National Park).[183] Climate change, for example, is often cited as a serious threat in this regard, because there is a feedback loop between species extinction and the release of carbon dioxide into the atmosphere.[107][108] Ecosystems store and cycle large amounts of carbon, which regulates global conditions.[184] Major climate shifts are now occurring, with temperature changes making survival difficult for some species.[169] The effects of global warming add a catastrophic threat of a mass extinction of global biological diversity.[185] Many more species are predicted to face unprecedented levels of extinction risk due to population increase, climate change, and economic development in the future.[186] Conservationists have claimed that not all species can be saved and that they must decide which ones their efforts should protect; this concept is known as conservation triage.[169] The extinction threat is estimated to range from 15 to 37 percent of all species by 2050,[185] or 50 percent of all species over the next 50 years.[16] The current extinction rate is 100–100,000 times faster than the rates typical of the last several billion years.[169]
https://en.wikipedia.org/wiki/Conservation_biology
Nature conservation is the ethical/moral philosophy and conservation movement focused on protecting species from extinction, maintaining and restoring habitats, enhancing ecosystem services, and protecting biological diversity. A range of values underlie conservation, which can be guided by biocentrism, anthropocentrism, ecocentrism, and sentientism,[1] environmental ideologies that inform ecocultural practices and identities.[2] There has recently been a movement towards evidence-based conservation, which calls for greater use of scientific evidence to improve the effectiveness of conservation efforts. As of 2018, 15% of land and 7.3% of the oceans were protected, and many environmentalists set a target of protecting 30% of land and marine territory by 2030.[3][4] In 2021, 16.64% of land and 7.9% of the oceans were protected.[5][6] The 2022 IPCC report on climate impacts and adaptation underlines the need to conserve 30% to 50% of the Earth's land, freshwater, and ocean areas, echoing the 30% goal of the U.N.'s Convention on Biodiversity.[7][8]
Conservation goals include conserving habitat, preventing deforestation, maintaining soil organic matter, halting species extinction, reducing overfishing, and mitigating climate change. Different philosophical outlooks guide conservationists towards these different goals.
The principal value underlying many expressions of the conservation ethic is that the natural world has intrinsic and intangible worth along with utilitarian value – a view carried forward by parts of the scientific conservation movement and some of the older Romantic schools of the ecology movement. Philosophers have attached intrinsic value to different aspects of nature, whether individual organisms (biocentrism) or ecological wholes such as species or ecosystems (ecoholism).[9]
More utilitarian schools of conservation have an anthropocentric outlook and seek a proper valuation of the local and global impacts of human activity upon nature in their effect upon human wellbeing, now and for posterity. How such values are assessed and exchanged among people determines the social, political, and personal restraints and imperatives by which conservation is practiced. This view is common in the modern environmental movement. There is increasing interest in extending the responsibility for human wellbeing to include the welfare of sentient animals. In 2022, the United Kingdom introduced the Animal Welfare (Sentience) Act, which lists all vertebrates, decapod crustaceans, and cephalopods as sentient beings.[10] Branches of conservation ethics focusing on sentient individuals include ecofeminism[11] and compassionate conservation.[12]
In the United States, the year 1864 saw the publication of two books that laid the foundation for the Romantic and utilitarian conservation traditions in America. The posthumous publication of Henry David Thoreau's Walden established the grandeur of unspoiled nature as a citadel to nourish the spirit of man. A very different book from George Perkins Marsh, Man and Nature, later subtitled "The Earth as Modified by Human Action", catalogued his observations of man exhausting and altering the land from which his sustenance derives.
The consumer conservation ethic has been defined as the attitudes and behaviors held and engaged in by individuals and families that ultimately serve to reduce overall societal consumption of energy.[13][14] The conservation movement has emerged from advances in moral reasoning.[15] Increasing numbers of philosophers and scientists have made its maturation possible by considering the relationships between human beings and other organisms with the same rigor.[16] This social ethic primarily relates to local purchasing, moral purchasing, the sustained and efficient use of renewable resources, the moderation of destructive use of finite resources, and the prevention of harm to common resources such as air and water quality, the natural functions of a living earth, and cultural values in a built environment. These practices are used to slow the accelerating rate at which extinction is occurring. The origins of this ethic can be traced back to many different philosophical and religious beliefs; that is, these practices have been advocated for centuries. In the past, conservationism has been categorized under a spectrum of views, including anthropocentric, utilitarian conservationism and radical eco-centric green eco-political views.
More recently, the three major movements have been grouped into what we now know as the conservation ethic. The person credited with formulating the conservation ethic in the United States is former president Theodore Roosevelt.[17]
The conservation of natural resources is the fundamental problem. Unless we solve that problem, it will avail us little to solve all others.
The term "conservation" was coined byGifford Pinchotin 1907. He told his close friend United States PresidentTheodore Rooseveltwho used it for a national conference of governors in 1908.[19]
In common usage, the term refers to the activity of systematically protecting natural resources such as forests, including biological diversity. Carl F. Jordan defines biological conservation as:[20]
a philosophy of managing the environment in a manner that does not despoil, exhaust or extinguish.
While this usage is not new, the idea of biological conservation has been applied to the principles of ecology, biogeography, anthropology, economy, and sociology to maintain biodiversity.
The term "conservation" itself may cover the concepts such ascultural diversity,genetic diversity, and the concept of movementsenvironmental conservation,seedbankcuration (preservation of seeds), andgene bankcoordination (preservation of animals' genetic material). These are often summarized as the priority to respect diversity.
Much recent movement in conservation can be considered a resistance to commercialism and globalization. Slow Food is a consequence of rejecting these as moral priorities and embracing a slower, more locally focused lifestyle.
Sustainable living is a lifestyle that people are beginning to adopt, one that promotes decisions that help protect biodiversity.[21] Small lifestyle changes that promote sustainability can eventually accumulate to support the proliferation of biological diversity. Regulating the ecolabeling of products from fisheries, supporting sustainable food production, or keeping the lights off during the day are some examples of sustainable living.[22][23] However, sustainable living is not a simple and uncomplicated approach. The 1987 Brundtland Report expounds on the notion of sustainability as a process of change that looks different for everyone: "It is not a fixed state of harmony, but rather a process of change in which the exploitation of resources, the direction of investments, the orientation of technological development, and institutional change are made consistent with future as well as present needs. We do not pretend that the process is easy or straightforward."[24] Simply put, sustainable living makes a difference by compiling many individual actions that encourage the protection of biological diversity.
Distinct trends exist regarding conservation development. The need for conserving land has intensified only recently, during what some scholars refer to as the Capitalocene epoch. This era marks the beginning of colonialism, globalization, and the Industrial Revolution, which have led to global land change as well as climate change.
While many countries' efforts to preserve species and their habitats have been government-led, those in north-western Europe tended to arise out of middle-class and aristocratic interest in natural history, expressed at the level of the individual and of the national, regional, or local learned society. Thus countries like Britain, the Netherlands, and Germany had what would be called non-governmental organizations – in the shape of the Royal Society for the Protection of Birds, the National Trust, and County Naturalists' Trusts (dating back to 1889, 1895, and 1912 respectively), Natuurmonumenten, provincial conservation trusts for each Dutch province, Vogelbescherming, and so on – a long time before there were national parks and national nature reserves.[25] This in part reflects the absence of wilderness areas in heavily cultivated Europe, as well as a longstanding interest in laissez-faire government in some countries, like the UK; it is no coincidence that John Muir, the Scottish-born founder of the national park movement (and hence of government-sponsored conservation), did his sterling work in the US, where he was the motor force behind the establishment of such national parks as Yosemite and Yellowstone. Nowadays, officially more than 10 percent of the world is legally protected in some way or other, and in practice private fundraising is insufficient to pay for the effective management of so much land with protective status.
Protected areas in developing countries, where probably as many as 70–80 percent of the world's species live, still enjoy very little effective management and protection. Some countries, such as Mexico, have non-profit civil organizations and landowners dedicated to protecting vast private properties, as in the case of Hacienda Chichen's Maya Jungle Reserve and Bird Refuge in Chichen Itza, Yucatán.[26] The Adopt A Ranger Foundation has calculated that worldwide about 140,000 rangers are needed for the protected areas in developing and transition countries. There are no data on how many rangers are employed at the moment, but probably fewer than half the protected areas in developing and transition countries have any rangers at all, and those that do are at least 50% short. This would mean a worldwide deficit of 105,000 rangers in the developing and transition countries.[citation needed]
The terms conservation and preservation are frequently conflated outside the academic, scientific, and professional literature. The United States' National Park Service offers the following explanation of the important ways in which these two terms represent very different conceptions of environmental protection ethics:
Conservation and preservation are closely linked and may indeed seem to mean the same thing. Both terms involve a degree of protection, but how that protection is carried out is the key difference. Conservation is generally associated with the protection of natural resources, while preservation is associated with the protection of buildings, objects, and landscapes. Put simply, conservation seeks the proper use of nature, while preservation seeks protection of nature from use.
During the environmental movement of the early 20th century, two opposing factions emerged: conservationists and preservationists. Conservationists sought to regulate human use while preservationists sought to eliminate human impact altogether.[28]
C. Anne Claus presents a distinction between conservation practices.[29] Claus divides conservation into conservation-far and conservation-near. Conservation-far protects nature by separating it from humans and safeguarding it,[29] for example through the creation of preserves or national parks; these are meant to keep flora and fauna away from human influence and have become a staple method in the West. Conservation-near, by contrast, is conservation through connection: reconnecting people to nature through traditions and beliefs in order to foster a desire to protect it.[29] The basis is that, instead of forcing people to comply with a separation from nature, conservationists work with locals and their traditions to find conservation efforts that work for all.[29]
Evidence-based conservation is the application of evidence in conservation management actions and policy making. It is defined as systematically assessing scientific information from published, peer-reviewed publications and texts, practitioners' experiences, independent expert assessment, and local and indigenous knowledge on a specific conservation topic. This includes assessing the current effectiveness of different management interventions, threats and emerging problems, and economic factors.[30]
Evidence-based conservation was organized in response to the observation that decision making in conservation was often based on intuition and/or practitioner experience, disregarding other forms of evidence of successes and failures (e.g., scientific information). This has led to costly and poor outcomes.[31] Evidence-based conservation provides access to information that supports decision making through an evidence-based framework of "what works" in conservation.[32]
The evidence-based approach to conservation is based on evidence-based practice, which started in medicine and later spread to nursing, education,[33] psychology, and other fields. It is part of the larger movement towards evidence-based practices.
https://en.wikipedia.org/wiki/Conservation_ethic
The conservation movement, also known as nature conservation, is a political, environmental, and social movement that seeks to manage and protect natural resources, including animal, fungus, and plant species, as well as their habitats, for the future. Conservationists are concerned with leaving the environment in a better state than they found it.[1] Evidence-based conservation seeks to use high-quality scientific evidence to make conservation efforts more effective.
The early conservation movement evolved out of the necessity to maintain natural resources such as fisheries, wildlife, water, and soil, pursued through wildlife management, conservation, and sustainable forestry. The contemporary conservation movement has broadened from the early movement's emphasis on the sustainable yield of natural resources and the preservation of wilderness areas to include the preservation of biodiversity. Some say the conservation movement is part of the broader and more far-reaching environmental movement, while others argue that they differ in both ideology and practice. Conservation is seen as differing from environmentalism, and it is generally a conservative school of thought which aims to preserve natural resources expressly for their continued sustainable use by humans.[2]
The conservation movement can be traced back to John Evelyn's work Sylva, which was presented as a paper to the Royal Society in 1662. Published as a book two years later, it was one of the most influential texts on forestry ever published.[3] Timber resources in England were becoming dangerously depleted at the time, and Evelyn advocated the importance of conserving the forests by managing the rate of depletion and ensuring that felled trees were replanted.
Khejarli massacre:
The Bishnoi narrate the story of Amrita Devi, a member of the sect who inspired as many as 363 other Bishnois to go to their deaths in protest of the cutting down of Khejri trees on 12 September 1730. The Maharaja of Jodhpur, Abhay Singh, requiring wood for the construction of a new palace, sent soldiers to cut trees in the village of Khejarli, which was called Jehnad at that time. Noticing their actions, Amrita Devi hugged a tree in an attempt to stop them. Her family then adopted the same strategy, as did other local people when the news spread. She told the soldiers that she considered their actions an insult to her faith and that she was prepared to die to save the trees. The soldiers did indeed kill her and others until Abhay Singh was informed of what was going on and intervened to stop the massacre.
Some of the 363 Bishnois who were killed protecting the trees were buried in Khejarli, where a simple grave with four pillars was erected. Every year in September, on Shukla Dashmi of Bhadrapad (a Hindi month), the Bishnois assemble there to commemorate the sacrifice made by their people to preserve the trees.
The field developed during the 18th century, especially in Prussia and France, where scientific forestry methods were developed. These methods were first applied rigorously in British India from the early 19th century. The government was interested in the use of forest produce and began managing the forests with measures to reduce the risk of wildfire in order to protect the "household" of nature, as it was then termed. This early ecological idea was intended to preserve the growth of delicate teak trees, an important resource for the Royal Navy.
Concerns over teak depletion were raised as early as 1799 and 1805, when the Navy was undergoing a massive expansion during the Napoleonic Wars; this pressure led to the first formal conservation act, which prohibited the felling of small teak trees. The first forestry officer was appointed in 1806 to regulate and preserve the trees necessary for shipbuilding.[4]
This promising start received a setback in the 1820s and 1830s, when laissez-faire economics and complaints from private landowners brought these early conservation attempts to an end.
In 1837, American poet George Pope Morris published "Woodman, Spare that Tree!", a Romantic poem urging a lumberjack to spare an oak tree that has sentimental value. The poem was set to music later that year by Henry Russell, and lines from the song have been quoted by environmentalists.[5]
Conservation was revived in the mid-19th century, with the first practical application of scientific conservation principles to the forests of India. The conservation ethic that began to evolve included three core principles: that human activity damaged the environment, that there was a civic duty to maintain the environment for future generations, and that scientific, empirically based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing many medico-topographical reports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments.[6] Edward Percy Stebbing warned of the desertification of India. The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state management of forests in the world.[7]
These local attempts gradually received more attention from the British government as the unregulated felling of trees continued unabated. In 1850, the British Association in Edinburgh formed a committee to study forest destruction at the behest of Hugh Cleghorn, a pioneer in the nascent conservation movement.
He had become interested in forest conservation in Mysore in 1847 and gave several lectures at the Association on the failure of agriculture in India. These lectures influenced the government under Governor-General Lord Dalhousie to introduce the first permanent and large-scale forest conservation program in the world in 1855, a model that soon spread to other colonies as well as the United States. In the same year, Cleghorn organised the Madras Forest Department, and in 1860 the department banned the use of shifting cultivation.[8] Cleghorn's 1861 manual, The forests and gardens of South India, became the definitive work on the subject and was widely used by forest assistants in the subcontinent.[9] In 1861, the Forest Department extended its remit into the Punjab.[10]
Sir Dietrich Brandis, a German forester, joined the British service in 1856 as superintendent of the teak forests of the Pegu division in eastern Burma. During that time, Burma's teak forests were controlled by militant Karen tribals. He introduced the "taungya" system,[11] in which Karen villagers provided labor for clearing, planting, and weeding teak plantations. After seven years in Burma, Brandis was appointed Inspector General of Forests in India, a position he served in for 20 years. He formulated new forest legislation and helped establish research and training institutions. The Imperial Forest School at Dehradun was founded by him.[12][13]
Germans were prominent in the forestry administration of British India. As well as Brandis, Berthold Ribbentrop and Sir William P.D. Schlich brought new methods to Indian conservation, the latter becoming Inspector-General in 1883 after Brandis stepped down. Schlich helped to establish the journal Indian Forester in 1874 and became the founding director of the first forestry school in England, at Cooper's Hill, in 1885.[14] He authored the five-volume Manual of Forestry (1889–96) on silviculture, forest management, forest protection, and forest utilization, which became the standard and enduring textbook for forestry students.
The American movement received its inspiration from 19th-century works that exalted the inherent value of nature, quite apart from human usage. Author Henry David Thoreau (1817–1862) made key philosophical contributions that exalted nature. Thoreau was interested in people's relationship with nature and studied this by living close to nature in a simple life. He published his experiences in the book Walden, which argued that people should become intimately close with nature.[15] The ideas of Sir Brandis, Sir William P.D. Schlich, and Carl A. Schenck were also very influential; Gifford Pinchot, the first chief of the USDA Forest Service, relied heavily upon Brandis' advice for introducing professional forest management in the U.S. and on how to structure the Forest Service.[16][17] In 1864, Abraham Lincoln established federal protection of Yosemite, before the first national park (Yellowstone National Park) was created.
Both conservationists and preservationists appeared in political debates during the Progressive Era (the 1890s–early 1920s). There were three main positions.
The debate between conservation and preservation reached its peak in the public controversy over the construction of the Hetch Hetchy dam in California's Yosemite National Park, which supplies San Francisco's water. Muir, leading the Sierra Club, declared that the valley must be preserved for the sake of its beauty: "No holier temple has ever been consecrated by the heart of man."
President Roosevelt put conservationist issues high on the national agenda.[21] He worked with all the major figures of the movement, especially his chief advisor on the matter, Gifford Pinchot, and was deeply committed to conserving natural resources. He encouraged the Newlands Reclamation Act of 1902 to promote federal construction of dams to irrigate small farms and placed 230 million acres (360,000 sq mi; 930,000 km2) under federal protection. Roosevelt set aside more federal land for national parks and nature preserves than all of his predecessors combined.[22]
Roosevelt established the United States Forest Service, signed into law the creation of five national parks, and signed the 1906 Antiquities Act, under which he proclaimed 18 new national monuments. He also established the first 51 bird reserves, four game preserves, and 150 national forests, including Shoshone National Forest, the nation's first. The area of the United States that he placed under public protection totals approximately 230,000,000 acres (930,000 km2).
Gifford Pinchot had been appointed by McKinley as chief of the Division of Forestry in the Department of Agriculture. In 1905, his department gained control of the national forest reserves. Pinchot promoted private use (for a fee) under federal supervision. In 1907, Roosevelt designated 16 million acres (65,000 km2) of new national forests just minutes before a deadline.[23]
In May 1908, Roosevelt sponsored the Conference of Governors, held in the White House, with a focus on natural resources and their most efficient use. Roosevelt delivered the opening address: "Conservation as a National Duty".
In 1903 Roosevelt toured the Yosemite Valley with John Muir, who had a very different view of conservation, and tried to minimize commercial use of water resources and forests. Working through the Sierra Club he founded, Muir succeeded in 1905 in having Congress transfer the Mariposa Grove and Yosemite Valley to the federal government.[24] While Muir wanted nature preserved for its own sake, Roosevelt subscribed to Pinchot's formulation, "to make the forest produce the largest amount of whatever crop or service will be most useful, and keep on producing it for generation after generation of men and trees."[25]
Theodore Roosevelt's view on conservationism remained dominant for decades; Franklin D. Roosevelt authorised the building of many large-scale dams and water projects, as well as the expansion of the National Forest System to buy out sub-marginal farms. In 1937, the Pittman–Robertson Federal Aid in Wildlife Restoration Act was signed into law, providing funding for state agencies to carry out their conservation efforts.
Environmental issues reemerged on the national agenda in 1970, with Republican Richard Nixon playing a major role, especially with his creation of the Environmental Protection Agency. The debates over the public lands and environmental politics played a supporting role in the decline of liberalism and the rise of modern environmentalism. Although Americans consistently rank environmental issues as "important", polling data indicates that in the voting booth voters rank environmental issues low relative to other political concerns.
The growth of the Republican party's political power in the inland West (apart from the Pacific coast) was facilitated by the rise of popular opposition to public lands reform. Successful Democrats in the inland West and Alaska typically take more conservative positions on environmental issues than Democrats from the coastal states. Conservatives drew on new organizational networks of think tanks, industry groups, and citizen-oriented organizations, and they began to deploy new strategies that affirmed individuals' rights to their property, to extraction, to hunting and recreation, and to the pursuit of happiness unencumbered by the federal government, at the expense of resource conservation.[26]
In 2019, Bram Büscher and Robert Fletcher proposed the idea of convivial conservation. Convivial conservation draws on social movements and concepts like environmental justice and structural change to create a post-capitalist approach to conservation.[27] It rejects both human-nature dichotomies and capitalistic political economies. Built on a politics of equity, structural change and environmental justice, convivial conservation is considered a radical theory, as it focuses on the structural political economy of modern nation states and the need to create structural change.[28] It offers a more integrated approach, reconfiguring the nature-human relationship to create a world in which humans are recognized as a part of nature. The emphasis on nature as for and by humans creates a human responsibility to care for the environment as a way of caring for themselves. It also redefines nature as not only pristine and untouched, but cultivated by humans in everyday settings. The theory is a long-term process of structural change to move away from capitalist valuation in favor of a system emphasizing everyday and local living.[28] Convivial conservation thus describes a nature that includes humans rather than excluding them from the necessity of conservation. While other conservation theories integrate some of the elements of convivial conservation, none move away from both dichotomies and capitalist valuation principles.
The early years of the environmental and conservation movements were rooted in the safeguarding of game to support the recreation activities of elite white men, such as sport hunting.[29] This led to an economy to support and perpetuate these activities, as well as continued wilderness conservation to support the corporate interests supplying the hunters with the equipment needed for their sport.[29] Game parks in England and the United States allowed wealthy hunters and fishermen to deplete wildlife, while hunting by Indigenous groups, laborers, the working class, and poor citizens, especially for the express use of sustenance, was vigorously monitored.[29] Scholars have shown that the establishment of the U.S. national parks, while setting aside land for preservation, was also a continuation of preserving the land for the recreation and enjoyment of elite white hunters and nature enthusiasts.[29]
While Theodore Roosevelt was one of the leading activists for the conservation movement in the United States, he also believed that threats to the natural world were equally threats to white Americans. Roosevelt and his contemporaries held the belief that the cities, industries and factories overtaking the wilderness and threatening the native plants and animals were also consuming and threatening the racial vigor that they believed made white Americans superior.[30] Roosevelt strongly believed that white male virility depended on wildlife for its vigor, and that, consequently, depleting wildlife would result in a racially weaker nation.[30] This led Roosevelt to support the passage of many immigration restrictions, eugenics laws and wildlife preservation laws.[30] For instance, Roosevelt established the first national monuments through the Antiquities Act of 1906 while also endorsing the removal of Indigenous Americans from their tribal lands within the parks.[31] This move was promoted and endorsed by other leaders of the conservation movement, including Frederick Law Olmsted, a leading landscape architect, conservationist, and supporter of the national park system, and Gifford Pinchot, a leading eugenicist and conservationist.[31] Furthering the economic exploitation of the environment and national parks for wealthy whites was the beginning of ecotourism in the parks, which included allowing some Indigenous Americans to remain so that tourists could get what was considered the full "wilderness experience".[32]
Another long-term supporter, partner, and inspiration to Roosevelt, Madison Grant, was a well-known American eugenicist and conservationist.[30] Grant worked alongside Roosevelt in the American conservation movement and was even secretary and president of the Boone and Crockett Club.[33] In 1916, Grant published the book "The Passing of the Great Race, or The Racial Basis of European History", which based its premise on eugenics and outlined a hierarchy of races, with white, "Nordic" men at the top and all other races below.[33] The German translation of this book was used by Nazi Germany as the source for many of their beliefs[33] and was even proclaimed by Hitler to be his "Bible".[31]
One of the first established conservation agencies in the United States is the National Audubon Society. Founded in 1905, its priority was to protect and conserve various waterbird species.[34] However, the first state-level Audubon group was created in 1896 by Harriet Hemenway and Minna B. Hall to convince women to refrain from buying hats made with bird feathers, a common practice at the time.[34] The organization is named after John Audubon, a naturalist and legendary bird painter.[35] Audubon was also a slaveholder whose books included many racist tales.[35] Despite his views on racial inequality, Audubon did find black and Indigenous people to be scientifically useful, often using their local knowledge in his books and relying on them to collect specimens for him.[35]
The ideology of the conservation movement in Germany paralleled that of the U.S. and England.[36] Early German naturalists of the 20th century turned to the wilderness to escape the industrialization of cities. However, many of these early conservationists became part of and influenced the Nazi party. Like elite and influential Americans of the early 20th century, they embraced eugenics and racism and promoted the idea that Nordic people are superior.[36]
Although the conservation movement developed in Europe in the 18th century, Costa Rica is often heralded as its modern champion.[37] Costa Rica hosts an astonishing number of species given its size, with more animal and plant species than the US and Canada combined:[38] over 500,000 species of plants and animals, despite the country being only 250 miles long and 150 miles wide. A widely accepted theory for the origin of this unusual density of species is the free mixing of species from both North and South America on this "inter-oceanic" and "inter-continental" landscape.[38] Preserving the natural environment of this fragile landscape has therefore drawn the attention of many international scholars and scientists.
MINAE (the Ministry of Environment, Energy and Telecommunications) is responsible for many conservation efforts in Costa Rica, which it carries out through its agencies, including SINAC (the National System of Conservation Areas), FONAFIFO (the national forest fund), and CONAGEBIO (the National Commission for Biodiversity Management).
Costa Rica has made conservation a national priority and has been at the forefront of preserving its natural environment, with 28% of its land protected in the form of national parks, reserves, and wildlife refuges under the administrative control of SINAC (the National System of Conservation Areas),[39] a division of MINAE (the Ministry of Environment, Energy and Telecommunications). SINAC has subdivided the country into various zones depending on the ecological diversity of each region, as seen in figure 1.
The country has used this ecological diversity to its economic advantage in the form of a thriving ecotourism industry, putting its commitment to nature on display to visitors from across the globe. The tourism market in Costa Rica is estimated to grow by USD 1.34 billion from 2023 to 2028, at a CAGR of 5.76%.
You know, when we first set up WWF, our objective was to save endangered species from extinction. But we have failed completely; we haven't managed to save a single one. If only we had put all that money into condoms, we might have done some good.
The World Wide Fund for Nature (WWF) is an international non-governmental organization founded in 1961, working in the field of wilderness preservation and the reduction of human impact on the environment.[42] It was formerly named the "World Wildlife Fund", which remains its official name in Canada and the United States.[42]
WWF is the world's largest conservation organization, with over five million supporters worldwide, working in more than 100 countries and supporting around 1,300 conservation and environmental projects.[43] It has invested over $1 billion in more than 12,000 conservation initiatives since 1995.[44] WWF is a foundation that in 2014 derived 55% of its funding from individuals and bequests, 19% from government sources (such as the World Bank, DFID, and USAID) and 8% from corporations.[45][46]
WWF aims to "stop the degradation of the planet's natural environment and to build a future in which humans live in harmony with nature."[47] The Living Planet Report has been published by WWF every two years since 1998; it is based on a Living Planet Index and an ecological footprint calculation.[42] In addition, WWF has launched several notable worldwide campaigns, including Earth Hour and Debt-for-Nature Swap, and its current work is organized around six areas: food, climate, freshwater, wildlife, forests, and oceans.[42][44]
Institutions such as the WWF have historically been the cause of the displacement and divide between Indigenous populations and the lands they inhabit. The reason is the organization's historically colonial, paternalistic, and neoliberal approaches to conservation. Claus, in her article "Drawing the Sea Near: Satoumi and Coral Reef Conservation in Okinawa", expands on this approach, called "conservation far", in which access to lands is open to external foreign entities, such as researchers or tourists, but prohibited to local populations. The conservation initiatives are therefore taking place "far" away. This entity is largely unaware of the customs and values held by those within the territory surrounding nature and their role within it.[48]
In Japan, the town of Shiraho had traditional ways of tending to nature that were lost due to colonization and militarization by the United States. The return to traditional sustainability practices constituted a "conservation near" approach, which engages those near in proximity to the lands in the conservation efforts and holds them accountable for their direct effects on its preservation. While conservation-far treats sight as the main medium of interaction between people and the environment, conservation-near permits a hands-on, full-sensory experience.[48] The emphasis on observation alone stems from a deep-seated association between sight and intellect; the alternative, a more bodily or "primitive" consciousness, has historically been associated with lower intelligence and people of color. A new, integrated approach to conservation has been investigated in recent years by institutions such as WWF.[48] It centers socionatural relationships on reciprocity and empathy, making conservation efforts accountable to the local community and its ways of life, and responsive to the values, ideals, and beliefs of the locals. Japanese seascapes are often integral to the identity of residents and include historical memories and spiritual engagements which need to be recognized and considered.[48] The involvement of communities gives residents a stake in the issue, leading to long-term solutions that emphasize sustainable resource usage and the empowerment of communities. Conservation efforts can then take cultural values into consideration, rather than the ideals often imposed by foreign activists.
Evidence-based conservation is the application of evidence in conservation biology and environmental management actions and policy making. It is defined as systematically assessing scientific information from published, peer-reviewed publications and texts, practitioners' experiences, independent expert assessment, and local and indigenous knowledge on a specific conservation topic. This includes assessing the current effectiveness of different management interventions, threats and emerging problems, and economic factors.[49]
Evidence-based conservation was organized in response to the observation that decision making in conservation was based on intuition and/or practitioner experience, often disregarding other forms of evidence of successes and failures (e.g. scientific information). This has led to costly and poor outcomes.[50] Evidence-based conservation provides access to information that supports decision making through an evidence-based framework of "what works" in conservation.[51]
Deforestation and overpopulation are issues affecting all regions of the world. The consequent destruction of wildlife habitat has prompted the creation of conservation groups in other countries, some founded by local hunters who have witnessed declining wildlife populations first hand. Solving problems of living conditions in the cities and the overpopulation of such places was also highly important for the conservation movement.
The idea of incentive conservation is a modern one, but its practice has clearly defended some of the sub-Arctic wildernesses and the wildlife in those regions for thousands of years, especially by indigenous peoples such as the Evenk, Yakut, Sami, Inuit and Cree. The fur trade and hunting by these peoples have preserved these regions for thousands of years. Ironically, the pressure now upon them comes from non-renewable resources such as oil, sometimes used to make synthetic clothing that is advocated as a humane substitute for fur. (See Raccoon dog for a case study of the conservation of an animal through the fur trade.) Similarly, in the case of the beaver, hunting and the fur trade were thought to have brought about the animal's demise, when in fact they were an integral part of its conservation. For many years children's books stated, and still do, that the decline in the beaver population was due to the fur trade. In reality, however, the decline in beaver numbers was caused by habitat destruction and deforestation, as well as its continued persecution as a pest (it causes flooding). In Cree lands, however, where the population valued the animal for meat and fur, it continued to thrive. The Inuit defend their relationship with the seal in response to outside critics.[52]
The Izoceño-Guaraní of Santa Cruz Department, Bolivia, is a tribe of hunters who were influential in establishing the Capitania del Alto y Bajo Isoso (CABI). CABI promotes economic growth and survival of the Izoceño people while discouraging the rapid destruction of habitat within Bolivia's Gran Chaco. They are responsible for the creation of the 34,000 square kilometre Kaa-Iya del Gran Chaco National Park and Integrated Management Area (KINP). The KINP protects the most biodiverse portion of the Gran Chaco, an ecoregion shared with Argentina, Paraguay and Brazil. In 1996, the Wildlife Conservation Society joined forces with CABI to institute wildlife and hunting monitoring programs in 23 Izoceño communities. The partnership combines traditional beliefs and local knowledge with the political and administrative tools needed to effectively manage habitats. The programs rely solely on voluntary participation by local hunters who perform self-monitoring techniques and keep records of their hunts. The information obtained by the hunters participating in the program has provided CABI with important data required to make educated decisions about the use of the land. Hunters have been willing participants in this program because of pride in their traditional activities, encouragement by their communities and expectations of benefits to the area.
In order to discourage illegal South African hunting parties and ensure future local use and sustainability, indigenous hunters in Botswana began lobbying for and implementing conservation practices in the 1960s. The Fauna Preservation Society of Ngamiland (FPS) was formed in 1962 by the husband-and-wife team of Robert Kay and June Kay, environmentalists working in conjunction with the Batawana tribes to preserve wildlife habitat.
The FPS promotes habitat conservation and provides local education for preservation of wildlife. Conservation initiatives were met with strong opposition from the Botswana government because of the monies tied to big-game hunting. In 1963, BaTawanga Chiefs and tribal hunter/adventurers in conjunction with the FPS founded Moremi National Park and Wildlife Refuge, the first area to be set aside by tribal people rather than governmental forces. Moremi National Park is home to a variety of wildlife, including lions, giraffes, elephants, buffalo, zebra, cheetahs and antelope, and covers an area of 3,000 square kilometers. Most of the groups involved with establishing this protected land were involved with hunting and were motivated by their personal observations of declining wildlife and habitat.
https://en.wikipedia.org/wiki/Conservation_movement
Conservation-reliant species are animal or plant species that require continuing species-specific wildlife management interventions such as predator control, habitat management and parasite control to survive, even when a self-sustainable recovery in population is achieved.[1]
The term "conservation-reliant species" grew out of the conservation biology undertaken by The Endangered Species Act at Thirty Project (launched 2001)[2] and its popularization by project leader J. Michael Scott.[3] Its first use in a formal publication was in Frontiers in Ecology and the Environment in 2005.[4] Worldwide use of the term has not yet developed, and it has not yet appeared in a publication compiled outside North America.
Passage of the 1973 Endangered Species Act (ESA) carried with it the assumption that endangered species would be delisted as their populations recovered. It was assumed they would then thrive under existing regulations and that the protections afforded under the ESA would no longer be needed. However, eighty percent of species currently listed under the ESA fail to meet that assumption. To survive, they require species-specific conservation interventions (e.g. control of predators, competitors, and nest parasites; prescribed burns; altered hydrological processes) and are thus conservation-reliant.[5]
The criteria for assessing whether a species is conservation-reliant are:[6]
There are five major areas of management action for conservation of vulnerable species:
A prominent example is in India, where tigers, an apex predator and the national animal, are considered a conservation-reliant species. This keystone species can maintain self-sustaining wild populations; however, tigers require ongoing management actions because threats are pervasive, recurrent and put them at risk of extinction. These threats are rooted in the changing socio-economic, political and spatial organization of society in India. Tigers have become extinct in some areas because of extrinsic factors such as habitat destruction, poaching, disease, floods, fires and drought, decline of prey species for the same reasons, as well as intrinsic factors such as demographic stochasticity and genetic deterioration.
Recognizing the conservation reliance of tigers, Project Tiger is establishing a national science-based framework for monitoring tiger population trends in order to manage the species more effectively. India now has 28 tiger reserves, located in 17 states. These reserves cover 37,761 square kilometres (14,580 sq mi), including 1.14% of the total land area of the country. These reserves are kept free of biotic disturbances, forestry operations, collection of minor forest products, grazing and human disturbance. The populations of tigers in these reserves now constitute some of the most important tiger source populations in the country.[7]
The magnitude and pace of human impacts on the environment make it unlikely that substantial progress will be made in delisting many species unless the definition of "recovery" includes some form of active management. Preventing delisted species from again being at risk of extinction may require continuing, species-specific management actions. Viewing "recovery" of conservation-reliant species as a continuum of phases rather than a simple recovered/not-recovered status may enhance the ability to manage such species within the framework of the Endangered Species Act. With ongoing loss of habitat, disruption of natural cycles, and increasing impacts of non-native invasive species, it is probable that the number of conservation-reliant species will increase.
It has been proposed that the development of "recovery management agreements", with legally and biologically defensible contracts, would provide for continuing conservation management following delisting. The use of such formalized agreements would facilitate shared management responsibilities between federal wildlife agencies and other federal agencies, and with state, local, and tribal governments, as well as with private entities that have demonstrated the capability to meet the needs of conservation-reliant species.[4]
https://en.wikipedia.org/wiki/Conservation_reliant_species
Crowdmapping is a subtype of crowdsourcing[1][2] by which aggregation of crowd-generated inputs such as captured communications and social media feeds is combined with geographic data to create a digital map that is as up-to-date as possible[3] on events such as wars, humanitarian crises, crime, elections, or natural disasters.[4][5] Such maps are typically created collaboratively by people coming together over the Internet.[3][6]
The information can typically be sent to the map initiator or initiators by SMS or by filling out a form online, and is then gathered on a map online automatically or by a dedicated group.[7] In 2010, Ushahidi released "Crowdmap", a free and open-source platform by which anyone can start crowdmapping projects.[8][9][10][11][12]
Crowdmapping can be used to track fires, floods, pollution,[6] crime, political violence, and the spread of disease, bringing a level of transparency to fast-moving events that are difficult for traditional media to adequately cover, as well as to problem areas[6] and longer-term trends that may be difficult to identify through the reporting of individual events.[5]
During disasters the timeliness of relevant maps is critical as the needs and locations of victims may change rapidly.[3]
The use of crowdmapping by authorities can improve situational awareness during an incident and be used to support incident response.[6]
Crowdmaps are an efficient way to visually demonstrate the geographical spread of a phenomenon.[7]
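The aggregation step described above — combining many individual geotagged reports into a map layer — can be sketched in a few lines. This is a minimal illustration, not the actual Ushahidi implementation; the report format, categories, coordinates, and grid size are all assumptions made for the example:

```python
from collections import Counter

def bin_reports(reports, cell_size=0.5):
    """Group (category, lat, lon) reports into grid cells.

    Each report is snapped to the nearest cell_size-degree grid point,
    so a map layer can color cells by report density per category.
    """
    counts = Counter()
    for category, lat, lon in reports:
        cell = (round(lat / cell_size) * cell_size,
                round(lon / cell_size) * cell_size)
        counts[(category, cell)] += 1
    return counts

# Hypothetical crowd-submitted reports: (category, latitude, longitude)
reports = [
    ("flood", 9.05, 7.49),
    ("flood", 9.10, 7.52),
    ("fire",  9.05, 7.49),
]
print(bin_reports(reports))
```

Real platforms add verification, deduplication, and time windows on top of this kind of spatial binning, but the core idea is the same: many noisy individual reports become a single, continually updated density layer.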
https://en.wikipedia.org/wiki/Crowdmapping
Ecology (from Ancient Greek οἶκος (oîkos) 'house' and -λογία (-logía) 'study of')[A] is the natural science of the relationships among living organisms and their environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere levels. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history.
Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; the movement of materials and energy through living communities; the successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes.
Ecology has practical applications in fields such as conservation biology, wetland management, natural resource management, and human ecology.
The word ecology (German: Ökologie) was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s.[1] Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory.
Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.
The scope of ecology contains a wide array of interacting levels of organization spanning micro-level (e.g., cells) to planetary-scale (e.g., biosphere) phenomena. Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations, which aggregate into distinct ecological communities). Because ecosystems are dynamic and do not necessarily follow a linear successional route, changes may occur quickly, or slowly over thousands of years, before specific forest successional stages are brought about by biological processes.
An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it.[2] Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities.[3] The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole.[4] Some ecological principles, however, do exhibit collective properties where the sum of the components explains the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame.[5]
The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes.[6]
System behaviors must first be arrayed into different levels of organization. Behaviors corresponding to higher levels occur at slow rates; conversely, lower organizational levels exhibit rapid rates. For example, individual tree leaves respond rapidly to momentary changes in light intensity, CO2 concentration, and the like. The growth of the tree responds more slowly and integrates these short-term changes.
The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open to broader-scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer-scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales.
To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a hierarchy, ranging in scale from (as far as ecology is concerned) organisms, to populations, to guilds, to communities, to ecosystems, to biomes, and up to the level of the biosphere.[8] This framework forms a panarchy[9] and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties."[10]: 14
Biodiversity refers to the variety of life and its processes. It includes the variety of living organisms, the genetic differences among them, the communities and ecosystems in which they occur, and the ecological and evolutionary processes that keep them functioning, yet ever-changing and adapting.
Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization.[12][13][14] Biodiversity includes species diversity, ecosystem diversity, and genetic diversity, and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels.[13][15][16]
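One of the many ways to index species diversity mentioned above is an abundance-based index; the Shannon index, H' = -Σ p_i ln p_i (where p_i is the proportion of individuals belonging to species i), is a standard example. The community counts below are hypothetical:

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) from species abundance counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Two communities with the same species richness (4 species)
# but different evenness of abundance:
even   = [25, 25, 25, 25]   # maximally even: H' reaches ln(4)
skewed = [97, 1, 1, 1]      # dominated by one species: much lower H'
print(shannon_index(even), shannon_index(skewed))
```

The example shows why diversity is not just species richness: both communities have four species, but the index rewards evenness as well as variety.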
Biodiversity plays an important role in ecosystem services, which by definition maintain and improve human quality of life.[14][17][18] Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services,[19][20] and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced.[21] An understanding of biodiversity has practical applications for species- and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry.[22]
The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result.[24] More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal."[25]: 745 For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem.
Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevices, where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment.[24][26][27]
Definitions of the niche date back to 1917,[30] but G. Evelyn Hutchinson made conceptual advances in 1957[31][32] by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes."[30]: 519 The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists.[30][32][33] The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness."[34]: 71
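The hyperspace definition lends itself to a simple computational sketch. The following is illustrative only and not from the article: the environmental variables, tolerance limits, and site conditions are all hypothetical. A niche is modelled as one tolerance interval per environmental dimension, and a species can persist at a site only if every variable falls within its limits.

```python
# Illustrative sketch: the Hutchinsonian niche as a hyperspace of tolerance
# intervals, one per environmental variable. All variables and limits below
# are hypothetical.

fundamental_niche = {
    "temperature_c": (5.0, 30.0),  # range permitting positive fitness
    "salinity_ppt": (0.0, 10.0),
    "ph": (6.0, 8.5),
}

def can_persist(niche, conditions):
    """A site lies inside the niche if every variable is within tolerance."""
    return all(lo <= conditions[var] <= hi for var, (lo, hi) in niche.items())

inside = can_persist(fundamental_niche,
                     {"temperature_c": 18.0, "salinity_ppt": 2.0, "ph": 7.0})
outside = can_persist(fundamental_niche,
                      {"temperature_c": 40.0, "salinity_ppt": 2.0, "ph": 7.0})
```

The realized niche could be modelled the same way, by intersecting these intervals with further constraints imposed by competitors and predators.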
Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements.[35] Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits.[36] Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements.[37] Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities.[38] The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species.[24]
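Competitive exclusion is often formalized with the classic Lotka-Volterra competition equations, a standard textbook model rather than one given in this passage; all parameter values below are hypothetical. With complete niche overlap, the species with the higher carrying capacity drives the other toward extinction.

```python
# Illustrative sketch of the Lotka-Volterra competition model (a standard
# formalization of competitive exclusion, not from the article). Parameters
# are hypothetical.

def simulate_competition(r1, r2, K1, K2, a12, a21, n1, n2, dt=0.01, steps=200000):
    """Euler integration of dNi/dt = ri*Ni*(1 - (Ni + aij*Nj)/Ki)."""
    for _ in range(steps):
        dn1 = r1 * n1 * (1 - (n1 + a12 * n2) / K1)
        dn2 = r2 * n2 * (1 - (n2 + a21 * n1) / K2)
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

# Complete niche overlap (a12 = a21 = 1) with unequal carrying capacities:
# the species with the higher K persists; the other declines toward zero.
n1, n2 = simulate_competition(r1=1.0, r2=1.0, K1=100.0, K2=80.0,
                              a12=1.0, a21=1.0, n1=10.0, n2=10.0)
```

Reducing the overlap coefficients (a12, a21 < 1), corresponding to the subtle habitat or dietary differences mentioned above, allows stable coexistence instead.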
Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms.[39] The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat, whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats."[40]: 373
The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche.[28][41] An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains, and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time.[5][28][29]
Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation.[42] There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert.[43] Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape.[44] Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans.[45]
The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals.[46] Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory.[47] The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance.[48]
Population ecology studies the dynamics of species populations and how these populations interact with the wider environment.[5] A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat.[49]
A primary law of population ecology is the Malthusian growth model,[50] which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant."[50]: 18 Simplified population models usually start with four variables: death, birth, immigration, and emigration.
An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis, which states that random processes create the observed data. In these island models, the rate of population change is described by:

dN/dt = bN - dN = (b - d)N = rN
where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change.[50][51]
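Solving dN/dt = rN gives exponential growth, N(t) = N(0)e^(rt). A minimal numerical sketch of this closed-island model, with hypothetical birth and death rates:

```python
# Minimal sketch of the closed-island model dN/dt = (b - d)N = rN.
# Parameter values are hypothetical, chosen only for illustration.
import math

def exponential_growth(n0, b, d, t):
    """Closed population: no immigration or emigration, so r = b - d."""
    r = b - d
    return n0 * math.exp(r * t)

# With births exceeding deaths (r > 0) the population grows without bound;
# with deaths exceeding births it declines toward zero.
n_grow = exponential_growth(n0=100.0, b=0.5, d=0.3, t=10.0)     # r = 0.2
n_decline = exponential_growth(n0=100.0, b=0.3, d=0.5, t=10.0)  # r = -0.2
```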
Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst:

dN(t)/dt = N(t)(r - αN(t))
where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change, commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size (dN(t)/dt) will grow to approach equilibrium, where (dN(t)/dt = 0), when the rates of increase and crowding are balanced, r/α. A common, analogous model fixes the equilibrium, r/α, as K, which is known as the "carrying capacity."
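A minimal numerical sketch of Verhulst's model, using simple Euler steps and hypothetical parameter values, shows the population approaching the equilibrium r/α:

```python
# Sketch of the logistic model dN/dt = N(r - a*N), integrated with Euler
# steps. Parameter values are hypothetical; the equilibrium is K = r/a.

def logistic(n0, r, a, dt=0.01, steps=10000):
    n = n0
    for _ in range(steps):
        n += n * (r - a * n) * dt
    return n

# Starting far below equilibrium, the population approaches the carrying
# capacity K = r/a = 0.5 / 0.005 = 100.
n_final = logistic(n0=2.0, r=0.5, a=0.005)
```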
Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas.[51][52] In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion,[53] or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data."[54]
The concept of metapopulations was defined in 1969[55] as "a population of populations which go extinct locally and recolonize".[56]: 105 Metapopulation ecology is another statistical approach that is often used in conservation research.[57] Metapopulation models simplify the landscape into patches of varying levels of quality,[58] and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat.[59] Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population.[60][61]
In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.[62][63]
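The extinction-colonization dynamic can be sketched with the classic Levins patch-occupancy model from the same 1969 metapopulation tradition; the colonization and extinction rates below are hypothetical.

```python
# Sketch of the Levins patch-occupancy model: p is the fraction of habitat
# patches occupied, c the colonization rate, e the local extinction rate.
# Rate values are hypothetical.

def levins_equilibrium(c, e, p0=0.1, dt=0.01, steps=100000):
    """Euler integration of dp/dt = c*p*(1 - p) - e*p."""
    p = p0
    for _ in range(steps):
        p += (c * p * (1 - p) - e * p) * dt
    return p

# The metapopulation persists (p -> 1 - e/c) only when colonization
# outpaces extinction (c > e); otherwise occupancy collapses toward zero.
p_persist = levins_equilibrium(c=0.5, e=0.2)  # expected near 1 - 0.2/0.5 = 0.6
p_extinct = levins_equilibrium(c=0.2, e=0.5)
```

Source-sink structure refines this picture by letting c and e vary among patches, so that sinks persist only through immigration from sources.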
Community ecology examines how interactions among species and their environment affect the abundance, distribution and diversity of species within communities.
Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals.
These ecosystems, as we may call them, are of the most various kinds and sizes. They form one category of the multitudinous physical systems of the universe, which range from the universe as a whole down to the atom.
Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria).[66]
The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature").[67][68] Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted.[65] Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space.[69] Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is technoecosystems, which are affected by or primarily the result of human activity.[5]
A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. The simplified linear feeding pathway that moves from a basal trophic species to a top consumer is called the food chain. Food chains in an ecological community create a complex food web. Food webs are a type of concept map that is used to illustrate and study pathways of energy and material flows.[7][70][71]
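A food web, as a concept map of feeding pathways, can be sketched as a directed graph mapping each consumer to what it eats. The species in this example are hypothetical, chosen only to show how linear food chains emerge from a web:

```python
# Illustrative sketch: a food web as a directed graph from consumers to their
# food. Species names are hypothetical examples.

FOOD_WEB = {
    "grass": [],                       # basal trophic species (primary producer)
    "grasshopper": ["grass"],
    "mouse": ["grass", "grasshopper"],
    "hawk": ["mouse", "grasshopper"],  # top consumer
}

def food_chains(web, top):
    """Enumerate linear feeding pathways from a consumer down to basal species."""
    prey = web[top]
    if not prey:
        return [[top]]
    return [[top] + chain for p in prey for chain in food_chains(web, p)]

chains = food_chains(FOOD_WEB, "hawk")
# Every chain runs from the top consumer down to the basal producer "grass".
```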
Empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from small-scale studies are extrapolated to larger systems.[72] Feeding relations require extensive investigations, e.g. into the gut contents of organisms, which can be difficult to decipher, or stable isotopes can be used to trace the flow of nutrient diets and energy through a food web.[73] Despite these limitations, food webs remain a valuable tool in understanding community ecosystems.[74]
Food webs illustrate important principles of ecology: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Such linkages explain how ecological communities remain stable over time[75][76] and eventually can illustrate a "complete" web of life.[71][77][78][79]
The disruption of food webs may have a dramatic impact on the ecology of individual species or whole ecosystems. For instance, the replacement of an ant species by another (invasive) ant species has been shown to affect how elephants reduce tree cover and thus the predation of lions on zebras.[80][81]
A trophic level (from Greek troph, τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source."[82]: 383 Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level.[83] When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'.[84]
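The 'pyramid of numbers' can be sketched by summing abundance within each trophic level; the species, level assignments, and counts below are hypothetical:

```python
# Illustrative sketch: sorting species abundances into trophic levels yields
# a "pyramid of numbers". All species, levels, and counts are hypothetical.

observations = [
    ("grass", 1, 5000), ("clover", 1, 3000),  # (species, trophic level, count)
    ("grasshopper", 2, 600), ("vole", 2, 200),
    ("weasel", 3, 40),
    ("hawk", 4, 5),
]

def pyramid_of_numbers(obs):
    """Sum abundance per trophic level, ordered from the base upward."""
    totals = {}
    for _, level, count in obs:
        totals[level] = totals.get(level, 0) + count
    return [totals[level] for level in sorted(totals)]

pyramid = pyramid_of_numbers(observations)
# Abundance shrinks at each step away from the abiotic base.
```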
Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production).[5] Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators).[85] Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing.[86]
Trophic levels are part of the holistic or complex systems view of ecosystems.[87][88] Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system.[89] While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction."[90]: 815 Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores."[91]: 612
A keystone species is a species that is connected to a disproportionately large number of other species in the food web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds mean that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alters trophic dynamics and other food web connections, and can cause the extinction of other species.[92][93] The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature, as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability.[94]
Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure.[95] Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas).[96] While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied.[95][97]
Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity stems from the interplay among levels of biological organization as energy and matter are integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one."[98]: 209 Small-scale patterns do not necessarily explain large-scale phenomena, otherwise captured in the expression (coined by Aristotle) 'the sum is greater than the parts'.[99][100][E]
"Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric."[101]: 3 From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level.[48][102] Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history.[9][103] Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER).[104] The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856.[105] Another example is the Hubbard Brook study, which has been in operation since 1960.[106]
Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts.[107] "New properties emerge because the components interact, not because the basic nature of the components is changed."[5]: 8
Ecological studies are necessarily holistic as opposed to reductionistic.[36][102][108] Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism, which has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells.[109]
Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy.[110] The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution.[111] There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization.[36][48] While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes,[112][113] and evolution can be rapid, occurring on ecological timescales as short as one generation.[114]
All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication.[116] Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba.[117][118][119][120][121]
Adaptation is the central unifying concept in behavioural ecology.[122] Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness.[123][124]
Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology.[126] Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoiding, fleeing, or defending themselves. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat.
Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency.[33][112][127] For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk"[128] or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk."[129]
Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors.[130]
Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework.[131] "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition."[132][133] As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism,[131] a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...".[134]
Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusocialism has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates[119][124][135] and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps, are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony.[124] In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group, whereby it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members.[124][136]
Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate.[137] Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae.[138][139] If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots, forming an exchange network of carbohydrates for mineral nutrients.[140]
Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to the trees supplying the oxygen.[5][141] If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost to their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast.[142][143] Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their host, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure.[144][145]
Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time.[146] The Journal of Biogeography was established in 1974.[147] Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967,[148] is considered one of the fundamentals of ecological theory.[149]
Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies.[146] Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal,[149] and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages within a species is studied by vicariance biogeography, a sub-discipline of biogeography.[150] There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming.[151][152]
A population ecology concept is r/K selection theory,[D] one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, the density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.[153]
In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population.[33] Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring.[148][154]
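The interplay of the two variables can be illustrated with the classic logistic growth equation, dN/dt = rN(1 − N/K), in which growth is nearly exponential (r-dominated) at low density and levels off as the population approaches the carrying capacity K. The following is a minimal sketch, not a model from the literature on any particular species; the parameter values are arbitrary illustrations:

```python
# Logistic growth: dN/dt = r * N * (1 - N/K)
# r = intrinsic rate of natural increase, K = carrying capacity.
# Parameter values below are arbitrary, chosen only for illustration.

def logistic_step(n, r, k, dt=0.01):
    """Advance population size n by one forward-Euler step of length dt."""
    return n + r * n * (1 - n / k) * dt

def simulate(n0, r, k, steps=10000, dt=0.01):
    """Integrate the logistic equation from initial size n0."""
    n = n0
    for _ in range(steps):
        n = logistic_step(n, r, k, dt)
    return n

# A small founding population grows rapidly at first (density-independent,
# r-dominated phase), then levels off near K (density-dependent phase).
final = simulate(n0=10, r=0.5, k=1000)
print(round(final))  # converges toward K = 1000
```

At low N the term (1 − N/K) is close to 1 and growth is effectively exponential at rate r; as N nears K the term approaches 0 and growth stalls, which is the density-dependent crowding effect described above.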
The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the publication Molecular Ecology in 1992.[155] Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution.[156]
Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology.[156] For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous.[157] In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography.[158]
The history of life on Earth has been a history of interaction between living things and their surroundings. To a large extent, the physical form and the habits of the earth's vegetation and its animal life have been molded by the environment. Considering the whole span of earthly time, the opposite effect, in which life actually modifies its surroundings, has been relatively slight. Only within the moment of time represented by the present century has one species, man, acquired significant power to alter the nature of his world.
Ecology is as much a biological science as it is a human science.[5] Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three."[160]: 3 The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and scholars in other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century.[160][161]
The ecological complexities human beings are facing through the technological transformation of the planetary biome have brought on the Anthropocene. This unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems that builds upon, but moves beyond, the field of human ecology.[107] Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions, and of the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides a means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services that are critically necessary and beneficial to human health (cognitive and physiological) and economies; they even provide an information or reference function as a living library, giving opportunities for science and for cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology as they are the ultimate base foundation of global economics, since every commodity, and the capacity for exchange, ultimately stems from the ecosystems on Earth.[107][162][163][164]
Ecosystem management is not just about science nor is it simply an extension of traditional resource management; it offers a fundamental reframing of how humans may work with nature.
Ecology is employed as an applied science in the restoration of disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology".[166] Ecological science has boomed with industrial investment in restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry, for example, employ ecologists to develop, adapt, and implement ecosystem-based methods into the planning, operation, and restoration phases of land-use.
Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented the Wetland Ordinance,[167] improving the stability of its wetland environments by implementing soil amendments that improve groundwater storage and flow, and by trimming or removing vegetation that could harm water quality.[citation needed] Ecological science is used in methods of sustainable harvesting, disease and fire-outbreak management, fisheries stock management, the integration of land-use with protected areas and communities, and conservation in complex geo-political landscapes.[22][165][168][169]
The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle.[5][170] Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation."[171]: 62 The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat.[172]
The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. Once the effective environmental components are understood through reference to their causes, however, they conceptually link back together as an integrated whole, or holocoenotic system, as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem.[36][173]
A disturbance is any process that changes or removes biomass from a community, such as a fire, flood, drought, or predation.[174]Disturbances are both the cause and product of natural fluctuations within an ecological community.[175][174][176][177]Biodiversity can protect ecosystems from disturbances.[177]
The effect of a disturbance is often hard to predict, but there are numerous examples in which a single species can massively disturb an ecosystem. For example, a single-celled protozoan has been able to kill up to 100% of sea urchins in some coral reefs in the Red Sea and Western Indian Ocean. Sea urchins enable complex reef ecosystems to thrive by eating algae that would otherwise inhibit coral growth.[178] Similarly, invasive species can wreak havoc on ecosystems. For instance, invasive Burmese pythons have caused a 98% decline of small mammals in the Everglades.[179]
Metabolism – the rate at which energy and material resources are taken up from the environment, transformed within an organism, and allocated to maintenance, growth and reproduction – is a fundamental physiological trait.
The Earth was formed approximately 4.5 billion years ago.[181] As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface, and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gases that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved.[182]
Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability.[183] The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hv → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3–1 billion years prior.[183][184]
The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy.[112][113][173]
There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation.[112][113][173] Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds.[112][113][173]
Wetland conditions such as shallow water, high plant productivity, and anaerobic substrates provide a suitable environment for important physical, biological, and chemical processes. Because of these processes, wetlands play a vital role in global nutrient and element cycles.
Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with an O2 concentration below 2 mg/liter) and eventually completely anoxic, where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles.[185] Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis.
Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduces the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria.[185] The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water.[186]
The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth.
On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals.[112] Ecological traits, such as the allocation of biomass in trees during growth, are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves.[187] The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra).[188]
Climatic and osmotic pressure place physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths.[189] These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences.[112] For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes.[190]
Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients.[191][192][193] Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences.[194] Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations.[195]
Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems.[112] For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured.[198][199]
Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals.[185][200] Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems.[201][202]
Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur.[203] Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression.[204] While the issue of fire in relation to ecology and plants has been recognized for a long time,[205] Charles Cooper brought attention to the issue of forest fires in relation to the ecology of forest fire suppression and management in the 1960s.[206][207]
Native North Americans were among the first to influence fire regimes by controlling their spread near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials.[208] Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure opens new ecological niches for seedling establishment.[209][210] Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. This environmentally triggered release and germination of seeds is called serotiny.[211][212] Fire plays a major role in the persistence and resilience of ecosystems.[176]
Soil is the living top layer of mineral and organic matter that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor) results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere, where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed on and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation.[213][214] Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems.
Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by and feed back into the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from the geomorphological systems in soils.[215][216] Paleoecological studies of soils place the origin of bioturbation at a time before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils.[214][217][218]
Ecologists study and measure nutrient budgets to understand how these materials are regulated, flow, and recycled through the environment.[112][113][173] This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplifies and ultimately regulates the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing toward understanding global biogeochemistry.[219]
The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2,070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year.[220] There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, during the early to mid-Eocene, volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm.[221]
In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm.[222] The relative abundance and distribution of biodiversity alters the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change.
Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contribute to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate.
Large sections of permafrost are also melting to create a new mosaic of flooded areas having increased rates of soil decomposition activity that raises methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale.[223] Hence, there is a relationship between global warming, decomposition, and respiration in soils and wetlands, producing significant climate feedbacks and globally altered biogeochemical cycles.[107][224][225][226][227][228]
By ecology, we mean the whole science of the relations of the organism to the environment including, in the broad sense, all the "conditions of existence". Thus, the theory of evolution explains the housekeeping relations of organisms mechanistically as the necessary consequences of effectual causes; and so forms the monistic groundwork of ecology.
Ecology has a complex origin, due in large part to its interdisciplinary nature.[230] Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static unchanging things while varieties were seen as aberrations of an idealized type. This contrasts against the modern understanding of ecological theory, where varieties are viewed as the real phenomena of interest and as having a role in the origins of adaptations by means of natural selection.[5][231][232] Early conceptions of ecology, such as a balance and regulation in nature, can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile.[230] Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and behavior, giving an early analogue to the modern concept of an ecological niche.[233][234]
Nowhere can one see more clearly illustrated what may be called the sensibility of such an organic complex, – expressed by the fact that whatever affects any species belonging to it, must speedily have its influence of some sort upon the whole assemblage. He will thus be made to see the impossibility of studying any form completely, out of relation to the other forms, – the necessity for taking a comprehensive survey of the whole as a condition to a satisfactory understanding of any part.
Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732).[5] Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships.[236][237][238] Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others), laid the foundations of the modern ecological sciences.[239] The term "ecology" (German: Oekologie, Ökologie) was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866).[240] Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy.[229][241]
Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning;[242] others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895),[243] or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century.[244][245] Linnaeus founded an early branch of ecology that he called the economy of nature.[244] His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species.[229] Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous.[245]
From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment.[231] An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology.[248] While Charles Darwin is mainly noted for his treatise on evolution,[249] he was one of the founders of soil ecology,[250] and he made note of the first ecological experiment in The Origin of Species.[246] Evolutionary theory changed the way that researchers approached the ecological sciences.[251]
Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892.[252]
In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history.[236][239][253] Frederic Clements published the first American ecology book, Research Methods in Ecology, in 1905,[254] presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason,[255] who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations.[256]
The Clementsian superorganism theory was an overextended application of an idealistic form of holism.[36][109] The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept.[257][C] Around the same time, Charles Elton pioneered the concept of food chains in his classical book Animal Ecology.[84] Elton[84] defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text.[258] Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology.
In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists.[239][259][260] Ecology has also developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s[261] and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s.[262] Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers.[261]
"This whole chain of poisoning, then, seems to rest on a base of minute plants which must have been the original concentrators. But what of the opposite end of the food chain—the human being who, in probable ignorance of all this sequence of events, has rigged his fishing tackle, caught a string of fish from the waters of Clear Lake, and taken them home to fry for his supper?" (Rachel Carson, Silent Spring, 1962)
Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection.[239] The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located.[239][264] Palamar (2008)[264] notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics)[252] and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s.
In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT (C14H9Cl5), bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.[22][239][264][265]
Source: https://en.wikipedia.org/wiki/Ecology
An ecosystem (or ecological system) is a system formed by organisms in interaction with their environment.[2]: 458 The biotic and abiotic components are linked together through nutrient cycles and energy flows.
Ecosystems are controlled by external and internal factors. External factors—including climate, the parent materials that form the soil, and topography—control the overall structure of an ecosystem but are not themselves influenced by it. By contrast, internal factors both control and are controlled by ecosystem processes; these include decomposition, the types of species present, root competition, shading, disturbance, and succession. While external factors generally determine which resource inputs an ecosystem has, the availability of those resources within the ecosystem is controlled by internal factors.
Ecosystems are dynamic entities—they are subject to periodic disturbances and are always in the process of recovering from some past disturbance. The tendency of an ecosystem to remain close to its equilibrium state is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience.
Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, studies that look at differences between ecosystems to elucidate how they work, and direct manipulative experimentation. Biomes are general classes or categories of ecosystems. However, there is no clear distinction between biomes and ecosystems. Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy. Biotic factors of the ecosystem are living things, such as plants, animals, and bacteria, while abiotic factors are non-living components, such as water, soil, and the atmosphere.
Plants allow energy to enter the system through photosynthesis, building up plant tissue. Animals play an important role in the movement of matter and energy through the system, by feeding on plants and on one another. They also influence the quantity of plant and microbial biomass present. By breaking down dead organic matter, decomposers release carbon back to the atmosphere and facilitate nutrient cycling by converting nutrients stored in dead biomass back to a form that can be readily used by plants and microbes.
Ecosystems provide a variety of goods and services upon which people depend, and of which they may be a part. Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants. Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value". These include the maintenance of hydrological cycles, cleaning of air and water, the maintenance of oxygen in the atmosphere, crop pollination, and even intangibles such as beauty, inspiration, and opportunities for research. Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced and invasive species. These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered "collapsed". Ecosystem restoration can contribute to achieving the Sustainable Development Goals.
An ecosystem (or ecological system) consists of all the organisms and the abiotic pools (or physical environment) with which they interact.[3][4]: 5[2]: 458 The biotic and abiotic components are linked together through nutrient cycles and energy flows.[5]
"Ecosystem processes" are the transfers of energy and materials from one pool to another.[2]: 458 Ecosystem processes are known to "take place at a wide range of scales". Therefore, the correct scale of study depends on the question asked.[4]: 5
The term "ecosystem" was first used in 1935 in a publication by British ecologist Arthur Tansley. The term was coined by Arthur Roy Clapham, who came up with the word at Tansley's request.[6] Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment.[4]: 9 He later refined the term, describing it as "The whole system, ... including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment".[3] Tansley regarded ecosystems not simply as natural units, but as "mental isolates".[3] Tansley later defined the spatial extent of ecosystems using the term "ecotope".[7]
G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley's, combined Charles Elton's ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson's students, brothers Howard T. Odum and Eugene P. Odum, further developed a "systems approach" to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems.[4]: 9
Ecosystems are controlled by both external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. On broad geographic scales, climate is the factor that "most strongly determines ecosystem processes and structure".[4]: 14 Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of energy available to the ecosystem.[8]: 145
Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development, and the movement of water through a system. For example, ecosystems can be quite different if situated in a small depression on the landscape versus on an adjacent steep hillside.[9]: 39[10]: 66
Other external factors that play an important role in ecosystem functioning include time and potential biota, the organisms that are present in a region and could potentially occupy a particular site. Ecosystems in similar environments located in different parts of the world can end up functioning very differently simply because they have different pools of species present.[11]: 321 The introduction of non-native species can cause substantial shifts in ecosystem function.[12]
Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them.[4]: 16 While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition, or shading.[13] Other factors like disturbance, succession, or the types of species present are also internal factors.
Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon, and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect.
Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP).[8]: 124 About half of the GPP is respired by plants in order to provide the energy that supports their growth and maintenance.[14]: 157 The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP).[14]: 157 Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.[8]: 155
Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration.[14]: 157 The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, the vast majority of the net primary production ends up being broken down by decomposers. The remainder is consumed by animals while still alive and enters the plant-based trophic system. After plants and animals die, the organic matter contained in them enters the detritus-based trophic system.[15]
Ecosystem respiration is the sum of respiration by all living organisms (plants, animals, and decomposers) in the ecosystem.[16] Net ecosystem production is the difference between gross primary production (GPP) and ecosystem respiration.[17] In the absence of disturbance, net ecosystem production is equivalent to the net carbon accumulation in the ecosystem.
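The production terms above reduce to simple subtractions: NPP is GPP minus plant respiration, and net ecosystem production is GPP minus respiration by all organisms. A minimal sketch, with hypothetical flux values chosen only for illustration:

```python
# Carbon-budget identities for an ecosystem; all values are hypothetical
# fluxes (e.g., grams of carbon per square metre per year).

def net_primary_production(gpp, plant_respiration):
    """NPP: the portion of GPP not used up by plant (autotrophic) respiration."""
    return gpp - plant_respiration

def net_ecosystem_production(gpp, ecosystem_respiration):
    """NEP: GPP minus respiration by all organisms (plants, animals, decomposers)."""
    return gpp - ecosystem_respiration

gpp = 2000.0              # gross primary production (hypothetical)
plant_resp = 1000.0       # roughly half of GPP, as noted in the text
heterotroph_resp = 900.0  # respiration by animals and decomposers (hypothetical)

npp = net_primary_production(gpp, plant_resp)
nep = net_ecosystem_production(gpp, plant_resp + heterotroph_resp)

print(npp)  # 1000.0: carbon available to consumers and detritus
print(nep)  # 100.0: net carbon accumulation, absent disturbance
```

With these illustrative numbers the ecosystem is a small net carbon sink; a disturbance such as fire would release carbon through a pathway not captured by respiration alone.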
Energy can also be released from an ecosystem through disturbances such as wildfire, or transferred to other ecosystems (e.g., from a forest to a stream to a lake) by erosion.
In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher than in terrestrial systems.[15] In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers—herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers—carnivores—are secondary consumers. Each of these constitutes a trophic level.[15]
The sequence of consumption—from plant to herbivore, to carnivore—forms a food chain. Real systems are much more complex than this—organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey that is part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and on earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains; these webs exhibit a number of common, non-random properties in the topology of their network.[18]
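Because organisms may feed at more than one trophic level, a species' level in a food web is better computed from its diet than assigned from a chain. A minimal sketch using the bird, grasshopper, and earthworm example from the text; the diet table and the averaging convention (one level above the mean level of prey) are illustrative assumptions, not definitions from this article:

```python
# A toy food web: each species maps to the list of things it eats.
diets = {
    "grass": [],                           # primary producer
    "detritus": [],                        # base of the detritus trophic system
    "grasshopper": ["grass"],              # herbivore (primary consumer)
    "earthworm": ["detritus"],             # detritivore
    "bird": ["grasshopper", "earthworm"],  # feeds in both trophic systems
}

def trophic_level(species, web):
    """Producers and detritus sit at level 1; a consumer sits one level
    above the mean level of its prey (one common convention)."""
    prey = web[species]
    if not prey:
        return 1.0
    return 1.0 + sum(trophic_level(p, web) for p in prey) / len(prey)

for species in diets:
    print(species, trophic_level(species, diets))
```

Here the bird comes out at level 3.0, one above the mean level (2.0) of its prey; in a web with omnivory the computed levels need not be whole numbers, which is one way the food-web picture departs from a simple chain.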
The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted.[19]: 183
Decomposition processes can be separated into three categories—leaching, fragmentation, and chemical alteration of dead material. As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it).[20]: 271–280 Newly shed leaves and newly dead animals have high concentrations of water-soluble components, which include sugars, amino acids, and mineral nutrients. Leaching is more important in wet environments and less important in dry ones.[10]: 69–77
Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition.[19]: 184 Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material.[19]: 186
The chemical alteration of dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes that can break through the tough outer structures surrounding dead plant material. They also produce enzymes that break down lignin, which allows them access to both cell contents and the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources.[19]: 186
Decomposition rates vary among ecosystems.[21]The rate of decomposition is governed by three sets of factors—the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself.[19]: 194Temperature controls the rate of microbial respiration; the higher the temperature, the faster the microbial decomposition occurs. Temperature also affects soil moisture, which affects decomposition. Freeze-thaw cycles also affect decomposition—freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of nutrients that become available.[20]: 280
Decomposition rates are low under very wet or very dry conditions. They are highest in moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, but bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth.[19]: 200
Ecosystems are dynamic entities. They are subject to periodic disturbances and are always in the process of recovering from past disturbances.[22]: 347 When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The capacity of a system to absorb disturbance and reorganize while undergoing change so as to retain essentially the same function, structure, identity, and feedbacks is termed its ecological resilience.[23][24] Resilience thinking also includes humanity as an integral part of the biosphere, where we are dependent on ecosystem services for our survival and must build and maintain the natural capacities of ecosystems to withstand shocks and disturbances.[25] Time plays a central role over a wide range of scales, for example, in the slow development of soil from bare rock and the faster recovery of a community from disturbance.[14]: 67
Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as "a relatively discrete event in time that removes plant biomass".[22]: 346 This can range from herbivore outbreaks, treefalls, fires, hurricanes, floods, and glacial advances to volcanic eruptions. Such disturbances can cause large changes in plant, animal, and microbe populations, as well as in soil organic matter content. Disturbance is followed by succession, a "directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply."[2]: 470
The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals, or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes, or cultivation result in secondary succession and a faster recovery.[22]: 348 More severe and more frequent disturbances result in longer recovery times.
From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, a colder-than-usual winter, and a pest outbreak are all examples of short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. Longer-term changes also shape ecosystem processes. For example, the forests of eastern North America still show legacies of cultivation which ceased in 1850, when large areas were reverted to forests.[22]: 340 Another example is the methane production in eastern Siberian lakes that is controlled by organic matter which accumulated during the Pleistocene.[26]
Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes, and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust, or gases, or is applied as fertilizer.[20]: 266 Most terrestrial ecosystems are nitrogen-limited in the short term, making nitrogen cycling an important control on ecosystem production.[20]: 289 Over the long term, phosphorus availability can also be critical.[27]
Macronutrients, which are required by all plants in large quantities, include the primary nutrients (which are most limiting as they are used in the largest amounts): nitrogen, phosphorus, and potassium.[28]: 231 Secondary major nutrients (less often limiting) include calcium, magnesium, and sulfur. Micronutrients, required by all plants in small quantities, include boron, chloride, copper, iron, manganese, molybdenum, and zinc. Finally, there are also beneficial nutrients which may be required by certain plants or by plants under specific environmental conditions: aluminum, cobalt, iodine, nickel, selenium, silicon, sodium, and vanadium.[28]: 231
Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants that support nitrogen-fixing symbionts—as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants.[22]: 360 Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust.[20]: 270 Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems.[20]: 270
When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification.[20]: 277 Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification.[20]: 281
Mycorrhizal fungi, which are symbiotic with plant roots, use carbohydrates supplied by the plants and in return transfer phosphorus and nitrogen compounds back to the plant roots.[29][30] This is an important pathway of organic nitrogen transfer from dead organic matter to plants. This mechanism may contribute to more than 70 Tg of annually assimilated plant nitrogen, thereby playing a critical role in global nutrient cycling and ecosystem function.[30]
Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus limitation more common in older landscapes (especially in the tropics).[20]: 287–290 Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter.[20]: 291
Biodiversity plays an important role in ecosystem functioning.[32]: 449–453 Ecosystem processes are driven by the species in an ecosystem, the nature of the individual species, and the relative abundance of organisms among these species. Ecosystem processes are the net effect of the actions of individual organisms as they interact with their environment. Ecological theory suggests that in order to coexist, species must have some level of limiting similarity—they must be different from one another in some fundamental way, otherwise, one species would competitively exclude the other.[33] Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example, but beyond some level of species richness,[11]: 331 additional species may have little additive effect unless they differ substantially from species already present.[11]: 324 This is the case, for example, for exotic species.[11]: 321
The addition (or loss) of species that are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem.[11]: 324
An ecosystem engineer is any organism that creates, significantly modifies, maintains, or destroys a habitat.[34]
Ecosystem ecology is the "study of the interactions between organisms and their environment as an integrated system".[2]: 458 The size of ecosystems can range up to ten orders of magnitude, from the surface layers of rocks to the surface of the planet.[4]: 6
The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem.[35] Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades.[36]
Ecosystems can be studied through a variety of approaches—theoretical studies, studies monitoring specific ecosystems over long periods of time, studies that look at differences between ecosystems to elucidate how they work, and direct manipulative experimentation.[37] Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studying microcosms or mesocosms (simplified representations of ecosystems).[38] American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies done at the ecosystem scale. In such cases, microcosm experiments may fail to accurately predict ecosystem-level dynamics.[39]
Biomes are general classes or categories of ecosystems.[4]: 14 However, there is no clear distinction between biomes and ecosystems.[40] Biomes are always defined at a very general level. Ecosystems can be described at levels that range from very general (in which case the names are sometimes the same as those of biomes) to very specific, such as "wet coastal needle-leafed forests".
Biomes vary due to global variations in climate. Biomes are often defined by their structure: at a general level, for example, tropical forests, temperate grasslands, and arctic tundra.[4]: 14 There can be any degree of subcategories among ecosystem types that comprise a biome, e.g., needle-leafed boreal forests or wet tropical forests. Although ecosystems are most commonly categorized by their structure and geography, there are also other ways to categorize and classify ecosystems, such as by their level of human impact (see anthropogenic biome), by their integration with social or technological processes, or by their novelty (e.g., novel ecosystem). Each of these taxonomies of ecosystems tends to emphasize different structural or functional properties.[41] None of these is the "best" classification.
Ecosystem classifications are specific kinds of ecological classifications that consider all four elements of the definition of ecosystems: a biotic component, an abiotic complex, the interactions between and within them, and the physical space they occupy.[41] Different approaches to ecological classifications have been developed in terrestrial, freshwater, and marine disciplines, and a function-based typology has been proposed to leverage the strengths of these different approaches into a unified system.[42]
Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.[4]: 14
Ecosystems provide a variety of goods and services upon which people depend.[43] Ecosystem goods include the "tangible, material products" of ecosystem processes such as water, food, fuel, construction material, and medicinal plants.[44][45] They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species.[43]
Ecosystem services, on the other hand, are generally "improvements in the condition or location of things of value".[45] These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination, and even things like beauty, inspiration, and opportunities for research.[43] While material from the ecosystem had traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted.[45]
The Millennium Ecosystem Assessment is an international synthesis by over 1000 of the world's leading biological scientists that analyzes the state of the Earth's ecosystems and provides summaries and guidelines for decision-makers. The report identified four major categories of ecosystem services: provisioning, regulating, cultural, and supporting services.[46] It concludes that human activity is having a significant and escalating impact on the biodiversity of the world's ecosystems, reducing both their resilience and biocapacity. The report refers to natural systems as humanity's "life-support system", providing essential ecosystem services. The assessment measures 24 ecosystem services and concludes that only four have shown improvement over the last 50 years, 15 are in serious decline, and five are in a precarious condition.[46]: 6–19
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) is an intergovernmental organization established to improve the interface between science and policy on issues of biodiversity and ecosystem services.[47][48] It is intended to serve a similar role to the Intergovernmental Panel on Climate Change.[49]
Ecosystem services are limited and also threatened by human activities.[50] To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species.[50]
As human population and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include environmental pollution, climate change, and biodiversity loss. For terrestrial ecosystems, further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems, threats also include unsustainable exploitation of marine resources (for example overfishing), marine pollution, microplastics pollution, the effects of climate change on oceans (e.g. warming and acidification), and building on coastal areas.[51]
Many ecosystems become degraded through human impacts, such as soil loss, air and water pollution, habitat fragmentation, water diversion, fire suppression, and introduced and invasive species.[52]: 437
These threats can lead to abrupt transformation of the ecosystem or to gradual disruption of biotic processes and degradation of abiotic conditions of the ecosystem. Once the original ecosystem has lost its defining features, it is considered collapsed (see also IUCN Red List of Ecosystems).[53] Ecosystem collapse could be reversible and in this way differs from species extinction.[54] Quantitative assessments of the risk of collapse are used as measures of conservation status and trends.
When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management.[55] Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions: a fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem;[52] "intergenerational sustainability [is] a precondition for management, not an afterthought".[43] While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems[43] (see, for example, agroecosystem and close to nature forestry).
Integrated conservation and development projects (ICDPs) aim to address conservation and human livelihood (sustainable development) concerns in developing countries together, rather than separately as was often done in the past.[52]: 445
The following articles are types of ecosystems for particular types of regions or zones:
Ecosystem instances in specific regions of the world:
|
https://en.wikipedia.org/wiki/Ecosystem
|
The environmental movement (sometimes referred to as the ecology movement) is a social movement that aims to protect the natural world from harmful environmental practices in order to create sustainable living.[1] In its recognition of humanity as a participant in (not an enemy of) ecosystems, the movement is centered on ecology, health, and human rights.
The environmental movement is an international movement, represented by a range of environmental organizations from enterprises to grassroots groups, and varies from country to country. Due to its large membership, varying and strong beliefs, and occasionally speculative nature, the environmental movement is not always united in its goals. At its broadest, the movement includes private citizens, professionals, religious devotees, politicians, scientists, nonprofit organizations, and individual advocates like former Wisconsin Senator Gaylord Nelson and Rachel Carson in the 20th century.
Since the 1970s, public awareness, environmental sciences, ecology, and technology have advanced to include modern focus points like ozone depletion, climate change, acid rain, mutation breeding, genetically modified crops, and genetically modified livestock.
The climate movement can be regarded as a sub-type of the environmental movement.
The environmental movement contains a number of subcommunities that have developed with different approaches and philosophies in different parts of the world. Notably, the early environmental movement experienced a deep tension between the philosophies of conservation and broader environmental protection.[2] In recent decades, the rise to prominence of environmental justice, indigenous rights, and key environmental crises like the climate crisis has led to the development of other environmentalist identities.
The environmental movement is broad in scope and can include any topic related to the environment, conservation, and biology, as well as the preservation of landscapes, flora, and fauna for a variety of purposes and uses. Examples include:
Genetically modified plants and animals are said by some environmentalists to be inherently bad because they are unnatural. Others point out the possible benefits of GM crops, such as water conservation through corn modified to be less "thirsty" and decreased pesticide use through insect-resistant crops. They also point out that some genetically modified livestock have accelerated growth, which means shorter production cycles and thus a more efficient use of feed.[5]
Besides genetically modified crops and livestock, synthetic biology is also on the rise, and environmentalists argue that it too carries risks if these organisms were ever to end up in nature, because, unlike genetically modified organisms, synthetic biology can even use base pairs that do not exist in nature.[6]
The anti-nuclear movement opposes the use of various nuclear technologies. The initial anti-nuclear objective was nuclear disarmament, and later the focus began to shift to other issues, mainly opposition to the use of nuclear power. There have been many large anti-nuclear demonstrations and protests. The pro-nuclear movement consists of people, including former opponents of nuclear energy, who calculate that the threat to humanity from climate change is far worse than any risk associated with nuclear energy.
By the mid-1970s, anti-nuclear activism had moved beyond local protests and politics to gain a wider appeal and influence. Although it lacked a single coordinating organization, the anti-nuclear movement's efforts gained a great deal of attention, especially in the United Kingdom and United States.[7] In the aftermath of the Three Mile Island accident in 1979, many mass demonstrations took place. The largest one was held in New York City in September 1979 and involved 200,000 people.[8][9][10]
Tree sitting is a form of activism in which the protester sits in a tree in an attempt to stop the removal of a tree or to impede the demolition of an area. The longest and most famous tree sit was by Julia Butterfly Hill, who spent 738 days in a California redwood, saving a three-acre tract of forest.[11] Also notable is the Yellow Finch tree sit, a 932-day blockade of the Mountain Valley Pipeline from 2018 to 2021.[12][13]
Sit-ins can be used to encourage social change, such as the Greensboro sit-ins, a series of protests in 1960 to stop racial segregation, but can also be used in ecoactivism, as in the Dakota Access Pipeline protest.[14]
Notable environmental protests and campaigns include:
The origins of the environmental movement in Europe and North America lay in response to increasing levels of smoke pollution in the atmosphere during the Industrial Revolution. The emergence of great factories and the concomitant immense growth in coal consumption gave rise to an unprecedented level of air pollution in industrial centers; after 1900, the large volume of industrial chemical discharges added to the growing load of untreated human waste.[17]
Conservative critics of the movement characterize it as radical and misguided. They especially target the United States Endangered Species Act, which has recently come under scrutiny, and the Clean Air Act, which they say conflict with private property rights, corporate profits, and the nation's overall economic growth. Critics also challenge the scientific evidence for global warming. They argue that the environmental movement has diverted attention from more pressing issues.[18] Western environmental activists have also been criticized for performative activism, eco-colonialism, and enacting white savior tropes, especially celebrities who promote conservation in developing countries.[19][20]
When residents living near proposed developments organize opposition, they are sometimes called "NIMBYs", short for "not in my back yard".[21]
Mithun Roy Chowdhury, President of Save Nature & Wildlife (SNW), Bangladesh, insisted that the people of Bangladesh raise their voice against the Tipaimukh Dam, being constructed by the Government of India. He said the Tipaimukh Dam project will be another "death trap for Bangladesh like the Farakka Barrage," which would lead to an environmental disaster for 50 million people in the Meghna River basin. He said that this project will start desertification in Bangladesh.[22][23][24][25]
Bangladesh was ranked the most polluted country in the world due to defective automobiles, particularly diesel-powered vehicles, and hazardous gases from industry. The air is a hazard to Bangladesh's human health, ecology, and economic progress.[26]
China's environmental movement is characterized by the rise of environmental NGOs, policy advocacy, spontaneous alliances, and protests that often only occur at the local level.[27]Environmental protests in China are increasingly expanding their scope of concerns, calling for broader participation "in the name of the public."[28]
Chinese citizens have realized that riots and protests can succeed, which has contributed to a 30% increase in disputes in China since 2005, to more than 50,000 events. Protests cover topics such as environmental issues, land loss, income, and political issues. They have also grown in size, from about 10 people or fewer in the mid-1990s to 52 people per incident in 2004. China has more relaxed environmental laws than other countries in Asia, so many polluting factories have relocated to China, causing pollution in China.
Water pollution, water scarcity, soil pollution, soil degradation, and desertification are issues currently in discussion in China. The groundwater table of the North China Plain is dropping by 1.5 m (5 ft) per year. This groundwater table occurs in the region of China that produces 40% of the country's grain.[29][30] The Center for Legal Assistance to Pollution Victims works to confront legal issues associated with environmental justice by hearing court cases that expose the narratives of victims of environmental pollution.[31] As China continues domestic economic reforms and integration into global markets, new linkages emerge between China's domestic environmental degradation and the global ecological crisis.[32]
Comparing the experiences of China, South Korea, Japan, and Taiwan reveals that the impact of environmental activism is heavily modified by domestic political context, particularly the level of integration of mass-based protests and policy advocacy NGOs. As the history of neighboring Japan and South Korea suggests, a possible convergence of NGOs and anti-pollution protests would have significant implications for Chinese environmental politics in the coming years.[33]
Environmental and public health is an ongoing struggle within India. The first seed of an environmental movement in India was the foundation in 1964 of Dasholi Gram Swarajya Sangh, a labour cooperative started by Chandi Prasad Bhatt. It was inaugurated by Sucheta Kriplani and founded on land donated by Shyma Devi. This initiative was eventually followed up with the Chipko movement starting in 1974.[34][35]
The most severe single event underpinning the movement was the Bhopal gas leak on 3 December 1984.[36] Forty tons of methyl isocyanate were released, immediately killing 2,259 people and ultimately affecting 700,000 citizens.
India has a national campaign against Coca-Cola and Pepsi Cola plants due to their practices of drawing groundwater and contaminating fields with sludge. The movement is characterized by local struggles against intensive aquaculture farms. The most influential part of the environmental movement in India is the anti-dam movement. Dam creation has been thought of as a way for India to catch up with the West by connecting to the power grid with giant dams, coal- or oil-powered plants, or nuclear plants. Jhola Aandolan, a mass movement, is fighting against the use of polyethylene carry bags and promoting cloth, jute, and paper bags to protect the environment and nature. Activists in the Indian environmental movement consider global warming, rising sea levels, and retreating glaciers (which decrease the amount of water flowing into streams) to be the biggest challenges they face in the early twenty-first century.[29] The Eco Revolution movement was started by the Eco Needs Foundation[37] in 2008 in Aurangabad, Maharashtra; it seeks the participation of children, youth, researchers, and spiritual and political leaders to organise awareness programmes and conferences. Child activists against air pollution in India and greenhouse gas emissions by India include Licypriya Kangujam. From the mid to late 2010s, a coalition of urban and Indigenous communities came together to protect Aarey, a forest located in the suburbs of Mumbai.[38] Farming and indigenous communities have also opposed pollution and clearing caused by mining in states such as Goa, Odisha, and Chhattisgarh.[39]
Environmental activism in the Arab world, including the Middle East and North Africa (MENA), mobilizes around issues such as industrial pollution and insistence that the government provide irrigation.[40] The League of Arab States has one specialized sub-committee, of 12 standing specialized subcommittees in the Foreign Affairs Ministerial Committees, which deals with environmental issues. Countries in the League of Arab States have demonstrated an interest in environmental issues on paper, but some environmental activists have doubts about the level of commitment; being a part of the world community may have obliged these countries to portray concern for the environment. The initial level of environmental awareness may be the creation of a ministry of the environment, and the year of establishment of a ministry is also indicative of the level of engagement. Saudi Arabia was the first to establish environmental law in 1992, followed by Egypt in 1994. Somalia is the only country without environmental law. In 2010, the Environmental Performance Index listed Algeria as the top Arab country at 42 of 163; Morocco was at 52 and Syria at 56. The Environmental Performance Index measures the ability of a country to actively manage and protect its environment and the health of its citizens. A weighted index is created by giving 50% weight to an environmental health objective (health) and 50% to ecosystem vitality (ecosystem); values range from 0–100. No Arab countries were in the top quartile, and 7 countries were in the lowest quartile.[41]
South Korea and Taiwan experienced similar growth in industrialization from 1965 to 1990, with few environmental controls.[42] South Korea's Han River and Nakdong River were so polluted by unchecked dumping of industrial waste that they were close to being classified as biologically dead. Taiwan's formula for balanced growth was to prevent industrial concentration and encourage manufacturers to set up in the countryside. This led to 20% of the farmland being polluted by industrial waste and 30% of the rice grown on the island being contaminated with heavy metals. Both countries had spontaneous environmental movements drawing participants from different classes. Their demands were linked with issues of employment, occupational health, and agricultural crisis. They were also quite militant; the people learned that protesting can bring results. The polluting factories were forced to make immediate improvements to conditions or pay compensation to victims. Some were even forced to shut down or move locations. The people were able to force the government to come out with new restrictive rules on toxins, industrial waste, and air pollution. All of these new regulations caused the migration of polluting industries from Taiwan and South Korea to China and other countries in Southeast Asia with more relaxed environmental laws.
The modern conservation movement was first manifested in the forests of India, with the practical application of scientific conservation principles. The conservation ethic that began to evolve included three core principles: human activity damaged the environment, there was a civic duty to maintain the environment for future generations, and scientific, empirically based methods should be applied to ensure this duty was carried out. James Ranald Martin was prominent in promoting this ideology, publishing many medico-topographical reports that demonstrated the scale of damage wrought through large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments.[43]
The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation programme based on scientific principles. This was the first case of state management of forests in the world.[44] Eventually, the government under Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation programme in the world in 1855, a model that soon spread to other colonies, as well as the United States. In 1860, the Department banned the use of shifting cultivation.[45] Hugh Cleghorn's 1861 manual, The forests and gardens of South India, became the definitive work on the subject and was widely used by forest assistants in the subcontinent.[46][47]
Dietrich Brandis joined the British service in 1856 as superintendent of the teak forests of Pegu division in eastern Burma. During that time, Burma's teak forests were controlled by militant Karen tribals. He introduced the "taungya" system,[48] in which Karen villagers provided labour for clearing, planting, and weeding teak plantations. He also formulated new forest legislation, helped establish research and training institutions, and founded the Imperial Forestry School at Dehradun.[49][50]
In 2022, a court in South Africa confirmed the constitutional right of the country's citizens to an environment that is not harmful to their health, which includes the right to clean air. The case is referred to as the "Deadly Air" case. The area concerned includes one of South Africa's largest cities, Ekurhuleni, and a large portion of the Mpumalanga province.[51]
After the International Environmental Conference in Stockholm in 1972, Latin American officials returned with high hopes of growth and protection of the region's fairly untouched natural resources. Governments spent millions of dollars and created departments and pollution standards. However, the outcomes have not always been what officials initially hoped. Activists blame this on growing urban populations and industrial growth. Many Latin American countries have had a large inflow of immigrants living in substandard housing. Enforcement of the pollution standards is lax and penalties are minimal; in Venezuela, the largest penalty for violating an environmental law is a 50,000-bolivar fine ($3,400) and three days in jail. In the 1970s and 1980s, many Latin American countries were transitioning from military dictatorships to democratic governments.[52]
In 1992, Brazil came under scrutiny with the United Nations Conference on Environment and Development in Rio de Janeiro. Brazil has a history of little environmental awareness. It has the highest biodiversity in the world and also the highest amount of habitat destruction. One-third of the world's forests lie in Brazil. It is home to the largest river, the Amazon, and the largest rainforest, the Amazon Rainforest. People have raised funds to create state parks and increase the consciousness of people who have destroyed forests and polluted waterways. From 1973 to the 1990s, and then in the 2000s, indigenous communities and rubber tappers also carried out blockades that protected much rainforest.[53] Brazil is home to several organizations that have fronted the environmental movement. The Blue Wave Foundation was created in 1989 and has partnered with advertising companies to promote national education campaigns to keep Brazil's beaches clean. Funatura, created in 1986, is a wildlife sanctuary program. Pro-Natura International is a private environmental organization created in 1986.[54]
From the late 2000s onwards community resistance saw the formerly pro-mining southeastern state of Minas Gerais cancel a number of projects that threatened to destroy forests. In northern Brazil’s Pará state the Movimento dos Trabalhadores Rurais Sem Terra (Landless Workers Movement) and others campaigned and took part in occupations and blockades against the environmentally harmful Carajás iron ore mine.[55]
The movement in the United States began in the late 19th century, out of concerns for protecting the natural resources of the West, with individuals such as John Muir and Henry David Thoreau making key philosophical contributions. Thoreau was interested in people's relationship with nature and studied this by living close to nature in a simple life. He published his experiences in the 1854 book Walden, which argues that people should become intimately close with nature. Muir came to believe in nature's inherent rights, especially after spending time hiking in Yosemite Valley and studying both the ecology and geology. He successfully lobbied Congress to form Yosemite National Park and went on to set up the Sierra Club in 1892.[56] These conservationist principles, as well as the belief in an inherent right of nature, became the bedrock of modern environmentalism.
Rooted in the conservation movement of the early 20th century, the contemporary environmental movement traces its origins to Rachel Carson's 1962 book Silent Spring, Murray Bookchin's 1962 book Our Synthetic Environment, and Paul R. Ehrlich's 1968 The Population Bomb. American environmentalists campaigned against nuclear weapons and nuclear power in the 1960s and 1970s, acid rain in the 1980s, ozone depletion and deforestation in the 1990s, and most recently climate change and global warming.[53]
The United States passed many pieces of environmental legislation in the 1970s, such as the Clean Water Act,[57] the Clean Air Act, the Endangered Species Act, and the National Environmental Policy Act. These remain the foundations for current environmental standards.
In the 1990s, the anti-environmental 'Wise Use' movement emerged in the United States.[58]
The EU's environmental policy was formally founded by a European Council declaration, and the first five-year environment programme was adopted.[59] The polluter pays principle was well established in environmental economics before it was included in the Single European Act.[60] Following the 1973 oil crisis, the Social Democratic Party of Germany (SPD) passed groundbreaking laws on energy efficiency.[61]
During the 1930s, the Nazis had elements that were supportive of animal rights, zoos, and wildlife,[62] and took several measures to ensure their protection.[63] In 1933, the government created a stringent animal-protection law, and in 1934, Das Reichsjagdgesetz (The Reich Hunting Law) was enacted, which limited hunting.[64][65] Several Nazis were environmentalists (notably Rudolf Hess), and species protection and animal welfare were significant issues in the regime.[63] In 1935, the regime enacted the "Reich Nature Protection Act" (Reichsnaturschutzgesetz). The concept of the Dauerwald (best translated as the "perpetual forest"), which included concepts such as forest management and protection, was promoted, and efforts were also made to curb air pollution.[66]
During the Spanish Revolution in 1936, anarchist-controlled territories undertook several environmental reforms, which were possibly the largest in the world at the time. Daniel Guerin notes that anarchist territories would diversify crops, extend irrigation, initiate reforestation, start tree nurseries, and help to establish naturist communities.[67] Once a link was discovered between air pollution and tuberculosis, the CNT shut down several metal factories.[68]
The late 19th century saw the formation of the first wildlife conservation societies. The zoologist Alfred Newton published a series of investigations into the Desirability of establishing a 'Close-time' for the preservation of indigenous animals between 1872 and 1903. His advocacy for legislation to protect animals from hunting during the mating season led to the formation of the Plumage League (later the Royal Society for the Protection of Birds) in 1889.[69] The society acted as a protest group campaigning against the use of great crested grebe and kittiwake skins and feathers in fur clothing.[70] The Society campaigned for greater protection for the indigenous birds of the island.[71] The Society attracted growing support from the suburban middle classes,[72] and influenced the passage of the Sea Birds Preservation Act in 1869 as the first nature protection law in the world.[73][74] By 1900, public support for the organisation had grown, and it had over 25,000 members. The garden city movement incorporated many environmental concerns into its urban planning manifesto; the Socialist League and The Clarion movement also began to advocate measures of nature conservation.[75]
For most of the century from 1850 to 1950, however, the primary environmental cause was the mitigation of air pollution. The Coal Smoke Abatement Society was formed in 1898, making it one of the oldest environmental NGOs. It was founded by the artist Sir William Blake Richmond, frustrated with the pall cast by coal smoke. Although there were earlier pieces of legislation, the Public Health Act 1875 required all furnaces and fireplaces to consume their own smoke.
Systematic and general efforts on behalf of the environment only began in the late 19th century; they grew out of the amenity movement in Britain in the 1870s, which was a reaction to industrialization, the growth of cities, and worsening air and water pollution. Starting with the formation of the Commons Preservation Society in 1865, the movement championed rural preservation against the encroachments of industrialisation. Robert Hunter, solicitor for the society, worked with Hardwicke Rawnsley, Octavia Hill, and John Ruskin to lead a successful campaign to prevent the construction of railways to carry slate from the quarries, which would have ruined the unspoilt valleys of Newlands and Ennerdale. This success led to the formation of the Lake District Defence Society (later to become The Friends of the Lake District).[76][77]
In 1893, Hill, Hunter, and Rawnsley agreed to set up a national body to coordinate environmental conservation efforts across the country; the "National Trust for Places of Historic Interest or Natural Beauty" was formally inaugurated in 1894.[78] The organisation obtained secure footing through the 1907 National Trust Bill, which gave the trust the status of a statutory corporation;[79] the bill was passed in August 1907.[80]
Early interest in the environment was a feature of the Romantic movement in the early 19th century. The poet William Wordsworth had travelled extensively in England's Lake District and wrote that it is a "sort of national property in which every man has a right and interest who has an eye to perceive and a heart to enjoy".[81][82]
An early "Back-to-Nature" movement, which anticipated the romantic ideal of modern environmentalism, was advocated by intellectuals such as John Ruskin, William Morris, George Bernard Shaw, and Edward Carpenter, who were all against consumerism, pollution, and other activities that were harmful to the natural world.[83] The movement was a reaction to the urban conditions of the industrial towns, where sanitation was awful, pollution levels intolerable, and housing terribly cramped.[84] Idealists championed the rural life as a mythical utopia and advocated a return to it. John Ruskin argued that people should return to a "small piece of English ground, beautiful, peaceful, and fruitful. We will have no steam engines upon it ... we will have plenty of flowers and vegetables ... we will have some music and poetry; the children will learn to dance to it and sing it."[85] Ruskin moved out of London and, together with his friends, started to think about the post-industrial society. The predictions Ruskin made for the post-coal utopia coincided with forecasting published by the economist William Stanley Jevons.[86] Practical ventures in the establishment of small cooperative farms were even attempted, and old rural traditions, without the "taint of manufacture or the canker of artificiality", were enthusiastically revived, including the Morris dance and the maypole.[87]
The Coal Smoke Abatement Society is now known as Environmental Protection UK. The Public Health Act 1875 also provided for sanctions against factories that emitted large amounts of black smoke. Its provisions were extended in 1926 with the Smoke Abatement Act to include other emissions, such as soot, ash, and gritty particles, and to empower local authorities to impose their own regulations.
It was only under the impetus of the Great Smog of 1952 in London, which almost brought the city to a standstill and may have caused upward of 6,000 deaths, that the Clean Air Act 1956 was passed and airborne pollution in the city was first tackled. Financial incentives were offered to householders to replace open coal fires with alternatives such as gas fires or, for those who preferred, coke (a byproduct of town gas production), which produces minimal smoke. 'Smoke control areas' were introduced in some towns and cities, where only smokeless fuels could be burnt, and power stations were relocated away from cities. The act formed an important impetus to modern environmentalism and caused a rethinking of the dangers of environmental degradation to people's quality of life.[88]
Beginning as aconservation movement, theenvironmental movement in Australiawas the first in the world to become a political movement.Australiais home toUnited Tasmania Group, the world's firstgreen party.[89][90]
The environmental movement is represented by a wide range of groups sometimes callednon-governmental organizations(NGOs). These exist on local, national, and international scales. Environmental NGOs vary widely in political views and in the amount they seek to influenceenvironmental policyin Australia and elsewhere. The environmental movement today consists of both large national groups and also many smaller local groups with local concerns.[91]There are also 5,000Landcare groupsin the six states and two mainland territories. Otherenvironmental issueswithin the scope of the movement include forest protection,climate changeandopposition to nuclear activities.[92][93]
|
https://en.wikipedia.org/wiki/Environmental_movement
|
Anenvironmental organizationis anorganizationcoming out of theconservationorenvironmental movementsthat seeks to protect, analyse or monitor the environment against misuse ordegradationfrom human forces.
In this sense the environment may refer to thebiophysical environmentor thenatural environment. The organization may be acharity, atrust, anon-governmental organization, agovernmental organizationor anintergovernmental organization. Environmental organizations can be global,national,regionalor local. Someenvironmental issuesthat environmental organizations focus on includepollution,plastic pollution,waste,resource depletion,human overpopulationandclimate change.
Many states haveagenciesdevoted to monitoring and protecting the environment:
Theseorganizationsare involved inenvironmental management,lobbying,advocacy, and/orconservationefforts:
Theseorganizationsare involved inenvironmental management,lobbying,advocacy, and/orconservationefforts at the national level:
|
https://en.wikipedia.org/wiki/Environmental_organizations
|
Environmental protection, orenvironment protection, refers to the taking of measures to protect thenatural environment, prevent pollution and maintainecological balance.[1]Action may be taken by individuals, advocacy groups and governments. Objectives include the conservation of the existing natural environment and natural resources and, when possible, repair of damage and reversal of harmful trends.[2]
Due to the pressures ofoverconsumption,population growthand technology, thebiophysical environmentis being degraded, sometimes permanently. This has been recognized, and governments have begun placing restraints on activities that causeenvironmental degradation. Since the 1960s,environmental movementshave created more awareness of the multipleenvironmental problems. There is disagreement on the extent of theenvironmental impact of human activity, so protection measures are occasionally debated.
In industrial countries, voluntary environmental agreements often provide a platform for companies to be recognized for moving beyond the minimum regulatory standards and thus support the development of the best environmental practice. For instance, in India, Environment Improvement Trust (EIT) has been working for environmental andforest protectionsince 1998.[3]In developing regions, such as Latin America, these agreements are more commonly used to remedy significant levels of non-compliance with mandatory regulation.
Anecosystemsapproach to resource management and environmental protection aims to consider the complex interrelationships of an entire ecosystem in decision-making rather than simply responding to specific issues and challenges.[4]Ideally, the decision-making processes under such an approach would be a collaborative approach to planning and decision-making that involves a broad range of stakeholders across all relevant governmental departments, as well as industry representatives, environmental groups, and community. This approach ideally supports a better exchange of information, development of conflict-resolution strategies and improved regional conservation.Religionsalso play an important role in the conservation of the environment:[citation needed]for example, theCatholic Church'sCompendiumon its social teaching states that "environmental protection cannot be assured solely on the basis of financial calculations of costs and benefits. The environment is one of those goods that cannot be adequately safeguarded or promoted bymarket forces."[5]
Underrepresented Environmental Justice Movements:
Environmental justice movements and environmental preservation initiatives frequently collide, especially in areas where underprivileged groups suffer disproportionate environmental harm. Grassroots movements have arisen in the Global South to protest widespread pollution, land dispossession, and resource extraction. Stricter environmental laws and increased involvement in decision-making have been demanded by indigenous groups.
These movements demonstrate the crucial role that frontline communities play in protecting the environment, frequently at considerable personal risk. Even though legislative frameworks exist to address environmental inequalities, enforcement is still lacking, particularly in cases where governmental and corporate interests coincide. Stronger environmental regulations and the rights of affected communities now depend largely on increased global awareness and solidarity.
Many of the earth's resources are especially vulnerable because they are influenced by human impacts across different countries. As a result of this, many attempts are made by countries to develop agreements that are signed by multiple governments to prevent damage or manage the impacts of human activity on natural resources. This can include agreements that impact factors such as climate, oceans, rivers andair pollution. These international environmental agreements are sometimes legally binding documents that have legal implications when they are not followed and, at other times, are more agreements in principle or are for use as codes of conduct. These agreements have a long history with some multinational agreements being in place from as early as 1910 in Europe, America andAfrica.[6]
Many of the international technical agencies formed after 1945 addressed environmental themes. By the late 1960s, a growing environmental movement called for coordinated and institutionalized international cooperation. The landmarkUnited Nations Conference on the Human Environmentwas held in Stockholm in 1972, establishing the concept of aright to a healthy environment. It was followed by the creation of theUnited Nations Environment Programmelater that year.[7]Some of the most well-known international agreements include theKyoto Protocolof 1997 and theParis Agreementof 2015.
On 8 October 2021, theUN Human Rights Councilpassed a resolution recognizing access to a healthy and sustainable environment as a universal right. In the resolution 48/13, the Council called on States around the world to work together, and with other partners, to implement the newly recognized right.[8]
On 28 July 2022, the United Nations General Assembly voted to declare the ability to live in "a clean, healthy and sustainable environment" a universal human right.[9][10]
Discussion concerning environmental protection often focuses on the role of government, legislation, and law enforcement. However, in its broadest sense, environmental protection may be seen to be the responsibility of all the people and not simply that of government. Decisions that impact the environment will ideally involve a broad range of stakeholders including industry, indigenous groups, environmental groups, and community representatives. Gradually, environmental decision-making processes are evolving to reflect this broad base of stakeholders and are becoming more collaborative in many countries.[11]
Tanzania is widely acknowledged as having some of the greatest biodiversity of any African country. Almost 40% of the land has been established into a network of protected areas, including several national parks.[12]The concerns for the natural environment include damage to ecosystems and loss of habitat resulting from population growth, expansion ofsubsistence agriculture,pollution,timber extractionand significant use of timber as fuel.[13]
Environmental protection in Tanzania began during the German occupation of East Africa (1884–1919)—colonial conservation laws for the protection of game and forests were enacted, whereby restrictions were placed upon traditional indigenous activities such as hunting, firewood collecting, and cattle grazing.[14]In 1948, the Serengeti was officially established as the first national park in East Africa. Since 1983, there has been a more broad-reaching effort to manage environmental issues at a national level, through the establishment of the National Environment Management Council (NEMC) and the development of an environmental act.[15]
The Division of Environment is the main government body that oversees environmental protection. It does this through the formulation of policy, coordinating and monitoring environmental issues,environmental planningand policy-oriented environmental research. The National Environment Management Council (NEMC) is an institution that was initiated when the National Environment Management Act was first introduced in 1983. This council has the role to advise governments and the international community on a range of environmental issues. The NEMC has the following purposes: to provide technical advice; coordinate technical activities; develop enforcement guidelines and procedures; assess, monitor and evaluate activities that impact the environment; promote and assist environmental information and communication; and seek advancement of scientific knowledge.[16]
The National Environment Policy of 1997 acts as a framework for environmental decision making in Tanzania. The policy objectives are to achieve the following:
Tanzania is a signatory to a significant number of international conventions, including the Rio Declaration on Development and Environment 1992 and theConvention on Biological Diversity1996. The Environmental Management Act, 2004, is the first comprehensive legal and institutional framework to guide environmental-management decisions. The policy tools that are part of the act include the use of environmental-impact assessments, strategic environmental assessments, and taxation on pollution for specific industries and products. The effectiveness of this act will only become clear over time, as historically there has been a lack of capacity to enforceenvironmental lawsand a lack of working tools to bring environmental-protection objectives into practice.
Formal environmental protection in China was first stimulated by the 1972United Nations Conference on the Human Environmentheld in Stockholm, Sweden. Following this, China began establishing environmental protection agencies and putting controls on some of its industrial waste. China was one of the first developing countries to implement a sustainable development strategy. In 1983 the State Council announced that environmental protection would be one of China's basic national policies, and in 1984 the National Environmental Protection Agency (NEPA) was established. Following severe flooding of the Yangtze River basin in 1998, NEPA was upgraded to the State Environmental Protection Agency (SEPA), meaning that environmental protection was now being implemented at a ministerial level. In 2008, SEPA became known by its current name ofMinistry of Environmental Protection of the People's Republic of China(MEP).[17]
Environmentalpollutionand ecological degradation have resulted in economic losses for China. In 1995, economic losses (mainly from air pollution) were calculated at 7.7% of China's GDP. This grew to 10.3% by 2002, and the economic loss from water pollution (6.1%) began to exceed that caused by air pollution.[18]China has been one of the top-performing countries in terms of GDP growth (9.64% in the past ten years).[18]However, the high economic growth has put immense pressure on its environment, and the environmental challenges that China faces are greater than those of most countries. In 2021 it was noted that China was the world's largest greenhouse gas emitter, while also facing additional environmental challenges, including illegal logging, wildlife trafficking, plastic waste, ocean pollution, environment-related mismanagement, unregulated fishing, and the consequences associated with being the world's largest mercury polluter.[19]All these factors contribute to climate change and habitat loss. In 2022 China was ranked 160th out of 180 countries on theEnvironmental Performance Indexdue to poor air quality and high GHG emissions.
Ecological and environmental degradation in China have health-related impacts; for example, if current pollution levels continue, Chinese citizens will lose 3.6 billion total life years.[20]Another issue is that non-communicable diseases, which cause at least 80% of the 10.3 million annual deaths in China, are worsened by air pollution.[21]
China has taken initiatives to increase its protection of the environment and combat environmental degradation:
Rapid growth in GDP has been China's main goal during the past three decades with a dominant development model of inefficient resource use and high pollution to achieve high GDP. For China to develop sustainably, environmental protection should be treated as an integral part of its economic policies.[23]
Quote from Shengxian Zhou, head of MEP (2009): "Good economic policy is good environmental policy and the nature of environmental problem is the economic structure, production form and develop model."[22]
Since around 2010 China appears to be placing a greater emphasis on environmental and ecological protection. For example, formerGeneral SecretaryHu Jintao's report at the 2012 Party Congress added a section focusing on party policy on ecological issues.[24][25]
Xi Jinping's report at the 19th CPC National Congress in 2017 noted recent progress in ecological and environmental conservation and restoration, the importance of ecologically sustainable development and global ecological security, and the need to provide ecological goods to meet people's growing demands.[26]Most importantly, Xi Jinping suggested clearly identifiable methods to meet the ecological demands of the country. Among the solutions he notes are the development and facilitation of ecological corridors, biodiversity protection networks, redlines for protecting ecosystems, and market-based mechanisms for ecological compensation, in addition to afforestation, greater crop rotation, recycling, waste reduction, stricter pollution standards, and greener production and technology.[26]The report at the 19th CPC National Congress is not simply Xi Jinping's personal thinking; it is the product of a long process of compromise and negotiation among competing party officials and leaders.[24]
Additionally, the Third Plenum of the CCP in 2013 included a manifesto that placed extreme emphasis on reforming management of the environment, promising to create greater transparency of those polluting, and placing environmental criteria above GDP growth for local official evaluations.[27]
Reform has not come cheap for China. In 2016, it was noted that in response to pollution and oversupply, China laid off around six million workers instate-owned enterprisesand spent $23 billion to cover layoffs specifically for coal and steel companies between 2016 and 2019.[28]While expensive, other benefits of environmental protection have been noticed beyond impacting citizens' health. For example, in the long run, environmental protection has been found to generally improve job quality of migrant workers by reducing their work intensity, while increasing social security and job quality.[29]
Different local governments in China implement different approaches to ecological protection, sometimes with negative consequences for citizens. For example, a prefecture in theShanxiprovince imposed bans on coal-burning by villagers, with potential legal detention or steep fines for violations.[30]Although the government provided free gas heaters, the villagers were often unable to afford to run them.[30]InWuhan, automated surveillance technology and video are used to catch illegal fishing, and in some cities failing to recycle results in negative social credit points. It is unclear in some of these instances whether citizens have any potential routes for recourse.[30]
News in 2023 has found that the Chinese Communist Party's recent war on pollution has already brought substantial and measurable impacts, including China's particulate pollution levels dropping 42% from 2013 levels and increasing the average lifespan expectancy of citizens by an estimated 2.2 years.[31][32]
The Constitution of India has a number of provisions demarcating the responsibility of the Central and State governments towards environmental protection. The state's responsibility with regard to environmental protection has been laid down under article 48-A of the constitution, which states that "The states shall endeavor to protect and improve the environment and to safeguard the forest and wildlife of the country".[33]
Environmental protection has been made a fundamental duty of every citizen of India under Article 51-A (g) of the constitution which says "It shall be the duty of every citizen of India to protect and improve the natural environment including forests, lakes, rivers, and wildlife and to have compassion for living creatures".[34]
Article 21 of the constitution guarantees a fundamental right, stating that "No person shall be deprived of his life or personal liberty except according to the procedure established by law".[35]
The Middle Eastern countries became part of the joint Islamic environmental action, which was initiated in 2002 inJeddah. Under theIslamic Educational, Scientific and Cultural Organization, the member states attend the Islamic Environment Ministers Conference every two years, focusing on the importance of environment protection andsustainable development. An award for the best environmental management in the Islamic world is also presented.[36]
In August 2019, theSultanate of Omanwon the award for 2018–19 inSaudi Arabia, citing its project "Verifying the Age and Growth of Spotted Small Spots in the Northwest Coast of the Sea of Oman".[37]
InRussia, environmental protection is considered an integral part of nationalsafety. The Federal Ministry of Natural Resources and Ecology is the authorized state body tasked with managing environmental protection. However, manyenvironmental issues in Russiaremain.
Environmental protection has become an important task for the institutions of theEuropean Communityafter theMaastricht Treatyfor theEuropean Unionratification by all of its member states. The EU is active in the field of environmental policy, issuing directives such as those onenvironmental impact assessmentand onaccess to environmental informationfor citizens in the member states.
TheEnvironmental Protection Agency, Ireland(EPA) has a wide range of functions to protect the environment, with its primary responsibilities including:[38]
Theenvironmental movementinSwitzerlandis represented by a wide range of associations (non-governmental organisations).
Environmental protection in Switzerland is based mainly on measures to be taken against global warming. Pollution in Switzerland is caused mainly by vehicles and by littering by tourists.[citation needed]
TheUnited Nations Environment Programme(UNEP) has identified 17megadiverse countries. The list includes six Latin American countries:Brazil,Colombia,Ecuador,Mexico,PeruandVenezuela.MexicoandBrazilstand out among the rest because they have the largest area, population and number of species. These countries represent a major concern for environmental protection because they have high rates of deforestation, ecosystems loss, pollution, and population growth.
Brazilhas the largest amount of the world's tropical forests, 4,105,401 km2(48.1% of Brazil), concentrated in the Amazon region.[39]Brazil is home to vast biological diversity, first among themegadiverse countriesof the world, having between 15% and 20% of the 1.5 million globally described species.[40]
The organization in charge of environmental protection is the Brazilian Ministry of the Environment (in Portuguese: Ministério do Meio Ambiente, MMA).[41]It was first created in 1973 under the name Special Secretariat for the Environment (Secretaria Especial de Meio Ambiente); after changing names several times, it adopted its final name in 1999. The Ministry is responsible for addressing the following issues:
In 2011, protected areas of the Amazon covered 2,197,485 km2(an area larger than Greenland), with conservation units, like national parks, accounting for just over half (50.6%) and indigenous territories representing the remaining 49.4%.[42]
With over 200,000 different species,Mexicois home to 10–12% of the world's biodiversity, ranking first inreptilebiodiversity and second inmammals[43]—one estimate indicates that over 50% of all animal and plant species live in Mexico.[44]
The history of environmental policy in Mexico started in the 1940s with the enactment of the Law of Conservation of Soil and Water (in Spanish: Ley de Conservación de Suelo y Agua). Three decades later, at the beginning of the 1970s, the Law to Prevent and Control Environmental Pollution was created (Ley para Prevenir y Controlar la Contaminación Ambiental).
The year 1972 brought the first direct response from the federal government to address imminent health effects from environmental issues: it established the administrative organization of the Secretariat for the Improvement of the Environment (Subsecretaría para el Mejoramiento del Ambiente) within the Department of Health and Welfare.
The Secretariat of Environment and Natural Resources (Secretaría del Medio Ambiente y Recursos Naturales,SEMARNAT[45]) is Mexico's environment ministry. The Ministry is responsible for addressing the following issues:
In November 2000 there were 127protected areas; currently there are 174, covering an area of 25,384,818 hectares, increasing federally protected areas from 8.6% to 12.85% of its land area.[46]
In 2008, there was 98,487,116haof terrestrial protected area, covering 12.8% of the land area ofAustralia.[47]The 2002 figures of 10.1% of terrestrial area and 64,615,554 ha of protected marine area[48]were found to poorly represent about half of Australia's 85 bioregions.[49]
Environmental protection in Australia could be seen as starting with the formation of the first national park,Royal National Park, in 1879.[50]More progressive environmental protection had its start in the 1960s and 1970s with major international programs such as theUnited Nations Conference on the Human Environmentin 1972, the Environment Committee of theOECDin 1970, and theUnited Nations Environment Programmeof 1972.[51]These events laid the foundations by increasing public awareness and support for regulation. State environmental legislation was irregular and deficient until the Australian Environment Council (AEC) and Council of Nature Conservation Ministers (CONCOM) were established in 1972 and 1974, creating a forum to assist in coordinating environmental and conservation policies between states and neighbouring countries.[52]These councils have since been replaced by the Australian and New Zealand Environment and Conservation Council (ANZECC) in 1991 and finally the Environment Protection and Heritage Council (EPHC) in 2001.[53]
At a national level, theEnvironment Protection and Biodiversity Conservation Act 1999is the primary environmental protection legislation for the Commonwealth of Australia. It concerns matters of national and international environmental significance regarding flora, fauna, ecological communities and cultural heritage.[54]It also has jurisdiction over any activity conducted by the Commonwealth, or affecting it, that has significant environmental impact.[55]The act covers eight main areas:[56]
There are several Commonwealth protected lands due to partnerships with traditional native owners, such asKakadu National Park, extraordinary biodiversity such asChristmas Island National Park, or managed cooperatively due to cross-state location, such as theAustralian Alps National Parks and Reserves.[57]
At a state level, the bulk of environmental protection issues are left to the responsibility of the state or territory.[52][55]Each state in Australia has its own environmental protection legislation and corresponding agencies. Their jurisdiction is similar and coverspoint source pollution, such as from industry or commercial activities, land/water use, and waste management. Most protected lands are managed by states and territories[57]with state legislative acts creating different degrees and definitions of protected areas such as wilderness, national land and marine parks, state forests, and conservation areas. States also create regulation to limit and provide general protection from air, water, and sound pollution.
At a local level, each city or regional council has responsibility over issues not covered by state or national legislation. This includes non-point source, or diffuse pollution, such as sediment pollution from construction sites.
Australia ranks second on the UN 2010Human Development Index[58]and has one of the lowest debt-to-GDP ratios of the developed economies.[59]This could be seen as coming at the cost of the environment, with Australia being the world leader in coal exportation[60]and species extinctions.[61][62]Some have argued that it is therefore Australia's responsibility to set an example of environmental reform for the rest of the world to follow.[63][64]
In New Zealand, theMinistry for the Environmentis responsible for environmental policy at a national level and theDepartment of Conservationaddressesconservation issues. At a regional level, theregional councilsadminister the legislation and address regionalenvironmental issues.
Since 1970, theUnited States Environmental Protection Agency(EPA) has been working to protect the environment and human health.[65]
The Environmental Protection Agency (EPA) is an independent executive agency of the United States federal government tasked with environmental protection matters.
All US states have their own state-level departments of environmental protection,[66]which may issue regulations more stringent than the federal ones.
In January 2010, EPA AdministratorLisa P. Jacksonpublished via the official EPA blog her "Seven Priorities for EPA's Future", which were (in the order originally listed):[67]
As of 2019, it is unclear whether these still represent the agency's active priorities, as Jackson departed in February 2013, and the page has not been updated in the interim.
There are numerous works of literature that contain the themes of environmental protection but some have been fundamental to its evolution. Several pieces such asA Sand County AlmanacbyAldo Leopold, "Tragedy of the commons" byGarrett Hardin, andSilent SpringbyRachel Carsonhave become classics due to their far reaching influences.[68]The conservationist and Nobel laureateWangari Muta Maathaidevoted her 2010 book Replenishing the Earth to the Green Belt Movement and the vital importance of trees in protecting the environment.
The subject of environmental protection is present in fiction as well as non-fictional literature. Books such asAntarcticaandBlockadehave environmental protection as subjects whereasThe Loraxhas become a popular metaphor for environmental protection. "The Limits of Trooghaft"[69]by Desmond Stewart is a short story that provides insight into human attitudes towards animals. Another book calledThe Martian ChroniclesbyRay Bradburyinvestigates issues such as bombs, wars, government control, and what effects these can have on the environment.
|
https://en.wikipedia.org/wiki/Environmental_protection
|
Environmental resource managementorenvironmental managementis themanagementof the interaction and impact ofhuman societieson theenvironment. It is not, as the phrase might suggest, the management of the environment itself. Environmental resources management aims to ensure thatecosystem servicesare protected and maintained for future human generations, and also maintainecosystemintegrity through consideringethical,economic, andscientific(ecological) variables.[1]Environmental resource management tries to identify factors between meeting needs and protecting resources.[2]It is thus linked toenvironmental protection,resource management,sustainability,integrated landscape management,natural resource management,fisheries management,forest management,wildlife management,environmental management systems, and others.
Environmental resource management is an issue of increasing concern, as reflected in its prevalence in several texts influencing globalsociopoliticalframeworks such as theBrundtland Commission'sOur Common Future,[3]which highlighted the integrated nature of the environment andinternational development, and theWorldwatch Institute's annualState of the Worldreports.
The environment determines the nature of people,animals,plants, and places around theEarth, affecting behaviour,religion,cultureand economic practices.
Environmental resource management can be viewed from a variety of perspectives. It involves the management of all components of thebiophysical environment, both living (biotic) and non-living (abiotic), and the relationships among all livingspeciesand theirhabitats. The environment also involves the relationships of the human environment, such as the social, cultural, and economic environment, with the biophysical environment. The essential aspects of environmental resource management are ethical, economic, social, and technological. These underlie its principles and help guide decision-making.
The concepts of environmental determinism,probabilism, andpossibilismare significant in environmental resource management.
Environmental resource management covers many areas inscience, includinggeography,biology,social sciences,political sciences,public policy,ecology,physics,chemistry,sociology,psychology, andphysiology. Environmental resource management as a practice anddiscourse(across these areas) is also the object of study in the social sciences.[4][5]
Environmental resource management strategies are intrinsically driven by conceptions ofhuman-nature relationships.[6]Ethical aspects involve the cultural and social issues relating to the environment, and dealing with changes to it. "All human activities take place in the context of certain types of relationships between society and the bio-physical world (the rest of nature),"[7]and so, there is a great significance in understanding the ethical values of different groups around the world. Broadly speaking, two schools of thought exist inenvironmental ethics:AnthropocentrismandEcocentrism, each influencing a broad spectrum of environmental resource management styles along a continuum.[6]These styles perceive "...different evidence, imperatives, and problems, and prescribe different solutions, strategies, technologies, roles for economic sectors, culture, governments, and ethics, etc."[7]
Anthropocentrism, "an inclination to evaluate reality exclusively in terms of human values,"[8]is an ethic reflected in the major interpretations of Western religions and the dominant economic paradigms of the industrialised world.[6]Anthropocentrism looks at nature as existing solely for the benefit of humans, and as a commodity to use for the good of humanity and to improve human quality of life.[9][10][11]Anthropocentric environmental resource management is therefore not the conservation of the environment solely for the environment's sake, but rather the conservation of the environment, and ecosystem structure, for humans' sake.
Ecocentrists believe in the intrinsic value of nature while maintaining that human beings must use and even exploit nature to survive and live.[12]It is this fine ethical line that ecocentrists navigate between fair use and abuse.[12]At an extreme of the ethical scale, ecocentrism includes philosophies such as ecofeminism and deep ecology, which evolved as a reaction to dominant anthropocentric paradigms.[6]"In its current form, it is an attempt to synthesize many old and some new philosophical attitudes about the relationship between nature and human activity, with particular emphasis on ethical, social, and spiritual aspects that have been downplayed in the dominant economic worldview."[13]
Main article: Economics
The economy functions within and is dependent upon goods and services provided by natural ecosystems.[14]The role of the environment is recognized in both classical economics and neoclassical economics theories, yet the environment was a lower priority in economic policies from 1950 to 1980 due to emphasis from policy makers on economic growth.[14]With the prevalence of environmental problems, many economists embraced the notion that, "If environmental sustainability must coexist for economic sustainability, then the overall system must [permit] identification of an equilibrium between the environment and the economy."[15]As such, economic policy makers began to incorporate the functions of the natural environment – or natural capital – particularly as a sink for wastes and for the provision of raw materials and amenities.[16]
Debate continues among economists as to how to account for natural capital, specifically whether resources can be replaced through knowledge and technology, or whether the environment is a closed system that cannot be replenished and is finite.[17]Economic models influence environmental resource management, in that management policies reflect beliefs about natural capital scarcity. For someone who believes natural capital is infinite and easily substituted, environmental management is irrelevant to the economy.[6]For example, economic paradigms based on neoclassical models of closed economic systems are primarily concerned with resource scarcity and thus prescribe legalizing the environment as an economic externality for an environmental resource management strategy.[6]This approach has often been termed 'command-and-control'.[6]Colby has identified trends in the development of economic paradigms, among them a shift towards more ecological economics since the 1990s.[6]
There are many definitions of the field of science commonly called ecology. A typical one is "the branch of biology dealing with the relations and interactions between organisms and their environment, including other organisms."[18]"The pairing of significant uncertainty about the behaviour and response of ecological systems with urgent calls for near-term action constitutes a difficult reality, and a common lament" for many environmental resource managers.[19]Scientific analysis of the environment deals with several dimensions of ecological uncertainty.[20]These include: structural uncertainty, resulting from the misidentification of, or lack of information pertaining to, the relationships between ecological variables; parameter uncertainty, referring to "uncertainty associated with parameter values that are not known precisely but can be assessed and reported in terms of the likelihood…of experiencing a defined range of outcomes";[21]and stochastic uncertainty, stemming from chance or unrelated factors.[20]Adaptive management[22][23]is considered a useful framework for dealing with situations of high levels of uncertainty,[24]though it is not without its detractors.[25]
A common scientific concept and impetus behind environmental resource management is carrying capacity. Simply put, carrying capacity refers to the maximum number of organisms a particular resource can sustain. The concept of carrying capacity, whilst understood by many cultures over history, has its roots in Malthusian theory. An example is visible in the EU Water Framework Directive. However, "it is argued that Western scientific knowledge ... is often insufficient to deal with the full complexity of the interplay of variables in environmental resource management."[26][27]These concerns have recently been addressed by a shift in environmental resource management approaches to incorporate different knowledge systems, including traditional knowledge,[28]reflected in approaches such as adaptive co-management,[29][30][31]community-based natural resource management,[32][33]and transitions management,[34]among others.[28]
Sustainability in environmental resource management involves managing economic, social, and ecological systems both within and outside an organizational entity so it can sustain itself and the system it exists in.[35][36]In context, sustainability implies that rather than competing for endless growth on a finite planet, development improves quality of life without necessarily consuming more resources.[37]Sustainably managing environmental resources requires organizational change that instils sustainability values, projects those values outwardly from all levels, and reinforces them among surrounding stakeholders.[35][36]The result should be a symbiotic relationship between the sustaining organization, community, and environment.
Many drivers compel environmental resource management to take sustainability issues into account. Today's economic paradigms do not protect the natural environment, yet they deepen human dependency on biodiversity and ecosystem services.[38]Ecologically, massive environmental degradation[39][40]and climate change[41][42]threaten the stability of the ecological systems that humanity depends on.[36][43]Socially, an increasing gap between rich and poor and the global North–South divide denies many access to basic human needs, rights, and education, leading to further environmental destruction.[36][43][44][45]The planet's unstable condition is caused by many anthropogenic sources.[41]As an exceptionally powerful contributing factor to social and environmental change, the modern organisation has the potential to apply environmental resource management with sustainability principles to achieve highly effective outcomes.[35][36]To achieve sustainable development with environmental resource management, an organisation should work within sustainability principles, including social and environmental accountability; long-term planning; a strong, shared vision; a holistic focus; devolved and consensus decision making; broad stakeholder engagement and justice; transparency measures; trust; and flexibility.[35][36][46]
To adjust to today's environment of quick social and ecological changes, some organizations have begun to experiment with new tools and concepts.[47][48]Those that are more traditional and stick to hierarchical decision making have difficulty dealing with the demand for lateral decision making that supports effective participation.[47]Whether it be a matter of ethics or just strategic advantage, organizations are internalizing sustainability principles.[48][49]Some of the world's largest and most profitable corporations are shifting to sustainable environmental resource management: Ford, Toyota, BMW, Honda, Shell, DuPont, Statoil,[50]Swiss Re, Hewlett-Packard, and Unilever, among others.[35][36]An extensive study by the Boston Consulting Group, surveying 1,560 business leaders from diverse regions, job positions, levels of expertise in sustainability, industries, and sizes of organizations, revealed the many benefits of sustainable practice as well as its viability.[49]
Although the sustainability of environmental resource management has improved,[35][36]corporate sustainability, for one, has yet to reach the majority of global companies operating in the markets.[46]The three major barriers preventing organizations from shifting towards sustainable practice in environmental resource management are: not understanding what sustainability is; having difficulty modeling an economically viable case for the switch; and having a flawed execution plan, or a lack thereof.[49]Therefore, the most important part of shifting an organization to adopt sustainability in environmental resource management is to create a shared vision and understanding of what sustainability is for that particular organization and to clarify the business case.[49]
The public sector comprises the general government sector plus all public corporations, including the central bank.[51]In environmental resource management the public sector is responsible for administering natural resource management and implementing environmental protection legislation.[2][52]The traditional role of the public sector in environmental resource management is to provide professional judgement through skilled technicians on behalf of the public.[47]With the increase of intractable environmental problems, the public sector has been led to examine alternative paradigms for managing environmental resources.[47]This has resulted in the public sector working collaboratively with other sectors (including other governments, private, and civil) to encourage sustainable natural resource management behaviours.[52]
The private sector comprises private corporations and non-profit institutions serving households.[53]The private sector's traditional role in environmental resource management is the recovery of natural resources.[54]Such private-sector recovery groups include mining (minerals and petroleum), forestry, and fishery organisations.[54]Environmental resource management undertaken by the private sector varies depending upon the resource type: renewable or non-renewable, and private or common (see also Tragedy of the Commons).[54]Environmental managers from the private sector also need skills to manage collaboration within a dynamic social and political environment.[47]
Civil society comprises associations in which societies voluntarily organise themselves and which represent a wide range of interests and ties.[55]These can include community-based organisations, indigenous peoples' organisations, and non-government organisations (NGOs).[55]Functioning through strong public pressure, civil society can exercise its legal rights against the implementation of resource management plans, particularly land management plans.[47]The aim of civil society in environmental resource management is to be included in the decision-making process by means of public participation.[47]Public participation can be an effective strategy to invoke a sense of social responsibility for natural resources.[47]
As with all management functions, effective management tools, standards, and systems are required. An environmental management standard, system, or protocol attempts to reduce environmental impact as measured by some objective criteria. The ISO 14001 standard is the most widely used standard for environmental risk management and is closely aligned to the European Eco-Management and Audit Scheme (EMAS). As a common auditing standard, the ISO 19011 standard explains how to combine this with quality management.
Other environmental management systems (EMS) tend to be based on the ISO 14001 standard, and many extend it in various ways:
Other strategies exist that rely on making simple distinctions rather than building top-down management "systems" using performance audits and full cost accounting. For instance, Ecological Intelligent Design divides products into consumables, service products or durables, and unsaleables – toxic products that no one should buy, or in many cases, do not realize they are buying. By eliminating the unsaleables from the comprehensive outcome of any purchase, better environmental resource management is achieved without systems.
Another example that diverges from top-down management is the implementation of community-based co-management systems of governance. An example of this is community-based subsistence fishing areas, such as is implemented in Ha'ena, Hawaii.[57]Community-based systems of governance allow the communities who most directly interact with the resource, and who are most deeply impacted by its overexploitation, to make the decisions regarding its management, thus empowering local communities and more effectively managing resources.
Recent successful cases have put forward the notion of integrated management. It takes a wider approach and stresses the importance of interdisciplinary assessment. It is an interesting notion that may not be adaptable to all cases.[58]
In Kissidougou, Guinea, the dry season brings open grass fires which defoliate the few trees in the savanna. Villages within this savanna are surrounded by "islands" of forest, which provide sites for forts, hiding, and rituals, protection from wind and fire, and shade for crops. According to scholars and researchers in the region during the late 19th and 20th centuries,[59]there was a steady decline in tree cover. This led colonial Guinea to implement policies including the switch from upland to swamp farming; bush-fire control; protection of certain species and land; and tree planting in villages. These policies were carried out in the form of permits, fines, and military repression.
But Kissidougou villagers claim their ancestors established these islands. Many maps and letters evidence France's occupation of Guinea, as well as Kissidougou's past landscape: during the 1780s to 1860s, "the whole country [was] prairie." James Fairhead and Melissa Leach, both environmental anthropologists at the University of Sussex, claim the state's environmental analyses "cast into question the relationships between society, demography, and environment." With this, they reformed the state's narratives: local land use can be both vegetation-enriching and degrading; the combined effect on resource management is greater than the sum of its parts; and there is evidence of increased population correlating with an increase in forest cover. Fairhead and Leach support enabling policy and socioeconomic conditions in which local resource management groups can act effectively. In Kissidougou, there is evidence that local powers and community efforts shaped the forest islands that shape the savanna's landscape.[60]
https://en.wikipedia.org/wiki/Environmental_resources_management
Environmental sociology is the study of interactions between societies and their natural environment. The field emphasizes the social factors that influence environmental resource management and cause environmental issues, the processes by which these environmental problems are socially constructed and defined as social issues, and societal responses to these problems.[1]
Environmental sociology emerged as a subfield of sociology in the late 1970s in response to the emergence of the environmental movement in the 1960s. It represents a relatively new area of inquiry focusing on an extension of earlier sociology through inclusion of physical context as related to social factors.[2]
Environmental sociology is typically defined as the sociological study of socio-environmental interactions, although this definition immediately presents the problem of integrating human cultures with the rest of the environment.[3]Different aspects of human interaction with the natural environment are studied by environmental sociologists, including population and demography, organizations and institutions, science and technology, health and illness, consumption and sustainability practices,[4]culture and identity,[5]and social inequality and environmental justice.[6]Although the focus of the field is the relationship between society and environment in general, environmental sociologists typically place special emphasis on studying the social factors that cause environmental problems, the societal impacts of those problems, and efforts to solve the problems. In addition, considerable attention is paid to the social processes by which certain environmental conditions become socially defined as problems. Most research in environmental sociology examines contemporary societies.
Environmental sociology emerged as a coherent subfield of inquiry after the environmental movement of the 1960s and early 1970s. The works of William R. Catton, Jr. and Riley Dunlap,[7]among others, challenged the constricted anthropocentrism of classical sociology. In the late 1970s, they called for a new holistic, or systems, perspective, which led to a marked shift in the field's focus. Since the 1970s, general sociology has noticeably transformed to include environmental forces in social explanations. Environmental sociology has now solidified as a respected, interdisciplinary field of study in academia.[8][9]
The duality of the human condition rests with cultural uniqueness and evolutionary traits. From one perspective, humans are embedded in the ecosphere and co-evolved alongside other species. Humans share the same basic ecological dependencies as other inhabitants of nature. From the other perspective, humans are distinguished from other species because of their innovative capacities, distinct cultures, and varied institutions.[10]Human creations have the power to independently manipulate, destroy, and transcend the limits of the natural environment.[11]
According to Buttel (2004), there are five major traditions in environmental sociology today: the treadmill of production and other eco-Marxisms, ecological modernization and other sociologies of environmental reform, cultural-environmental sociologies, neo-Malthusianisms, and the new ecological paradigm.[12]In practice, this means five different theories of what to blame for environmental degradation, i.e., what to research or consider as important. These ideas are listed below in the order in which they were invented; ideas that emerged later built on, and contradicted, earlier ideas.[citation needed]
Works such as Hardin's "Tragedy of the Commons" (1968) reformulated Malthusian thought about abstract population increases causing famines into a model of individual selfishness at larger scales causing degradation of common pool resources such as the air, water, the oceans, or general environmental conditions. Hardin offered privatization of resources or government regulation as solutions to environmental degradation caused by tragedy-of-the-commons conditions. Many other sociologists shared this view of solutions well into the 1970s (see Ophuls). There have been many critiques of this view, particularly from political scientist Elinor Ostrom and economists Amartya Sen and Ester Boserup.[13]Sociologists have developed a critical counter to Hardin's thesis called The Tragedy of the Commodity.
Even though much of mainstream journalism considers Malthusianism the only view of environmentalism, most sociologists would disagree with Malthusianism, since social-organizational causes of environmental degradation are better demonstrated than abstract population growth or selfishness per se. As an example of this critique, Ostrom, in her book Governing the Commons: The Evolution of Institutions for Collective Action (1990), argues that instead of self-interest always causing degradation, it can sometimes motivate people to take care of their common property resources. To do this they must change the basic organizational rules of resource use. Her research provides evidence for sustainable resource management systems around common pool resources that have lasted for centuries in some areas of the world.[14]
Amartya Sen argues in his book Poverty and Famines: An Essay on Entitlement and Deprivation (1980) that population expansion fails to cause famines or degradation as Malthusians or Neo-Malthusians argue. Instead, in documented cases, a lack of political entitlement to resources that exist in abundance causes famines in some populations. He documents how famines can occur even in the midst of plenty or in the context of low populations. He argues that famines (and environmental degradation) would only occur in non-functioning democracies or unrepresentative states.
Ester Boserup argues in her book The Conditions of Agricultural Growth: The Economics of Agrarian Change under Population Pressure (1965), from inductive, empirical case analysis, that Malthus's more deductive conception of a one-to-one relationship between agricultural scale and population is actually reversed. Instead of agricultural technology and scale determining and limiting population, as Malthus attempted to argue, Boserup argued the world is full of cases of the direct opposite: population changes and expands agricultural methods.
Eco-Marxist scholar Allan Schnaiberg (below) argues against Malthusianism with the rationale that under larger capitalist economies, environmental degradation has moved from localized, population-based degradation to degradation organizationally caused by capitalist political economies. He gives the example of the organized degradation of rainforest areas, in which states and capitalists push people off the land before it is degraded by organizational means. Thus, many authors are critical of Malthusianism, from sociologists (Schnaiberg) to economists (Sen and Boserup) to political scientists (Ostrom), and all focus on how a country's social organization of its extraction can degrade the environment independent of abstract population.
In the 1970s, the New Ecological Paradigm (NEP) conception critiqued the claimed lack of human-environmental focus in the classical sociologists and the sociological priorities their followers created. This was critiqued as the Human Exemptionalism Paradigm (HEP). The HEP viewpoint claims that human-environmental relationships were unimportant sociologically because humans are 'exempt' from environmental forces via cultural change. This view was shaped by the leading Western worldview of the time and the desire for sociology to establish itself as an independent discipline against the then popular racist-biological environmental determinism, in which environment was all. In this HEP view, human dominance was felt to be justified by the uniqueness of culture, argued to be more adaptable than biological traits. Furthermore, culture also has the capacity to accumulate and innovate, making it capable of solving all natural problems. Therefore, as humans were not conceived of as governed by natural conditions, they were felt to have complete control of their own destiny. Any potential limitation posed by the natural world was felt to be surpassed using human ingenuity. Research proceeded accordingly without environmental analysis.
In the 1970s, sociological scholars Riley Dunlap and William R. Catton, Jr. began recognizing the limits of what would be termed the Human Exemptionalism Paradigm. Catton and Dunlap (1978) suggested a new perspective that took environmental variables into full account. They coined a new theoretical outlook for sociology, the New Ecological Paradigm, with assumptions contrary to the HEP.
The NEP recognizes the innovative capacity of humans, but says that humans are still ecologically interdependent as with other species. The NEP notes the power of social and cultural forces but does not profess social determinism. Instead, humans are impacted by the cause, effect, and feedback loops of ecosystems. The Earth has a finite level of natural resources and waste repositories. Thus, the biophysical environment can impose constraints on human activity. They discussed a few harbingers of this NEP in 'hybridized' theorizing about topics that were neither exclusively social nor environmental explanations of environmental conditions. It was additionally a critique of Malthusian views of the 1960s and 1970s.
Dunlap and Catton's work immediately received a critique from Buttel, who argued to the contrary that classical sociological foundations could be found for environmental sociology, particularly in Weber's work on ancient "agrarian civilizations" and Durkheim's view of the division of labor as built on a material premise of specialization in response to material scarcity. This environmental aspect of Durkheim has been discussed by Schnaiberg (1971) as well.
The Treadmill of Production is a theory coined and popularized by Schnaiberg as a way to account for the increase in U.S. environmental degradation after World War II. At its simplest, this theory states that the more products or commodities are created, the more resources will be used, and the higher the impact will be.[15]The treadmill is a metaphor for being caught in a cycle of continuous growth which never stops, demanding more resources and as a result causing more environmental damage.
In the middle of the HEP/NEP debate, neo-Marxist ideas of conflict sociology were applied to environmental conflicts. Therefore, some sociologists wanted to stretch Marxist ideas of social conflict to analyze environmental social movements from the Marxist materialist framework instead of interpreting them as a cultural "New Social Movement" separate from material concerns. So "Eco-Marxism" was developed, based on applying neo-Marxist conflict theory concepts of the relative autonomy of the state to environmental conflict.[citation needed]
Two people following this school were James O'Connor (The Fiscal Crisis of the State, 1971) and, later, Allan Schnaiberg.
Later, a different trend developed in eco-Marxism via the attention brought to the importance of metabolic analysis in Marx's thought by John Bellamy Foster. Contrary to previous assumptions that classical theorists in sociology all had fallen within a Human Exemptionalist Paradigm, Foster argued that Marx's materialism led him to theorize labor as the metabolic process between humanity and the rest of nature.[16]In Promethean interpretations of Marx that Foster critiques, there was an assumption his analysis was very similar to the anthropocentric views critiqued by early environmental sociologists. Instead, Foster argued Marx himself was concerned about the metabolic rift generated by capitalist society's social metabolism, particularly in industrial agriculture: Marx had identified an "irreparable rift in the interdependent process of social metabolism,"[17]created by capitalist agriculture that was destroying the productivity of the land and creating wastes in urban sites that failed to be reintegrated into the land, thus leading toward the destruction of urban workers' health as well.[18]Reviewing the contribution of this thread of eco-Marxism to current environmental sociology, Pellow and Brehm conclude, "The metabolic rift is a productive development in the field because it connects current research to classical theory and links sociology with an interdisciplinary array of scientific literatures focused on ecosystem dynamics."[9]
Foster emphasized that his argument presupposed the "magisterial work" of Paul Burkett, who had developed a closely related "red-green" perspective rooted in a direct examination of Marx's value theory. Burkett and Foster proceeded to write a number of articles together on Marx's ecological conceptions, reflecting their shared perspective.[19][20][21]
More recently, Jason W. Moore, inspired by Burkett's value-analytical approach to Marx's ecology and arguing that Foster's work did not in itself go far enough, has sought to integrate the notion of metabolic rift with world systems theory, incorporating Marxian value-related conceptions.[22]For Moore, the modern world-system is a capitalist world-ecology, joining the accumulation of capital, the pursuit of power, and the production of nature in dialectical unity. Central to Moore's perspective is a philosophical re-reading of Marx's value theory, through which abstract social labor and abstract social nature are dialectically bound. Moore argues that the emergent law of value, from the sixteenth century, was evident in the extraordinary shift in the scale, scope, and speed of environmental change. What took premodern civilizations centuries to achieve—such as the deforestation of Europe in the medieval era—capitalism realized in mere decades. This world-historical rupture, argues Moore, can be explained through a law of value that regards labor productivity as the decisive metric of wealth and power in the modern world. From this standpoint, the genius of capitalist development has been to appropriate uncommodified natures—including uncommodified human natures—as a means of advancing labor productivity in the commodity system.[23]
In 1975, the highly influential work of Allan Schnaiberg transfigured environmental sociology, proposing a societal-environmental dialectic, though within the 'neo-Marxist' framework of the relative autonomy of the state as well. This conflictual concept has overwhelming political salience. First, the economic synthesis states that the desire for economic expansion will prevail over ecological concerns. Policy will decide to maximize immediate economic growth at the expense of environmental disruption. Secondly, the managed scarcity synthesis concludes that governments will attempt to control only the most dire of environmental problems to prevent health and economic disasters. This will give the appearance that governments act more environmentally consciously than they really do. Third, the ecological synthesis generates a hypothetical case where environmental degradation is so severe that political forces would respond with sustainable policies. The driving factor would be economic damage caused by environmental degradation. The economic engine would be based on renewable resources at this point. Production and consumption methods would adhere to sustainability regulations.[24]
These conflict-based syntheses have several potential outcomes. One is that the most powerful economic and political forces will preserve the status quo and bolster their dominance. Historically, this is the most common occurrence. Another potential outcome is for contending powerful parties to fall into a stalemate. Lastly, tumultuous social events may result that redistribute economic and political resources.
In 1980, the highly influential work of Allan Schnaiberg entitled The Environment: From Surplus to Scarcity[25][26][27]made a large contribution to this theme of a societal-environmental dialectic.
By the 1980s, a critique of eco-Marxism was in the offing, given empirical data from countries (mostly in Western Europe, like the Netherlands, West Germany, and somewhat the United Kingdom) that were attempting to wed environmental protection with economic growth instead of seeing them as separate. This was done through both state and capital restructuring. Major proponents of this school of research are Arthur P. J. Mol and Gert Spaargaren. Popular examples of ecological modernization would be "cradle to cradle" production cycles, industrial ecology, large-scale organic agriculture, biomimicry, permaculture, agroecology, and certain strands of sustainable development—all implying that economic growth is possible if that growth is well organized with the environment in mind.[citation needed]
Reflexive modernization

In many volumes from the late 1980s, the German sociologist Ulrich Beck first argued that our risk society is potentially being transformed by the environmental social movements of the world into structural change, without rejecting the benefits of modernization and industrialization. This is leading to a form of 'reflexive modernization', with a world of reduced risk and a better modernization process in economics, politics, and scientific practices, as these are made less beholden to a cycle of protecting risk from correction (which he calls our state's organized irresponsibility): politics creates ecodisasters, then claims responsibility in an accident, yet nothing is corrected because doing so would challenge the very structure of the operation of the economy and the private dominance of development. Beck's idea of reflexive modernization looks forward to how our ecological and social crises in the late 20th century are leading toward transformations of the whole political and economic system's institutions, making them more "rational" with ecology in mind.[citation needed]
Neoliberalism includes deregulation and free-market capitalism, and aims at reducing government spending. These neoliberal policies greatly affect environmental sociology. Since neoliberalism entails deregulation and essentially less government involvement, it leads to the commodification and privatization of unowned, state-owned, or common-property resources. Diana Liverman and Silvina Vilas note that this results in payments for environmental services; deregulation and cuts in public expenditure for environmental management; the opening up of trade and investment; and the transfer of environmental management to local or nongovernmental institutions.[28] The privatization of these resources has impacts on society, the economy, and the environment. An example that has greatly affected society is the privatization of water.
Additionally, in the 1980s, with the rise of postmodernism in the western academy and the appreciation of discourse as a form of power, some sociologists turned to analyzing environmental claims as a form of social construction more than a 'material' requirement. Proponents of this school include John A. Hannigan, particularly in Environmental Sociology: A Social Constructionist Perspective (1995). Hannigan argues for a 'soft constructionism' (environmental problems are materially real, though they require social construction to be noticed) over a 'hard constructionism' (the claim that environmental problems are entirely social constructs).
Although there was sometimes acrimonious debate between the constructivist and realist "camps" within environmental sociology in the 1990s, the two sides have found considerable common ground, as both increasingly accept that while most environmental problems have a material reality, they nonetheless become known only via human processes such as scientific knowledge, activists' efforts, and media attention. In other words, most environmental problems have a real ontological status despite our knowledge/awareness of them stemming from social processes, processes by which various conditions are constructed as problems by scientists, activists, media and other social actors. Correspondingly, environmental problems must all be understood via social processes, despite any material basis they may have external to humans. This interactiveness is now broadly accepted, but many aspects of the debate continue in contemporary research in the field.[citation needed]
United States
The 1960s built strong cultural momentum for environmental causes, giving birth to the modern environmental movement and prompting broad interest among sociologists in analyzing the movement. Widespread green consciousness moved vertically within society, resulting in a series of policy changes across many states in the U.S. and Europe in the 1970s. In the United States, this period was known as the "Environmental Decade", with the creation of the United States Environmental Protection Agency and the passing of the Endangered Species Act, the Clean Water Act, and amendments to the Clean Air Act. Earth Day of 1970, celebrated by millions of participants, represented the modern age of environmental thought. The environmental movement continued with incidents such as Love Canal.
While the current mode of thought expressed in environmental sociology was not prevalent until the 1970s, its application is now used in the analysis of ancient peoples. Societies including Easter Island, the Anasazi, and the Mayans were argued to have ended abruptly, largely due to poor environmental management. This has been challenged as the exclusive cause in later work, however (for instance by the biologically trained Jared Diamond in Collapse (2005), and in more recent work on Easter Island). The collapse of the Mayans sent a historic message that even advanced cultures are vulnerable to ecological suicide—though Diamond argues now that it was less a suicide than an environmental climate change that led to a lack of ability to adapt, and a lack of elite willingness to adapt even when faced with early signs of nearing ecological problems. At the same time, societal successes for Diamond included New Guinea and Tikopia island, whose inhabitants have lived sustainably for 46,000 years.[citation needed]
John Dryzek et al. argue in Green States and Social Movements: Environmentalism in the United States, United Kingdom, Germany, and Norway (2003)[29] that there may be a common global green environmental social movement, though its specific outcomes are nationalist, falling into four 'ideal types' of interaction between environmental movements and state power. They use as their case studies environmental social movements and state interaction from Norway, the United Kingdom, the United States, and Germany. They analyze the past 30 years of environmentalism and the different outcomes that the green movement has taken in different state contexts and cultures.[citation needed]
More recently, and roughly in temporal order below, sociologists have produced much longer-term comparative historical studies of environmental degradation. There are two general trends: many employ world-systems theory, analyzing environmental issues over long periods of time and space; others employ comparative historical methods. Some utilize both methods simultaneously, sometimes without reference to world-systems theory (like Whitaker, see below).
Stephen G. Bunker (d. 2005) and Paul S. Ciccantell collaborated on two books from a world-systems theory view, following commodity chains through the history of the modern world system, charting the changing importance of space, time, and scale of extraction, and how these variables influenced the shape and location of the main nodes of the world economy over the past 500 years.[30][31] Their view of the world was grounded in extraction economies and the politics of different states that seek to dominate the world's resources and each other, through gaining hegemonic control of major resources or restructuring global flows in them to benefit their locations.
The three-volume work of environmental world-systems theory by Sing C. Chew analyzed how "Nature and Culture" interact over long periods of time, starting with World Ecological Degradation (2001).[32][33][34] In later books, Chew argued that there were three "Dark Ages" in world environmental history, characterized by periods of state collapse and reorientation in the world economy, in which more localist frameworks of community, economy, and identity came to dominate the nature/culture relationships after state-facilitated environmental destruction had delegitimized other forms. Thus recreated communities were founded in these so-called 'Dark Ages', novel religions were popularized, and, perhaps most importantly to him, the environment had several centuries to recover from previous destruction. Chew argues that modern green politics and bioregionalism are the start of a similar movement in the present day, potentially leading to wholesale system transformation. Therefore, we may be on the edge of yet another global "dark age" that is bright rather than dark on many levels, since he argues for human community returning with environmental healing as empires collapse.
More case-oriented studies were conducted by the historical environmental sociologist Mark D. Whitaker, who analyzed China, Japan, and Europe over 2,500 years in his book Ecological Revolution (2009).[35] He argued that instead of environmental movements being "New Social Movements" peculiar to current societies, environmental movements are very old, having been expressed via religious movements in the past (or in the present, as in ecotheology) that begin to focus on material concerns of health, local ecology, and economic protest against state policy and its extractions. He argues the past and present are very similar: that for many millennia we have participated in a tragic common civilizational process of environmental degradation, economic consolidation, and lack of political representation, which has predictable outcomes. He argues that a form of bioregionalism, the bioregional state,[36] is required to deal with political corruption, in present or past societies, connected to environmental degradation.
After looking at the world history of environmental degradation from very different methods, both sociologists Sing Chew and Mark D. Whitaker came to similar conclusions and are proponents of (different forms of) bioregionalism.
Among the key journals in this field are:
https://en.wikipedia.org/wiki/Environmental_sociology
Forestry is the science and craft of creating, managing, planting, using, conserving and repairing forests and woodlands for associated resources, for human and environmental benefits.[1] Forestry is practiced in plantations and natural stands.[2] The science of forestry has elements that belong to the biological, physical, social, political and managerial sciences.[3] Forest management plays an essential role in the creation and modification of habitats and affects ecosystem services provisioning.[4]
Modern forestry generally embraces a broad range of concerns, in what is known as multiple-use management, including: the provision of timber, fuel wood, wildlife habitat, natural water quality management, recreation, landscape and community protection, employment, aesthetically appealing landscapes, biodiversity management, watershed management, erosion control, and preserving forests as "sinks" for atmospheric carbon dioxide.
Forest ecosystems have come to be seen as the most important component of the biosphere,[5] and forestry has emerged as a vital applied science, craft, and technology. A practitioner of forestry is known as a forester. Another common term is silviculturist. Silviculture is narrower than forestry, being concerned only with forest plants, but is often used synonymously with forestry.
All people depend upon forests and their biodiversity, some more than others.[6] Forestry is an important economic segment in various industrial countries,[7] as forests provide more than 86 million green jobs and support the livelihoods of many more people.[6] For example, in Germany, forests cover nearly a third of the land area,[8] wood is the most important renewable resource, and forestry supports more than a million jobs and adds about €181 billion of value to the German economy each year.[9]
Worldwide, an estimated 880 million people spend part of their time collecting fuelwood or producing charcoal, many of them women.[6][quantify] Human populations tend to be low in areas of low-income countries with high forest cover and high forest biodiversity, but poverty rates in these areas tend to be high.[6] Some 252 million people living in forests and savannahs have incomes of less than US$1.25 per day.[6]
Over the past centuries, forestry was regarded as a separate science. With the rise of ecology and environmental science, there has been a reordering in the applied sciences. In line with this view, forestry is a primary land-use science, comparable with agriculture.[10] Under these headings, the fundamentals behind the management of natural forests come by way of natural ecology. Forests or tree plantations, those whose primary purpose is the extraction of forest products, are planned and managed to utilize a mix of ecological and agroecological principles.[11] In many regions of the world there is considerable conflict between forest practices and other societal priorities such as water quality, watershed preservation, sustainable fishing, conservation, and species preservation.[12]
Silvology (Latin: silva or sylva, "forests and woods"; Ancient Greek: -λογία, -logia, "science of" or "study of") is the biological science of studying forests and woodlands, incorporating the understanding of natural forest ecosystems and the effects and development of silvicultural practices. The term complements silviculture, which deals with the art and practice of forest management.[13]
Silvology is seen as a single science for forestry and was first used by Professor Roelof A.A. Oldeman at Wageningen University.[14] It integrates the study of forests and forest ecology, dealing with single-tree autecology and natural forest ecology.
Dendrology (Ancient Greek: δένδρον, dendron, "tree"; and Ancient Greek: -λογία, -logia, "science of" or "study of") or xylology (Ancient Greek: ξύλον, ksulon, "wood") is the science and study of woody plants (trees, shrubs, and lianas), specifically their taxonomic classifications.[15] There is no sharp boundary between plant taxonomy and dendrology; woody plants not only belong to many different plant families, but these families may be made up of both woody and non-woody members. Some families include only a few woody species. Dendrology, as a discipline of industrial forestry, tends to focus on the identification of economically useful woody plants and their taxonomic interrelationships. As an academic course of study, dendrology will include all woody plants, native and non-native, that occur in a region. A related discipline is the study of sylvics, which focuses on the autecology of genera and species.
The provenance of the forest reproductive material used to plant forests has a great influence on how the trees develop, which is why it is important to use forest reproductive material of good quality and of high genetic diversity.[16] More generally, all forest management practices, including those in natural regeneration systems, may impact the genetic diversity of trees.
The term genetic diversity describes the differences in DNA sequence between individuals, as distinct from variation caused by environmental influences. The unique genetic composition of an individual (its genotype) will determine its performance (its phenotype) at a particular site.[17]
Genetic diversity is needed to maintain the vitality of forests and to provide resilience to pests and diseases. Genetic diversity also ensures that forest trees can survive, adapt and evolve under changing environmental conditions. Furthermore, genetic diversity is the foundation of biological diversity at the species and ecosystem levels. Forest genetic resources are therefore important to consider in forest management.[16]
Genetic diversity in forests is threatened by forest fires, pests and diseases, habitat fragmentation, poor silvicultural practices and inappropriate use of forest reproductive material.
About 98 million hectares of forest were affected by fire in 2015; this was mainly in the tropical domain, where fire burned about 4 percent of the total forest area in that year. More than two-thirds of the total forest area affected was in Africa and South America. Insects, diseases and severe weather events damaged about 40 million hectares of forests in 2015, mainly in the temperate and boreal domains.[18]
Furthermore, the marginal populations of many tree species are facing new threats due to the effects of climate change.[16]
Most countries in Europe have recommendations or guidelines for selecting species and provenances that can be used in a given site or zone.[17]
Forest management is a branch of forestry concerned with overall administrative, legal, economic, and social aspects, as well as scientific and technical aspects such as silviculture, forest protection, and forest regulation. This includes management for timber, aesthetics, recreation, urban values, water, wildlife, inland and nearshore fisheries, wood products, plant genetic resources, and other forest resource values.[19] Management objectives can be for conservation, utilisation, or a mixture of the two. Techniques include timber extraction, planting and replanting of different species, building and maintenance of roads and pathways through forests, and preventing fire.
The first dedicated forestry school was established by Georg Ludwig Hartig at Hungen in the Wetterau, Hesse, in 1787, though forestry had been taught earlier in central Europe, including at the University of Giessen, in Hesse-Darmstadt.
In Spain, the first forestry school was the Forest Engineering School of Madrid (Escuela Técnica Superior de Ingenieros de Montes), founded in 1844.
The first in North America, the Biltmore Forest School, was established near Asheville, North Carolina, by Carl A. Schenck on September 1, 1898, on the grounds of George W. Vanderbilt's Biltmore Estate. Another early school was the New York State College of Forestry, established at Cornell University just a few weeks later, in September 1898.
Early 19th century North American foresters went to Germany to study forestry. Some early German foresters also emigrated to North America.
In South America, the first forestry school was established in Brazil, in Viçosa, Minas Gerais, in 1962, and moved the next year to become a faculty at the Federal University of Paraná, in Curitiba.[31]
Today, forestry education typically includes training in general biology, ecology, botany, genetics, soil science, climatology, hydrology, economics and forest management. Education in the basics of sociology and political science is often considered an advantage. Professional skills in conflict resolution and communication are also important in training programs.[32]
In India, forestry education is imparted in the agricultural universities and in Forest Research Institutes (deemed universities). Four-year degree programmes are conducted in these universities at the undergraduate level. Masters and doctorate degrees are also available in these universities.
In the United States, postsecondary forestry education leading to a Bachelor's degree or Master's degree is accredited by the Society of American Foresters.[33]
In Canada the Canadian Institute of Forestry awards silver rings to graduates from accredited university BSc programs, as well as college and technical programs.[34]
In many European countries, training in forestry follows the requirements of the Bologna Process and the European Higher Education Area.
The International Union of Forest Research Organizations is the only international organization that coordinates forest science efforts worldwide.[35]
In order to keep up with changing demands and environmental factors, forestry education does not stop at graduation. Increasingly, forestry professionals engage in regular training to maintain and improve their management practices. An increasingly popular tool is the marteloscope: a one-hectare, rectangular forest site in which all trees are numbered, mapped and recorded.
These sites can be used to perform virtual thinnings and to test one's wood-quality and volume estimations, as well as tree microhabitats. This system is mainly suitable for regions with small-scale, multi-functional forest management systems.
Forestry literature is the books, journals and other publications about forestry.
The first major works about forestry in the English language included Roger Taverner's Booke of Survey (1565), John Manwood's A Brefe Collection of the Lawes of the Forrest (1592) and John Evelyn's Sylva (1662).[36]
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 (license statement/permission). Text taken from Global Forest Resources Assessment 2020 Key findings, FAO.
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 IGO (license statement/permission). Text taken from The State of the World's Forests 2020. Forests, biodiversity and people – In brief, FAO & UNEP.
This article incorporates text from a free content work. Licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken from World Food and Agriculture – Statistical Yearbook 2023, FAO.
https://en.wikipedia.org/wiki/Forestry
Present-day climate change includes both global warming—the ongoing increase in global average temperature—and its wider effects on Earth's climate system. Climate change in a broader sense also includes previous long-term changes to Earth's climate. The current rise in global temperatures is driven by human activities, especially fossil fuel burning since the Industrial Revolution.[3][4] Fossil fuel use, deforestation, and some agricultural and industrial practices release greenhouse gases.[5] These gases absorb some of the heat that the Earth radiates after it warms from sunlight, warming the lower atmosphere. Carbon dioxide, the primary gas driving global warming, has increased in concentration by about 50% since the pre-industrial era, to levels not seen for millions of years.[6]
Climate change has an increasingly large impact on the environment. Deserts are expanding, while heat waves and wildfires are becoming more common.[7] Amplified warming in the Arctic has contributed to thawing permafrost, retreat of glaciers and sea ice decline.[8] Higher temperatures are also causing more intense storms, droughts, and other weather extremes.[9] Rapid environmental change in mountains, coral reefs, and the Arctic is forcing many species to relocate or become extinct.[10] Even if efforts to minimize future warming are successful, some effects will continue for centuries. These include ocean heating, ocean acidification and sea level rise.[11]
Climate change threatens people with increased flooding, extreme heat, increased food and water scarcity, more disease, and economic loss.[12] Human migration and conflict can also be a result.[13] The World Health Organization calls climate change one of the biggest threats to global health in the 21st century.[14] Societies and ecosystems will experience more severe risks without action to limit warming.[15] Adapting to climate change through efforts like flood control measures or drought-resistant crops partially reduces climate change risks, although some limits to adaptation have already been reached.[16] Poorer communities are responsible for a small share of global emissions, yet have the least ability to adapt and are most vulnerable to climate change.[17][18]
Many climate change impacts have been observed in the first decades of the 21st century, with 2024 the warmest on record at +1.60 °C (2.88 °F) since regular tracking began in 1850.[20][21] Additional warming will increase these impacts and can trigger tipping points, such as melting all of the Greenland ice sheet.[22] Under the 2015 Paris Agreement, nations collectively agreed to keep warming "well under 2 °C". However, with the pledges made under the Agreement, global warming would still reach about 2.8 °C (5.0 °F) by the end of the century.[23] Limiting warming to 1.5 °C would require halving emissions by 2030 and achieving net-zero emissions by 2050.[24][25]
There is widespread support for climate action worldwide.[26][27] Fossil fuel use can be phased out by conserving energy and switching to energy sources that do not produce significant carbon pollution. These energy sources include wind, solar, hydro, and nuclear power.[28] Cleanly generated electricity can replace fossil fuels for powering transportation, heating buildings, and running industrial processes.[29] Carbon can also be removed from the atmosphere, for instance by increasing forest cover and farming with methods that capture carbon in soil.[30]
Before the 1980s it was unclear whether the warming effect of increased greenhouse gases was stronger than the cooling effect of airborne particulates in air pollution. Scientists at the time used the term inadvertent climate modification to refer to human impacts on the climate.[31] In the 1980s, the terms global warming and climate change became more common, often being used interchangeably.[32][33][34] Scientifically, global warming refers only to increased surface warming, while climate change describes both global warming and its effects on Earth's climate system, such as precipitation changes.[31]
Climate change can also be used more broadly to include changes to the climate that have happened throughout Earth's history.[35] Global warming—used as early as 1975[36]—became the more popular term after NASA climate scientist James Hansen used it in his 1988 testimony in the U.S. Senate.[37] Since the 2000s, climate change has increased in usage.[38] Various scientists, politicians and media may use the terms climate crisis or climate emergency to talk about climate change, and may use the term global heating instead of global warming.[39][40]
Over the last few million years the climate cycled through ice ages. One of the hotter periods was the Last Interglacial, around 125,000 years ago, when temperatures were between 0.5 °C and 1.5 °C warmer than before the start of global warming.[43] This period saw sea levels 5 to 10 metres higher than today. The most recent glacial maximum, 20,000 years ago, was some 5–7 °C colder, with sea levels over 125 metres (410 ft) lower than today.[44]
Temperatures stabilized in the current interglacial period, which began 11,700 years ago.[45] This period also saw the start of agriculture.[46] Historical patterns of warming and cooling, like the Medieval Warm Period and the Little Ice Age, did not occur at the same time across different regions. Temperatures may have reached as high as those of the late 20th century in a limited set of regions.[47][48] Climate information for that period comes from climate proxies, such as trees and ice cores.[49][50]
Around 1850, thermometer records began to provide global coverage.[53] Between the 18th century and 1970 there was little net warming, as the warming impact of greenhouse gas emissions was offset by cooling from sulfur dioxide emissions. Sulfur dioxide causes acid rain, but it also produces sulfate aerosols in the atmosphere, which reflect sunlight and cause global dimming. After 1970, the increasing accumulation of greenhouse gases and controls on sulfur pollution led to a marked increase in temperature.[54][55][56]
Ongoing changes in climate have no precedent for several thousand years.[57] Multiple independent datasets all show worldwide increases in surface temperature,[58] at a rate of around 0.2 °C per decade.[59] The 2014–2023 decade warmed to an average 1.19 °C [1.06–1.30 °C] above the pre-industrial baseline (1850–1900).[60] Not every single year was warmer than the last: internal climate variability processes can make any year 0.2 °C warmer or colder than the average.[61] From 1998 to 2013, negative phases of two such processes, the Pacific Decadal Oscillation (PDO)[62] and the Atlantic Multidecadal Oscillation (AMO),[63] caused a short period of slower warming called the "global warming hiatus".[64] After the "hiatus", the opposite occurred, with 2024 well above the recent average at more than +1.5 °C.[65] This is why the temperature change is defined in terms of a 20-year average, which reduces the noise of hot and cold years and decadal climate patterns, and detects the long-term signal.[66]: 5 [67]
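The effect of the 20-year averaging described above can be illustrated with a small simulation. This is only a sketch on synthetic numbers, not real climate data: the 0.02 °C/yr trend and ±0.2 °C annual noise are assumed values chosen to mimic the rates quoted in the text.

```python
import random
import statistics

# Synthetic annual anomalies: an assumed 0.02 °C/yr trend (0.2 °C/decade)
# plus assumed year-to-year noise of about ±0.2 °C.
random.seed(42)
anomalies = [0.02 * t + random.gauss(0, 0.2) for t in range(74)]  # e.g. 1950-2023

def rolling_mean(series, window=20):
    """Trailing 20-year moving average of a yearly series."""
    return [statistics.mean(series[i - window + 1:i + 1])
            for i in range(window - 1, len(series))]

smoothed = rolling_mean(anomalies)

# Average year-to-year jump: large in the raw series (dominated by noise),
# small after smoothing (dominated by the slow trend).
raw_jump = statistics.mean(abs(b - a) for a, b in zip(anomalies, anomalies[1:]))
smooth_jump = statistics.mean(abs(b - a) for a, b in zip(smoothed, smoothed[1:]))
```

In a run like this, the smoothed series changes far less from year to year than the raw one, which is exactly why a single hot or cold year (or a decadal pattern like the "hiatus") does not define the long-term signal.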
A wide range of other observations reinforce the evidence of warming.[68][69] The upper atmosphere is cooling, because greenhouse gases are trapping heat near the Earth's surface, so less heat is radiating into space.[70] Warming reduces average snow cover and forces the retreat of glaciers. At the same time, warming also causes greater evaporation from the oceans, leading to more atmospheric humidity and more and heavier precipitation.[71][72] Plants are flowering earlier in spring, and thousands of animal species have been permanently moving to cooler areas.[73]
Different regions of the world warm at different rates. The pattern is independent of where greenhouse gases are emitted, because the gases persist long enough to diffuse across the planet. Since the pre-industrial period, the average surface temperature over land regions has increased almost twice as fast as the global average surface temperature.[74] This is because oceans lose more heat by evaporation and can store a lot of heat.[75] The thermal energy in the global climate system has grown with only brief pauses since at least 1970, and over 90% of this extra energy has been stored in the ocean.[76][77] The rest has heated the atmosphere, melted ice, and warmed the continents.[78]
The Northern Hemisphere and the North Pole have warmed much faster than the South Pole and Southern Hemisphere. The Northern Hemisphere not only has much more land, but also more seasonal snow cover and sea ice. As these surfaces flip from reflecting a lot of light to being dark after the ice has melted, they start absorbing more heat.[79] Local black carbon deposits on snow and ice also contribute to Arctic warming.[80] Arctic surface temperatures are increasing between three and four times faster than in the rest of the world.[81][82][83] Melting of ice sheets near the poles weakens both the Atlantic and the Antarctic limb of the thermohaline circulation, which further changes the distribution of heat and precipitation around the globe.[84][85][86][87]
The World Meteorological Organization estimates there is almost a 50% chance of the five-year average global temperature exceeding +1.5 °C between 2024 and 2028.[90] The IPCC expects the 20-year average to exceed +1.5 °C in the early 2030s.[91]
The IPCC Sixth Assessment Report (2021) included projections that by 2100 global warming is very likely to reach 1.0–1.8 °C under a scenario with very low emissions of greenhouse gases, 2.1–3.5 °C under an intermediate emissions scenario, or 3.3–5.7 °C under a very high emissions scenario.[92] The warming will continue past 2100 in the intermediate and high emission scenarios,[93][94] with projections of global surface temperatures by the year 2300 being similar to those of millions of years ago.[95]
The remaining carbon budget for staying beneath certain temperature increases is determined by modelling the carbon cycle and climate sensitivity to greenhouse gases.[96] According to UNEP, global warming can be kept below 1.5 °C with a 50% chance if emissions after 2023 do not exceed 200 gigatonnes of CO2. This corresponds to around 4 years of current emissions. To stay under 2.0 °C, the carbon budget is 900 gigatonnes of CO2, or 16 years of current emissions.[97]
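The "years of current emissions" figures above follow from simple division. A minimal sketch, assuming an annual CO2 emission rate of about 56 Gt (a rough stand-in implied by the quoted budget-to-years ratios, not a sourced value):

```python
# Back-of-envelope check of the carbon-budget arithmetic.
# ANNUAL_EMISSIONS_GT is an assumed rate, not a sourced figure.
ANNUAL_EMISSIONS_GT = 56

budgets_gt = {
    "1.5 °C (50% chance)": 200,  # Gt CO2 remaining after 2023
    "2.0 °C": 900,
}

for target, budget in budgets_gt.items():
    years_left = budget / ANNUAL_EMISSIONS_GT
    print(f"{target}: {budget} Gt CO2 ≈ {years_left:.0f} years at current rates")
```

At this assumed rate, 200 Gt works out to roughly 4 years and 900 Gt to roughly 16 years, matching the text.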
The climate system experiences various cycles on its own which can last for years, decades or even centuries. For example, El Niño events cause short-term spikes in surface temperature while La Niña events cause short-term cooling.[98] Their relative frequency can affect global temperature trends on a decadal timescale.[99] Other changes are caused by an imbalance of energy from external forcings.[100] Examples of these include changes in the concentrations of greenhouse gases, solar luminosity, volcanic eruptions, and variations in the Earth's orbit around the Sun.[101]
To determine the human contribution to climate change, unique "fingerprints" for all potential causes are developed and compared with both observed patterns and known internal climate variability.[102] For example, solar forcing—whose fingerprint involves warming the entire atmosphere—is ruled out because only the lower atmosphere has warmed.[103] Atmospheric aerosols produce a smaller, cooling effect. Other drivers, such as changes in albedo, are less impactful.[104]
Greenhouse gases are transparent to sunlight, and thus allow it to pass through the atmosphere to heat the Earth's surface. The Earth radiates it as heat, and greenhouse gases absorb a portion of it. This absorption slows the rate at which heat escapes into space, trapping heat near the Earth's surface and warming it over time.[105]
While water vapour (≈50%) and clouds (≈25%) are the biggest contributors to the greenhouse effect, they primarily change as a function of temperature and are therefore mostly considered to be feedbacks that change climate sensitivity. On the other hand, concentrations of gases such as CO2 (≈20%), tropospheric ozone,[106] CFCs and nitrous oxide are added or removed independently of temperature, and are therefore considered to be external forcings that change global temperatures.[107]
Before the Industrial Revolution, naturally occurring amounts of greenhouse gases caused the air near the surface to be about 33 °C warmer than it would have been in their absence.[108][109] Human activity since the Industrial Revolution, mainly extracting and burning fossil fuels (coal, oil, and natural gas),[110] has increased the amount of greenhouse gases in the atmosphere. In 2022, the concentrations of CO2 and methane had increased by about 50% and 164%, respectively, since 1750.[111] These CO2 levels are higher than they have been at any time during the last 14 million years.[112] Concentrations of methane are far higher than they were over the last 800,000 years.[113]
Global human-caused greenhouse gas emissions in 2019 were equivalent to 59 billion tonnes of CO2. Of these emissions, 75% was CO2, 18% was methane, 4% was nitrous oxide, and 2% was fluorinated gases.[114] CO2 emissions primarily come from burning fossil fuels to provide energy for transport, manufacturing, heating, and electricity.[5] Additional CO2 emissions come from deforestation and industrial processes, which include the CO2 released by the chemical reactions for making cement, steel, aluminum, and fertilizer.[115][116][117][118] Methane emissions come from livestock, manure, rice cultivation, landfills, wastewater, and coal mining, as well as oil and gas extraction.[119][120] Nitrous oxide emissions largely come from the microbial decomposition of fertilizer.[121][122]
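The gas-by-gas split of the 59 billion tonnes of CO2-equivalent cited above works out as follows; this is a simple arithmetic sketch of the percentages in the text:

```python
# Breakdown of 2019 emissions (59 Gt CO2e) by the percentages cited above.
TOTAL_GT_CO2E = 59.0
shares = {
    "CO2": 0.75,
    "methane": 0.18,
    "nitrous oxide": 0.04,
    "fluorinated gases": 0.02,
}
for gas, share in shares.items():
    print(f"{gas}: {share * TOTAL_GT_CO2E:.1f} Gt CO2e")
# The shares sum to 99% because each figure is rounded in the source.
```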
While methane only lasts in the atmosphere for an average of 12 years,[123] CO2 lasts much longer. The Earth's surface absorbs CO2 as part of the carbon cycle. While plants on land and in the ocean absorb most excess emissions of CO2 every year, that CO2 is returned to the atmosphere when biological matter is digested, burns, or decays.[124] Land-surface carbon sink processes, such as carbon fixation in the soil and photosynthesis, remove about 29% of annual global CO2 emissions.[125] The ocean has absorbed 20 to 30% of emitted CO2 over the last two decades.[126] CO2 is only removed from the atmosphere for the long term when it is stored in the Earth's crust, which is a process that can take millions of years to complete.[124]
Around 30% of Earth's land area is largely unusable for humans (glaciers, deserts, etc.), 26% is forests, 10% is shrubland and 34% is agricultural land.[128] Deforestation is the main land use change contributor to global warming,[129] as the destroyed trees release CO2 and are not replaced by new trees, removing that carbon sink.[130] Between 2001 and 2018, 27% of deforestation was from permanent clearing to enable agricultural expansion for crops and livestock. Another 24% has been lost to temporary clearing under shifting cultivation agricultural systems. 26% was due to logging for wood and derived products, and wildfires have accounted for the remaining 23%.[131] Some forests have not been fully cleared, but were already degraded by these impacts. Restoring these forests also recovers their potential as a carbon sink.[132]
Local vegetation cover impacts how much of the sunlight gets reflected back into space (albedo), and how much heat is lost by evaporation. For instance, the change from a dark forest to grassland makes the surface lighter, causing it to reflect more sunlight. Deforestation can also affect temperatures by modifying the release of chemical compounds that influence clouds, and by changing wind patterns.[133] In tropical and temperate areas the net effect is to produce significant warming, and forest restoration can make local temperatures cooler.[132] At latitudes closer to the poles, there is a cooling effect as forest is replaced by snow-covered (and more reflective) plains.[133] Globally, these increases in surface albedo have been the dominant direct influence on temperature from land use change. Thus, land use change to date is estimated to have a slight cooling effect.[134]
Air pollution, in the form of aerosols, affects the climate on a large scale.[135] Aerosols scatter and absorb solar radiation. From 1961 to 1990, a gradual reduction in the amount of sunlight reaching the Earth's surface was observed. This phenomenon is popularly known as global dimming,[136] and is primarily attributed to sulfate aerosols produced by the combustion of fossil fuels with heavy sulfur concentrations, like coal and bunker fuel.[56] Smaller contributions come from black carbon (from combustion of fossil fuels and biomass), and from dust.[137][138][139] Globally, aerosols have been declining since 1990 due to pollution controls, meaning that they no longer mask greenhouse gas warming as much.[140][56]
Aerosols also have indirect effects on the Earth's energy budget. Sulfate aerosols act as cloud condensation nuclei and lead to clouds that have more and smaller cloud droplets. These clouds reflect solar radiation more efficiently than clouds with fewer and larger droplets.[141] They also reduce the growth of raindrops, which makes clouds more reflective to incoming sunlight.[142] Indirect effects of aerosols are the largest uncertainty in radiative forcing.[143]
While aerosols typically limit global warming by reflecting sunlight, black carbon in soot that falls on snow or ice can contribute to global warming. Not only does this increase the absorption of sunlight, it also increases melting and sea-level rise.[144] Limiting new black carbon deposits in the Arctic could reduce global warming by 0.2 °C by 2050.[145] The effect of decreasing the sulfur content of fuel oil for ships since 2020[146] is estimated to cause an additional 0.05 °C increase in global mean temperature by 2050.[147]
As the Sun is the Earth's primary energy source, changes in incoming sunlight directly affect the climate system.[143] Solar irradiance has been measured directly by satellites,[150] and indirect measurements are available from the early 1600s onwards.[143] Since 1880, there has been no upward trend in the amount of the Sun's energy reaching the Earth, in contrast to the warming of the lower atmosphere (the troposphere).[151] The upper atmosphere (the stratosphere) would also be warming if the Sun were sending more energy to Earth, but instead, it has been cooling.[103] This is consistent with greenhouse gases preventing heat from leaving the Earth's atmosphere.[152]
Explosive volcanic eruptions can release gases, dust and ash that partially block sunlight and reduce temperatures, or they can send water vapour into the atmosphere, which adds to greenhouse gases and increases temperatures.[153] These impacts on temperature only last for several years, because both water vapour and volcanic material have low persistence in the atmosphere.[154] Volcanic CO2 emissions are more persistent, but they are equivalent to less than 1% of current human-caused CO2 emissions.[155] Volcanic activity still represents the single largest natural impact (forcing) on temperature in the industrial era. Yet, like the other natural forcings, it has had negligible impacts on global temperature trends since the Industrial Revolution.[154]
The climate system's response to an initial forcing is shaped by feedbacks, which either amplify or dampen the change. Self-reinforcing or positive feedbacks increase the response, while balancing or negative feedbacks reduce it.[157] The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds.[158][159] The primary balancing mechanism is radiative cooling, as Earth's surface gives off more heat to space in response to rising temperature.[160] In addition to temperature feedbacks, there are feedbacks in the carbon cycle, such as the fertilizing effect of CO2 on plant growth.[161] Feedbacks are expected to trend in a positive direction as greenhouse gas emissions continue, raising climate sensitivity.[162]
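The amplifying role of net-positive feedbacks is often summarized with the gain relation ΔT = ΔT0 / (1 − f). The numbers below (a ~1.2 °C no-feedback response per CO2 doubling, and a net feedback factor of 0.6) are common textbook illustrations, not figures from this article:

```python
def amplified_warming(no_feedback_warming: float, f: float) -> float:
    """Warming after feedbacks, using the gain relation dT = dT0 / (1 - f).

    f is the net feedback factor: positive values amplify the response,
    negative values dampen it.
    """
    assert f < 1, "f >= 1 would imply a runaway response"
    return no_feedback_warming / (1 - f)

PLANCK_ONLY = 1.2  # K per CO2 doubling without feedbacks (textbook estimate)
print(amplified_warming(PLANCK_ONLY, 0.0))   # no feedbacks: 1.2 K
print(amplified_warming(PLANCK_ONLY, 0.6))   # net positive feedbacks: ~3 K
print(amplified_warming(PLANCK_ONLY, -0.5))  # net negative feedbacks dampen it
```

With f = 0.6, the same initial forcing produces roughly 2.5 times the no-feedback warming, which is why the sign and size of the net feedback matter so much for climate sensitivity.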
These feedback processes alter the pace of global warming. For instance, warmer air can hold more moisture in the form of water vapour, which is itself a potent greenhouse gas.[158] Warmer air can also make clouds higher and thinner, and therefore more insulating, increasing climate warming.[163] The reduction of snow cover and sea ice in the Arctic is another major feedback; it reduces the reflectivity of the Earth's surface in the region and accelerates Arctic warming.[164][165] This additional warming also contributes to permafrost thawing, which releases methane and CO2 into the atmosphere.[166]
Around half of human-caused CO2 emissions have been absorbed by land plants and by the oceans.[167] This fraction is not static: if future CO2 emissions decrease, the Earth will be able to absorb up to around 70%. If they increase substantially, it will still absorb more carbon than now, but the overall fraction will decrease to below 40%.[168] This is because climate change increases droughts and heat waves that eventually inhibit plant growth on land, and soils will release more carbon from dead plants when they are warmer.[169][170] The rate at which oceans absorb atmospheric carbon will be lowered as they become more acidic and experience changes in thermohaline circulation and phytoplankton distribution.[171][172][85] Uncertainty over feedbacks, particularly cloud cover,[173] is the major reason why different climate models project different magnitudes of warming for a given amount of emissions.[174]
A climate model is a representation of the physical, chemical and biological processes that affect the climate system.[175] Models include natural processes like changes in the Earth's orbit, historical changes in the Sun's activity, and volcanic forcing.[176] Models are used to estimate the degree of warming future emissions will cause when accounting for the strength of climate feedbacks.[177][178] Models also predict the circulation of the oceans, the annual cycle of the seasons, and the flows of carbon between the land surface and the atmosphere.[179]
The physical realism of models is tested by examining their ability to simulate current or past climates.[180] Past models have underestimated the rate of Arctic shrinkage[181] and underestimated the rate of precipitation increase.[182] Sea level rise since 1990 was underestimated in older models, but more recent models agree well with observations.[183] The 2017 United States-published National Climate Assessment notes that "climate models may still be underestimating or missing relevant feedback processes".[184] Additionally, climate models may be unable to adequately predict short-term regional climatic shifts.[185]
A subset of climate models adds societal factors to a physical climate model. These models simulate how population, economic growth, and energy use affect—and interact with—the physical climate. With this information, these models can produce scenarios of future greenhouse gas emissions. This is then used as input for physical climate models and carbon cycle models to predict how atmospheric concentrations of greenhouse gases might change.[186][187] Depending on the socioeconomic scenario and the mitigation scenario, models produce atmospheric CO2 concentrations that range widely between 380 and 1400 ppm.[188]
The environmental effects of climate change are broad and far-reaching, affecting oceans, ice, and weather. Changes may occur gradually or rapidly. Evidence for these effects comes from studying climate change in the past, from modelling, and from modern observations.[190] Since the 1950s, droughts and heat waves have appeared simultaneously with increasing frequency.[191] Extremely wet or dry events within the monsoon period have increased in India and East Asia.[192] Monsoonal precipitation over the Northern Hemisphere has increased since 1980.[193] The rainfall rate and intensity of hurricanes and typhoons is likely increasing,[194] and their geographic range is likely expanding poleward in response to climate warming.[195] The frequency of tropical cyclones has not increased as a result of climate change.[196]
Global sea level is rising as a consequence of thermal expansion and the melting of glaciers and ice sheets. Sea level rise has increased over time, reaching 4.8 cm per decade between 2014 and 2023.[198] Over the 21st century, the IPCC projects 32–62 cm of sea level rise under a low emission scenario, 44–76 cm under an intermediate one and 65–101 cm under a very high emission scenario.[199] Marine ice sheet instability processes in Antarctica may add substantially to these values,[200] including the possibility of a 2-metre sea level rise by 2100 under high emissions.[201]
Climate change has led to decades of shrinking and thinning of the Arctic sea ice.[202] While ice-free summers are expected to be rare at 1.5 °C of warming, they are set to occur once every three to ten years at a warming level of 2 °C.[203] Higher atmospheric CO2 concentrations cause more CO2 to dissolve in the oceans, which is making them more acidic.[204] Because oxygen is less soluble in warmer water,[205] its concentrations in the ocean are decreasing, and dead zones are expanding.[206]
Greater degrees of global warming increase the risk of passing through 'tipping points'—thresholds beyond which certain major impacts can no longer be avoided even if temperatures return to their previous state.[209][210] For instance, the Greenland ice sheet is already melting, but if global warming reaches levels between 1.7 °C and 2.3 °C, its melting will continue until it fully disappears. If the warming is later reduced to 1.5 °C or less, it will still lose a lot more ice than if the warming was never allowed to reach the threshold in the first place.[211] While the ice sheets would melt over millennia, other tipping points would occur faster and give societies less time to respond. The collapse of major ocean currents like the Atlantic meridional overturning circulation (AMOC), and irreversible damage to key ecosystems like the Amazon rainforest and coral reefs, can unfold in a matter of decades.[208] The collapse of the AMOC would be a severe climate catastrophe, resulting in a cooling of the Northern Hemisphere.[212]
The long-term effects of climate change on oceans include further ice melt, ocean warming, sea level rise, ocean acidification and ocean deoxygenation.[213] The timescale of long-term impacts is centuries to millennia due to CO2's long atmospheric lifetime.[214] The result is an estimated total sea level rise of 2.3 metres per degree Celsius (4.2 ft/°F) after 2000 years.[215] Oceanic CO2 uptake is slow enough that ocean acidification will also continue for hundreds to thousands of years.[216] Deep oceans (below 2,000 metres (6,600 ft)) are also already committed to losing over 10% of their dissolved oxygen by the warming which has occurred to date.[217] Further, the West Antarctic ice sheet appears committed to practically irreversible melting, which would increase sea levels by at least 3.3 m (10 ft 10 in) over approximately 2000 years.[208][218][219]
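The parenthetical unit conversion above (2.3 metres per °C ≈ 4.2 ft/°F) can be checked directly:

```python
# Convert the cited long-term sea level sensitivity from m/°C to ft/°F.
M_PER_FT = 0.3048       # metres in one foot (exact definition)
F_PER_C = 1.8           # Fahrenheit degrees per Celsius degree
rate_m_per_c = 2.3      # metres of sea level rise per °C, as cited above

rate_ft_per_f = rate_m_per_c / M_PER_FT / F_PER_C
print(f"{rate_ft_per_f:.1f} ft/°F")  # ~4.2
```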
Recent warming has driven many terrestrial and freshwater species poleward and towards higher altitudes.[220] For instance, the range of hundreds of North American birds has shifted northward at an average rate of 1.5 km/year over the past 55 years.[221] Higher atmospheric CO2 levels and an extended growing season have resulted in global greening. However, heatwaves and drought have reduced ecosystem productivity in some regions. The future balance of these opposing effects is unclear.[222] A related phenomenon driven by climate change is woody plant encroachment, affecting up to 500 million hectares globally.[223] Climate change has contributed to the expansion of drier climate zones, such as the expansion of deserts in the subtropics.[224] The size and speed of global warming is making abrupt changes in ecosystems more likely.[225] Overall, it is expected that climate change will result in the extinction of many species.[226]
The oceans have heated more slowly than the land, but plants and animals in the ocean have migrated towards the colder poles faster than species on land.[227] Just as on land, heat waves in the ocean occur more frequently due to climate change, harming a wide range of organisms such as corals, kelp, and seabirds.[228] Ocean acidification makes it harder for marine calcifying organisms such as mussels, barnacles and corals to produce shells and skeletons, and heatwaves have bleached coral reefs.[229] Harmful algal blooms enhanced by climate change and eutrophication lower oxygen levels, disrupt food webs and cause great loss of marine life.[230] Coastal ecosystems are under particular stress. Almost half of global wetlands have disappeared due to climate change and other human impacts.[231] Plants have come under increased stress from damage by insects.[232]
The effects of climate change are impacting humans everywhere in the world.[238] Impacts can be observed on all continents and ocean regions,[239] with low-latitude, less developed areas facing the greatest risk.[240] Continued warming has potentially "severe, pervasive and irreversible impacts" for people and ecosystems.[241] The risks are unevenly distributed, but are generally greater for disadvantaged people in developing and developed countries.[242]
The World Health Organization calls climate change one of the biggest threats to global health in the 21st century.[14] Scientists have warned about the irreversible harms it poses.[243] Extreme weather events affect public health, and food and water security.[244][245][246] Temperature extremes lead to increased illness and death.[244][245] Climate change increases the intensity and frequency of extreme weather events.[245][246] It can affect transmission of infectious diseases, such as dengue fever and malaria.[243][244] According to the World Economic Forum, 14.5 million more deaths are expected due to climate change by 2050.[247] 30% of the global population currently lives in areas where extreme heat and humidity are already associated with excess deaths.[248][249] By 2100, 50% to 75% of the global population would live in such areas.[248][250]
While total crop yields have been increasing in the past 50 years due to agricultural improvements, climate change has already decreased the rate of yield growth.[246] Fisheries have been negatively affected in multiple regions.[246] While agricultural productivity has been positively affected in some high latitude areas, mid- and low-latitude areas have been negatively affected.[246] According to the World Economic Forum, an increase in drought in certain regions could cause 3.2 million deaths from malnutrition by 2050 and stunting in children.[251] With 2 °C warming, global livestock headcounts could decline by 7–10% by 2050, as less animal feed will be available.[252] If emissions continue to increase for the rest of the century, then over 9 million climate-related deaths would occur annually by 2100.[253]
Economic damages due to climate change may be severe and there is a chance of disastrous consequences.[254] Severe impacts are expected in South-East Asia and sub-Saharan Africa, where most of the local inhabitants are dependent upon natural and agricultural resources.[255][256] Heat stress can prevent outdoor labourers from working. If warming reaches 4 °C, then labour capacity in those regions could be reduced by 30 to 50%.[257] The World Bank estimates that between 2016 and 2030, climate change could drive over 120 million people into extreme poverty without adaptation.[258]
Inequalities based on wealth and social status have worsened due to climate change.[259] Major difficulties in mitigating, adapting to, and recovering from climate shocks are faced by marginalized people who have less control over resources.[260][255] Indigenous people, who depend on their land and ecosystems for subsistence, will face threats to their wellness and lifestyles due to climate change.[261] An expert elicitation concluded that the role of climate change in armed conflict has been small compared to factors such as socio-economic inequality and state capabilities.[262]
While women are not inherently more at risk from climate change and shocks, limits on women's resources and discriminatory gender norms constrain their adaptive capacity and resilience.[263]For example, women's work burdens, including hours worked in agriculture, tend to decline less than men's during climate shocks such as heat stress.[263]
Low-lying islands and coastal communities are threatened by sea level rise, which makes urban flooding more common. Sometimes, land is permanently lost to the sea.[264] This could lead to statelessness for people in island nations, such as the Maldives and Tuvalu.[265] In some regions, the rise in temperature and humidity may be too severe for humans to adapt to.[266] With worst-case climate change, models project that almost one-third of humanity might live in Sahara-like uninhabitable and extremely hot climates.[267]
These factors can drive climate or environmental migration, within and between countries.[268] More people are expected to be displaced because of sea level rise, extreme weather and conflict from increased competition over natural resources. Climate change may also increase vulnerability, leading to "trapped populations" who are not able to move due to a lack of resources.[269]
Climate change can be mitigated by reducing the rate at which greenhouse gases are emitted into the atmosphere, and by increasing the rate at which carbon dioxide is removed from the atmosphere.[275] To limit global warming to less than 1.5 °C, global greenhouse gas emissions need to be net-zero by 2050, or by 2070 with a 2 °C target.[276] This requires far-reaching, systemic changes on an unprecedented scale in energy, land, cities, transport, buildings, and industry.[277]
The United Nations Environment Programme estimates that countries need to triple their pledges under the Paris Agreement within the next decade to limit global warming to 2 °C. An even greater level of reduction is required to meet the 1.5 °C goal.[278] With pledges made under the Paris Agreement as of 2024, there would be a 66% chance that global warming is kept under 2.8 °C by the end of the century (range: 1.9–3.7 °C, depending on exact implementation and technological progress). When only considering current policies, this rises to 3.1 °C.[279] Globally, limiting warming to 2 °C may result in higher economic benefits than economic costs.[280]
Although there is no single pathway to limit global warming to 1.5 or 2 °C,[281] most scenarios and strategies see a major increase in the use of renewable energy in combination with increased energy efficiency measures to generate the needed greenhouse gas reductions.[282] To reduce pressures on ecosystems and enhance their carbon sequestration capabilities, changes would also be necessary in agriculture and forestry,[283] such as preventing deforestation and restoring natural ecosystems by reforestation.[284]
Other approaches to mitigating climate change have a higher level of risk. Scenarios that limit global warming to 1.5 °C typically project the large-scale use of carbon dioxide removal methods over the 21st century.[285] There are concerns, though, about over-reliance on these technologies, and environmental impacts.[286]
Solar radiation modification (SRM) is a proposal for reducing global warming by reflecting some sunlight away from Earth and back into space. Because it does not reduce greenhouse gas concentrations, it would not address ocean acidification[287] and is not considered mitigation.[288] SRM should be considered only as a supplement to mitigation, not a replacement for it,[289] due to risks such as rapid warming if it were abruptly stopped and not restarted.[290] The most-studied approach is stratospheric aerosol injection.[291] SRM could reduce global warming and some of its impacts, though imperfectly.[292] It poses environmental risks, such as changes to rainfall patterns,[293] as well as political challenges, such as who would decide whether to use it.[294]
Renewable energy is key to limiting climate change.[296] For decades, fossil fuels have accounted for roughly 80% of the world's energy use.[297] The remaining share has been split between nuclear power and renewables (including hydropower, bioenergy, wind and solar power, and geothermal energy).[298] Fossil fuel use is expected to peak in absolute terms prior to 2030 and then to decline, with coal use experiencing the sharpest reductions.[299] Renewables represented 86% of all new electricity generation installed in 2023.[300] Other forms of clean energy, such as nuclear and hydropower, currently have a larger share of the energy supply. However, their future growth forecasts appear limited in comparison.[301]
While solar panels and onshore wind are now among the cheapest forms of adding new power generation capacity in many locations,[302] green energy policies are needed to achieve a rapid transition from fossil fuels to renewables.[303] To achieve carbon neutrality by 2050, renewable energy would become the dominant form of electricity generation, rising to 85% or more by 2050 in some scenarios. Investment in coal would be eliminated and coal use nearly phased out by 2050.[304][305]
Electricity generated from renewable sources would also need to become the main energy source for heating and transport.[306] Transport can switch away from internal combustion engine vehicles and towards electric vehicles, public transit, and active transport (cycling and walking).[307][308] For shipping and flying, low-carbon fuels would reduce emissions.[307] Heating could be increasingly decarbonized with technologies like heat pumps.[309]
There are obstacles to the continued rapid growth of clean energy, including renewables.[310] Wind and solar produce energy intermittently and with seasonal variability. Traditionally, hydro dams with reservoirs and fossil fuel power plants have been used when variable energy production is low. Going forward, battery storage can be expanded, energy demand and supply can be matched, and long-distance transmission can smooth variability of renewable outputs.[296] Bioenergy is often not carbon-neutral and may have negative consequences for food security.[311] The growth of nuclear power is constrained by controversy around radioactive waste, nuclear weapon proliferation, and accidents.[312][313] Hydropower growth is limited by the fact that the best sites have been developed, and new projects are confronting increased social and environmental concerns.[314]
Low-carbon energy improves human health by minimizing climate change as well as reducing air pollution deaths,[315] which were estimated at 7 million annually in 2016.[316] Meeting the Paris Agreement goals that limit warming to a 2 °C increase could save about a million of those lives per year by 2050, whereas limiting global warming to 1.5 °C could save millions and simultaneously increase energy security and reduce poverty.[317] Improving air quality also has economic benefits which may be larger than mitigation costs.[318]
Reducing energy demand is another major aspect of reducing emissions.[319] If less energy is needed, there is more flexibility for clean energy development. It also makes it easier to manage the electricity grid, and minimizes carbon-intensive infrastructure development.[320] Major increases in energy efficiency investment will be required to achieve climate goals, comparable to the level of investment in renewable energy.[321] Several COVID-19-related changes in energy use patterns, energy efficiency investments, and funding have made forecasts for this decade more difficult and uncertain.[322]
Strategies to reduce energy demand vary by sector. In the transport sector, passengers and freight can switch to more efficient travel modes, such as buses and trains, or use electric vehicles.[323]Industrial strategies to reduce energy demand include improving heating systems and motors, designing less energy-intensive products, and increasing product lifetimes.[324]In the building sector the focus is on better design of new buildings, and higher levels of energy efficiency in retrofitting.[325]The use of technologies like heat pumps can also increase building energy efficiency.[326]
Agriculture and forestry face a triple challenge of limiting greenhouse gas emissions, preventing the further conversion of forests to agricultural land, and meeting increases in world food demand.[327]A set of actions could reduce agriculture and forestry-based emissions by two-thirds from 2010 levels. These include reducing growth in demand for food and other agricultural products, increasing land productivity, protecting and restoring forests, and reducing greenhouse gas emissions from agricultural production.[328]
On the demand side, a key component of reducing emissions is shifting people towards plant-based diets.[329] Eliminating the production of livestock for meat and dairy would eliminate about three-quarters of all emissions from agriculture and other land use.[330] Livestock also occupy 37% of the ice-free land area on Earth and consume feed from the 12% of land area used for crops, driving deforestation and land degradation.[331]
Steel and cement production are responsible for about 13% of industrial CO2 emissions. In these industries, carbon-intensive materials such as coke and lime play an integral role in the production, so that reducing CO2 emissions requires research into alternative chemistries.[332] Where energy production or CO2-intensive heavy industries continue to produce waste CO2, technology can sometimes be used to capture and store most of the gas instead of releasing it to the atmosphere.[333] This technology, carbon capture and storage (CCS), could have a critical but limited role in reducing emissions.[333] It is relatively expensive[334] and has been deployed only to an extent that removes around 0.1% of annual greenhouse gas emissions.[333]
Natural carbon sinks can be enhanced to sequester significantly larger amounts of CO2 beyond naturally occurring levels.[335] Reforestation and afforestation (planting forests where there were none before) are among the most mature sequestration techniques, although the latter raises food security concerns.[336] Farmers can promote sequestration of carbon in soils through practices such as use of winter cover crops, reducing the intensity and frequency of tillage, and using compost and manure as soil amendments.[337] Forest and landscape restoration yields many benefits for the climate, including greenhouse gas emissions sequestration and reduction.[132] Restoration or recreation of coastal wetlands, prairie plots and seagrass meadows increases the uptake of carbon into organic matter.[338][339] When carbon is sequestered in soils and in organic matter such as trees, there is a risk of the carbon being re-released into the atmosphere later through changes in land use, fire, or other changes in ecosystems.[340]
The use of bioenergy in conjunction with carbon capture and storage (BECCS) can result in net negative emissions as CO2 is drawn from the atmosphere.[341] It remains highly uncertain whether carbon dioxide removal techniques will be able to play a large role in limiting warming to 1.5 °C. Policy decisions that rely on carbon dioxide removal increase the risk of global warming rising beyond international goals.[342]
Adaptation is "the process of adjustment to current or expected changes in climate and its effects".[343]: 5 Without additional mitigation, adaptation cannot avert the risk of "severe, widespread and irreversible" impacts.[344] More severe climate change requires more transformative adaptation, which can be prohibitively expensive.[345] The capacity and potential for humans to adapt is unevenly distributed across different regions and populations, and developing countries generally have less.[346] The first two decades of the 21st century saw an increase in adaptive capacity in most low- and middle-income countries with improved access to basic sanitation and electricity, but progress is slow. Many countries have implemented adaptation policies. However, there is a considerable gap between necessary and available finance.[347]
Adaptation to sea level rise consists of avoiding at-risk areas, learning to live with increased flooding, and building flood controls. If that fails, managed retreat may be needed.[348] There are economic barriers to tackling dangerous heat impacts. Avoiding strenuous work or having air conditioning is not possible for everybody.[349] In agriculture, adaptation options include a switch to more sustainable diets, diversification, erosion control, and genetic improvements for increased tolerance to a changing climate.[350] Insurance allows for risk-sharing, but is often difficult to get for people on lower incomes.[351] Education, migration and early warning systems can reduce climate vulnerability.[352] Planting mangroves or encouraging other coastal vegetation can buffer storms.[353][354]
Ecosystems adapt to climate change, a process that can be supported by human intervention. By increasing connectivity between ecosystems, species can migrate to more favourable climate conditions. Species can also be introduced to areas acquiring a favourable climate. Protection and restoration of natural and semi-natural areas helps build resilience, making it easier for ecosystems to adapt. Many of the actions that promote adaptation in ecosystems also help humans adapt via ecosystem-based adaptation. For instance, restoration of natural fire regimes makes catastrophic fires less likely and reduces human exposure. Giving rivers more space allows for more water storage in the natural system, reducing flood risk. Restored forest acts as a carbon sink, but planting trees in unsuitable regions can exacerbate climate impacts.[355]
There are synergies but also trade-offs between adaptation and mitigation.[356] An example of a synergy is increased food productivity, which has large benefits for both adaptation and mitigation.[357] An example of a trade-off is that increased use of air conditioning allows people to better cope with heat, but increases energy demand. Another trade-off example is that more compact urban development may reduce emissions from transport and construction, but may also increase the urban heat island effect, exposing people to heat-related health risks.[358]
Countries that are most vulnerable to climate change have typically been responsible for a small share of global emissions. This raises questions about justice and fairness.[359] Limiting global warming makes it much easier to achieve the UN's Sustainable Development Goals, such as eradicating poverty and reducing inequalities. The connection is recognized in Sustainable Development Goal 13, which is to "take urgent action to combat climate change and its impacts".[360] The goals on food, clean water and ecosystem protection have synergies with climate mitigation.[361]
The geopolitics of climate change is complex. It has often been framed as a free-rider problem, in which all countries benefit from mitigation done by other countries, but individual countries would lose from switching to a low-carbon economy themselves. Mitigation can also have localized benefits, however. For instance, the benefits of a coal phase-out to public health and local environments exceed the costs in almost all regions.[362] Furthermore, net importers of fossil fuels win economically from switching to clean energy, causing net exporters to face stranded assets: fossil fuels they cannot sell.[363]
A wide range of policies, regulations, and laws are being used to reduce emissions. As of 2019, carbon pricing covers about 20% of global greenhouse gas emissions.[364] Carbon can be priced with carbon taxes and emissions trading systems.[365] Direct global fossil fuel subsidies reached $319 billion in 2017, and $5.2 trillion when indirect costs such as air pollution are priced in.[366] Ending these could yield a 28% reduction in global carbon emissions and a 46% reduction in air pollution deaths.[367] Money saved on fossil subsidies could be used to support the transition to clean energy instead.[368] More direct methods to reduce greenhouse gases include vehicle efficiency standards, renewable fuel standards, and air pollution regulations on heavy industry.[369] Several countries require utilities to increase the share of renewables in power production.[370]
Policy designed through the lens of climate justice tries to address human rights issues and social inequality. According to proponents of climate justice, the costs of climate adaptation should be paid by those most responsible for climate change, while the beneficiaries of payments should be those suffering impacts. One way this can be addressed in practice is to have wealthy nations pay poorer countries to adapt.[371]
Oxfam found that in 2023 the wealthiest 10% of people were responsible for 50% of global emissions, while the bottom 50% were responsible for just 8%.[372] Production of emissions is another way to look at responsibility: under that approach, the top 21 fossil fuel companies would owe cumulative climate reparations of $5.4 trillion over the period 2025–2050.[373] To achieve a just transition, people working in the fossil fuel sector would also need other jobs, and their communities would need investments.[374]
Nearly all countries in the world are parties to the 1994 United Nations Framework Convention on Climate Change (UNFCCC).[376] The goal of the UNFCCC is to prevent dangerous human interference with the climate system.[377] As stated in the convention, this requires that greenhouse gas concentrations are stabilized in the atmosphere at a level where ecosystems can adapt naturally to climate change, food production is not threatened, and economic development can be sustained.[378] The UNFCCC does not itself restrict emissions but rather provides a framework for protocols that do. Global emissions have risen since the UNFCCC was signed.[379] Its yearly conferences are the stage for global negotiations.[380]
The 1997 Kyoto Protocol extended the UNFCCC and included legally binding commitments for most developed countries to limit their emissions.[381] During the negotiations, the G77 (representing developing countries) pushed for a mandate requiring developed countries to "[take] the lead" in reducing their emissions,[382] since developed countries contributed most to the accumulation of greenhouse gases in the atmosphere. Per-capita emissions were also still relatively low in developing countries, which would need to emit more to meet their development needs.[383]
The 2009 Copenhagen Accord has been widely portrayed as disappointing because of its low goals, and was rejected by poorer nations including the G77.[384] Associated parties aimed to limit the global temperature rise to below 2 °C.[385] The Accord set the goal of sending $100 billion per year to developing countries for mitigation and adaptation by 2020, and proposed the founding of the Green Climate Fund.[386] As of 2020, only $83.3 billion had been delivered, and the target was not expected to be met until 2023.[387]
In 2015 all UN countries negotiated the Paris Agreement, which aims to keep global warming well below 2.0 °C and contains an aspirational goal of keeping warming under 1.5 °C.[388] The agreement replaced the Kyoto Protocol. Unlike Kyoto, no binding emission targets were set in the Paris Agreement. Instead, a set of procedures was made binding: countries have to regularly set ever more ambitious goals and reevaluate them every five years.[389] The Paris Agreement restated that developing countries must be financially supported.[390] As of March 2025, 194 states and the European Union have acceded to or ratified the agreement.[391]
The 1987 Montreal Protocol, an international agreement to phase out production of ozone-depleting gases, has had benefits for climate change mitigation.[392] Several ozone-depleting gases like chlorofluorocarbons are powerful greenhouse gases, so banning their production and usage may have avoided a temperature rise of 0.5 °C–1.0 °C,[393] as well as additional warming by preventing damage to vegetation from ultraviolet radiation.[394] It is estimated that the agreement has been more effective at curbing greenhouse gas emissions than the Kyoto Protocol, which was specifically designed to do so.[395] The most recent amendment to the Montreal Protocol, the 2016 Kigali Amendment, committed to reducing the emissions of hydrofluorocarbons, which served as a replacement for banned ozone-depleting gases and are also potent greenhouse gases.[396] Should countries comply with the amendment, a warming of 0.3 °C–0.5 °C is estimated to be avoided.[397]
In 2019, the United Kingdom parliament became the first national legislature to declare a climate emergency.[399] Other countries and jurisdictions followed suit.[400] That same year, the European Parliament declared a "climate and environmental emergency".[401] The European Commission presented its European Green Deal with the goal of making the EU carbon-neutral by 2050.[402] In 2021, the European Commission released its "Fit for 55" legislation package, which contains guidelines for the car industry; all new cars on the European market must be zero-emission vehicles from 2035.[403]
Major countries in Asia have made similar pledges: South Korea and Japan have committed to become carbon-neutral by 2050, and China by 2060.[404] While India has strong incentives for renewables, it also plans a significant expansion of coal in the country.[405] Vietnam is among the very few coal-dependent, fast-developing countries that have pledged to phase out unabated coal power by the 2040s or as soon as possible thereafter.[406]
As of 2021, based on information from 48 national climate plans, which represent 40% of the parties to the Paris Agreement, estimated total greenhouse gas emissions will be only 0.5% lower than 2010 levels, far short of the 45% or 25% reductions needed to limit global warming to 1.5 °C or 2 °C, respectively.[407]
Public debate about climate change has been strongly affected by climate change denial and misinformation, which originated in the United States and has since spread to other countries, particularly Canada and Australia. Climate change denial has originated from fossil fuel companies, industry groups, conservative think tanks, and contrarian scientists.[409] Like the tobacco industry, the main strategy of these groups has been to manufacture doubt about climate-change related scientific data and results.[410] People who hold unwarranted doubt about climate change are called climate change "skeptics", although "contrarians" or "deniers" are more appropriate terms.[411]
There are different variants of climate denial: some deny that warming takes place at all, some acknowledge warming but attribute it to natural influences, and some minimize the negative impacts of climate change.[412] Manufacturing uncertainty about the science later developed into a manufactured controversy: creating the belief that there is significant uncertainty about climate change within the scientific community in order to delay policy changes.[413] Strategies to promote these ideas include criticism of scientific institutions[414] and questioning the motives of individual scientists.[412] An echo chamber of climate-denying blogs and media has further fomented misunderstanding of climate change.[415]
Climate change came to international public attention in the late 1980s.[419] Due to media coverage in the early 1990s, people often confused climate change with other environmental issues like ozone depletion.[420] In popular culture, the climate fiction movie The Day After Tomorrow (2004) and the Al Gore documentary An Inconvenient Truth (2006) focused on climate change.[419]
Significant regional, gender, age and political differences exist in both public concern for, and understanding of, climate change. More highly educated people, and in some countries women and younger people, were more likely to see climate change as a serious threat.[421] College biology textbooks from the 2010s featured less content on climate change than those from the preceding decade, with decreasing emphasis on solutions.[422] Partisan gaps also exist in many countries,[423] and countries with high CO2 emissions tend to be less concerned.[424] Views on the causes of climate change vary widely between countries.[425] Media coverage linked to protests has had impacts on public sentiment as well as on which aspects of climate change are focused upon.[426] Higher levels of worry are associated with stronger public support for policies that address climate change.[427] Concern has increased over time,[428] and in 2021 a majority of citizens in 30 countries expressed a high level of worry about climate change, or viewed it as a global emergency.[429] A 2024 survey across 125 countries found that 89% of the global population demanded intensified political action, but systematically underestimated other people's willingness to act.[26][27]
Climate protests demand that political leaders take action to prevent climate change. They can take the form of public demonstrations, fossil fuel divestment, lawsuits and other activities.[430][431] Prominent demonstrations include the School Strike for Climate, in which young people across the globe have been protesting since 2018 by skipping school on Fridays, inspired by Swedish activist and then-teenager Greta Thunberg.[432] Mass civil disobedience actions by groups like Extinction Rebellion have protested by disrupting roads and public transport.[433]
Litigation is increasingly used as a tool to strengthen climate action from public institutions and companies. Activists also initiate lawsuits which target governments and demand that they take ambitious action or enforce existing laws on climate change.[434] Lawsuits against fossil-fuel companies generally seek compensation for loss and damage.[435]
Scientists in the 19th century such as Alexander von Humboldt began to foresee the effects of climate change.[437][438][439][440] In the 1820s, Joseph Fourier proposed the greenhouse effect to explain why Earth's temperature was higher than the Sun's energy alone could explain. Earth's atmosphere is transparent to sunlight, so sunlight reaches the surface where it is converted to heat. However, the atmosphere is not transparent to heat radiating from the surface, and captures some of that heat, which in turn warms the planet.[441]
In 1856 Eunice Newton Foote demonstrated that the warming effect of the Sun is greater for air with water vapour than for dry air, and that the effect is even greater with carbon dioxide (CO2). She concluded that "An atmosphere of that gas would give to our earth a high temperature..."[442][443]
Starting in 1859,[444] John Tyndall established that nitrogen and oxygen—together totalling 99% of dry air—are transparent to radiated heat. However, water vapour and gases such as methane and carbon dioxide absorb radiated heat and re-radiate that heat into the atmosphere. Tyndall proposed that changes in the concentrations of these gases may have caused climatic changes in the past, including ice ages.[445]
Svante Arrhenius noted that water vapour in air continuously varied, but the CO2 concentration in air was influenced by long-term geological processes. Warming from increased CO2 levels would increase the amount of water vapour, amplifying warming in a positive feedback loop. In 1896, he published the first climate model of its kind, projecting that halving CO2 levels could have produced a drop in temperature initiating an ice age. Arrhenius calculated the temperature increase expected from doubling CO2 to be around 5–6 °C.[446] Other scientists were initially sceptical and believed that the greenhouse effect was saturated, so that adding more CO2 would make no difference and the climate would be self-regulating.[447] Beginning in 1938, Guy Stewart Callendar published evidence that climate was warming and CO2 levels were rising,[448] but his calculations met the same objections.[447]
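Arrhenius's logarithmic dependence of warming on CO2 concentration survives in modern back-of-the-envelope estimates. As a hedged sketch (the 5.35 W m⁻² coefficient is the commonly cited modern approximation for CO2 radiative forcing, and the climate sensitivity parameter λ is an assumed round value, not Arrhenius's own figures):

```latex
\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},
\qquad
\Delta T \approx \lambda \,\Delta F
```

For a doubling of CO2 (C/C0 = 2), ΔF ≈ 5.35 × ln 2 ≈ 3.7 W m⁻²; taking λ ≈ 0.8 K per W m⁻² gives ΔT ≈ 3 °C, lower than Arrhenius's 5–6 °C but within the 1.5–4.5 °C range later supported by the 1979 Charney Report.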
In the 1950s, Gilbert Plass created a detailed computer model that included different atmospheric layers and the infrared spectrum. This model predicted that increasing CO2 levels would cause warming. Around the same time, Hans Suess found evidence that CO2 levels had been rising, and Roger Revelle showed that the oceans would not absorb the increase. The two scientists subsequently helped Charles Keeling to begin a record of continued increase—the "Keeling Curve"[447]—which was part of continued scientific investigation through the 1960s into possible human causation of global warming.[453] Studies such as the National Research Council's 1979 Charney Report supported the accuracy of climate models that forecast significant warming.[454] Human causation of observed global warming and the dangers of unmitigated warming were publicly presented in James Hansen's 1988 testimony before a US Senate committee.[455][37] The Intergovernmental Panel on Climate Change (IPCC), set up in 1988 to provide formal advice to the world's governments, spurred interdisciplinary research.[456] As part of the IPCC reports, scientists assess the scientific discussion that takes place in peer-reviewed journal articles.[457]
There is a near-complete scientific consensus that the climate is warming and that this is caused by human activities. As of 2019, agreement in recent literature reached over 99%.[450][451] No scientific body of national or international standing disagrees with this view.[458] Consensus has further developed that some form of action should be taken to protect people against the impacts of climate change. National science academies have called on world leaders to cut global emissions.[459] The 2021 IPCC Assessment Report stated that it is "unequivocal" that climate change is caused by humans.[451]
Habitat conservation is a management practice that seeks to conserve, protect and restore habitats and prevent species extinction, fragmentation or reduction in range.[1] It is a priority of many groups that cannot be easily characterized in terms of any one ideology.
For much of human history, nature was seen as a resource that could be controlled by the government and used for personal and economic gain. The idea was that plants only existed to feed animals and animals only existed to feed humans.[2] The value of land was limited to the resources it provided, such as fertile soil, timber, and minerals.
Throughout the 18th and 19th centuries, social views started to change and conservation principles were first practically applied to the forests of British India. The conservation ethic that began to evolve included three core principles: 1) human activities damage the environment, 2) there was a civic duty to maintain the environment for future generations, and 3) scientific, empirically based methods should be applied to ensure this duty was carried out. Sir James Ranald Martin was prominent in promoting this ideology, publishing numerous medico-topographical reports that demonstrated the damage from large-scale deforestation and desiccation, and lobbying extensively for the institutionalization of forest conservation activities in British India through the establishment of Forest Departments.[3]
The Madras Board of Revenue started local conservation efforts in 1842, headed by Alexander Gibson, a professional botanist who systematically adopted a forest conservation program based on scientific principles. This was the first case of state conservation management of forests in the world.[4] Governor-General Lord Dalhousie introduced the first permanent and large-scale forest conservation program in 1855, a model that soon spread to other colonies, as well as to the United States,[5][6][7] where Yellowstone National Park was opened in 1872 as the world's first national park.[8]
Rather than focusing on the economic or material benefits of nature, humans began to appreciate the value of nature itself and the need to protect it.[9] By the mid-20th century, countries such as the United States, Canada, and Britain had enacted laws and legislation to ensure that the most fragile and beautiful environments would be protected for posterity.
Today, with the help of NGOs and governments worldwide, a strong movement is mobilizing with the goal of protecting habitats and preserving biodiversity on a global scale. The commitments and actions of small volunteer associations in villages and towns, which endeavour to emulate the work of well-known conservation organisations, are paramount in ensuring that the generations that follow understand the importance of natural resource conservation.
Natural habitats can provide ecosystem services to humans, which are "any positive benefit that wildlife or ecosystems provide to people."[10] The natural environment is a source for a wide range of resources that can be exploited for economic profit; for example, timber is harvested from forests and clean water is obtained from natural streams. However, land development from anthropogenic economic growth often causes a decline in the ecological integrity of nearby natural habitat. For instance, this was an issue in the northern Rocky Mountains of the US.[11]
However, there is also economic value in conserving natural habitats. Financial profit can be made from tourist revenue, for example in the tropics where species diversity is high, or from recreational sports which take place in natural environments such as hiking and mountain biking. The cost of repairing damaged ecosystems is considered to be much higher than the cost of conserving natural ecosystems.[12]
Measuring the worth of conserving different habitat areas is often criticized as being too utilitarian from a philosophical point of view.[13]
Habitat conservation is important in maintaining biodiversity, which refers to the variability in populations, organisms, and gene pools, as well as habitats and ecosystems.[14] Biodiversity is also an essential part of global food security. There is evidence to support a trend of accelerating erosion of the genetic resources of agricultural plants and animals.[15] An increase in the genetic similarity of agricultural plants and animals means an increased risk of food loss from major epidemics. Wild species of agricultural plants have been found to be more resistant to disease; for example, the wild corn species teosinte is resistant to four corn diseases that affect cultivated crops.[16] A combination of seed banking and habitat conservation has been proposed to maintain plant diversity for food security purposes.[17] It has been shown that focusing conservation efforts on ecosystems "within multiple trophic levels" can lead to a better functioning ecosystem with more biomass.[18]
Pearce and Moran outlined the following method for classifying environmental uses:[19]
Habitat loss and destruction can occur both naturally and through anthropogenic causes. Events leading to natural habitat loss include climate change, catastrophic events such as volcanic explosions, and the interactions of invasive and non-invasive species. Natural climate change events have previously been the cause of many widespread and large-scale losses in habitat. For example, some of the mass extinction events generally referred to as the "Big Five" coincided with large-scale climatic shifts, such as the Earth entering an ice age, or alternate warming events.[20] Other events among the Big Five also have their roots in natural causes, such as volcanic explosions and meteor collisions.[21][22] The Chicxulub impact is one such example, which caused widespread losses in habitat as the Earth either received less sunlight or grew colder, causing certain fauna and flora to flourish whilst others perished. Previously warm areas in the tropics, the most sensitive habitats on Earth, grew colder, and areas such as Australia developed radically different flora and fauna from those seen today. The Big Five mass extinction events have also been linked to sea level changes, indicating that large-scale marine species loss was strongly influenced by loss of marine habitats, particularly shelf habitats.[23] Methane-driven oceanic eruptions have also been shown to have caused smaller mass extinction events.[24]
Humans have been the cause of many species' extinctions. Because humans change and modify their environment, the habitats of other species often become altered or destroyed as a result of human actions.[25] The altering of habitats causes habitat fragmentation, reducing a species' habitat and decreasing its dispersal range. This increases species isolation, which in turn causes populations to decline.[25] Even before the modern industrial era, humans were having widespread and major effects on the environment. A good example of this is found in Aboriginal Australians and Australian megafauna.[26] Aboriginal hunting practices, which included burning large sections of forest at a time, eventually altered and changed Australia's vegetation so much that many herbivorous megafauna species were left with no habitat and were driven into extinction. Once herbivorous megafauna species became extinct, carnivorous megafauna species soon followed.
In the recent past, humans have been responsible for causing more extinctions within a given period of time than ever before. Deforestation, pollution, anthropogenic climate change and human settlements have all been driving forces in altering or destroying habitats.[27] The destruction of ecosystems such as rainforests has resulted in countless habitats being destroyed. These biodiversity hotspots are home to millions of habitat specialists, which do not exist beyond a tiny area.[28] Once their habitat is destroyed, they cease to exist. This destruction has a follow-on effect, as species which coexist with or depend upon the existence of other species also become extinct, eventually resulting in the collapse of an entire ecosystem.[29][30] These time-delayed extinctions are referred to as the extinction debt, which is the result of destroying and fragmenting habitats.
As a result of anthropogenic modification of the environment, the extinction rate has climbed to the point where the Earth is now within a sixth mass extinction event, as commonly agreed by biologists.[31] This has been particularly evident, for example, in the rapid decline in the number of amphibian species worldwide.[32]
Adaptive management addresses the challenge of scientific uncertainty in habitat conservation plans by systematically gathering and applying reliable information to enhance conservation strategies over time. This approach allows for adjustments in management practices based on new insights, making conservation efforts more effective.[33] Determining the size, type and location of habitat to conserve is a complex area of conservation biology. Although difficult to measure and predict, the conservation value of a habitat is often a reflection of its quality (e.g. species abundance and diversity), the endangerment of encompassing ecosystems, and the spatial distribution of that habitat.[34]
Habitat restoration is a subset of habitat conservation, and its goals include improving the habitat and resources of anywhere from one species to several species.[35] The Society for Ecological Restoration's International Science and Policy Working Group defines restoration as "the process of assisting the recovery of an ecosystem that has been degraded, damaged, or destroyed."[36] The scale of habitat restoration efforts can range from small to large areas of land depending on the goal of the project.[37] Elements of habitat restoration include developing a plan, embedding goals within that plan, and monitoring and evaluating species.[38] Considerations such as the species type, environment, and context are aspects of planning a habitat restoration project.[37] Efforts to restore habitats that have been altered by anthropogenic activities have become a global endeavor, and are used to counteract the effects of habitat destruction by humans.[39][40] Miller and Hobbs state three constraints on restoration: "ecological, economic, and social" constraints.[37] Habitat restoration projects include Marine Debris Mitigation for Navassa Island National Wildlife Refuge in Haiti and Lemon Bay Preserve Habitat Restoration in Florida.[41]
Habitat conservation is vital for protecting species and ecological processes: it is important to conserve and protect the space or area that a species occupies.[42] Therefore, areas classified as biodiversity hotspots, or those inhabited by a flagship, umbrella, or endangered species, are often given precedence over others. Species that possess an elevated risk of extinction are given the highest priority, and as a result of conserving their habitat, other species in that community are protected, thus serving as an element of gap analysis. In the United States, a Habitat Conservation Plan (HCP) is often developed to conserve the environment in which a specific species lives. Under the U.S. Endangered Species Act (ESA), the habitat that requires protection in an HCP is referred to as the "critical habitat". Multiple-species HCPs are becoming more favourable than single-species HCPs, as they can potentially protect an array of species before they warrant listing under the ESA, as well as conserve broad ecosystem components and processes. As of January 2007, 484 HCPs were permitted across the United States, 40 of which covered 10 or more species. The San Diego Multiple Species Conservation Plan (MSCP) encompasses 85 species in a total area of 26,000 km2. Its aim is to protect the habitats of multiple species and overall biodiversity by minimizing development in sensitive areas.
HCPs require clearly defined goals and objectives, efficient monitoring programs, as well as successful communication and collaboration with stakeholders and land owners in the area. Reserve design is also important and requires a high level of planning and management in order to achieve the goals of the HCP. Successful reserve design often takes the form of a hierarchical system with the most valued habitats requiring high protection being surrounded by buffer habitats that have a lower protection status. Like HCPs, hierarchical reserve design is a method most often used to protect a single species, and as a result habitat corridors are maintained, edge effects are reduced and a broader suite of species are protected.
A range of methods and models currently exist that can be used to determine how much habitat is to be conserved in order to sustain a viable population, including Resource Selection Function and Step Selection models. Modelling tools often rely on the spatial scale of the area as an indicator of conservation value. There has been an increased emphasis on conserving a few large areas of habitat as opposed to many small areas. This idea is often referred to as the "single large or several small" (SLOSS) debate, and is a highly controversial area among conservation biologists and ecologists. The reasons behind the argument that "larger is better" include the reduction in the negative impacts of patch edge effects, the general idea that species richness increases with habitat area, and the ability of larger habitats to support greater populations with lower extinction probabilities. Noss & Cooperrider support the "larger is better" claim and developed a model that implies areas of habitat less than 1,000 ha are "tiny" and of low conservation value.[43] However, Shwartz suggests that although "larger is better", this does not imply that "small is bad". Shwartz argues that human-induced habitat loss leaves no alternative to conserving small areas. Furthermore, he suggests that many endangered species of high conservation value may be restricted to small isolated patches of habitat, and would thus be overlooked if larger areas were given higher priority. The shift to conserving larger areas is somewhat justified in society by placing more value on larger vertebrate species, which naturally have larger habitat requirements.
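The "larger is better" argument leans on the classic species–area relationship, S = cA^z, in which species richness S grows with habitat area A. The sketch below illustrates the arithmetic behind the SLOSS trade-off; the constants c and z are illustrative assumptions, not parameters drawn from the studies cited above.

```python
# Hedged illustration of the species-area relationship S = c * A^z.
# c (baseline richness) and z (scaling exponent) are assumed values
# chosen only for demonstration; real estimates vary by taxon and region.

def species_richness(area_ha: float, c: float = 10.0, z: float = 0.25) -> float:
    """Estimate species richness S = c * A^z for a habitat of a given area."""
    return c * area_ha ** z

# One large 1000-ha reserve vs. a single 100-ha patch: the larger patch
# supports more species per patch (about 56 vs. about 32 with these
# constants), but several small patches of differing composition may
# still capture more total diversity -- the crux of the SLOSS debate.
single_large = species_richness(1000)
one_small = species_richness(100)
```

The sublinear exponent (z < 1) is why ten separate 100-ha patches do not simply add up to ten times the richness of one patch, which is the quantitative core of the "single large or several small" disagreement.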
Since its formation in 1951, The Nature Conservancy has slowly developed into one of the world's largest conservation organizations. Currently operating in over 30 countries across five continents, The Nature Conservancy aims to protect nature and its assets for future generations.[44] The organization purchases land or accepts land donations with the intention of conserving its natural resources. In 1955 The Nature Conservancy purchased its first 60-acre plot near the New York/Connecticut border in the United States. Today the Conservancy has expanded to protect over 119 million acres of land and 5,000 river miles, as well as participating in over 1,000 marine protection programs across the globe.
Since its beginnings, The Nature Conservancy has understood the benefit of taking a scientific approach to habitat conservation. For the last decade the organization has been using a collaborative, scientific method known as "Conservation by Design." By collecting and analyzing scientific data, the Conservancy is able to holistically approach the protection of various ecosystems. This process determines the habitats that need protection and the specific elements that should be conserved, as well as monitoring progress so more efficient practices can be developed for the future.[45]
The Nature Conservancy currently has a large number of diverse projects in operation. It works with countries around the world to protect forests, river systems, oceans, deserts and grasslands. In all cases the aim is to provide a sustainable environment for the plant and animal life forms that depend on them, as well as for future generations.[46]
The World Wildlife Fund (WWF) was formed in 1961 after a group of passionate conservationists signed what is now referred to as the Morges Manifesto.[47] WWF currently operates in over 100 countries across five continents, with over 5 million supporters.
One of the first projects of WWF was assisting in the creation of the Charles Darwin Research Foundation, which aided in the protection of the diverse range of unique species on the Galápagos Islands, Ecuador. A WWF grant also helped with the formation of the College of African Wildlife Management in Tanzania, which today teaches a wide range of protected-area management skills in areas such as ecology, range management and law enforcement.[48] The WWF has since gone on to aid in the protection of land in Spain, creating the Coto Doñana National Park to conserve migratory birds, and in the Democratic Republic of the Congo, home to the world's largest protected wetlands. The WWF also initiated the debt-for-nature concept, which allows a country to put funds normally allocated to paying off national debt into conservation programs that protect its natural landscapes. Participating countries include Madagascar, the first to take part, which since 1989 has generated over US$50 million for preservation, as well as Bolivia, Costa Rica, Ecuador, Gabon, the Philippines and Zambia.
Rare has been in operation since 1973, with global partners in over 50 countries and offices in the United States, Mexico, the Philippines, China and Indonesia. Rare focuses on the human activities that threaten biodiversity and habitats, such as overfishing and unsustainable agriculture. By engaging local communities and changing behaviour, Rare has been able to launch campaigns to protect areas most in need of conservation.[49] The key aspect of Rare's methodology is its "Pride Campaigns". In the Andes in South America, for example, Rare has created incentives for communities to develop watershed protection practices. In Southeast Asia's "coral triangle", Rare is training fishers in local communities to better manage the areas around coral reefs in order to lessen human impact.[50] Such programs last for three years, with the aim of changing community attitudes so as to conserve fragile habitats and provide ecological protection for years to come.
WWF Netherlands, along with ARK Nature, Wild Wonders of Europe, and Conservation Capital, has started the Rewilding Europe project, which intends to rewild several areas in Europe.[51]
|
https://en.wikipedia.org/wiki/Habitat_conservation
|
In agriculture, holistic management (from ὅλος holos, a Greek word meaning "all, whole, entire, total") is an approach to managing resources that was originally developed by Allan Savory[1] for grazing management.[2][better source needed] Holistic management has been likened to "a permaculture approach to rangeland management".[3] Holistic management is a registered trademark of Holistic Management International (no longer associated with Allan Savory). It has faced criticism from many researchers who argue it is unable to provide the benefits claimed.[4][5]
"Holistic management" describes a systems thinking approach to managing resources. Originally developed by Allan Savory, it is now being adapted for use in managing other systems with complex social, ecological and economic factors.
Holistic planned grazing is similar to rotational grazing but differs in that it more explicitly recognizes, and provides a framework for adapting to, the four basic ecosystem processes: the water cycle,[6][7] the mineral cycle (including the carbon cycle),[8][9] energy flow, and community dynamics (the relationships between organisms in an ecosystem),[10] giving equal importance to livestock production and social welfare. Holistic Management has been likened to "a permaculture approach to rangeland management".[3]
The Holistic Management decision-making framework uses six key steps to guide the management of resources.[11][1]
Savory stated four key principles of Holistic Management planned grazing, intended to take advantage of the symbiotic relationship between large herds of grazing animals and the grasslands that support them.[12]
The idea of holistic planned grazing was developed in the 1960s by Allan Savory, a wildlife biologist in his native Southern Rhodesia. Setting out to understand desertification in the context of the larger environmental movement, and influenced by the work of André Voisin,[13][14] he hypothesized that the spread of deserts, the loss of wildlife, and the resulting human impoverishment were related to the reduction of the natural herds of large grazing animals and, even more, to the changed behavior of the few remaining herds.[2] Savory hypothesized further that livestock could be substituted for natural herds to provide important ecosystem services like nutrient cycling.[15][16] However, while livestock managers had found that rotational grazing systems can work for livestock management purposes, scientific experiments demonstrated that they do not necessarily improve ecological problems such as desertification. As Savory saw it, a more comprehensive framework for the management of grassland systems, an adaptive and holistic management plan, was needed. For that reason, Holistic Management has been used as a whole-farm/ranch planning tool.[1] In 1984, he founded the Center for Holistic Resource Management, which became Holistic Management International.[2]
In many regions, pastoralism and communal land use are blamed for environmental degradation caused by overgrazing. After years of research and experience, Savory came to believe this assertion was often wrong, and that sometimes removing animals actually made matters worse.[disputed–discuss] This concept is a variation of the trophic cascade, in which humans are seen as the top-level predator and the cascade follows from there.
Savory developed a management system that he claimed would improve grazing systems. Holistic planned grazing is one of a number of newer grazing management systems that aim to more closely simulate the behavior of natural herds of wildlife; it has been claimed to improve riparian habitats and water quality over systems that often led to land degradation, and to improve range condition for both livestock and wildlife.[6][7][17][18][19]
Savory claims that Holistic Planned Grazing holds potential for mitigating climate change while building soil, increasing biodiversity, and reversing desertification.[20][21] This practice uses fencing and/or herders to restore grasslands.[7][22][23] Carefully planned movements of large herds of livestock mimic the processes of nature, in which grazing animals are kept concentrated by pack predators and forced to move on after eating, trampling, and manuring an area, returning only after it has fully recovered. This grazing method seeks to emulate what occurred during the past 40 million years as the expansion of grass-grazer ecosystems built deep, rich grassland soils, sequestering carbon and consequently cooling the planet.[24]
While originally developed as a tool for rangeland use[22] and restoring desertified land,[25] the Holistic Management system can be applied to other areas with multiple complex socioeconomic and environmental factors. One such example is integrated water resources management, which promotes sector integration in the development and management of water resources to ensure that water is allocated fairly between different users, maximizing economic and social welfare without compromising the sustainability of vital ecosystems.[26][failed verification] Another example is mine reclamation.[27] Holistic Management is also used in certain forms of no-till crop production, intercropping, and permaculture.[3][28][29][30] Holistic Management has been acknowledged[weasel words] by the United States Department of Agriculture.[30][31] The most comprehensive use of Holistic Management is as a whole-farm/ranch planning tool, which has been used by farmers and ranchers. For that reason, the USDA invested six years of Beginning Farmer/Rancher Development funding to use it to train beginning women farmers and ranchers.[3][4]
There are many peer-reviewed studies and journalistic publications that dispute the claims of Holistic Management theory.[5][32][33]
A 2014 review examined five specific ecological assumptions of Holistic Management and found that none were supported by scientific evidence in the Western US.[34] A paper by Richard Teague et al. claims that the various criticisms had examined rotational systems in general and not holistic planned grazing.[35] A meta-analysis of relevant studies between 1972 and 2016 found that Holistic Planned Grazing had no better effect than continuous grazing on plant cover, plant biomass and animal production, although it may have benefited some areas with higher precipitation.[36] Conversely, at least three studies have documented soil improvement, as measured by soil carbon, soil nitrogen, soil biota, water retention, nutrient-holding capacity, and ground litter, on land grazed using multi-pasture methods compared to continuously grazed land.[7][37][38]
There is also evidence that multi-pasture grazing methods may increase water retention compared to non-grazed land.[22] However, George Wuerthner, writing in The Wildlife News in a 2013 article titled "Allan Savory: Myth And Reality", stated, "The few scientific experiments that Savory supporters cite as vindication of his methods (out of hundreds that refute his assertions), often fail to actually test his theories. Several of the studies cited on HM web site had utilization levels (degree of vegetation removed) well below the level that Savory actually recommends."[39]
These critiques have been challenged on the grounds that many studies examined rotational grazing systems in general and not Holistic Management or Holistic Planned Grazing.[35]In addition to a grazing method, Holistic Management involves goal setting, experiential learning and an emphasis on monitoring and adaptive decision-making that have not been captured by many scientific field trials.[33][38]This has been proposed as a reason why many land managers have reported a more positive experience of Holistic Management than scientific studies.[40]However, a 2022 review of 22 “farm-scale” studies, many of which included adaptive management, again found that Holistic Management had no effect on or reduced plant or animal productivity.[40]The same study found that Holistic Management was associated with improved social cohesion and peer-to-peer learning, but concluded that the “social cohesion, learning and networking so prevalent on HM farms could be adopted by any farming community without accepting the unfounded HM rhetoric”.[40]
Savory has also faced criticisms for claiming the carbon sequestration potential of holistic grazing is immune from empirical scientific study.[41]For instance, in 2000, Savory said that "the scientific method never discovers anything" and “the scientific method protects us from cranks like me".[42]A 2017 factsheet authored by Savory stated that “Every study of holistic planned grazing that has been done has provided results that are rejected by range scientists because there was no replication!".[43]TABLE Debates sums this up by saying "Savory argues that standardisation, replication, and therefore experimental testing of HPG [Holistic Planned Grazing] as a whole (rather than just the grazing system associated with it) is not possible, and that therefore, it is incapable of study by experimental science", but "he does not explain how HPG can make causal knowledge claims with regards to combating desertification and climate mitigation, without recourse to science demonstrating such connections."[41]
There is a less developed evidence base comparing Holistic Management with the absence of livestock on grasslands. Several peer-reviewed studies have found that excluding livestock completely from semi-arid grasslands can lead to significant recovery of vegetation and soil carbon sequestration.[44][45][46][47][48] A 2021 peer-reviewed paper found that sparsely grazed and natural grasslands account for 80% of the total cumulative carbon sink of the world's grasslands, whereas managed grasslands (i.e. with greater livestock density) have been a net greenhouse gas source over the past decade.[49] A 2011 study found that multi-paddock grazing of the type endorsed by Savory resulted in more soil carbon sequestration than heavy continuous grazing, but very slightly less than grazing exclosure (excluding grazing livestock from land).[7] Another peer-reviewed paper found that if current pastureland were restored to its former state as wild grasslands, shrublands, and sparse savannas without livestock, this could store an estimated 15.2–59.9 Gt of additional carbon.[50]
In 2013 the Savory Institute published a response to some of its critics.[51] The same month, Savory was a guest speaker with TED and gave a presentation titled "How to Fight Desertification and Reverse Climate Change".[52][53] In his TED Talk, Savory claimed that holistic grazing could reduce carbon dioxide levels to pre-industrial levels in a span of 40 years, solving the problems caused by climate change. Commenting on his TED talk, Savory has since denied claiming that holistic grazing can reverse climate change, saying that "I have only used the words address climate change… although I have written and talked about reversing man-made desertification".[41]
RealClimate.org published a piece saying that Savory's claims that his technique can bring atmospheric carbon "back to pre-industrial levels" are "simply not reasonable."[54][55] According to Skeptical Science, "it is not possible to increase productivity, increase numbers of cattle and store carbon using any grazing strategy, never-mind Holistic Management [...] Long term studies on the effect of grazing on soil carbon storage have been done before, and the results are not promising. [...] Because of the complex nature of carbon storage in soils, increasing global temperature, risk of desertification and methane emissions from livestock, it is unlikely that Holistic Management, or any management technique, can reverse climate change."[56]
According to a 2016 study published by the Swedish University of Agricultural Sciences, the actual rate at which improved grazing management could contribute to carbon sequestration is seven times lower than the claims made by Savory. The study concludes that Holistic Management cannot reverse climate change.[55] A 2017 study by the Food and Climate Research Network concluded that Savory's claims about carbon sequestration are "unrealistic" and very different from those issued by peer-reviewed studies.[57] On the basis of a meta-study of the scientific literature, the FCRN study estimates that the total global soil carbon sequestration potential from grazing management ranges from 0.3–0.8 Gt CO2eq per year, equivalent to offsetting at most 4–11% of current total global livestock emissions, and that "Expansion or intensification in the grazing sector as an approach to sequestering more carbon would lead to substantial increases in methane, nitrous oxide and land use change-induced CO2 emissions".[57] Project Drawdown estimates the total carbon sequestration potential of improved managed grazing at 13.72–20.92 gigatons CO2eq between 2020 and 2050, equal to 0.46–0.70 Gt CO2eq per year.[58] A 2022 peer-reviewed paper estimated the carbon sequestration potential of improved grazing management at a similar level of 0.15–0.70 Gt CO2eq per year.[59]
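As a quick arithmetic check, the cumulative Project Drawdown estimate quoted above converts to the stated annual rate when spread over the 2020–2050 window:

```python
# Project Drawdown's cumulative estimate for improved managed grazing
# (figures from the text): 13.72-20.92 Gt CO2eq over 2020-2050.
low_total, high_total = 13.72, 20.92
years = 2050 - 2020  # a 30-year window

low_annual = low_total / years
high_annual = high_total / years

print(f"{low_annual:.2f}-{high_annual:.2f} Gt CO2eq per year")
# -> 0.46-0.70, matching the per-year range quoted in the text
```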
Savory received the 2003 Banksia International Award,[60] and in 2010 Operation Hope, a "proof of concept" project using Holistic Management at the Africa Centre for Holistic Management in Zimbabwe, was named the winner of the 2010 Buckminster Fuller Challenge, which recognizes "initiatives which take a comprehensive, anticipatory, design approach to radically advance human well being and the health of our planet's ecosystems".[21][61] In addition, numerous Holistic Management practitioners have received awards for their environmental stewardship through using Holistic Management practices.[5]
|
https://en.wikipedia.org/wiki/Holistic_management
|
Landscape-scale conservation is a holistic approach to landscape management, aiming to reconcile the competing objectives of nature conservation and economic activities across a given landscape. Landscape-scale conservation may sometimes be attempted because of climate change. It can be seen as an alternative to site-based conservation.
Many global problems such as poverty, food security, climate change, water scarcity, deforestation and biodiversity loss are connected.[2][3] For example, lifting people out of poverty can increase consumption and drive climate change.[4] Expanding agriculture can exacerbate water scarcity and drive habitat loss.[5][6] Proponents of landscape management argue that because these problems are interconnected, coordinated approaches are needed to address them, focussing on how landscapes can generate multiple benefits. For example, a river basin can supply water for towns and agriculture, timber and food crops for people and industry, and habitat for biodiversity; and each one of these users can have impacts on the others.[2][3][7]
Landscapes in general have been recognised as important units for conservation by intergovernmental bodies,[8]government initiatives,[9][10]and research institutes.[11]
Problems with this approach include difficulties in monitoring, and the proliferation of definitions and terms relating to it.[3]
There are many overlapping terms and definitions,[12][13] but many have similar meanings.[3][14] A sustainable landscape, for example, meets "the needs of the present without compromising the ability of future generations to meet their own needs."[2]
Approaching conservation by means of landscapes can be seen as "a conceptual framework whereby stakeholders in a landscape aim to reconcile competing social, economic and environmental objectives". Instead of focussing on a single use of the land, it aims to ensure that the interests of different stakeholders are met.[2]
The starting point for all landscape-scale conservation schemes must be an understanding of the character of the landscape. Landscape character goes beyond aesthetics: it involves understanding how the landscape functions to support communities, cultural heritage, development and the economy, as well as the wildlife and natural resources of the area. Landscape character requires careful assessment according to accepted methodologies, and such assessment will contribute to determining what scale is appropriate in which landscape. "Landscape scale" does not merely mean acting at a bigger scale: it means that conservation is carried out at the correct scale and that it takes into account the human elements of the landscape, both past and present.
The word 'landscape' in English is a loanword from Dutch landschap, introduced in the 1660s, that originally meant a painting. The meaning "tract of land with its distinguishing characteristics" was derived from that in 1886, and the word was then used as a verb as of 1916.[15]
The German geographer Carl Troll coined the German term Landschaftsökologie, thus 'landscape ecology', in 1939.[16] He developed this terminology and many early concepts of landscape ecology as part of his work applying aerial photograph interpretation to studies of interactions between environment, agriculture and vegetation.
In the UK, conservation of landscapes can be said to have begun in 1945 with the publication of the Report to the Government on National Parks in England and Wales. The National Parks and Access to the Countryside Act 1949 introduced the legislation for the creation of Areas of Outstanding Natural Beauty (AONB).[17][18] Northern Ireland has the same system following the adoption of the Amenity Lands (NI) Act 1965.[19] The first of these AONB were designated in 1956, with the last being created in 1995.[20]
The Permanent European Conference for the Study of the Rural Landscape was established in 1957.[21][22] The European Landscape Convention was initiated by the Congress of Regional and Local Authorities of the Council of Europe (CLRAE) in 1994, was adopted by the Committee of Ministers of the Council of Europe in 2000,[23] and came into force in 2004.[24]
The conservation community began to take notice of the science of landscape ecology in the 1980s.[3]
Efforts to develop concepts of landscape management that integrate international social and economic development with biodiversity conservation began in 1992.[3]
Landscape management now exists in multiple iterations and alongside other concepts[3][12][25][14] such as watershed management, landscape ecology[26] and cultural landscapes.[27][28]
The UN Environment Programme stated in 2015 that the landscape approach embodies ecosystem management; UNEP uses the approach in its Ecosystem Management of Productive Landscapes project.[29] The scientific committee of the Convention on Biological Diversity also considers the landscape perspective the most important scale for improving the sustainable use of biodiversity.[8] There are global fora on landscapes.[30][31] During the Livelihoods and Landscapes Strategies programme, the International Union for Conservation of Nature applied this approach to 27 landscapes in 23 different countries worldwide.[32]
Examples of landscape approaches can be global[12][14][33] or continental, for example in Africa,[34] Oceania[35] and Latin America.[36] The European Agricultural Fund for Rural Development plays an important part in funding landscape conservation in Europe.[37]
Some argue landscape management can address the Sustainable Development Goals.[3][38][39] Many of these goals have potential synergies or trade-offs; some therefore argue that addressing them individually may not be effective, and that landscape approaches provide a potential framework to manage them. For example, increasing areas of irrigated agricultural land to end hunger could have adverse impacts on terrestrial ecosystems or sustainable water management.[39] Landscape approaches intend to include different sectors, and thus achieve the multiple objectives of the Sustainable Development Goals – for example, working within the catchment area of a river to enhance agricultural productivity, flood defence, biodiversity and carbon storage.[2]
Climate change and agriculture are intertwined,[40] so the production of food and climate mitigation can both be part of landscape management.[41] The agricultural sector accounts for around 24% of anthropogenic emissions. Unlike other sectors that emit greenhouse gases, agriculture and forestry have the potential to mitigate climate change by reducing or removing greenhouse gas emissions, for example through reforestation and landscape restoration.[42] Advocates of landscape management argue that 'climate-smart agriculture' and REDD+ can draw on landscape management.[41]
Because a large proportion of the biodiversity of Germany was able to invade from the south and east after human activities altered the landscape, maintaining such artificial landscapes is an integral part of nature conservation there.[43] The full name of the main nature conservation law in Germany, the Bundesnaturschutzgesetz, is thus Gesetz über Naturschutz und Landschaftspflege,[44] where Landschaftspflege translates literally to "landscape maintenance".[45] Related concepts are Landschaftsschutz, "landscape protection/conservation",[46] and Landschaftsschutzgebiet, a "nature preserve", or literally a (legally) "protected landscape area".[47] The Deutscher Verband für Landschaftspflege is the main organisation protecting landscapes in Germany; it is an umbrella organisation coordinating the regional landscape protection organisations of the different German states.[48][49] Classically, four methods are used to conserve landscapes:[50][51] maintenance,[49] improvement,[49] protection[49][52] and redevelopment.[52] The marketing of products such as meat from alpine meadows or apple juice from traditional Streuobstwiesen can also be an important factor in conservation.[49] Landscapes are maintained in three ways: biologically, such as by grazing livestock; manually, although this is rare due to the high cost of labour; and, most commonly, mechanically.[51]
Staatsbosbeheer, the Dutch governmental forest service, considers landscape management an important part of managing its lands.[54][55] Landschapsbeheer Nederland is an umbrella organisation which promotes and helps fund the interests of the different provincial landscape management organisations, which between them include 75,000 volunteers and 110,000 hectares of protected nature reserves.[56] Sustainable landscape management is being researched in the Netherlands.[57]
An example of a producer movement managing a multi-functional landscape is the Potato Park in Písac, Peru, where local communities protect the ecological and cultural diversity of the 12,000 ha landscape.[7][27]
In Sweden, the Swedish National Heritage Board, or Riksantikvarieämbetet, is responsible for landscape conservation.[58] Landscape conservation can be studied at the Department of Cultural Conservation (at Dacapo Mariestad) of the University of Gothenburg, in both Swedish and English.[59]
An example of cooperation between very different actors comes from the Doi Mae Salong watershed in northwest Thailand, a Military Reserved Area under the control of the Royal Thai Armed Forces. Reforestation activities led to tension with local hill tribes; in response, an agreement was reached with them on land rights and the use of different parts of the reserve.[60]
Among the leading exponents of UK landscape-scale conservation are the Areas of Outstanding Natural Beauty (AONB), of which there are 49 in the UK. The International Union for Conservation of Nature has categorised these regions as "category 5 protected areas" and in 2005 claimed that the AONB are administered using what the IUCN coined the "protected landscape approach".[1] In Scotland there is a similar system of national scenic areas.[61]
The UK Biodiversity Action Plan protects semi-natural grasslands, among other habitats, which constitute landscapes maintained by low-intensity grazing. Agri-environment schemes reward farmers and land managers financially for maintaining these habitats on registered agricultural land. Each of the four countries in the UK has its own individual scheme.[62]
Studies have been carried out across the UK looking at a much wider range of habitats. In Wales, the Pumlumon Large Area Conservation Project focusses on upland conservation in areas of marginal agriculture and forestry.[63] The North Somerset Levels and Moors Project addresses wetlands.[64]
Landscape approaches have been taken up by governments, for example in the Greater Mekong Subregion project[9][66] and in Indonesia's climate change commitments,[10] and by international research bodies such as the Center for International Forestry Research,[11] which convenes the Global Landscapes Forum.[67]
The Mount Kailash region is where the Indus River, the Karnali River (a major tributary of the Ganges River), the Brahmaputra River and the Sutlej river systems originate. With assistance from the International Centre for Integrated Mountain Development, the three surrounding countries (China, India and Nepal) developed an integrated management approach to the different conservation and development issues within this landscape.[68]
Six countries in West Africa in the Volta River basin, using the 'Mapping Ecosystems Services to Human Well-being' toolkit, apply landscape modelling of alternative scenarios for the riparian buffer to make land-use decisions such as conserving hydrological ecosystem services and meeting national SDG commitments.[69]
In a 2001 article,[70] soon expanded into a book,[71] Sara J. Scherr and Jeffrey McNeely introduced the term "ecoagriculture" to describe their vision of rural development that also advances the environment, claiming that agriculture is the dominant influence on wild species and habitats and pointing to a number of recent and potential future developments they identified as beneficial examples of land use.[70][72] In 2004 they incorporated the non-profit EcoAgriculture Partners[73] to promote this vision, with Scherr as President and CEO and McNeely as an independent governing board member. Scherr and McNeely edited a second book in 2007.[74] Ecoagriculture had three elements in 2003.[71]
In 2012 Scherr coined a new term, integrated landscape management (ILM), to describe her ideas for developing entire regions, not just at a farm or plot level.[72][2] Integrated landscape management is a way of managing sustainable landscapes by bringing together multiple stakeholders with different land use objectives. The integrated approach claims to go beyond other approaches which focus on users of the land independently of each other, despite their needing some of the same resources.[2] It is promoted by the conservation NGOs Worldwide Fund for Nature, Global Canopy Programme, The Nature Conservancy, The Sustainable Trade Initiative, and EcoAgriculture Partners.[2] Promoters claim that integrated landscape management will maximise collaboration in planning, policy development and action regarding the interdependent Sustainable Development Goals.[38] It was defined by four elements in 2013.[75]
By 2016 it had five elements.
The ecosystem approach, promoted by the Convention on Biological Diversity, is a strategy for the integrated ecosystem management of land, water, and living resources for conservation and sustainability.[76]
This approach includes continual learning and adaptive management, including monitoring; the expectation that actions take place at multiple scales; and the expectation that landscapes are multifunctional (e.g. supplying both goods, such as timber and food, and services, such as water and biodiversity protection). There are multiple stakeholders, and it assumes they have a common concern about the landscape, negotiate change with each other, and that their rights and responsibilities are clear or will become clear.[77]
A literature review identified five main barriers to the approach.[3]
|
https://en.wikipedia.org/wiki/Integrated_landscape_management
|
The natural environment or natural world encompasses all biotic and abiotic things occurring naturally, meaning in this case not artificial. The term is most often applied to Earth or some parts of Earth. This environment encompasses the interaction of all living species, climate, weather and natural resources that affect human survival and economic activity.[1] The concept of the natural environment can be distinguished as components.
In contrast to the natural environment is the built environment, where humans have fundamentally transformed landscapes, such as urban settings and agricultural land conversion, and the natural environment is greatly changed into a simplified human environment. Even with acts that seem less extreme, such as building a mud hut or a photovoltaic system in the desert, the modified environment becomes an artificial one. Though many animals build things to provide a better environment for themselves, they are not human, hence beaver dams and the works of mound-building termites are thought of as natural.
People cannot find absolutely natural environments on Earth; naturalness usually varies on a continuum, from 100% natural at one extreme to 0% natural at the other. The massive environmental changes wrought by humanity in the Anthropocene have fundamentally affected all natural environments, including through climate change, biodiversity loss and pollution from plastic and other chemicals in the air and water. More precisely, we can consider the different aspects or components of an environment and see that their degree of naturalness is not uniform.[2] In an agricultural field, for instance, the mineralogic composition of the soil may be similar to that of an undisturbed forest soil, while its structure is quite different.
Earth science generally recognizes four spheres, the lithosphere, the hydrosphere, the atmosphere and the biosphere,[3] corresponding to rocks, water, air and life respectively. Some scientists include, as parts of the spheres of the Earth, the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere, as well as the pedosphere (corresponding to soil) as an active and intermixed sphere. Earth science (also known as geoscience, the geographical sciences or the Earth sciences) is an all-embracing term for the sciences related to the planet Earth.[4] There are four major disciplines in earth sciences, namely geography, geology, geophysics and geodesy. These major disciplines use physics, chemistry, biology, chronology and mathematics to build a qualitative and quantitative understanding of the principal areas or spheres of Earth.
The Earth's crust, or lithosphere, is the outermost solid surface of the planet and is chemically, physically and mechanically different from the underlying mantle. It has been generated largely by igneous processes in which magma cools and solidifies to form solid rock. Beneath the lithosphere lies the mantle, which is heated by the decay of radioactive elements. The mantle, though solid, is in a state of rheic convection. This convection process causes the lithospheric plates to move, albeit slowly; the resulting process is known as plate tectonics. Volcanoes result primarily from the melting of subducted crust material or from rising mantle at mid-ocean ridges and mantle plumes.
Most water is found in various kinds of natural bodies of water.
An ocean is a major body of saline water and a component of the hydrosphere. Approximately 71% of the surface of the Earth (an area of some 362 million square kilometers) is covered by ocean, a continuous body of water that is customarily divided into several principal oceans and smaller seas. More than half of this area is over 3,000 meters (9,800 ft) deep. Average oceanic salinity is around 35 parts per thousand (ppt) (3.5%), and nearly all seawater has a salinity in the range of 30 to 38 ppt. Though generally recognized as several separate oceans, these waters comprise one global, interconnected body of salt water often referred to as the World Ocean or global ocean.[5][6] The deep seabeds make up more than half the Earth's surface and are among the least-modified natural environments. The major oceanic divisions are defined in part by the continents, various archipelagos and other criteria; in descending order of size, they are the Pacific Ocean, the Atlantic Ocean, the Indian Ocean, the Southern Ocean and the Arctic Ocean.
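As a quick sanity check (not from the article itself), the two figures quoted above are mutually consistent if one assumes the standard estimate of roughly 510 million km² for Earth's total surface area; the 510 million km² value is an assumption introduced here for illustration:

```python
# Consistency check: does 71% of Earth's surface match ~362 million km^2?
EARTH_SURFACE_KM2 = 510e6   # assumed total surface area of Earth, km^2
OCEAN_FRACTION = 0.71       # fraction covered by ocean, per the text

ocean_area = EARTH_SURFACE_KM2 * OCEAN_FRACTION
print(f"Ocean area ≈ {ocean_area / 1e6:.0f} million km^2")  # prints "Ocean area ≈ 362 million km^2"
```

The product lands almost exactly on the quoted 362 million km², so the percentage and the absolute area agree.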
A river is a natural watercourse,[7] usually freshwater, flowing toward an ocean, a lake, a sea or another river. A few rivers simply flow into the ground and dry up completely without reaching another body of water.
The water in a river is usually in a channel, made up of a stream bed between banks. In larger rivers there is often also a wider floodplain shaped by waters over-topping the channel. Flood plains may be very wide in relation to the size of the river channel. Rivers are a part of the hydrological cycle. Water within a river is generally collected from precipitation through surface runoff, groundwater recharge, springs and the release of water stored in glaciers and snowpacks.
Small rivers may also be called by several other names, including stream, creek and brook. Their current is confined within a bed and stream banks. Streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity. The study of streams and waterways in general is known as surface hydrology.[8]
A lake (from Latin lacus) is a terrain feature, a body of water that is localized to the bottom of a basin. A body of water is considered a lake when it is inland, is not part of an ocean and is larger and deeper than a pond.[9][10]
Natural lakes on Earth are generally found in mountainous areas, rift zones and areas with ongoing or recent glaciation. Other lakes are found in endorheic basins or along the courses of mature rivers. In some parts of the world there are many lakes because of chaotic drainage patterns left over from the last ice age. All lakes are temporary over geologic time scales, as they will slowly fill in with sediments or spill out of the basin containing them.
A pond is a body of standing water, either natural or human-made, that is usually smaller than a lake. A wide variety of human-made bodies of water are classified as ponds, including water gardens designed for aesthetic ornamentation, fish ponds designed for commercial fish breeding and solar ponds designed to store thermal energy. Ponds and lakes are distinguished from streams by their current speed. While currents in streams are easily observed, ponds and lakes possess thermally driven micro-currents and moderate wind-driven currents. These features distinguish a pond from many other aquatic terrain features, such as stream pools and tide pools.
Humans impact the water in different ways, such as by modifying rivers (through dams and stream channelization), urbanization and deforestation. These impact lake levels, groundwater conditions, water pollution, thermal pollution and marine pollution. Humans modify rivers through direct channel manipulation:[11] we build dams and reservoirs and manipulate the direction of rivers and their paths. Dams usefully create reservoirs and hydroelectric power; however, reservoirs and dams may negatively impact the environment and wildlife. Dams stop fish migration and the movement of organisms downstream. Urbanization affects the environment through deforestation and by changing lake levels, groundwater conditions and so on. Deforestation and urbanization go hand in hand. Deforestation may cause flooding, declining stream flow and changes in riverside vegetation. The vegetation changes because when trees cannot get adequate water they start to deteriorate, leading to a decreased food supply for the wildlife in an area.[11]
The atmosphere of the Earth serves as a key factor in sustaining the planetary ecosystem. The thin layer of gases that envelops the Earth is held in place by the planet's gravity. Dry air consists of 78% nitrogen, 21% oxygen, 1% argon, inert gases and carbon dioxide. The remaining gases are often referred to as trace gases.[13] The atmosphere includes greenhouse gases such as carbon dioxide, methane, nitrous oxide and ozone. Filtered air includes trace amounts of many other chemical compounds. Air also contains a variable amount of water vapor and suspensions of water droplets and ice crystals seen as clouds. Many natural substances may be present in tiny amounts in an unfiltered air sample, including dust, pollen and spores, sea spray, volcanic ash and meteoroids. Various industrial pollutants also may be present, such as chlorine (elemental or in compounds), fluorine compounds, elemental mercury, and sulphur compounds such as sulphur dioxide (SO2).
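One small consequence of the quoted composition, sketched here as an illustration (not from the article): the mole fractions imply the mean molar mass of dry air. The molar masses below are standard values, and the 1% argon figure is taken, per the text, to lump in the trace gases:

```python
# Mean molar mass of dry air implied by the 78/21/1 composition above.
composition = {          # species: (mole fraction, molar mass in g/mol)
    "N2": (0.78, 28.013),
    "O2": (0.21, 31.999),
    "Ar": (0.01, 39.948),
}
mean_molar_mass = sum(frac * mass for frac, mass in composition.values())
print(f"Mean molar mass of dry air ≈ {mean_molar_mass:.2f} g/mol")  # ≈ 28.97 g/mol
```

The result, about 28.97 g/mol, matches the commonly cited value for dry air, which is a useful cross-check on the percentages.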
The ozone layer of the Earth's atmosphere plays an important role in reducing the amount of ultraviolet (UV) radiation that reaches the surface. As DNA is readily damaged by UV light, this serves to protect life at the surface. The atmosphere also retains heat during the night, thereby reducing the daily temperature extremes.
Earth's atmosphere can be divided into five main layers. These layers are mainly determined by whether temperature increases or decreases with altitude. From highest to lowest, these layers are the exosphere, the thermosphere, the mesosphere, the stratosphere and the troposphere.
Within the five principal layers determined by temperature there are several layers determined by other properties.
The dangers of global warming are being increasingly studied by a wide global consortium of scientists.[17] These scientists are increasingly concerned about the potential long-term effects of global warming on our natural environment and on the planet. Of particular concern is how climate change and global warming caused by anthropogenic, or human-made, releases of greenhouse gases, most notably carbon dioxide, can act interactively and have adverse effects upon the planet, its natural environment and humans' existence. It is clear the planet is warming, and warming rapidly. This is due to the greenhouse effect, caused by greenhouse gases, which trap heat inside the Earth's atmosphere: their more complex molecular structure allows them to vibrate, absorbing heat and releasing it back towards the Earth.[18] This warming is also responsible for the loss of natural habitats, which in turn leads to reductions in wildlife populations. The most recent report from the Intergovernmental Panel on Climate Change (the group of the leading climate scientists in the world) concluded that the Earth will warm anywhere from 2.7 to almost 11 degrees Fahrenheit (1.5 to 6 degrees Celsius) between 1990 and 2100.[19] Efforts are increasingly focused on the mitigation of the greenhouse gases that are causing climatic changes and on developing adaptive strategies for global warming, to assist humans and other animal and plant species, ecosystems, regions and nations in adjusting to the effects of global warming. Some examples of recent collaboration to address climate change and global warming include:
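The warming range above is quoted in both Fahrenheit and Celsius; as a brief aside not drawn from the article, a temperature *difference* converts between the scales with only the factor 9/5 (the +32 offset applies to absolute temperatures, not to changes), which is how 1.5–6 °C becomes 2.7 to almost 11 °F:

```python
def delta_c_to_f(delta_c: float) -> float:
    """Convert a temperature CHANGE in degrees Celsius to degrees Fahrenheit.

    Note: no +32 offset, because this is a difference, not an absolute reading.
    """
    return delta_c * 9 / 5

low_c, high_c = 1.5, 6.0  # projected warming range in deg C, per the text
print(delta_c_to_f(low_c), delta_c_to_f(high_c))  # prints "2.7 10.8"
```

The upper bound, 10.8 °F, is what the text rounds to "almost 11 degrees Fahrenheit".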
A profound challenge is to identify natural environmental dynamics, in contrast to environmental changes that fall outside natural variances. A common solution is to adopt a static view that neglects the existence of natural variance. Methodologically, this view can be defended when looking at slowly changing processes and short time series, but the problem arises when fast processes become essential to the object of study.
Climate looks at the statistics of temperature, humidity, atmospheric pressure, wind, rainfall, atmospheric particle count and other meteorological elements in a given region over long periods of time.[23] Weather, on the other hand, is the present condition of these same elements over periods up to two weeks.[23]
Climates can be classified according to the average and typical ranges of different variables, most commonly temperature and precipitation. The most commonly used classification scheme is the one originally developed by Wladimir Köppen. The Thornthwaite system,[24] in use since 1948, uses evapotranspiration as well as temperature and precipitation information to study animal species diversity and the potential impacts of climate changes.[25]
Weather is the set of all phenomena occurring in a given atmospheric area at a given time.[26] Most weather phenomena occur in the troposphere,[27][28] just below the stratosphere. Weather refers, generally, to day-to-day temperature and precipitation activity, whereas climate is the term for the average atmospheric conditions over longer periods of time.[29] When used without qualification, "weather" is understood to be the weather of Earth.
Weather occurs due to density (temperature and moisture) differences between one place and another. These differences can occur due to the sun angle at any particular spot, which varies with latitude from the tropics. The strong temperature contrast between polar and tropical air gives rise to the jet stream. Weather systems in the mid-latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow. Because the Earth's axis is tilted relative to its orbital plane, sunlight is incident at different angles at different times of the year. On the Earth's surface, temperatures usually range ±40 °C (100 °F to −40 °F) annually. Over thousands of years, changes in the Earth's orbit have affected the amount and distribution of solar energy received by the Earth and influenced long-term climate.
Surface temperature differences in turn cause pressure differences. Higher altitudes are cooler than lower altitudes due to differences in compressional heating. Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. The atmosphere is a chaotic system, and small changes to one part of the system can grow to have large effects on the system as a whole. Human attempts to control the weather have occurred throughout human history, and there is evidence that civilized human activity such as agriculture and industry has inadvertently modified weather patterns.
Evidence suggests that life on Earth has existed for about 3.7 billion years.[30] All known life forms share fundamental molecular mechanisms, and based on these observations, theories on the origin of life attempt to find a mechanism explaining the formation of a primordial single-cell organism from which all life originates. There are many different hypotheses regarding the path that might have been taken from simple organic molecules via pre-cellular life to protocells and metabolism.
Although there is no universal agreement on the definition of life, scientists generally accept that the biological manifestation of life is characterized by organization, metabolism, growth, adaptation, response to stimuli and reproduction.[31] Life may also be said to be simply the characteristic state of organisms. In biology, the science of living organisms, "life" is the condition which distinguishes active organisms from inorganic matter, including the capacity for growth, functional activity and the continual change preceding death.[32][33]
A diverse variety of living organisms (life forms) can be found in the biosphere on Earth, and properties common to these organisms—plants, animals, fungi, protists, archaea and bacteria—are a carbon- and water-based cellular form with complex organization and heritable genetic information. Living organisms undergo metabolism, maintain homeostasis, possess a capacity to grow, respond to stimuli, reproduce and, through natural selection, adapt to their environment in successive generations. More complex living organisms can communicate through various means.
An ecosystem (also called an environment) is a natural unit consisting of all plants, animals and micro-organisms (biotic factors) in an area functioning together with all of the non-living physical (abiotic) factors of the environment.[34]
Central to the ecosystem concept is the idea that living organisms are continually engaged in a highly interrelated set of relationships with every other element constituting the environment in which they exist. Eugene Odum, one of the founders of the science of ecology, stated: "Any unit that includes all of the organisms (i.e.: the "community") in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e.: exchange of materials between living and nonliving parts) within the system is an ecosystem."[35]
The human ecosystem concept is then grounded in the deconstruction of the human/nature dichotomy, and the emergent premise that all species are ecologically integrated with each other, as well as with the abiotic constituents of their biotope.
A greater number or variety of species, or biological diversity, in an ecosystem may contribute to its resilience, because more species are present at a location to respond to change and thus "absorb" or reduce its effects. This dampens the effect before the ecosystem's structure changes to a different state. This is not universally the case, however, and there is no proven relationship between the species diversity of an ecosystem and its ability to provide goods and services on a sustainable level.
The term ecosystem can also pertain to human-made environments, such as human ecosystems and human-influenced ecosystems. It can describe any situation where there is a relationship between living organisms and their environment. Few areas on the surface of the Earth today are free from human contact, although some genuine wilderness areas continue to exist without any form of human intervention.
Global biogeochemical cycles are critical to life, most notably those of water, oxygen, carbon, nitrogen and phosphorus.[36]
Wilderness is generally defined as a natural environment on Earth that has not been significantly modified by human activity. The WILD Foundation goes into more detail, defining wilderness as: "The most intact, undisturbed wild natural areas left on our planet – those last truly wild places that humans do not control and have not developed with roads, pipelines or other industrial infrastructure."[37] Wilderness areas and protected parks are considered important for the survival of certain species, ecological studies, conservation, solitude and recreation. Wilderness is deeply valued for cultural, spiritual, moral and aesthetic reasons. Some nature writers believe wilderness areas are vital for the human spirit and creativity.[38]
The word "wilderness" derives from the notion of wildness; in other words, that which is not controllable by humans. Its etymology is from the Old English wildeornes, which in turn derives from wildeor, meaning wild beast (wild + deor = beast, deer).[39] From this point of view, it is the wildness of a place that makes it a wilderness. The mere presence or activity of people does not disqualify an area from being "wilderness". Many ecosystems that are, or have been, inhabited or influenced by the activities of people may still be considered "wild". This way of looking at wilderness includes areas within which natural processes operate without very noticeable human interference.
Wildlife includes all non-domesticated plants, animals and other organisms. Domesticating wild plant and animal species for human benefit has occurred many times all over the planet, and has a major impact on the environment, both positive and negative. Wildlife can be found in all ecosystems. Deserts, rain forests, plains and other areas—including the most developed urban sites—all have distinct forms of wildlife. While the term in popular culture usually refers to animals that are untouched by civilized human factors, most scientists agree that wildlife around the world is now impacted by human activities.
It is the common understanding of the natural environment that underlies environmentalism—a broad political, social and philosophical movement that advocates various actions and policies in the interest of protecting what nature remains in the natural environment, or restoring or expanding the role of nature in this environment. While true wilderness is increasingly rare, wild nature (e.g., unmanaged forests, uncultivated grasslands, wildlife, wildflowers) can be found in many locations previously inhabited by humans.
Goals for the benefit of people and natural systems, commonly expressed by environmental scientists and environmentalists, include:
In some cultures the term environment is meaningless, because there is no separation between people and what they view as the natural world, or their surroundings.[48] Specifically, in the United States and Arabian countries, many native cultures do not recognize the "environment", or see themselves as environmentalists.[49]
https://en.wikipedia.org/wiki/Natural_environment
Natural resources are resources that are drawn from nature and used with few modifications. This includes the sources of valued characteristics such as commercial and industrial use, aesthetic value, scientific interest and cultural value. On Earth, it includes sunlight, atmosphere, water, land and all minerals, along with all vegetation and wildlife.[1][2][3][4]
Natural resources are part of humanity's natural heritage or protected in nature reserves. Particular areas (such as the rainforest in Fatu-Hiva) often feature biodiversity and geodiversity in their ecosystems. Natural resources may be classified in different ways. Natural resources are materials and components (something that can be used) found within the environment. Every man-made product is composed of natural resources (at its fundamental level).
A natural resource may exist as a separate entity, such as freshwater, air or any living organism such as a fish, or it may be transformed by extractivist industries into an economically useful form that must be processed to obtain the resource, such as metal ores, rare-earth elements, petroleum, timber and most forms of energy. Some resources are renewable, which means that they can be used at a certain rate and natural processes will restore them. In contrast, many extractive industries rely heavily on non-renewable resources that can only be extracted once.
Natural resource allocations can be at the centre of many economic and political confrontations both within and between countries. This is particularly true during periods of increasing scarcity and shortages (depletion and overconsumption of resources). Resource extraction is also a major source of human rights violations and environmental damage. The Sustainable Development Goals and other international development agendas frequently focus on creating more sustainable resource extraction, with some scholars and researchers focused on creating economic models, such as the circular economy, that rely less on resource extraction and more on reuse, recycling and renewable resources that can be sustainably managed.
There are various criteria for classifying natural resources. These include the source of origin, stages of development, renewability and ownership.
Resource extraction involves any activity that withdraws resources from nature. This can range in scale from the traditional use of preindustrial societies to global industry. Extractive industries are, along with agriculture, the basis of the primary sector of the economy. Extraction produces raw material, which is then processed to add value. Examples of extractive industries are hunting, trapping, mining, oil and gas drilling, and forestry. Natural resources can be a substantial part of a country's wealth;[7] however, a sudden inflow of money caused by a resource extraction boom can create social problems, including inflation harming other industries ("Dutch disease") and corruption, leading to inequality and underdevelopment. This is known as the "resource curse".
Extractive industries represent a large and growing activity in many less-developed countries, but the wealth generated does not always lead to sustainable and inclusive growth. Extractive industry businesses are often accused of acting only to maximize short-term value, implying that less-developed countries are vulnerable to powerful corporations. Alternatively, host governments are often assumed to be maximizing only immediate revenue. Researchers argue there are areas of common interest where development goals and business overlap. These present opportunities for international governmental agencies to engage with the private sector and host governments through revenue management and expenditure accountability, infrastructure development, employment creation, skills and enterprise development, and attention to impacts on children, especially girls and women.[8] A strong civil society can play an important role in ensuring the effective management of natural resources. Norway can serve as a role model in this regard, as it has good institutions and an open and dynamic public debate, with strong civil society actors providing an effective system of checks and balances for the government's management of extractive industries. One such mechanism is the Extractive Industries Transparency Initiative (EITI), a global standard for the good governance of oil, gas and mineral resources, which seeks to address the key governance issues in the extractive sectors.[9] However, in countries that lack a strong and unified civil society—where dissidents are less content with the government than in Norway's case—natural resources can actually be a factor in whether a civil war starts and how long the war lasts.[10]
In recent years, the depletion of natural resources has become a major focus of governments and organizations such as the United Nations (UN). This is evident in the UN's Agenda 21 Section Two, which outlines the necessary steps for countries to take to sustain their natural resources.[11] The depletion of natural resources is considered a sustainable development issue.[12] The term sustainable development has many interpretations, most notably the Brundtland Commission's "to ensure that it meets the needs of the present without compromising the ability of future generations to meet their own needs";[13] however, in broad terms it is balancing the needs of the planet's people and species now and in the future.[11] In regard to natural resources, depletion is of concern for sustainable development as it has the ability to degrade current environments[14] and the potential to impact the needs of future generations.[12]
"The conservation of natural resources is the fundamental problem. Unless we solve that problem, it will avail us little to solve all others." — Theodore Roosevelt
Depletion of natural resources is associated with social inequity. Considering that most biodiversity is located in developing countries,[16] depletion of this resource could result in losses of ecosystem services for these countries.[17] Some view this depletion as a major source of social unrest and conflicts in developing nations.[18]
At present, there is particular concern for rainforest regions, which hold most of the Earth's biodiversity.[19] According to Nelson,[20] deforestation and degradation affect 8.5% of the world's forests, with 30% of the Earth's surface already cropped. Considering that 80% of people rely on medicines obtained from plants and 3⁄4 of the world's prescription medicines have ingredients taken from plants,[17] loss of the world's rainforests could mean losing the chance to find more potential life-saving medicines.[21]
The depletion of natural resources is caused by "direct drivers of change"[20] such as mining, petroleum extraction, fishing and forestry, as well as "indirect drivers of change" such as demography (e.g. population growth), economy, society, politics and technology.[20] The current practice of agriculture is another factor causing depletion of natural resources, for example through the depletion of soil nutrients due to excessive use of nitrogen[20] and through desertification.[11] The depletion of natural resources is a continuing concern for society, as seen in the quote above from Theodore Roosevelt, a well-known conservationist and former United States president, who opposed unregulated natural resource extraction.
In 1982, the United Nations developed the World Charter for Nature, which recognized the need to protect nature from further depletion due to human activity. It states that measures must be taken at all societal levels, from international to individual, to protect nature. It outlines the need for sustainable use of natural resources and suggests that the protection of resources should be incorporated into national and international systems of law.[22] Further underlining the importance of protecting natural resources, the World Ethic of Sustainability, developed by the IUCN, WWF and UNEP in 1990,[23] set out eight values for sustainability, including the need to protect natural resources from depletion. Since the development of these documents, many measures have been taken to protect natural resources, including the establishment of the scientific field of conservation biology and the practice of habitat conservation.
Conservation biology is the scientific study of the nature and status of Earth's biodiversity with the aim of protecting species, their habitats and ecosystems from excessive rates of extinction.[24][25] It is an interdisciplinary subject drawing on science, economics and the practice of natural resource management.[26][27][28][29] The term conservation biology was introduced as the title of a conference held at the University of California, San Diego, in La Jolla, California, in 1978, organized by biologists Bruce A. Wilcox and Michael E. Soulé.
Habitat conservation is a type of land management that seeks to conserve, protect and restore habitat areas for wild plants and animals, especially conservation reliant species, and prevent their extinction, fragmentation or reduction in range.[30]
Natural resource management is a discipline concerned with the management of natural resources such as land, water, soil, plants and animals, with a particular focus on how management affects quality of life for present and future generations. Sustainable development thus depends on the judicious use of resources to supply both present and future generations. The disciplines of fisheries, forestry and wildlife are examples of large subdisciplines of natural resource management.
Management of natural resources involves identifying who has the right to use the resources and who does not, in order to define the management boundaries of the resource.[31] The resources may be managed by the users according to rules governing when and how the resource is used, depending on local conditions,[32] or the resources may be managed by a governmental organization or other central authority.[33]
"...successful management of natural resources depends on freedom of speech, a dynamic and wide-ranging public debate through multiple independent media channels and an active civil society engaged in natural resource issues...";[34] because of the shared nature of the resources, the individuals who are affected by the rules can participate in setting or changing them.[31] The users have the right to devise their own management institutions and plans, with recognition by the government. The right to resources includes land, water, fisheries and pastoral rights.[32] The users, or parties accountable to the users, have to actively monitor and ensure that utilisation of the resource complies with the rules, and impose penalties on those who violate them.[31] Such conflicts are resolved quickly and efficiently by the local institution according to the seriousness and context of the offense.[32] The global science-based platform for discussing natural resources management is the World Resources Forum, based in Switzerland.
https://en.wikipedia.org/wiki/Natural_resource
Natural resource management (NRM) is the management of natural resources such as land, water, soil, plants and animals, with a particular focus on how management affects the quality of life for both present and future generations (stewardship).
Natural resource management deals with managing the way in which people and natural landscapes interact. It brings together natural heritage management, land use planning, water management, biodiversity conservation, and the future sustainability of industries like agriculture, mining, tourism, fisheries and forestry. It recognizes that people and their livelihoods rely on the health and productivity of our landscapes, and that their actions as stewards of the land play a critical role in maintaining this health and productivity.[1]
Natural resource management specifically focuses on a scientific and technical understanding of resources and ecology and the life-supporting capacity of those resources.[2] Environmental management is similar to natural resource management. In academic contexts, the sociology of natural resources is closely related to, but distinct from, natural resource management.
The emphasis on sustainability can be traced back to early attempts to understand the ecological nature of North American rangelands in the late 19th century, and to the resource conservation movement of the same time.[3][4] This type of analysis coalesced in the 20th century with the recognition that preservationist conservation strategies had not been effective in halting the decline of natural resources. A more integrated approach was implemented, recognising the intertwined social, cultural, economic and political aspects of resource management.[5] A more holistic, national and even global form evolved from the Brundtland Commission and the advocacy of sustainable development.
In 2005 the government of New South Wales, Australia, established a Standard for Quality Natural Resource Management[6] to improve the consistency of practice, based on an adaptive management approach.
In the United States, the most active areas of natural resource management are fisheries management,[7] wildlife management[8] (often associated with ecotourism and rangeland management), and forest management.[9] In Australia, water sharing, such as the Murray Darling Basin Plan, and catchment management are also significant.
Natural resource management approaches[10] can be categorised according to the kind and rights of stakeholders in natural resources:
Stakeholder analysis originated in business management practice and has been incorporated into natural resource management with growing popularity. In the context of natural resource management, stakeholder analysis identifies the distinct interest groups affected by the utilisation and conservation of natural resources.[12]
There is no definitive definition of a stakeholder, as illustrated in the table below. This is especially true in natural resource management, where it is difficult to determine who has a stake, and the answer will differ according to each potential stakeholder.[13]
Different approaches to who is a stakeholder:[13]
Therefore, which definition and subsequent theory is utilised depends upon the circumstances of the stakeholders involved with the natural resource.
Billgren and Holmes[13] identified the aims of stakeholder analysis in natural resource management:
This gives transparency and clarity to policy making, allowing stakeholders to recognise conflicts of interest and facilitating resolutions.[13][22] There are numerous stakeholder theories, such as that of Mitchell et al.;[23] however, Grimble[22] created a framework of stages for a stakeholder analysis in natural resource management. Grimble[22] designed this framework to ensure that the analysis is specific to the essential aspects of natural resource management.
Stages in Stakeholder analysis:[22]
Application:
Grimble and Wellard[17] established that stakeholder analysis in natural resource management is most relevant where issues can be characterised as:
Case studies:
In the case of the Bwindi Impenetrable National Park, a comprehensive stakeholder analysis would have been relevant, and the Batwa people would potentially have been acknowledged as stakeholders, preventing the loss of livelihoods and loss of life.[17][22]
In Wales, Natural Resources Wales, a Welsh Government sponsored body, "pursues sustainable management of natural resources" and "applies the principles of sustainable management of natural resources" as stated in the Environment (Wales) Act 2016.[24] NRW is responsible for more than 40 different types of regulatory regime across a wide range of activities.
Community forestry in Nepal, Indonesia and Korea offers successful examples of how stakeholder analysis can be incorporated into the management of natural resources. Stakeholder analysis allowed the stakeholders to identify their needs and their level of involvement with the forests.
Criticisms:
Alternatives/ Complementary forms of analysis:
Natural resource management issues are inherently complex and contentious. First, they involve the ecological cycles, hydrological cycles, climate, animals, plants and geography, etc. All these are dynamic and inter-related. A change in one of them may have far reaching and/or long-term impacts which may even be irreversible. Second, in addition to the complexity of the natural systems, managers also have to consider various stakeholders and their interests, policies, politics, geographical boundaries and economic implications. It is impossible to fully satisfy all aspects at the same time. Therefore, between the scientific complexity and the diverse stakeholders, natural resource management is typically contentious.
After the United Nations Conference for the Environment and Development (UNCED) held in Rio de Janeiro in 1992,[28]most nations subscribed to new principles for the integrated management of land, water, and forests. Although program names vary from nation to nation, all express similar aims.
The various approaches applied to natural resource management include:
The community-based natural resource management (CBNRM) approach combines conservation objectives with the generation of economic benefits for rural communities. The three key assumptions are that: locals are better placed to conserve natural resources; people will conserve a resource only if benefits exceed the costs of conservation; and people will conserve a resource that is linked directly to their quality of life.[5] When a local people's quality of life is enhanced, their efforts and commitment to ensure the future well-being of the resource are also enhanced.[29] Regional and community based natural resource management is also based on the principle of subsidiarity.
The United Nations advocates CBNRM in the Convention on Biodiversity and the Convention to Combat Desertification. Unless clearly defined, decentralised NRM can result in an ambiguous socio-legal environment with local communities racing to exploit natural resources while they can, such as the forest communities in central Kalimantan (Indonesia).[30]
A problem of CBNRM is the difficulty of reconciling and harmonising the objectives of socioeconomic development, biodiversity protection and sustainable resource utilisation.[31] The concept and conflicting interests of CBNRM[32][33] show how the motives behind participation are differentiated as either people-centred (active or participatory results that are truly empowering)[34] or planner-centred (nominal, resulting in passive recipients). Understanding power relations is crucial to the success of community based NRM. Locals may be reluctant to challenge government recommendations for fear of losing promised benefits.
CBNRM is based particularly on advocacy by nongovernmental organizations, working with local groups and communities on the one hand and national and transnational organizations on the other, to build and extend new versions of environmental and social advocacy that link social justice and environmental management agendas.[35] Both direct and indirect benefits have been observed, including a share of revenues, employment, diversification of livelihoods, and increased pride and identity. Ecological and societal successes and failures of CBNRM projects have been documented.[36][37] CBNRM has raised new challenges, as concepts of community, territory, conservation, and indigeneity are worked into politically varied plans and programs at disparate sites. Warner and Jones[38] address strategies for effectively managing conflict in CBNRM.
The capacity of Indigenous communities, led by traditional custodians, to conserve natural resources has been acknowledged by the Australian Government with the Caring for Country[39] Program. Caring for our Country is an Australian Government initiative jointly administered by the Australian Government Department of Agriculture, Fisheries and Forestry and the Department of the Environment, Water, Heritage and the Arts. These Departments share responsibility for delivery of the Australian Government's environment and sustainable agriculture programs, which have traditionally been broadly referred to under the banner of 'natural resource management'. These programs have been delivered regionally, through 56 State government bodies, successfully allowing regional communities to decide the natural resource priorities for their regions.[40]
More broadly, a research study based in Tanzania and the Pacific investigated what motivates communities to adopt CBNRM and found that aspects of the specific CBNRM program, of the community that adopted the program, and of the broader social-ecological context together shape why CBNRM programs are adopted.[41] Overall, however, program adoption seemed to mirror the relative advantage of CBNRM programs to local villagers and villager access to external technical assistance.[41] There have been socioeconomic critiques of CBNRM in Africa,[42] but the ecological effectiveness of CBNRM, measured by wildlife population densities, has been shown repeatedly in Tanzania.[43][44]
Governance is seen as a key consideration for delivering community-based or regional natural resource management. In the State of NSW, the 13 catchment management authorities (CMAs) are overseen by the Natural Resources Commission (NRC), responsible for undertaking audits of the effectiveness of regional natural resource management programs.[45]
Though presenting a transformative approach to resource management that recognizes and involves local communities rather than displacing them, Community-Based Natural Resource Management strategies have faced scrutiny from both scholars and advocates for indigenous communities. Tania Murray, in her examination of CBNRM in Upland Southeast Asia,[46] discovered certain limitations associated with the strategy, primarily stemming from her observation of an idealistic perspective of the communities held by external entities implementing CBNRM programs.
Murray's findings revealed that, in the Uplands, CBNRM as a legal strategy imposed constraints on the communities. One significant limitation was the necessity for communities to fulfill discriminatory and enforceable prerequisites in order to obtain legal entitlements to resources. Murray contends that such legal practices, grounded in specific distinguishing identities or practices, pose a risk of perpetuating and strengthening discriminatory norms in the region.[46]
Furthermore, adopting a Marxist perspective centered on class struggle, some have criticized CBNRM as an empowerment tool, asserting that its focus on state-community alliances may limit its effectiveness, particularly for communities facing challenges from "vicious states," thereby restricting the empowerment potential of the programs.[46]
Social capital and gender are factors that impact community-based natural resource management (CBNRM), including conservation strategies and collaborations between community members and staff. Through three months of participant observation in a fishing camp in San Evaristo, Mexico, Ben Siegelman learned that the fishermen build trust through jokes and fabrications. He emphasizes social capital as a process because it is built and accumulated through the practice of intricate social norms. Siegelman notes that playful joking is connected to masculinity and often excludes women. He stresses that both gender and social capital are performed.

Furthermore, in San Evaristo, the gendered network of fishermen is simultaneously a social network. Nearly all fishermen in San Evaristo are men, and most families have lived there for generations. Men form intimate relationships by spending 14-hour work days together, while women spend time with the family managing domestic caretaking. Siegelman observes three categories of lies amongst the fishermen: exaggerations, deceptions, and jokes. For example, a fisherman may exaggerate his success fishing at a particular spot to mislead friends, place his hand on the scale to turn a larger profit, or make a sexual joke to earn respect. As Siegelman puts it, "lies build trust." Siegelman saw that this division of labor was reproduced, at least in part, because the culture of lying and trust was a masculine activity unique to the fishermen. Just as the culture of lying excluded women from the social sphere of fishing, conservationists were also excluded from this social arrangement and thus were not able to obtain the trust needed to do their work of regulating fishing practices.
As outsiders, conservationists, even male conservationists, were not able to fit the ideal of masculinity that was considered "trustable" by the fishermen, and so could not convince them to implement or participate in conservation practices. In one instance, the researcher replied jokingly "in the sea" when a fisherman asked where the others were fishing that day. This vague response earned him trust. Women are excluded from this form of social capital because many of the jokes center around "masculine exploits". Siegelman finishes by asking: how can female conservationists act when they are excluded through social capital? What role should men play in this situation?[47]
The primary methodological approach adopted by catchment management authorities (CMAs) for regional natural resource management in Australia is adaptive management.[6]
This approach includes recognition that adaption occurs through a process of 'plan-do-review-act'. It also recognises seven key components that should be considered for quality natural resource management practice:
Integrated natural resource management (INRM) is the process of managing natural resources in a systematic way that includes multiple aspects of natural resource use (biophysical, socio-political, and economic) to meet the production goals of producers and other direct users (e.g., food security, profitability, risk aversion) as well as the goals of the wider community (e.g., poverty alleviation, welfare of future generations, environmental conservation). It focuses on sustainability while trying to incorporate all possible stakeholders from the planning level itself, reducing possible future conflicts. The conceptual basis of INRM has evolved in recent years through the convergence of research in diverse areas such as sustainable land use, participatory planning, integrated watershed management, and adaptive management.[48][49] INRM is used extensively and has been successful in regional and community based natural resource management.[50]
There are various frameworks and computer models developed to assist natural resource management.
Geographic Information Systems (GIS)
GIS is a powerful analytical tool, as it is capable of overlaying datasets to identify links. For example, a bush regeneration scheme can be informed by overlaying rainfall, cleared land and erosion data.[51] In Australia, metadata directories such as NDAR provide data on Australian natural resources such as vegetation, fisheries, soils and water.[52] These are limited by the potential for subjective input and data manipulation.
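The overlay idea can be illustrated with a minimal raster sketch (the grids, values and thresholds below are entirely hypothetical, not NDAR's actual data format): treat rainfall, land clearing and erosion risk as aligned grids and combine boolean masks to flag candidate cells for a bush regeneration scheme.

```python
import numpy as np

# Hypothetical 4x4 raster layers on the same grid (values are illustrative)
rainfall_mm = np.array([[900, 400, 850, 300],
                        [950, 820, 410, 350],
                        [500, 870, 890, 300],
                        [450, 300, 910, 860]])
cleared = np.array([[1, 0, 1, 0],
                    [1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1]], dtype=bool)
erosion_risk = np.array([[0, 0, 1, 0],
                         [1, 0, 0, 0],
                         [0, 1, 1, 0],
                         [0, 0, 0, 1]], dtype=bool)

# Overlay: cells that are cleared, erosion-prone, and wet enough to replant
candidate = cleared & erosion_risk & (rainfall_mm > 600)
print(np.argwhere(candidate))  # grid coordinates of candidate cells
```

Real GIS workflows perform the same logical overlay on georeferenced rasters (e.g. via a GIS package rather than raw arrays), but the principle is identical: each output cell is a function of the co-located cells in every input layer.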
Natural Resources Management Audit Frameworks
The NSW Government in Australia has published an audit framework[53] for natural resource management, to assist the establishment of a performance audit role in the governance of regional natural resource management. This audit framework builds from other established audit methodologies, including performance audit, environmental audit and internal audit. Audits undertaken using this framework have provided confidence to stakeholders, identified areas for improvement and described policy expectations for the general public.[54][55]
The Australian Government has established a framework for auditinggreenhouse emissionsand energy reporting, which closely follows Australian Standards for Assurance Engagements.
The Australian Government is also currently preparing an audit framework for auditing water management, focussing on the implementation of the Murray Darling Basin Plan.
The issue of biodiversity conservation is regarded as an important element in natural resource management. Biodiversity is a comprehensive concept describing the extent of natural diversity. Gaston and Spicer[56] (p. 3) point out that biodiversity is "the variety of life" and relate it to different kinds of "biodiversity organization". According to Gray[57] (p. 154), the first widespread use of the definition of biodiversity was put forward by the United Nations in 1992, involving different aspects of biological diversity.
The "threats" wreaking havoc on biodiversity include: habitat fragmentation, putting a strain on the already stretched biological resources; forest deterioration and deforestation; the invasion of "alien species"; and "climate change"[58] (p. 2). Since these threats have received increasing attention from environmentalists and the public, the precautionary management of biodiversity has become an important part of natural resources management. According to Cooney, there are material measures to carry out precautionary management of biodiversity in natural resource management.
Cooney claims that policy making depends on "evidences", relating to a "high standard of proof", the forbidding of particular "activities" and "information and monitoring requirements". Before making a policy of precaution, categorical evidence is needed. When the potential menace of "activities" is regarded as a critical and "irreversible" endangerment, these "activities" should be forbidden. For example, since explosives and toxicants have serious consequences that endanger humans and the natural environment, the South African Marine Living Resources Act promulgated a series of policies completely forbidding fishing with explosives and toxicants.
According to Cooney, there are four methods for the precautionary management of biodiversity in natural resources management:
In order to have a sustainable environment, understanding and using appropriate management strategies is important. In terms of understanding, Young[60] emphasises some important points of land management:
A study by Dale et al. (2000)[61] has shown that there are five fundamental and helpful ecological principles for land managers and those who need them. The ecological principles relate to time, place, species, disturbance and the landscape, and they interact in many ways. It is suggested that land managers could follow these guidelines:
|
https://en.wikipedia.org/wiki/Natural_resource_management
|
Citizen participation or public participation in social science refers to different mechanisms for the public to express opinions—and ideally exert influence—regarding political, economic, management or other social decisions. Participatory decision-making can take place along any realm of human social activity, including economic (i.e. participatory economics), political (i.e. participatory democracy or parpolity), management (i.e. participatory management), cultural (i.e. polyculturalism) or familial (i.e. feminism).
For well-informed participation to occur, it is argued that some version of transparency, e.g. radical transparency, is necessary but not sufficient. It has also been argued that those most affected by a decision should have the most say, while those least affected should have the least say in a topic.[citation needed]
Sherry Arnstein discusses eight types of participation in A Ladder of Citizen Participation (1969). Often termed "Arnstein's ladder of citizen participation", these are broadly categorized as:
She defines citizen participation as the redistribution of power that enables the have-not citizens, presently excluded from the political and economic processes, to be deliberately included in the future.[1]
Robert Silverman expanded on Arnstein's ladder of citizen participation with the introduction of his "citizen participation continuum." In this extension of Arnstein's work, he takes into consideration the groups that drive participation and the forms of participation they pursue. Consequently, Silverman's continuum distinguishes between grassroots participation and instrumental participation.[2]
Archon Fung presents another classification of participation based on three key questions: Who is allowed to participate, and are they representative of the population? What is the method of communication or decision-making? And how much influence or authority is granted to the participation?[3]
Other "ladders" of participation have been presented by D.M. Connor,[4] Wiedemann and Femers,[5] A. Dorcey et al.,[6] Jules N. Pretty[7] and E.M. Rocha.[8]
The International Association for Public Participation (IAP2) has developed a 'spectrum of public participation' based on five levels: information, consultation, involvement, collaboration and empowerment.[9]
Participation in the corporate sector has been studied as a way to improve business related processes starting from productivity to employee satisfaction.[10][11]
A cultural variation of participation can be seen through the actions of Indigenous American cultures. Participation draws from two aspects: respect for and commitment to one's community and family. The respect is seen through non-obligated participation in various aspects of their lives, ranging from housework to fieldwork.[12]
Often the participation in these communities is a social interaction occurring as a progression for the community rather than for the individual. Participation in these communities can serve as a "learning service". This learning ranges from everyday activities, in which community members gain a new skill to complete a task, to participation in social events that keep their cultural practices alive. These social participation events allow newer generations to observe and learn from this ongoing participation in order to continue these practices.[13][14] Although there are different domains and objectives of participation in these communities, the bottom line is that this participation is non-obligated and often community oriented.
A social interaction that continues to thrive because of this high level of non-obligation is the everyday action of translating.
Participation activities may be motivated from an administrative perspective or a citizen perspective on a governmental, corporate or social level. From the administrative viewpoint, participation can build public support for activities. It can educate the public about an agency's activities. It can also facilitate useful information exchange regarding local conditions. Furthermore, participation is often legally mandated. From the citizen viewpoint, participation enables individuals and groups to influence agency decisions in a representational manner. The type of political participation depends on the motivation: when a group is determined to work to solve a community problem, it may lead marches or work for candidates. Most immigrant racial groups have high motivation, since they are increasingly geographically dispersed and are among the faster-growing racial groups.[15] Giovanni Allegretti explains in an interview, using the example of participatory budgeting, how participation can influence the relation between citizens and their local government, increase trust, and boost people's willingness to participate.[16]
Public participation in decision-making has been studied as a way to align value judgements and risk trade-offs with public values and attitudes about acceptable risk. This research is of interest for emerging areas of science, including controversial technologies and new applications.[17]
In the United States, studies have demonstrated public support for increased participation in science. While public trust in scientists remains generally high in the United States,[18] the public may rate scientists' ability to make decisions on behalf of society less highly. For example, a 2016–2017 survey of public opinion on CRISPR gene editing technology showed a "relatively broad consensus among all groups in support of the idea that the scientific community 'should consult with the public before applying gene editing to humans,'" providing a "broad mandate for public engagement."[19]
The scientific community has struggled to involve the public in scientific decision-making. Abuses of scientific research participants, including well-known examples like the Tuskegee syphilis experiment, may continue to erode trust in scientists among vulnerable populations.
Additionally, past efforts to come to scientific consensus on controversial issues have excluded the public, and as a result narrowed the scope of technological risks considered. For example, at the 1975 Asilomar conference on recombinant DNA, scientists addressed the risks of biological contamination during laboratory experiments, but failed to consider the more varied public concerns that would surface with commercial adoption of genetically modified crops.[20]
Researchers acknowledge that further infrastructure and investment are needed to facilitate effective participatory decision-making in science. A five-part approach has been suggested:
Communities can be involved in local, regional and national cultural heritage initiatives, in the processes of creation, organisation, access, use and preservation.[21] The internet has facilitated this, particularly via crowdsourcing, where the general public is asked to help contribute to shared goals, creating content, but also as a form of mutually beneficial engagement,[22] particularly with the collections and research of Galleries, Libraries, Archives, and Museums (GLAM). An example of this is the Transcribe Bentham project, where volunteers are asked to transcribe the manuscripts of the philosopher Jeremy Bentham. Challenges include: how to manage copyright, ownership, orphan works, and access to open data from heritage organisations; how to build relationships with cultural heritage amateurs; sustainable preservation; and attitudes towards openness.[21]
Efforts to promote public participation have been widely critiqued. There is particular concern regarding the potential capture of the public into the sphere of influence of governance stakeholders, leaving communities frustrated by public participation initiatives, marginalized and ignored.[23]
Youth participation in civic activities has been found to be linked to a student's race, academic track, and their school's socioeconomic status.[24] The American Political Science Task Force on Inequality and American Democracy has found that those with higher socioeconomic status participate at higher rates than those with lower status.[25] A collection of surveys on student participation in 2008 found that "Students who are more academically successful or white and those with parents of higher socioeconomic status receive more classroom-based civic learning opportunities."[24] Youth from disadvantaged backgrounds are less likely to report participation in school-based service or service-learning than other students.[26][27] Students with more highly educated parents and higher household incomes are more likely to have the opportunity to participate in student government, give a speech, or develop debating skills in school.[28]
|
https://en.wikipedia.org/wiki/Participation_(decision_making)
|
Renewable energy (also called green energy) is energy made from renewable natural resources that are replenished on a human timescale. The most widely used renewable energy types are solar energy, wind power, and hydropower. Bioenergy and geothermal power are also significant in some countries. Some also consider nuclear power a renewable power source, although this is controversial, as nuclear energy requires mining uranium, a nonrenewable resource. Renewable energy installations can be large or small and are suited for both urban and rural areas. Renewable energy is often deployed together with further electrification. This has several benefits: electricity can move heat and vehicles efficiently and is clean at the point of consumption.[1][2] Variable renewable energy sources are those that have a fluctuating nature, such as wind power and solar power. In contrast, controllable renewable energy sources include dammed hydroelectricity, bioenergy, and geothermal power.
Renewable energy systems have rapidly become more efficient and cheaper over the past 30 years.[3] A large majority of worldwide newly installed electricity capacity is now renewable.[4] Renewable energy sources, such as solar and wind power, have seen significant cost reductions over the past decade, making them more competitive with traditional fossil fuels.[5] In most countries, photovoltaic solar or onshore wind are the cheapest new-build electricity.[6] From 2011 to 2021, renewable energy grew from 20% to 28% of global electricity supply. Power from the sun and wind accounted for most of this increase, growing from a combined 2% to 10%. Use of fossil energy shrank from 68% to 62%.[7] In 2024, renewables accounted for over 30% of global electricity generation and are projected to reach over 45% by 2030.[8][9] Many countries already have renewables contributing more than 20% of their total energy supply, with some generating over half or even all their electricity from renewable sources.[10][11]
The main motivation to use renewable energy instead of fossil fuels is to slow and eventually stop climate change, which is mostly caused by their greenhouse gas emissions. In general, renewable energy sources pollute much less than fossil fuels.[12] The International Energy Agency estimates that to achieve net zero emissions by 2050, 90% of global electricity will need to be generated by renewables.[13] Renewables also cause much less air pollution than fossil fuels, improving public health, and are less noisy.[12]
The deployment of renewable energy still faces obstacles, especially fossil fuel subsidies,[14] lobbying by incumbent power providers,[15] and local opposition to the use of land for renewable installations.[16][17] Like all mining, the extraction of minerals required for many renewable energy technologies also results in environmental damage.[18] In addition, although most renewable energy sources are sustainable, some are not.
Renewable energy is usually understood as energy harnessed from continuously occurring natural phenomena. The International Energy Agency defines it as "energy derived from natural processes that are replenished at a faster rate than they are consumed". Solar power, wind power, hydroelectricity, geothermal energy, and biomass are widely agreed to be the main types of renewable energy.[21] Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services.[22]
Although almost all forms of renewable energy cause much fewer carbon emissions than fossil fuels, the term is not synonymous with low-carbon energy. Some non-renewable sources of energy, such as nuclear power, generate almost no emissions, while some renewable energy sources can be very carbon-intensive, such as the burning of biomass if it is not offset by planting new plants.[12] Renewable energy is also distinct from sustainable energy, a more abstract concept that seeks to group energy sources based on their overall permanent impact on future generations of humans. For example, biomass is often associated with unsustainable deforestation.[23]
As part of the global effort to limit climate change, most countries have committed to net zero greenhouse gas emissions.[24] In practice, this means phasing out fossil fuels and replacing them with low-emissions energy sources.[12] This much-needed process, coined "low-carbon substitutions"[25] in contrast to other transition processes such as energy additions, needs to be accelerated several-fold in order to successfully mitigate climate change.[25] At the 2023 United Nations Climate Change Conference, around three-quarters of the world's countries set a goal of tripling renewable energy capacity by 2030.[26] The European Union aims to generate 40% of its electricity from renewables by the same year.[27]
Renewable energy is more evenly distributed around the world than fossil fuels, which are concentrated in a limited number of countries.[28] It also brings health benefits by reducing air pollution caused by the burning of fossil fuels. The potential worldwide savings in health care costs have been estimated at trillions of dollars annually.[29]
The two most important forms of renewable energy, solar and wind, are intermittent energy sources: they are not available constantly, resulting in lower capacity factors. In contrast, fossil fuel power plants, nuclear power plants and hydropower are usually able to produce precisely the amount of energy an electricity grid requires at a given time. Solar energy can only be captured during the day, and ideally in cloudless conditions. Wind power generation can vary significantly not only day-to-day but even month-to-month.[30] This poses a challenge when transitioning away from fossil fuels: energy demand will often be higher or lower than what renewables can provide.[31]
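The notion of capacity factor mentioned above can be made concrete with a small calculation (the figures here are illustrative, not measured data): the capacity factor is actual energy produced divided by what the nameplate rating could deliver over the same period.

```python
# Capacity factor = actual energy / (nameplate power x hours in the period)
# Illustrative figures for one 3 MW wind turbine over a 30-day month
nameplate_mw = 3.0
hours = 30 * 24                    # 720 hours in the period
actual_mwh = 756.0                 # hypothetical metered output (MWh)

capacity_factor = actual_mwh / (nameplate_mw * hours)
print(f"{capacity_factor:.0%}")    # 35%
```

A dispatchable plant can run near its nameplate rating whenever needed, while a wind or solar plant's output is capped by the weather, which is why its capacity factor is structurally lower.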
In the medium term, this variability may require keeping some gas-fired power plants or other dispatchable generation on standby[32][33] until there is enough energy storage, demand response, grid improvement, or base load power from non-intermittent sources. In the long term, energy storage is an important way of dealing with intermittency.[34] Using diversified renewable energy sources and smart grids can also help flatten supply and demand.[35]
Sector coupling of the power generation sector with other sectors may increase flexibility: for example, the transport sector can be coupled by charging electric vehicles and sending electricity from vehicle to grid.[36]Similarly, the industry sector can be coupled via hydrogen produced by electrolysis,[37]and the buildings sector via thermal energy storage for space heating and cooling.[38]
Building overcapacity for wind and solar generation can help ensure sufficient electricity production even during poor weather. In optimal weather, it may be necessary to curtail energy generation if it is not possible to use or store excess electricity.[39]
Electrical energy storage is a collection of methods used to store electrical energy. Electrical energy is stored during times when production (especially from intermittent sources such as wind power, tidal power and solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity accounts for more than 85% of all grid power storage.[40]Batteries are increasingly being deployed for storage[41]and grid ancillary services,[42]and for domestic storage.[43]Green hydrogen is a more economical means of long-term renewable energy storage, in terms of capital expenditure, than pumped hydroelectricity or batteries.[44][45]
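The energy a pumped-storage reservoir can return to the grid follows from basic mechanics: E = ρ·g·h·V·η. A back-of-the-envelope sketch with illustrative numbers, not taken from any cited plant:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def pumped_storage_mwh(volume_m3: float, head_m: float, efficiency: float) -> float:
    """Recoverable energy: rho * g * head * volume * round-trip efficiency, in MWh."""
    joules = RHO_WATER * G * head_m * volume_m3 * efficiency
    return joules / 3.6e9  # joules per MWh

# Hypothetical reservoir: one million cubic meters, 300 m head, 80% round-trip efficiency
print(round(pumped_storage_mwh(1e6, 300, 0.8)))  # 654 (MWh)
```

The cube of the head and volume terms in real projects is why pumped storage favors mountainous sites with large elevation differences.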
Two main renewable energy sources, solar power and wind power, are usually deployed in a distributed generation architecture, which offers specific benefits and carries specific risks.[46]Notable risks stem from the concentration of 90% of the photovoltaic supply chain in a single country (China).[47]The mass-scale installation of photovoltaic power inverters with remote control features, security vulnerabilities and backdoors means that cyberattacks could disable generation from millions of physically decentralised panels, removing hundreds of gigawatts of installed power from the grid at once.[48][49]Similar attacks have targeted wind power farms through vulnerabilities in their remote control and monitoring systems.[50]The European NIS2 directive partially responds to these challenges by extending the scope of cybersecurity regulations to the energy generation market.[51]
Solar power produced around 1,300 terawatt-hours (TWh) of electricity worldwide in 2022,[10]representing 4.6% of the world's electricity. Almost all of this growth has happened since 2010.[57]Solar energy can be harnessed anywhere that receives sunlight; however, the amount of solar energy that can be harnessed for electricity generation is influenced by weather conditions, geographic location and time of day.[58]
There are two mainstream ways of harnessing solar energy: solar thermal, which converts solar energy into heat, and photovoltaics (PV), which converts it into electricity.[12]PV is far more widespread, accounting for around two thirds of global solar energy capacity as of 2022.[59]It is also growing at a much faster rate, with 170 GW of capacity newly installed in 2021,[60]compared to 25 GW of solar thermal.[59]
Passive solar refers to a range of construction strategies and technologies that aim to optimize the distribution of solar heat in a building. Examples include solar chimneys,[12]orienting a building to the sun, using construction materials that can store heat, and designing spaces that naturally circulate air.[61]
From 2020 to 2022, solar technology investments almost doubled from USD 162 billion to USD 308 billion, driven by the sector's increasing maturity and cost reductions, particularly in solar photovoltaic (PV), which accounted for 90% of total investments. China and the United States were the main recipients, collectively making up about half of all solar investments since 2013. Despite reductions in Japan and India due to policy changes and COVID-19, growth in China, the United States, and a significant increase from Vietnam's feed-in tariff program offset these declines. Globally, the solar sector added 714 gigawatts (GW) of solar PV and concentrated solar power (CSP) capacity between 2013 and 2021, with a notable rise in large-scale solar heating installations in 2021, especially in China, Europe, Turkey, and Mexico.[62]
A photovoltaic system, consisting of solar cells assembled into panels, converts light into electrical direct current via the photovoltaic effect.[65][66]PV has several advantages that make it by far the fastest-growing renewable energy technology. It is cheap, low-maintenance and scalable; adding to an existing PV installation as demand arises is simple. Its main disadvantage is its poor performance in cloudy weather.[12]
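A common rule of thumb for estimating PV output multiplies panel area, cell efficiency, local insolation, and a performance ratio that captures real-world losses (heat, wiring, inverter). A sketch with hypothetical values:

```python
def annual_pv_kwh(area_m2: float, efficiency: float,
                  insolation_kwh_m2: float, performance_ratio: float) -> float:
    """Rule-of-thumb annual yield of a PV array in kWh."""
    return area_m2 * efficiency * insolation_kwh_m2 * performance_ratio

# Hypothetical rooftop: 20 m^2 of 20%-efficient panels, 1,500 kWh/m^2/year
# of insolation, and a 0.75 performance ratio for system losses.
print(annual_pv_kwh(20, 0.20, 1500, 0.75))  # 4500.0
```

The same formula explains PV's scalability: doubling the array area simply doubles the expected yield, with no minimum plant size.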
PV systems range from small residential and commercial rooftop or building-integrated installations[67][68][69]to large utility-scale photovoltaic power stations.[70][71][72]A household's solar panels can either be used for just that household or, if connected to an electrical grid, can be aggregated with millions of others.[73][74][75]
The first utility-scale solar power plant was built in 1982 in Hesperia, California by ARCO.[76][77]The plant was not profitable and was sold eight years later.[78]However, over the following decades, PV cells became significantly more efficient and cheaper.[79]As a result, PV adoption has grown exponentially since 2010.[80]Global capacity increased from 230 GW at the end of 2015 to 890 GW in 2021.[81]PV grew fastest in China between 2016 and 2021, adding 560 GW, more than all advanced economies combined.[82]Four of the ten biggest solar power stations are in China, including the biggest, Golmud Solar Park.[83]
Solar panels are recycled to reduce electronic waste and to recover materials that would otherwise need to be mined,[84]but the industry is still small and work is ongoing to improve and scale up the process.[85][86][87]
Unlike photovoltaic cells that convert sunlight directly into electricity, solar thermal systems convert it into heat. They use mirrors or lenses to concentrate sunlight onto a receiver, which in turn heats a water reservoir. The heated water can then be used in homes. The advantage of solar thermal is that the heated water can be stored until it is needed, eliminating the need for a separate energy storage system.[88]Solar thermal power can also be converted to electricity by using the steam generated from the heated water to drive a turbine connected to a generator. However, because generating electricity this way is much more expensive than photovoltaic power plants, there are very few in use today.[89]
Floatovoltaics, or floating solar panels, are solar panels mounted on bodies of water. Potential advantages include increased panel efficiency due to water cooling and the lower cost of water surface compared to land; a drawback is that building floating installations can be more expensive.
Agrivoltaics is the simultaneous use of land for energy production and agriculture. An advantage is more efficient use of land, which lowers land costs. A drawback is that the plants grown underneath must tolerate shade, such as Polka Dot Plant, Pineapple Sage, and Begonia.[90]Agrivoltaics not only optimizes land use and reduces costs by enabling dual revenue streams from both energy production and agriculture, but it can also help moderate temperatures beneath the panels, potentially reducing water loss and improving microclimates for crop growth. However, careful design and crop selection are crucial, as the shading effect may limit the types of plants that can thrive, necessitating the use of shade-tolerant species and innovative management practices.[91]
Humans have harnessed wind energy since at least 3500 BC. Until the 20th century, it was primarily used to power ships, windmills and water pumps. Today, the vast majority of wind power is used to generate electricity using wind turbines.[12]Modern utility-scale wind turbines range from around 600 kW to 9 MW of rated power. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine.[96]Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms.
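The cube law quoted above can be made concrete with the standard wind power formula P = ½·ρ·A·v³·Cp, where A is the rotor's swept area and Cp is the fraction of the wind's kinetic energy actually captured. All figures below are illustrative:

```python
import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level

def wind_power_w(rotor_diameter_m: float, wind_speed_ms: float, cp: float = 0.4) -> float:
    """Captured power: 0.5 * rho * swept area * v^3 * power coefficient."""
    area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * AIR_DENSITY * area * wind_speed_ms ** 3 * cp

p_10 = wind_power_w(100, 10)  # hypothetical 100 m rotor at 10 m/s: roughly 1.9 MW
p_20 = wind_power_w(100, 20)
print(p_20 / p_10)  # 8.0 -- doubling wind speed yields eight times the power
```

The same cube law is why offshore sites are so attractive: wind speeds roughly 90% higher translate to about 1.9³ ≈ 6.9 times the available power.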
Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of electricity demand, while Ireland, Portugal and Spain each met nearly 20%.[97]
Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand, assuming all practical barriers were overcome. This would require wind turbines to be installed over large areas, particularly in areas of higher wind resources such as offshore, and likely also industrial use of new types of vertical-axis (VAWT) turbines in addition to the horizontal-axis units currently in use. As offshore wind speeds average about 90% greater than those on land, offshore resources can contribute substantially more energy than land-based turbines.[98]
Investments in wind technologies reached USD 161 billion in 2020, with onshore wind dominating at 80% of total investments from 2013 to 2022. Offshore wind investments nearly doubled to USD 41 billion between 2019 and 2020, primarily due to policy incentives in China and expansion in Europe. Global wind capacity increased by 557 GW between 2013 and 2021, with capacity additions increasing by an average of 19% each year.[62]
Since water is about 800 times denser than air, even a slow-flowing stream of water, or moderate sea swell, can yield considerable amounts of energy. Water can generate electricity with a conversion efficiency of about 90%, the highest rate among renewable energy sources.[102]Water energy takes many forms.
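Water's high density shows up directly in the standard hydropower formula P = η·ρ·g·Q·H, where Q is the flow rate and H the head. A sketch with illustrative figures, not drawn from any cited plant:

```python
def hydro_power_mw(flow_m3s: float, head_m: float, efficiency: float = 0.9) -> float:
    """Hydroelectric output: efficiency * rho * g * flow * head, in MW."""
    return efficiency * 1000.0 * 9.81 * flow_m3s * head_m / 1e6

# Hypothetical plant: 100 m^3/s of flow through a 50 m head at 90% efficiency
print(hydro_power_mw(100, 50))  # about 44 MW
```

A comparable air flow would deliver only around 1/800 of this power, which is why even modest rivers are useful energy sources.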
Much hydropower is flexible, thus complementing wind and solar, as it is not intermittent.[106]In 2021, world renewable hydropower capacity was 1,360 GW.[82]Only a third of the world's estimated hydroelectric potential of 14,000 TWh/year has been developed.[107][108]New hydropower projects face opposition from local communities due to their large impact, including relocation of communities and flooding of wildlife habitats and farming land.[109]High costs and long lead times arising from the permission process, including environmental and risk assessments, along with a lack of environmental and social acceptance, are therefore the primary challenges for new developments.[110]It is popular to repower old dams, thereby increasing their efficiency and capacity as well as their responsiveness on the grid.[111]Where circumstances permit, existing dams such as the Russell Dam, built in 1985, may be updated with "pump back" facilities for pumped storage, which is useful for peak loads or to support intermittent wind and solar power. Because dispatchable power is more valuable than variable renewable energy (VRE),[112][113]countries with large hydroelectric developments such as Canada and Norway are spending billions to expand their grids to trade with neighboring countries that have limited hydropower.[114]
Biomass is biological material derived from living, or recently living, organisms. Most commonly, it refers to plants or plant-derived materials. As an energy source, biomass can either be used directly via combustion to produce heat, or converted to a more energy-dense biofuel like ethanol. Wood is the most significant biomass energy source as of 2012[118]and is usually sourced from trees cleared for silvicultural reasons or fire prevention. Municipal wood waste, for instance construction materials or sawdust, is also often burned for energy.[119]The biggest per-capita producers of wood-based bioenergy are heavily forested countries like Finland, Sweden, Estonia, Austria, and Denmark.[120]
Bioenergy can be environmentally destructive if old-growth forests are cleared to make way for crop production. In particular, demand for palm oil to produce biodiesel has contributed to the deforestation of tropical rainforests in Brazil and Indonesia.[121]In addition, burning biomass still produces carbon emissions, although much less than fossil fuels (39 grams of CO2 per megajoule of energy, compared to 75 g/MJ for fossil fuels).[122]
Some biomass sources are unsustainable at current rates of exploitation (as of 2017).[123]
Biofuels are primarily used in transportation, providing 3.5% of the world's transport energy demand in 2022,[124]up from 2.7% in 2010.[125]Biojet is expected to be important for short-term reduction of carbon dioxide emissions from long-haul flights.[126]
Aside from wood, the major sources of bioenergy are bioethanol and biodiesel.[12]Bioethanol is usually produced by fermenting the sugar components of crops like sugarcane and maize, while biodiesel is mostly made from oils extracted from plants, such as soybean oil and corn oil.[127]Most of the crops used to produce bioethanol and biodiesel are grown specifically for this purpose,[128]although used cooking oil accounted for 14% of the oil used to produce biodiesel as of 2015.[127]The biomass used to produce biofuels varies by region. Maize is the major feedstock in the United States, while sugarcane dominates in Brazil.[129]In the European Union, where biodiesel is more common than bioethanol, rapeseed oil and palm oil are the main feedstocks.[130]China, although it produces comparatively much less biofuel, uses mostly corn and wheat.[131]In many countries, biofuels are either subsidized or mandated to be included in fuel mixtures.[121]
There are many other sources of bioenergy that are more niche, or not yet viable at large scales. For instance, bioethanol could be produced from the cellulosic parts of crops, rather than only the seed as is common today.[132]Sweet sorghum may be a promising alternative source of bioethanol, due to its tolerance of a wide range of climates.[133]Cow dung can be converted into methane.[134]There is also a great deal of research involving algal fuel, which is attractive because algae is a non-food resource, grows around 20 times faster than most food crops, and can be grown almost anywhere.[135]
Geothermal energy is thermal energy (heat) extracted from the Earth's crust. It originates from several different sources, of which the most significant is the slow radioactive decay of minerals contained in the Earth's interior,[12]as well as some leftover heat from the formation of the Earth.[140]Some of the heat is generated near the Earth's surface in the crust, but some also flows from deep within the Earth, from the mantle and core.[140]Geothermal energy extraction is viable mostly in countries located on tectonic plate edges, where the Earth's hot mantle is more exposed.[141]As of 2023, the United States has by far the most geothermal capacity (2.7 GW,[142]or less than 0.2% of the country's total energy capacity[143]), followed by Indonesia and the Philippines. Global capacity in 2022 was 15 GW.[142]
Geothermal energy can either be used directly to heat homes, as is common in Iceland, or to generate electricity. Iceland is a global leader in renewable energy, relying almost entirely on its abundant geothermal and hydroelectric resources derived from volcanic activity and glaciers.[144]At smaller scales, geothermal power can be harnessed with geothermal heat pumps, which can extract heat from ground temperatures of under 30 °C (86 °F), allowing them to be used at relatively shallow depths of a few meters.[141]Electricity generation requires large plants and ground temperatures of at least 150 °C (302 °F). In some countries, electricity produced from geothermal energy accounts for a large portion of the total, such as Kenya (43%) and Indonesia (5%).[145]
Technical advances may eventually make geothermal power more widely available. For example, enhanced geothermal systems involve drilling around 10 kilometres (6.2 mi) into the Earth, breaking apart hot rocks and extracting the heat using water. In theory, this type of geothermal energy extraction could be done anywhere on Earth.[141]
There are also other renewable energy technologies that are still under development, including enhanced geothermal systems, concentrated solar power, cellulosic ethanol, and marine energy.[146][147]These technologies are not yet widely demonstrated or have limited commercialization. Some may have potential comparable to other renewable energy technologies, but still depend on further breakthroughs in research, development and engineering.[147]
Enhanced geothermal systems (EGS) are a new type of geothermal power that does not require natural hot water reservoirs or steam to generate power. Most of the underground heat within drilling reach is trapped in solid rocks, not in water.[148]EGS technologies use hydraulic fracturing to break apart these rocks and release the heat they contain, which is then harvested by pumping water into the ground. The process is sometimes known as "hot dry rock" (HDR).[149]Unlike conventional geothermal energy extraction, EGS may be feasible anywhere in the world, depending on the cost of drilling.[150]EGS projects have so far been limited primarily to demonstration plants, as the technology is capital-intensive due to the high cost of drilling.[151]
Marine energy (also sometimes referred to as ocean energy) is the energy carried by ocean waves, tides, salinity, and ocean temperature differences. Technologies to harness the energy of moving water include wave power, marine current power, and tidal power. Reverse electrodialysis (RED) is a technology for generating electricity by mixing fresh water and salty sea water in large power cells.[152]Most marine energy harvesting technologies are still at low technology readiness levels and not used at large scales. Tidal energy is generally considered the most mature, but has not seen wide deployment.[153]The world's largest tidal power station is on Sihwa Lake, South Korea,[154]and produces around 550 gigawatt-hours of electricity per year.[155]
Earth emits roughly 10¹⁷ W of infrared thermal radiation flowing toward cold outer space. Solar energy hits the surface and atmosphere of the Earth and produces heat. Using various theorized devices such as the emissive energy harvester (EEH) or thermoradiative diode, this energy flow could be converted into electricity. In theory, this technology could even be used at night.[156][157]
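The 10¹⁷ W figure can be sanity-checked from Earth's energy balance: in equilibrium, the planet re-radiates as infrared the sunlight absorbed on its cross-sectional disc, P = (1 - a)·S·πR². A sketch using round textbook values:

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2 at the top of the atmosphere
ALBEDO = 0.3             # fraction of sunlight reflected straight back to space
EARTH_RADIUS_M = 6.371e6

# Power absorbed (and hence re-emitted as infrared) by the whole planet:
absorbed_w = (1 - ALBEDO) * SOLAR_CONSTANT * math.pi * EARTH_RADIUS_M ** 2
print(f"{absorbed_w:.1e}")  # ~1.2e+17 W
```

Even capturing a tiny fraction of this outgoing flow would dwarf current human energy consumption, which is what motivates the research.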
Producing liquid fuels from oil-rich (fat-rich) varieties of algae is an ongoing research topic. Various microalgae grown in open or closed systems are being tried, including some systems that can be set up on brownfield and desert land.[158]
There have been numerous proposals for space-based solar power, in which very large satellites with photovoltaic panels would be equipped with microwave transmitters to beam power back to terrestrial receivers. A 2024 study by the NASA Office of Science and Technology Policy examined the concept and concluded that with current and near-future technologies it would be economically uncompetitive.[159]
Collection of static electricity charges from water droplets on metal surfaces is an experimental technology that would be especially useful in low-income countries with relative air humidity over 60%.[160]
Breeder reactors could, in principle, depending on the fuel cycle employed, extract almost all of the energy contained in uranium or thorium, decreasing fuel requirements by a factor of 100 compared to widely used once-through light water reactors, which extract less than 1% of the energy in the actinide metal (uranium or thorium) mined from the earth.[161]The high fuel efficiency of breeder reactors could greatly reduce concerns about fuel supply, energy used in mining, and storage of radioactive waste. With seawater uranium extraction (currently too expensive to be economical), there is enough fuel for breeder reactors to satisfy the world's energy needs for 5 billion years at 1983's total energy consumption rate, thus making nuclear energy effectively a renewable energy.[162][163]In addition to seawater, average crustal granite contains significant quantities of uranium and thorium with which breeder reactors could supply abundant energy for the remaining lifespan of the Sun on the main sequence of stellar evolution.[164]
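The factor-of-100 claim is essentially a ratio of burnup fractions: once-through reactors fission under 1% of the mined actinide metal, while a full-recycle breeder cycle could in principle fission most of it. A toy calculation (the 0.6% and 60% burnup fractions and the 950 MWd/kg figure are illustrative assumptions, not sourced values):

```python
HEAT_PER_KG_FISSIONED_MWD = 950  # roughly 950 megawatt-days of heat per kg fully fissioned

def fuel_needed_tonnes(demand_mwd: float, burnup_fraction: float) -> float:
    """Mined actinide metal needed to meet a thermal energy demand, in tonnes."""
    return demand_mwd / (HEAT_PER_KG_FISSIONED_MWD * burnup_fraction) / 1000.0

demand = 1e6  # an arbitrary illustrative demand, in megawatt-days of heat
once_through = fuel_needed_tonnes(demand, 0.006)  # <1% of the metal fissioned
breeder = fuel_needed_tonnes(demand, 0.6)         # assumed ~60% with full recycle
print(round(once_through / breeder))  # 100
```

The ratio of the two burnup fractions, not the absolute energy figures, is what drives the hundredfold reduction in fuel requirements.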
Artificial photosynthesis uses techniques including nanotechnology to store solar electromagnetic energy in chemical bonds, by splitting water to produce hydrogen and then using carbon dioxide to make methanol.[165]Researchers in this field strive to design molecular mimics of photosynthesis that use a wider region of the solar spectrum, employ catalytic systems made from abundant, inexpensive materials that are robust, readily repaired, non-toxic and stable in a variety of environmental conditions, and perform more efficiently, allowing a greater proportion of photon energy to end up in the storage compounds, i.e., carbohydrates (rather than building and sustaining living cells).[166]However, the field faces hurdles: Sun Catalytix, an MIT spin-off, stopped scaling up its prototype fuel cell in 2012 because it offered few savings over other ways to make hydrogen from sunlight.[167]
Recent research emphasizes that while artificial photosynthesis shows promise in splitting water to generate hydrogen, its broader significance lies in the ability to produce dense, carbon-based solar fuels suitable for transport applications, such as aviation and long-haul shipping. These fuels, if derived from carbon dioxide and water using sunlight, could close the carbon loop and reduce reliance on fossil-based hydrocarbons. However, realizing this potential requires overcoming major technical hurdles, including the development of efficient, durable catalysts for water oxidation and CO₂ reduction, and careful attention to land use and public perception.[168]
Most new renewable capacity is solar, followed by wind, hydro, and bioenergy.[169]Investment in renewables, especially solar, tends to be more effective at creating jobs than coal, gas or oil.[170][171]Worldwide, renewables employed about 12 million people as of 2020, with solar PV the largest employer at almost 4 million.[172]However, as of February 2024, the supply of trained workers for solar energy lags far behind demand, as universities worldwide still train more people for the fossil fuel industries than for renewable energy.[173]
In 2021, China accounted for almost half of the global increase in renewable electricity.[174]
There are 3,146 gigawatts of renewable capacity installed in 135 countries, while 156 countries have laws regulating the renewable energy sector.[7][175]
Globally, in 2020 there were over 10 million jobs associated with the renewable energy industries, with solar photovoltaics being the largest renewable employer.[176]The clean energy sectors added about 4.7 million jobs globally between 2019 and 2022, totaling 35 million jobs by 2022.[177]: 5
Some studies say that a global transition to 100% renewable energy across all sectors – power, heat, transport and industry – is feasible and economically viable.[178][179][180]
One of the efforts to decarbonize transportation is the increased use of electric vehicles (EVs).[181]Despite that and the use of biofuels, such as biojet, less than 4% of transport energy comes from renewables.[182]Occasionally hydrogen fuel cells are used for heavy transport.[183]Meanwhile, in the future electrofuels may also play a greater role in decarbonizing hard-to-abate sectors like aviation and maritime shipping.[184]
Solar water heating makes an important contribution to renewable heat in many countries, most notably in China, which now has 70% of the global total (180 GWth). Most of these systems are installed on multi-family apartment buildings[185]and meet a portion of the hot water needs of an estimated 50–60 million households in China. Worldwide, total installed solar water heating systems meet a portion of the water heating needs of over 70 million households.
Heat pumps provide both heating and cooling, and also flatten the electric demand curve; they are thus an increasing priority.[186]Renewable thermal energy is also growing rapidly.[187]About 10% of heating and cooling energy comes from renewables.[188]
The International Renewable Energy Agency (IRENA) stated that ~86% (187 GW) of renewable capacity added in 2022 had lower costs than electricity generated from fossil fuels.[189]IRENA also stated that capacity added since 2000 reduced electricity bills in 2022 by at least $520 billion, and that in non-OECD countries, the lifetime savings of 2022 capacity additions will reduce costs by up to $580 billion.[189]
A recent review of the literature concluded that as greenhouse gas (GHG) emitters begin to be held liable for climate change damages resulting from their emissions, a high value for liability mitigation would provide powerful incentives for deployment of renewable energy technologies.[206]
In the decade of 2010–2019, worldwide investment in renewable energy capacity excluding large hydropower amounted to US$2.7 trillion, with China contributing US$818 billion, the United States US$392.3 billion, Japan US$210.9 billion, Germany US$183.4 billion, and the United Kingdom US$126.5 billion.[207]This was an increase of over three and possibly four times the equivalent amount invested in the decade of 2000–2009 (no data is available for 2000–2003).[207]
As of 2022, an estimated 28% of the world's electricity was generated by renewables. This is up from 19% in 1990.[208]
A December 2022 report by the IEA forecast that over 2022–2027, renewables would grow by almost 2,400 GW in its main scenario, equal to the entire installed power capacity of China in 2021. This is an 85% acceleration from the previous five years, and almost 30% higher than the IEA's 2021 forecast, its largest ever upward revision. Renewables are set to account for over 90% of global electricity capacity expansion over the forecast period.[82]To achieve net zero emissions by 2050, the IEA believes that 90% of global electricity generation will need to come from renewable sources.[17]
In June 2022, IEA Executive Director Fatih Birol said that countries should invest more in renewables to "ease the pressure on consumers from high fossil fuel prices, make our energy systems more secure, and get the world on track to reach our climate goals."[210]
China's five-year plan to 2025 includes increasing direct heating by renewables such as geothermal and solar thermal.[211]
REPowerEU, the EU plan to escape dependence on Russian fossil gas, is expected to call for much more green hydrogen.[212]
After a transitional period,[213]renewable energy production is expected to make up most of the world's energy production. In 2018, the risk management firm DNV GL forecast that the world's primary energy mix would be split equally between fossil and non-fossil sources by 2050.[214]
Middle Eastern nations are also planning to reduce their reliance on fossil fuels. Planned green projects would supply 26% of the region's energy by 2050, achieving emission reductions of 1.1 Gt CO2 per year.[215]
In July 2014, WWF and the World Resources Institute convened a discussion among a number of major US companies that had declared their intention to increase their use of renewable energy. These discussions identified a number of "principles" that companies seeking greater access to renewable energy considered important market deliverables. These principles included choice (between suppliers and between products), cost competitiveness, longer-term fixed-price supplies, access to third-party financing vehicles, and collaboration.[216]
UK statistics released in September 2020 noted that "the proportion of demand met from renewables varies from a low of 3.4 per cent (for transport, mainly from biofuels) to highs of over 20 per cent for 'other final users', which is largely the service and commercial sectors that consume relatively large quantities of electricity, and industry".[217]
In some locations, individual households can opt to purchase renewable energy through a consumer green energy program.
Renewable energy in developing countries is an increasingly used alternative to fossil fuel energy, as these countries scale up their energy supplies and address energy poverty. Renewable energy technology was once seen as unaffordable for developing countries.[218]However, since 2015, investment in non-hydro renewable energy has been higher in developing countries than in developed countries, and comprised 54% of global renewable energy investment in 2019.[219]The International Energy Agency forecasts that renewable energy will provide the majority of energy supply growth through 2030 in Africa and Central and South America, and 42% of supply growth in China.[220]
In Kenya, the Olkaria V Geothermal Power Station is one of the largest in the world.[222]The Grand Ethiopia Renaissance Dam project incorporates wind turbines.[223]Once completed, Morocco's Ouarzazate Solar Power Station is projected to provide power to over a million people.[224]
Policies to support renewable energy have been vital in its expansion. Where Europe dominated in establishing energy policy in the early 2000s, most countries around the world now have some form of energy policy.[227]
The International Renewable Energy Agency (IRENA) is an intergovernmental organization promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and to facilitate capacity building and technology transfer. IRENA was formed in 2009, with 75 countries signing its charter.[228]As of April 2019, IRENA has 160 member states.[229]Then United Nations Secretary-General Ban Ki-moon said that renewable energy can lift the poorest nations to new levels of prosperity,[230]and in September 2011 he launched the UN Sustainable Energy for All initiative to improve energy access, efficiency and the deployment of renewable energy.[231]
The 2015 Paris Agreement on climate change motivated many countries to develop or improve renewable energy policies.[232]In 2017, a total of 121 countries adopted some form of renewable energy policy.[227]National targets that year existed in 176 countries.[232]In addition, there is also a wide range of policies at the state/provincial and local levels.[125]Some public utilities help plan or install residential energy upgrades.
Many national, state and local governments have created green banks. A green bank is a quasi-public financial institution that uses public capital to leverage private investment in clean energy technologies.[233]Green banks use a variety of financial tools to bridge market gaps that hinder the deployment of clean energy.
Global and national policies related to renewable energy can be divided by sector, such as agriculture, transport, buildings, and industry.
Climate neutrality (net zero emissions) by the year 2050 is the main goal of the European Green Deal.[234]To reach this target of climate neutrality, the European Union aims to decarbonise its energy system and achieve "net-zero greenhouse gas emissions by 2050."[235]
The International Renewable Energy Agency's (IRENA) 2023 report on renewable energy finance highlights steady investment growth since 2018: USD 348 billion in 2020 (a 5.6% increase from 2019), USD 430 billion in 2021 (24% up from 2020), and USD 499 billion in 2022 (16% higher). This trend is driven by increasing recognition of renewable energy's role in mitigating climate change and enhancing energy security, along with investor interest in alternatives to fossil fuels. Policies such as feed-in tariffs in China and Vietnam have significantly increased renewable adoption. Furthermore, from 2013 to 2022, installation costs for solar photovoltaic (PV), onshore wind, and offshore wind fell by 69%, 33%, and 45%, respectively, making renewables more cost-effective.[236][62]
Between 2013 and 2022, the renewable energy sector underwent a significant realignment of investment priorities. Investment in solar and wind energy technologies markedly increased. In contrast, other renewable technologies such as hydropower (including pumped storage hydropower), biomass, biofuels, geothermal, and marine energy experienced a substantial decrease in financial investment. Notably, from 2017 to 2022, investment in these alternative renewable technologies declined by 45%, falling from USD 35 billion to USD 17 billion.[62]
In 2023, the renewable energy sector experienced a significant surge in investments, particularly in solar and wind technologies, totaling approximately USD 200 billion—a 75% increase from the previous year. The increased investments in 2023 contributed between 1% and 4% to the GDP in key regions including the United States, China, the European Union, and India.[237]
The energy sector receives investments of approximately USD 3 trillion each year, with USD 1.9 trillion directed towards clean energy technologies and infrastructure. To meet the targets set in the Net Zero Emissions (NZE) Scenario by 2035, this investment must increase to USD 5.3 trillion per year.[238]: 15
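The scale of the investment gap implied by these NZE figures follows directly from the two numbers cited above:

```python
# Scale of the clean energy investment gap implied by the NZE Scenario figures.
current = 1.9   # USD trillion per year currently invested in clean energy
required = 5.3  # USD trillion per year needed by 2035 under the NZE Scenario

print(round(required / current, 1))   # 2.8 -> annual spending must nearly triple
print(round(required - current, 1))   # 3.4 -> additional USD trillions needed each year
```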
Whether nuclear power should be considered a form of renewable energy is an ongoing subject of debate. Statutory definitions of renewable energy usually exclude many present nuclear energy technologies, with the notable exception of the state of Utah.[239] Dictionary-sourced definitions of renewable energy technologies often omit or explicitly exclude mention of nuclear energy sources, with an exception made for the natural nuclear decay heat generated within the Earth.[240][241]
Uranium-235, the most common fuel used in conventional nuclear fission power stations, is classified as "non-renewable" by the Energy Information Administration; the organization is, however, silent on recycled MOX fuel.[241] The National Renewable Energy Laboratory does not mention nuclear power in its "energy basics" definition.[242]
The geopolitical impact of the growing use of renewable energy is a subject of ongoing debate and research.[245] Many fossil-fuel-producing countries, such as Qatar, Russia, Saudi Arabia and Norway, are currently able to exert diplomatic or geopolitical influence as a result of their oil wealth. Most of these countries are expected to be among the geopolitical "losers" of the energy transition, although some, like Norway, are also significant producers and exporters of renewable energy. Fossil fuels and the infrastructure to extract them may, in the long term, become stranded assets.[246] It has been speculated that countries dependent on fossil fuel revenue may one day find it in their interests to quickly sell off their remaining fossil fuels.[247]
Conversely, nations abundant in renewable resources, and in the minerals required for renewables technology, are expected to gain influence.[248][249] In particular, China has become the world's dominant manufacturer of the technology needed to produce or store renewable energy, especially solar panels, wind turbines, and lithium-ion batteries.[250] Nations rich in solar and wind energy could become major energy exporters.[251] Some may produce and export green hydrogen,[252][251] although electricity is projected to be the dominant energy carrier in 2050, accounting for almost 50% of total energy consumption (up from 22% in 2015).[253] Countries with large uninhabited areas, such as Australia, China, and many African and Middle Eastern countries, have the potential for huge installations of renewable energy. The production of renewable energy technologies requires rare-earth elements with new supply chains.[254]
Countries with already weak governments that rely on fossil fuel revenue may face heightened political instability or popular unrest. Analysts consider Nigeria, Angola, Chad, Gabon, and Sudan, all countries with a history of military coups, to be at risk of instability due to dwindling oil income.[255]
A study found that the transition from fossil fuels to renewable energy systems reduces risks from mining, trade and political dependence, because renewable energy systems do not need fuel: they depend on trade only for the acquisition of materials and components during construction.[256]
In October 2021, European Commissioner for Climate Action Frans Timmermans suggested "the best answer" to the 2021 global energy crisis is "to reduce our reliance on fossil fuels."[257] He said those blaming the European Green Deal were doing so "for perhaps ideological reasons or sometimes economic reasons in protecting their vested interests."[257] Some critics blamed the European Union Emissions Trading System (EU ETS) and the closure of nuclear plants for contributing to the energy crisis.[258][259][260] European Commission President Ursula von der Leyen said that Europe is "too reliant" on natural gas and too dependent on natural gas imports. According to von der Leyen, "The answer has to do with diversifying our suppliers ... and, crucially, with speeding up the transition to clean energy."[261]
The transition to renewable energy requires increased extraction of certain metals and minerals. Like all mining, this impacts the environment[262] and can lead to environmental conflict.[263] For example, lithium mining uses around 65% of the water in the Salar de Atacama, forcing farmers and llama herders to abandon their ancestral settlements and causing environmental degradation.[264] In several African countries, the green energy transition has created a mining boom, causing deforestation and threatening already endangered species.[265] Wind power requires large amounts of copper and zinc, as well as smaller amounts of the rarer metal neodymium. Solar power is less resource-intensive, but still requires significant amounts of aluminum. The expansion of electrical grids requires both copper and aluminum. Batteries, which are critical for storing renewable energy, use large quantities of copper, nickel, aluminum and graphite. Demand for lithium is expected to grow 42-fold from 2020 to 2040. Demand for nickel, cobalt and graphite is expected to grow by a factor of about 20–25.[266] For each of the most relevant minerals and metals, mining is dominated by a single country: copper in Chile, nickel in Indonesia, rare earths in China, cobalt in the Democratic Republic of the Congo (DRC), and lithium in Australia. China dominates the processing of all of these.[266]
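The demand multipliers quoted above imply sustained double-digit annual growth. A short sketch converting each 2020-to-2040 multiplier into its implied compound annual growth rate:

```python
# Annualized growth rates implied by the 2020 -> 2040 demand multipliers above.
def implied_cagr(multiplier: float, years: int = 20) -> float:
    """Compound annual growth rate (percent) implied by total growth over `years`."""
    return (multiplier ** (1 / years) - 1) * 100

print(round(implied_cagr(42), 1))  # lithium: 42x over 20 years -> ~20.5% per year
print(round(implied_cagr(20), 1))  # nickel/cobalt/graphite, low end:  ~16.2% per year
print(round(implied_cagr(25), 1))  # nickel/cobalt/graphite, high end: ~17.5% per year
```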
Recycling these metals after the devices in which they are embedded are spent is essential to creating a circular economy and ensuring that renewable energy is sustainable. By 2040, recycled copper, lithium, cobalt, and nickel from spent batteries could reduce combined primary supply requirements for these minerals by around 10%.[266]
A controversial approach is deep sea mining. Minerals can be collected from new sources like polymetallic nodules lying on the seabed.[267] This would damage local biodiversity,[268] but proponents point out that biomass on resource-rich seabeds is much scarcer than in the mining regions on land, which are often found in vulnerable habitats like rainforests.[269]
Due to the co-occurrence of rare-earth and radioactive elements (thorium, uranium and radium), rare-earth mining results in the production of low-level radioactive waste.[270]
Installations used to produce wind, solar and hydropower are an increasing threat to key conservation areas, with facilities built in areas set aside for nature conservation and other environmentally sensitive areas. They are often much larger than fossil fuel power plants, needing areas of land up to 10 times greater than coal or gas to produce equivalent energy amounts.[271] More than 2,000 renewable energy facilities have been built, and more are under construction, in areas of environmental importance, threatening the habitats of plant and animal species across the globe. The authors of the study behind these figures emphasized that their work should not be interpreted as anti-renewable, because renewable energy is crucial for reducing carbon emissions; the key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity.[272]
In 2020 scientists published a world map of areas that contain renewable energy materials, along with estimates of their overlaps with "Key Biodiversity Areas", "Remaining Wilderness" and "Protected Areas". The authors assessed that careful strategic planning is needed.[273][274][275]
Climate change is making weather patterns less predictable, which can seriously hamper the use of renewable energy. For example, in 2023 hydropower production in Sudan and Namibia dropped by more than half due to a drastic reduction in rainfall, while in China, India and some regions of Africa unusual weather phenomena reduced the amount of wind energy produced. Heatwaves and cloud cover reduce the effectiveness of solar panels, and melting glaciers are creating problems for hydropower. Nuclear energy is also affected, as droughts create water shortages that can leave nuclear power plants without enough water for cooling.[276]
Solar power plants may compete with arable land,[280][281] while on-shore wind farms often face opposition due to aesthetic concerns and noise.[282][283] Such opponents are often described as NIMBYs ("not in my back yard").[284] Some environmentalists are concerned about fatal collisions of birds and bats with wind turbines.[285] Although protests against new wind farms occasionally occur around the world, regional and national surveys generally find broad support for both solar and wind power.[286][287][288]
Community-owned wind energy is sometimes proposed as a way to increase local support for wind farms.[289] A 2011 UK Government document stated that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake."[290] In the 2000s and early 2010s, many renewable projects in Germany, Sweden and Denmark were owned by local communities, particularly through cooperative structures.[291][292] In the years since, more installations in Germany have been undertaken by large companies,[289] but community ownership remains strong in Denmark.[293]
Prior to the development of coal in the mid-19th century, nearly all energy used was renewable. The oldest known use of renewable energy, in the form of traditional biomass to fuel fires, dates from more than a million years ago. The use of biomass for fire did not become commonplace until many hundreds of thousands of years later.[294] Probably the second-oldest use of renewable energy is harnessing the wind to drive ships over water; this practice can be traced back some 7,000 years, to ships in the Persian Gulf and on the Nile.[295] From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times.[296] Moving into the time of recorded history, the primary sources of traditional renewable energy were human labor, animal power, water power, wind (in grain-crushing windmills), and firewood, a traditional biomass.
In 1885, Werner Siemens, commenting on the discovery of the photovoltaic effect in the solid state, wrote:
In conclusion, I would say that however great the scientific importance of this discovery may be, its practical value will be no less obvious when we reflect that the supply of solar energy is both without limit and without cost, and that it will continue to pour down upon us for countless ages after all the coal deposits of the earth have been exhausted and forgotten.[297]
Max Weber mentioned the end of fossil fuel in the concluding paragraphs of his Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism), published in 1905.[298] Development of solar engines continued until the outbreak of World War I. The importance of solar energy was recognized in a 1911 Scientific American article: "in the far distant future, natural fuels having been exhausted [solar power] will remain as the only means of existence of the human race".[299]
The theory of peak oil was published in 1956.[300] In the 1970s environmentalists promoted the development of renewable energy both as a replacement for the eventual depletion of oil and as an escape from dependence on oil, and the first electricity-generating wind turbines appeared. Solar energy had long been used for heating and cooling, but solar panels were too costly to build solar farms until 1980.[301]
New government spending, regulation and policies helped the renewables industry weather the 2008 financial crisis and the Great Recession better than many other sectors.[302] In 2022, renewables accounted for 30% of global electricity generation, up from 21% in 1985.[303]
Source: https://en.wikipedia.org/wiki/Renewable_energy
A renewable resource (also known as a flow resource[note 1][1]) is a natural resource which will replenish to replace the portion depleted by usage and consumption, either through natural reproduction or other recurring processes, in a finite amount of time on a human time scale. When the recovery rate of a resource is unlikely to ever exceed a human time scale, it is called a perpetual resource.[1] Renewable resources are a part of Earth's natural environment and the largest components of its ecosphere. A positive life-cycle assessment is a key indicator of a resource's sustainability.
Definitions of renewable resources may also include agricultural production, as in agricultural products, and to an extent water resources.[2] In 1962, Paul Alfred Weiss defined renewable resources as: "The total range of living organisms providing man with life, fibres, etc...".[3] Another type of renewable resource is renewable energy resources. Common sources of renewable energy include solar, geothermal and wind power, which are all categorized as renewable resources. Fresh water is an example of a renewable resource.
Water can be considered a renewable material when carefully controlled usage, temperature, treatment, and release are followed; if not, it becomes a non-renewable resource at that location. For example, groundwater is usually removed from an aquifer at a rate much greater than its very slow natural recharge, and so is considered a non-renewable resource. Removal of water from the pore spaces in aquifers may cause permanent compaction (subsidence) that cannot be renewed. 97.5% of the water on Earth is salt water, and 2.5% is fresh water; slightly over two thirds of the fresh water is frozen in glaciers and polar ice caps.[4] The remaining unfrozen freshwater is found mainly as groundwater, with only a small fraction (0.008%) present above ground or in the air.[5]
Water pollution is one of the main concerns regarding water resources. An estimated 22% of worldwide water use is industrial.[6] Major industrial users include hydroelectric dams, thermoelectric power plants (which use water for cooling), ore and oil refineries (which use water in chemical processes), and manufacturing plants (which use water as a solvent); water is also used for waste disposal.
Desalination of seawater is considered a renewable source of water, although reducing its dependence on fossil fuel energy is needed for it to be fully renewable.[7]
Food is any substance consumed to provide nutritional support for the body.[8]Most food has its origin in renewable resources. Food is obtained directly from plants and animals.
Hunting may no longer be the primary source of meat in the modernised world, but it remains an important and essential source for many rural and remote groups. It is also the sole source of food for wild carnivores.[9]
The phrase sustainable agriculture was coined by the Australian agricultural scientist Gordon McClymont.[10] It has been defined as "an integrated system of plant and animal production practices having a site-specific application that will last over the long term".[11] Expansion of agricultural land reduces biodiversity and contributes to deforestation. The Food and Agriculture Organization of the United Nations estimates that in coming decades, cropland will continue to be lost to industrial and urban development, along with reclamation of wetlands and conversion of forest to cultivation, resulting in the loss of biodiversity and increased soil erosion.[12]
Although air and sunlight are available everywhere on Earth, crops also depend on soil nutrients and the availability of water. Monoculture is a method of growing only one crop at a time in a given field, which can damage land and cause it to become either unusable or to suffer from reduced yields. Monoculture can also cause the build-up of pathogens and pests that target one specific species. The Great Irish Famine (1845–1849) is a well-known example of the dangers of monoculture.
Crop rotation and long-term crop rotations replenish nitrogen through the use of green manure in sequence with cereals and other crops, and can improve soil structure and fertility by alternating deep-rooted and shallow-rooted plants. Other methods to combat lost soil nutrients are returning to natural cycles that annually flood cultivated lands (returning lost nutrients indefinitely), such as the flooding of the Nile; the long-term use of biochar; and the use of crop and livestock landraces that are adapted to less-than-ideal conditions such as pests, drought, or lack of nutrients.
Agricultural practices are one of the greatest contributors to the global increase in soil erosion rates.[13] It is estimated that "more than a thousand million tonnes of southern Africa's soil are eroded every year. Experts predict that crop yields will be halved within thirty to fifty years if erosion continues at present rates."[14] The Dust Bowl phenomenon of the 1930s was caused by severe drought combined with farming methods that did not include crop rotation, fallow fields, cover crops, soil terracing and wind-breaking trees to prevent wind erosion.[15]
The tillage of agricultural lands is one of the primary contributing factors to erosion, due to mechanised agricultural equipment that allows for deep plowing, which severely increases the amount of soil that is available for transport by water erosion.[16][17] The phenomenon called peak soil describes how large-scale factory farming techniques are affecting humanity's ability to grow food in the future.[18] Without efforts to improve soil management practices, the availability of arable soil may become increasingly problematic.[19][unreliable source?]
Methods to combat erosion include no-till farming, using a keyline design, growing wind breaks to hold the soil, and widespread use of compost. Fertilizers and pesticides can also have an effect on soil erosion,[20] which can contribute to soil salinity and prevent other species from growing. Phosphate is a primary component of the chemical fertiliser most commonly applied in modern agricultural production. However, scientists estimate that rock phosphate reserves will be depleted in 50–100 years and that peak phosphate will occur in about 2030.[21]
Industrial processing and logistics also affect agriculture's sustainability. The ways and locations in which crops are sold require energy for transportation, as well as the energy cost of materials, labour, and transport. Food sold at a local location, such as a farmers' market, has reduced energy overheads.
Air is a renewable resource. All living organisms need oxygen, nitrogen (directly or indirectly), carbon (directly or indirectly) and many other gases in small quantities for their survival.
An important renewable resource is wood, provided by means of forestry, which has been used for construction, housing and firewood since ancient times.[22][23][24] Plants provide the main sources of renewable resources; the main distinction is between energy crops and non-food crops. A large variety of lubricants, industrially used vegetable oils, textiles and fibres made e.g. of cotton, copra or hemp, paper derived from wood, rags or grasses, and bioplastics are based on plant renewable resources. A large variety of chemical-based products like latex, ethanol, resin, sugar and starch can be provided from plant renewables. Animal-based renewables include fur, leather, technical fat and lubricants and further derived products, e.g. animal glue, tendons, casings, or in historical times ambra and baleen provided by whaling.
With regard to pharmacy ingredients and legal and illegal drugs, plants are important sources; however, the venom of snakes, frogs and insects has also been a valuable renewable source of pharmacological ingredients. Before GMO production set in, insulin and important hormones were based on animal sources. Feathers, an important byproduct of poultry farming for food, are still being used as filler and as a base for keratin in general. The same applies to the chitin produced in the farming of crustaceans, which may be used as a base for chitosan. The most important part of the human body used for non-medical purposes is human hair, as for artificial hair integrations, which is traded worldwide.
Historically, renewable resources like firewood, latex, guano, charcoal, wood ash, plant colors such as indigo, and whale products were crucial for human needs but failed to meet demand at the beginning of the industrial era.[25] Early modern times faced large problems with the overuse of renewable resources, as in deforestation, overgrazing or overfishing.[25]
In addition to fresh meat and milk, which as food items are not the topic of this section, livestock farmers and artisans used further animal ingredients such as tendons, horn, bones and bladders. Complex technical constructions such as the composite bow were based on a combination of animal- and plant-based materials. The current distribution conflict between biofuel and food production is described as food vs. fuel. Conflicts between food needs and other usage, such as those imposed by fief obligations, were common in historical times as well.[26] However, a significant percentage of (middle European) farmers' yields went into livestock, which also provides organic fertiliser.[27] Oxen and horses were important for transportation purposes and drove engines, e.g. in treadmills.
Other regions solved the transportation problem with terracing, and urban and garden agriculture.[25] Further conflicts, such as between forestry and herding, or between (sheep) herders and cattle farmers, led to various solutions. Some confined wool production and sheep to large state and nobility domains, or outsourced them to professional shepherds with larger wandering herds.[28]
The British Agricultural Revolution was mainly based on a new system of crop rotation, the four-field rotation. The British agriculturist Charles Townshend recognised the invention in Dutch Waasland and popularised it in the 18th-century UK, as George Washington Carver later did in the USA. The system used wheat, turnips and barley, and introduced clover as well. Clover is able to fix nitrogen from air, a practically inexhaustible renewable resource, into fertilizing compounds in the soil, and allowed yields to increase greatly. Farmers opened up a fodder crop and a grazing crop. Thus livestock could be bred year-round and winter culling was avoided. The amount of manure rose and allowed more crops, while making it possible to refrain from wood pasture.[25]
Early modern times and the 19th century saw the previous resource base partially replaced or supplemented by large-scale chemical synthesis and by the use of fossil and mineral resources.[29] Besides the still-central role of wood, there is a sort of renaissance of renewable products based on modern agriculture, genetic research and extraction technology. Besides fears about an upcoming global shortage of fossil fuels, local shortages due to boycotts, war and blockades, or simply transportation problems in remote regions, have contributed to different methods of replacing or substituting fossil resources with renewables.
The use of certain basically renewable products, as in TCM, endangers various species. The black market in rhinoceros horn alone reduced the world's rhino population by more than 90 percent over the past 40 years.[30][31]
The success of the German chemical industry until World War I was based on the replacement of colonial products. The predecessors of IG Farben dominated the world market for synthetic dyes at the beginning of the 20th century[32][33] and had an important role in artificial pharmaceuticals, photographic film, agricultural chemicals and electrochemicals.[29]
However, the former plant-breeding research institutes took a different approach. After the loss of the German colonial empire, important players in the field such as Erwin Baur and Konrad Meyer switched to using local crops as the base for economic autarky.[34][35] Meyer, as a key agricultural scientist and spatial planner of the Nazi era, managed and led Deutsche Forschungsgemeinschaft resources and focused about a third of the complete research grants in Nazi Germany on agricultural and genetic research, especially on resources needed in case of a further German war effort.[34] A wide array of agrarian research institutes that still exist today and have importance in the field were founded or enlarged at that time.
There were some major failures, such as attempts to grow frost-resistant olive species, but also some successes, as in the case of hemp, flax and rapeseed, which are still of current importance.[34] During World War II, German scientists tried to use Russian Taraxacum (dandelion) species to manufacture natural rubber.[34] Rubber dandelions are still of interest: scientists at the Fraunhofer Institute for Molecular Biology and Applied Ecology (IME) announced in 2013 that they had developed a cultivar suitable for commercial production of natural rubber.[36]
Several legal and economic means have been used to enhance the market share of renewables.
The UK uses Non-Fossil Fuel Obligations (NFFO), a collection of orders requiring the electricity distribution network operators in England and Wales to purchase electricity from the nuclear power and renewable energy sectors. Similar mechanisms operate in Scotland (the Scottish Renewable Orders under the Scottish Renewables Obligation) and Northern Ireland (the Northern Ireland Non-Fossil Fuel Obligation). In the US, Renewable Energy Certificates (RECs) use a similar approach. The German Energiewende uses feed-in tariffs. An unexpected outcome of the subsidies was the rapid increase of pellet co-firing in conventional fossil fuel plants (compare Tilbury power stations) and cement works, making wood and other biomass account for about half of Europe's renewable-energy consumption.[24]
Biorenewable chemicals are chemicals created by biological organisms that provide feedstocks for the chemical industry.[37] Biorenewable chemicals can provide solar-energy-powered substitutes for the petroleum-based carbon feedstocks that currently supply the chemical industry. The tremendous diversity of enzymes in biological organisms, and the potential for synthetic biology to alter these enzymes to create yet new chemical functionalities, can drive the chemical industry. A major platform for the creation of new chemicals is the polyketide biosynthetic pathway, which generates chemicals containing repeated alkyl chain units with potential for a wide variety of functional groups at the different carbon atoms.[37][38][39] Polyurethane research is ongoing that specifically uses renewable resources.[40]
Bioplastics are a form of plastics derived from renewable biomass sources, such as vegetable fats and oils, lignin, corn starch, pea starch[41] or microbiota.[42] The most common form of bioplastic is thermoplastic starch. Other forms include cellulose bioplastics, biopolyester, polylactic acid, and bio-derived polyethylene.
The production and use of bioplastics is generally regarded as a more sustainable activity when compared to plastic production from petroleum (petroplastic); however, the manufacturing of bioplastic materials is often still reliant upon petroleum as an energy and materials source. Because of fragmentation in the market and ambiguous definitions, it is difficult to describe the total market size for bioplastics, but the global production capacity is estimated at 327,000 tonnes.[43] In contrast, global consumption of all flexible packaging is estimated at 12.3 million tonnes.[44]
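The two tonnage figures above can be put in proportion directly, which makes the scale gap concrete:

```python
# Rough scale comparison from the figures above: global bioplastic production
# capacity versus global flexible-packaging consumption.
bioplastic_capacity_t = 327_000    # tonnes of bioplastic production capacity
flexible_packaging_t = 12_300_000  # tonnes of flexible packaging consumed

share = bioplastic_capacity_t / flexible_packaging_t * 100
print(round(share, 1))  # 2.7 -> bioplastics could cover under 3% of flexible packaging
```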
Bioasphalt is an asphalt alternative made from non-petroleum-based renewable resources. Manufacturing sources of bioasphalt include sugar, molasses and rice, corn and potato starches, and vegetable-oil-based waste. Asphalt made with vegetable-oil-based binders was patented by Colas SA in France in 2004.[45][46]
Renewable energy refers to the provision of energy via renewable resources which are naturally replenished as fast as they are used. Examples are sunlight, wind, biomass, rain, tides, waves and geothermal heat.[47] Renewable energy may replace conventional fuels in four distinct markets, namely electricity generation, hot water/space heating, motor fuels, and rural (off-grid) energy services.[48] Manufacturing of renewable energy devices uses non-renewable resources such as mined metals and land surface.
Biomass refers to biological material from living, or recently living, organisms, most often plants or plant-derived materials.
Sustainable harvesting and use of renewable resources (i.e., maintaining a positive renewal rate) can reduce air pollution, soil contamination, habitat destruction and land degradation.[49] Biomass energy is derived from six distinct energy sources: garbage, wood, plants, waste, landfill gases, and alcohol fuels. Historically, humans have harnessed biomass-derived energy since the advent of burning wood to make fire, and wood remains the largest biomass energy source today.[50][51]
However, low-tech use of biomass, which still accounts for more than 10% of world energy needs, may induce indoor air pollution in developing nations[52] and resulted in between 1.5 million and 2 million deaths in 2000.[53]
The biomass used for electricity generation varies by region.[54] Forest by-products, such as wood residues, are common in the United States.[54] Agricultural waste is common in Mauritius (sugar cane residue) and Southeast Asia (rice husks).[54] Animal husbandry residues, such as poultry litter, are common in the UK.[54] The biomass power generating industry in the United States, which consists of approximately 11,000 MW of summer operating capacity actively supplying power to the grid, produces about 1.4 percent of the U.S. electricity supply.[55]
A biofuel is a type of fuel whose energy is derived from biological carbon fixation. Biofuels include fuels derived from biomass conversion, as well as solid biomass, liquid fuels and various biogases.[56]
Bioethanol is an alcohol made by fermentation, mostly from carbohydrates produced in sugar or starch crops such as corn, sugarcane or switchgrass.
Biodiesel is made from vegetable oils and animal fats. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe.
Biogas is methane produced by the anaerobic digestion of organic material by anaerobes,[57] and is also a renewable source of energy.
Biogas typically refers to a mixture of gases produced by the breakdown of organic matter in the absence of oxygen. Biogas is produced by anaerobic digestion with anaerobic bacteria or fermentation of biodegradable materials such as manure, sewage, municipal waste, green waste, plant material, and crops.[58] It is primarily methane (CH4) and carbon dioxide (CO2) and may have small amounts of hydrogen sulphide (H2S), moisture and siloxanes.
Natural fibres are a class of hair-like materials that are continuous filaments or discrete elongated pieces, similar to pieces of thread. They can be used as a component of composite materials. They can also be matted into sheets to make products such as paper or felt. Fibres are of two types: natural fibre, which consists of animal and plant fibres, and man-made fibre, which consists of synthetic fibres and regenerated fibres.
Renewable resources are endangered by non-regulated industrial developments and growth.[59]They must be carefully managed to avoid exceeding the natural world's capacity to replenish them.[60]A life cycle assessment provides a systematic means of evaluating renewability. This is a matter of sustainability in the natural environment.[61]
National Geographic has described ocean overfishing as "simply the taking of wildlife from the sea at rates too high for fished species to replace themselves."[62]
Demand for tuna meat is driving overfishing so severe that it endangers some species, such as the bluefin tuna. The European Community and other organisations are trying to regulate fisheries so as to protect species and prevent their extinction.[63] The United Nations Convention on the Law of the Sea treaty deals with aspects of overfishing in articles 61, 62, and 65.[64]
Examples of overfishing exist in areas such as the North Sea of Europe, the Grand Banks of North America and the East China Sea of Asia.[65]
The decline of penguin populations is caused in part by overfishing, driven by human competition for the same renewable resources.[66]
Besides their role as a resource for fuel and building material, trees protect the environment by absorbing carbon dioxide and creating oxygen.[67] The destruction of rain forests is one of the critical causes of climate change. Deforestation causes carbon dioxide to linger in the atmosphere. As carbon dioxide accrues, it traps heat radiated from the Earth's surface that would otherwise escape to space; this greenhouse effect drives global warming.[68]
Deforestation also affects the water cycle. It reduces the content of water in the soil and groundwater as well as atmospheric moisture.[69] Deforestation reduces soil cohesion, so that erosion, flooding and landslides ensue.[70][71]
Rain forests house many species and organisms providing people with food and other commodities. In this way biofuels may well be unsustainable if their production contributes to deforestation.[72]
Some renewable resources, species and organisms are facing a very high risk of extinction caused by growing human population and over-consumption. It has been estimated that over 40% of all living species on Earth are at risk of going extinct.[73] Many nations have laws to protect hunted species and to restrict the practice of hunting. Other conservation methods include restricting land development or creating preserves. The IUCN Red List of Threatened Species is the best-known worldwide conservation status listing and ranking system.[74] Internationally, 199 countries have signed an accord agreeing to create Biodiversity Action Plans to protect endangered and other threatened species.
|
https://en.wikipedia.org/wiki/Renewable_resource
|
Rural sociology is a field of sociology traditionally associated with the study of social structure and conflict in rural areas. It is an active academic field in much of the world, originating in the United States in the 1910s with close ties to the national Department of Agriculture and land-grant university colleges of agriculture.[1]
While the issue of natural resource access transcends traditional rural spatial boundaries, the sociology of food and agriculture is one focus of rural sociology, and much of the field is dedicated to the economics of farm production. Other areas of study include rural migration and other demographic patterns, environmental sociology, amenity-led development, public-lands policies, so-called "boomtown" development, social disruption, the sociology of natural resources (including forests, mining, fishing and other areas), rural cultures and identities, rural health care, and educational policies. Many rural sociologists work in the areas of development studies, community studies, community development, and environmental studies. Much of the research involves developing countries or the Third World.
Rural sociology was first developed by Americans in response to the large numbers of people living and working on farms.[2] Rural sociology was the first, and for a time the largest, branch of American sociology. Histories of the field were popular in the 1950s and 1960s.[3][4]
History of European Rural Sociology
Though Europe included more agricultural land than the United States at the turn of the twentieth century, European rural sociology did not develop as an academic field until after World War II.[5] This is partially explained by the highly philosophical nature of pre-war European sociology: the field's focus on broad-scale generalizations largely erased rural-urban difference. European sociology in the early 1900s was also almost entirely siloed within European academia, with little cross-Atlantic pollination. Practical applications and research methods employed by land-grant colleges,[6] the Country Life Commission,[7] and early American rural sociologists like W. E. B. Du Bois[8] were also well beyond the strictly academic sphere in which European sociologists resided.[9] The concerns of rural people, farmers, and agriculture were simply outside the attention of most European sociologists at that time.
Postwar, European academic institutions began to understand that "there was something useful in the activities of those queer people who called themselves rural sociologists."[10] Stronger relationships between American and European sociologists developed in the late 1940s, reflected in the Marshall Plan of 1948.[11] The Plan formalized the United States as a source of information and economic guidance for postwar Europe and allocated the equivalent of roughly $100 billion in 2023 dollars to help Europe rebuild, especially its food systems and the machinery needed to expand agricultural production.[12] With this aid came an infusion of empirical rural research designed to promote rural growth and agricultural success.
The United States' influence was reflected in pedagogical changes to include methods pioneered by American rural sociologists, particularly statistics. Education expanded to meet increased government demand for sociological expertise, driven by European reconstruction and a growing recognition of the importance of sociological understanding to policy making.[13]
While the mid-20th century saw rural sociological research in most European nations driven by government need, rural sociology as an academic discipline was rare in general universities.[14] This was due in part to the lack of university agricultural programs but also to a general resistance to applied sciences.[15] Where rural sociology classes did exist, an emerging divergence from the American model presented itself in Europeans' treatment of culture as an independent variable in rural sociological research.[16] E.W. Hofstee, by all accounts the grandfather of European rural sociology, observed why cultural difference was of particular importance in Europe:
"In Europe, not only between the different nations but also between an infinite number of regional and even local groups within every country, there are differences in culture, which influence the behaviour of those groups considerably.... it will take a long time before Europe will show the same basic culture everywhere, and I must say that, from a personal point of view, I hope that it will take a very long time."[17]
This departure from America's more homogenous treatment of rural culture[18] grounded the field in methods that require community-level planning before technical change or community development can occur.[19] These differences somewhat receded in the 1950s and 60s, when European rural sociology shifted away from sociocultural study and towards the facilitation of modern agricultural practices.[20] This shift was driven by government interest in policy change as well as the perception that "backward [European] farmers [are] backward not only socially and culturally, but also economically and technically."[21]
After relatively united beginnings, European rural sociology faced internal disagreements about pedagogy, focus, and direction in the 1970s.[22] Many felt the field had strayed too far from its sociocultural roots, become too empirical, and grown overly aligned with government.[16] Critics were particularly concerned by the field's seeming disregard for social interaction and culture, and encouraged a return to earlier modes of rural sociology that centered community structure. Ultimately, the field regained its balance between empiricism and sociocultural and institutional study in the 1980s.[20] The concerns of European rural sociologists have since expanded to include food systems, the rural-urban interface, urban poverty, and sustainable development.[16]
Outside formal academic programs, rural sociology organizations and journals were founded in the 1950s, including Sociologia Ruralis, which still publishes today, and the European Society for Rural Sociology (ESRS). Founded in 1957 by E.W. Hofstee, the ESRS welcomes international membership, including professional rural sociologists as well as those interested in their work, and holds regular congresses that promote cross-boundary collaboration and the growth of rural sociology research.[17] Its liberal internationalism and inclusivity make it a unique interdisciplinary organization that stands somewhat apart from academia and splits its focus between theory and applied research.[23] For example, in 2023, the ESRS's congress included working groups on diverse topics, including rural migration, population change, place making, mental health, and the role of arts and culture in sustaining rural spaces.[24]
Rural Spaces in Europe
The relevance of Rural Sociology to the European continent is undeniable. 44% of the EU’s total land is considered “rural,” with the Union’s newest countries including even higher percentages (upwards of 50%). More than half the population of several member states, including Slovenia, Romania, and Ireland, live in rural spaces.[25]
While the definition of rurality in Europe has traditionally included all "non-urban" spaces, academia's definition of the term is in flux as more residents move to liminal spaces (sub-urban, peri-urban, ex-urban).[16] Unlike in the United States,[26] European populations in urban areas are shrinking, with a noted uptick in migration back to rural and intermediary spaces over the last two decades, and especially since the end of COVID-19 lockdowns.[25] These increasingly populated rural spaces have been met with greater economic development and tourism over the last two decades.[27] As of 2020, 44% of Europe's population was categorized as "intermediate", and only 12% resided in urban space.[25]
Despite these changes, focus on rural issues has been largely siloed within rural sociology programs. Between 2010 and 2019, the Council for European Studies hosted only one panel on rural issues (Farm, Form, Family: Agriculture in Europe).[28] There are signs this may be changing. Europe Now, a widely distributed mainstream academic journal, recently devoted an entire issue to the intersection of European and rural studies, including articles challenging the continued applicability of the urban-rural dichotomy and examining land access, food, resource-use disparity, and culture. This move towards interdisciplinarity reflects the human and topographical geography of Europe writ large, and foreshadows possible integration of rural sociology into mainstream academic discourse.[29]
Rural sociology in Australia and New Zealand had a much slower start than its American and European counterparts. This is due to the lack of land-grant universities, which heavily invested in the discipline in the United States, and a lack of interest in studying the "peasant problem" as was the case in Europe.[30] The earliest cases of studying rural life in Australia were conducted by anthropologists and social psychologists[31] in the 1950s, with sociologists taking on the subject beginning in the 1990s.[32][33][34]
Attempts were made between 1935 and 1957 to bring an American-style rural sociology to New Zealand. The New Zealand Department of Agriculture, funded by the Carnegie Foundation, tasked Otago University economist W.T. Doig with surveying living standards in rural New Zealand in 1935.[35] The creation and funding of such a report mirrors America's Commission on Country Life. Additional Carnegie funds were granted to the Shelly Group, who conducted the country's first major sociological community study and endorsed the creation of land-grant institutions in New Zealand. Ultimately, these attempts to institutionalize rural sociology in New Zealand failed due to the department's lack of organization and failure to publish impactful survey results.[35]
Early studies of rural sociology in the region focused on the influence of transnational agribusiness, the effects of technological change on rural communities, the restructuring of rural environments, and social causes of environmental degradation.[30] By the mid-2000s researchers' focus had shifted towards broader sociological questions and variables, such as the construction and framing of gender among Australian and New Zealand farmers,[36] the impacts of governmental policies on rural spaces and studies,[37] and rural safety and crime.[38] In recent years scholars have additionally focused on the opinions of rural residents, particularly farmers, regarding environmentalism and environmental policies.[39] Such a focus is particularly salient in New Zealand, where livestock farming has historically been a major national source of income and environmental policies have become increasingly strict.
Though early scholars of rural sociology in Australasia tout it for its critical lens, publications in the 2010s and 2020s have accused the discipline of omitting the experiences of indigenous peoples,[40] failing to account for class-based differences,[41] discounting the importance of race and ethnicity,[42] and only recently incorporating studies of women in rural places.[43][44] Work on rural women in the region has often incorporated white feminism and used a colonial lens. In response, scholars, particularly in New Zealand (Aotearoa), have begun to focus on the experiences of the Māori in rural areas,[45][46][47] while likewise shifting from solving the issues of farmers to those of rural residents. A few scholars in Australia, some of whom are indigenous scholars themselves, have likewise begun to incorporate the experiences of Aboriginal peoples into their scholarship.[48][49][40] In particular, Chelsea Joanne Ruth Watego[50] and Aileen Moreton-Robinson[51] have risen to prominence in recent years, though both identify more as indigenous feminist scholars than as rural sociologists.
Today many prominent scholars do not belong to a department of rural sociology, but rather to related disciplines such as geography in the case of Ruth Liepins, Indigenous Studies in the case of Sandy O'Sullivan,[52] or Arts, Education, and Law in the case of Barbara Pini.[53] Courses in the discipline can be studied at a small number of institutions: University of Western Sydney (Hawkesbury), Central Queensland University, Charles Sturt University, and the Department of Agriculture at the University of Queensland. Additionally, academics who publish in the discipline, such as Ann Pomeroy, Barbara Pini, Laura Rodriguez Castro, and Ruth Liepins, can be found at the University of Otago, Griffith University, and Deakin University.
Rural sociology's development in Latin America began in 1934 with the research of Carle C. Zimmerman, a member of the Foreign Policy Association's Commission on Cuban Affairs.[54] As a North American rural sociologist, he conducted a study in Cuba comparing the wealth and conditions of cane workers to those of colonizers. This work ultimately resulted in a demand for rural life studies extending to Bolivia, Brazil, Argentina, and Mexico, largely for the sake of materials to support the United States' performance in World War II.
In the midst of the war, other rural sociologists were exploring rural life in other countries. Dr. Olen Leonard assisted in the establishment of Tingo Maria's Agricultural Extension program, the study of which was published in 1943.[55] While in Ecuador, Leonard attempted to establish a similar program in the Hacienda Pichalinqui region by identifying how locals gathered, the value and meaning of possessions, and the attitudes of those in the area. His work in Guatemala consisted of assisting public officials in developing a long-term plan for agricultural education; in Nicaragua he participated in the development of a general and agricultural population census. Glen Taggert (El Salvador), Dr. Carl Taylor (Argentina), and T. Lynn Smith (Colombia, El Salvador) also took part in advancing Agricultural Extension programs in Latin America. Taylor's work in particular inspired the Argentinian Institute of Agriculture to create the Institute of Rural Life.
The Caracas Regional Seminar on Education in Latin America of 1948 established fundamental education as a system that would be "specifically attending to native groups in such a way as to promote their all-around development in accordance with their best cultural traditions, economic needs, and social idiosyncrasies".[56] This launched a pilot project explicitly tailored to the education of adults in rural communities. By the Fourth Inter-American Agricultural Conference in Montevideo in 1950, the United Nations' Food and Agriculture Organization and International Labour Organization were given the responsibility of becoming more involved in activities that would benefit rural welfare. As a combined force, they were also tasked with requesting that studies be performed on conditions of a social, economic, and spiritual nature as they pertain to the well-being of rural communities.
As of 1958, there were five ways in which Latin American rural communities were differentiated from North American rural communities:[57]
Mobilized peasants of the 1960s and 1970s attracted scholars to perform more in-depth studies of Latin American rural life.[58] Conflict arose between the Marxist lean of social science and the neoclassical domination of economics. Rural class structure, agrarian reform, and capitalist modes of production were all topics of discussion as the peasantry navigated their revolutionary status. The turn of the 21st century introduced the concept of "new rurality". The shaping of Latin America's rural economy had finally become entrenched in the newfound neoliberalism and globalization of the 1980s and 90s. Researchers claim this has been expressed through the embrace of non-farm activities, the feminization of rural work, growing rural-urban relations, and migration and remittances, though some argue that no real change has occurred because social ills (e.g., poverty, social injustice) prevail.
Early studies of rural sociology in Asia appear to have first occurred and been written about in the mid-19th and early 20th centuries, though records of ancient thought on agriculturalists and peasants in rural spaces appear much earlier.[59][60] India was a focus of many sociological studies of rural areas, with Henry S. Maine writing Ancient Law (1861), which studied some elements of Indian rural society.[61][62][59] Similar texts from around that time were written by those with connections to the East India Company.[59][61] Holt Mackenzie and Charles Metcalf both wrote about village communities and village life in India, and the East India Company published general reports on Indian territories, for example the Punjab territories, from the mid-19th to early 20th century.[63][64][65][66]
India, however, was not the only focus of early sociological literature on rural life in Asia. A Systematic Source Book in Rural Sociology by Pitirim A. Sorokin was published in 1930 and focused on European, Asiatic, and American literature and thought on rural sociology.[60] Sorokin outlines 'Ancient Oriental Sources' from Assyro-Babylonia, China, Egypt, India, Japan, Palestine, and Persia.[60] He argues that caste is important for understanding agriculture in ancient India, and that the government and its structure can be used to explain the importance of agriculture and rural life in China.[60] Sorokin draws these conclusions from records of those countries, which indicate study and thought about the sociology of early agriculturalists and those in rural areas. The excerpts and records used "give the ancient evaluation of agriculture as being a means of group subsistence as compared with other occupations; they reflect the society's view as to the relative rank of the cultivators in the social order; they depict ancient opinions concerning agriculture as an economic basis for the moral and social well-being of a society, as well as several similar points. In addition, they depict in detail various laws concerning agriculture, much of the technique of ancient agriculture, the forms of ownership and possession of land, and, finally, the numerous rites and ceremonies connected with agriculture".[67]
It was not until later, often in the mid-to-late 20th century, that rural sociology appeared in Asia as a systematic branch of academia and study (for example, see Ralph B. Brown's work).
In India, the rise of rural sociology was due in part to the country's gaining its independence in 1947.[68][69] The government needed rural sociology to aid its understanding of "the problems of extreme poverty of the people, overpopulation and general under-development of the economy".[70] Studies focused on the changing role of towns, rural-urban interactions since independence, rural change and its drivers, demographic research, rural development, and rural economies.[68] In 1953, A. R. Desai published the first edition of Rural Sociology in India.[71] The foreword of the book underlines the importance of understanding each aspect of society so that the Indian government could create "a uniform line of action for building a better social milieu".[71] Due to the popularity of Desai's work and the expansion of the study of rural sociology in India, second and third editions of Rural Sociology in India were published in 1959 and 1961 to better represent new study foci and methodologies in this emerging field.[71] Other popular researchers during the mid-20th century include S. C. Dube, M. N. Srinivas, and D. N. Majumdar.[72] In India, rural sociological research and policy continued to be connected into the 21st century.[72]
Before 1949, China's rural sociological studies focused primarily on rural class and power structures.[73] Community studies by prominent sociologists like Fei Xiaotong (Fei Hsiao-tung) were influenced by American rural sociology and were also popular in early and mid-20th century China.[74] All sociology programs in China were terminated in 1952 under Mao Zedong.[75] It was not until 1979, when the Chinese Sociological Association was reestablished, that sociological studies in China began again.[75] Influences from American sociologists were welcomed during this time and continued to shape Chinese rural sociological studies into the 21st century.[75] However, contemporary Chinese rural sociologists like Yang Min and Xu Yong have pushed to reconsider this western lens.[76]
Though rural sociology is thought to have an earlier origin in Japan than in the United States, it was not until the end of the 1930s that sociologists in the country were introduced to the methods and viewpoints of American rural sociologists.[77] This introduction was primarily made by Eitarō Suzuki, who is considered one of the pioneers of Japanese rural and urban sociology.[77][78] Other prominent Japanese rural sociological researchers of this time include Kitano Seiichi, Kizaemon Ariga, and Yozo Yamamoto.[77][79][80] The rapid decrease in Japan's farming population beginning in 1955 shifted the focus of mid-20th-century rural sociological studies to second jobs among farmers, farming cooperative associations, and the impact of community development policies on villages. Hiroyuki Torigoe of Kwansei Gakuin University led the Asian Rural Sociology working group, which was established in 1992 and later led to the development of the Asian Rural Sociological Society.[81]
The mission statements of university departments of rural sociology have expanded to include more topics, such as sustainable development. For example, at the University of Missouri the mission is:
"The Department of Rural Sociology at the University of Missouri employs the theoretical and methodological tools of rural sociology to address challenges of the 21st century – preserving our natural resources, providing safe and nutritious food for an expanding population, adapting to climate changes, and maintaining sustainable rural livelihoods."[82]
The University of Wisconsin set up one of the first departments of rural sociology. It has now dropped the term "rural" and changed its name to the "Department of Community and Environmental Sociology."[83] Similarly, the Rural Sociology Program at the University of Kentucky has evolved into the "Department of Community and Leadership Development," while transferring the graduate program in rural sociology to the Sociology Department.[84] Cornell University's department of rural sociology has also changed its name, to the Department of Development Sociology.[85]
Scholarly associations in rural sociology include:
Several academic journals are published in the field of (or closely related to) rural sociology, including:
|
https://en.wikipedia.org/wiki/Rural_sociology
|
Stewardship is a practice committed to ethical value that embodies the responsible planning and management of resources. The concept of stewardship can be applied to the environment and nature,[1][2][3] economics,[4][5] health,[6] places,[7] property,[8] information,[9] theology,[10] and cultural resources.
Stewardship was originally made up of the tasks of a domestic steward, from stiġ (house, hall) and weard (ward, guard, guardian, keeper).[11][12] In the beginning, it referred to the household servant's duties of bringing food and drink to the castle's dining hall. Stewardship responsibilities were eventually expanded to include the domestic, service and management needs of the entire household.
The NOAA Planet Stewards Education Project (PSEP) is an example of a U.S. environmental stewardship program that advances scientific literacy, especially in work that conserves, restores, and protects human communities and natural resources in the areas of climate, ocean, and atmosphere. It includes professional teachers of students of all ages and abilities, and informal educators who work with the public in nature and science centers, aquaria, and zoos. The project began in 2008 as the NOAA Climate Stewards Project; its name was changed to the NOAA Planet Stewards Education Project in 2016.
|
https://en.wikipedia.org/wiki/Stewardship
|
Sustainable agriculture is farming in sustainable ways, meeting society's present food and textile needs without compromising the ability of current or future generations to meet their needs.[1] It can be based on an understanding of ecosystem services. There are many methods to increase the sustainability of agriculture. When developing agriculture within sustainable food systems, it is important to develop flexible business processes and farming practices.[2] Agriculture has an enormous environmental footprint, playing a significant role in causing climate change (food systems are responsible for one third of anthropogenic greenhouse gas emissions),[3][4] water scarcity, water pollution, land degradation, deforestation and other processes;[5] it is simultaneously causing environmental changes and being impacted by these changes.[6] Sustainable agriculture consists of environment-friendly methods of farming that allow the production of crops or livestock without damage to human or natural systems. It involves preventing adverse effects on soil, water, biodiversity, and surrounding or downstream resources, as well as on those working or living on the farm or in neighboring areas. Elements of sustainable agriculture can include permaculture, agroforestry, mixed farming, multiple cropping, and crop rotation.[7]
Developing sustainable food systems contributes to the sustainability of the human population. For example, one of the best ways to mitigate climate change is to create sustainable food systems based on sustainable agriculture. Sustainable agriculture provides a potential solution to enable agricultural systems to feed a growing population within changing environmental conditions.[6] Besides sustainable farming practices, dietary shifts to sustainable diets are an intertwined way to substantially reduce environmental impacts.[8][9][10][11] Numerous sustainability standards and certification systems exist, including organic certification, Rainforest Alliance, Fair Trade, UTZ Certified, GlobalGAP, Bird Friendly, and the Common Code for the Coffee Community (4C).[12]
The term "sustainable agriculture" was defined in 1977 by theUSDAas an integrated system of plant and animal production practices having a site-specific application that will, over the long term:[13]
Yet the idea of having a sustainable relationship with the land has been prevalent in indigenous communities for centuries before the term was formally added to the lexicon.[14]
A common consensus is that sustainable farming is the most realistic way to feed growing populations. To successfully feed the planet's population, farming practices must consider future costs, to both the environment and the communities they fuel.[15] The risk of not being able to provide enough resources for everyone led to the adoption of technology within the sustainability field to increase farm productivity. The ideal end result of this advancement is the ability to feed ever-growing populations across the world. The growing popularity of sustainable agriculture is connected to the wide-reaching fear that the planet's carrying capacity (or planetary boundaries), in terms of the ability to feed humanity, has been reached or even exceeded.[16]
There are several key principles associated with sustainability in agriculture:[17]
It "considers long-term as well as short-term economics because sustainability is readily defined as forever, that is, agricultural environments that are designed to promote endless regeneration".[18]It balances the need for resource conservation with the needs offarmerspursuing theirlivelihood.[19]
It is considered to be reconciliation ecology, accommodating biodiversity within human landscapes.[20]
Oftentimes, the execution of sustainable practices within farming comes through the adoption of technology and environmentally focused appropriate technology.
Sustainable agricultural systems are becoming an increasingly important field for AI research and development. By leveraging AI's capabilities in areas such as resource optimization, crop health monitoring, and yield prediction, farmers might greatly advance toward more environmentally friendly agricultural practices. AI-controlled irrigation systems optimize water consumption by using sensors to monitor soil moisture levels and weather conditions to distribute water accordingly. This water management technology can lower water consumption by up to 30%.[21] AI-driven mobile soil analysis enables farmers to enhance soil fertility while decreasing their ecological footprint; this technology permits on-site, real-time evaluations of soil nutrient levels.[22] Agrivoltaics enhances sustainable agriculture by optimizing land use, allowing crops to be grown alongside solar panels, which generate clean energy. This dual-use approach conserves land resources, improves microclimates, and can promote more resilient, eco-friendly farming practices.[23]
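The sensor-driven irrigation logic described above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's system; the moisture target, rain threshold, and watering rate are all hypothetical values chosen for the example:

```python
# Minimal sketch of threshold-based smart irrigation: water only when soil
# moisture is below a target band and no significant rain is forecast.
# All thresholds below are hypothetical illustration values.
def irrigation_minutes(soil_moisture_pct: float,
                       rain_forecast_mm: float,
                       target_pct: float = 35.0,
                       minutes_per_pct: float = 2.0) -> float:
    """Return watering time in minutes; 0 if soil is wet enough or rain is due."""
    if rain_forecast_mm > 1.0 or soil_moisture_pct >= target_pct:
        return 0.0                       # skip: rain expected or soil already moist
    deficit = target_pct - soil_moisture_pct
    return deficit * minutes_per_pct     # water in proportion to the deficit

print(irrigation_minutes(28.0, 0.0))     # dry soil, no rain -> waters
print(irrigation_minutes(28.0, 5.0))     # rain forecast     -> skips
```

The water savings come from the two skip conditions: a fixed-schedule system would run in both of those cases, while the sensor-driven rule waters only when the deficit is real.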
Practices that can cause long-term damage to soil include excessive tilling of the soil (leading to erosion) and irrigation without adequate drainage (leading to salinization).[24][25]
The most important factors for a farming site are climate, soil, nutrients and water resources. Of the four, water and soil conservation are the most amenable to human intervention. When farmers grow and harvest crops, they remove some nutrients from the soil. Without replenishment, the land suffers from nutrient depletion and becomes unusable or yields less. Sustainable agriculture depends on replenishing the soil while minimizing the use of, or need for, non-renewable resources such as natural gas or mineral ores.
A farm that can "produce perpetually", yet has negative effects on environmental quality elsewhere, is not practicing sustainable agriculture. An example of a case in which a global view may be warranted is the application of fertilizer or manure, which can improve the productivity of a farm but can pollute nearby rivers and coastal waters (eutrophication).[26]The other extreme can also be undesirable, as the problem of low crop yields due to exhaustion of nutrients in the soil has been related to rainforest destruction.[27]In Asia, the amount of land needed for sustainable farming is about 12.5 acres (5.1 ha), which includes land for animal fodder, cereal production as a cash crop, and other food crops. In some cases, a small unit of aquaculture is included (AARI-1996).
Nitrates are used widely in farming as fertilizer, and a major environmental problem associated with agriculture is the leaching of nitrates into the environment.[28]Possible sources of nitrates that would, in principle, be available indefinitely include:
The last option was proposed in the 1970s, but is only gradually becoming feasible.[32][33]Sustainable options for replacing other nutrient inputs such as phosphorus and potassium are more limited.
Other options include long-term crop rotations, returning to natural cycles that annually flood cultivated lands (returning lost nutrients), such as the flooding of the Nile, the long-term use of biochar, and use of crop and livestock landraces that are adapted to less-than-ideal conditions such as pests, drought, or lack of nutrients. Crops that require high levels of soil nutrients can be cultivated in a more sustainable manner with appropriate fertilizer management practices.
Phosphate is a primary component in fertilizer. It is the second most important nutrient for plants after nitrogen,[34]and is often a limiting factor.[35]It is important for sustainable agriculture as it can improve soil fertility and crop yields.[36]Phosphorus is involved in all major metabolic processes including photosynthesis, energy transfer, signal transduction, macromolecular biosynthesis, and respiration. It is needed for root ramification and strength, and for seed formation, and can increase disease resistance.[37]
Phosphorus is found in the soil in both inorganic and organic forms[34]and makes up approximately 0.05% of soil biomass.[37]Phosphorus fertilizers are the main input of inorganic phosphorus in agricultural soils, and approximately 70%–80% of phosphorus in cultivated soils is inorganic.[38]Long-term use of phosphate-containing chemical fertilizers causes eutrophication and depletes soil microbial life, so people have looked to other sources.[37]
Phosphorus fertilizers are manufactured from rock phosphate.[39]However, rock phosphate is a non-renewable resource and it is being depleted by mining for agricultural use:[36][38]peak phosphorus will occur within the next few hundred years,[40][41][42]or perhaps earlier.[43][44][45]
Potassium is a macronutrient very important for plant development and is commonly sought in fertilizers.[46]This nutrient is essential for agriculture because it improves water retention, nutrient value, yield, taste, color, texture and disease resistance of crops. It is often used in the cultivation of grains, fruits, vegetables, rice, wheat, millets, sugar, corn, soybeans, palm oil and coffee.[47]
Potassium chloride (KCl) is the most widely used source of K in agriculture,[48]accounting for 90% of all potassium produced for agricultural use.[49]
The use of KCl leads to high concentrations of chloride (Clˉ) in soil, harming soil health through increased salinity, imbalanced nutrient availability and the ion's biocidal effect on soil organisms.[7]As a consequence, the development of plants and soil organisms is affected, putting soil biodiversity and agricultural productivity at risk.[50][51][52][53]A sustainable alternative to KCl is chloride-free fertilizer, whose use should take into account plants' nutritional needs and the promotion of soil health.[54][55]
Land degradation is becoming a severe global problem. According to the Intergovernmental Panel on Climate Change: "About a quarter of the Earth's ice-free land area is subject to human-induced degradation (medium confidence). Soil erosion from agricultural fields is estimated to be currently 10 to 20 times (no tillage) to more than 100 times (conventional tillage) higher than the soil formation rate (medium confidence)."[56]Almost half of the land on Earth is dryland, which is susceptible to degradation.[57]Over a billion tonnes of southern Africa's soil are being lost to erosion annually, which, if continued, will halve crop yields within thirty to fifty years.[58]A comparative study of two adjacent wheat farms, one using sustainable practices and the other conventional methods, found that the sustainable farm had significantly better soil quality, including higher organic matter, microbial populations, and nutrient content, while also showing 22.4% higher net returns due to lower input costs, despite slightly lower yields.[59]Improper soil management is threatening the ability to grow sufficient food. Intensive agriculture reduces the carbon level in soil, impairing soil structure, crop growth and ecosystem functioning,[60]and accelerating climate change.[60]Modification of agricultural practices is a recognized method of carbon sequestration, as soil can act as an effective carbon sink.[61]
Soil management techniques include no-till farming, keyline design and windbreaks to reduce wind erosion, reincorporation of organic matter into the soil, reducing soil salinization, and preventing water run-off.[62][63]
As the global population and demand for food increase, pressure grows on land as a resource. In land-use planning and management, considering the impacts of land-use changes on factors such as soil erosion can support long-term agricultural sustainability, as shown by a study of Wadi Ziqlab, a dry area in the Middle East where farmers graze livestock and grow olives, vegetables, and grains.[64]
Over the 20th century, following environmentally sound land practices was not always a viable option for people in poverty, owing to complex and challenging life circumstances.[65]Currently, increased land degradation in developing countries may be connected with rural poverty among smallholder farmers who are forced into unsustainable agricultural practices out of necessity.[66]
Converting large parts of the land surface to agriculture has severe environmental and health consequences. For example, it leads to a rise in zoonotic disease (such as Coronavirus disease 2019) due to the degradation of natural buffers between humans and animals, reduced biodiversity and larger groups of genetically similar animals.[67][68]
Land is a finite resource on Earth. Although expansion of agricultural land can decrease biodiversity and contribute to deforestation, the picture is complex; for instance, a study examining the introduction of sheep by Norse settlers (Vikings) to the Faroe Islands of the North Atlantic concluded that, over time, the fine partitioning of land plots contributed more to soil erosion and degradation than grazing itself.[69]
TheFood and Agriculture Organizationof the United Nations estimates that in coming decades, cropland will continue to be lost to industrial andurban development, along with reclamation of wetlands, and conversion of forest to cultivation, resulting in theloss of biodiversityand increased soil erosion.[70]
In modern agriculture, energy is used in on-farm mechanisation, food processing, storage, and transportation.[71]Energy prices have therefore been found to be closely linked to food prices.[72]Oil is also used as an input in agricultural chemicals. The International Energy Agency projects higher prices of non-renewable energy resources as fossil fuel reserves are depleted, which may decrease global food security unless action is taken to 'decouple' fossil fuel energy from food production, with a move towards 'energy-smart' agricultural systems including renewable energy.[72][73][74]Solar-powered irrigation in Pakistan is said to provide a closed system for agricultural water irrigation.[75]
The environmental cost of transportation could be avoided if people used local products.[76]
In some areas sufficient rainfall is available for crop growth, but many other areas require irrigation. For irrigation systems to be sustainable, they require proper management (to avoid salinization) and must not use more water from their source than is naturally replenishable. Otherwise, the water source effectively becomes a non-renewable resource. Improvements in water well drilling technology and submersible pumps, combined with the development of drip irrigation and low-pressure pivots, have made it possible to regularly achieve high crop yields in areas where reliance on rainfall alone had previously made successful agriculture unpredictable. However, this progress has come at a price. In many areas, such as the Ogallala Aquifer, the water is being used faster than it can be replenished.
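The sustainability condition stated above, that withdrawals must not exceed natural recharge, can be expressed as a simple water balance. The sketch below is a hedged illustration with hypothetical figures, not data for any real aquifer:

```python
def is_sustainable_withdrawal(withdrawal_m3, recharge_m3):
    """True if annual withdrawal does not exceed annual natural recharge."""
    return withdrawal_m3 <= recharge_m3

def years_until_depleted(stock_m3, withdrawal_m3, recharge_m3):
    """Rough depletion horizon when withdrawals exceed recharge."""
    net_draw = withdrawal_m3 - recharge_m3
    if net_draw <= 0:
        return float("inf")  # renewable use: the stock is never drawn down
    return stock_m3 / net_draw

# Hypothetical aquifer: 1e9 m^3 stored, recharged at 2e6 m^3/yr,
# pumped at 6e6 m^3/yr.
print(is_sustainable_withdrawal(6e6, 2e6))  # False: a non-renewable regime
print(years_until_depleted(1e9, 6e6, 2e6))  # 250.0 years at current rates
```

The point of the calculation is the regime change: once withdrawal exceeds recharge, the source has a finite lifetime, which is the situation described for the Ogallala Aquifer.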
According to the UC Davis Agricultural Sustainability Institute, several steps must be taken to develop drought-resistant farming systems even in "normal" years with average rainfall. These measures include both policy and management actions:[77]
Indicators for sustainable water resource development include the average annual flow of rivers from rainfall, flows from outside a country, the percentage of water coming from outside a country, and gross water withdrawal.[78]It is estimated that agricultural practices consume 69% of the world's fresh water.[79]
Farmers in Wyoming and other parts of the United States have discovered a way to save water using wool.[80]
Sustainable agriculture attempts to solve multiple problems with one broad solution. The goal of sustainable agricultural practices is to decrease environmental degradation due to farming while increasing crop, and thus food, output. There are many varying strategies that attempt to use sustainable farming practices to increase rural economic development within small-scale farming communities. Two of the most popular, and opposing, strategies within the modern discourse are allowing unrestricted markets to determine food production and deeming food a human right. Neither of these approaches has been proven to work without fail. A promising proposal for rural poverty reduction within agricultural communities is sustainable economic growth; the most important aspect of this policy is to regularly include the poorest farmers in economy-wide development through the stabilization of small-scale agricultural economies.[81]
In 2007, the United Nations reported on "Organic Agriculture and Food Security in Africa", stating that sustainable agriculture could be a tool for reaching global food security without expanding land usage, while reducing environmental impacts.[82]Evidence from developing nations in the early 2000s indicates that serious harm is done when the people in affected communities are not factored into the agricultural process. The soil scientist Charles Kellogg stated that, "In a final effort, exploited people pass their suffering to the land."[82]Sustainable agriculture means the ability to permanently and continuously "feed its constituent populations".[82]
There are many opportunities that can increase farmers' profits, improve communities, and continue sustainable practices. For example, in Uganda, genetically modified organisms were originally illegal. However, under the stress of the banana crisis in Uganda, where Banana Bacterial Wilt had the potential to wipe out 90% of yield, the government decided to explore GMOs as a possible solution.[83]It issued the National Biotechnology and Biosafety bill, which will allow scientists that are part of the National Banana Research Program to start experimenting with genetically modified organisms.[84]This effort has the potential to help local communities, because a significant portion live off the food they grow themselves, and it will be profitable because the yield of their main produce will remain stable.
Not all regions are suitable for agriculture.[85][86]The technological advancement of the past few decades has allowed agriculture to develop in some of these regions. For example, Nepal has built greenhouses to deal with its high altitude and mountainous regions.[34]Greenhouses allow for greater crop production and also use less water, since they are closed systems.[87]
Desalinationtechniques can turn salt water into fresh water which allows greater access to water for areas with a limited supply.[88]This allows the irrigation of crops without decreasing natural fresh water sources.[89]While desalination can be a tool to provide water to areas that need it to sustain agriculture, it requires money and resources. Regions of China have been considering large scale desalination in order to increase access to water, but the current cost of the desalination process makes it impractical.[90]
Women working in sustainable agriculture come from numerous backgrounds, ranging from academia to labour.[91]From 1978 to 2007, the number of women farm operators in the United States tripled.[85]In 2007, women operated 14 percent of farms, compared to five percent in 1978. Much of the growth is due to women farming outside of the "male dominated field of conventional agriculture".[85]
The practice of families or communities growing food in the backyards of houses, schools, etc., became widespread in the US at the time of World War I, the Great Depression and World War II; at one point, 40% of the vegetables of the USA were produced this way. The practice became popular again at the time of the COVID-19 pandemic. This method permits growing food in a relatively sustainable way while making it easier for poor people to obtain food.[92]
Costs, such as environmental problems, not covered in traditional accounting systems (which take into account only the direct costs of production incurred by the farmer) are known as externalities.[17]
Netting studied sustainability and intensive agriculture in smallholder systems through history.[93]
There are several studies incorporating externalities such as ecosystem services, biodiversity, land degradation, andsustainable land managementin economic analysis. These includeThe Economics of Ecosystems and Biodiversitystudy and theEconomics of Land Degradation Initiativewhich seek to establish an economic cost-benefit analysis on the practice of sustainable land management and sustainable agriculture.
Triple bottom line frameworks include social and environmental bottom lines alongside a financial one. A sustainable future may be feasible if growth in material consumption and population is slowed and the efficiency of material and energy use rises drastically. To make that transition, long- and short-term goals will need to be balanced, enhancing equity and quality of life.[94]
The barriers to sustainable agriculture can be broken down and understood through three dimensions, seen as the core pillars of sustainability: social, environmental, and economic.[95]The social pillar addresses issues related to the conditions into which societies are born, grow, and learn.[95]It deals with shifting away from traditional agricultural practices and moving into new sustainable practices that will create better societies and conditions.[95]The environmental pillar addresses climate change and focuses on agricultural practices that protect the environment for future generations.[95]The economic pillar explores ways in which sustainable agriculture can be practiced while fostering economic growth and stability, with minimal disruption to livelihoods.[95]All three pillars must be addressed to determine and overcome the barriers preventing sustainable agricultural practices.[95]
Social barriers to sustainable agriculture include cultural shifts, the need for collaboration, incentives, and new legislation.[95]The move from conventional to sustainable agriculture will require significant behavioural changes from both farmers and consumers.[96]Cooperation and collaboration between farmers is necessary to successfully transition to sustainable practices with minimal complications.[96]This can be seen as a challenge for farmers who care about competition and profitability.[97]There must also be an incentive for farmers to change their methods of agriculture.[98]The use of public policy, advertisements, and laws that make sustainable agriculture mandatory or desirable can be utilized to overcome these social barriers.[99]
Environmental barriers prevent the ability to protect and conserve the natural ecosystem.[95]Examples of these barriers include the use of pesticides and the effects of climate change.[95]Pesticides are widely used to combat pests that can devastate production, and play a significant role in keeping food prices and production costs low.[100]To move toward sustainable agriculture, farmers are encouraged to use green pesticides, which cause less harm to both human health and habitats but entail a higher production cost.[101]Climate change is also a rapidly growing barrier, one that farmers have little control over, which can be seen through place-based barriers.[102]These place-based barriers include factors such as weather conditions, topography, and soil quality, which can cause losses in production, resulting in reluctance to switch from conventional practices.[102]Many environmental benefits are also not visible or immediately evident.[103]Significant changes such as lower rates of soil and nutrient loss, improved soil structure, and higher levels of beneficial microorganisms take time.[103]In conventional agriculture the benefits are easily visible, with no weeds, pests, etc., but the long-term costs to the soil and surrounding ecosystems are hidden and "externalized".[103]Conventional agricultural practices, since the evolution of technology, have caused significant damage to the environment through biodiversity loss, disrupted ecosystems and poor water quality, among other harms.[98]
The economic obstacles to implementing sustainable agricultural practices include low financial return/profitability, lack of financial incentives, and negligible capital investments.[104]Financial incentives and circumstances play a large role in whether sustainable practices will be adopted.[95][104]The human and material capital required to shift to sustainable methods of agriculture requires training of the workforce and making investments in new technology and products, which comes at a high cost.[95][104]In addition to this, farmers practicing conventional agriculture can mass produce their crops, and therefore maximize their profitability.[95]This would be difficult to do in sustainable agriculture which encourages low production capacity.[95]
The author James Howard Kunstler claims almost all modern technology is bad and that there cannot be sustainability unless agriculture is done in ancient traditional ways.[105]Efforts toward more sustainable agriculture are supported in the sustainability community; however, these are often viewed only as incremental steps and not as an end.[98]One promising method of encouraging sustainable agriculture is through local farming and community gardens.[98]Incorporating local produce and agricultural education into schools, communities, and institutions can promote the consumption of freshly grown produce, which will drive consumer demand.[98]
Some foresee a true sustainable steady state economy that may be very different from today's: greatly reduced energy usage, minimal ecological footprint, fewer consumer packaged goods, local purchasing with short food supply chains, little processed food, more home and community gardens, etc.[106]
There is a debate on the definition of sustainability regarding agriculture. The definition could be characterized by two different approaches:an ecocentric approach and a technocentric approach.[107]The ecocentric approach emphasizes no- or low-growth levels of human development, and focuses onorganicandbiodynamic farmingtechniques with the goal of changing consumption patterns, and resource allocation and usage. The technocentric approach argues thatsustainabilitycan be attained through a variety of strategies, from the view that state-led modification of the industrial system like conservation-oriented farming systems should be implemented, to the argument thatbiotechnologyis the best way to meet the increasing demand for food.[107]
One can look at the topic of sustainable agriculture through two different lenses:multifunctional agricultureandecosystem services.[108]Both of these approaches are similar, but look at the function of agriculture differently. Those that employ the multifunctional agriculture philosophy focus on farm-centered approaches, and define function as being the outputs of agricultural activity.[108]The central argument of multifunctionality is that agriculture is a multifunctional enterprise with other functions aside from the production of food and fiber. These functions include renewable resource management, landscape conservation and biodiversity.[109]The ecosystem service-centered approach posits that individuals and society as a whole receive benefits fromecosystems, which are called "ecosystem services".[108][110]In sustainable agriculture, the services that ecosystems provide includepollination,soil formation, andnutrient cycling, all of which are necessary functions for the production of food.[111]
It is also claimed that sustainable agriculture is best considered as an ecosystem approach to agriculture, called agroecology.[112]
Most agricultural professionals agree that there is a "moral obligation to pursue [the] goal [of] sustainability."[82]The major debate comes from what system will provide a path to that goal because if an unsustainable method is used on a large scale it will have a massive negative effect on the environment and human population.
Other practices include polyculture, growing multiple perennial crops in a single field, each of which grows in a separate season so as not to compete with the others for natural resources.[113]This system can result in increased resistance to diseases and decreased effects of erosion and loss of nutrients in the soil. Nitrogen fixation from legumes, for example, used in conjunction with plants that rely on nitrate from the soil for growth, helps to allow the land to be reused annually. Legumes will grow for a season and replenish the soil with ammonium and nitrate, and the next season other plants can be seeded and grown in the field in preparation for harvest.
Sustainable methods of weed management may help reduce the development of herbicide-resistant weeds.[114]Crop rotationmay also replenish nitrogen iflegumesare used in the rotations and may also use resources more efficiently.[115]
There are also many ways to practice sustainable animal husbandry. Some of the tools for grazing management include fencing off the grazing area into smaller areas called paddocks, lowering stock density, and moving the stock between paddocks frequently.[116]
Within the realm of sustainable agriculture methods, the integration of different farming practices is gaining recognition for its potential to enhance efficiency and reduce environmental impact. For example, research on integrated wheat-fish farming in Egypt has demonstrated increased overall productivity and potential for reduced reliance on external inputs.[117]This approach, aligning with principles seen in other integrated systems like rice-fish culture, underscores the value of diversifying and combining farming activities for greater sustainability.
Increased production is the goal of intensification. Sustainable intensification encompasses agricultural methods that increase production while improving environmental outcomes: the desired outcomes of the farm are achieved without the need for more land cultivation or destruction of natural habitat, and system performance is upgraded with no net environmental cost. Sustainable intensification has become a priority for the United Nations, and differs from prior intensification methods by specifically placing importance on broader environmental outcomes. By 2018, an estimated 163 million farms across 100 nations used sustainable intensification, covering 453 million ha of agricultural land, equal to 29% of farms worldwide.[118]In light of concerns about food security, human population growth and dwindling land suitable for agriculture, sustainable intensive farming practices are needed to maintain high crop yields while maintaining soil health and ecosystem services. The capacity for ecosystem services to be strong enough to allow a reduction in the use of non-renewable inputs while maintaining or boosting yields has been the subject of much debate. Recent work in the irrigated rice production systems of East Asia has suggested that, in relation to pest management at least, promoting the ecosystem service of biological control using nectar plants can reduce the need for insecticides by 70% while delivering a 5% yield advantage compared with standard practice.[119]
Vertical farmingis a concept with the potential advantages of year-round production, isolation from pests and diseases, controllable resource recycling and reduced transportation costs.[120]
Water efficiency can be improved by reducing the need for irrigation and using alternative methods. Such methods include researching drought-resistant crops, monitoring plant transpiration and reducing soil evaporation.[121]
Drought-resistant crops have been researched extensively as a means to overcome water shortage. They are genetically modified so they can adapt to an environment with little water, which reduces the need for irrigation and helps conserve water. Although extensively researched, significant results have not been achieved, as most of the successful species have no overall impact on water conservation. However, some grains, such as rice, have been successfully genetically modified to be drought resistant.[122]
Soil amendmentsinclude using compost from recycling centers. Using compost from yard and kitchen waste uses available resources in the area.
Abstaining from soil tillage before planting and leaving the plant residue after harvesting reduces soil water evaporation; it also serves to prevent soil erosion.[123]
Crop residues left covering the surface of the soil may result in reduced evaporation of water, a lower surface soil temperature, and reduction of wind effects.[123]
A way to make rock phosphate more effective is to add microbial inoculants such as phosphate-solubilizing microorganisms, known as PSMs, to the soil.[35][86]These solubilize phosphorus already in the soil and use processes like organic acid production and ion exchange reactions to make that phosphorus available for plants.[86]Experimentally, these PSMs have been shown to increase crop growth in terms of shoot height, dry biomass and grain yield.[86]
Phosphorus uptake is even more efficient in the presence of mycorrhizae in the soil.[124]Mycorrhiza is a type of mutualistic symbiotic association between plants and fungi,[124]which are well-equipped to absorb nutrients, including phosphorus, in soil.[125]These fungi can increase nutrient uptake in soil where phosphorus has been fixed by aluminum, calcium, and iron.[125]Mycorrhizae can also release organic acids that solubilize otherwise unavailable phosphorus.[125]
Soil steamingcan be used as an alternative to chemicals for soil sterilization. Different methods are available to induce steam into the soil to kill pests and increase soil health.
Soil solarization is based on the same principle, using solar heat to increase the temperature of the soil and kill pathogens and pests.[126]
Certain plants can be cropped for use asbiofumigants, "natural"fumigants, releasing pest suppressing compounds when crushed, ploughed into the soil, and covered in plastic for four weeks. Plants in theBrassicaceaefamily release large amounts of toxic compounds such asmethyl isothiocyanates.[127][128]
Relocating current croplands to environmentally more suitable locations, while allowing ecosystems in the abandoned areas to regenerate, could substantially decrease the carbon, biodiversity, and irrigation water footprint of global crop production; relocation within national borders alone also has substantial potential.[129][130]
Sustainability may also involve crop rotation.[131]Crop rotation and cover crops prevent soil erosion by protecting topsoil from wind and water.[34]Effective crop rotation can reduce pest pressure on crops, provide weed control, reduce disease build-up, and improve the efficiency of soil nutrients and nutrient cycling.[132]This reduces the need for fertilizers and pesticides.[131]Increasing the diversity of crops by introducing new genetic resources can increase yields by 10 to 15 percent compared to when they are grown in monoculture.[132][133]Perennial crops reduce the need for tillage and thus help mitigate soil erosion, may sometimes tolerate drought better, increase water quality and help increase soil organic matter. There are research programs attempting to develop perennial substitutes for existing annual crops, such as replacing wheat with the wild grass Thinopyrum intermedium, or possible experimental hybrids of it and wheat.[134]Achieving all of this without the use of chemicals is one of the main goals of sustainability, which is why crop rotation is a central method of sustainable agriculture.[132]
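The legume-based rotation described above can be sketched as a toy scheduler. This is my own illustration of the idea, not a documented rotation scheme; the crop names and the legume set are hypothetical choices for the example:

```python
# Nitrogen-fixing crops (hypothetical example set).
LEGUMES = {"clover", "soybean", "peas"}

def rotation_plan(crops, seasons):
    """Cycle through the given crop sequence, season by season, on one field."""
    return [crops[i % len(crops)] for i in range(seasons)]

def replenishes_between_demands(plan):
    """Check that no two consecutive seasons both plant nitrogen-demanding
    (non-legume) crops, so soil nitrogen is replenished between demands."""
    return all(plan[i] in LEGUMES or plan[i + 1] in LEGUMES
               for i in range(len(plan) - 1))

plan = rotation_plan(["wheat", "clover", "maize", "peas"], 8)
print(plan)
print(replenishes_between_demands(plan))  # True: legumes alternate in
```

Alternating a demanding crop with a legume every other season is the simplest rotation satisfying this check; real rotations also account for pest cycles, market needs, and soil condition.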
Sustainable agriculture is not limited to practices within individual plots but can also be considered at the landscape scale. This broader approach is particularly relevant for reconciling biodiversity conservation with sufficient agricultural production. In this context, two landscape management strategies are traditionally opposed: land sparing and land sharing.[135][136][137]These two strategies have generated intense debate within the scientific community for over a decade, with no clear consensus emerging on their respective effectiveness in maximizing biodiversity conservation and agricultural production.[138]This is particularly pertinent given that effectiveness appears to vary according to the landscape context. Recently, a third alternative has been introduced to the debate: the so-called land blending strategy, an intermediate approach at the interface between the two traditional ones.[139]
Land sparing is a strategy that involves strictly separating land dedicated to agricultural production from land dedicated to conservation.[136][140]This strategy promotes increasing the yield of agricultural land, particularly throughintensive farming. This is done to preserve areas of major biodiversity interest from agricultural expansion. This has been the dominant strategy in developed countries for over 150 years.
Land sharing, also known as "wildlife-friendly agriculture", involves integrating biodiversity into agricultural production areas.[136][140]This approach combines agriculture and biodiversity by reducing the intensity of farming practices, illustrated notably by agroecological practices such as agroforestry and mixed crop-livestock systems.
Previously described as a "mixed strategy", a term considered too ambiguous in ecology, land blending is an intermediate approach between land sparing and land sharing.[139]Unlike these two traditionally opposed strategies, land blending offers a flexible combination of both, adapted to specific landscape features. It has only recently emerged as a credible and viable alternative to traditional land sparing and land sharing.[141][142]Recent research[139]has highlighted its potential for effectively reconciling biodiversity conservation and agricultural production while enhancing resilience in the face of change and uncertainty.
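The trade-off between the strategies described above can be made concrete with a toy calculation. All the figures below (landscape size, yields, farmed fractions) are assumptions chosen for illustration, not values from the literature:

```python
# Illustrative sketch with assumed numbers: compare total production
# and set-aside habitat area under land sparing vs land sharing on a
# hypothetical 100-ha landscape.
def land_strategy(total_ha, farmed_fraction, yield_t_per_ha):
    farmed = total_ha * farmed_fraction
    return {
        "production_t": farmed * yield_t_per_ha,  # total output, tonnes
        "habitat_ha": total_ha - farmed,          # land spared for nature
    }

# Land sparing: intensive farming on half the land, the rest spared.
sparing = land_strategy(100, farmed_fraction=0.5, yield_t_per_ha=8.0)
# Land sharing: wildlife-friendly farming over the whole landscape,
# at a lower per-hectare yield.
sharing = land_strategy(100, farmed_fraction=1.0, yield_t_per_ha=4.0)

print(sparing)  # same production, 50 ha of dedicated habitat
print(sharing)  # same production, habitat embedded within farmed land
```

Under these assumed yields the two strategies produce the same output, but differ in where biodiversity lives: in spared reserves versus within the farmed matrix. Land blending would sit between the two, varying the farmed fraction and intensity across the landscape.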
Organic agriculture can be defined as:
an integrated farming system that strives for sustainability, the enhancement of soil fertility and biological diversity whilst, with rare exceptions, prohibiting synthetic pesticides, antibiotics, synthetic fertilizers, genetically modified organisms, and growth hormones.[143][144][145][146]
Some claim organic agriculture may produce the most sustainable products available for consumers in the US where no other alternatives exist, although the focus of the organics industry is not sustainability.[131]
In 2018, sales of organic products in the US reached $52.5 billion.[147]According to a USDA survey, two-thirds of Americans consume organic products at least occasionally.[148]
Ecological farming is a concept focused on the environmental aspects of sustainable agriculture. It includes all methods, including organic, that regenerate ecosystem services such as prevention of soil erosion, water infiltration and retention, carbon sequestration in the form of humus, and increased biodiversity.[149]Many techniques are used, including no-till farming, multispecies cover crops, strip cropping, terrace cultivation, shelter belts, and pasture cropping.
A plethora of methods and techniques are employed in ecological farming, each with its own benefits and implementations that lead to more sustainable agriculture. Crop genetic diversity is one method used to reduce the risks associated with monoculture crops, which can be susceptible to a changing climate.[150]This form of biodiversity makes crops more resilient, increasing food security and enhancing the long-term productivity of the field.[150]The use of biodigesters is another method: they convert organic waste into a combustible gas that can serve as a fuel source and into fertilizer for crops and fish ponds, while disposing of wastes rich in organic matter.[151]Because biodigester output can be used as fertilizer, it reduces the amount of industrial fertilizer needed to sustain the farm's yields. Another technique is aquaculture integration, which combines fish farming with agricultural farming, diverting wastes from animals and crops to the fish ponds to be used rather than leached into the environment.[152]Mud from the fish ponds can also be used to fertilize crops.[152]
Organic fertilizers such as animal and green manure can also be employed on an ecological farm.[153]They improve and maintain soil fertility, reduce costs and increase yields, reduce the use of non-renewable resources (nitrogen and phosphorus) in industrial fertilizers, and reduce the environmental pressures posed by intensive agricultural systems.[153]Precision agriculture can also be used; it focuses on efficient removal of pests using non-chemical techniques and minimizes the amount of tilling needed to sustain the farm. An example of a precision machine is the false seedbed tiller, which can remove the great majority of small weeds while tilling only one centimeter deep.[154]This minimized tilling reduces the number of new weeds that germinate from soil disturbance.[154]Other methods that reduce soil erosion include contour farming, strip cropping, and terrace cultivation.[155]
The challenge for ecological farming science is to achieve a mainstream productive food system that is sustainable or even regenerative. Locating ecological farms close to consumers can reduce food miles and thus the damage to the biosphere from combustion-engine emissions involved in current food transportation.
The design of an ecological farm is initially constrained by the same limitations as conventional farming: local climate, the soil's physical properties, the budget for beneficial soil supplements, and the available labor and automation. However, long-term water management by ecological farming methods is likely to conserve and increase water availability for the location, and to require far fewer inputs to maintain fertility.
Certain principles unique to ecological farming need to be considered.
Often thought of as inherently destructive, slash-and-burn or slash-and-char shifting cultivation has been practiced in the Amazon for thousands of years.[164]
Some traditional systems combine polyculture with sustainability. In South-East Asia, rice-fish systems on rice paddies have raised freshwater fish as well as rice, producing an additional product and reducing eutrophication of neighboring rivers.[165]A variant in Indonesia combines rice, fish, ducks, and water fern; the ducks eat the weeds that would otherwise limit rice growth, saving labour and herbicides, while the duck and fish manure substitutes for fertilizer.[166]
Raised field agriculture has recently been revived in certain areas of the world, such as the Altiplano region in Bolivia and Peru, in the form of traditional Waru Waru raised fields, which create nutrient-rich soil in regions where such soil is scarce. This method is extremely productive and has recently been used by indigenous groups in the area and the nearby Amazon Basin to make use of lands that have historically been hard to cultivate.
Other forms of traditional agriculture include agroforestry, crop rotation, and water harvesting. Water harvesting is one of the largest and most common practices, used particularly in dry areas and seasons. In Ethiopia, over half of GDP and over 80 percent of exports are attributed to agriculture, yet the country is known for intense droughts and dry periods.[167]Rainwater harvesting is considered a low-cost alternative: it collects and stores water from rooftops during high-rain periods for use during droughts.[168]Rainwater harvesting has been a major practice helping the country survive, focusing on runoff irrigation, roof water harvesting, and flood spreading.
Native Americans in the United States practiced sustainable agriculture through their subsistence farming techniques. Many tribes grew or harvested their own food from plants that thrived in their local ecosystems. Native American farming practices are specific to local environments and work with natural processes.[169]This practice, called permaculture, involves a deep understanding of the local environment.[170]Native American farming techniques also incorporate local biodiversity into many of their practices, which helps the land remain healthy.[171]
Many indigenous tribes incorporated intercropping into their agriculture, a practice in which multiple crops are planted together in the same area. This strategy allows crops to help one another grow through exchanged nutrients, maintained soil moisture, and physical support. The crops paired in intercropping often do not compete heavily for resources, which helps each succeed. For example, many tribes used intercropping in forms such as the Three Sisters garden, which consists of corn, beans, and squash: the corn stalk supports the beans, the beans fix nitrogen, and the squash retains moisture.[172]Intercropping also provides a natural strategy for pest management and the prevention of weed growth. It often improves the overall health of the soil and plants, increases crop yield, and is sustainable.[170]
One of the most significant aspects of indigenous sustainable agriculture is traditional ecological knowledge of harvesting. The Anishinaabe tribes follow an ideology known as "the Honorable Harvest", a set of practices emphasizing that people should "take only what you need and use everything you take."[173]Resources are conserved through several rules followed when harvesting a plant: never take the first plant, never take more than half of the plants, and never take the last plant.[174]This encourages future growth and therefore leads to sustainable use of the plants in the area.
Native Americans practiced agroforestry by managing the forest, animals, and crops together. They promoted tree growth through controlled burns and silviculture, and the remaining ash from these burns was often used to fertilize crops. As the conditions of the forest improved, local wildlife populations also increased. Native Americans allowed their livestock to graze in the forest, which provided natural fertilizer for the trees as well.[170]
Regenerative agriculture is a conservation and rehabilitation approach to food and farming systems. It focuses on topsoil regeneration, increasing biodiversity,[175]improving the water cycle,[176]enhancing ecosystem services, supporting biosequestration, increasing resilience to climate change, and strengthening the health and vitality of farm soil. Practices include recycling as much farm waste as possible and adding composted material from sources outside the farm.[85][177][34][178]
Permaculture is an approach to land management and settlement design that adopts arrangements observed in flourishing natural ecosystems. It includes a set of design principles derived using whole-systems thinking, which it applies in fields such as regenerative agriculture, town planning, rewilding, and community resilience. The term was coined in 1978 by Bill Mollison and David Holmgren, who formulated the concept in opposition to modern industrialized methods, instead adopting a more traditional or "natural" approach to agriculture.[179][180][181]
Multiple thinkers in the early and mid-20th century explored no-dig gardening, no-till farming, and the concept of "permanent agriculture", which were early inspirations for the field of permaculture.[182]Mollison and Holmgren's work from the 1970s and 1980s led to several books, starting with Permaculture One in 1978, and to the development of the "Permaculture Design Course", which has been one of the main methods of diffusion of permacultural ideas.[183]Starting from a focus on land usage in Southern Australia, permaculture has since expanded in scope to other regions and topics, such as appropriate technology and intentional community design.[184]
Several concepts and practices unify the wide array of approaches labelled as permaculture. Mollison and Holmgren's three foundational ethics and Holmgren's twelve design principles are often cited and restated in permaculture literature.[183]Practices such as companion planting, extensive use of perennial crops, and designs such as the herb spiral have been used extensively by permaculturists.
There is limited evidence that polyculture may contribute to sustainable agriculture. A meta-analysis of polycrop studies found that, at comparable yields, predator insect biodiversity was higher than in conventional systems for certain two-crop systems combining a single cash crop with a cover crop.[187]
One approach to sustainability is to develop polyculture systems using perennial crop varieties. Such varieties are being developed for rice, wheat, sorghum, barley, and sunflowers. If these can be combined in polyculture with a leguminous cover crop such as alfalfa, nitrogen fixation will be added to the system, reducing the need for fertilizer and pesticides.[134]
The use of available city space (e.g., rooftop gardens, community gardens, garden sharing, organopónicos, and other forms of urban agriculture) may contribute to sustainability.[188]Some consider "guerrilla gardening" an example of sustainability in action;[189]in some cases seeds of edible plants have been sown in local rural areas.[190]
Hydroponics is an alternative form of agriculture that creates an ideal environment for optimal growth without using a soil medium. This technique can produce higher crop yields without compromising soil health; its most significant drawback is the cost of development.[191]
Certification systems are important to the agriculture community and to consumers, as these standards attest to the sustainability of produce. Numerous sustainability standards and certification systems exist, including organic certification, Rainforest Alliance, Fair Trade, UTZ Certified, GlobalGAP, Bird Friendly, and the Common Code for the Coffee Community (4C).[12]These standards specify rules that producers, manufacturers, and traders must follow so that the things they do, make, or grow do not hurt people and the environment.[192]They are also known as Voluntary Sustainability Standards (VSS): private standards that require products to meet specific economic, social, or environmental sustainability metrics. The requirements can refer to product quality or attributes, but also to production and processing methods and transportation. VSS are mostly designed and marketed by non-governmental organizations (NGOs) or private firms, and they are adopted by actors up and down the value chain, from farmers to retailers. Certifications and labels are used to signal the successful implementation of a VSS. According to the ITC Standards Map, agricultural products are the products most widely covered by standards.[193]Around 500 VSS today apply to key exports of many developing countries, such as coffee, tea, bananas, cocoa, palm oil, timber, cotton, and organic agri-foods.[194]VSS have been found to reduce eutrophication, water use, greenhouse gas emissions, and natural ecosystem conversion,[195]and are thus considered a potential tool for sustainable agriculture.
The USDA produces an organic label supported by nationalized standards for farmers and facilities. Certification begins with an organic system plan, which determines how produce will be tilled, grazed, harvested, stored, and transported. This plan also manages and monitors the substances used around the produce, the maintenance needed to protect it, and any nonorganic products that may come in contact with it. The organic system plan is then reviewed and inspected by a USDA certifying agent. Once certification is granted, the produce receives a USDA approval sticker and is distributed across the U.S. To hold farmers accountable and ensure that Americans are receiving organic produce, these inspections are done at least once a year.[196]This is just one example of a sustainable certification system for produce.
Sustainable agriculture is a topic in international policy concerning its potential to reduce environmental risks. In 2011, the Commission on Sustainable Agriculture and Climate Change, as part of its recommendations for policymakers on achieving food security in the face of climate change, urged that sustainable agriculture must be integrated into national and international policy.[197]The Commission stressed that increasing weather variability and climate shocks will negatively affect agricultural yields, necessitating early action to drive change in agricultural production systems towards increasing resilience.[197]It also called for dramatically increased investments in sustainable agriculture in the next decade, including in national research and development budgets,land rehabilitation, economic incentives, and infrastructure improvement.[197]
During the 2021 United Nations Climate Change Conference, 45 countries pledged more than 4 billion dollars for the transition to sustainable agriculture. The organization Slow Food expressed concern about the effectiveness of the spending, as it concentrates on technological solutions and reforestation instead of "a holistic agroecology that transforms food from a mass-produced commodity into part of a sustainable system that works within natural boundaries."[198]
Additionally, the summit included negotiations toward heavily reducing CO2 emissions, becoming carbon neutral, ending deforestation and reliance on coal, and limiting methane emissions.[199][200]
In November, the Climate Action Tracker reported that under current policies global efforts are on track for a 2.7 °C temperature increase, finding that current targets will not meet global needs, with coal and natural gas consumption primarily responsible for the gap in progress.[201][202]Since then, like-minded developing countries, such as those in Africa,[203]have asked for an addendum to the agreement removing the obligation for developing countries to meet the same requirements as wealthy nations.[204]
In May 2020 the European Union published a program named "From Farm to Fork" for making its agriculture more sustainable. On the program's official page, Frans Timmermans, Executive Vice-President of the European Commission, is quoted as saying:
The coronavirus crisis has shown how vulnerable we all are, and how important it is to restore the balance between human activity and nature. At the heart of the Green Deal the Biodiversity and Farm to Fork strategies point to a new and better balance of nature, food systems, and biodiversity; to protect our people's health and well-being, and at the same time to increase the EU's competitiveness and resilience. These strategies are a crucial part of the great transition we are embarking upon.[205]
The program includes the following targets:
Policies from 1930 to 2000
The New Deal implemented policies and programs that promoted sustainable agriculture. The Agricultural Adjustment Act of 1933 provided farmers payments to create a supply management regime that capped production of important crops.[206][207][208]This allowed farmers to focus on growing food rather than competing in a market-based system. The New Deal also provided a monetary incentive for farmers who left some of their fields unsown or ungrazed in order to improve soil conditions.[206]The Cooperative Extension Service was also established, sharing funding responsibilities among the USDA, land-grant universities, and local communities.[207]
From the 1950s to the 1990s the government switched its stance on agriculture policy, which halted sustainable agriculture. The Agricultural Act of 1954 supported farmers with flexible price supports, but only through commodity programs.[209]The Food and Agricultural Act of 1965 introduced new income support payments and continued supply controls but reduced price supports.[209]The Agriculture and Consumer Protection Act of 1973 removed price supports and instead introduced target prices and deficiency payments, continuing to promote commodity crops by lowering interest rates.[209]The Food Security Act of 1985 continued commodity loan programs.[208][209]These policies incentivized profit over sustainability because the US government was encouraging farms to maximize production output rather than imposing checks.[209]Farms were effectively turned into food factories as they grew in size and produced more commodity crops like corn, wheat, and cotton. From 1900 to 2002, the number of farms in the US decreased significantly, while the average farm size increased after 1950.[209][208]
Current Policies
In the United States, the federal Natural Resources Conservation Service (USDA) provides technical and financial assistance to those interested in pursuing natural resource conservation alongside production agriculture. Programs like SARE and the China-UK SAIN help promote research on sustainable agriculture practices and a framework for agriculture and climate change, respectively.
Future Policies
Currently, there are policies on the table that could move the US agriculture system in a more sustainable direction, such as the Green New Deal. This policy promotes decentralizing agrarian governance by breaking up the large commodity farms created from the 1950s to the 1980s.[206]Decentralized governance within the farming community would allow more adaptive management at local levels, focusing on climate change mitigation, food security, and landscape-scale ecological stewardship.[206]The Green New Deal would invest in public infrastructure to support farmers transitioning from the industrial food regime and acquiring agroecological skills.[206]As in the New Deal, it would invest in cooperatives and commons to share and redistribute resources like land, food, equipment, research facilities, personnel, and training programs.[206]These policies and programs would break down barriers that have prevented sustainable farmers and agriculture from taking hold in the United States.[208]
In 2016, the Chinese government adopted a plan to reduce China's meat consumption by 50% to achieve a more sustainable and healthy food system.[210][211]
In 2019, the National Basic Research Program, or Program 973, funded research into Science and Technology Backyards (STBs). STBs are hubs, often created in rural areas with significant rates of small-scale farming, that combine knowledge of traditional practices with new innovations and technology. The purpose of this program was to invest in sustainable farming throughout the country and to increase food production while achieving few negative environmental effects. The program proved successful: the study found that merging traditional practices with appropriate technology was instrumental in achieving higher crop yields.[212]
In collaboration with the Food and Land Use Coalition (FOLU), the CEEW (Council on Energy, Environment and Water) has given an overview of the current state of sustainable agriculture practices and systems (SAPSs) in India.[213]India is aiming to scale up SAPSs, through policymakers, administrators, philanthropists, and others, as a vital alternative to conventional, input-intensive agriculture. These efforts identify 16 SAPSs – including agroforestry, crop rotation, rainwater harvesting, organic farming, and natural farming – using agroecology as an investigative lens. The conclusion is that sustainable agriculture is far from mainstream in India. Several measures for promoting SAPSs, including restructured government support and rigorous evidence generation on benefits and implementation, are in progress in Indian agriculture.
An example of Indian initiatives exploring sustainable farming is the Sowgood Foundation, a nonprofit founded by educator Pragati Chaswal.[214]It started by teaching primary school children about sustainable farming, helping them farm small strips in suburban farmhouses and gardens. Today many government and private schools in Delhi, India have adopted the Sowgood Foundation curriculum for sustainable farming.
In 2012, the Israeli Ministry of Agriculture was at the height of its commitment to sustainable agriculture policy. A major component of this policy was funding programs that made sustainable agriculture accessible to smaller Palestinian-Arab communities. The program was meant to create biodiversity, train farmers in sustainable agriculture methods, and hold regular meetings for agriculture stakeholders.[215]
In 1907, the American author Franklin H. King discussed in his book Farmers of Forty Centuries the advantages of sustainable agriculture and warned that such practices would be vital to farming in the future.[216]The phrase 'sustainable agriculture' was reportedly coined by the Australian agronomist Gordon McClymont.[217]The term became popular in the late 1980s.[172]There was an international symposium on sustainability in horticulture held by the International Society for Horticultural Science at the International Horticultural Congress in Toronto in 2002.[218]At the following congress in Seoul in 2006, the principles were discussed further.[219]
This potential future inability to feed the world's population has been a concern since the English political economist Thomas Malthus in the early 1800s, but has become increasingly important recently.[15]Starting at the very end of the twentieth century and into the early twenty-first, this issue became widely discussed in the U.S. because of growing anxieties about a rapidly increasing global population. Agriculture has long been the biggest industry worldwide and requires significant land, water, and labor inputs. At the turn of the twenty-first century, experts questioned the industry's ability to keep up with population growth.[16]This debate led to concerns over global food insecurity and "solving hunger".[220]
This article incorporates text from a free content work, licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken from The State of the World's Biodiversity for Food and Agriculture − In Brief, FAO.
https://en.wikipedia.org/wiki/Sustainable_agriculture
Sustainable management takes the concepts of sustainability and synthesizes them with the concepts of management. Sustainability has three branches: the environment, the needs of present and future generations, and the economy. Drawing on these branches, sustainable management creates the ability of a system to thrive by maintaining economic viability while meeting the needs of present and future generations through limiting resource depletion.
Sustainable management is needed because it is an important part of maintaining the quality of life on our planet, and it can be applied to all aspects of our lives. For example, a business's practices should be sustainable if it wishes to stay in business: an unsustainable business will, by definition, eventually cease to be competitive. Communities need sustainable management if they are to prosper. Forests and natural resources need sustainable management if they are to remain usable by our generation and future generations. Our personal lives also need to be managed sustainably, whether by making decisions that help sustain our immediate surroundings and environment, or by managing our emotional and physical well-being. Sustainable management can be applied to many things, both literal and abstract; its meaning depends on what it is applied to.
Managers' strategies reflect the mindset of the times. This has been a problem for the evolution of sustainable management practices for two reasons. First, sustainable norms are continually changing: things considered unthinkable a few years ago are now standard practice. Second, practicing sustainable management requires forward thinking, not only in the short term but also in the long term.
Management behavior reflects how accepted conceptions of behavior are defined: forces and beliefs outside a given program push the management along. A manager can take some credit for cultural changes in his or her program, but overall an organization's culture reflects the dominant conceptions of the public at that time. This is exemplified by the managerial actions taken during the periods leading up to the present day, described below:
This was a period in which, despite outside concerns about the environment, industry was able to resist pressure and set its own definitions and regulations.[1]Environmentalists were not viewed as credible sources of information during this time and were usually discredited.
The norms of this period shifted radically with the creation of the U.S. Environmental Protection Agency (EPA) in 1970. The EPA became the mediator between environmentalists and industry, although the two sides never met.[1]During this period, the environment mattered to the majority of industry and business management teams only in terms of compliance with the law.[1]In 1974 a Conference Board survey found that the majority of companies still treated environmental management as a threat,[1]noting a widespread tendency in most of industry to treat pollution control expenditures as non-recoverable investments.[1]By this consensus, environmental protection was considered at best a necessary evil and at worst a temporary nuisance.[1]
By 1982, the EPA had lost its credibility, but at the same time activism became more influential, with increases in the funding and membership of major non-governmental organizations (NGOs).[1]Industry gradually became more cooperative with government, and new managerial structures were implemented to achieve compliance with regulations.[1]
During this period, industry moved to a proactive stance on environmental protection.[1]With this attitude, the issue became one they felt qualified to manage on their own. Although organizational power advanced, concern for the environment kept being pushed down the hierarchy of priorities.[1]
In 1995 Harvard professor Michael Porter wrote in the Harvard Business Review that environmental protection was not a threat to the corporate enterprise but rather an opportunity, one that could increase competitive advantage in the marketplace.[1]Before 2000, companies generally regarded green buildings as interesting experiments but unfeasible projects in the real business world.[2]Since then, several factors, including the ones listed below, have caused major shifts in thinking.[2]The creation of reliable building rating and performance measurement systems for new construction and renovation has helped change corporate perceptions about green building. In 2000, the Washington, D.C.–based United States Green Building Council launched its rigorous Leadership in Energy and Environmental Design (LEED) program.[2]Hundreds of US and international studies have demonstrated the financial advantages of going green: lower utility costs and higher employee productivity.[2]Green building materials, mechanical systems, and furnishings have become more widely available, and prices have dropped considerably.[2]As the norms of acceptable management change, it becomes increasingly apparent that sustainable management is the norm of the future. Currently, many programs, organizations, communities, and businesses follow sustainable management plans, pressing forward with the help of changing social norms and management initiatives.
A manager is a person responsible for planning things that will benefit the situation they are controlling. A manager of sustainability must be able to control issues and plan solutions that are sustainable, so that what they put into place can continue for future generations. The job of a sustainable manager is like other management positions, but additionally they must manage systems so that those systems can support and sustain themselves. Whether managing groups, businesses, families, communities, organizations, agriculture, or the environment, sustainable management can improve productivity, environment, and atmosphere, among other things. Some practical skills needed to perform the job include:
Recently, colleges and universities have even added programs offering Bachelor of Science and Master of Science degrees in sustainable management.
In business, environmentalists are repeatedly seen facing off against industry, with very little meeting in the middle or compromise. When the two sides do come together, the result is a more powerful message, one that more people can understand and embrace.
Organizations need to face the fact that the boundaries of accountability are moving fast. The trend toward sustainable management means that organizations are beginning to implement a systems-wide approach that links the various parts of the business with the greater environment at large.
As sustainable management institutions adapt, it becomes imperative that they project an image of sustainable responsibility for the public to see, because firms are socially based organizations. This can be a double-edged sword, however: firms sometimes focus too much on their image rather than on actually implementing what they project to the public, a practice called greenwashing. It is important that the execution of sustainable management practices is not set aside while the firm tries to appeal to the public with its sustainable management “practices.”
Additionally, companies must make the connection between sustainability as a vision and sustainability as a practice. Managers need to think systematically and realistically about applying traditional business principles to environmental problems. By melding the two concepts, new business principles emerge that can enable some companies (those with the right industry structure, competitive position, and managerial skills) to deliver increased value to shareholders while improving their environmental performance.[4]
Any corporation can become green on a standard budget.[2]By focusing on the big picture, a company can generate more savings and better performance. Through planning, design, and construction based on sustainable values, sustainable management strives to earn LEED points by reducing the facility's footprint and sustainably planning the site with a focus on these three core ideas.[2]To complete a successful green building or business, management also applies cost–benefit analysis in order to allocate funds appropriately.
The economic system, like all systems, is subject to the laws of thermodynamics, which define the limits within which the Earth can successfully process energy and wastes.[5]Managers need to understand that their values are critical factors in their decisions. Many current business values are based on unrealistic economic assumptions; adopting new economic models that take the Earth into account in the decision-making process is at the core of sustainable management.[5]This new management addresses the interrelatedness of the ecosystem and the economic system.[5]
A strategic vision based on the firm's core values guides its decision-making processes at all levels. Sustainable management thus requires determining which business activities fit within the Earth's carrying capacity, and also defining the optimal levels of those activities.[5]Sustainability values form the basis of strategic management, inform the assessment of the costs and benefits of the firm's operations, and are measured against the survival needs of the planet's stakeholders.[5]Sustainability is the core value because it supports a strategic vision of firms in the long term by integrating economic profits with the responsibility to protect the whole environment.[5]
Changing industrial processes so that they actually replenish and magnify the stock of natural capital is another component of sustainable management. One way managers have found to do this is by using a service model of business,[6]which focuses on building relationships with customers instead of on making and selling products.[6]This type of model represents a fundamental change in the way businesses behave. It makes managers aware of the life cycle of their products by leaving the company responsible for the product throughout that life cycle.[6]Because the product remains the business's responsibility, the service model creates an avenue through which managers can find ways to reduce resource use through recycling and product construction.
For communities to improve, sustainable management needs to be in practice. If a community relies on the resources of the surrounding area, those resources need to be used in a sustainable manner to ensure their indefinite supply. A community needs to work together to be productive, and when things need to get done, management needs to take the lead. If sustainable management is practiced in a community, people will want to stay there, and others will recognize its success and want to live in a similar environment as their own unsustainable towns fail. Part of a sustainable management system in a community is the education, cooperation, and responsiveness of the people who live there.[7]
There are new ideas about how a community can be sustainable. These include urban planning approaches that allow people to move about a city in ways that are more sustainable for the environment. If management plans a community that allows people to move without cars, it helps make the community sustainable by increasing mass transit or other modes of transportation. People would spend less time in traffic while improving the environment, and on occasion get some exercise.[8]
Sustainable management provides plans that can improve multiple parts of people's lives, the environment, and the prospects of future generations. If a community sets goals, people are more likely to reduce energy use, water use, and waste; but a community cannot set goals unless it has the management in place to do so.[9]
Part of sustainable management for a community is communicating the ideals and plans for an area to the people who will carry out the plan. Management is not sustainable if the person managing a situation does not communicate what needs to be improved, how it should be improved, why it matters to the people involved, and how they fit into the process.
Taking responsibility for one's actions is part of management, and it is also part of managing oneself sustainably. Managing oneself sustainably involves many factors, because a person first needs to see what they are doing unsustainably and how to become sustainable. Using plastic bags at a checkout line is unsustainable because it creates pollutants, but using reusable biodegradable bags can resolve the problem. This is not only environmentally sustainable but also improves the physical and mental sustainability of the person who uses the reusable bags. It is a physical improvement because people do not have to live with countless plastic bags on the Earth and the pollution that comes with them. It is also an improvement in mental sustainability, because the person who uses the reusable bags gains the feeling of accomplishment that comes from doing the right thing. Deciding to buy local food to strengthen the community through sustainable management can likewise be emotionally, environmentally, and physically rewarding.
In Figure 1,[9]Mckenzie shows how a person can examine a behavior, determine whether it is sustainable, and identify what could replace the unsustainable behavior. Education would be an individual's first step toward managing their life sustainably. For managing one's life, the benefits need to be high and the barriers low. Good management would identify a competing behavior that has no barriers, and arriving at such a behavior requires good problem solving.
Figure 2,[9]from Mckenzie, is an example of what a person might try to change in their life to make it more sustainable. Walking instead of taking a taxi helps the environment, but it also costs time that could be spent with family. The bus falls between walking and taking a taxi, and another option not on the list is riding a bike. Good sustainable management would include all the options that are possible, along with new options that were not available before. These figures are tools for helping people manage their lives sustainably, but there are other ways to think about one's life in order to become more sustainable.
There are very practical needs for the sustainable management of forests. Since forests provide many resources to people and to the world, managing them is critical to keeping those resources available. Managing a forest requires knowledge of how its natural systems work. If a manager knows how the natural system works, then when plans are made for removing resources from the forest, the manager will know how the resources can be removed without damaging it. Because many forests are under the management of regional governments, they do not truly function as their ecosystems naturally developed and are meant to function. An example is the pine flatwoods in Florida. Maintaining that ecosystem requires frequent burning of the forest. Fires are a natural part of the ecosystem, but since wildfires can spread to communities near the forest, the surrounding communities request that the fires be controlled. To maintain flatwoods forests, controlled burning (prescribed burning) is part of the management needed to sustain the forest.[10]
https://en.wikipedia.org/wiki/Sustainable_management
Anonymous is a decentralized international activist and hacktivist collective and movement primarily known for its various cyberattacks against several governments, government institutions and government agencies, corporations, and the Church of Scientology.
Anonymous originated in 2003 on the imageboard 4chan, representing the concept of many online and offline community users simultaneously existing as an "anarchic", digitized "global brain" or "hivemind".[2][3][4]Anonymous members (known as anons) can sometimes be distinguished in public by the wearing of Guy Fawkes masks in the style portrayed in the graphic novel and film V for Vendetta.[5]Some anons also opt to mask their voices through voice changers or text-to-speech programs.
Dozens of people have been arrested for involvement in Anonymous cyberattacks in countries including the United States, the United Kingdom, Australia, the Netherlands, South Africa,[6]Spain, India, and Turkey. Evaluations of the group's actions and effectiveness vary widely. Supporters have called the group "freedom fighters"[7]and digital Robin Hoods,[8]while critics have described them as "a cyber lynch-mob"[9]or "cyber terrorists".[10]In 2012, Time called Anonymous one of the "100 most influential people" in the world.[11]Anonymous' media profile diminished by 2018,[12][13]but the group re-emerged in 2020 to support the George Floyd protests and other causes.[14][15]
The philosophy of Anonymous offers insight into a long-standing political question that has gone unanswered with often tragic consequences for social movements: what does a new form of collective politics look like that wishes to go beyond the identity of the individual subject in late capitalism?[16]
Internal dissent is also a regular feature of the group.[17]A website associated with the group describes it as "an Internet gathering" with "a very loose and decentralized command structure that operates on ideas rather than directives".[17]Gabriella Coleman writes of the group: "In some ways, it may be impossible to gauge the intent and motive of thousands of participants, many of who don't even bother to leave a trace of their thoughts, motivations, and reactions. Among those that do, opinions vary considerably."[18]
Broadly speaking, Anons oppose Internet censorship and control, and the majority of their actions target governments, organizations, and corporations that they accuse of censorship. Anons were early supporters of the global Occupy movement and the Arab Spring.[19]Since 2008, a frequent subject of disagreement within Anonymous has been whether members should focus on pranking and entertainment or on more serious (and, in some cases, political) activism.[20][21]
We [Anonymous] just happen to be a group of people on the Internet who need—just kind of an outlet to do as we wish, that we wouldn't be able to do in regular society. ...That's more or less the point of it. Do as you wish. ... There's a common phrase: 'we are doing it for the lulz.'
Because Anonymous has no leadership, no action can be attributed to the membership as a whole. Parmy Olson and others have criticized media coverage that presents the group as well-organized or homogeneous; Olson writes, "There was no single leader pulling the levers, but a few organizational minds that sometimes pooled together to start planning a stunt."[23]Some members protest using legal means, while others employ illegal measures such as DDoS attacks and hacking.[24]Membership is open to anyone who wishes to state they are a member of the collective;[25]British journalist Carole Cadwalladr of The Observer compared the group's decentralized structure to that of al-Qaeda: "If you believe in Anonymous, and call yourself Anonymous, you are Anonymous."[26]Olson, who formerly described Anonymous as a "brand", stated in 2012 that she now characterized it as a "movement" rather than a group: "anyone can be part of it. It is a crowd of people, a nebulous crowd of people, working together and doing things together for various purposes."[27]
The group's few rules include not disclosing one's identity, not talking about the group, and not attacking media.[28]Members commonly use the tagline "We are Anonymous. We are Legion. We do not forgive. We do not forget. Expect us."[29]Brian Kelly writes that three of the group's key characteristics are "(1) an unrelenting moral stance on issues and rights, regardless of direct provocation; (2) a physical presence that accompanies online hacking activity; and (3) a distinctive brand."[30]
Journalists have commented that Anonymous' secrecy, fabrications, and media awareness pose an unusual challenge for reporting on the group's actions and motivations.[31][32]Quinn Norton of Wired writes: "Anons lie when they have no reason to lie. They weave vast fabrications as a form of performance. Then they tell the truth at unexpected and unfortunate times, sometimes destroying themselves in the process. They are unpredictable."[31]Norton states that the difficulties in reporting on the group cause most writers, including herself, to focus on the "small groups of hackers who stole the limelight from a legion, defied their values, and crashed violently into the law" rather than "Anonymous's sea of voices, all experimenting with new ways of being in the world".[31]
Since 2009, dozens of people have been arrested for involvement in Anonymous cyberattacks, in countries including the U.S., UK, Australia, the Netherlands, Spain, and Turkey.[33]Anons generally protest these prosecutions and describe these individuals as martyrs to the movement.[34]The July 2011 arrest of LulzSec member Topiary became a particular rallying point, leading to a widespread "Free Topiary" movement.[35]
The first person to be sent to jail for participation in an Anonymous DDoS attack was Dmitriy Guzner, an American 19-year-old. He pleaded guilty to "unauthorized impairment of a protected computer" in November 2009 and was sentenced to 366 days in U.S. federal prison.[36][37]
On June 13, 2011, officials in Turkey arrested 32 individuals who were allegedly involved in DDoS attacks on Turkish government websites. These members of Anonymous were captured in different cities of Turkey, including Istanbul and Ankara. According to PC Magazine, they were arrested after attacking websites in response to the Turkish government's demand that ISPs implement a system of filters that many perceived as censorship.[38][39]
Chris Doyon (alias "Commander X"), a self-described leader of Anonymous, was arrested in September 2011 for a cyberattack on the website of Santa Cruz County, California.[40][41]He jumped bail in February 2012 and fled across the border into Canada.[41]
In September 2012, journalist and Anonymous associate Barrett Brown, known for speaking to media on behalf of the group, was arrested hours after posting a video that appeared to threaten FBI agents with physical violence. Brown was subsequently charged with 17 offenses, including publishing personal credit card information from the Stratfor hack.[42]
Several law enforcement agencies took action after Anonymous' Operation Avenge Assange.[43]In January 2011, British police arrested five male suspects between the ages of 15 and 26 on suspicion of participating in Anonymous DDoS attacks.[44]During July 19–20, 2011, as many as 20 or more suspected Anonymous hackers were arrested in the US, UK, and Netherlands. According to statements by U.S. officials, suspects' homes were raided and suspects were arrested in Alabama, Arizona, California, Colorado, Washington DC, Florida, Massachusetts, Nevada, New Mexico, and Ohio. Additionally, a 16-year-old boy was held by police in south London on suspicion of breaching the Computer Misuse Act 1990, and four people were held in the Netherlands.[45][46][47][48]
AnonOps admin Christopher Weatherhead (alias "Nerdo"), a 22-year-old who had reportedly been intimately involved in organizing DDoS attacks during "Operation Payback",[49]was convicted by a UK court on one count of conspiracy to impair the operation of computers in December 2012. He was sentenced to 18 months' imprisonment. Ashley Rhodes, Peter Gibson, and another male had already pleaded guilty to the same charge for actions between August 2010 and January 2011.[49][50]
Evaluations of Anonymous' actions and effectiveness vary widely. In a widely shared post, blogger Patrick Gray wrote that private security firms "secretly love" the group for the way in which it publicizes cyber security threats.[51]Anonymous is sometimes stated to have changed the nature of protesting,[8][9]and in 2012, Time called it one of the "100 most influential people" in the world.[11]
In 2012, Public Radio International reported that the U.S. National Security Agency considered Anonymous a potential national security threat and had warned the president that it could develop the capability to disable parts of the U.S. power grid.[52]In contrast, CNN reported in the same year that "security industry experts generally don't consider Anonymous a major player in the world of cybercrime" due to the group's reliance on DDoS attacks that briefly disabled websites, rather than the more serious damage possible through hacking. One security consultant compared the group to "a jewelry thief that drives through a window, steals jewels, and rather than keep them, waves them around and tosses them out to a crowd... They're very noisy, low-grade crimes."[53]In its 2013 Threats Predictions report, McAfee wrote that the technical sophistication of Anonymous was in decline and that it was losing supporters due to "too many uncoordinated and unclear operations".[54]
Graham Cluley, a security expert for Sophos, argued that Anonymous' actions against child porn websites hosted on a darknet could be counterproductive, commenting that while their intentions may be good, the removal of illegal websites and sharing networks should be performed by the authorities, rather than Internet vigilantes.[55]Some commentators also argued that the DDoS attacks by Anonymous following the January 2012 Stop Online Piracy Act protests had proved counterproductive. Molly Wood of CNET wrote that "[i]f the SOPA/PIPA protests were the Web's moment of inspiring, non-violent, hand-holding civil disobedience, #OpMegaUpload feels like the unsettling wave of car-burning hooligans that sweep in and incite the riot portion of the play."[56]Dwight Silverman of the Houston Chronicle concurred, stating that "Anonymous' actions hurt the movement to kill SOPA/PIPA by highlighting online lawlessness."[57]The Oxford Internet Institute's Joss Wright wrote that "In one sense the actions of Anonymous are themselves, anonymously and unaccountably, censoring websites in response to positions with which they disagree."[58]
Gabriella Coleman has compared the group to the trickster archetype[59]and said that "they dramatize the importance of anonymity and privacy in an era when both are rapidly eroding. Given that vast databases track us, given the vast explosion of surveillance, there's something enchanting, mesmerizing and at a minimum thought-provoking about Anonymous' interventions".[60]When asked what good Anonymous had done for the world, Parmy Olson replied:
In some cases, yes, I think it has in terms of some of the stuff they did in the Middle East supporting the pro-democracy demonstrators. But a lot of bad things too, unnecessarily harassing people – I would class that as a bad thing. DDOSing the CIA website, stealing customer data and posting it online just for shits and giggles is not a good thing.[27]
Quinn Norton of Wired wrote of the group in 2011:
I will confess up front that I love Anonymous, but not because I think they're the heroes. Like Alan Moore's character V who inspired Anonymous to adopt the Guy Fawkes mask as an icon and fashion item, you're never quite sure if Anonymous is the hero or antihero. The trickster is attracted to change and the need for change, and that's where Anonymous goes. But they are not your personal army – that's Rule 44 – yes, there are rules. And when they do something, it never goes quite as planned. The internet has no neat endings.[59]
Furthermore, Landers assessed the following in 2008:
Anonymous is the first internet-based super-consciousness. Anonymous is a group, in the sense that a flock of birds is a group. How do you know they’re a group? Because they’re travelling in the same direction. At any given moment, more birds could join, leave, peel off in another direction entirely.[61]
Sam Esmail shared in an interview with Motherboard that he was inspired by Anonymous when creating the USA Network hacktivist drama Mr. Robot.[62]Furthermore, Wired calls the "Omegas", a fictitious hacker group in the show, "a clear reference to the Anonymous offshoot known as LulzSec".[63]In the TV series Elementary, a hacktivist collective called "Everyone" plays a recurring role; there are several hints of and similarities to Anonymous.[64]
The name Anonymous itself is inspired by the perceived anonymity under which users post images and comments on the Internet. Usage of the term Anonymous in the sense of a shared identity began on imageboards, particularly the /b/ board of 4chan, dedicated to random content and to raiding other websites.[66]A tag of Anonymous is assigned to visitors who leave comments without identifying the originator of the posted content. Users of imageboards sometimes jokingly acted as if Anonymous were a single individual. The concept of the Anonymous entity advanced in 2004 when an administrator on the 4chan image board activated a "Forced_Anon" protocol that signed all posts as Anonymous.[67]As the popularity of imageboards increased, the idea of Anonymous as a collective of unnamed individuals became an Internet meme.[68]
Users of 4chan's /b/ board would occasionally join into mass pranks or raids.[66]In a raid on July 12, 2006, for example, large numbers of 4chan readers invaded the Finnish social networking site Habbo Hotel with identical avatars; the avatars blocked regular Habbo members from accessing the digital hotel's pool, stating it was "closed due to fail and AIDS".[69]Future LulzSec member Topiary became involved with the site at this time, inviting large audiences to listen to his prank phone calls via Skype.[70][a]Due to the growing traffic on 4chan's board, users soon began to plot pranks off-site using Internet Relay Chat (IRC).[72]These raids resulted in the first mainstream press story on Anonymous, a report by Fox station KTTV in Los Angeles, California, in the U.S. The report called the group "hackers on steroids", "domestic terrorists", and an "Internet hate machine".[65][73]
Encyclopedia Dramatica was founded in 2004 by Sherrod DeGrippo, initially as a means of documenting gossip related to LiveJournal, but it was quickly adopted as a major platform by Anonymous for parody and other purposes.[74]The not-safe-for-work site celebrates a subversive "trolling culture", and documents Internet memes, culture, and events, such as mass pranks, trolling events, "raids", large-scale failures of Internet security, and criticism of Internet communities that are accused of self-censorship to gain prestige or positive coverage from traditional and established media outlets. Journalist Julian Dibbell described Encyclopedia Dramatica as the site "where the vast parallel universe of Anonymous in-jokes, catchphrases, and obsessions is lovingly annotated, and you will discover an elaborate trolling culture: Flamingly racist and misogynist content lurks throughout, all of it calculated to offend."[74]The site also played a role in the anti-Scientology campaign of Project Chanology.[75]
On April 14, 2011, the original URL of the site was redirected to a new website named Oh Internet that bore little resemblance to Encyclopedia Dramatica. Parts of the ED community harshly criticized the changes.[76]In response, Anonymous launched "Operation Save ED" to rescue and restore the site's content.[77]The Web Ecology Project made a downloadable archive of former Encyclopedia Dramatica content.[78][79]The site's reincarnation was initially hosted at encyclopediadramatica.ch on servers owned by Ryan Cleary, who was later arrested in relation to attacks by LulzSec against Sony.[80]
Anonymous first became associated with hacktivism[b]in 2008 following a series of actions against the Church of Scientology known as Project Chanology. On January 15, 2008, the gossip blog Gawker posted a video in which celebrity Scientologist Tom Cruise praised the religion,[81]and the Church responded with a cease-and-desist letter for violation of copyright.[82]4chan users organized a raid against the Church in retaliation, prank-calling its hotline, sending black faxes designed to waste ink cartridges, and launching DDoS attacks against its websites.[83][84]
The DDoS attacks were at first carried out with the Gigaloader and JMeter applications. Within a few days, these were supplanted by the Low Orbit Ion Cannon (LOIC), a network stress-testing application that allows users to flood a server with TCP or UDP packets. The LOIC soon became a signature weapon in the Anonymous arsenal; however, it would also lead to a number of arrests of less experienced Anons who failed to conceal their IP addresses.[85]Some operators in Anonymous IRC channels incorrectly told new volunteers, or outright lied to them, that using the LOIC carried no legal risk.[86][87]
During the DDoS attacks, a group of Anons uploaded a YouTube video in which a robotic voice speaks on behalf of Anonymous, telling the "leaders of Scientology" that "For the good of your followers, for the good of mankind—for the laughs—we shall expel you from the Internet."[88][89]Within ten days, the video had attracted hundreds of thousands of views.[89]
With more than 10,000 followers on their IRC server waiting for instructions, they felt they had to come up with something. The idea of a worldwide protest emerged because they wanted a symbol or image to unify the protests, and since all protesters were supposed to be anonymous, it was decided to use a mask. Because the short preparation time caused shipment problems, they improvised, calling all the costume and comic-book shops in major cities around the world, and found that the only mask available in all of those cities was the Guy Fawkes mask from the graphic novel and film V for Vendetta, in which an anarchist revolutionary battles a totalitarian government. The choice of mask was well received. On February 10, thousands of Anonymous members joined simultaneous protests at Church of Scientology facilities in 142 cities in 43 countries.[90][91][92]The stylized Guy Fawkes masks soon became a popular symbol for Anonymous.[93]In-person protests against the Church continued throughout the year, including "Operation Party Hard" on March 15 and "Operation Reconnect" on April 12.[94][95][96]By mid-year, however, the protests were drawing far fewer participants, and many of the organizers in IRC channels had begun to drift away from the project.[97]
By the start of 2009, Scientologists had stopped engaging with protesters and had improved online security, and actions against the group had largely ceased. A period of infighting followed between the politically engaged members (referred to as "moralfags" in the parlance of 4chan) and those seeking to provoke for entertainment (trolls).[98]By September 2010, the group had received little publicity for a year and faced a corresponding drop in member interest; its raids diminished greatly in size and moved largely off of IRC channels, organizing again from the chan boards, particularly /b/.[99]
In September 2010, however, Anons became aware of Aiplex Software, an Indian software company that contracted with film studios to launch DDoS attacks on websites used by copyright infringers, such as The Pirate Bay.[100][99]Coordinating through IRC, Anons launched a DDoS attack on September 17 that shut down Aiplex's website for a day. Primarily using LOIC, the group then targeted the Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA), successfully bringing down both sites.[101]On September 19, future LulzSec member Mustafa Al-Bassam (known as "Tflow") and other Anons hacked the website of Copyright Alliance, an anti-infringement group, and posted the name of the operation: "Payback Is A Bitch", or "Operation Payback" for short.[102]Anons also issued a press release, stating:
Anonymous is tired of corporate interests controlling the internet and silencing the people’s rights to spread information, but more importantly, the right to SHARE with one another. The RIAA and the MPAA feign to aid the artists and their cause; yet they do no such thing. In their eyes is not hope, only dollar signs. Anonymous will not stand this any longer.[103]
As IRC network operators began shutting down networks involved in DDoS attacks, Anons organized a group of servers to host an independent IRC network, titled AnonOps.[104]Operation Payback's targets rapidly expanded to include the British law firm ACS:Law,[105]the Australian Federation Against Copyright Theft,[106]the British nightclub Ministry of Sound,[107]the Spanish copyright society Sociedad General de Autores y Editores,[108]the U.S. Copyright Office,[109]and the website of Gene Simmons of Kiss.[110]By October 7, 2010, total downtime for all websites attacked during Operation Payback was 537.55 hours.[110]
In November 2010, the organization WikiLeaks began releasing hundreds of thousands of leaked U.S. diplomatic cables. In the face of legal threats against the organization by the U.S. government, Amazon.com booted WikiLeaks from its servers, and PayPal, MasterCard, and Visa cut off service to the organization.[111]Operation Payback then expanded to include "Operation Avenge Assange", and Anons issued a press release declaring PayPal a target.[112]Launching DDoS attacks with the LOIC, Anons quickly brought down the website of the PayPal blog; PostFinance, a Swiss financial company denying service to WikiLeaks; EveryDNS, a web-hosting company that had also denied service; and the website of U.S. Senator Joe Lieberman, who had supported the push to cut off services.[113]
On December 8, Anons launched an attack against PayPal's main site. According to Topiary, who was in the command channel during the attack, the LOIC proved ineffective, and Anons were forced to rely on the botnets of two hackers for the attack, marshaling hijacked computers for a concentrated assault.[114] Security researcher Sean-Paul Correll also reported that the "zombie computers" of involuntary botnets had provided 90% of the attack.[115] Topiary states that he and other Anons then "lied a bit to the press to give it that sense of abundance", exaggerating the role of the grassroots membership. However, this account was disputed.[116]
The attacks brought down PayPal.com for an hour on December 8 and for another brief period on December 9.[117] Anonymous also disrupted the sites for Visa and MasterCard on December 8.[118] Anons had announced an intention to bring down Amazon.com as well, but failed to do so, allegedly because of infighting with the hackers who controlled the botnets.[119] PayPal estimated the damage to have cost the company US$5.5 million. It later provided the IP addresses of 1,000 of its attackers to the FBI, leading to at least 14 arrests.[120] On Thursday, December 5, 2013, 13 of the PayPal 14 pleaded guilty to taking part in the attacks.[121]
In the years following Operation Payback, targets of Anonymous protests, hacks, and DDoS attacks continued to diversify. Beginning in January 2011, Anons took a number of actions known initially as Operation Tunisia in support of Arab Spring movements. Tflow created a script that Tunisians could use to protect their web browsers from government surveillance, while fellow future LulzSec member Hector Xavier Monsegur (alias "Sabu") and others allegedly hijacked servers from a London web-hosting company to launch a DDoS attack on Tunisian government websites, taking them offline. Sabu also used a Tunisian volunteer's computer to hack the website of Prime Minister Mohamed Ghannouchi, replacing it with a message from Anonymous.[122] Anons also helped Tunisian dissidents share videos online about the uprising.[123] In Operation Egypt, Anons collaborated with the activist group Telecomix to help dissidents access government-censored websites.[123] Sabu and Topiary went on to participate in attacks on government websites in Bahrain, Egypt, Libya, Jordan, and Zimbabwe.[124]
Tflow, Sabu, Topiary, and Ryan Ackroyd (known as "Kayla") collaborated in February 2011 on a cyber-attack against Aaron Barr, CEO of the computer security firm HBGary Federal, in retaliation for his research on Anonymous and his threat to expose members of the group. Using a SQL injection weakness, the four hacked the HBGary site, used Barr's captured password to vandalize his Twitter feed with racist messages, and released an enormous cache of HBGary's e-mails in a torrent file on Pirate Bay.[125] The e-mails stated that Barr and HBGary had proposed to Bank of America a plan to discredit WikiLeaks in retaliation for a planned leak of Bank of America documents.[126] The leak caused substantial public relations harm to the firm and led one U.S. congressman to call for a congressional investigation.[127] Barr resigned as CEO before the end of the month.[128]
Several attacks by Anons have targeted organizations accused of homophobia. In February 2011, an open letter was published on AnonNews.org threatening the Westboro Baptist Church, an organization based in Kansas in the U.S. known for picketing funerals with signs reading "God Hates Fags".[129] During a live radio current affairs program in which Topiary debated church member Shirley Phelps-Roper, CosmoTheGod hacked one of the organization's websites.[130][131] After the church announced its intentions in December 2012 to picket the funerals of the Sandy Hook Elementary School shooting victims, CosmoTheGod published the names, phone numbers, and e-mail and home addresses of church members and brought down GodHatesFags.com with a DDoS attack.[132] In August 2012, Anons hacked the site of Ugandan Prime Minister Amama Mbabazi in retaliation for the Parliament of Uganda's consideration of an anti-homosexuality law permitting capital punishment.[133]
In April 2011, Anons launched a series of attacks against Sony in retaliation for trying to stop hacks of the PlayStation 3 game console. More than 100 million Sony accounts were compromised, and the Sony services Qriocity and PlayStation Network were taken down for a month apiece by cyberattacks.[134]
In July 2011, Anonymous announced the launch of its social media platform Anonplus.[135] This came after Anonymous' presence was removed from Google+.[136] The site was later hacked by a Turkish hacker group, which placed a message on the front page and replaced its logo with a picture of a dog.[137]
In August 2011, Anons launched an attack against BART in San Francisco, which they dubbed #OpBart. The attack, made in response to the killing of Charles Hill a month prior, resulted in customers' personal information being leaked onto the group's website.[138]
When the Occupy Wall Street protests began in New York City in September 2011, Anons were early participants and helped spread the movement to other cities such as Boston.[19] In October, some Anons attacked the website of the New York Stock Exchange while other Anons publicly opposed the action via Twitter.[53] Some Anons also helped organize an Occupy protest outside the London Stock Exchange on May 1, 2012.[139]
Anons launched Operation Darknet in October 2011, targeting websites hosting child pornography. In particular, the group hacked a child pornography site called "Lolita City" hosted by Freedom Hosting, releasing 1,589 usernames from the site. Anons also said that they had disabled forty image-swapping pedophile websites that employed the anonymity network Tor.[140] In 2012, Anons leaked the names of users of a suspected child porn site in OpDarknetV2.[141] Anonymous launched the #OpPedoChat campaign on Twitter in 2012 as a continuation of Operation Darknet. In an attempt to eliminate child pornography from the internet, the group posted the emails and IP addresses of suspected pedophiles on the online forum PasteBin.[142][143]
In 2011, the Koch Industries website was attacked following the company's actions against union members, making the website inaccessible for 15 minutes. In 2013, one participant, a 38-year-old truck driver, pleaded guilty to taking part in the attack for a period of one minute. He received a sentence of two years of federal probation and was ordered to pay $183,000 in restitution, the amount Koch stated it had paid a consultancy organization, despite the attack being only a denial of service.[144]
On January 19, 2012, the U.S. Department of Justice shut down the file-sharing site Megaupload on allegations of copyright infringement. Anons responded with a wave of DDoS attacks on U.S. government and copyright organizations, shutting down the sites for the RIAA, MPAA, Broadcast Music, Inc., and the FBI.[145]
In April 2012, Anonymous hacked 485 Chinese government websites, some more than once, to protest the treatment of their citizens. They urged people to "fight for justice, fight for freedom, [and] fight for democracy".[146][147][148]
In 2012, Anonymous launched Operation Anti-Bully: Operation Hunt Hunter in response to Hunter Moore's revenge porn site, "Is Anyone Up?" Anonymous crashed Moore's servers and publicized much of his personal information online, including his social security number. The organization also published the personal information of Andrew Myers, the proprietor of "Is Anyone Back", a copycat of Moore's "Is Anyone Up?"[149]
In response to Operation Pillar of Defense, a November 2012 Israeli military operation in the Gaza Strip, Anons took down hundreds of Israeli websites with DDoS attacks.[150] Anons pledged another "massive cyberassault" against Israel in April 2013 in retaliation for its actions in Gaza, promising to "wipe Israel off the map of the Internet".[151] However, its DDoS attacks caused only temporary disruptions, leading cyberwarfare experts to suggest that the group had been unable to recruit or hire botnet operators for the attack.[152][153]
On November 5, 2013, Anonymous protesters gathered around the world for the Million Mask March. Demonstrations were held in 400 cities around the world to coincide with Guy Fawkes Night.[154]
Operation Safe Winter was an effort to raise awareness about homelessness through the collection, collation, and redistribution of resources. It began on November 7, 2013,[155] after an online call to action from Anonymous UK. Three missions using a charity framework were suggested in the original global call, spawning a variety of direct actions, from used-clothing drives to community potluck feeding events in the UK, US, and Turkey.[156] The #OpSafeWinter call to action quickly spread through mutual aid communities like Occupy Wall Street[157] and its offshoot groups like the open-source-based OccuWeather.[158] With the addition of the long-term mutual aid communities of New York City and online hacktivists in the US, it took on an additional three suggested missions.[159] By encouraging participation from the general public, the operation raised questions of privacy and the changing nature of the Anonymous community's use of monikers. While causing division in its own online network, the project to support those living on the streets partnered with many efforts and organizations not traditionally associated with Anonymous or online activists.
In the wake of the fatal police shooting of unarmed African-American Michael Brown in Ferguson, Missouri, "Operation Ferguson", a hacktivist organization that claimed to be associated with Anonymous, organized cyberprotests against police, setting up a website and a Twitter account to do so.[160] The group promised that if any protesters were harassed or harmed, they would attack the city's servers and computers, taking them offline.[160] City officials said that e-mail systems were targeted and phones died, while the Internet crashed at the City Hall.[160][161] Prior to August 15, members of Anonymous corresponding with Mother Jones said that they were working on confirming the identity of the undisclosed police officer who shot Brown and would release his name as soon as they did.[162] On August 14, Anonymous posted on its Twitter feed what it claimed was the name of the officer involved in the shooting.[163][164] However, police said the identity released by Anonymous was incorrect.[165] Twitter subsequently suspended the Anonymous account from its service.[166]
It was reported on November 19, 2014, that Anonymous had declared cyber war on the Ku Klux Klan (KKK) the previous week, after the KKK had made death threats following the Ferguson riots. They hacked the KKK's Twitter account, attacked servers hosting KKK sites, and started to release the personal details of members.[167]
On November 24, 2014, Anonymous shut down the Cleveland city website and posted a video after Tamir Rice, a twelve-year-old boy armed only with a BB gun, was shot to death by a police officer in a Cleveland park.[168] Anonymous also used BeenVerified to uncover the phone number and address of a police officer involved in the shooting.[169]
In January 2015, Anonymous released a video and a statement via Twitter condemning the attack on Charlie Hebdo, in which 12 people, including eight journalists, were fatally shot. The video, claiming to be "a message for al-Qaeda, the Islamic State and other terrorists", was uploaded to the group's Belgian account.[170] The announcement stated that "We, Anonymous around the world, have decided to declare war on you, the terrorists" and promised to avenge the killings by "shut[ting] down your accounts on all social networks."[171] On January 12, they brought down a website that was suspected to belong to one of these groups.[172] Critics of the action warned that taking down extremists' websites would make them harder to monitor.[173]
On June 17, 2015, Anonymous claimed responsibility for a denial-of-service attack against Canadian government websites in protest of the passage of bill C-51, anti-terror legislation that grants additional powers to Canadian intelligence agencies.[174] The attack temporarily affected the websites of several federal agencies.
On October 28, 2015, Anonymous announced that it would reveal the names of up to 1,000 members of the Ku Klux Klan and other affiliated groups, stating in a press release, "You are terrorists that hide your identities beneath sheets and infiltrate society on every level. The privacy of the Ku Klux Klan no longer exists in cyberspace."[175] On November 2, a list of 57 phone numbers and 23 email addresses (that allegedly belonged to KKK members) was reportedly published and received media attention.[176] However, a tweet from the "@Operation_KKK" Twitter account the same day denied it had released that information.[177][178][179] The group stated it planned to, and later did, reveal the names on November 5.[180]
Since 2013, Saudi Arabian hacktivists have been targeting government websites to protest the actions of the regime.[181] These actions have seen attacks supported by the possibly Iranian-backed Yemen Cyber Army.[182] An offshoot of Anonymous self-described as Ghost Security or GhostSec started targeting Islamic State-affiliated websites and social media handles.[183][184]
In November 2015, Anonymous announced a major, sustained operation against ISIS following the November 2015 Paris attacks,[185] declaring: "Anonymous from all over the world will hunt you down. You should know that we will find you and we will not let you go."[186][187] ISIS responded on Telegram by calling them "idiots", and asking "What they gonna to [sic] hack?"[188][189] By the next day, however, Anonymous claimed to have taken down 3,824 pro-ISIS Twitter accounts, and by the third day more than 5,000,[190] and to have doxxed ISIS recruiters.[191] A week later, Anonymous increased their claim to 20,000 pro-ISIS accounts and released a list of the accounts.[192][193] The list included the Twitter accounts of Barack Obama, Hillary Clinton, The New York Times, and BBC News. The BBC reported that most of the accounts on the list appeared to be still active.[194] A spokesman for Twitter told The Daily Dot that the company was not using the lists of accounts being reported by Anonymous, as they had been found to be "wildly inaccurate" and included accounts used by academics and journalists.[195]
In 2015, a group claiming affiliation with Anonymous and calling itself AnonSec claimed to have hacked and gathered almost 276 GB of data from NASA servers, including flight and radar logs, videos, and multiple documents related to ongoing research.[196] The AnonSec group also claimed to have gained access to one of NASA's Global Hawk drones and released some video footage purportedly from the drone's cameras. Part of the data was released by AnonSec on the Pastebin service as an Anon Zine.[197] NASA denied the hack, asserting that control of the drones was never compromised; it acknowledged that the photos released along with the content are real photographs of its employees, but said that most of the data was already available in the public domain.[198]
The Blink Hacker Group, associating themselves with Anonymous, claimed to have hacked Thai prison websites and servers.[199] The compromised data was shared online, with the group claiming that it was giving the data back to Thailand Justice and the citizens of Thailand. The hack was done in response to news about the mistreatment of prisoners in Thailand.[200]
A group calling themselves Anonymous Africa launched a number of DDoS attacks on websites associated with the controversial South African Gupta family in mid-June 2016. Gupta-owned companies targeted included the websites of Oakbay Investments, The New Age, and ANN7. The websites of the South African Broadcasting Corporation, the political party Economic Freedom Fighters, and Zimbabwe's Zanu-PF were also attacked for "politicising racism".[201]
In late 2017, the QAnon conspiracy theory first emerged on 4chan, and adherents used similar terminology and branding to Anonymous. In response, in 2018, anti-Trump members of Anonymous warned that QAnon was stealing the collective's branding and vowed to oppose the theory.[202][203][13]
As early as 2017, however, some members had stood against similar groups[202][203][204] and against QAnon itself.[202]
In February 2020, Anonymous hacked the United Nations' website and created a page for Taiwan, a country which has not had a seat at the UN since 1971.[205][206] The hacked page featured the flag of Taiwan, the KMT emblem, a Taiwan independence flag, and the Anonymous logo along with a caption.[205][207] The hacked server belonged to the United Nations Department of Economic and Social Affairs.[205]
In the wake of protests across the U.S. following the murder of George Floyd, Anonymous released a video on Facebook and sent it to the Minneapolis Police Department on May 28, 2020. Titled "Anonymous Message To The Minneapolis Police Department", the video states that the group will seek revenge on the Minneapolis Police Department and "expose their crimes to the world".[208][non-primary source needed][209] According to Bloomberg, the video was initially posted on an unconfirmed Anonymous Facebook page on May 28.[210] According to BBC News, that same Facebook page had little prior notoriety and had published videos of dubious content about UFOs and "China's plan to take over the world"; it gained attention after the George Floyd video was published,[211] and the Minneapolis police department's website went down.[212] Later, Minnesota Governor Tim Walz said that every computer in the region suffered a sophisticated attack.[213] According to BBC News, the DDoS (distributed denial of service) attack on the police website was unsophisticated.[211] According to researcher Troy Hunt, these breaches of the site may have stemmed from old credentials. Regarding unverified Twitter posts that also went viral, which showed police radio channels playing music and blocking communication, experts pointed out that, if real, this was unlikely to be due to a hack.[211] Later, CNET confirmed that the leaks attributed to the police website were false and that someone was taking advantage of the repercussions of George Floyd's murder to spread misinformation.[214]
On June 19, 2020, Anonymous published BlueLeaks, sometimes referred to by the Twitter hashtag #BlueLeaks: 269.21 gigabytes of internal U.S. law enforcement data released through the activist group Distributed Denial of Secrets, which called it the "largest published hack of American law enforcement agencies".[215] The data, comprising internal intelligence, bulletins, emails, and reports, was produced between August 1996 and June 2020[216] by more than 200 law enforcement agencies, which provided it to fusion centers. It was obtained through a security breach of Netsential, a web developer that works with fusion centers and law enforcement.[217] In Maine, legislators took interest in BlueLeaks because of details about the Maine Information and Analysis Center, which is under investigation. The leaks showed the fusion center was spying on and keeping records on people who had been legally protesting or had been deemed "suspicious" but had committed no crime.[218]
In 2020, Anonymous started cyber-attacks against the Nigerian government in support of the #EndSARS movement in Nigeria. The group's attacks were tweeted by a member of Anonymous called LiteMods. The websites of the EFCC, INEC, and various other Nigerian government bodies were taken down with DDoS attacks, and the websites of some banks were compromised.[219][220][221][222]
The Texas Heartbeat Act, a law which bans abortions after six weeks of pregnancy, came into effect in Texas on September 1, 2021. The law relies on private citizens to file civil lawsuits against anyone who performs or induces an abortion, or aids and abets one, once "cardiac activity" in an embryo can be detected via transvaginal ultrasound, which is usually possible beginning at around six weeks of pregnancy.[223] Shortly after the law came into effect, anti-abortion organizations set up websites to collect "whistleblower" reports of suspected violators of the bill.[224]
On September 3, Anonymous announced "Operation Jane", a campaign focused on stymying those who attempted to enforce the law by "exhaust[ing] the investigational resources of bounty hunters, their snitch sites, and online gathering spaces until no one is able to maintain data integrity".[224] On September 11, the group hacked the website of the Republican Party of Texas, replacing it with text about Anonymous, an invitation to join Operation Jane, and a Planned Parenthood donation link.[225]
On September 13, Anonymous released a large quantity of private data belonging to Epik, a domain registrar and web hosting company known for providing services to websites that host far-right, neo-Nazi, and other extremist content.[226] Epik had briefly provided services to an abortion "whistleblower" website run by the anti-abortion Texas Right to Life organization, but the reporting form went offline on September 4 after Epik told the group it had violated the terms of service by collecting private information about third parties.[227] The data included domain purchase and transfer details, account credentials and logins, payment history, employee emails, and unidentified private keys.[228] The hackers claimed they had obtained "a decade's worth of data", including all customers and all domains ever hosted or registered through the company, with poorly encrypted passwords and other sensitive data stored in plaintext.[228][229] Later on September 13, the Distributed Denial of Secrets (DDoSecrets) organization said it was working to curate the allegedly leaked data for more accessible download, describing it as "180 gigabytes of user, registration, forwarding and other information".[230] Publications including The Daily Dot and The Record by Recorded Future subsequently confirmed the veracity of the hack and the types of data that had been exposed.[231][229] Anonymous released another leak on September 29, this time publishing bootable disk images of Epik's servers;[232][233] more disk images as well as some leaked documents from the Republican Party of Texas appeared on October 4.[234]
On February 25, 2022, Twitter accounts associated with Anonymous declared that they had launched cyber operations against the Russian Federation, in retaliation for the invasion of Ukraine ordered by Russian president Vladimir Putin. The group later temporarily disabled websites such as RT.com and the website of the Defence Ministry, along with other state-owned websites.[235][236][237][238][239] Anonymous also leaked 200 GB worth of emails from the Belarusian weapons manufacturer Tetraedr, which provided logistical support for Russia in the invasion of Ukraine.[240] Anonymous also hacked into Russian TV channels, playing Ukrainian music[241] through them and showing uncensored news of events in Ukraine.[242]
On March 7, 2022, Anonymous actors DepaixPorteur and TheWarriorPoetz declared on Twitter[243] that they had hacked 400 Russian surveillance cameras and broadcast them on a website.[244] They called this operation "Russian Camera Dump".[243]
Between March 25, 2022, and June 1, 2022, DDoSecrets collected hundreds of gigabytes of data and millions of emails allegedly from the Central Bank of Russia,[245] Capital Legal Services,[246] the All-Russia State Television and Radio Broadcasting Company (VGTRK),[247] Aerogas,[248] the Blagoveshchensk City Administration,[246] Continent Express,[249] Gazregion,[250] GUOV i GS - General Dept. of Troops and Civil Construction,[251] Accent Capital,[252] ALET/АЛЕТ, CorpMSP,[253] the Nikolai M. Knipovich Polar Research Institute of Marine Fisheries and Oceanography (PINRO),[254] the Achinsk City Government,[254][255] SOCAR Energoresource,[254][256] Metprom Group LLC,[257] and the Vyberi Radio / Выбери Радио group,[258] all of which were allegedly hacked by Anonymous and the Anonymous-aligned NB65.[246]
On September 18, 2022, YourAnonSpider hacked the official webpage of Iran's Supreme Leader, Ali Khamenei, in retaliation for the death of Mahsa Amini.[259] Anonymous launched a cyber operation against the Iranian government over the alleged murder of Mahsa Amini, carrying out distributed denial of service (DDoS) attacks against Iran's government and state-owned websites.[260] On September 23, 2022, a hacktivist named "Edaalate Ali" hacked an Iranian state TV channel in the middle of a broadcast and released CCTV footage of Iran's prison facilities.[261][262] On October 23, 2022, an Iranian hacker group known as "Black Reward" published confidential files and documents from an email system belonging to Iran's nuclear program.[263][264] Black Reward announced on their Telegram channel that they had hacked into 324 emails containing more than a hundred thousand messages and over 50 gigabytes of files.[265] A hacktivist group named "Lab Dookhtegan" published Microsoft Excel macros and PowerShell exploits that APT34 reportedly used to target organizations across the world.[266][267][268]
In response to the 2022 COVID-19 protests in China, "Anonymous OpIran" launched Operation White Paper, attacking and taking down Chinese government-controlled websites and leaking some Chinese government officials' personal information.[269]
In May 2011, the small group of Anons behind the HBGary Federal hack, including Tflow, Topiary, Sabu, and Kayla, formed the hacker group "Lulz Security", commonly abbreviated "LulzSec". The group's first attack was against Fox.com, leaking several passwords, LinkedIn profiles, and the names of 73,000 X Factor contestants. In May 2011, members of Lulz Security gained international attention for hacking into the American Public Broadcasting Service (PBS) website. They stole user data and posted a fake story on the site that claimed that rappers Tupac Shakur and Biggie Smalls were still alive and living in New Zealand.[270] LulzSec stated that some of its hacks, including its attack on PBS, were motivated by a desire to defend WikiLeaks and its informant Chelsea Manning.[271]
In June 2011, members of the group claimed responsibility for an attack against Sony Pictures that took data that included "names, passwords, e-mail addresses, home addresses and dates of birth for thousands of people."[272] In early June, LulzSec hacked into and stole user information from the pornography website www.pron.com, obtaining and publishing around 26,000 e-mail addresses and passwords.[273] On June 14, 2011, LulzSec took down four websites by request of fans as part of their "Titanic Take-down Tuesday". These websites were Minecraft, League of Legends, The Escapist, and IT security company FinFisher.[274] They also attacked the login servers of the multiplayer online game EVE Online, which also disabled the game's front-facing website, and the League of Legends login servers. Most of the takedowns were performed with DDoS attacks.[275]
LulzSec also hacked a variety of government-affiliated sites, such as chapter sites of InfraGard, a non-profit organization affiliated with the FBI.[276] The group leaked some InfraGard member e-mails and a database of local users.[277] On June 13, LulzSec released the e-mails and passwords of a number of users of senate.gov, the website of the U.S. Senate.[278] On June 15, LulzSec launched an attack on cia.gov, the public website of the U.S. Central Intelligence Agency, taking the website offline for several hours with a distributed denial-of-service attack.[279] On December 2, an offshoot of LulzSec calling itself LulzSec Portugal attacked several sites related to the government of Portugal. The websites for the Bank of Portugal, the Assembly of the Republic, and the Ministry of Economy, Innovation and Development all became unavailable for a few hours.[280]
On June 26, 2011, the core LulzSec group announced it had reached the end of its "50 days of lulz" and was ceasing operations.[281]Sabu, however, had already been secretly arrested on June 7 and then released to work as an FBI informant. His cooperation led to the arrests of Ryan Cleary, James Jeffery, and others.[282]Tflow was arrested on July 19, 2011,[283]Topiary was arrested on July 27,[284]and Kayla was arrested on March 6, 2012.[285]Topiary, Kayla, Tflow, and Cleary pleaded guilty in April 2013 and were scheduled to be sentenced in May 2013.[286]In April 2013, Australian police arrested the alleged LulzSec leader Aush0k, but subsequent prosecutions failed to establish police claims.[287][288]
Beginning in June 2011, hackers from Anonymous and LulzSec collaborated on a series of cyber attacks known as "Operation AntiSec". On June 23, in retaliation for the passage of the immigration enforcement bill Arizona SB 1070, LulzSec released a cache of documents from the Arizona Department of Public Safety, including the personal information and home addresses of many law enforcement officers.[289] On June 22, LulzSec Brazil took down the websites of the Government of Brazil and the President of Brazil.[290][291] Later data dumps included the names, addresses, phone numbers, Internet passwords, and Social Security numbers of police officers in Arizona,[292] Missouri,[293] and Alabama.[294] AntiSec members also stole police officer credit card information to make donations to various causes.[295]
On July 18, LulzSec hacked into and vandalized the website of British newspaper The Sun in response to a phone-hacking scandal.[296][297] Other targets of AntiSec actions have included FBI contractor ManTech International,[298] computer security firm Vanguard Defense Industries,[299] and defense contractor Booz Allen Hamilton; from the latter, the group released 90,000 military e-mail accounts and their passwords.[300]
In December 2011, AntiSec member "sup_g" (alleged by the U.S. government to be Jeremy Hammond) and others hacked Stratfor, a U.S.-based intelligence company, vandalizing its web page and publishing 30,000 credit card numbers from its databases.[301] AntiSec later released millions of the company's e-mails to Wikileaks.[302]
https://en.wikipedia.org/wiki/Anonymous_(group)
The Slashdot effect, also known as slashdotting or the hug of death, occurs when a popular website links to a smaller website, causing a massive increase in traffic. This overloads the smaller site, causing it to slow down or even temporarily become unavailable. Typically, less robust sites are unable to cope with the huge increase in traffic and become unavailable; common causes are lack of sufficient data bandwidth, servers that fail to cope with the high number of requests, and traffic quotas. Sites that are maintained on shared hosting services often fail when confronted with the Slashdot effect. This has the same effect as a denial-of-service attack, albeit accidentally. The name stems from the huge influx of web traffic which would result from the technology news site Slashdot linking to websites. The term flash crowd is a more generic term.[1]
The original circumstances have changed, as flash crowds from Slashdot were reported in 2005 to be diminishing due to competition from similar sites[2] and the general adoption of elastically scalable cloud hosting platforms.
The term "Slashdot effect" refers to the phenomenon of a website becoming virtually unreachable because too many people are hitting it after the site was mentioned in an interesting article on the popular Slashdot news service. It was later extended to describe any similar effect from being listed on a popular site.[3]
The effect has been associated with other websites or metablogs such as Fark, Digg, Drudge Report, Imgur, Reddit, and Twitter, leading to terms such as being farked or drudged, being under the Reddit effect, or receiving a hug of death from the site in question.[4][5] Another generic term, "flash crowd",[6] originates from Larry Niven's 1973 novella by that name, in which the invention of inexpensive teleportation allows crowds to materialize almost instantly at the sites of interesting news stories.
Sites such asSlashdot, Digg, Reddit, StumbleUpon and Fark consist of brief submitted stories and a self-moderated discussion on each story. The typical submission introduces a news item or website of interest bylinkingto it. In response, large masses of readers tend to simultaneously rush to view the referenced sites. The ensuing flood of page requests from readers can exceed the site's available bandwidth or the ability of its servers to respond, and render the site temporarily unreachable.
Google Doodles, which link to search results on the doodle topic, also produce large increases in traffic from the search results page.[7]
Major news sites or corporate websites are typically engineered to serve large numbers of requests and therefore do not normally exhibit this effect. Websites that fall victim may be hosted on home servers, offer large images or movie files, or have inefficiently generated dynamic content (e.g. many database hits for every web hit, even if all web hits request the same page). These websites often became unavailable within a few minutes of a story's appearance, even before any comments had been posted. Occasionally, paying Slashdot subscribers (who have access to stories before non-paying users) rendered a site unavailable even before the story was posted for the general readership.
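The "many database hits for the same page" failure mode is commonly mitigated by caching rendered pages, so that a flash crowd requesting one URL reaches the expensive backend only about once per cache window. A minimal sketch in Python (the `render_page` callable and the 60-second TTL are illustrative assumptions, not details from the source):

```python
import time

def make_cached(render_page, ttl=60):
    """Wrap an expensive page renderer with a short-lived in-memory cache.

    During a flash crowd, thousands of requests arrive for the same URL;
    caching the rendered page for `ttl` seconds reduces backend work
    (database queries, template rendering) to roughly one hit per window.
    """
    cache = {}  # url -> (expires_at, html)

    def cached(url):
        now = time.time()
        entry = cache.get(url)
        if entry and entry[0] > now:
            return entry[1]            # served from cache: no backend work
        html = render_page(url)        # the expensive part
        cache[url] = (now + ttl, html)
        return html

    return cached
```

A real deployment would more likely put this caching in front of the application (e.g. in a reverse proxy), but the principle is the same: identical requests should not each regenerate identical dynamic content.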
Few definitive numbers exist regarding the precise magnitude of the Slashdot effect, but estimates put the peak of the mass influx of page requests at anywhere from several hundred to several thousand hits per minute.[8][9][10] The flood usually peaked when the article was at the top of the site's front page and gradually subsided as the story was superseded by newer items. Traffic usually remained at elevated levels until the article was pushed off the front page, which could take from 12 to 18 hours after its initial posting. However, some articles had significantly longer lifetimes due to the popularity, newsworthiness, or interest in the linked article.
By 2005, reporters were commenting that the Slashdot effect had been diminishing.[2] However, the effect has been seen involving Twitter when some popular users mention a website.[11]
When the targeted website has a community-based structure, the term can also refer to the secondary effect of a large group of new users suddenly setting up accounts and starting to participate in the community. While in some cases this has been considered a good thing, in others it is viewed with disdain by the prior members, as quite often the sheer number of new people brings many of the unwanted aspects of Slashdot along with it, such as trolling, vandalism, and newbie-like behavior. This bears some similarity to the 1990s Usenet concept of Eternal September.
Many solutions have been proposed for sites to deal with the Slashdot effect.[12]
There are several systems that automatically mirror any Slashdot-linked pages to ensure that the content remains available even if the original site becomes unresponsive.[13] Sites in the process of being slashdotted may be able to mitigate the effect by temporarily redirecting requests for the targeted pages to one of these mirrors. Slashdot does not mirror the sites it links to on its own servers, nor does it endorse a third-party solution. Mirroring of content may constitute a breach of copyright and, in many cases, cause ad revenue to be lost for the targeted site.
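Besides mirroring, another generic class of solutions is load shedding: cheaply rejecting excess requests (e.g. with HTTP 503) so the server stays responsive for the requests it does accept. A minimal token-bucket sketch in Python — a generic illustration, not a mechanism attributed to any particular site in the source; the rate and burst parameters are hypothetical:

```python
import time

class TokenBucket:
    """Admit at most `rate` requests/second on average, with bursts up to `burst`.

    Requests that find the bucket empty are rejected immediately, which
    keeps the server from thrashing under a flash crowd.
    """
    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = float(rate)     # tokens added per second
        self.burst = float(burst)   # bucket capacity
        self.tokens = float(burst)  # start full
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # admit the request
        return False      # shed the request
```

In practice this logic usually lives in a front-end proxy or load balancer rather than in the application itself.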
|
https://en.wikipedia.org/wiki/Flash_crowd
|
Science studies is an interdisciplinary research area that seeks to situate scientific expertise in broad social, historical, and philosophical contexts. It uses various methods to analyze the production, representation, and reception of scientific knowledge and its epistemic and semiotic role.
Similarly to cultural studies, science studies is defined by the subject of its research and encompasses a large range of different theoretical and methodological perspectives and practices. The interdisciplinary approach may include and borrow methods from the humanities and the natural and formal sciences, from scientometrics to ethnomethodology or cognitive science.
Science studies has a certain importance for evaluation and science policy. Overlapping with the field of science, technology and society, practitioners study the relationship between science and technology, and the interaction of expert and lay knowledge in the public realm.
The field started with a tendency toward navel-gazing: it was extremely self-conscious in its genesis and applications.[1] From early concerns with scientific discourse, practitioners soon started to deal with the relation of scientific expertise to politics and lay people.[1] Practical examples include bioethics, bovine spongiform encephalopathy (BSE), pollution, global warming,[2][3] biomedical sciences, physical sciences, natural hazard predictions, the (alleged) impact of the Chernobyl disaster in the UK, the generation and review of science policy, and risk governance in its historical and geographic contexts.[1] While it remains a discipline with multiple metanarratives, the fundamental concern is the role of the perceived expert in providing governments and local authorities with information from which they can make decisions.[1]
The approach poses various important questions about what makes an expert, how experts and their authority are to be distinguished from the lay population, and how expertise interacts with the values and policy-making process in liberal democratic societies.[1]
Practitioners examine the forces within and through which scientists investigate specific phenomena such as
In 1935, in a celebrated paper, the Polish sociologist couple Maria Ossowska and Stanisław Ossowski proposed the founding of a "science of science" to study the scientific enterprise, its practitioners, and the factors influencing their work.[10][11] Earlier, in 1923, the Polish sociologist Florian Znaniecki had made a similar proposal.[12]
Fifty years before Znaniecki, in 1873, Aleksander Głowacki, better known in Poland by his pen name "Bolesław Prus", had delivered a public lecture – later published as a booklet – On Discoveries and Inventions, in which he said:
Until now there has been no science that describes the means for making discoveries and inventions, and the generality of people, as well as many people of learning, believe that there never will be. This is an error. Someday a science of making discoveries and inventions will exist and will render services. It will arise not all at once; first only its general outline will appear, which subsequent researchers will correct and elaborate, and which still later researchers will apply to individual branches of knowledge.[13]
It is striking that, while early 20th-century sociologist proponents of a discipline to study science and its practitioners wrote in general theoretical terms, Prus had already half a century earlier described, with many specific examples, the scope and methods of such a discipline.
Thomas Kuhn's Structure of Scientific Revolutions (1962) increased interest both in the history of science and in science's philosophical underpinnings. Kuhn posited that the history of science was less a linear succession of discoveries than a succession of paradigms within the philosophy of science. Paradigms are broader, socio-intellectual constructs that determine which types of truth claims are permissible.
Science studies seeks to identify key dichotomies – such as those between science and technology, nature and culture, theory and experiment, and science and fine art – leading to the differentiation of scientific fields and practices.
The sociology of scientific knowledge arose at the University of Edinburgh, where David Bloor and his colleagues developed what has been termed "the strong programme". It proposed that both "true" and "false" scientific theories should be treated the same way.[14] Both are informed by social factors such as cultural context and self-interest.[15]
Human knowledge, abiding as it does within human cognition, is ineluctably influenced by social factors.[16]
It proved difficult, however, to address natural-science topics with sociological methods, as was abundantly evidenced by the US science wars.[17] Using a deconstructive approach (as in works on the arts or religion) on the natural sciences risked endangering not only the "hard facts" of the natural sciences, but the objectivity and positivist tradition of sociology itself.[17] The view of scientific knowledge production as an (at least partially) social construct was not easily accepted.[1] Latour and others identified a dichotomy crucial for modernity: the division between nature (things, objects) as transcendent, and thus discoverable, and society (the subject, the state) as immanent, artificial, and constructed. The dichotomy allowed for the mass production of things (technical-natural hybrids) and for large-scale global issues that endangered the distinction as such. For example, We Have Never Been Modern asks us to reconnect the social and natural worlds, returning to the pre-modern use of "thing"[18] – addressing objects as hybrids made and scrutinized by the public interaction of people, things, and concepts.[19]
Science studies scholars such as Trevor Pinch and Steve Woolgar started as early as the 1980s to involve "technology", and called their field "science, technology and society".[20] This "turn to technology" brought science studies into communication with academics in science, technology, and society programs.
More recently, a novel approach known as mapping controversies has been gaining momentum among science studies practitioners, and was introduced as a course for students in engineering[21][22] and architecture schools.[23] In 2002, Harry Collins and Robert Evans called for a third wave of science studies (a pun on The Third Wave), namely studies of expertise and experience, responding to recent tendencies to dissolve the boundary between experts and the public.[24]
A showcase of the rather complex problems of scientific information and its interaction with lay persons is Brian Wynne's study of sheep farming in Cumbria after the Chernobyl disaster.[1][25] He elaborated on the responses of sheep farmers in Cumbria, who had been subjected to administrative restrictions because of radioactive contamination, allegedly caused by the nuclear accident at Chernobyl in 1986.[25] The sheep farmers suffered economic losses, and their resistance against the imposed regulation was deemed irrational and inadequate.[25] It turned out that the source of radioactivity was actually the Sellafield nuclear reprocessing complex; thus, the experts who were responsible for the duration of the restrictions were completely mistaken.[25] The example led to attempts to better involve local knowledge and lay persons' experience and to assess its often highly geographically and historically defined background.[26]
Donovan et al. (2012) used social studies of volcanology to investigate the generation of knowledge and expert advice on various active volcanoes.[1] Their study contains a survey of volcanologists carried out during 2008 and 2009 and interviews with scientists in the UK, Montserrat, Italy, and Iceland during fieldwork seasons. Donovan et al. asked the experts about the perceived purpose of volcanology and what they considered the most important eruptions in historical time. The survey sought to identify eruptions that had an influence on volcanology as a science and to assess the role of scientists in policymaking.[1]
A main focus was the impact of the 1997 Montserrat eruption. The eruption, a classic example of black swan theory,[27] directly killed only 19 people. However, it had major impacts on the local society and destroyed important infrastructure, such as the island's airport.[28] About 7,000 people, or two-thirds of the population, left Montserrat; 4,000 went to the United Kingdom.[29]
The Montserrat case put immense pressure on volcanologists, as their expertise suddenly became the primary driver of various public policy approaches.[1] The science studies approach provided valuable insights in that situation.[1] There were various miscommunications among scientists. Reconciling scientific uncertainty (typical of volcanic unrest) with the request for a single unified voice for political advice was a challenge.[1] The Montserrat volcanologists began to use statistical elicitation models to estimate the probabilities of particular events – a rather subjective method, but one that allowed them to synthesize consensus and experience-based expertise step by step.[1] It also incorporated local knowledge and experience.[1]
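Statistical elicitation of this kind amounts to combining individual experts' probability estimates into one consensus figure. One standard technique is a weighted linear opinion pool, sketched below as an illustration only — this is not the Montserrat team's actual model, and the weights (e.g. derived from calibration questions) are an assumption:

```python
def linear_opinion_pool(estimates, weights=None):
    """Combine expert probability estimates into one consensus probability.

    estimates: list of probabilities in [0, 1], one per expert.
    weights:   optional per-expert weights (e.g. from performance on
               calibration questions); defaults to equal weighting.
    """
    if weights is None:
        weights = [1.0] * len(estimates)
    total = sum(weights)
    # Weighted average of the individual estimates.
    return sum(w * p for w, p in zip(weights, estimates)) / total
```

For instance, three equally weighted experts who put an eruption's probability at 0.1, 0.2, and 0.6 would yield a pooled estimate of 0.3; giving more weight to better-calibrated experts shifts the consensus toward their judgments.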
Volcanology as a science currently faces a shift in its epistemological foundations. The discipline has begun to involve more research into risk assessment and risk management, which requires new, integrated methodologies for knowledge collection that transcend scientific disciplinary boundaries and combine qualitative and quantitative outcomes in a structured whole.[30]
Science has become a major force in Western democratic societies, which depend on innovation and technology (compare Risk society) to address their risks.[31] Beliefs about science can differ greatly from those of the scientists themselves, for reasons of, e.g., moral values, epistemology, or political motivations. The designation of expertise as authoritative in interactions with lay people and decision makers of all kinds is nevertheless challenged in contemporary risk societies, as suggested by scholars who follow Ulrich Beck's theorisation. The role of expertise in contemporary democracies is an important theme for debate among science studies scholars. Some argue for a more widely distributed, pluralist understanding of expertise (Sheila Jasanoff and Brian Wynne, for example), while others argue for a more nuanced understanding of the idea of expertise and its social functions (Collins and Evans, for example).[32][33]
|
https://en.wikipedia.org/wiki/Science_studies
|
User-generated content (UGC), alternatively known as user-created content (UCC), emerged from the rise of web services which allow a system's users to create content – such as images, videos, audio, text, testimonials, and software (e.g. video game mods) – and interact with other users.[1][2] Online content aggregation platforms such as social media, discussion forums, and wikis, by their interactive and social nature, no longer merely publish multimedia content but provide tools to produce, collaborate on, and share a variety of content, which can affect the attitudes and behaviors of the audience in various respects.[3] This transforms the role of consumers from passive spectators to active participants.[1][4][5]
User-generated content is used for a wide range of applications, including problem processing, news, entertainment, customer engagement, advertising, gossip, research, and more. It is an example of the democratization of content production and the flattening of traditional media hierarchies. The BBC adopted a user-generated content platform for its websites in 2005, and TIME Magazine named "You" as the Person of the Year in 2006, referring to the rise in the production of UGC on Web 2.0 platforms.[6][7] CNN also developed a similar user-generated content platform, known as iReport.[8] There are other examples of news channels implementing similar protocols, especially in the immediate aftermath of a catastrophe or terrorist attack.[9] Social media users can provide key eyewitness content and information that may otherwise have been inaccessible.
Since 2020, an increasing number of businesses have been utilizing user-generated content to promote their products and services. Several factors significantly influence how UGC is received, including the quality of the content, the credibility of the creator, and viewer engagement.[10][11] These elements can affect users' perceptions of and trust in the brand, as well as influence the buying intentions of potential customers. UGC has proven to be an effective method for brands to connect with consumers, drawing their attention through the sharing of experiences and information on social media platforms.[12][13] Due to new media and technology affordances, such as low cost and low barriers to entry, the Internet is an easy platform on which to create and disseminate user-generated content,[14] allowing the dissemination of information at a rapid pace in the wake of an event.[15]
The advent of user-generated content marked a shift among media organizations from creating online content to providing facilities for amateurs to publish their own content.[5] User-generated content has also been characterized as citizen media, as opposed to the "packaged goods media" of the past century.[16] Citizen media is audience-generated feedback and news coverage.[17] People give their reviews and share stories in the form of user-generated and user-uploaded audio and user-generated video.[18] The former is a two-way process, in contrast to the one-way distribution of the latter. Conversational or two-way media is a key characteristic of so-called Web 2.0, which encourages the publishing of one's own content and commenting on other people's content.
The role of the passive audience has therefore shifted since the birth of new media, and an ever-growing number of participatory users are taking advantage of these interactive opportunities, especially on the Internet, to create independent content. Grassroots experimentation then generated innovation in sounds, artists, techniques, and associations with audiences, which are now being used in mainstream media.[19] The active, participatory, and creative audience is prevailing today with relatively accessible media, tools, and applications, and its culture is in turn affecting mass media corporations and global audiences.
The Organisation for Economic Co-operation and Development (OECD) has defined three core variables for UGC: publication (the content is made publicly available over the Internet), creative effort, and creation outside of professional routines and practices.[20][21]
According to Cisco, in 2016 an average of 96,000 petabytes was transferred monthly over the Internet, more than twice as much as in 2012.[22] In 2016, the number of active websites surpassed 1 billion, up from approximately 700 million in 2012.[23] This means the content we and others currently have access to is more diverse and unique than ever before.[24]
Reaching 1.66 billion daily active users in Q4 2019, Facebook has emerged as the most popular social media platform globally.[25] Other social media platforms are also dominant at the regional level, such as: Twitter in Japan, Naver in the Republic of Korea, Instagram (owned by Facebook) and LinkedIn (owned by Microsoft) in Africa, VKontakte (VK) and Odnoklassniki (eng. Classmates) in Russia and other countries in Central and Eastern Europe, and WeChat and QQ in China.[citation needed]
However, a concentration phenomenon is occurring globally, giving dominance to a few online platforms. Some platforms become popular for unique features they provide, most commonly the added privacy they offer users through disappearing messages or end-to-end encryption (e.g. WhatsApp, Snapchat, Signal, and Telegram), but these have tended to occupy niches and to facilitate exchanges of information that remain rather invisible to larger audiences.[26]
Production of freely accessible information has been increasing since 2012. In January 2017, Wikipedia had more than 43 million articles, almost twice as many as in January 2012. This corresponded to a progressive diversification of content and an increase in contributions in languages other than English. In 2017, less than 12 percent of Wikipedia content was in English, down from 18 percent in 2012.[27] Graham, Straumann, and Hogan say that the increase in the availability and diversity of content has not radically changed the structures and processes for the production of knowledge. For example, while content on Africa has dramatically increased, a significant portion of this content has continued to be produced by contributors operating from North America and Europe, rather than from Africa itself.[28]
The massive, multi-volume Oxford English Dictionary was exclusively composed of user-generated content. In 1857, Richard Chenevix Trench of the London Philological Society sought public contributions throughout the English-speaking world for the creation of the first edition of the OED.[29] As Simon Winchester recounts:
So what we're going to do, if I have your agreement that we're going to produce such a dictionary, is that we're going to send out invitations, were going to send these invitations to every library, every school, every university, every book shop that we can identify throughout the English-speaking world... everywhere where English is spoken or read with any degree of enthusiasm, people will be invited to contribute words. And the point is, the way they do it, the way they will be asked and instructed to do it, is to read voraciously and whenever they see a word, whether it's a preposition or a sesquipedalian monster, they are to... if it interests them and if where they read it, they see it in a sentence that illustrates the way that that word is used, offers the meaning of the day to that word, then they are to write it on a slip of paper... the top left-hand side you write the word, the chosen word, the catchword, which in this case is 'twilight'. Then the quotation, the quotation illustrates the meaning of the word. And underneath it, the citation, where it came from, whether it was printed or whether it was in manuscript... and then the reference, the volume, the page and so on... and send these slips of paper, these slips are the key to the making of this dictionary, into the headquarters of the dictionary.[30]
In the following decades, hundreds of thousands of contributions were sent to the editors.
In the 1990s, several electronic bulletin board systems were based on user-generated content. Some of these systems have been converted into websites, including the film information site IMDb, which started as rec.arts.movies in 1990. With the growth of the World Wide Web, the focus moved to websites, several of which were based on user-generated content, including Wikipedia (2001) and Flickr (2004).
User-generated Internet video was popularized by YouTube, an online video platform founded by Chad Hurley, Jawed Karim, and Steve Chen in April 2005. It enabled the video streaming of MPEG-4 AVC (H.264) user-generated content from anywhere on the World Wide Web.[31]
The BBC set up a pilot user-generated content team in April 2005 with 3 staff. In the wake of the 7 July 2005 London bombings and the Buncefield oil depot fire, the team was made permanent and was expanded, reflecting the arrival in the mainstream of the citizen journalist. After the Buncefield disaster the BBC received over 5,000 photos from viewers. The BBC does not normally pay for content generated by its viewers.
In 2006, CNN launched CNN iReport, a project designed to bring user-generated news content to CNN. Its rival Fox News Channel launched its own project to bring in user-generated news, similarly titled "uReport". This was typical of major television news organizations in 2005–2006, who realized, particularly in the wake of the London 7 July bombings, that citizen journalism could now become a significant part of broadcast news.[6] Sky News, for example, regularly solicits photographs and videos from its viewers.
User-generated content was featured in Time magazine's 2006 Person of the Year issue, in which the person of the year was "you", meaning all of the people who contribute to user-generated media, including YouTube, Wikipedia, and Myspace.[7] A precursor to user-generated content uploaded on YouTube was America's Funniest Home Videos.[17]
The benefits derived from user-generated content for the content host are clear: these include low-cost promotion, a positive impact on product sales, and fresh content. However, the benefit to the contributor is less direct. There are various theories behind the motivation for contributing user-generated content, ranging from altruistic, to social, to materialistic. Due to the high value of user-generated content, many sites use incentives to encourage its generation. These incentives can be generally categorized into implicit and explicit incentives. Sometimes, users are also given monetary incentives to encourage them to create captivating and inspiring UGC.[32]
A growing subset of user-generated content in this field is paid UGC. It is primarily used by brands and businesses looking for organic content that leverages the authenticity, customer perspective, and trust associated with user-generated content for marketing purposes. According to several studies, a large percentage of millennials and younger consumers look up information on products through social media and see UGC before making a purchase decision. Research suggests 78% of millennials and 70% of Gen-Z rely on UGC to inform their purchasing decisions.
Paid UGC is distinguished from ordinary UGC by how it is created. It is made by a UGC creator: someone who creates authentic-looking content about a product or service at a brand's request. In return, they receive compensation in the form of monetary rewards, free products, discounts, exclusive access, or other valuable incentives. It is not to be confused with influencer marketing.
Unlike influencers, UGC creators focus on creating organic product reviews, and the content is shared not on their personal pages but on the company's page. Influencers, by contrast, have a strong connection with their audience, showcasing branded content on their social media feeds and directly engaging with their followers. The structure of the work differs as well, since influencer deals are more comprehensive and their agreements include creating and distributing content across the influencer's personal platforms.
However, it is possible for UGC creators to function as macro-influencers if they have 100k+ followers. In this case, they can accept influencer deals, where they post on their personal page in exchange for money, or UGC deals, where the brands post on their own pages.
There are several ways in which paid UGC differs from non-paid UGC:
Companies leveraging paid UGC see increased credibility on their platforms, as customers connect with creators who feel like everyday people facing similar challenges. By showcasing the product as a real solution to a relatable problem, UGC makes brands more trustworthy and authentic. With commercial ads, customers cannot put a face behind the high-production edits and do not connect with them. One survey found that UGC is 85% more effective at increasing conversion rates than studio content.[38] This demonstrates such content's impact and suggests why companies increasingly utilize it in their social media strategies.
Nevertheless, there are concerns about the authenticity of content published on social media, particularly with the increasing prevalence of paid user-generated content. Additionally, legal considerations such as copyright law, privacy regulations, and trademark protection play a role in content dissemination. As this field of work grows, there is potential for increased liability, particularly regarding disclosure requirements for paid content, and these requirements will continue to evolve over time.
The distribution of UGC across the Web provides a high-volume data source that is accessible for analysis, and offers utility in enhancing the experiences of end users. Social science research can benefit from having access to the opinions of a population of users, and can use this data to make inferences about their traits. Applications in information technology seek to mine end-user data to support and improve machine-based processes, such as information retrieval and recommendation. However, processing the high volumes of data offered by UGC necessitates the ability to automatically sort and filter these data points according to their value.[39]
Determining the value of user contributions for assessment and ranking can be difficult due to the variation in the quality and structure of this data. The quality and structure of the data provided by UGC is application-dependent, and can include items such as tags, reviews, or comments that may or may not be accompanied by useful metadata. Additionally, the value of this data depends on the specific task for which it will be utilized and the available features of the application domain. Value can ultimately be defined and assessed according to whether the application will provide service to a crowd of humans, a single end user, or a platform designer.[39]
The variation of data and specificity of value have resulted in various approaches and methods for assessing and ranking UGC. The performance of each method essentially depends on the features and metrics that are available for analysis. Consequently, it is critical to understand the task objective and its relation to how the data is collected, structured, and represented in order to choose the most appropriate approach to utilizing it. The methods of assessment and ranking can be categorized into two classes: human-centered and machine-centered. Methods emphasizing human-centered utility consider the ranking and assessment problem in terms of the users and their interactions with the system, whereas machine-centered methods consider the problem in terms of machine learning and computation. The various methods of assessment and ranking can be classified into one of four approaches: community-based, user-based, designer-based, and hybrid.[39]
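As one concrete illustration of a machine-centered, community-based ranking method (an example technique, not one named in the source), user contributions with up/down votes can be scored by the lower bound of the Wilson confidence interval on their positive-vote fraction, so that an item with one upvote out of one vote does not outrank an item with 90 out of 100:

```python
import math

def wilson_lower_bound(upvotes, total, z=1.96):
    """Lower bound of the Wilson score interval for an item's 'true'
    positive-vote fraction, at roughly 95% confidence (z = 1.96).

    Unlike a raw average, this penalizes small samples, making it a
    common choice for ranking user reviews and comments.
    """
    if total == 0:
        return 0.0
    phat = upvotes / total                      # observed fraction
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt(
        (phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom
```

Sorting contributions by this score is a purely machine-centered step; hybrid approaches would combine it with user- or designer-based signals such as contributor reputation or editorial curation.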
There are a number of types of user-generated content: Internet forums, where people talk about different topics; blogs, services where users can post about multiple topics; product reviews on a supplier website or in social media; and wikis such as Wikipedia and Fandom, which allow users, sometimes including anonymous users, to edit the content. Another type of user-generated content is social networking sites like Facebook, Instagram, Tumblr, Twitter, Snapchat, Twitch, TikTok, or VK, where users interact with other people via chatting, writing messages, posting images or links, and sharing content. Media hosting sites such as YouTube and Vimeo allow users to post content. Some forms of user-generated content, such as a social commentary blog, can be considered a form of citizen journalism.
Blogs are websites created by individuals, groups, and associations. They mostly consist of journal-style text and enable interaction between a blogger and reader in the form of online comments.[40] Self-hosted blogs can be created by professional entities such as entrepreneurs and small businesses. Blog hosting platforms include WordPress, Blogger, and Medium; Typepad is often used by media companies; Weebly is geared toward online shopping. Social networking blogging platforms include Tumblr, LiveJournal, and Weibo. Among the multiple blogs on the web, Boing Boing is a group blog with themes including technology and science fiction; HuffPost blogs include opinions on subjects such as politics, entertainment, and technology. There are also travel blogs such as Head for Points, Adventurous Kate, and an early form of The Points Guy.[41]
Entertainment social media and information sharing websites include Reddit, 9gag, 4chan, Upworthy, and Newgrounds.[42] Sites like 9gag allow users to create memes and quick video clips. Sites like Tech in Asia and Buzzfeed engage readers with professional communities by posting articles with user-generated comment sections.[43] Other websites include fanfiction sites such as FanFiction.Net; imageboards; artwork communities like DeviantArt; mobile photo and video sharing sites such as Picasa and Flickr; audio social networks such as SoundCloud; crowdfunding or crowdsourcing sites like Kickstarter, Indiegogo, and ArtistShare; and customer review sites such as Yelp.
After launching in the mid-2000s, major UGC-based adult websites like Pornhub, YouPorn, and xHamster became the dominant mode of consumption and distribution of pornographic content on the internet. The appearance of pornographic content on sites like Wikipedia and Tumblr led moderators and site owners to institute stricter limits on uploads.[44]
The travel industry, in particular, has begun utilizing user-generated content to show authentic traveler experiences. Travel-related companies such as The Millennial, Gen Z,[citation needed] and Busabout[45] relaunched their websites featuring UGC images and social content posted by their customers in real time. TripAdvisor includes reviews and recommendations by travelers about hotels, restaurants, and activities.
The restaurant industry has also been altered by a review system that places more emphasis on online reviews and content from peers than on traditional media reviews. In 2011, Yelp contained 70% of reviews for restaurants in the Seattle area, compared to less than 5 percent for Food & Wine Magazine.[46]
Video games can have fan-made content in the form of mods, fan patches, fan translations or server emulators.[47] Some games come with level editor programs to aid in their creation. A few massively multiplayer online games, including Star Trek Online, Dota 2, and EverQuest 2, have UGC systems integrated into the game itself.[48] A metaverse can be a user-generated world, such as Second Life.[citation needed] Second Life is a 3-D virtual world which provides its users with tools to modify the game world and participate in an economy, trading user-created content for virtual currency.[49]
A popular use of UGC involves collaboration between a brand and a user. An example is the "Elf Yourself" videos by Jib Jab that come back every year around Christmas. The Jib Jab website lets people upload photos of friends and family and cut and paste their faces onto animated dancing elves to make a holiday video to share across the internet.[50]
Some brands are also using UGC images to boost the performance of their paid social ads. For example, Toyota leveraged UGC for its "Feeling the Streets" Facebook ad campaign and was able to increase its total ad engagement by 440%.[51]
Some bargain-hunting websites feature user-generated content, such as eBay, Dealsplus, and FatWallet, which allow users to post, discuss, and control which bargains get promoted within the community. Because of their dependence on social interaction, these sites fall into the category of social commerce.
Wikipedia, a free encyclopedia, is one of the largest user-generated content databases in the world. Platforms such as YouTube have frequently been used as an instructional aid. Organizations such as the Khan Academy and the Green brothers have used the platform to upload series of videos on topics such as math, science, and history to help viewers master or better understand the basics. Educational podcasts have also helped in teaching through an audio platform. Personal websites and messaging systems like Yahoo Messenger have also been used to transmit user-generated educational content. There have also been web forums where users give advice to one another.
Students can also manipulate digital images or video clips to their advantage, tag them with easy-to-find keywords, and share them with friends and family worldwide. The category of "student performance content" has risen in the form of discussion boards and chat logs. Students can write reflective journals and diaries that may help others.[52] The websites SparkNotes and Shmoop summarize and analyze books so that they are more accessible to readers.
Photo sharing websites are another popular form of UGC. Flickr is a site on which users can upload personal photos they have taken and label them with regard to their "motivation".[53]: 46 Flickr not only hosts images but makes them publicly available for reuse, including reuse with modification.[53] Instagram is a social media platform that allows users to edit, upload, and include location information with the photos they post.[54] Panoramio.com and Flickr use metadata, such as GPS coordinates, that allows for geographic placement of images.[55]
In 1995, Webshots was one of the first online photo sharing platforms.[56][57] Webshots offered an easy-to-use interface and basic photo editing tools.[58][59] In 2002, SmugMug was founded, focusing on providing a high-quality photo sharing experience for professional photographers.[60][61][62] SmugMug offers features such as custom photo galleries and e-commerce options.[63][64] In 2003, Yahoo! Photos was one of the most popular photo sharing platforms thanks to its integration with Yahoo's email and search services.[65][66]
Video sharing websites are another popular form of UGC. YouTube and TikTok allow users to create and upload videos.
The incorporation of user-generated content into mainstream journalism outlets is considered to have begun in 2005 with the BBC's creation of a user-generated content team, which was expanded and made permanent in the wake of the 7 July 2005 London bombings.[6] The incorporation of Web 2.0 technologies into news websites allowed user-generated content online to move from more social platforms such as MySpace, LiveJournal, and personal blogs into the mainstream of online journalism, in the form of comments on news articles written by professional journalists, as well as surveys, content sharing, and other forms of citizen journalism.[67]
Since the mid-2000s, journalists and publishers have had to consider the effects that user-generated content has had on how news gets published, read, and shared. A 2016 study on publisher business models suggests that readers of online news sources value articles written by both professional journalists and users, provided that those users are experts in a field relevant to the content they create. In response, it is suggested that online news sites must consider themselves not only a source of articles and other types of journalism but also a platform for engagement and feedback from their communities. The ongoing engagement with a news site made possible by the interactive nature of user-generated content is considered a source of sustainable revenue for publishers of online journalism going forward.[68]
Journalists are increasingly sourcing UGC from platforms such as Facebook and TikTok as news shifts to a digital space.[69] This form of crowdsourcing can include using user content to support claims and using social media platforms to contact witnesses and obtain relevant images and videos for articles.[70]
The use of user-generated content has been prominent in the efforts of marketing online, especially among millennials.[71] One reason may be that 86% of consumers say authenticity is important when deciding which brands they support, and 60% believe user-generated content is not only the most authentic form of content but also the most influential when making purchasing decisions.[72]
Companies can leverage user-generated content (UGC) to improve their products and services through feedback obtained from users. Additionally, UGC can improve decision-making processes by informing potential consumers and guiding them toward purchasing and consumption decisions.[73] An increasing number of companies have employed UGC techniques in their marketing efforts, such as Starbucks with its "White Cup Contest" campaign, in which customers competed to create the best doodle on their cups.[74]
The effectiveness of UGC in marketing has been shown to be significant as well. For instance, Coca-Cola's "Share a Coke" campaign, in which customers uploaded images of themselves with bottles to social media, was credited with a two percent increase in revenue. Among millennials, UGC can influence purchase decisions up to fifty-nine percent of the time, and eighty-four percent say that UGC on company websites has at least some influence on what they buy, typically in a positive way. As a whole, consumers place peer recommendations and reviews above those of professionals.[75]
User-generated content can enhance marketing strategies by gathering relevant information from users and directing social media advertising efforts toward UGC marketing, which functions similarly to influencer marketing; however, each serves different purposes and plays distinct roles.[76] The distinction between UGC creators and influencers lies primarily in their approaches to content creation. UGC creators are a varied range of individuals who share content based on their personal experiences with a product, service, or brand. They typically do not collaborate with specific brands, which lends authenticity to their posts and makes them relatable to their audience. In contrast, influencers have a significant and engaged following. They create branded content through sponsorships and paid partnerships with companies. Their role is to influence their followers' purchasing decisions, and their content is usually more polished and aligns closely with the branding and messaging of the companies they work with.[77]
User-generated content used in a marketing context has been known to help brands in a number of ways.[78]
There are a number of opportunities in user-generated content. The advantage of user-generated content is that it is a quick, easy way to reach the general public. Here are some examples:
The term "user-generated content" has received some criticism. The criticism to date has addressed issues of fairness, quality,[83] privacy,[84] and the sustainable availability of creative work and effort, as well as legal issues, notably those related to intellectual property rights such as copyright.
Some commentators assert that the term "user" implies an illusory or unproductive distinction between different kinds of "publishers", with the term "users" exclusively used to characterize publishers who operate on a much smaller scale than traditional mass-media outlets or who operate for free.[85] Such classification is said to perpetuate an unfair distinction that some argue is diminishing because of the prevalence and affordability of the means of production and publication. A better response[according to whom?] might be to offer optional expressions that better capture the spirit and nature of such work, such as EGC, Entrepreneurial Generated Content (see external reference below).[citation needed]
Sometimes creative works made by individuals are lost because there are limited or no ways to precisely preserve creations when a UGC website service closes down. One example of such loss is the closing of the Disney massively multiplayer online game "VMK". VMK, like most games, had items that were traded from user to user, a number of them rare within the game. Users could use these items to create their own rooms, avatars, and pin lanyards. The site shut down at 10 pm CDT on 21 May 2008. There are ways to preserve the essence, if not the entirety, of such work: users can copy text and media to applications on their personal computers, or record live action or animated scenes using screen capture software, and then upload them elsewhere. Long before the Web, creative works were simply lost or went out of publication and disappeared from history unless individuals found ways to keep them in personal collections.[citation needed]
Another criticized aspect is the vast array of user-generated product and service reviews that can at times be misleading for consumers on the web.
A study conducted at Cornell University found that an estimated 1 to 6 percent of positive user-generated online hotel reviews are fake.[86]
Another concern of platforms that rely heavily on user-generated content, such as Twitter and Facebook, is how easy it is to find people who hold the same opinions and interests, and how well the platforms facilitate the creation of networks or closed groups.[87] While the strength of these services is that users can broaden their horizons by sharing their knowledge and connecting with other people from around the world, these platforms also make it very easy to connect with only a restricted sample of people who hold similar opinions (see Filter bubble).[88]
There is also criticism regarding whether or not those who contribute to a platform should be paid for their content. In 2015, a group of 18 famous content creators on Vine attempted to negotiate a deal with Vine representatives to secure a $1.2 million contract for a guaranteed 12 videos a month.[89]This negotiation was not successful.
The ability for services to accept user-generated content opens up a number of legal concerns, from the broad to specific local laws. In general, identifying who committed an online offense is difficult because many users use pseudonyms or remain anonymous. Sometimes the poster can be traced, but if the content was posted from a public place such as a coffee shop, there is no way to pinpoint the exact user. There are also issues surrounding acts that are extremely harmful but not clearly illegal, for example the posting of content that instigates a person's suicide. It is a criminal offense if there is proof "beyond reasonable doubt", but different situations may produce different outcomes.[90] Depending on the country, certain laws apply to Web 2.0. In the United States, the "Section 230" exemptions of the Communications Decency Act state that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This clause effectively provides a general immunity for websites that host user-generated content that is defamatory, deceptive or otherwise harmful, even if the operator knows that the third-party content is harmful and refuses to take it down. An exception to this general rule may exist if a website promises to take down the content and then fails to do so.[91]
Copyright laws also play a factor in relation to user-generated content, as users may use such services to upload works—particularly videos—that they do not have sufficient rights to distribute. In multiple cases, the use of these materials may be covered by local "fair use" laws, especially if the use of the material submitted is transformative.[92] Local laws also vary on who is liable for any resulting copyright infringements caused by user-generated content; in the United States, the Online Copyright Infringement Liability Limitation Act (OCILLA)—a portion of the Digital Millennium Copyright Act (DMCA)—dictates safe harbor provisions for "online service providers" as defined under the act, which grant immunity from secondary liability for the copyright-infringing actions of their users, as long as they promptly remove access to allegedly infringing materials upon receipt of a notice from a copyright holder or registered agent, and they do not have actual knowledge that their service is being used for infringing activities.[93][94]
In the UK, the Defamation Act of 1996 says that if a person is not the author, editor or publisher and did not know about the situation, they are not liable. Furthermore, ISPs are not considered authors, editors, or publishers and cannot be held responsible for people over whom they have no "effective control". Just like under the DMCA, once the ISP learns about the content, it must delete it immediately.[90] The European Union's approach is horizontal by nature, meaning that civil and criminal liability issues are addressed under the Electronic Commerce Directive. Section 4 deals with the liability of the ISP while conducting "mere conduit" services, caching, and web hosting services.[95]
A 2007 study analyzed YouTube as a video-on-demand system. It found that UGC videos were roughly half the length of non-UGC content but were produced at a much faster rate, and that user behavior is what perpetuates UGC. The study also examined peer-to-peer (P2P) distribution, finding it could greatly benefit the system, and considered the impact of content aliasing, the sharing of multiple copies, and illegal uploads.[96]
A 2012 study from York University in Ontario proposed a framework for comparing brand-related UGC and for understanding how a company's strategy could influence brand sentiment across different social media channels, including YouTube, Twitter and Facebook. The three scholars examined two clothing brands, Lululemon and American Apparel; Lululemon had a social media following, while American Apparel had none. Lululemon had roughly three times the share of positive contributions on Twitter (64 percent vs. 22 percent for American Apparel), while on Facebook and YouTube the two had roughly equal numbers of contributions. This suggests that social media engagement can influence how a brand is perceived, usually in a more positive light.[97] A study by Dhar and Chang, published in 2007, found that the volume of blog posts about a music album was positively correlated with the album's future sales.[98]
This article incorporates text from a free content work. Licensed under CC BY SA 3.0 IGO (license statement/permission). Text taken from World Trends in Freedom of Expression and Media Development Global Report 2017/2018, 202, University of Oxford, UNESCO.
https://en.wikipedia.org/wiki/User-generated_content
The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, published in 2004, is a book written by James Surowiecki about the aggregation of information in groups, resulting in decisions that, he argues, are often better than could have been made by any single member of the group. The book presents numerous case studies and anecdotes to illustrate its argument, and touches on several fields, primarily economics and psychology.
The opening anecdote relates Francis Galton's surprise that the crowd at a county fair accurately guessed the weight of an ox when the median of their individual guesses was taken (the median was closer to the ox's true butchered weight than the estimates of most crowd members).[1][2]
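Galton's aggregation is simple to reproduce numerically: take the median of many independent, noisy guesses. The sketch below uses simulated guesses (the specific error distribution and the 800-guess crowd size are assumptions for illustration; only the ox's reported 1,198 lb weight comes from Galton's account):

```python
import random
import statistics

random.seed(1)

TRUE_WEIGHT = 1198  # the ox's reported butchered weight, in pounds

# Simulate a crowd of 800 fairgoers: most err by a modest random amount,
# and a few guess wildly (the outliers the median is robust against).
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(760)]
guesses += [random.uniform(400, 2500) for _ in range(40)]

crowd_median = statistics.median(guesses)
typical_individual_error = statistics.median(
    abs(g - TRUE_WEIGHT) for g in guesses
)

print(round(crowd_median))                      # close to the true weight
print(round(abs(crowd_median - TRUE_WEIGHT)))   # small collective error
print(round(typical_individual_error))          # much larger typical error
```

The collective estimate lands within a few pounds of the truth even though the typical individual is off by dozens of pounds, which is the statistical-sampling parallel the book alludes to.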
The book relates to diverse collections of independently deciding individuals, rather than crowd psychology as traditionally understood. Its central thesis, that a diverse collection of independently deciding individuals is likely to make certain types of decisions and predictions better than individuals or even experts, draws many parallels with statistical sampling; however, there is little overt discussion of statistics in the book.
Its title is an allusion to Charles Mackay's Extraordinary Popular Delusions and the Madness of Crowds, published in 1841.[3]
Surowiecki breaks down the advantages he sees in disorganized decisions into three main types, which he classifies as
Not all crowds (groups) are wise. Consider, for example, mobs or crazed investors in astock market bubble. According to Surowiecki, these key criteria separate wise crowds from irrational ones:
Based on Surowiecki's book, Oinas-Kukkonen[4]captures the wisdom of crowds approach with the following eight conjectures:
Surowiecki studies situations (such as rational bubbles) in which the crowd produces very bad judgment, and argues that in these types of situations their cognition or cooperation failed because (in one way or another) the members of the crowd were too conscious of the opinions of others and began to emulate each other and conform rather than think differently. Although he gives experimental details of crowds collectively swayed by a persuasive speaker, he says that the main reason that groups of people intellectually conform is that the system for making decisions has a systemic flaw.
Causes and detailed case histories of such failures include:
The Office of the Director of National Intelligence and the CIA have created a Wikipedia-style information sharing network called Intellipedia to help information flow freely and prevent such failures from recurring.
At the 2005 O'Reilly Emerging Technology Conference, Surowiecki presented a session entitled Independent Individuals and Wise Crowds, or Is It Possible to Be Too Connected?[6]
The question for all of us is, how can you have interaction without information cascades, without losing the independence that's such a key factor in group intelligence?
He recommends:
Tim O'Reilly[7] and others also discuss the success of Google, wikis, blogging, and Web 2.0 in the context of the wisdom of crowds.
Surowiecki is a strong advocate of the benefits of decision markets and regrets the failure of DARPA's controversial Policy Analysis Market to get off the ground. He points to the success of public and internal corporate markets as evidence that a collection of people with varying points of view but the same motivation (to make a good guess) can produce an accurate aggregate prediction. According to Surowiecki, the aggregate predictions have been shown to be more reliable than the output of any think tank. He advocates extensions of the existing futures markets even into areas such as terrorist activity, and prediction markets within companies.
To illustrate this thesis, he says that his publisher is able to produce a more compelling output by relying on individual authors under one-off contracts who bring book ideas to it. In this way, the publisher taps into the wisdom of a much larger crowd than would be possible with an in-house writing team.
Will Hutton has argued that Surowiecki's analysis applies to value judgments as well as factual issues, with crowd decisions that "emerge of our own aggregated free will [being] astonishingly... decent". He concludes that "There's no better case for pluralism, diversity and democracy, along with a genuinely independent press."[8]
Applications of the wisdom-of-crowds effect exist in three general categories: prediction markets, Delphi methods, and extensions of the traditional opinion poll.
The most common application is the prediction market, a speculative or betting market created to make verifiable predictions. Surowiecki discusses the success of prediction markets. Similar to Delphi methods but unlike opinion polls, prediction (information) markets ask questions like, "Who do you think will win the election?" and predict outcomes rather well. Answers to the question, "Who will you vote for?" are not as predictive.[9]
Assets are cash values tied to specific outcomes (e.g., Candidate X will win the election) or parameters (e.g., next quarter's revenue). The current market prices are interpreted as predictions of the probability of the event or the expected value of the parameter. Betfair is the world's biggest prediction exchange, with around $28 billion traded in 2007. NewsFutures is an international prediction market that generates consensus probabilities for news events. Intrade.com, which operated a person-to-person prediction market based in Dublin, Ireland, achieved very high media attention in 2012 related to the US presidential election, with more than 1.5 million search references to Intrade and Intrade data. Several companies now offer enterprise-class prediction marketplaces to predict project completion dates, sales, or the market potential for new ideas.[citation needed] A number of web-based quasi-prediction marketplace companies have sprung up to offer predictions primarily on sporting events and stock markets but also on other topics. The principle of the prediction market is also used in project management software to let team members predict a project's "real" deadline and budget.
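The price-as-probability interpretation above is simple arithmetic: a contract that pays a fixed amount if the event occurs, trading at some fraction of that payout, implies that fraction as the crowd's probability estimate. A minimal sketch with hypothetical prices:

```python
def implied_probability(price: float, payout: float = 1.0) -> float:
    """Interpret a prediction-market price as the crowd's probability
    estimate for the event (ignoring fees and the bid-ask spread)."""
    if not 0 <= price <= payout:
        raise ValueError("price must lie between 0 and the payout")
    return price / payout

# A contract paying $1.00 if Candidate X wins, last traded at $0.62,
# implies a collective probability estimate of 62%.
p = implied_probability(0.62)
print(p)

# Across mutually exclusive outcomes the implied probabilities should
# sum to roughly 1; a persistent excess (an "overround") reflects fees
# or mispricing that arbitrageurs would normally trade away.
prices = [0.62, 0.31, 0.09]
print(sum(implied_probability(x) for x in prices))  # slightly above 1
```

A parameter-style contract works the same way: one paying $1 per million dollars of next quarter's revenue and trading at $45 implies an expected revenue of $45 million.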
The Delphi method is a systematic, interactiveforecastingmethod which relies on a panel of independent experts. The carefully selected experts answer questionnaires in two or more rounds. After each round, a facilitator provides an anonymous summary of the experts' forecasts from the previous round as well as the reasons they provided for their judgments. Thus, participants are encouraged to revise their earlier answers in light of the replies of other members of the group. It is believed that during this process the range of the answers will decrease and the group will converge towards the "correct" answer. Many of the consensus forecasts have proven to be more accurate than forecasts made by individuals.
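The round-based convergence described above can be sketched as a toy simulation, in which each expert revises their estimate partway toward the previous round's anonymous group median (the expert count, initial forecasts, and the 0.5 revision weight are all illustrative assumptions, not part of the formal method):

```python
import statistics

def delphi_rounds(estimates, rounds=3, weight=0.5):
    """Toy Delphi simulation: after seeing the anonymous summary, every
    expert moves a fraction `weight` of the way toward the previous
    round's group median. Returns (median, spread) per round, where
    spread is max - min of the current estimates."""
    history = []
    for _ in range(rounds + 1):
        med = statistics.median(estimates)
        history.append((med, max(estimates) - min(estimates)))
        estimates = [e + weight * (med - e) for e in estimates]
    return history

# Five hypothetical experts forecasting, say, units sold next quarter:
for med, spread in delphi_rounds([120, 150, 90, 200, 140]):
    print(med, spread)  # the median holds steady while the spread halves
```

This mirrors the prose: the range of answers decreases each round while the group converges on a consensus value. It also hints at the method's weakness, since the simulated convergence comes purely from social feedback rather than new information.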
Designed as an optimized method for unleashing the wisdom of crowds, this approach implements real-time feedback loops around synchronous groups of users with the goal of achieving more accurate insights from fewer users. Human swarming (sometimes referred to as social swarming) is modeled after biological processes in birds, fish, and insects, and is enabled among networked users by using mediating software such as the UNU collective intelligence platform. As published by Rosenberg (2015), such real-time control systems enable groups of human participants to behave as a unified collective intelligence.[10] When logged into the UNU platform, for example, groups of distributed users can collectively answer questions, generate ideas, and make predictions as a singular emergent entity.[11][12] Early testing shows that human swarms can out-predict individuals across a variety of real-world projections.[13][14]
Hugo-winning writer John Brunner's 1975 science fiction novel The Shockwave Rider includes an elaborate planet-wide information futures and betting pool called "Delphi", based on the Delphi method.
Illusionist Derren Brown claimed to use the "Wisdom of Crowds" concept to explain how he correctly predicted the UK National Lottery results in September 2009. His explanation was met with criticism online by people who argued that the concept was misapplied.[15] The methodology employed was flawed: the sample of people could not have been totally objective and free in thought, because they were gathered multiple times and socialised with each other too much, a condition Surowiecki says is corrosive to the pure independence and diversity of mind required (Surowiecki 2004: 38). Groups thus fall into groupthink, increasingly making decisions based on each other's influence and therefore becoming less accurate. However, other commentators have suggested that, given the entertainment nature of the show, Brown's misapplication of the theory may have been a deliberate smokescreen to conceal his true method.[16][17]
This was also shown in the television series East of Eden, in which a social network of roughly 10,000 individuals came up with ideas to stop missiles in a very short span of time.[citation needed]
The Wisdom of Crowds would have a significant influence on the naming of the crowdsourcing creative company Tongal, an anagram of Galton, the surname of the social scientist highlighted in the introduction to Surowiecki's book. Francis Galton recognized the ability of a crowd's median weight guesses for oxen to exceed the accuracy of experts.[18]
In his book Embracing the Wide Sky, Daniel Tammet finds fault with this notion. Tammet points out the potential for problems in systems which have poorly defined means of pooling knowledge: subject-matter experts can be overruled and even wrongly punished by less knowledgeable persons in crowdsourced systems, citing a case of this on Wikipedia. Furthermore, Tammet mentions the assessment of the accuracy of Wikipedia as described in a study mentioned in Nature in 2005, outlining several flaws in the study's methodology, including that the study made no distinction between minor errors and large errors.
Tammet also cites Kasparov versus the World, an online competition that pitted the brainpower of tens of thousands of online chess players choosing moves in a match against Garry Kasparov, which was won by Kasparov, not the "crowd". Kasparov did, however, say: "It is the greatest game in the history of chess. The sheer number of ideas, the complexity, and the contribution it has made to chess make it the most important game ever played."
In his book You Are Not a Gadget, Jaron Lanier argues that crowd wisdom is best suited for problems that involve optimization, but ill-suited for problems that require creativity or innovation. In the online article Digital Maoism, Lanier argues that the collective is more likely to be smart only when
Lanier argues that only under those circumstances can a collective be smarter than a person. If any of these conditions are broken, the collective becomes unreliable or worse.
Iain Couzin, a professor in Princeton's Department of Ecology and Evolutionary Biology, and his student Albert Kao argue in a 2014 article in the journal Proceedings of the Royal Society that "the conventional view of the wisdom of crowds may not be informative in complex and realistic environments, and that being in small groups can maximize decision accuracy across many contexts." By "small groups," Couzin and Kao mean fewer than a dozen people.
They conclude that "the decisions of very large groups may be highly accurate when the information used is independently sampled, but they are particularly susceptible to the negative effects of correlated information, even when only a minority of the group uses such information."
https://en.wikipedia.org/wiki/The_Wisdom_of_Crowds
A community of practice (CoP) is a group of people who "share a concern or a passion for something they do and learn how to do it better as they interact regularly".[1] The concept was first proposed by cognitive anthropologist Jean Lave and educational theorist Etienne Wenger in their 1991 book Situated Learning.[2] Wenger significantly expanded on this concept in his 1998 book Communities of Practice.[3] A CoP can form around members' shared interests or goals. Through being part of a CoP, the members learn from each other and develop their identities.[2]
CoP members can engage with one another in physical settings (for example, in a lunchroom at work, an office, a factory floor), but CoP members are not necessarily co-located.[3] They can form a virtual community of practice (VCoP)[4] where the CoP is primarily located in an online community such as a discussion board, newsgroup, or a social networking service.
Communities of practice have existed for as long as people have been learning and sharing their experiences through storytelling. The idea is rooted in American pragmatism, especially C. S. Peirce's concept of the "community of inquiry",[5] as well as John Dewey's principle of learning through occupation.[6]
For Etienne Wenger, learning in a CoP is central to identity because learning is conceptualized as social participation – the individual actively participates in the practices of social communities, thus developing their role and identity within the community.[7] In this context, a community of practice is a group of individuals with shared interests or goals who develop both their individual and shared identities through community participation.
The structural characteristics of a community of practice are defined as a domain of knowledge, a notion of community, and a practice:
In many organizations, communities of practice are integral to the organization structure.[8] These communities take on knowledge-stewarding tasks that were previously covered by more formal organizational structures. Both formal and informal communities of practice may be established in an organization. There is a great deal of interest within organizations to encourage, support, and sponsor communities of practice to benefit from shared knowledge that may lead to higher productivity.[citation needed] Communities of practice are viewed by many within business settings as a means to explicate tacit knowledge, or the "know-how" that is difficult to articulate.
An important aspect and function of communities of practice is increasing organization performance. Lesser and Storck identify four areas of organizational performance that can be affected by communities of practice:[9]
Collaboration constellations differ in various ways. Some are under organizational control (e.g., teams), whereas others, like CoPs, are self-organized or under the control of individuals. Researchers have studied how collaboration types vary in their temporal or boundary focus, and the basis of their members' relationships.[10]
A project team differs from a community of practice in several ways.[citation needed]
In some cases, it may be useful to differentiate a CoP from a community of inquiry (CoI).
Social capital is a multi-dimensional concept with public and private facets.[11] That is, social capital may provide value to both the individual and the group as a whole. As participants build informal connections in their community of practice, they also share their expertise, learn from others, participate in the group, and demonstrate their expertise – all of which can be viewed as acquiring social capital.
Wasko and Faraj describe three kinds of knowledge: knowledge as object, knowledge embedded within individuals, and knowledge embedded in a community.[12] CoPs are associated with finding, sharing, transferring, and archiving knowledge, as well as making "expertise" explicit, or articulating tacit knowledge. Tacit knowledge is considered to be valuable context-based experience that cannot easily be captured, codified, and stored.[13][14]
Because knowledge management is seen "primarily as a problem of capturing, organizing, and retrieving information, evoking notions of databases, documents, query languages, and data mining",[15] the community of practice is viewed as a potentially rich source of helpful information in the form of actual experiences; in other words, best practices. Thus, for knowledge management, if community practices within a CoP can be codified and archived, they provide rich content and contexts that can be accessed for future use.
Members of CoPs are thought to be more efficient and effective conduits of information and experiences. While organizations tend to provide manuals to meet employee training needs, CoPs help foster the process of storytelling among colleagues, which helps them strengthen their skills.[16]
Studies have shown that workers spend a third of their time looking for information and are five times more likely to turn to a co-worker than to an explicit source of information (a book, manual, or database).[13] Conferring with CoP members saves time because community members have tacit knowledge, which can be difficult to store and retrieve for people unfamiliar with the CoP. For example, someone might share one of their best ways of responding to a situation based on their experiences, which may enable another person to avoid mistakes, thus shortening the learning curve. In a CoP, members can openly discuss and brainstorm about a project, which can lead to new capabilities. The type of information that is shared and learned in a CoP is boundless.[17] Paul Duguid distinguishes tacit knowledge (knowing how) from explicit knowledge (knowing what).[18] Performing optimally in a job requires the application of theory in practice. CoPs help individuals bridge the gap between knowing what and knowing how.[18]
As members of CoPs, individuals report increased communication with people (professionals, interested parties, hobbyists), less dependence on geographic proximity, and the generation of new knowledge.[19]This assumes that interactions occur naturally when individuals come together. Social and interpersonal factors play a role in the interaction, and research shows that some individuals share or withhold knowledge and expertise from others because their knowledge relates to their professional identities, position, and interpersonal relationships.[20][21]
Communicating with others in a CoP involves creating social presence. Chih-Hsiung defines social presence as "the degree of salience of another person in an interaction and the consequent salience of an interpersonal relationship".[22] Social presence may affect the likelihood of an individual participating in a CoP (especially in online environments and virtual communities of practice).[22] CoP managers often encounter barriers that inhibit knowledge exchange between members. Reasons for these barriers include egos and personal attacks, large and overwhelming CoPs, and time constraints.[12]
Motivation to share knowledge is critical to success in communities of practice. Studies show that members are motivated to become active participants in a CoP when they view knowledge as a public good, a moral obligation and/or a community interest.[19]CoP members can also be motivated to participate through tangible returns (promotion, raises or bonuses), intangible returns (reputation, self-esteem) and community interest (exchange of practice related knowledge, interaction).
Collaboration is essential to ensure that communities of practice thrive. In a study of knowledge exchange in a business network, Sveiby and Simons found that more seasoned colleagues tend to foster a more collaborative culture.[23] Additionally, they noted that a higher educational level predicted a tendency to favor collaboration.
What makes a community of practice succeed depends on the purpose and objective of the community as well as the interests and resources of its members. Wenger identified seven actions for cultivating communities of practice.
Since the publication of "Situated Learning: Legitimate Peripheral Participation",[2] communities of practice have been the focus of attention, first as a theory of learning and later as part of the field of knowledge management.[24] Andrew Cox offers a more critical view of the different ways in which the term communities of practice can be interpreted.[25]
To understand how learning occurs outside the classroom, Lave and Wenger studied how newcomers or novices become established community members within an apprenticeship.[2] Lave and Wenger first used the term communities of practice to describe learning through practice and participation, which they described as situated learning.
The process by which a community member becomes part of a community occurs through legitimate peripheral participation. Legitimation and participation define ways of belonging to a community, whereas peripherality and participation are concerned with location and identity in the social world.[2]
Lave and Wenger's research examined how a community and its members learn within apprenticeships. When newcomers join an established community, they initially observe and perform simple tasks in basic roles while they learn community norms and practices. For example, an apprentice electrician might watch and learn through observation before doing any electrical work, but would eventually take on more complicated electrical tasks. Lave and Wenger described this socialization process as legitimate peripheral participation. Lave and Wenger referred to a "community of practice" as a group that shares a common interest and desire to learn from and contribute to the community.[2]
In his later work, Wenger shifted his focus from legitimate peripheral participation toward tensions that emerge from dualities.[3] He identifies four dualities that exist in communities of practice: participation–reification, designed–emergent, identification–negotiability, and local–global. The participation–reification duality has been a particular focus in the field of knowledge management.
Wenger describes three dimensions of practice that support community cohesion: mutual engagement, negotiation of a joint enterprise, and shared repertoire.[3]
The communities Lave and Wenger studied were naturally forming as practitioners of craft and skill-based activities met to share experiences and insights.[2]
Lave and Wenger observed situated learning within a community of practice among Yucatán midwives, Liberian tailors, navy quartermasters, and meat cutters,[2] as well as insurance claims processors.[3] Other fields have used the concept of CoPs, including education,[26] sociolinguistics, material anthropology, medical education, second language acquisition,[27] Parliamentary Budget Offices,[28] the health care and business sectors,[29] research data,[30][31] and child mental health practice (AMBIT).
A famous example of a community of practice within an organization is the Xerox customer service representatives who repaired machines.[32] The Xerox reps began exchanging repair tips and tricks in informal meetings over breakfast or lunch. Eventually, Xerox saw the value of these interactions and created the Eureka project, which allowed these interactions to be shared across its global network of representatives. The Eureka database is estimated to have saved the corporation $100 million.
Large virtual CoPs also exist online.
https://en.wikipedia.org/wiki/Community_of_Practice
Figure Eight (formerly known as Dolores Labs and CrowdFlower) was a human-in-the-loop machine learning and artificial intelligence company based in San Francisco.
Figure Eight technology uses human intelligence to do simple tasks such as transcribing text or annotating images to train machine learning algorithms.[1]
Figure Eight's software automates tasks for machine learning algorithms, which can be used to improve catalog search results, approve photos, or support customers. The technology can also be used in the development of self-driving cars, intelligent personal assistants, and other technology that uses machine learning.[2]
In March 2019, Figure Eight was acquired by Appen for $300 million.[3]
Originally called Dolores Labs, the company was founded in 2007 by Lukas Biewald and Chris Van Pelt.[4] They found a need for temporary workers to do simple tasks that could not be automated.[5] After experimenting with pictures and questions related to them on Amazon's Mechanical Turk, a crowdsourcing internet marketplace, they encouraged others to participate in their experimentation through the site Facestat. They collected 20 million assessments of people's faces within three months and began to add queries for companies needing data, such as the event listing site Zvents and O'Reilly Media.[6]
Dolores Labs, initially in a loft space in the Mission District, briefly moved to an office on Valencia Street, which it outgrew in nine months.[7] The founders felt the name Dolores Labs was too research-oriented and sounded like experimentation, so the company was renamed CrowdFlower. In 2009, CrowdFlower held an official launch at the TechCrunch50 conference. A sleek logo replaced its previous mint-eating alligator. The company moved to its third office in the Mission in early 2010.[8] The name Dolores Labs was adopted by Dan Scholnick of Trinity Ventures, who turned the name and previous office space into a co-working and startup incubator space.[7]
In 2009, the company provided work for refugees in Kenya who completed microtasks; iPhone users donated their time by checking for accuracy through the app Give Work.[9] After the 2010 Haiti earthquake, CrowdFlower again worked with Samasource to help Haitians find work through the application GiveWork.[10][11]
Founders Lukas Biewald and Chris Van Pelt were included on Inc.'s 30 Under 30 list in 2010.[12]
In 2011, CrowdFlower raised a Series B funding round that totaled $9.3 million and included investor Harmony Venture Partners. The company's Series C funding, which closed in September 2014, totaled $12.5 million.[13]In 2014, CrowdFlower was named Best in Show at FinovateFall.[14]
The company established a scientific advisory board in 2016, made up of entrepreneur Barney Pell, Kaggle founder and CEO Anthony Goldbloom, and Google staff research engineer Pete Warden.[15] That same year, it raised a $10 million Series D funding round led by Microsoft Ventures, Canvas Ventures, and Trinity Ventures.[16] The following year, CrowdFlower raised $20 million in a venture capital round led by Industry Ventures that included Salesforce Ventures, Canvas Ventures, Microsoft Ventures, and Trinity Ventures.[17] The company announced its international expansion with an office in Israel in October 2016.[18]
CrowdFlower was named to the 2017 list of Cool Vendors released by Gartner.[19] That same year, it received AWS Machine Learning Competency status from Amazon Web Services.[20] In 2018, CrowdFlower was included on the Forbes list of 100 Companies Leading the Way in A.I.[21]
The company raised $58 million in venture capital and was acquired by Appen in March 2019 for $300 million.[3] In 2020, Appen announced that it had "successfully transitioned all former Figure Eight assets."
In June 2012, the company released version 2.0 of its Real Time Foto Moderator, which checks photographs for adult or inappropriate content. The new version included two different "rule sets" for determining appropriate photos: a stricter rule set and a more flexible one. The update also added an option for moderators to specify why a photo is rejected.[22] That same year, Parse partnered with CrowdFlower to add photo moderation to its backend services designed for mobile app development.[23]
In November 2014, CrowdFlower announced support for eight new language crowds on its platform, bringing the total available at the time to twelve.[24]
In 2015, CrowdFlower AI launched at the Rich Data Summit. The AI platform combines machine learning and human-labeled training data to create data sets used for predictive models.[25]
In 2015, CrowdFlower announced the Data For Everyone initiative, which included a collection of data sets available to researchers and entrepreneurs.[26]
Microsoft partnered with CrowdFlower in October 2016 to create a "human-in-the-loop" platform using Microsoft Azure Machine Learning.[27] In May 2017, CrowdFlower released an enhancement for its Computer Vision software, announced during the Train AI conference, designed to simplify and speed up the process of annotating images.[28]
Figure Eight held TrainAI, a conference in San Francisco. In 2017, the company launched AI for Everyone at the TrainAI conference.[29] AI for Everyone is a contest run by Figure Eight for non-profit ventures and scientific research that aims to improve society, awarding $1 million in prize money toward projects using AI.[30] As of February 2018, six winners had been announced, with projects ranging from computer vision for cancer research to natural language processing for hate speech.[31]
The company was a Machine Learning Competency Partner in Amazon's AWS Machine Learning Partner Solutions program.[clarification needed][32] Figure Eight works with companies such as Autodesk, Google, Facebook, Twitter, Cisco Systems, GitHub, Mozilla, VMware,[2] eBay, Etsy, Toyota, and American Express.[33]
https://en.wikipedia.org/wiki/CrowdFlower
Folksonomy is a classification system in which end users apply public tags to online items, typically to make those items easier for themselves or others to find later. Over time, this can give rise to a classification system based on those tags and how often they are applied or searched for, in contrast to a taxonomic classification designed by the owners of the content and specified when it is published.[1][2] This practice is also known as collaborative tagging,[3][4] social classification, social indexing, and social tagging. Folksonomy was originally "the result of personal free tagging of information [...] for one's own retrieval",[5] but online sharing and interaction expanded it into collaborative forms. Social tagging is the application of tags in an open online environment where the tags of other users are available to others. Collaborative tagging (also known as group tagging) is tagging performed by a group of users. This type of folksonomy is commonly used in cooperative and collaborative projects such as research, content repositories, and social bookmarking.
The term was coined by Thomas Vander Wal in 2004[5][6][7] as a portmanteau of folk and taxonomy. Folksonomies became popular as part of social software applications such as social bookmarking and photograph annotation that enable users to collectively classify and find information via shared tags. Some websites include tag clouds as a way to visualize tags in a folksonomy.[8]
Folksonomies can be used for K–12 education, business, and higher education. More specifically, folksonomies may be implemented for social bookmarking, teacher resource repositories, e-learning systems, collaborative learning, collaborative research, professional development, and teaching. Wikipedia is a prime example of folksonomy.[9][better source needed][clarification needed]
Folksonomies are a trade-off between traditional centralized classification and no classification at all,[10] and they offer several advantages.[11][12][13]
There are also several disadvantages to the use of tags and folksonomies,[14] and some of the advantages can lead to problems. For example, the simplicity of tagging can result in poorly applied tags.[15] Further, while controlled vocabularies are exclusionary by nature,[16] tags are often ambiguous and overly personalized.[17] Users apply tags to documents in many different ways, and tagging systems often lack mechanisms for handling synonyms, acronyms, and homonyms, as well as spelling variations such as misspellings, singular/plural forms, and conjugated and compound words. Some tagging systems do not support tags consisting of multiple words, resulting in tags like "viewfrommywindow". Sometimes users choose specialized tags or tags without meaning to others.
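A few of the surface-level problems above (case variations, separator characters, singular/plural forms) can be reduced by simple tag normalization. The sketch below is illustrative only, with hypothetical rules and tag data; real systems would need dictionaries or statistical methods to handle synonyms and homonyms, which no string rule can resolve.

```python
import re

def normalize_tag(tag: str) -> str:
    """Apply a few simple, illustrative normalization rules to a free-form tag."""
    tag = tag.strip().lower()               # fold case variations
    tag = re.sub(r"[_\-]+", " ", tag)       # treat '_' and '-' as word separators
    if tag.endswith("s") and len(tag) > 3:  # crude singular/plural folding
        tag = tag[:-1]
    return tag

# Several surface variants collapse to one canonical tag.
raw_tags = ["Photos", "photo", "PHOTO", "photos"]
print({normalize_tag(t) for t in raw_tags})  # {'photo'}
```

Note that such rules are lossy: the plural-stripping heuristic, for instance, would also mangle tags like "paris", which is why production systems prefer curated synonym lists over blind rules.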
A folksonomy emerges when users tag content or information, such as web pages, photos, videos, podcasts, tweets, scientific papers, and others. Strohmaier et al.[18] elaborate the concept: the term "tagging" refers to a "voluntary activity of users who are annotating resources with terms – so-called 'tags' – freely chosen from an unbounded and uncontrolled vocabulary". Others explain tags as unstructured textual labels[19] or keywords,[17] noting that they appear as a simple form of metadata.[20]
Folksonomies consist of three basic entities: users, tags, and resources. Users create tags to mark resources such as web pages, photos, videos, and podcasts. These tags are used to manage, categorize, and summarize online content. This collaborative tagging system also uses the tags to index information, facilitate searches, and navigate resources. Folksonomy also includes a set of URLs that are used to identify resources that have been referred to by users of different websites. These systems also include category schemes that can organize tags at different levels of granularity.[21]
Vander Wal identifies two types of folksonomy: broad and narrow.[22]A broad folksonomy arises when multiple users can apply the same tag to an item, providing information about which tags are the most popular. A narrow folksonomy occurs when users, typically fewer in number and often including the item's creator, tag an item with tags that can each be applied only once. While both broad and narrow folksonomies enable the searchability of content by adding an associated word or phrase to an object, a broad folksonomy allows for sorting based on the popularity of each tag, as well as the tracking of emerging trends in tag usage and developing vocabularies.[22]
An example of a broad folksonomy is del.icio.us, a website where users can tag any online resource they find relevant with their own personal tags. The photo-sharing website Flickr is an oft-cited example of a narrow folksonomy.
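The three entities of a folksonomy can be modeled as (user, tag, resource) triples, and in a broad folksonomy the popularity of a tag on an item is simply the number of users who applied it. A minimal sketch, using entirely made-up users, tags, and URLs:

```python
from collections import Counter

# Hypothetical (user, tag, resource) assignments -- a folksonomy's three entities.
assignments = [
    ("alice", "python",   "http://example.org/tutorial"),
    ("bob",   "python",   "http://example.org/tutorial"),
    ("bob",   "beginner", "http://example.org/tutorial"),
    ("carol", "python",   "http://example.org/tutorial"),
]

def tag_popularity(triples, resource):
    """Broad folksonomy: several users may apply the same tag to one item,
    so counting applications per tag yields that tag's popularity."""
    return Counter(tag for _, tag, res in triples if res == resource)

pop = tag_popularity(assignments, "http://example.org/tutorial")
print(pop.most_common())  # [('python', 3), ('beginner', 1)]
```

In a narrow folksonomy the same structure applies, but each tag can be attached to an item only once, so the counts carry no popularity signal.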
'Taxonomy' refers to a hierarchical categorization in which relatively well-defined classes are nested under broader categories. A folksonomy establishes categories (each tag is a category) without stipulating or necessarily deriving a hierarchical structure of parent-child relations among different tags. (Work has been done on techniques for deriving at least loose hierarchies from clusters of tags.[23])
Supporters of folksonomies claim that they are often preferable to taxonomies because folksonomies democratize the way information is organized, they are more useful to users because they reflect current ways of thinking about domains, and they express more information about domains.[24]Critics claim that folksonomies are messy and thus harder to use, and can reflect transient trends that may misrepresent what is known about a field.
An empirical analysis of the complex dynamics of tagging systems, published in 2007,[25] has shown that consensus around stable distributions and shared vocabularies does emerge, even in the absence of a central controlled vocabulary. For content to be searchable, it should be categorized and grouped. While this was believed to require commonly agreed-on sets of content-describing tags (much like the keywords of a journal article), some research has found that in large folksonomies common structures also emerge at the level of categorizations.[26] Accordingly, it is possible to devise mathematical models of collaborative tagging that allow for translating from personal tag vocabularies (personomies) to the vocabulary shared by most users.[27]
Folksonomy is unrelated to folk taxonomy, a cultural practice that has been widely documented in anthropological and folkloristic work. Folk taxonomies are culturally supplied, intergenerationally transmitted, and relatively stable classification systems that people in a given culture use to make sense of the entire world around them (not just the Internet).[21]
The study of the structuring or classification of folksonomy is termed folksontology.[28] This branch of ontology deals with the intersection between highly structured taxonomies or hierarchies and loosely structured folksonomy, asking what features a system of classification can best take from each. The strength of flat-tagging schemes is their ability to relate one item to others like it. Folksonomy allows large, disparate groups of users to collaboratively label massive, dynamic information systems. The strength of taxonomies is their browsability: users can easily start from more generalized knowledge and target their queries toward more specific and detailed knowledge.[29] Folksontology looks to categorize tags and thus create browsable spaces of information that are easy to maintain and expand.
Social tagging for knowledge acquisition is the specific use of tagging for finding and re-finding specific content for an individual or group. Social tagging systems differ from traditional taxonomies in that they are community-based systems lacking the traditional hierarchy of taxonomies. Rather than a top-down approach, social tagging relies on users to create the folksonomy from the bottom up.[30]
Common uses of social tagging for knowledge acquisition include personal development for individual use and collaborative projects. Social tagging is used for knowledge acquisition in secondary, post-secondary, and graduate education as well as personal and business research. The benefits of finding/re-finding source information are applicable to a wide spectrum of users. Tagged resources are located through search queries rather than searching through a more traditional file folder system.[31]The social aspect of tagging also allows users to take advantage of metadata from thousands of other users.[30]
Users choose individual tags for stored resources. These tags reflect personal associations, categories, and concepts, all of which are individual representations based on meaning and relevance to that individual. The tags, or keywords, are designated by users. Consequently, tags represent a user's associations corresponding to the resource. Commonly tagged resources include videos, photos, articles, websites, and email.[32] Tags are beneficial for two reasons. First, they help to structure and organize large amounts of digital resources in a manner that makes them easily accessible when users attempt to locate a resource at a later time. The second aspect is social in nature: users may search for new resources and content based on the tags of other users. Even the act of browsing through common tags may lead to further resources for knowledge acquisition.[30]
Tags that occur more frequently with specific resources are said to be more strongly connected. Furthermore, tags may be connected to each other. This may be seen in the frequency in which they co-occur. The more often they co-occur, the stronger the connection. Tag clouds are often utilized to visualize connectivity between resources and tags. Font size increases as the strength of association increases.[32]
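The co-occurrence counting described above can be sketched in a few lines; the resources and tag sets here are hypothetical, and a tag-cloud renderer would scale font size with the resulting counts:

```python
from collections import Counter
from itertools import combinations

# Hypothetical tag sets applied to three resources.
resource_tags = {
    "photo1": {"sunset", "beach", "travel"},
    "photo2": {"sunset", "beach"},
    "photo3": {"beach", "travel"},
}

# Count how often each pair of tags appears on the same resource;
# a higher count indicates a stronger connection between the tags.
cooccurrence = Counter()
for tags in resource_tags.values():
    for pair in combinations(sorted(tags), 2):  # sorted() canonicalizes the pair
        cooccurrence[pair] += 1

print(cooccurrence.most_common())
```

Here ("beach", "sunset") and ("beach", "travel") each co-occur twice, so those links would render most prominently.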
Tags show interconnections of concepts that were formerly unknown to a user. Therefore, a user's current cognitive constructs may be modified or augmented by the metadata information found in aggregated social tags. This process promotes knowledge acquisition through cognitive irritation and equilibration. This theoretical framework is known as the co-evolution model of individual and collective knowledge.[32]
The co-evolution model focuses on cognitive conflict, in which a learner's prior knowledge and the information received from the environment are dissimilar to some degree.[30][32] When this incongruence occurs, the learner must work through a process of cognitive equilibration in order to make personal cognitive constructs and outside information congruent. According to the co-evolution model, this may require the learner to modify existing constructs or simply add to them.[30] The additional cognitive effort promotes information processing, which in turn allows individual learning to occur.[32]
https://en.wikipedia.org/wiki/Collaborative_tagging
Web 2.0 (also known as the participative (or participatory)[1] web and the social web)[2] refers to websites that emphasize user-generated content, ease of use, participatory culture, and interoperability (i.e., compatibility with other products, systems, and devices) for end users.
The term was coined by Darcy DiNucci in 1999[3] and later popularized by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference in 2004.[4][5][6] Although the term mimics the numbering of software versions, it does not denote a formal change in the nature of the World Wide Web,[7] but merely describes a general change that occurred during this period as interactive websites proliferated and came to overshadow the older, more static websites of the original Web.[2]
A Web 2.0 website allows users to interact and collaborate through social media dialogue as creators of user-generated content in a virtual community. This contrasts with the first generation of Web 1.0-era websites, where people were limited to passively viewing content. Examples of Web 2.0 features include social networking sites or social media sites (e.g., Facebook), blogs, wikis, folksonomies ("tagging" keywords on websites and links), video sharing sites (e.g., YouTube), image sharing sites (e.g., Flickr), hosted services, Web applications ("apps"), collaborative consumption platforms, and mashup applications.
Whether Web 2.0 is substantially different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon.[8] His original vision of the Web was "a collaborative medium, a place where we [could] all meet and read and write".[9][10] On the other hand, the term Semantic Web (sometimes referred to as Web 3.0)[11] was coined by Berners-Lee to refer to a web of content where the meaning can be processed by machines.[12]
Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1989 to 2004. According to Graham Cormode and Balachander Krishnamurthy, "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content".[13] Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers or on free web hosting services such as Tripod and the now-defunct GeoCities.[14][15] With Web 2.0, it became common for average web users to have social-networking profiles (on sites such as Myspace and Facebook) and personal blogs (on sites like Blogger, Tumblr, and LiveJournal) through either a low-cost web hosting service or a dedicated host. In general, content was generated dynamically, allowing readers to comment directly on pages in a way that was not common previously.[citation needed]
Some Web 2.0 capabilities were present in the days of Web 1.0, but were implemented differently. For example, a Web 1.0 site may have had a guestbook page for visitor comments, instead of a comment section at the end of each page (typical of Web 2.0). During Web 1.0, server performance and bandwidth had to be considered – lengthy comment threads on multiple pages could potentially slow down an entire site. Terry Flew, in his third edition of New Media, described the differences between Web 1.0 and Web 2.0 as a
"move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on "tagging" website content using keywords (folksonomy)."
Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 "craze".[16]
Several design elements were common to Web 1.0 sites.[17]
The term "Web 2.0" was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article "Fragmented Future":[3][20]
"The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven."
Writing when Palm Inc. introduced its first web-capable personal digital assistant (supporting Web access with WAP), DiNucci saw the Web "fragmenting" into a future that extended beyond the browser/PC combination it was identified with. She focused on how the basic information structure and hyper-linking mechanism introduced by HTTP would be used by a variety of devices and platforms. As such, her "2.0" designation refers to a next version of the Web that does not directly relate to the term's current use.
The term Web 2.0 did not resurface until 2002.[21][22][23] Companies such as Amazon, Facebook, Twitter, and Google made it easy to connect and engage in online transactions. Web 2.0 introduced new features, such as multimedia content and interactive web applications, which mainly consisted of two-dimensional screens.[24] Kinsley and Eric focus on the concepts currently associated with the term where, as Scott Dietzen puts it, "the Web becomes a universal, standards-based integration platform".[23] In 2004, the term began to gain popularity when O'Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you".[25] They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value. O'Reilly and Battelle contrasted Web 2.0 with what they called "Web 1.0", a term they associated with the business models of Netscape and the Encyclopædia Britannica Online. For example,
"Netscape framed 'the web as platform' in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the 'horseless carriage' framed the automobile as an extension of the familiar, Netscape promoted a 'webtop' to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.[26]"
In short, Netscape focused on creating software, releasing updates and bug fixes, and distributing it to the end users. O'Reilly contrasted this with Google, a company that did not, at the time, focus on producing end-user software, but instead on providing a service based on data, such as the links that Web page authors make between sites. Google exploits this user-generated content to offer Web searches based on reputation through its "PageRank" algorithm. Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called "the perpetual beta". A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia – while the Britannica relies upon experts to write articles and release them periodically in publications, Wikipedia relies on trust in (sometimes anonymous) community members to constantly write and edit content. Wikipedia editors are not required to have educational credentials, such as degrees, in the subjects they edit. Wikipedia is based not on subject-matter expertise but on an adaptation of the open source software adage "given enough eyeballs, all bugs are shallow". This maxim states that if enough users can examine a software product's code (or a website), they will be able to fix any "bugs" or other problems. The Wikipedia volunteer editor community produces, edits, and updates articles constantly. Web 2.0 conferences have been held every year since 2004, attracting entrepreneurs, representatives from large companies, tech experts and technology reporters.
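The reputation-based ranking described above can be illustrated with a toy version of the PageRank idea. This is only a sketch of the underlying principle (rank flows along user-created links), not Google's actual implementation; the link graph, damping factor, and iteration count below are all illustrative assumptions.

```python
# Toy PageRank: a page's rank depends on the ranks of pages linking to it.
# The three-page link graph is made up for illustration.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            # Each page distributes its rank evenly across its outgoing links.
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(links)
# Page "c" is linked from both "a" and "b", so it ends up ranked highest.
```

The point of the sketch is that the ranking is computed entirely from link structure that page authors create, which is exactly the user-generated data O'Reilly argued could be "harnessed".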
The popularity of Web 2.0 was acknowledged when TIME magazine named "You" its 2006 Person of the Year.[27] That is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media sharing sites.
In the cover story,Lev Grossmanexplains:
"It's a story about community and collaboration on a scale never seen before. It's about the cosmic compendium of knowledge Wikipedia and the million-channel people's network YouTube and the online metropolis MySpace. It's about the many wresting power from the few and helping one another for nothing and how that will not only change the world but also change the way the world changes."
Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site's content by commenting on published articles, or creating a user account or profile on the site, which may enable increased participation. By placing greater emphasis on these already-extant capabilities, Web 2.0 sites encourage users to rely more on their browser for user interface, application software ("apps") and file storage facilities. This has been called "network as platform" computing.[5] Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress' easy-to-use blog and website creation tools), "tagging" (which enables users to label websites, videos or photos in some fashion), "like" buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking.
Users can provide the data and exercise some control over what they share on a Web 2.0 site.[5][28] These sites may have an "architecture of participation" that encourages users to add value to the application as they use it.[4][5] Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects.[29] Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet.[30]
Web 2.0 offers almost all users the same freedom to contribute,[31] which can lead to effects that members of a given community perceive as more or less productive, and in turn to disagreement and emotional distress. Because group members who do not contribute to the provision of goods (i.e., to the creation of a user-generated website) cannot be excluded from sharing the benefits (of using the website), some members may prefer to withhold their contribution of effort and "free ride" on the contributions of others.[32] This requires what is sometimes called radical trust by the management of the Web site.
Encyclopaedia BritannicacallsWikipedia"the epitome of the so-called Web 2.0" and describes what many view as the ideal of a Web 2.0 platform as "an egalitarian environment where the web of social software enmeshes users in both their real and virtual-reality workplaces."[33]
According to Best,[34] the characteristics of Web 2.0 are rich user experience, user participation, dynamic content, metadata, Web standards, and scalability. Further characteristics, such as openness, freedom,[35] and collective intelligence[36] by way of user participation, can also be viewed as essential attributes of Web 2.0. Some websites require users to contribute user-generated content to have access to the website, to discourage "free riding".
The key features of Web 2.0 include:[citation needed]
The client-side (Web browser) technologies used in Web 2.0 development include Ajax and JavaScript frameworks. Ajax programming uses JavaScript and the Document Object Model (DOM) to update selected regions of the page area without undergoing a full page reload. To allow users to continue interacting with the page, communications such as data requests going to the server are separated from data coming back to the page (asynchronously).
Otherwise, the user would have to routinely wait for the data to come back before they could do anything else on that page, just as a user has to wait for a page to complete a full reload. This also increases the overall performance of the site, as requests can be sent without being blocked or queued behind data returning to the client. The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation), two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in their Web application.
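The structured payload such an Ajax request fetches can be produced by any server-side language the article lists; the sketch below uses Python's standard `json` module. The article, comment, and field names are purely illustrative, and the "client" step here only parses the payload (a real browser client would go on to update the DOM from it).

```python
import json

# A hypothetical payload an Ajax endpoint might return: only the data for
# the region of the page being updated, not a full HTML document.
comments = [
    {"author": "alice", "text": "Great article!", "likes": 3},
    {"author": "bob", "text": "Thanks for sharing.", "likes": 1},
]
payload = json.dumps({"article_id": 42, "comments": comments})

# The client parses the structured data and redraws just the comment
# section, leaving the rest of the page untouched.
received = json.loads(payload)
```

Because the payload carries data rather than markup, the same endpoint can serve any client able to parse JSON, which is part of what makes these formats attractive for Web applications.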
When this data is received via Ajax, the JavaScript program then uses the Document Object Model to dynamically update the Web page based on the new data, allowing for rapid and interactive user experience. In short, using these techniques, web designers can make their pages function like desktop applications. For example,Google Docsuses this technique to create a Web-based word processor.
As a widely available plug-in independent of W3C standards (the World Wide Web Consortium is the governing body of Web standards and protocols), Adobe Flash was capable of doing many things that were not possible pre-HTML5. Of Flash's many capabilities, the most commonly used was its ability to integrate streaming multimedia into HTML pages. With the introduction of HTML5 in 2010 and growing concerns about Flash's security, Flash became obsolete, with browser support ending on December 31, 2020.
In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks use the same technology as JavaScript, Ajax, and the DOM. However, frameworks smooth over inconsistencies between Web browsers and extend the functionality available to developers. Many of them also come with customizable, prefabricated 'widgets' that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel.
On the server side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as Perl, PHP, Python, and Ruby, as well as Enterprise Java (J2EE) and Microsoft .NET Framework, are used by developers to output data dynamically using information from files and databases. This allows websites and web services to share machine-readable formats such as XML (Atom, RSS, etc.) and JSON. When data is available in one of these formats, another website can use it to integrate a portion of that site's functionality.
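Emitting one of these machine-readable formats server-side can be quite small. The sketch below builds an RSS-2.0-style feed with Python's standard `xml.etree.ElementTree`; it is a minimal illustration only (a real feed would also carry `link`, `description`, and date elements), and the blog title and post titles are made up.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of a server emitting an RSS-2.0-style feed.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Blog"

# One <item> per post; titles here are illustrative.
for post_title in ["First post", "Second post"]:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = post_title

# The serialized XML is what another site or feed reader would fetch.
feed_xml = ET.tostring(rss, encoding="unicode")
```

Once published at a stable URL, any other site or aggregator that understands the format can consume this output without coordination with the publisher.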
Web 2.0 can be described in three parts:
As such, Web 2.0 draws together the capabilities of client- and server-side software, content syndication and the use of network protocols. Standards-oriented Web browsers may use plug-ins and software extensions to handle the content and user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment known as "Web 1.0".
Web 2.0 sites include the following features and techniques, referred to as the acronymSLATESby Andrew McAfee:[37]
While SLATES forms the basic framework of Enterprise 2.0, it does not contradict all of the higher level Web 2.0 design patterns and business models. It includes discussions of self-service IT, the long tail of enterprise IT demand, and many other consequences of the Web 2.0 era in enterprise uses.[38]
A third important part of Web 2.0 is thesocial web. The social Web consists of a number of online tools and platforms where people share their perspectives, opinions, thoughts and experiences. Web 2.0 applications tend to interact much more with the end user. As such, the end user is not only a user of the application but also a participant by:
The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to append a flurry of 2.0's to existing concepts and fields of study,[39] including Library 2.0, Social Work 2.0,[40] Enterprise 2.0, PR 2.0,[41] Classroom 2.0,[42] Publishing 2.0,[43] Medicine 2.0,[44] Telco 2.0, Travel 2.0, Government 2.0,[45] and even Porn 2.0.[46] Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas. For example, in the Talis white paper "Library 2.0: The Challenge of Disruptive Innovation", Paul Miller argues
"Blogs, wikis and RSS are often held up as exemplary manifestations of Web 2.0. A reader of a blog or a wiki is provided with tools to add a comment or even, in the case of the wiki, to edit the content. This is what we call the Read/Write web. Talis believes thatLibrary 2.0means harnessing this type of participation so that libraries can benefit from increasingly rich collaborative cataloging efforts, such as including contributions from partner libraries as well as adding rich enhancements, such as book jackets or movie files, to records from publishers and others."[47]
Here, Miller links Web 2.0 technologies and the culture of participation that they engender to the field of library science, supporting his claim that there is now a "Library 2.0". Many of the other proponents of new 2.0s mentioned here use similar methods. The meaning of Web 2.0 is role dependent. For example, some use Web 2.0 to establish and maintain relationships through social networks, while some marketing managers might use this promising technology to "end-run traditionally unresponsive I.T. department[s]."[48]
There is a debate over the use of Web 2.0 technologies in mainstream education. Issues under consideration include the understanding of students' different learning modes; the conflicts between ideas entrenched in informal online communities and educational establishments' views on the production and authentication of 'formal' knowledge; and questions about privacy, plagiarism, shared authorship and the ownership of knowledge and information produced and/or published on line.[49]
Web 2.0 is used by companies, non-profit organisations and governments for interactive marketing. A growing number of marketers are using Web 2.0 tools to collaborate with consumers on product development, customer service enhancement, product or service improvement and promotion. Companies can use Web 2.0 tools to improve collaboration with both their business partners and consumers. Among other things, company employees have created wikis—Websites that allow users to add, delete, and edit content—to list answers to frequently asked questions about each product, and consumers have added significant contributions.
Another marketing Web 2.0 lure is to make sure consumers can use the online community to network among themselves on topics of their own choosing.[50]Mainstream media usage of Web 2.0 is increasing. Saturating media hubs—likeThe New York Times,PC MagazineandBusiness Week— with links to popular new Web sites and services, is critical to achieving the threshold for mass adoption of those services.[51]User web content can be used to gauge consumer satisfaction. In a recent article for Bank Technology News, Shane Kite describes how Citigroup's Global Transaction Services unit monitorssocial mediaoutlets to address customer issues and improve products.[52]
In tourism industries, social media is an effective channel to attract travellers and promote tourism products and services by engaging with customers. The brand of a tourist destination can be built through marketing campaigns on social media and by engaging with customers. For example, the "Snow at First Sight" campaign launched by the State of Colorado aimed to build brand awareness of Colorado as a winter destination. The campaign used social media platforms such as Facebook and Twitter to promote the competition, and asked participants to share experiences, pictures and videos on social media platforms. As a result, Colorado enhanced its image as a winter destination and generated an estimated $2.9 million in campaign value.[citation needed]
Tourism organisations can build brand loyalty through interactive marketing campaigns on social media that engage customers rather than relying on passive communication tactics. For example, the "Moms" advisors of the Walt Disney World are responsible for offering suggestions and replying to questions about family trips at Walt Disney World. Because of their expertise in Disney, the "Moms" were chosen to represent the campaign.[53] Social networking sites, such as Facebook, can be used as a platform for providing detailed information about a marketing campaign, as well as real-time online communication with customers. Korean Airline Tour created and maintained a relationship with customers by using Facebook for individual communication purposes.[54]
Travel 2.0 refers to a model of Web 2.0 in tourism industries which provides virtual travel communities. The Travel 2.0 model allows users to create their own content and exchange it through globally interactive features on websites.[55][56] Users can also contribute their experiences, images and suggestions regarding their trips through online travel communities. For example, TripAdvisor is an online travel community which enables users to independently rate and share reviews and feedback on hotels and tourist destinations. Previously unacquainted users can interact socially and communicate through discussion forums on TripAdvisor.[57]
Social media, especially Travel 2.0 websites, plays a crucial role in the decision-making behaviour of travelers. User-generated content on social media tools has a significant impact on travelers' choices and organisation preferences. Travel 2.0 sparked a radical change in how travelers receive information, from business-to-customer marketing to peer-to-peer reviews. User-generated content became a vital tool for helping many travelers manage their international travels, especially first-time visitors.[58] Travellers tend to trust and rely on peer-to-peer reviews and virtual communication on social media rather than the information provided by travel suppliers.[57][53]
In addition, autonomous review features on social media can help travelers reduce risks and uncertainties before the purchasing stages.[55][58] Social media is also a channel for customer complaints and negative feedback, which can damage the image and reputation of organisations and destinations.[58] For example, a majority of UK travellers read customer reviews before booking hotels, and about half of customers would avoid hotels that receive negative feedback.[58]
Therefore, organisations should develop strategic plans to handle and manage negative feedback on social media. Although user-generated content and rating systems on social media are beyond a business's control, the business can monitor those conversations and participate in communities to enhance customer loyalty and maintain customer relationships.[53]
Web 2.0 could allow for more collaborative education. For example, blogs give students a public space to interact with one another and the content of the class.[59]Some studies suggest that Web 2.0 can increase the public's understanding of science, which could improve government policy decisions. A 2012 study by researchers at theUniversity of Wisconsin–Madisonnotes that
Ajax has prompted the development of Web sites that mimic desktop applications, such as word processing, the spreadsheet, and slide-show presentation. WYSIWYG wiki and blogging sites replicate many features of PC authoring applications. Several browser-based services have emerged, including EyeOS[61] and YouOS (no longer active).[62] Although named operating systems, many of these services are application platforms. They mimic the user experience of desktop operating systems, offering features and applications similar to a PC environment, and are able to run within any modern browser. However, these so-called "operating systems" do not directly control the hardware on the client's computer. Numerous web-based application services appeared during the dot-com bubble of 1997–2001 and then vanished, having failed to gain a critical mass of customers.
Many regard syndication of site content as a Web 2.0 feature. Syndication uses standardized protocols to permit end-users to make use of a site's data in another context (such as another Web site, a browser plugin, or a separate desktop application). Protocols permitting syndication include RSS (really simple syndication, also known as Web syndication), RDF (as in RSS 1.1), and Atom, all of which are XML-based formats. Observers have started to refer to these technologies as Web feeds.
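The consuming side of syndication is equally simple: a site or desktop reader parses the XML feed and repurposes its data in a new context. A minimal sketch in Python, using the standard library's XML parser on an inline sample feed (the feed contents and URLs below are illustrative):

```python
import xml.etree.ElementTree as ET

# An inline sample of the kind of RSS feed a publisher might syndicate.
feed = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

# A consuming site extracts just the headlines to display in its own
# context, e.g. a "latest posts" sidebar or a browser plugin.
root = ET.fromstring(feed)
headlines = [item.findtext("title") for item in root.iter("item")]
```

Because the protocol is standardized, the consumer needs no knowledge of how the publishing site is built, only of the feed format itself.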
Specialized protocols such asFOAFandXFN(both for social networking) extend the functionality of sites and permit end-users to interact without centralized Web sites.
Web 2.0 often uses machine-based interactions such as REST and SOAP. Servers often expose proprietary Application programming interfaces (APIs), but standard APIs (for example, for posting to a blog or notifying a blog update) have also come into use. Most communications through APIs involve XML or JSON payloads. REST APIs, through their use of self-descriptive messages and hypermedia as the engine of application state, should be self-describing once an entry URI is known. Web Services Description Language (WSDL) is the standard way of publishing a SOAP Application programming interface, and a range of Web service specifications exists.
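The "hypermedia as the engine of application state" idea can be seen in the shape of a response payload. Below is a hypothetical JSON response from a REST API; the resource, the `_links` convention, and the URIs are all illustrative assumptions, but they show how a client that knows only the entry URI can discover the next URI from the message itself rather than from hard-coded knowledge.

```python
import json

# A hypothetical self-descriptive REST response: the embedded hypermedia
# links tell the client where related resources live.
response = json.loads("""{
  "id": 7,
  "title": "Hello Web 2.0",
  "_links": {
    "self": {"href": "/posts/7"},
    "comments": {"href": "/posts/7/comments"}
  }
}""")

# The client follows links found in the payload instead of constructing
# URIs from out-of-band documentation.
next_uri = response["_links"]["comments"]["href"]
```

This is why, in principle, a REST client needs only one well-known entry point: every subsequent state transition is described by the messages it receives.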
In November 2004, CMP Media applied to the USPTO for a service mark on the use of the term "WEB 2.0" for live events.[63] On the basis of this application, CMP Media sent a cease-and-desist demand to the Irish non-profit organisation IT@Cork on May 24, 2006,[64] but retracted it two days later.[65] The "WEB 2.0" service mark registration passed final PTO Examining Attorney review on May 10, 2006, and was registered on June 27, 2006.[63] The European Union application (which would confer unambiguous status in Ireland)[66] was declined on May 23, 2007.
Critics of the term claim that "Web 2.0" does not represent a new version of theWorld Wide Webat all, but merely continues to use so-called "Web 1.0" technologies and concepts:[8]
"Nobody really knows what it means... If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along... Web 2.0, for some people, it means moving some of the thinking [to the] client side, so making it more immediate, but the idea of the Web as interaction between people is really what the Web is. That was what it was designed to be... a collaborative space where people can interact."
"The Web is great because that person can't foist anything on you—you have to go get it. They can make themselves available, but if nobody wants to look at their site, that's fine. To be honest, most people who have something to say get published now."[72]
"The task before us is to extend into the digital world the virtues of authenticity, expertise, and scholarly apparatus that have evolved over the 500 years of print, virtues often absent in the manuscript age that preceded print".
Knowledge management (KM) is the set of procedures for producing, disseminating, utilizing, and overseeing an organization's knowledge and data. It refers to a multidisciplinary strategy that maximizes knowledge utilization to accomplish organizational goals. Knowledge management has been an established discipline since 1991 and encompasses courses in business administration, information systems, management, libraries, and information science. Other disciplines that may contribute to KM research include information and media, computer science, public health, and public policy. Numerous academic institutions offer master's degrees specifically focused on knowledge management.
As a component of their IT, human resource management, or business strategy departments, many large corporations, government agencies, and nonprofit organizations have resources devoted to internal knowledge management initiatives. These organizations receive KM guidance from a number of consulting firms. Organizational goals including enhanced performance, competitive advantage, innovation, sharing of lessons learned, integration, and ongoing organizational improvement are usually the focus of knowledge management initiatives. These initiatives are similar to organizational learning, but they can be differentiated by their increased emphasis on knowledge management as a strategic asset and information sharing. Organizational learning is facilitated by knowledge management.
The setting of a supply chain may be the most challenging situation for knowledge management, since it involves several businesses without a hierarchy or ownership tie; some authors refer to this type of knowledge as transorganizational or interorganizational knowledge. Industry 4.0 (or the 4th industrial revolution) and digital transformation also add to that complexity, as new issues arise from the volume and speed of information flows and knowledge generation.
Knowledge management efforts have a long history, including on-the-job discussions, formal apprenticeship, discussion forums, corporate libraries, professional training, and mentoring programs.[1][2] With increased use of computers in the second half of the 20th century, specific adaptations of technologies such as knowledge bases, expert systems, information repositories, group decision support systems, intranets, and computer-supported cooperative work have been introduced to further enhance such efforts.[1]
In 1999, the termpersonal knowledge managementwas introduced; it refers to the management of knowledge at the individual level.[3]
In the enterprise, early collections of case studies recognised the importance of knowledge management dimensions of strategy,processandmeasurement.[4][5]Key lessons learned include people and the cultural norms which influence their behaviors are the most critical resources for successful knowledge creation, dissemination and application; cognitive, social and organisational learning processes are essential to the success of a knowledge management strategy; and measurement,benchmarkingand incentives are essential to accelerate the learning process and to drive cultural change.[5]In short, knowledge management programs can yield impressive benefits to individuals and organisations if they are purposeful, concrete and action-orientated.
TheISO 9001:2015 quality management standardreleased in September 2015 introduced a specification for 'organizational knowledge' as a complementary aspect of quality management within an organisation.[6]
KM emerged as a scientific discipline in the early 1990s.[7] It was initially supported by individual practitioners, such as when Skandia hired Leif Edvinsson of Sweden as the world's first chief knowledge officer (CKO).[8] Hubert Saint-Onge (formerly of CIBC, Canada) had started investigating KM long before that.[1] The objective of CKOs is to manage and maximise the intangible assets of their organizations.[1] Gradually, CKOs became interested in practical and theoretical aspects of KM, and the new research field was formed.[9] The KM idea has been taken up by academics such as Ikujiro Nonaka (Hitotsubashi University), Hirotaka Takeuchi (Hitotsubashi University), Thomas H. Davenport (Babson College) and Baruch Lev (New York University).[10][11]
In 2001, Thomas A. Stewart, former editor at Fortune magazine and subsequently the editor of Harvard Business Review, published a cover story highlighting the importance of intellectual capital in organizations.[12] The KM discipline has been gradually moving towards academic maturity.[1] First, there is a trend toward greater cooperation among academics; single-author publications are less common. Second, the role of practitioners has changed.[9] Their contribution to academic research declined from 30% of overall contributions up to 2002 to only 10% by 2009.[13] Third, the number of academic knowledge management journals has been steadily growing, currently reaching 27 outlets.[14][15]
Multiple KM disciplines exist; approaches vary by author and school.[9][16]As the discipline matured, academic debates increased regardingtheoryand practice, including:
Regardless of theschool of thought, core components of KM roughly include people/culture, processes/structure and technology. The details depend on theperspective.[22]KM perspectives include:
The practical relevance of academic research in KM has been questioned[29]withaction researchsuggested as having more relevance[30]and the need to translate the findings presented in academic journals to a practice.[4]
Different frameworks for distinguishing between different 'types of' knowledge exist.[2] One proposed framework for categorising the dimensions of knowledge distinguishes tacit knowledge and explicit knowledge.[26] Tacit knowledge represents internalised knowledge that an individual may not be consciously aware of, such as how to accomplish particular tasks. At the opposite end of the spectrum, explicit knowledge represents knowledge that the individual holds consciously in mental focus, in a form that can easily be communicated to others.[9][31]
Ikujiro Nonaka proposed a model (SECI, for Socialisation, Externalisation, Combination, Internalisation) which considers a spiraling interaction between explicit knowledge and tacit knowledge.[32] In this model, knowledge follows a cycle in which implicit knowledge is 'extracted' to become explicit knowledge, and explicit knowledge is 're-internalised' into implicit knowledge.[32]
Hayes and Walsham (2003) describe knowledge and knowledge management from two different perspectives.[33] The content perspective suggests that knowledge is easily stored because it may be codified, while the relational perspective recognises the contextual and relational aspects of knowledge, which can make knowledge difficult to share outside the specific context in which it is developed.[33]
Early research suggested that KM needs to convert internalised tacit knowledge into explicit knowledge to share it, and the same effort must permit individuals to internalise and make personally meaningful any codified knowledge retrieved from the KM effort.[19][34]
Subsequent research suggested that a distinction between tacit knowledge and explicit knowledge represented an oversimplification and that the notion of explicit knowledge is self-contradictory.[3]Specifically, for knowledge to be made explicit, it must be translated into information (i.e.,symbolsoutside our heads).[3][35]More recently, together withGeorg von KroghandSven Voelpel, Nonaka returned to his earlier work in an attempt to move the debate about knowledge conversion forward.[36][37]
A second proposed framework for categorising knowledge dimensions distinguishes embedded knowledge of asystemoutside a human individual (e.g., an information system may have knowledge embedded into its design) fromembodied knowledgerepresenting a learned capability of a human body'snervousandendocrine systems.[38]
A third proposed framework distinguishes between the exploratory creation of "new knowledge" (i.e., innovation) vs. thetransferor exploitation of "established knowledge" within a group, organisation, or community.[33][39]Collaborative environments such as communities of practice or the use ofsocial computingtools can be used for both knowledge creation and transfer.[39]
Knowledge may be accessed at three stages: before, during, or after KM-related activities.[25]Organisations have tried knowledge captureincentives, including making content submission mandatory and incorporating rewards intoperformance measurementplans.[40]Considerable controversy exists over whether such incentives work and no consensus has emerged.[41]
One strategy to KM involves actively managing knowledge (push strategy).[41][42] In such an instance, individuals strive to explicitly encode their knowledge into a shared knowledge repository, such as a database, as well as retrieving knowledge they need that other individuals have provided (codification).[42] Another strategy involves individuals making knowledge requests of experts associated with a particular subject on an ad hoc basis (pull strategy).[41][42] In such an instance, expert individuals provide insights to the requester (personalisation).[26] In strategic knowledge management, the form of the knowledge and the activities used to share it define the choice between codification and personalisation.[43] The form of the knowledge means that it is either tacit or explicit. Data and information can be considered explicit, and know-how can be considered tacit.[44]
Hansen et al. defined the two strategies (codification and personalisation).[45] Codification is a system-oriented method in KM strategy for managing explicit knowledge in line with organizational objectives.[46] It is a document-centered strategy in which knowledge is mainly codified using a "people-to-document" approach. Codification relies on information infrastructure, where explicit knowledge is carefully codified and stored.[45] Codification focuses on collecting and storing codified knowledge in electronic databases to make it accessible.[47] Codification can therefore refer to both tacit and explicit knowledge.[48] In contrast, personalisation encourages individuals to share their knowledge directly.[47] Personalisation is a human-oriented KM strategy in which the goal is to improve knowledge flows through networking and the integration of tacit knowledge, together with knowledge sharing and creation.[46] Information technology plays a less important role, as it only facilitates communication and knowledge sharing.
Generic knowledge strategies includeknowledge acquisitionstrategy, knowledge exploitation strategy, knowledge exploration strategy, andknowledge sharingstrategy. These strategies aim at helping organisations to increase their knowledge andcompetitive advantage.[49]
Other knowledge management strategies and instruments for companies include:[41][20][26]
Multiple motivations lead organisations to undertake KM.[31]Typical considerations include:[26][53]
Knowledge management (KM) technology can be categorised:
These categories overlap. Workflow, for example, is a significant aspect of content or document management systems, most of which have tools for developing enterprise portals.[41][55]
Proprietary KM technology products such as HCL Notes (previously Lotus Notes) defined proprietary formats for email, documents, forms, etc. The Internet drove most vendors to adopt Internet formats. Open-source and freeware tools for the creation of blogs and wikis now enable capabilities that used to require expensive commercial tools.[30][56]
KM is driving the adoption of tools that enable organisations to work at the semantic level,[57] as part of the Semantic Web.[58] Some commentators have argued that after many years the Semantic Web has failed to see widespread adoption,[59][60][61] while others have argued that it has been a success.[62]
Just like knowledge transfer and knowledge sharing, the term "knowledge barriers" is not uniformly defined and differs in meaning depending on the author.[63] Knowledge barriers can be associated with high costs for both companies and individuals.[64][65][66] Knowledge barriers appear to have been used from at least three different perspectives in the literature:[63]
1) Missing knowledge about something as a result of barriers to the sharing or transfer of knowledge.
2) Insufficient knowledge based on the amount of education in a certain field or issue.
3) An individual's or group's perceptual system lacks adequate contact points, or cannot fit incoming information into a form it can use and transform into knowledge.
Knowledge retention is part of knowledge management. It helps convert tacit knowledge into an explicit form. It is a complex process which aims to reduce knowledge loss in the organization.[67] Knowledge retention is needed when expert knowledge workers leave the organization after a long career.[68] Retaining knowledge prevents losing intellectual capital.[69]
According to DeLong (2004),[70] knowledge retention strategies are divided into four main categories:
Knowledge retention projects are usually introduced in three stages: decision making, planning, and implementation. Researchers differ on the terms for these stages. For example, Dalkir talks about knowledge capture, sharing, and acquisition, while Doan et al. introduce initiation, implementation, and evaluation.[71][72] Furthermore, Levy introduces three steps (scope, transfer, integration) but also recognizes a "zero stage" for the initiation of the project.[68]
A knowledge audit is a comprehensive assessment of an organization's knowledge assets, including its explicit and tacit knowledge, intellectual capital, expertise, and skills. The goal of a knowledge audit is to identify the organization's knowledge strengths and gaps, and to develop strategies for leveraging knowledge to improve performance and competitiveness. A knowledge audit helps ensure that an organization's knowledge management activities are heading in the right direction, and it reduces incorrect decision-making. The term knowledge audit is often used interchangeably with information audit, although an information audit is slightly narrower in scope.[73][74]
The requirement and significance of a knowledge audit can vary widely among industries and companies. For instance, within the software development industry, knowledge audits can play a pivotal role due to the inherently knowledge-intensive nature of the work. This contrasts with sectors like manufacturing, where physical assets take a more important role. The difference arises from the fact that in software development companies, skills, expertise, and intellectual capital often overshadow the value of physical assets.[75]
Knowledge audits provide opportunities for organizations to improve their management of knowledge assets, with the goal of enhancing organizational effectiveness and efficiency. By conducting a knowledge audit, organizations can raise awareness of knowledge assets as primary factors of production and as critical capital assets in today's knowledge economy. The process of a knowledge audit allows organizations to gain a deeper understanding of their knowledge assets. This includes identifying and defining these assets, understanding their behavior and properties, and describing how, when, why, and where they are used in business processes.[75]
Knowledge protection refers to behaviors and actions taken to protect knowledge from unwanted opportunistic behavior, such as appropriation or imitation.[76]
Knowledge protection is used to prevent knowledge from unintentionally becoming available or useful to competitors. Knowledge protection can take the form of, for example, a patent, copyright, trademark, lead time, or secrecy held by a company or an individual.[77]
There are various methods for knowledge protection, often divided into two categories by their formality: formal protection and informal protection.[78][79][80][81] Occasionally a third category is introduced, semi-formal protection, which includes contracts and trade secrets.[80][81][82] These semi-formal methods are also usually placed under formal methods.
Organizations often use a combination of formal and informal knowledge protection methods to achieve comprehensive protection of their knowledge assets.[81]The formal and informal knowledge protection mechanisms are different in nature, and they have their benefits and drawbacks. In many organizations, the challenge is to find a good mix of measures that works for the organization.[79]
Formal knowledge protection practices can take various forms, such as legal instruments or formal procedures and structures, to control which knowledge is shared and which is protected.[78] Formal knowledge protection methods include, for example, patents, trademarks, copyrights, and licensing.[78][80][83]
Technical solutions for protecting knowledge also fall under formal knowledge protection. From a technical viewpoint, formal knowledge protection includes technical access constraints and the protection of communication channels, systems, and storage.[79]
While knowledge may eventually become public in some form or another, formal protection mechanisms are necessary to prevent competitors from directly utilizing it for their own gain.[79]Formal protection methods are particularly effective in protecting established knowledge that can be codified and embodied in final products or services.[83]
Informal knowledge protection methods refer to the use of informal mechanisms, such as human resource management practices or secrecy, to protect knowledge assets. There is a notable amount of knowledge that cannot be protected by formal methods, for which informal protection may be the most efficient option.[84]
Informal knowledge protection methods can take various forms, such as secrecy, social norms and values, complexity, lead time, and human resource management.[78][83][85][84]
Informal knowledge protection methods protect knowledge assets for example by making it difficult for outsiders to access and understand the knowledge within the boundaries of the organization.[85]Informal protection methods are more effective for protecting knowledge that is complex or difficult to express, articulate, or codify.[85][84]
The balance between knowledge sharing and knowledge protection is a critical dilemma faced by organizations today.[86][79] While sharing knowledge can lead to innovation, collaboration, and competitive advantage, protecting knowledge can prevent it from being misused, misappropriated, or lost.[86][79][87] Thus, the need for organizational learning must be balanced with the need to protect organisations' intellectual property, especially whilst cooperating with external partners.[86][88] The role of information security is crucial in helping organisations protect their assets whilst still enabling the benefits of information sharing.[79][87] By implementing effective knowledge management strategies, organizations can protect valuable intellectual property while also encouraging the sharing of relevant knowledge across teams and departments.[86] This active balancing act requires careful consideration of factors such as the level of openness, the identification of core knowledge areas, and the establishment of appropriate mechanisms for knowledge transfer and collaboration.[86] Finding the right balance between knowledge sharing and knowledge protection is a complex issue that requires a nuanced understanding of the trade-offs involved and the context in which knowledge is shared or protected.[86][88]
Protecting knowledge is not without risks. Listed here are four of the major risks associated with knowledge protection:
In conclusion, protecting knowledge is crucial to promote innovation and creativity, but it is not without its risks. Overprotection, misappropriation, infringement claims, and inadequate protection are all risks associated with knowledge protection. Individuals and organizations should take steps to protect their intellectual property while also considering the potential risks and benefits of such protection.
Source: https://en.wikipedia.org/wiki/Knowledge_Management
A virtual community is a social network of individuals who connect through specific social media, potentially crossing geographical and political boundaries in order to pursue mutual interests or goals. Some of the most pervasive virtual communities are online communities operating under social networking services.
Howard Rheingold discussed virtual communities in his book, The Virtual Community, published in 1993. The book's discussion ranges from Rheingold's adventures on The WELL to computer-mediated communication, social groups, and information science. Technologies cited include Usenet, MUDs (Multi-User Dungeon) and their derivatives MUSHes and MOOs, Internet Relay Chat (IRC), chat rooms, and electronic mailing lists. Rheingold also points out the potential benefits, for personal psychological well-being as well as for society at large, of belonging to a virtual community. Research has also shown that job engagement positively influences engagement in virtual communities of practice.[1]
Virtual communities all encourage interaction, sometimes focusing on a particular interest and sometimes simply on communication; some do both. Community members interact over a shared passion through various means: message boards, chat rooms, social networking websites, or virtual worlds.[2] Members often become attached to the community, logging in and out of sites all day every day, which can become an addiction.[3]
The traditional definition of a community is of a geographically circumscribed entity (neighborhoods, villages, etc.). Virtual communities are usually dispersed geographically and are therefore not communities under the original definition. Some online communities are linked geographically and are known as community websites. However, if one considers communities to simply possess boundaries of some sort between their members and non-members, then a virtual community is certainly a community.[4] Virtual communities resemble real-life communities in the sense that both provide support, information, friendship, and acceptance between strangers.[5] While in a virtual community space, users may be expected to feel a sense of belonging and a mutual attachment among the members in the space.
One of the most influential aspects of virtual communities is the opportunity to communicate through several media platforms or networks; virtual communities have largely supplanted earlier means of communication such as postal services, fax machines, and even speaking on the telephone. Early research into the existence of media-based communities was concerned with the nature of reality and whether communities could actually exist through the media, which could place virtual community research within the social sciences' definition of ontology. In the seventeenth century, scholars associated with the Royal Society of London formed a community through the exchange of letters.[4] "Community without propinquity", coined by urban planner Melvin Webber in 1963, and "community liberated", analyzed by Barry Wellman in 1979, began the modern era of thinking about non-local community.[6] As well, Benedict Anderson's Imagined Communities (1983) described how different technologies, such as national newspapers, contributed to the development of national and regional consciousness among early nation-states.[7] Some authors who built their theories on Anderson's imagined communities have been critical of the concept, claiming that all communities are based on communication and that the virtual/real dichotomy is disintegrating, making use of the word "virtual" problematic or even obsolete.[8]
Virtual communities are used for a variety of social and professional groups; interaction between community members vary from personal to purely formal. For example, an email distribution list could serve as a personal means of communicating with family and friends, and also formally to coordinate with coworkers.
User experience is the ultimate goal for the program or software used by an internet community, because user experience will determine the software's success.[9] The software for social media pages or virtual communities is structured around the users' experience and designed specifically for online use.
User experience testing is utilized to reveal something about the personal experience of the human being using a product or system.[10] When it comes to testing user experience in a software interface, three main characteristics are needed: a user who is engaged, a user who is interacting with a product or interface, and a definition of the user's experience in ways that are observable or measurable.[10] User experience metrics are based on reliability and repeatability, using a consistent set of measurements to produce comparable outcomes and to collect data on user experience, including user retention.
The widespread use of the Internet and virtual communities by millions of diverse users for socializing is a phenomenon that raises new issues for researchers and developers. The vast number and diversity of individuals participating in virtual communities worldwide makes it a challenge to test usability across platforms to ensure the best overall user experience. Some well-established measures applied to the usability framework for online communities are speed of learning, productivity, user satisfaction, how much people remember using the software, and how many errors they make.[11] The human-computer interactions that are measured during a usability test focus on the individuals rather than their social interactions in the online community. The success of online communities depends on the integration of usability and social semiotics. Social codes are established and reinforced by the regular repetition of behavioral patterns.[12] People communicate their social identities or culture code through the work they do, the way they talk, the clothes they wear, their eating habits, domestic environments and possessions, and use of leisure time. Usability testing metrics can be used to determine social codes by evaluating a user's habits when interacting with a program, and the information provided during a usability test can help determine demographic factors and define the semiotic social code. Dialogue and social interactions, support information design, navigation support, and accessibility are integral components specific to online communities. As virtual communities grow, so does the diversity of their users, yet the technologies are not made to be any more or less intuitive.
Usability tests can ensure users are communicating effectively using social and semiotic codes while maintaining their social identities.[11] Efficient communication requires a common set of signs in the minds of those seeking to communicate.[12] As technologies evolve and mature, they tend to be used by an increasingly diverse set of users, but this increasing complexity and evolution does not necessarily mean that the technologies are becoming easier to use.[10] Usability testing in virtual communities can ensure users communicate effectively through social and semiotic codes while maintaining social realities and identities.[12]
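The usability measures named above (task speed, errors, satisfaction, completion) must be collected consistently across sessions to produce comparable outcomes. As a minimal sketch, assuming a hypothetical per-session record shape (no real testing tool is implied), the aggregation might look like:

```python
from statistics import mean

def usability_metrics(sessions):
    """Aggregate per-user test sessions into repeatable, comparable metrics.
    Each session is a dict with 'task_seconds', 'errors',
    'satisfaction' (1-5 rating), and 'completed' (bool)."""
    return {
        "avg_task_seconds": mean(s["task_seconds"] for s in sessions),
        "avg_errors": mean(s["errors"] for s in sessions),
        "avg_satisfaction": mean(s["satisfaction"] for s in sessions),
        # Fraction of sessions in which the user finished the task.
        "completion_rate": sum(s["completed"] for s in sessions) / len(sessions),
    }

sessions = [
    {"task_seconds": 42, "errors": 1, "satisfaction": 4, "completed": True},
    {"task_seconds": 58, "errors": 3, "satisfaction": 2, "completed": False},
]
metrics = usability_metrics(sessions)
print(metrics["completion_rate"])  # 0.5
```

Running the same measurement set on every test round is what makes the resulting numbers comparable between rounds.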
Recent studies have looked into the development of health-related communities and their impact on those already suffering health issues. These social networks allow open conversation between individuals who are going through similar experiences, whether themselves or in their family.[13] Such sites have grown so much in popularity that many health care providers now form groups for their patients, providing web areas where one may direct questions to doctors. These sites prove especially useful for rare medical conditions. People with rare or debilitating disorders may not be able to access support groups in their physical community, so online communities act as their primary means of support. Online health communities can serve as supportive outlets, as they facilitate connecting with others who truly understand the disease, and can offer more practical support, such as help in adjusting to life with the disease.[14] Patients in online health communities are there for different reasons: some may need quick answers to questions, others someone to talk to. Involvement in social communities of similar health interests has created a means for patients to develop a better understanding of, and behavior towards, treatment and health practices.[15][16] For users with serious, life-threatening conditions, such personal contexts can be especially helpful, as their issues are complex.[17] Patients increasingly use such outlets, as they provide personalized, emotional support and information that improve the patient experience.[17] The extent to which these practices affect health is still being studied.
Studies on health networks have mostly been conducted on groups which typically suffer the most from extreme forms of diseases, for example cancer patients, HIV patients, or patients with other life-threatening diseases. It is general knowledge that one participates in online communities to interact with society and develop relationships.[18]Individuals who suffer from rare or severe illnesses are unable to meet physically because of distance or because it could be a risk to their health to leave a secure environment. Thus, they have turned to the internet.
Some studies have indicated that virtual communities can provide valuable benefits to their users. Online health-focused communities were shown to offer a unique form of emotional support that differed from event-based realities and informational support networks. Growing amounts of presented material show how online communities affect the health of their users. Apparently the creation of health communities has a positive impact on those who are ill or in need of medical information.[19]
It was found that young individuals are more bored by politics and history, and are instead more interested in celebrity drama and related topics. Young individuals claim that "voicing what you feel" does not mean "being heard", so they choose not to participate in these engagements, believing they are not being listened to anyway.[20] Over the years, things have changed, as new forms of civic engagement and citizenship have emerged from the rise of social networking sites. Networking sites act as a medium for expression and discourse about issues in specific user communities. Online content-sharing sites have made it easy for youth, as well as others, not only to express themselves and their ideas through digital media but also to connect with large networked communities. Within these spaces, young people are pushing the boundaries of traditional forms of engagement, such as voting and joining political organizations, and creating their own ways to discuss, connect, and act in their communities.[21]
Civic engagement throughonline volunteeringhas shown to have positive effects on personal satisfaction and development. Some 84 percent of online volunteers found that their online volunteering experience had contributed to their personal development and learning.[22]
In his book The Wealth of Networks (2006), Yochai Benkler suggests that virtual communities would "come to represent a new form of human communal existence, providing new scope for building a shared experience of human interaction".[23] Although Benkler's prediction has not become entirely true, communications and social relations are clearly extremely complex within a virtual community. The two main effects Benkler identifies are a "thickening of preexisting relations with friends, family and neighbours" and the beginnings of the "emergence of greater scope for limited-purpose, loose relationships".[23] Despite being acknowledged as "loose" relationships, Benkler argues that they remain meaningful.
Previous concerns about the effects of Internet use on community and family fell into two categories: 1) sustained, intimate human relations "are critical to well-functioning human beings as a matter of psychological need" and 2) people with "social capital" are better off than those who lack it, which leads to better results in terms of political participation.[23] However, Benkler argues that unless Internet connections actually displace direct, unmediated human contact, there is no basis to think that using the Internet will lead to a decline in those nourishing connections we need psychologically, or in the useful connections we make socially. Benkler further suggests that the nature of an individual changes over time, based on social practices and expectations: there is a shift from individuals who depend upon locally embedded, unmediated, and stable social relationships to networked individuals who are more dependent upon their own combination of strong and weak ties across boundaries and who weave their own fluid relationships. Manuel Castells calls this the "networked society".[23]
In 1997, MCI Communications released the "Anthem" advertisement, heralding the internet as a utopia without age, race, or gender. Lisa Nakamura argues in chapter 16 of her 2002 book After/image of identity: Gender, Technology, and Identity Politics that technology gives us iterations of our age, race, and gender in virtual spaces, rather than extinguishing them. Nakamura uses the metaphor of "after-images" to describe the cultural phenomenon of expressing identity on the internet: any performance of identity on the internet is simultaneously present and past-tense, "posthuman and projectionary", due to its immortality.[24]
Sherry Turkle, professor of Social Studies of Science and Technology at MIT, believes the internet is a place where acts of discrimination are less likely to occur. In her 1995 book Life on the Screen: Identity in the Age of the Internet, she argues that discrimination is easier in reality, where it is easier to identify, at face value, what is contrary to one's norms. The internet allows for a more fluid expression of identity, and thus people become more accepting of inconsistent personae within themselves and others. For these reasons, Turkle argues, users existing in online spaces are less compelled to judge or compare themselves to their peers, giving people in virtual settings an opportunity to gain a greater capacity for acknowledging diversity.[25]
Nakamura argues against this view, coining the term identity tourism in her 1999 article "Race In/For Cyberspace: Identity Tourism and Racial Passing on the Internet". Identity tourism, in the context of cyberspace, describes the phenomenon of users donning and doffing other-race and other-gender personae. Nakamura finds that the performed behavior of these identity tourists often perpetuates stereotypes.[26]
In the 1998 book Communities in Cyberspace, authors Marc A. Smith and Peter Kollock observe that interactions with strangers are based upon whom we are speaking or interacting with. People use everything from clothes, voice, body language, gestures, and power to identify others, which plays a role in how they speak or interact with them. Smith and Kollock believe that online interaction strips away the face-to-face gestures and signs that people tend to show in front of one another. Although this makes identification difficult online, it also provides space to play with one's identity.[27]
The gaming community is extremely vast and accessible to a wide variety of people. However, there are negative effects on the relationships "gamers" have with the medium when expressing gender identity. Adrienne Shaw notes in her 2012 article "Do you identify as a gamer? Gender, race, sexuality, and gamer identity" that gender, perhaps subconsciously, plays a large role in identifying oneself as a "gamer".[28] According to Lisa Nakamura, representation in video games has become a problem, as the minority of players from different backgrounds, who do not fit the stereotype of the white teenage male gamer, are not represented.[29]
The explosive diffusion[30] of the Internet since the mid-1990s fostered the proliferation of virtual communities in the form of social networking services and online communities. Virtual communities may synthesize Web 2.0 technologies with the community, and have therefore been described as Community 2.0, although strong community bonds have been forged online since the early 1970s on timeshare systems like PLATO and later on Usenet. Online communities depend upon social interaction and exchange between users online. This interaction emphasizes the reciprocity element of the unwritten social contract between community members.
An online message board is a forum where people can discuss thoughts or ideas on various topics or simply express an idea. Users may choose which thread, or board of discussion, they would like to read or contribute to. A user will start a discussion by making a post.[31] Other users who choose to respond can follow the discussion by adding their own posts to that thread at any time. Unlike in spoken conversations, message boards do not usually have instantaneous responses; users actively go to the website to check for responses.
Anyone can register to participate in an online message board. People can choose to participate in the virtual community, even if or when they choose not to contribute their thoughts and ideas. Unlike chat rooms, at least in practice, message boards can accommodate an almost infinite number of users.
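The thread-and-post structure described above can be sketched in a few lines. This is a minimal illustration (the class and field names are invented for this example, not taken from any real forum software): a user starts a thread with a first post, and other users append replies at any time.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class Thread:
    title: str
    posts: list = field(default_factory=list)

    def reply(self, author, text):
        # Replies are simply appended in order; readers come back
        # later to check for responses, as there is no live delivery.
        self.posts.append(Post(author, text))

board = []  # a board is just a collection of threads
t = Thread("Favourite MUDs?")
t.reply("alice", "Starting a thread on classic MUDs.")
t.reply("bob", "I still log into an old MOO now and then.")
board.append(t)
print(len(t.posts))  # 2
```

Because posts accumulate in a list rather than being pushed to connected users, the board naturally accommodates any number of readers, unlike a chat room.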
Internet users' urges to talk to and reach out to strangers online is unlike those in real-life encounters where people are hesitant and often unwilling to step in to help strangers. Studies have shown that people are more likely to intervene when they are the only one in a situation. With Internet message boards, users at their computers are alone, which might contribute to their willingness to reach out. Another possible explanation is that people can withdraw from a situation much more easily online than off. They can simply click exit or log off, whereas they would have to find a physical exit and deal with the repercussions of trying to leave a situation in real life. The lack of status that is presented with an online identity also might encourage people, because, if one chooses to keep it private, there is no associated label of gender, age, ethnicity or lifestyle.[32]
Shortly after the rise of interest in message boards and forums, people started to want a way of communicating with their "communities" in real time. The downside to message boards was that people would have to wait until another user replied to their posting, which, with people all around the world in different time frames, could take a while. The development of onlinechat roomsallowed people to talk to whoever was online at the same time they were. This way, messages were sent and online users could immediately respond.
The original development by CompuServe CB hosted forty channels in which users could talk to one another in real time. The idea of forty different channels led to chat rooms that were specific to different topics. Users could choose to join an existing chat room they found interesting, or start a new "room" if they found nothing to their liking. Real-time chatting was also brought into virtual games, where people could play against one another and also talk to one another through text. Now, chat rooms can be found on all sorts of topics, so that people can talk with others who share similar interests. Chat rooms are now provided by Internet Relay Chat (IRC) and other individual websites such as Yahoo, MSN, and AOL.
Chat room users communicate through text-based messaging. Most chat room providers are similar and include an input box, a message window, and a participant list. The input box is where users can type their text-based message to be sent to the providing server. The server will then transmit the message to the computers of anyone in the chat room so that it can be displayed in the message window. The message window allows the conversation to be tracked and usually places a time stamp once the message is posted. There is usually a list of the users who are currently in the room, so that people can see who is in their virtual community.
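The flow described above can be sketched in a short, in-memory model (not a real network server; the names are illustrative): a message entered in the input box is sent to the server, timestamped, and transmitted to every participant's message window.

```python
import datetime

class ChatRoom:
    def __init__(self):
        # Maps each participant's name to their message window
        # (here modelled as a list of displayed lines).
        self.participants = {}

    def join(self, name):
        self.participants[name] = []

    def send(self, sender, text):
        # The server stamps the message and transmits it to everyone
        # currently in the room, including the sender.
        stamp = datetime.datetime.now().strftime("%H:%M")
        line = f"[{stamp}] {sender}: {text}"
        for window in self.participants.values():
            window.append(line)

room = ChatRoom()
room.join("alice")
room.join("bob")
room.send("alice", "hello, room")
print(room.participants["bob"][-1])
```

The participant dictionary doubles as the user list shown beside the message window, so everyone can see who is currently in the room.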
Users can communicate as if they are speaking to one another in real life. This "simulated reality" attribute makes it easy for users to form a virtual community, because chat rooms allow users to get to know one another as if they were meeting in real life. The individual "room" feature also makes it more likely that the people within a chat room share a similar interest; an interest that allows them to bond with one another and be willing to form a friendship.[33][34]
Virtual worlds are the most interactive of all virtual community forms. In this type of virtual community, people are connected by living as an avatar in a computer-based world. Users create their own avatar character (from choosing the avatar's outfits to designing the avatar's house) and control their character's life and interactions with other characters in the 3D virtual world. It is similar to a computer game; however, there is no objective for the players. A virtual world simply gives users the opportunity to build and operate a fantasy life in the virtual realm. Characters within the world can talk to one another and have almost the same interactions people would have in reality. For example, characters can socialize with one another and hold intimate relationships online.
This type of virtual community allows for people to not only hold conversations with others in real time, but also to engage and interact with others. The avatars that users create are like humans. Users can choose to make avatars like themselves, or take on an entirely different personality than them. When characters interact with other characters, they can get to know one another through text-based talking and virtual experience (such as having avatars go on a date in the virtual world). A virtual community chat room may give real-time conversations, but people can only talk to one another. In a virtual world, characters can do activities together, just like friends could do in reality. Communities in virtual worlds are most similar to real-life communities because the characters are physically in the same place, even if the users who are operating the characters are not.[35]Second Lifeis one of the most popular virtual worlds on the Internet.Whyvilleoffers an alternative for younger audiences where safety and privacy are a concern. In Whyville, players use the virtual world's simulation aspect to experiment and learn about various phenomena.
Another use for virtual worlds has been in business communications. Benefits from virtual world technology, such as photorealistic avatars and positional sound, create an atmosphere for participants that provides a less fatiguing sense of presence, while enterprise controls that allow the meeting host to dictate the permissions of the attendees, such as who can speak or who can move about, allow the host to control the meeting environment. Zoom, a platform that grew in popularity over the COVID-19 pandemic, lets meeting hosts dictate who can or cannot speak by muting or unmuting them, along with who is able to join. Several companies are creating business-based virtual worlds, including Second Life. These business-based worlds have stricter controls and allow functionality such as muting individual participants, desktop sharing, or access lists to provide a highly interactive and controlled virtual world to a specific business or group. Business-based virtual worlds may also provide various enterprise features such as single sign-on with third-party providers, or content encryption.[citation needed]
Social networking services are the most prominent type of virtual community. They are websites or software platforms that focus on creating and maintaining relationships. Facebook, Twitter, and Instagram are all virtual communities. On these sites, one often creates a profile or account and adds or follows friends. This allows people to connect and look for support, using the social networking service as a gathering place. These websites often allow people to keep up to date with their friends' and acquaintances' activities without making much of an effort.[36] Several of these sites also support video chat with several people at once, making the connections feel more as though the participants are together. On Facebook, for example, one can upload photos and videos, chat, make friends, reconnect with old ones, and join groups or causes.[37]
Participatory culture plays a large role in online and virtual communities. In participatory culture, users feel that their contributions are important and that by contributing, they are forming meaningful connections with other users. The differences between being a producer of content on the website and being a consumer on the website become blurred and overlap. According to Henry Jenkins, "Members believe their contributions matter and feel some degree of social connection with one another" (Jenkins, et al. 2005). The exchange and consumption of information requires a degree of "digital literacy", such that users are able to "archive, annotate, appropriate, transform and recirculate media content" (Jenkins). Specialized information communities centralize a specific group of users who are all interested in the same topic. For example, TasteofHome.com, the website of the magazine Taste of Home, is a specialized information community that focuses on baking and cooking. The users contribute consumer information relating to their hobby and additionally participate in further specialized groups and forums. Specialized information communities are a place where people with similar interests can discuss and share their experiences and interests.
Howard Rheingold's Virtual Community could be compared with Mark Granovetter's ground-breaking "strength of weak ties" article published twenty years earlier in the American Journal of Sociology. Rheingold translated, practiced and published Granovetter's conjectures about strong and weak ties in the online world. His comment on the first page even illustrates the social networks in the virtual society: "My seven year old daughter knows that her father congregates with a family of invisible friends who seem to gather in his computer. Sometimes he talks to them, even if nobody else can see them. And she knows that these invisible friends sometimes show up in the flesh, materializing from the next block or the other side of the world" (page 1). Indeed, in his revised version of Virtual Community, Rheingold goes so far as to say that had he read Barry Wellman's work earlier, he would have called his book "online social networks".
Rheingold's definition contains the terms "social aggregation and personal relationships" (page 3). Lipnack and Stamps (1997)[38] and Mowshowitz (1997) point out how virtual communities can work across space, time and organizational boundaries; Lipnack and Stamps (1997)[38] mention a common purpose; and Lee, Eom, Jung and Kim (2004) introduce "desocialization", meaning less frequent interaction with humans in traditional settings, e.g. an increase in virtual socialization. Calhoun (1991) presents a dystopian argument, asserting the impersonality of virtual networks. He argues that IT has a negative influence on offline interaction between individuals because virtual life takes over our lives. He believes that it also creates different personalities in people, which can cause frictions in offline and online communities and groups and in personal contacts (Wellman & Haythornthwaite, 2002). Recently, Mitch Parsell (2008) has suggested that virtual communities, particularly those that leverage Web 2.0 resources, can be pernicious by leading to attitude polarization, increased prejudices and enabling sick individuals to deliberately indulge in their diseases.[39]
Internet communities offer the advantage of instant information exchange that is not possible in a real-life community. This interaction allows people to engage in many activities from their home, such as shopping, paying bills, and searching for specific information. Users of online communities also have access to thousands of specific discussion groups where they can form specialized relationships and access information in categories such as politics, technical assistance, social activities, health (see above) and recreational pleasures. Virtual communities provide an ideal medium for these types of relationships because information can easily be posted and response times can be very fast. Another benefit is that these types of communities can give users a feeling of membership and belonging. Users can give and receive support, and it is simple and cheap to use.[40]
Economically, virtual communities can be commercially successful, making money through membership fees, subscriptions, usage fees, and advertising commission. Consumers generally feel very comfortable making transactions online provided that the seller has a good reputation throughout the community. Virtual communities also provide the advantage of disintermediation in commercial transactions, which eliminates vendors and connects buyers directly to suppliers. Disintermediation eliminates pricey mark-ups and allows for a more direct line of contact between the consumer and the manufacturer.[41]
While instant communication means fast access, it also means that information is posted without being reviewed for correctness. It is difficult to choose reliable sources because there is no editor who reviews each post and makes sure it is up to a certain degree of quality.[42]
In theory, online identities can be kept anonymous, which enables people to use the virtual community for fantasy role playing, as in the case of Second Life's use of avatars. Some professionals urge caution for users of online communities because predators also frequent these communities, looking for victims who are vulnerable to online identity theft or online predators.[43]
There are also issues surrounding bullying in internet communities. Because users do not have to show their faces, people may engage in threatening and discriminatory behavior towards others, feeling that they will not face any consequences.[44]
There are also standing issues with gender and race in online communities, where only the majority is represented on the screen and those of different backgrounds and genders are underrepresented.[29]
Online participation is used to describe the interaction between users and online communities on the web. Online communities often rely on members to provide content to the website or to contribute in some way. Examples include wikis, blogs, online multiplayer games, and other types of social platforms. Online participation is currently a heavily researched field. It provides insight into fields such as web design, online marketing, crowdsourcing, and many areas of psychology. Some subcategories that fall under online participation are: commitment to online communities, coordination and interaction, and member recruitment.
Some key examples of online knowledge-sharing infrastructures include the following:
In the past, important online knowledge-sharing infrastructures included:
Many online communities (e.g. blogs, chat rooms, electronic mailing lists, Internet forums, imageboards, wikis) are not only knowledge-sharing resources but also fads. Studies have shown that committed members of online communities have reasons to remain active. As long as members feel the need to contribute, there is a mutual dependence between the community and the member.
Although many researchers have come up with several motivational factors behind online contribution, these theories can all be categorized under intrinsic and extrinsic motivations. Intrinsic motivation refers to an action that is driven by personal interests and internal emotions in the task itself, while extrinsic motivation refers to an action that is influenced by external factors, often for a certain outcome, reward or recognition. The two types of motivation contradict each other but often go hand-in-hand in cases where continual contribution is observed.
Several motivational factors lead people to continue their participation in these online communities and remain loyal. Peter Kollock researched motivations for contributing to online communities. Kollock (1999, p. 227) outlines three motivations that do not rely on altruistic behavior on the part of the contributor: anticipated reciprocity; increased recognition; and sense of efficacy. Another motivation, which Marc Smith mentions in his 1992 thesis Voices from the WELL: The Logic of the Virtual Commons, is "communion"—a "sense of community" as it is referred to in social psychology. Put simply, it is made by people for the people.
A person is motivated to contribute valuable information to the group in the expectation that one will receive useful help and information in return. Indeed, there is evidence that active participants in online communities get responses to questions faster than unknown participants.[5] The higher the expectation of reciprocity, the greater the chance of high knowledge contribution intent in an online community. Reciprocity represents a sense of fairness: individuals usually reciprocate the positive feedback they receive from others so that they can in return get more useful knowledge from others in the future.
Research has shown that self-esteem needs of recognition from others lead to expectations of reciprocity.[6] Self-esteem plays such an important role in the need for reciprocity because contributing to online communities can be an ego booster for many types of users. The more positive feedback contributors get from other members of their community, the closer they may feel to being considered an expert in the knowledge they are sharing. Because of this, contributing to online communities can lead to a sense of self-value and respect, based on the level of positive feedback reciprocated from the community.
A study on participation in eBay's reputation system demonstrated that the expectation of reciprocal behavior from partners increases participation from self-interested eBay buyers and sellers. Standard economic theory predicts that people are not inclined to contribute voluntarily to the provision of such public goods but, rather, tend to free ride on the contributions of others.[7] Nevertheless, empirical results from eBay show that buyers submit ratings for more than 50% of transactions.[8][9] The main takeaways were that experienced users tend to rate more frequently, and that leaving comments is motivated not by pure altruism targeted towards the specific transaction partner, but by self-interest, reciprocity and the "warm glow" feeling of contribution.
Some theories support altruism as being a key motivator in online participation and reciprocity. Although evidence from sociology, economics, political science, and social psychology shows that altruism is part of human nature, recent research reveals that the pure altruism model lacks predictive power in many situations. Several authors have proposed combining a "joy-of-giving" (sometimes also referred to as "warm glow") motive with altruism to create a model of impure altruism.[10][11] Different from altruism, reciprocity represents a pattern of behavior where people respond to friendly or hostile actions with similar actions even if no material gains are expected.[12]
Voluntary participation in online feedback mechanisms seems to be largely motivated by self-interest. Because their reputation is on the line, the eBay study showed that some partners using eBay's feedback mechanism had selfish motivations to rate others. For example, data showed that some eBay users exhibited reciprocity towards partners who rated them first. This caused them to rate partners only in the hope of increasing the probability of eliciting a reciprocal response.[13]
Recognition is important to online contributors: in general, individuals want recognition for their contributions. Some have called this egoboo. Kollock outlines the importance of reputation online: "Rheingold (1993) in his discussion of the WELL (an early online community) lists the desire for prestige as one of the key motivations of individuals' contributions to the group. To the extent this is the concern of an individual, contributions will likely be increased to the degree that the contribution is visible to the community as a whole and to the extent there is some recognition of the person's contributions. ... the powerful effects of seemingly trivial markers of recognition (e.g. being designated as an 'official helper') has been commented on in a number of online communities..."
One of the key ingredients of encouraging a reputation is to allow contributors to be known, rather than anonymous. The following example, from Meyers' (1989) study of the computer underground, illustrates the power of reputation. When involved in illegal activities, computer hackers must protect their personal identities with pseudonyms. If hackers use the same nicknames repeatedly, this can help the authorities to trace them. Nevertheless, hackers are reluctant to change their pseudonyms regularly because the status and fame associated with a particular nickname would be lost.
On the importance of online identity: profiles and reputation are clearly evident in online communities today. Amazon.com is a case in point: all contributors are allowed to create profiles about themselves, and as their contributions are measured by the community, their reputation increases. Myspace.com encourages elaborate profiles for members where they can share all kinds of information about themselves, including what music they like, their heroes, etc. Displaying photos and information about individual members and their recent activities on social networking websites can promote bonds-based commitment. Because social interaction is the primary basis for building and maintaining social bonds, we can gain appreciation for other users once we interact with them.[14] This appreciation turns into increased recognition for the contributors, which in turn gives them the incentive to contribute more.
In addition to this, many communities give incentives for contributing. For example, many forums award members points for posting, which members can spend in a virtual store. eBay is an example of an online marketplace where reputation is very important, because it is used to measure the trustworthiness of someone you may potentially do business with. This type of community is known as a reputation system: a type of collaborative filtering algorithm that attempts to collect, distribute, and aggregate ratings about all users' past behavior within an online community, in an effort to strike a balance between the democratic principles of open publishing and maintaining standards of quality.[15] These systems, like eBay's, promote the idea of trust that relates to expectations of reciprocity, which can help increase the sense of reputation for each member. On eBay, you have the opportunity to rate your experience with someone and they, likewise, can rate you. This has an effect on the reputation score.
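At its simplest, a reputation system of this kind reduces to aggregating per-user feedback ratings into a summary score. The sketch below is illustrative only: the function name, the +1/-1 rating scale, and the percent-positive scoring rule are assumptions for the example, not eBay's actual algorithm.

```python
# Minimal sketch of a reputation system: aggregate feedback ratings
# into per-user (positive, negative, percent-positive) summaries.
from collections import defaultdict

def reputation_scores(feedback):
    """feedback: list of (rated_user, rating) pairs, rating is +1 or -1.
    Returns {user: (positives, negatives, percent_positive)}."""
    pos = defaultdict(int)
    neg = defaultdict(int)
    for user, rating in feedback:
        if rating > 0:
            pos[user] += 1
        else:
            neg[user] += 1
    scores = {}
    for user in set(pos) | set(neg):
        p, n = pos[user], neg[user]
        scores[user] = (p, n, round(100 * p / (p + n), 1))
    return scores

ratings = [("alice", +1), ("alice", +1), ("alice", -1), ("bob", +1)]
print(reputation_scores(ratings))
# alice: 2 positive, 1 negative -> 66.7% positive; bob -> 100.0%
```

A real system would additionally weight ratings by rater credibility and recency, which is where the collaborative-filtering aspect mentioned above comes in.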
The participants may therefore be encouraged to manage their online identity in order to make a good impression on the other members of the community.
Other successful online communities have reputation systems that do not provide any concrete incentive. For example, Reddit is an online social content-aggregation community which serves as a "front page of the Internet" and allows its users to submit content (e.g. text, photos, links, news articles, blog posts, music or videos) under sometimes ambiguous usernames. It features a reputation system by which users can rate the quality of submissions and comments. The total vote count of a user's submissions is not of any practical value; however, when users feel that their content is generally appreciated by the rest of the Reddit community (or its sub-communities, called "subreddits"), they may be motivated to contribute more.
Individuals may contribute valuable information because the act results in a sense of efficacy, that is, a sense that they are capable of achieving their desired outcome and have some effect on their environment. There is a well-developed research literature that has shown how important a person's sense of efficacy is (e.g. Bandura 1995). Studies have shown that increasing the user's sense of efficacy boosts their intrinsic motivation and therefore makes them more likely to stay in an online community. According to Wang and Fesenmaier's research, efficacy is the biggest factor affecting active contribution online. Of the many sub-factors, it was discovered that "satisfying other members' needs" is the biggest reason behind the increase of efficacy in a member, followed by "being helpful to others" (Wang and Fesenmaier).[16] Features such as task progress bars and attempts to reduce the difficulty of completing a general task can easily enhance the feeling of self-worth in the community. "Creating immersive experiences with clear goals, feedback and challenge that exercise peoples' skills to the limits but still leave them in control causes the experiences to be intrinsically interesting. Positive but constructive and sincere feedbacks also produce similar effects and increase motivation to complete more tasks. A competitive setting—which may or may not have been intended to be competitive—can also increase a person's self-esteem if quality performance is assumed" (Kraut 2012).[17]
People, in general, are social beings and are motivated by receiving direct responses to their contributions. Most online communities enable this by allowing people to reply back to others' contributions (e.g. many blogs allow comments from readers, one can reply back to forum posts, etc.). Granted, there is some overlap between improving one's reputation and gaining a sense of community, and it seems safe to say that there are also some overlapping areas between all four motivators.
While some people are active contributors to online discussion, others join virtual communities and do not actively participate, a concept referred to as lurking (Preece 2009). There are several reasons why people choose not to participate online. For instance, users may get the information they wanted without actively participating, think they are helpful by not posting, want to learn more about the community before becoming an active member, be unable to use the software provided, or dislike the dynamics they observe within the group (Preece, Nonnecke & Andrews 2004). When online communities have lurking members, the amount of participation within the group decreases and the sense of community for these lurking members also diminishes. Online participation increases the sense of community for all members, as well as gives them a motivation to continue participating.
Other problems regarding a sense of community arise when the online community attempts to attract and retain newcomers. These problems include difficulty of recruiting newcomers, making them stay committed early on, and controlling their possible inappropriate behavior. If an online community is able to solve these problems with their newcomers, then they can increase the sense of community within the entire group. A sense of community is also heightened in online communities when each person has a willingness to participate due to intrinsic and extrinsic motivations. Findings also show that newcomers may be unaware that an online social networking website even has a community. As these users build their own profiles and get used to the culture of the group over time, they eventually self-identify with the community and develop a sense of belonging to the community.
Another motivation for participation may also come from self-expression, through what is being shared in or created for online communities.
Self-discovery may be another motivation,[18] as many online communities allow for feedback on personal beliefs, artistic creations, ideas and the like, which may provide grounds to develop new perspectives on the self.
Depending on the online platform, shared content can be perceived by millions around the world, which gives participants a certain influence that can itself serve as a motivation for participation. Additionally, high participation may provide a user with special rights within a community (such as modship), which can be built into the technical platform, granted by the community (e.g. via voting), or granted by certain users.
Online participation may be motivated by an instrumental purpose, such as providing specific information.[18]
The entertainment of playing or otherwise interacting with other users may be a major motivation for participants in certain communities.[18]
Users of social networks have various reasons that motivate them to join particular networks. In general, "communication technologies open up new pathways between individuals who would not otherwise connect".[19] The ability to have synchronous communication arrived with the development of online social networks. Facebook is one example of an online social network that people choose to openly participate in. Although there are a number of different social networking platforms available, there exists a large community of people who choose to actively engage on Facebook. Although Facebook is commonly known as a method of communication, there are a variety of reasons why users prefer it over other platforms as their social networking platform. For some users, interactivity between themselves and other users is a matter of fidelity.[20]
For many, it is important to maintain a sense of community. Through participation in online social networks, it becomes easier for users to find and communicate with people within their community. Facebook often makes friend recommendations based on the geography of the user.[21] This allows users to quickly connect with people in their area whom they may not see often, and stay in contact with them. For students, Facebook is an effective network to participate in when building and maintaining social capital.[22] By adding family, friends, acquaintances, and colleagues who use the network, students can expand their social capital. The online connections they make can prove to be of benefit later on. Due to the competitive nature of the job market, "[i]t is particularly important for university students to build social capital with the industry".[22] Since Facebook has a large number of active users, it is easier for students to find out about job opportunities through their friends online.
Facebook's interface allows users to share content, such as status updates, photos and links, and to keep in contact with people they may not be able to see on a day-to-day basis. The messenger application allows friends to hold conversations privately, out of view of their other friends. Users can also create groups and events through Facebook in order to share information with specific people on the network. "Facebook encourages users to engage in self-promoting".[23] Facebook allows users to engage in self-promotion in a positive way; it allows friends to like and/or comment on posts and statuses.
Facebook users are also able to "follow" people whom they may not be friends with, such as public figures, companies, or celebrities. This allows users to keep up to date with things that interest them like music, sports, and promotions from their favorite companies, and share them with their Facebook friends.
Aside from features such as email, the photo album, and status updates, Facebook provides various additional features which help to individualize each user's experience.[23] While some social networks have a fixed interface that users cannot tailor to their specific interests, Facebook allows users to control certain preferences. Users can use "add-in functions (e.g., virtual pets, online games, the wall, virtual gifts) that facilitate users to customize their own interface on Facebook".[23]
Studies have found that the nature and the level of participation in online social networking sites are directly correlated with the personality of the participants. The Department of Psychology at the University of Windsor cites its findings regarding this correlation in the articles "Personality and motivations associated with Facebook use" and "The Influence of Shyness on the Use of Facebook in an Undergraduate Sample". The articles state that people who have high levels of anxiety, stress, or shyness are more likely to favor socializing through the Internet than in-person socialization. The reason for this is that they are able to communicate with others without being face-to-face, and media such as chat rooms give a sense of anonymity which makes them feel more comfortable when participating in discussions with others.
Studies also show that in order to increase online participation, contributors must feel unique, useful, and be given challenging and specific goals. These findings fall in line with the social psychology theories of social loafing and goal setting. Social loafing claims that when people are involved in a group setting, they tend not to contribute as much and depend on the work of others. Goal setting is the theory stating that people will work harder if given a specific goal rather than a broad or general problem. However, other social psychology theories have not held up in the context of online participation. For instance, one study found that users will contribute more to an online group project than an individual one. Additionally, although users enjoy it when their contributions are unique, they want a sense of similarity within the online community. Finding similarities with other members of a community encourages new users to participate more and become more active within the community. So, new users must be able to find and recognize similar users already participating in the community. Also, the online community must give a method of analyzing and quantifying the contribution made by any user, to visualize their contributions and help convince them that they are unique and useful. However, these and other psychological motivations behind online participation are still being researched today.
Research has shown that social characteristics, such as socioeconomic status, gender, and age, affect users' propensity to participate online. Following sociological research on the digital divide, newer studies indicate a participation divide in the United States (Correa 2010)(Hargittai & Walejko 2008)(Schradie 2011) and the United Kingdom (Blank 2013). Age is the strongest demographic predictor of online participation, while gender differentiates forms of online participation. The effect of socioeconomic status is not found to be strong in all studies (Correa 2010) and is (partly) mediated through online skills (Hargittai & Walejko 2008) and self-efficacy. Furthermore, existing social science research on online participation has heavily focused on the political sphere, neglecting other areas, such as education, health or cultural participation (Lutz, Hoffmann & Meckel 2014).
Online participation is relevant in different systems of the social web, such as:
Nielsen's 90-9-1 rule: "In most online communities, 90% of users are lurkers who never contribute, 9% of users contribute a little, and 1% of users account for almost all the action".
The majority of the user population is in fact not contributing to the informational gain of online communities, which leads to the phenomenon of contribution inequality. Often, feedback, opinions and editorials are posted by those users who have stronger feelings towards the matter than most others; thus it is often the case that some posts online are not in fact representative of the entire population, leading to what is called survivorship bias. Therefore, it is important to ease the process of contribution as well as to promote quality contribution to address this concern.
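The arithmetic behind the 90-9-1 rule is straightforward; the sketch below works it through for a hypothetical community of 10,000 users (the community size and exact percentages are illustrative assumptions, since real communities vary):

```python
# Illustrative arithmetic for Nielsen's 90-9-1 rule: split a community
# of a given size into lurkers, occasional contributors, and the ~1%
# of heavy contributors who account for almost all the action.
def participation_tiers(total_users):
    lurkers = int(total_users * 0.90)            # never contribute
    occasional = int(total_users * 0.09)         # contribute a little
    heavy = total_users - lurkers - occasional   # remaining ~1%
    return lurkers, occasional, heavy

print(participation_tiers(10_000))  # (9000, 900, 100)
```

Under these numbers, a community needs 100 users before it can expect even a single heavy contributor, which is one way to see why easing the path from lurking to contributing matters.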
Lior Zalmanson and Gal Oestreicher-Singer showed that participation in social websites can help boost subscription and conversion rates on these websites.[24][25]
A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures.[1] The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks.
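One of the simplest social network analysis measures for locating influential actors is degree centrality: the number of direct ties an actor has. The sketch below is a toy illustration with invented names and ties, not data from any study:

```python
# Toy degree-centrality computation: count each actor's direct ties
# in an undirected network given as a list of dyadic (a, b) edges.
from collections import defaultdict

def degree_centrality(edges):
    """edges: list of undirected (a, b) ties. Returns {actor: degree}."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return dict(degree)

ties = [("ann", "bea"), ("ann", "carl"), ("ann", "dina"), ("bea", "carl")]
print(degree_centrality(ties))
# ann has 3 ties, bea and carl 2 each, dina 1 -> ann is most central
```

Real analyses use richer measures (betweenness, closeness, eigenvector centrality), but all of them start from this same representation of actors and dyadic ties.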
Social networks and their analysis form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations".[2] Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s.[1][3] Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science.[4][5]
The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored,[6] although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics.
In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").[7] Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors.[8] Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups.[9]
Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently.[6][10][11] In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski,[12] Alfred Radcliffe-Brown,[13][14] and Claude Lévi-Strauss.[15] A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes,[16] J. Clyde Mitchell and Elizabeth Bott Spillius,[17][18] often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom.[6] Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis.[19] In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure.[20][21] Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory.[22][23][24]
By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis.[25] Mark Granovetter[26] and Barry Wellman[27] are among the former students of White who elaborated and championed the analysis of social networks.[26][28][29][30]
Beginning in the late 1990s, social network analysis attracted sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods to emerging data about online social networks, as well as "digital traces" of face-to-face networks.
In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system.[32][33] These patterns become more apparent as network size increases. However, a global network analysis[34] of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis.[35][36] The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level.
At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context.
Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality.
Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality.[35] In the balance theory of Fritz Heider, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs.
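Heider's balance rule can be sketched in a few lines of code: a signed triad is balanced exactly when the product of its three edge signs is positive ("a friend of my friend is my friend"; "an enemy of my enemy is my friend"). The function below is an illustrative sketch, not from the literature cited here.

```python
# A minimal sketch of Heider's balance rule for signed triads.
# Each tie sign is +1 (positive relation) or -1 (negative relation);
# a triad is balanced when the product of the three signs is positive.

def is_balanced(sign_ab: int, sign_bc: int, sign_ac: int) -> bool:
    return sign_ab * sign_bc * sign_ac > 0

# A rivalrous love triangle: A likes B, A likes C, but B and C dislike
# each other -> the product of signs is negative, so the triad is unbalanced.
print(is_balanced(+1, -1, +1))   # prints False
# The triad rebalances if, say, A sides with B against C.
print(is_balanced(+1, -1, -1))   # prints True
```

Signed-graph analyses generalize this rule from single triads to whole networks, asking whether the network can be partitioned into internally friendly, mutually hostile camps.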
Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges.[37] Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals.
Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior.[38]
In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks.[39]
Organizations: Formal organizations are social groups that distribute tasks for a collective goal.[40] Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures.[40] Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.[41]
Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior.[42]
Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups.[43] Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law.[44] The Barabási model of network evolution is an example of a scale-free network.
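The Barabási model grows a network by preferential attachment: each new node links to existing nodes with probability proportional to their current degree, so early, well-connected nodes accumulate further links and become hubs. A minimal pure-Python sketch (the parameter names and the seeded core are illustrative choices, not part of any standard implementation):

```python
import random

def barabasi_albert(n: int, m: int, seed: int = 42):
    """Grow a network of n nodes; each new node attaches to m existing
    nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' repeats each endpoint once per incident edge, so uniform
    # sampling from it is degree-proportional (preferential) sampling.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges

edges = barabasi_albert(n=200, m=2)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# A few hubs end up with degree far above the mean of roughly 2*m.
print(max(degree.values()), sum(degree.values()) / len(degree))
```

The heavy-tailed degree histogram produced by this growth process is what "scale-free" refers to: no single characteristic node degree describes the network.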
Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population.
Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level." It is primarily used in the social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping).
Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks, these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.[45]
Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach.[46]
Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory.
The basis of Heterophily Theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties".[47]
In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections.[48] Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters.[49] When two separate clusters possess non-redundant information, there is said to be a structural hole between them.[49] Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes.[49]
Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters.[49] For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction.[50]
Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the artist's individual accomplishments.[51][52] Other work examines how network grouping of artists can affect an individual artist's auction performance.[53] An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career.
In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. Community development studies today also make extensive use of such methods.
Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis.
Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks.
The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate and interstate conflict; and social networking among politicians, constituents, and bureaucrats.[54]
In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength.[55]
Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages,[56][57] Indian slums,[58] or the lab.[59] Still other experiments have documented the experimental induction of social contagion of voting behavior,[60] emotions,[61] risk perception,[62] and commercial products.[63]
In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents.[64][65]
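The referral mechanism behind respondent-driven sampling can be sketched as a breadth-first chain: each recruited respondent passes a fixed number of "coupons" to peers, who are recruited in turn. This is a simplified illustration only (real respondent-driven sampling also weights estimates by respondents' reported network sizes); the contact data and parameter names are hypothetical.

```python
from collections import deque

def respondent_driven_sample(contacts, seeds, coupons=2, max_size=6):
    """Breadth-first referral chain: each recruited respondent hands a
    fixed number of coupons to peers, approximating RDS recruitment."""
    sampled, queue = set(seeds), deque(seeds)
    while queue and len(sampled) < max_size:
        person = queue.popleft()
        for peer in contacts.get(person, [])[:coupons]:
            if peer not in sampled and len(sampled) < max_size:
                sampled.add(peer)
                queue.append(peer)
    return sampled

# A hypothetical hidden population reached via referrals from one seed "s".
contacts = {"s": ["a", "b", "c"], "a": ["d"], "b": ["e", "f"], "d": ["g"]}
print(sorted(respondent_driven_sample(contacts, seeds=["s"])))
```

Because recruitment follows the population's own social ties, the method can reach people no sampling frame would list, which is exactly why it suits hard-to-enumerate populations.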
The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists such as Mark Granovetter have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.[66]
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems.[67]
Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology.[68][69]
In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo,[70] De Nooy,[71] Senekal,[72] and Lotker[73] to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA.
Research studies of formal or informal organization relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations.[74] Many organizational social network studies focus on teams.[75] Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment,[76] organizational identification,[37] and interpersonal citizenship behaviour.[77]
Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations.[78] This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations.[78] The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions.[78]
Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from one's social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.[79][80][81] In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity.[79][82]
This particular cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales.
In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities.[48] Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress."[83] Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement.
A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking.[84]In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms.[85]By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts.
There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations.[86]However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted.[48]Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career.
Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by content, direction, and strength. The content of a relation refers to the resource that is exchanged. In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world.[87] Social network analysis methods have become essential to examining these types of computer-mediated communication.
In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data.[88]
Based on the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used to simulate the process of homophily, and they can also serve as a measure of the level of exposure of different groups to each other within the current social network of individuals in a certain area.[89]
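One common way to quantify homophily in a tie list is the Krackhardt–Stern E-I index, which compares external (between-group) ties to internal (within-group) ties. The sketch below uses a made-up two-group network for illustration.

```python
def ei_index(edges, group):
    """Krackhardt-Stern E-I index: (external - internal) / total ties.
    -1 means complete homophily (all ties within groups),
    +1 means complete heterophily (all ties between groups)."""
    external = sum(1 for u, v in edges if group[u] != group[v])
    internal = len(edges) - external
    return (external - internal) / len(edges)

# Hypothetical neighbourhood network with two groups (0 and 1).
group = {"a": 0, "b": 0, "c": 0, "x": 1, "y": 1}
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("x", "y"), ("c", "x")]
print(ei_index(edges, group))  # mostly within-group ties -> negative value
```

A strongly negative index signals segregation; values near zero indicate that group members are roughly as exposed to the other group as to their own.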
Folksonomy is a classification system in which end users apply public tags to online items, typically to make those items easier for themselves or others to find later. Over time, this can give rise to a classification system based on those tags and how often they are applied or searched for, in contrast to a taxonomic classification designed by the owners of the content and specified when it is published.[1][2] This practice is also known as collaborative tagging,[3][4] social classification, social indexing, and social tagging. Folksonomy was originally "the result of personal free tagging of information [...] for one's own retrieval",[5] but online sharing and interaction expanded it into collaborative forms. Social tagging is the application of tags in an open online environment where the tags of other users are available to others. Collaborative tagging (also known as group tagging) is tagging performed by a group of users. This type of folksonomy is commonly used in cooperative and collaborative projects such as research, content repositories, and social bookmarking.
The term was coined by Thomas Vander Wal in 2004[5][6][7] as a portmanteau of folk and taxonomy. Folksonomies became popular as part of social software applications such as social bookmarking and photograph annotation that enable users to collectively classify and find information via shared tags. Some websites include tag clouds as a way to visualize tags in a folksonomy.[8]
Folksonomies can be used for K–12 education, business, and higher education. More specifically, folksonomies may be implemented for social bookmarking, teacher resource repositories, e-learning systems, collaborative learning, collaborative research, professional development and teaching. Wikipedia is a prime example of folksonomy.[9]
Folksonomies are a trade-off between traditional centralized classification and no classification at all,[10] and have several advantages.[11][12][13]
There are several disadvantages with the use of tags and folksonomies as well,[14] and some of the advantages can lead to problems. For example, the simplicity in tagging can result in poorly applied tags.[15] Further, while controlled vocabularies are exclusionary by nature,[16] tags are often ambiguous and overly personalized.[17] Users apply tags to documents in many different ways, and tagging systems often lack mechanisms for handling synonyms, acronyms and homonyms, as well as spelling variations such as misspellings, singular/plural forms, and conjugated and compound words. Some tagging systems do not support tags consisting of multiple words, resulting in tags like "viewfrommywindow". Sometimes users choose specialized tags or tags without meaning to others.
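Tagging systems often mitigate these problems with light normalization before storing or matching tags. The sketch below is a naive illustration of the idea (the function name, synonym table, and singularisation rule are invented for this example; production systems use proper stemming and compound splitting):

```python
import re

def normalize_tag(tag: str, synonyms=None) -> str:
    """Naive tag clean-up: lowercase, strip punctuation and whitespace,
    collapse a trivial English plural, and map known synonyms to one
    canonical tag. Real systems need far more (stemming, spell
    correction, compound-word splitting, homonym disambiguation)."""
    synonyms = synonyms or {}
    t = re.sub(r"[^a-z0-9]+", "", tag.lower())
    if t.endswith("s") and not t.endswith("ss"):
        t = t[:-1]  # crude singularisation: "photos" -> "photo"
    return synonyms.get(t, t)

syn = {"nyc": "newyork"}  # hypothetical synonym mapping
print(normalize_tag("Photos"))    # prints photo
print(normalize_tag("NYC", syn))  # prints newyork
```

Even this crude pipeline collapses "Photos", "photos", and "photo!" into one tag, which is exactly the kind of variation the paragraph above describes.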
A folksonomy emerges when users tag content or information, such as web pages, photos, videos, podcasts, tweets, scientific papers and others. Strohmaier et al.[18] elaborate the concept: the term "tagging" refers to a "voluntary activity of users who are annotating resources with terms – so-called 'tags' – freely chosen from an unbounded and uncontrolled vocabulary". Others describe tags as unstructured textual labels[19] or keywords,[17] which act as a simple form of metadata.[20]
Folksonomies consist of three basic entities: users, tags, and resources. Users create tags to mark resources such as web pages, photos, videos, and podcasts. These tags are used to manage, categorize and summarize online content. This collaborative tagging system also uses these tags as a way to index information, facilitate searches and navigate resources. Folksonomy also includes a set of URLs that are used to identify resources that have been referred to by users of different websites. These systems also include category schemes that have the ability to organize tags at different levels of granularity.[21]
Vander Wal identifies two types of folksonomy: broad and narrow.[22]A broad folksonomy arises when multiple users can apply the same tag to an item, providing information about which tags are the most popular. A narrow folksonomy occurs when users, typically fewer in number and often including the item's creator, tag an item with tags that can each be applied only once. While both broad and narrow folksonomies enable the searchability of content by adding an associated word or phrase to an object, a broad folksonomy allows for sorting based on the popularity of each tag, as well as the tracking of emerging trends in tag usage and developing vocabularies.[22]
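The broad/narrow distinction is easy to see in data: in a broad folksonomy, many users may apply the same tag to one item, so per-item tag counts yield a popularity ranking. The item name and tag data below are made up for illustration.

```python
from collections import Counter

# Broad folksonomy: many users may apply the same tag to the same item,
# so aggregating tags per item reveals popularity and emerging vocabulary.
item_tags = {
    "photo42": ["sunset", "beach", "sunset", "vacation", "sunset", "beach"],
}

def popular_tags(item):
    return Counter(item_tags[item]).most_common()

print(popular_tags("photo42"))
# In a narrow folksonomy each tag could be applied to the item at most
# once, so every count would be 1 and no popularity ranking would emerge.
```

Aggregations like this are what allow broad folksonomies to support sorting by tag popularity and tracking trends in tag usage, as described above.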
An example of a broad folksonomy is del.icio.us, a website where users can tag any online resource they find relevant with their own personal tags. The photo-sharing website Flickr is an oft-cited example of a narrow folksonomy.
'Taxonomy' refers to a hierarchical categorization in which relatively well-defined classes are nested under broader categories. A folksonomy establishes categories (each tag is a category) without stipulating or necessarily deriving a hierarchical structure of parent-child relations among different tags. (Work has been done on techniques for deriving at least loose hierarchies from clusters of tags.[23])
Supporters of folksonomies claim that they are often preferable to taxonomies because folksonomies democratize the way information is organized, they are more useful to users because they reflect current ways of thinking about domains, and they express more information about domains.[24]Critics claim that folksonomies are messy and thus harder to use, and can reflect transient trends that may misrepresent what is known about a field.
An empirical analysis of the complex dynamics of tagging systems, published in 2007,[25] has shown that consensus around stable distributions and shared vocabularies does emerge, even in the absence of a central controlled vocabulary. For content to be searchable, it should be categorized and grouped. While this was believed to require commonly agreed-on sets of content-describing tags (much like the keywords of a journal article), some research has found that in large folksonomies common structures also emerge at the level of categorizations.[26] Accordingly, it is possible to devise mathematical models of collaborative tagging that allow for translating from personal tag vocabularies (personomies) to the vocabulary shared by most users.[27]
Folksonomy is unrelated to folk taxonomy, a cultural practice that has been widely documented in anthropological and folkloristic work. Folk taxonomies are culturally supplied, intergenerationally transmitted, and relatively stable classification systems that people in a given culture use to make sense of the entire world around them (not just the Internet).[21]
The study of the structuring or classification of folksonomy is termed folksontology.[28]This branch of ontology deals with the intersection between highly structured taxonomies or hierarchies and loosely structured folksonomy, asking what best features can be taken from both for a system of classification. The strength of flat-tagging schemes is their ability to relate one item to others like it. Folksonomy allows large disparate groups of users to collaboratively label massive, dynamic information systems. The strength of taxonomies is their browsability: users can easily start from more generalized knowledge and target their queries towards more specific and detailed knowledge.[29]Folksonomy looks to categorize tags and thus create browsable spaces of information that are easy to maintain and expand.
Social tagging for knowledge acquisition is the specific use of tagging for finding and re-finding specific content for an individual or group. Social tagging systems differ from traditional taxonomies in that they are community-based systems lacking the traditional hierarchy of taxonomies. Rather than a top-down approach, social tagging relies on users to create the folksonomy from the bottom up.[30]
Common uses of social tagging for knowledge acquisition include personal development for individual use and collaborative projects. Social tagging is used for knowledge acquisition in secondary, post-secondary, and graduate education as well as personal and business research. The benefits of finding/re-finding source information are applicable to a wide spectrum of users. Tagged resources are located through search queries rather than searching through a more traditional file folder system.[31]The social aspect of tagging also allows users to take advantage of metadata from thousands of other users.[30]
Users choose individual tags for stored resources. These tags reflect personal associations, categories, and concepts, all of which are individual representations based on meaning and relevance to that individual. The tags, or keywords, are designated by users. Consequently, tags represent a user's associations corresponding to the resource. Commonly tagged resources include videos, photos, articles, websites, and email.[32]Tags are beneficial for a couple of reasons. First, they help to structure and organize large amounts of digital resources in a manner that makes them easily accessible when users attempt to locate the resource at a later time. The second aspect is social in nature, that is to say that users may search for new resources and content based on the tags of other users. Even the act of browsing through common tags may lead to further resources for knowledge acquisition.[30]
Tags that occur more frequently with specific resources are said to be more strongly connected. Furthermore, tags may be connected to each other. This may be seen in the frequency in which they co-occur. The more often they co-occur, the stronger the connection. Tag clouds are often utilized to visualize connectivity between resources and tags. Font size increases as the strength of association increases.[32]
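As a rough illustration, the following Python sketch (using hypothetical tag data) computes pairwise co-occurrence counts and the linear font-size scaling a simple tag cloud might use; the data and the scaling rule are assumptions for illustration, not a description of any particular system.

```python
from collections import Counter
from itertools import combinations

# Hypothetical tagged resources: each resource carries a set of tags.
resources = {
    "photo1": {"sunset", "beach", "travel"},
    "photo2": {"sunset", "beach"},
    "photo3": {"sunset", "city"},
}

# Tag frequency, and pairwise co-occurrence counts per resource.
freq = Counter(tag for tags in resources.values() for tag in tags)
cooc = Counter()
for tags in resources.values():
    for a, b in combinations(sorted(tags), 2):
        cooc[(a, b)] += 1  # stronger connection the more often a pair co-occurs

def font_size(tag, min_px=12, max_px=36):
    """Scale font size linearly with tag frequency, as a tag cloud might."""
    lo, hi = min(freq.values()), max(freq.values())
    if hi == lo:
        return max_px
    return min_px + (freq[tag] - lo) * (max_px - min_px) / (hi - lo)

print(cooc[("beach", "sunset")])  # 2: 'beach' and 'sunset' co-occur twice
print(font_size("sunset"))        # the most frequent tag gets the largest font
```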
Tags show interconnections of concepts that were formerly unknown to a user. Therefore, a user's current cognitive constructs may be modified or augmented by the metadata information found in aggregated social tags. This process promotes knowledge acquisition through cognitive irritation and equilibration. This theoretical framework is known as the co-evolution model of individual and collective knowledge.[32]
The co-evolution model focuses on cognitive conflict in which a learner's prior knowledge and the information received from the environment are dissimilar to some degree.[30][32]When this incongruence occurs, the learner must work through a process of cognitive equilibration in order to make personal cognitive constructs and outside information congruent. According to the co-evolution model, this may require the learner to modify existing constructs or simply add to them.[30]The additional cognitive effort promotes information processing which in turn allows individual learning to occur.[32]
https://en.wikipedia.org/wiki/Social_tagging
Web 2.0 (also known as participative (or participatory)[1]web and social web)[2]refers to websites that emphasize user-generated content, ease of use, participatory culture, and interoperability (i.e., compatibility with other products, systems, and devices) for end users.
The term was coined by Darcy DiNucci in 1999[3]and later popularized by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference in 2004.[4][5][6]Although the term mimics the numbering of software versions, it does not denote a formal change in the nature of the World Wide Web,[7]but merely describes a general change that occurred during this period as interactive websites proliferated and came to overshadow the older, more static websites of the original Web.[2]
A Web 2.0 website allows users to interact and collaborate through social media dialogue as creators of user-generated content in a virtual community. This contrasts with the first generation of Web 1.0-era websites, where people were limited to passively viewing content. Examples of Web 2.0 features include social networking sites or social media sites (e.g., Facebook), blogs, wikis, folksonomies ("tagging" keywords on websites and links), video sharing sites (e.g., YouTube), image sharing sites (e.g., Flickr), hosted services, Web applications ("apps"), collaborative consumption platforms, and mashup applications.
Whether Web 2.0 is substantially different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon.[8]His original vision of the Web was "a collaborative medium, a place where we [could] all meet and read and write".[9][10]On the other hand, the term Semantic Web (sometimes referred to as Web 3.0)[11]was coined by Berners-Lee to refer to a web of content where the meaning can be processed by machines.[12]
Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1989 to 2004. According to Graham Cormode and Balachander Krishnamurthy, "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content".[13]Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers, or on free web hosting services such as Tripod and the now-defunct GeoCities.[14][15]With Web 2.0, it became common for average web users to have social-networking profiles (on sites such as Myspace and Facebook) and personal blogs (sites like Blogger, Tumblr and LiveJournal) through either a low-cost web hosting service or through a dedicated host. In general, content was generated dynamically, allowing readers to comment directly on pages in a way that was not common previously.[citation needed]
Some Web 2.0 capabilities were present in the days of Web 1.0, but were implemented differently. For example, a Web 1.0 site may have had a guestbook page for visitor comments, instead of a comment section at the end of each page (typical of Web 2.0). During Web 1.0, server performance and bandwidth had to be considered—lengthy comment threads on multiple pages could potentially slow down an entire site. Terry Flew, in his third edition of New Media, described the differences between Web 1.0 and Web 2.0 as a
"move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on "tagging" website content using keywords (folksonomy)."
Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 "craze".[16]
Some common design elements of a Web 1.0 site include:[17]
The term "Web 2.0" was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article "Fragmented Future":[3][20]
"The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven."
Writing when Palm Inc. introduced its first web-capable personal digital assistant (supporting Web access with WAP), DiNucci saw the Web "fragmenting" into a future that extended beyond the browser/PC combination it was identified with. She focused on how the basic information structure and hyper-linking mechanism introduced by HTTP would be used by a variety of devices and platforms. As such, her "2.0" designation referred to a next version of the Web, and does not directly relate to the term's current use.
The term Web 2.0 did not resurface until 2002.[21][22][23]Companies such as Amazon, Facebook, Twitter, and Google made it easy to connect and engage in online transactions. Web 2.0 introduced new features, such as multimedia content and interactive web applications, which mainly consisted of two-dimensional screens.[24]Kinsley and Eric focus on the concepts currently associated with the term where, as Scott Dietzen puts it, "the Web becomes a universal, standards-based integration platform".[23]In 2004, the term began to gain popularity when O'Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you".[25]They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value. O'Reilly and Battelle contrasted Web 2.0 with what they called "Web 1.0", associating the latter term with the business models of Netscape and the Encyclopædia Britannica Online. For example,
"Netscape framed 'the web as platform' in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the 'horseless carriage' framed the automobile as an extension of the familiar, Netscape promoted a 'webtop' to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.[26]"
In short, Netscape focused on creating software, releasing updates and bug fixes, and distributing it to the end users. O'Reilly contrasted this with Google, a company that did not, at the time, focus on producing end-user software, but instead on providing a service based on data, such as the links that Web page authors make between sites. Google exploits this user-generated content to offer Web searches based on reputation through its "PageRank" algorithm. Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called "the perpetual beta". A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia – while the Britannica relies upon experts to write articles and release them periodically in publications, Wikipedia relies on trust in (sometimes anonymous) community members to constantly write and edit content. Wikipedia editors are not required to have educational credentials, such as degrees, in the subjects in which they are editing. Wikipedia is not based on subject-matter expertise, but rather on an adaptation of the open source software adage "given enough eyeballs, all bugs are shallow". This maxim states that if enough users are able to look at a software product's code (or a website), then these users will be able to fix any "bugs" or other problems. The Wikipedia volunteer editor community produces, edits, and updates articles constantly. Web 2.0 conferences have been held every year since 2004, attracting entrepreneurs, representatives from large companies, tech experts and technology reporters.
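As a rough illustration of the reputation idea behind PageRank (a minimal sketch of the published concept, not Google's production algorithm), here is a power-iteration example over a tiny hypothetical link graph:

```python
# Hypothetical link graph: page -> pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Power iteration: each page repeatedly shares its rank
    equally among the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
assert abs(sum(ranks.values()) - 1.0) < 1e-9  # ranks form a distribution
print(max(ranks, key=ranks.get))  # 'c' collects the most link reputation
```

Here page "c" ranks highest because both "a" and "b" link to it; the links users create between pages are the user-generated data the service exploits.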
The popularity of Web 2.0 was acknowledged when TIME magazine named "You" its 2006 Person of the Year.[27]That is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media sharing sites.
In the cover story, Lev Grossman explains:
"It's a story about community and collaboration on a scale never seen before. It's about the cosmic compendium of knowledge Wikipedia and the million-channel people's network YouTube and the online metropolis MySpace. It's about the many wresting power from the few and helping one another for nothing and how that will not only change the world but also change the way the world changes."
Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site's content by commenting on published articles, or creating a user account or profile on the site, which may enable increased participation. By increasing emphasis on these already-extant capabilities, they encourage users to rely more on their browser for user interface, application software ("apps") and file storage facilities. This has been called "network as platform" computing.[5]Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress' easy-to-use blog and website creation tools), "tagging" (which enables users to label websites, videos or photos in some fashion), "like" buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking.
Users can provide the data and exercise some control over what they share on a Web 2.0 site.[5][28]These sites may have an "architecture of participation" that encourages users to add value to the application as they use it.[4][5]Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects.[29]Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet.[30]
Web 2.0 offers almost all users the same freedom to contribute,[31]which can lead to effects that members of a given community perceive as more or less productive, and in turn to emotional distress and disagreement. The impossibility of excluding group members who do not contribute to the provision of goods (i.e., to the creation of a user-generated website) from sharing the benefits (of using the website) gives rise to the possibility that serious members will prefer to withhold their contribution of effort and "free ride" on the contributions of others.[32]This requires what is sometimes called radical trust by the management of the Web site.
Encyclopaedia Britannica calls Wikipedia "the epitome of the so-called Web 2.0" and describes what many view as the ideal of a Web 2.0 platform as "an egalitarian environment where the web of social software enmeshes users in both their real and virtual-reality workplaces."[33]
According to Best,[34]the characteristics of Web 2.0 are rich user experience, user participation, dynamic content, metadata, Web standards, and scalability. Further characteristics, such as openness, freedom,[35]and collective intelligence[36]by way of user participation, can also be viewed as essential attributes of Web 2.0. Some websites require users to contribute user-generated content to have access to the website, to discourage "free riding".
The key features of Web 2.0 include:[citation needed]
The client-side (Web browser) technologies used in Web 2.0 development include Ajax and JavaScript frameworks. Ajax programming uses JavaScript and the Document Object Model (DOM) to update selected regions of the page area without undergoing a full page reload. To allow users to continue interacting with the page, communications such as data requests going to the server are separated from data coming back to the page (asynchronously).
Otherwise, the user would have to routinely wait for the data to come back before they can do anything else on that page, just as a user has to wait for a page to complete the reload. This also increases the overall performance of the site, as requests can be sent without blocking on, or queueing behind, the data being returned to the client. The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation), two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in their Web application.
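To illustrate the two payload formats, here is a short Python sketch (the record and element names are made up for the example) that serializes the same hypothetical record an Ajax request might fetch, once as JSON and once as XML:

```python
import json
import xml.etree.ElementTree as ET

# A hypothetical record the server might return to an Ajax request.
record = {"id": 42, "title": "Web 2.0", "tags": ["ajax", "json"]}

# JSON: natively understood by JavaScript on the client.
payload = json.dumps(record)

# XML: the older interchange format, also used by e.g. RSS and Atom.
root = ET.Element("item", id=str(record["id"]))
ET.SubElement(root, "title").text = record["title"]
for tag in record["tags"]:
    ET.SubElement(root, "tag").text = tag
xml_payload = ET.tostring(root, encoding="unicode")

print(payload)      # {"id": 42, "title": "Web 2.0", "tags": ["ajax", "json"]}
print(xml_payload)  # <item id="42"><title>Web 2.0</title><tag>ajax</tag><tag>json</tag></item>
```

Either payload carries the same structured data; JSON's direct mapping onto JavaScript objects is what made it the more common choice on the client.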
When this data is received via Ajax, the JavaScript program then uses the Document Object Model to dynamically update the Web page based on the new data, allowing for a rapid and interactive user experience. In short, using these techniques, web designers can make their pages function like desktop applications. For example, Google Docs uses this technique to create a Web-based word processor.
As a widely available plug-in independent of W3C standards (the World Wide Web Consortium is the governing body of Web standards and protocols), Adobe Flash was capable of doing many things that were not possible pre-HTML5. Of Flash's many capabilities, the most commonly used was its ability to integrate streaming multimedia into HTML pages. With the introduction of HTML5 in 2010 and growing concerns about Flash's security, the role of Flash became obsolete, with browser support ending on December 31, 2020.
In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks use the same technology as JavaScript, Ajax, and the DOM. However, frameworks smooth over inconsistencies between Web browsers and extend the functionality available to developers. Many of them also come with customizable, prefabricated 'widgets' that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel.
On the server-side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as Perl, PHP, Python, Ruby, as well as Enterprise Java (J2EE) and the Microsoft .NET Framework, are used by developers to output data dynamically using information from files and databases. This allows websites and web services to share machine-readable formats such as XML (Atom, RSS, etc.) and JSON. When data is available in one of these formats, another website can use it to integrate a portion of that site's functionality.
Web 2.0 can be described in three parts:
As such, Web 2.0 draws together the capabilities of client- and server-side software, content syndication and the use of network protocols. Standards-oriented Web browsers may use plug-ins and software extensions to handle the content and user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment known as "Web 1.0".
Web 2.0 sites include the following features and techniques, referred to by the acronym SLATES by Andrew McAfee:[37]
While SLATES forms the basic framework of Enterprise 2.0, it does not contradict all of the higher level Web 2.0 design patterns and business models. It includes discussions of self-service IT, the long tail of enterprise IT demand, and many other consequences of the Web 2.0 era in enterprise uses.[38]
A third important part of Web 2.0 is the social web. The social Web consists of a number of online tools and platforms where people share their perspectives, opinions, thoughts and experiences. Web 2.0 applications tend to interact much more with the end user. As such, the end user is not only a user of the application but also a participant by:
The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to append a flurry of 2.0's to existing concepts and fields of study,[39]including Library 2.0, Social Work 2.0,[40]Enterprise 2.0, PR 2.0,[41]Classroom 2.0,[42]Publishing 2.0,[43]Medicine 2.0,[44]Telco 2.0, Travel 2.0, Government 2.0,[45]and even Porn 2.0.[46]Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas. For example, in the Talis white paper "Library 2.0: The Challenge of Disruptive Innovation", Paul Miller argues
"Blogs, wikis and RSS are often held up as exemplary manifestations of Web 2.0. A reader of a blog or a wiki is provided with tools to add a comment or even, in the case of the wiki, to edit the content. This is what we call the Read/Write web. Talis believes that Library 2.0 means harnessing this type of participation so that libraries can benefit from increasingly rich collaborative cataloging efforts, such as including contributions from partner libraries as well as adding rich enhancements, such as book jackets or movie files, to records from publishers and others."[47]
Here, Miller links Web 2.0 technologies and the culture of participation that they engender to the field of library science, supporting his claim that there is now a "Library 2.0". Many of the other proponents of new 2.0s mentioned here use similar methods. The meaning of Web 2.0 is role dependent. For example, some use Web 2.0 to establish and maintain relationships through social networks, while some marketing managers might use this promising technology to "end-run traditionally unresponsive I.T. department[s]."[48]
There is a debate over the use of Web 2.0 technologies in mainstream education. Issues under consideration include the understanding of students' different learning modes; the conflicts between ideas entrenched in informal online communities and educational establishments' views on the production and authentication of 'formal' knowledge; and questions about privacy, plagiarism, shared authorship and the ownership of knowledge and information produced and/or published on line.[49]
Web 2.0 is used by companies, non-profit organisations and governments for interactive marketing. A growing number of marketers are using Web 2.0 tools to collaborate with consumers on product development, customer service enhancement, product or service improvement and promotion. Companies can use Web 2.0 tools to improve collaboration with both their business partners and consumers. Among other things, company employees have created wikis—websites that allow users to add, delete, and edit content—to list answers to frequently asked questions about each product, and consumers have added significant contributions.
Another marketing Web 2.0 lure is to make sure consumers can use the online community to network among themselves on topics of their own choosing.[50]Mainstream media usage of Web 2.0 is increasing. Saturating media hubs—like The New York Times, PC Magazine and Business Week—with links to popular new Web sites and services is critical to achieving the threshold for mass adoption of those services.[51]User web content can be used to gauge consumer satisfaction. In a recent article for Bank Technology News, Shane Kite describes how Citigroup's Global Transaction Services unit monitors social media outlets to address customer issues and improve products.[52]
In tourism industries, social media is an effective channel to attract travellers and promote tourism products and services by engaging with customers. The brand of a tourist destination can be built through marketing campaigns on social media and by engaging with customers. For example, the "Snow at First Sight" campaign launched by the State of Colorado aimed to bring brand awareness to Colorado as a winter destination. The campaign used social media platforms, for example Facebook and Twitter, to promote this competition, and requested participants to share experiences, pictures and videos on social media platforms. As a result, Colorado enhanced its image as a winter destination and created a campaign worth about $2.9 million.[citation needed]
Tourism organisations can earn brand loyalty from interactive marketing campaigns on social media with engaging, passive communication tactics. For example, "Moms" advisors of Walt Disney World are responsible for offering suggestions and replying to questions about family trips at Walt Disney World. Due to their expertise in Disney, the "Moms" were chosen to represent the campaign.[53]Social networking sites, such as Facebook, can be used as a platform for providing detailed information about a marketing campaign, as well as real-time online communication with customers. Korean Airline Tour created and maintained a relationship with customers by using Facebook for individual communication purposes.[54]
Travel 2.0 refers to a model of Web 2.0 in tourism industries which provides virtual travel communities. The Travel 2.0 model allows users to create their own content and exchange their words through globally interactive features on websites.[55][56]Users can also contribute their experiences, images and suggestions regarding their trips through online travel communities. For example, TripAdvisor is an online travel community which enables users to autonomously rate and share their reviews and feedback on hotels and tourist destinations. Non-pre-associated users can interact socially and communicate through discussion forums on TripAdvisor.[57]
Social media, especially Travel 2.0 websites, plays a crucial role in the decision-making behaviors of travelers. The user-generated content on social media tools has a significant impact on travelers' choices and organisation preferences. Travel 2.0 sparked a radical change in how travelers receive information, from business-to-customer marketing to peer-to-peer reviews. User-generated content became a vital tool for helping many travelers manage their international travels, especially first-time visitors.[58]Travellers tend to trust and rely on peer-to-peer reviews and virtual communications on social media rather than the information provided by travel suppliers.[57][53]
In addition, an autonomous review feature on social media can help travelers reduce risks and uncertainties before the purchasing stage.[55][58]Social media is also a channel for customer complaints and negative feedback, which can damage the images and reputations of organisations and destinations.[58]For example, a majority of UK travellers read customer reviews before booking hotels, and half of these customers would refrain from booking hotels that received negative feedback.[58]
Therefore, organisations should develop strategic plans to handle and manage negative feedback on social media. Although the user-generated content and rating systems on social media are beyond a business's control, the business can monitor those conversations and participate in communities to enhance customer loyalty and maintain customer relationships.[53]
Web 2.0 could allow for more collaborative education. For example, blogs give students a public space to interact with one another and the content of the class.[59]Some studies suggest that Web 2.0 can increase the public's understanding of science, which could improve government policy decisions. A 2012 study by researchers at the University of Wisconsin–Madison notes that
Ajax has prompted the development of Web sites that mimic desktop applications, such as word processing, the spreadsheet, and slide-show presentation. WYSIWYG wiki and blogging sites replicate many features of PC authoring applications. Several browser-based services have emerged, including EyeOS[61]and the no-longer-active YouOS.[62]Although named operating systems, many of these services are application platforms. They mimic the user experience of desktop operating systems, offering features and applications similar to a PC environment, and are able to run within any modern browser. However, these so-called "operating systems" do not directly control the hardware on the client's computer. Numerous web-based application services appeared during the dot-com bubble of 1997–2001 and then vanished, having failed to gain a critical mass of customers.
Many regard syndication of site content as a Web 2.0 feature. Syndication uses standardized protocols to permit end-users to make use of a site's data in another context (such as another Web site, a browser plugin, or a separate desktop application). Protocols permitting syndication include RSS (really simple syndication, also known as Web syndication), RDF (as in RSS 1.1), and Atom, all of which are XML-based formats. Observers have started to refer to these technologies as Web feeds.
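As a sketch of what such a feed looks like, the following Python snippet builds a minimal RSS 2.0 document from hypothetical feed data (a real feed would also carry elements such as description and pubDate):

```python
import xml.etree.ElementTree as ET

def build_rss(title, link, items):
    """Assemble a bare-bones RSS 2.0 document: a channel with items."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item_title, item_link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_link
    return ET.tostring(rss, encoding="unicode")

feed = build_rss("Example Blog", "https://example.com",
                 [("First post", "https://example.com/1")])
print(feed)
```

Because the format is standardized XML, any consumer (another site, a browser plugin, or a desktop reader) can parse the feed without knowing anything about the site that produced it.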
Specialized protocols such as FOAF and XFN (both for social networking) extend the functionality of sites and permit end-users to interact without centralized Web sites.
Web 2.0 often uses machine-based interactions such as REST and SOAP. Servers often expose proprietary application programming interfaces (APIs), but standard APIs (for example, for posting to a blog or notifying a blog update) have also come into use. Most communications through APIs involve XML or JSON payloads. REST APIs, through their use of self-descriptive messages and hypermedia as the engine of application state, should be self-describing once an entry URI is known. Web Services Description Language (WSDL) is the standard way of publishing a SOAP application programming interface, and there are a range of Web service specifications.
In November 2004, CMP Media applied to the USPTO for a service mark on the use of the term "WEB 2.0" for live events.[63]On the basis of this application, CMP Media sent a cease-and-desist demand to the Irish non-profit organisation IT@Cork on May 24, 2006,[64]but retracted it two days later.[65]The "WEB 2.0" service mark registration passed final PTO Examining Attorney review on May 10, 2006, and was registered on June 27, 2006.[63]The European Union application (which would confer unambiguous status in Ireland)[66]was declined on May 23, 2007.
Critics of the term claim that "Web 2.0" does not represent a new version of the World Wide Web at all, but merely continues to use so-called "Web 1.0" technologies and concepts:[8]
"Nobody really knows what it means... If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along... Web 2.0, for some people, it means moving some of the thinking [to the] client side, so making it more immediate, but the idea of the Web as interaction between people is really what the Web is. That was what it was designed to be... a collaborative space where people can interact."
"The Web is great because that person can't foist anything on you—you have to go get it. They can make themselves available, but if nobody wants to look at their site, that's fine. To be honest, most people who have something to say get published now."[72]
"The task before us is to extend into the digital world the virtues of authenticity, expertise, and scholarly apparatus that have evolved over the 500 years of print, virtues often absent in the manuscript age that preceded print".
|
https://en.wikipedia.org/wiki/Web_2.0
|
CallApp is a mobile app offering caller ID, call blocking and call recording. It gives background information about the entities behind incoming or outgoing calls by utilizing the user's community-generated content and social networking services.
CallApp was founded in 2011[1] in Tel Aviv, Israel, by its former CEO, Oded Volovitz, and current CEO, Amit On,[2] raising $1 million in seed investment.[3] It was initially introduced publicly at TechCrunch Disrupt New York 2012, where it launched its application for Android,[4] at the DEMO conference,[2] and at the Mobile World Congress in Barcelona.[5] It won the Geek Award for the fledgling start-up of the year 2012.[6] In 2014, the company raised $4 million from the angel investors Saar Wilf and Moshe Lichtman and from the Giza and Susquehanna venture capital funds.[7] Amit On was named the company's CEO.[8] In 2014, CallApp had five million users along with 50,000 daily downloads, making it one of the 100 leading apps on Google Play.[9] As of February 2021, it has over 100 million users.[10]
CallApp provides caller ID, which gives users the means to identify telemarketing, spammer and robocall numbers, and enables call blocking and blacklisting of unsolicited callers. The app provides the user with real-time information about the entities behind incoming or outgoing calls by utilizing information that people and businesses share about themselves across the web, along with communications that the CallApp community decides to share.[11] The app synchronizes the user's email and calendar, shows mutual contacts, and includes business information.[12] It also offers call recording, a telephone directory, a reverse telephone directory, private browsing, call reminders and a car mode, as well as paid upgrade options.[13]
|
https://en.wikipedia.org/wiki/CallApp
|
RealCall is a US-based AI caller identification and call blocking smartphone application, used to detect, engage and block scam and spam calls and SMS messages. It has AI algorithms with a built-in free reverse phone lookup service and customized answer bots for the detection, engagement and blocking of unwanted calls and messages.[1] The app is available for Android and iOS devices.
RealCall maintains a database of known numbers and uses an AI algorithm to identify phone numbers and block calls from robocallers, spammers, telemarketers and scammers. It analyzes a caller's voice and a call's content to determine the nature of the call.[2] The app automatically blocks unwanted calls and uses answer bots to answer calls from telemarketers.[3] It has a reverse lookup service, used to find the owner's name, address, network carrier, location and risk level for unknown numbers,[4] and its system is integrated with the FTC Do Not Call Registry.[5][6]
RealCall is developed by Second Phone Number Inc., a privately held company headquartered in San Jose, California, US.[7] It released the first iOS version of the app on 6 April 2022, and the Android version in December 2022.[8][9] As of September 2022, it had blocked 30.63 million spam calls and 11.6 billion spam text messages, originating mainly from the 530, 502, 626, 915, and 315 US area codes.[10][11] In November 2022, Dingtone announced a partnership with RealCall and integrated its API.[citation needed] As of 2022, it had collected 1.5 billion phone numbers in its global database.[1][12]
The app is only available in the US and Canada, for iOS and Android users.[2]
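The screening flow described above can be sketched as a simple rule-based layer of the kind such apps combine with their AI models. Everything here is illustrative: the numbers are made up and this is not RealCall's actual implementation.

```python
# Simplified sketch of rule-based call screening: check an incoming number
# against the user's personal blacklist, then against a shared database of
# known spam numbers. All phone numbers below are invented.

KNOWN_SPAM = {"+15305550123", "+15025550145"}  # e.g. crowd-reported numbers

def screen_call(number: str, user_blacklist: set) -> str:
    """Return a screening decision for an incoming call."""
    if number in user_blacklist:
        return "blocked (user blacklist)"
    if number in KNOWN_SPAM:
        return "blocked (spam database)"
    return "allowed"

print(screen_call("+15305550123", set()))   # → blocked (spam database)
print(screen_call("+15559876543", set()))   # → allowed
```

A real system would layer probabilistic signals (voice and content analysis, risk scores, carrier data) on top of such lookups rather than relying on exact set membership alone.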
|
https://en.wikipedia.org/wiki/RealCall
|
Algorithmic curation is the selection of online media by recommendation algorithms and personalized searches. Examples include search engine and social media products[1] such as the Twitter feed, Facebook's News Feed, and Google Personalized Search.
Curation algorithms are typically proprietary or "black box", leading to concern about algorithmic bias and the creation of filter bubbles.[1][2]
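As a toy sketch of the personalization step such systems perform (the scoring rule, tags, and weights are invented for illustration and do not reflect any real platform's algorithm):

```python
# Minimal personalized curation: rank candidate items by how well their
# topic tags match a user's interest profile, then keep the top few.

def curate(items, interests, top_n=2):
    """Return the top_n items whose tags best match the user's interests."""
    def score(item):
        return sum(interests.get(tag, 0.0) for tag in item["tags"])
    return sorted(items, key=score, reverse=True)[:top_n]

items = [
    {"id": "a", "tags": ["politics", "economy"]},
    {"id": "b", "tags": ["sports"]},
    {"id": "c", "tags": ["economy", "tech"]},
]
interests = {"tech": 0.9, "economy": 0.5}  # learned from past behavior

feed = curate(items, interests)
print([item["id"] for item in feed])  # → ['c', 'a']
```

Even this trivial scorer illustrates the filter-bubble concern: items outside the user's existing interests (here, "sports") score zero and never surface, so the profile reinforces itself.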
|
https://en.wikipedia.org/wiki/Algorithmic_curation
|
Ambient awareness (AmA) is a term used by social scientists to describe a form of peripheral social awareness through social media. This awareness is propagated from relatively constant contact with one's friends and colleagues via social networking platforms on the Internet. The term essentially defines the sort of omnipresent knowledge one experiences by being a regular user of these media outlets that allow a constant connection with one's social circle.
According to Clive Thompson of The New York Times, ambient awareness is "very much like being physically near someone and picking up on mood through the little things; body language, sighs, stray comments". Academic Andreas Kaplan defines ambient awareness as "awareness created through regular and constant reception, and/or exchange of information fragments through social media".[1] Two friends who regularly follow one another's digital information can already be aware of each other's lives without actually being physically present to have had a conversation.
Socially speaking, ambient awareness and social media are products of the new generations who were born or grew up in the digital age, starting circa 1998 and running to current times. Social media is personal media (what you're doing in the moment, how you feel, a picture of where you are) combined with social communication. Social media is the latticework for ambient awareness. Without social media the state of ambient awareness cannot exist.
Artificial Social Networking Intelligence (ASNI) refers to the application of artificial intelligence within social networking services and social media platforms. It encompasses various technologies and techniques used to automate, personalize, enhance, improve, and synchronize users' interactions and experiences within social networks. ASNI is expected to evolve rapidly, influencing how users interact online and shaping their digital experiences. Transparency, ethical considerations, media influence bias, and user control over data will be crucial to ensure responsible development and a positive impact.
A significant feature of social media is that it is created by those who also consume it. Mostly, those participating in this phenomenon are adolescents, college age, or young adult professionals. According to Dr. Mimi Ito, a cultural anthropologist and Professor in Residence at the University of California at Irvine,[2] the mobile device is the greatest proxy device used to create and distribute social media. She reportedly states that "teenagers capture and produce their own media, and stay in constant ambient contact with each other..." using mobile devices. Usually while doing this they are consuming other forms of media such as music or video content via their smartphones, tablets, or other similar devices.[3] Effectively this has led social scientists to believe that learning and multitasking will have a new face as the products of the digital generation enter the work force and begin to integrate their learning methods into the standard preexisting business models of today. Professors Kaplan and Haenlein see ambient awareness as one of the major reasons for the success of such microblogging sites as Twitter.[4]
The earliest available technology that could be used for constant social contact is the cell phone. For the first time, people could be contacted readily and at will beyond the confines of their work or homes. Then later, with the additional service of texting, one can see a somewhat primitive form of the status update. Since a text message allows only 160 characters to transmit pertinent information, it paved the way for the status update as we know it today. The transition from only having a few points of regular long distance contact, to being constantly available via cell phone, is what primed society for social networking websites.
Perhaps the first instance where these websites created the possibility of larger-scale ambient awareness was when Facebook installed the news feed. The news feed automatically sends compiled information on all of a user's contacts' activities directly to them, so that they can access all of the happenings in their world from one location. For the first time, becoming someone's Facebook friend was the equivalent of subscribing to a feed of their daily minutiae. Since this innovation, a new wave of micro-blogging services has emerged, such as Twitter or Tumblr. Although these services have often been criticized as containing seemingly meaningless snippets of information, when a follower gathers a certain amount of information, they begin to obtain an ambient understanding of who they are following. This has led to the mass usage of social media as not only a social tool but also as a marketing and business tool.
Websites such as Twitter, YouTube, Facebook, and Myspace, among many others, have been used by people in all forms of business to create a closer digital/ambient bond with their clientele base. This is most notably seen in the music industry where social media networking has become the mainstay of all advertising for independent and major artists. The effect of this type of ambient marketing is that the consumer begins to get a sense of the artist's life style and personality. In this way social media outlets and ambient awareness have managed to tighten the gap between consumers and producers in all areas of business.
As web-based collaboration tools and social project management suites proliferate, the addition of activity streams to those products helps to create business context-specific ambient awareness, and produces a new class of products, such as social project management platforms.[5]
|
https://en.wikipedia.org/wiki/Ambient_awareness
|
In psychology, the collective unconscious (German: kollektives Unbewusstes) is a term coined by Carl Jung, referring to the belief that the unconscious mind comprises the instincts of Jungian archetypes—innate symbols understood from birth in all humans.[1] Jung considered the collective unconscious to underpin and surround the unconscious mind, distinguishing it from the personal unconscious of Freudian psychoanalysis. He believed that the concept of the collective unconscious helps to explain why similar themes occur in mythologies around the world. He argued that the collective unconscious had a profound influence on the lives of individuals, who lived out its symbols and clothed them in meaning through their experiences. The psychotherapeutic practice of analytical psychology revolves around examining the patient's relationship to the collective unconscious.
Psychiatrist and Jungian analyst Lionel Corbett argues that the contemporary terms "autonomous psyche" or "objective psyche" are more commonly used today in the practice of depth psychology rather than the traditional term of the "collective unconscious".[2] Critics of the collective unconscious concept have called it unscientific and fatalistic, or otherwise very difficult to test scientifically (due to the mystical aspect of the collective unconscious).[3] Proponents suggest that it is borne out by findings of psychology, neuroscience, and anthropology.
The term "collective unconscious" first appeared in Jung's 1916 essay, "The Structure of the Unconscious".[4]This essay distinguishes between the "personal", Freudian unconscious, filled with sexual fantasies and repressed images, and the "collective" unconscious encompassing the soul of humanity at large.[5]
In "The Significance of Constitution and Heredity in Psychology" (November 1929), Jung wrote:
And the essential thing, psychologically, is that in dreams, fantasies, and other exceptional states of mind the most far-fetched mythological motifs and symbols can appear autochthonously at any time, often, apparently, as the result of particular influences, traditions, and excitations working on the individual, but more often without any sign of them. These "primordial images" or "archetypes," as I have called them, belong to the basic stock of the unconscious psyche and cannot be explained as personal acquisitions. Together they make up that psychic stratum which has been called the collective unconscious. The existence of the collective unconscious means that individual consciousness is anything but a tabula rasa and is not immune to predetermining influences. On the contrary, it is in the highest degree influenced by inherited presuppositions, quite apart from the unavoidable influences exerted upon it by the environment. The collective unconscious comprises in itself the psychic life of our ancestors right back to the earliest beginnings. It is the matrix of all conscious psychic occurrences, and hence it exerts an influence that compromises the freedom of consciousness in the highest degree, since it is continually striving to lead all conscious processes back into the old paths.[6]
On October 19, 1936, Jung delivered a lecture "The Concept of the Collective Unconscious" to the Abernethian Society at St. Bartholomew's Hospital in London.[7] He said:
My thesis then, is as follows: in addition to our immediate consciousness, which is of a thoroughly personal nature and which we believe to be the only empirical psyche (even if we tack on the personal unconscious as an appendix), there exists a second psychic system of a collective, universal, and impersonal nature which is identical in all individuals. This collective unconscious does not develop individually but is inherited. It consists of pre-existent forms, the archetypes, which can only become conscious secondarily and which give definite form to certain psychic contents.[8]
Jung linked the collective unconscious to "what Freud called 'archaic remnants' – mental forms whose presence cannot be explained by anything in the individual's own life and which seem to be aboriginal, innate, and inherited shapes of the human mind".[9] He credited Freud for developing his "primal horde" theory in Totem and Taboo and continued further with the idea of an archaic ancestor maintaining its influence in the minds of present-day humans. Every human being, he wrote, "however high his conscious development, is still an archaic man at the deeper levels of his psyche."[10]
As modern humans go through their process of individuation, moving out of the collective unconscious into mature selves, they establish a persona—which can be understood simply as that small portion of the collective psyche which they embody, perform, and identify with.[11]
The collective unconscious exerts overwhelming influence on the minds of individuals. These effects of course vary widely, however, since they involve virtually every emotion and situation. At times, the collective unconscious can terrify, but it can also heal.[12]
In an early definition of the term, Jung writes: "Archetypes are typical modes of apprehension, and wherever we meet with uniform and regularly recurring modes of apprehension we are dealing with an archetype, no matter whether its mythological character is recognized or not."[13] He traces the term back to Philo, Irenaeus, and the Corpus Hermeticum, which associate archetypes with divinity and the creation of the world, and notes the close relationship of Platonic ideas.[14]
These archetypes dwell in a world beyond the chronology of a human lifespan, developing on an evolutionary timescale. Regarding the animus and anima, the male principle within the woman and the female principle within the man, Jung writes:
They evidently live and function in the deeper layers of the unconscious, especially in that phylogenetic substratum which I have called the collective unconscious. This localization explains a good deal of their strangeness: they bring into our ephemeral consciousness an unknown psychic life belonging to a remote past. It is the mind of our unknown ancestors, their way of thinking and feeling, their way of experiencing life and the world, gods, and men. The existence of these archaic strata is presumably the source of man's belief in reincarnations and in memories of "previous experiences". Just as the human body is a museum, so to speak, of its phylogenetic history, so too is the psyche.[15]
Jung also described archetypes as imprints of momentous or frequently recurring situations in the lengthy human past.[16]
A complete list of archetypes cannot be made, nor can differences between archetypes be absolutely delineated.[17] For example, the Eagle is a common archetype that may have a multiplicity of interpretations. It could mean the soul leaving the mortal body and connecting with the heavenly spheres, or it may mean that someone is sexually impotent, in that they have had their spiritual ego body engaged. In spite of this difficulty, Jungian analyst June Singer suggests a partial list of well-studied archetypes, listed in pairs of opposites:[18]
Jung made reference to contents of this category of the unconscious psyche as being similar to Levy-Bruhl's use of "collective representations", Hubert and Mauss's "categories of the imagination", and Adolf Bastian's "primordial thoughts". He also called archetypes "dominants" because of their profound influence on mental life.
Jung's exposition of the collective unconscious builds on the classic issue in psychology and biology regarding nature versus nurture. If we accept that nature, or heredity, has some influence on the individual psyche, we must examine the question of how this influence takes hold in the real world.[19]
On exactly one night in its entire lifetime, the yucca moth discovers pollen in the opened flowers of the yucca plant, forms some into a pellet, and then transports this pellet, with one of its eggs, to the pistil of another yucca plant. This activity cannot be "learned"; it makes more sense to describe the yucca moth as experiencing intuition about how to act.[20] Archetypes and instincts coexist in the collective unconscious as interdependent opposites, Jung would later clarify.[12][21] Whereas for most animals intuitive understandings completely intertwine with instinct, in humans the archetypes have become a separate register of mental phenomena.[22]
Humans experience five main types of instinct, wrote Jung: hunger, sexuality, activity, reflection, and creativity. These instincts, listed in order of increasing abstraction, elicit and constrain human behavior, but also leave room for freedom in their implementation and especially in their interplay. Even a simple hungry feeling can lead to many different responses, including metaphorical sublimation.[22][23] These instincts could be compared to the "drives" discussed in psychoanalysis and other domains of psychology.[24] Several readers of Jung have observed that in his treatment of the collective unconscious, Jung suggests an unusual mixture of primordial, "lower" forces, and spiritual, "higher" forces.[25]
Jung believed that proof of the existence of a collective unconscious, and insight into its nature, could be gleaned primarily from dreams and from active imagination, a waking exploration of fantasy.[26]
Jung considered that the shadow and the anima and animus differ from the other archetypes "in the fact that their content is more directly related to the individual's personal situation".[27] These archetypes, a special focus of Jung's work, become autonomous personalities within an individual psyche. Jung encouraged direct conscious dialogue of the patients with these personalities within.[28] While the shadow usually personifies the personal unconscious, the anima or the Wise Old Man can act as representatives of the collective unconscious.[29]
Jung suggested that parapsychology, alchemy, and occult religious ideas could contribute understanding of the collective unconscious.[30] Based on his interpretation of synchronicity and extra-sensory perception, Jung argued that psychic activity transcended the brain.[31] In alchemy, Jung found that plain water, or seawater, corresponded to his concept of the collective unconscious.[32]
In humans, the psyche mediates between the primal force of the collective unconscious and the experience of consciousness or dream. Therefore, symbols may require interpretation before they can be understood as archetypes. Jung writes:
We have only to disregard the dependence of dream language on environment and substitute "eagle" for "aeroplane," "dragon" for "automobile" or "train," "snake-bite" for "injection," and so forth, in order to arrive at the more universal and more fundamental language of mythology. This gives us access to the primordial images that underlie all thinking and have a considerable influence even on our scientific ideas.[33]
A single archetype can manifest in many different ways. Regarding the Mother archetype, Jung suggests that not only can it apply to mothers, grandmothers, stepmothers, mothers-in-law, and mothers in mythology, but to various concepts, places, objects, and animals:
Other symbols of the mother in a figurative sense appear in things representing the goal of our longing for redemption, such as Paradise, the Kingdom of God, the Heavenly Jerusalem. Many things arousing devotion or feelings of awe, as for instance the Church, university, city or country, heaven, earth, the woods, the sea or any still waters, matter even, the underworld and the moon, can be mother-symbols. The archetype is often associated with things and places standing for fertility and fruitfulness: the cornucopia, a ploughed field, a garden. It can be attached to a rock, a cave, a tree, a spring, a deep well, or to various vessels such as the baptismal font, or to vessel-shaped flowers like the rose or the lotus. Because of the protection it implies, the magic circle or mandala can be a form of mother archetype. Hollow objects such as ovens or cooking vessels are associated with the mother archetype, and, of course, the uterus, yoni, and anything of a like shape. Added to this list there are many animals, such as the cow, hare, and helpful animals in general.[34]
Care must be taken, however, to determine the meaning of a symbol through further investigation; one cannot simply decode a dream by assuming these meanings are constant. Archetypal explanations work best when an already-known mythological narrative can clearly help to explain the confusing experience of an individual.[35]
In his clinical psychiatry practice, Jung identified mythological elements which seemed to recur in the minds of his patients—above and beyond the usual complexes which could be explained in terms of their personal lives.[36]The most obvious patterns applied to the patient's parents: "Nobody knows better than the psychotherapist that the mythologizing of the parents is often pursued far into adulthood and is given up only with the greatest resistance."[37]
Jung cited recurring themes as evidence of the existence of psychic elements shared among all humans. For example: "The snake-motif was certainly not an individual acquisition of the dreamer, for snake-dreams are very common even among city-dwellers who have probably never seen a real snake."[38][35] Still better evidence, he felt, came when patients described complex images and narratives with obscure mythological parallels.[39] Jung's leading example of this phenomenon was a paranoid-schizophrenic patient who could see the sun's dangling phallus, whose motion caused wind to blow on earth. Jung found a direct analogue of this idea in the "Mithras Liturgy", from the Greek Magical Papyri of Ancient Egypt—only just translated into German—which also discussed a phallic tube, hanging from the sun, and causing wind to blow on earth. He concluded that the patient's vision and the ancient Liturgy arose from the same source in the collective unconscious.[40]
Going beyond the individual mind, Jung believed that "the whole of mythology could be taken as a sort of projection of the collective unconscious". Therefore, psychologists could learn about the collective unconscious by studying religions and spiritual practices of all cultures, as well as belief systems like astrology.[41]
Popperian critic Ray Scott Percival disputes some of Jung's examples and argues that his strongest claims are not falsifiable. Percival takes special issue with Jung's claim that major scientific discoveries emanate from the collective unconscious and not from unpredictable or innovative work done by scientists. Percival charges Jung with excessive determinism and writes: "He could not countenance the possibility that people sometimes create ideas that cannot be predicted, even in principle." Regarding the claim that all humans exhibit certain patterns of mind, Percival argues that these common patterns could be explained by common environments (i.e. by shared nurture, not nature). Because all people have families, encounter plants and animals, and experience night and day, it should come as no surprise that they develop basic mental structures around these phenomena.[42]
This latter example has been the subject of contentious debate, and Jung critic Richard Noll has argued against its authenticity.[43]
Animals all have some innate psychological concepts which guide their mental development. The concept of imprinting in ethology is one well-studied example, dealing most famously with the Mother constructs of newborn animals. The many predetermined scripts for animal behavior are called innate releasing mechanisms.[44]
Proponents of the collective unconscious theory in neuroscience suggest that mental commonalities in humans originate especially from the subcortical area of the brain: specifically, the thalamus and limbic system. These centrally located structures link the brain to the rest of the nervous system and are said to control vital processes including emotions and long-term memory.[25]
A more common experimental approach investigates the unique effects of archetypal images. An influential study of this type, by Rosen, Smith, Huston, & Gonzalez in 1991, found that people could better remember symbols paired with words representing their archetypal meaning. Using data from the Archive for Research in Archetypal Symbolism and a jury of evaluators, Rosen et al. developed an "Archetypal Symbol Inventory" listing symbols and one-word connotations. Many of these connotations were obscure to laypeople. For example, a picture of a diamond represented "self"; a square represented "Earth". They found that even when subjects did not consciously associate the word with the symbol, they were better able to remember the pairing of the symbol with its chosen word.[45] Brown & Hannigan replicated this result in 2013, and expanded the study slightly to include tests in English and in Spanish of people who spoke both languages.[46]
Maloney (1999) asked people questions about their feelings toward variations on images featuring the same archetype: some positive, some negative, and some non-anthropomorphic. He found that although the images did not elicit significantly different responses to questions about whether they were "interesting" or "pleasant", they did provoke highly significant differences in response to the statement: "If I were to keep this image with me forever, I would be". Maloney suggested that this question led the respondents to process the archetypal images on a deeper level, which strongly reflected their positive or negative valence.[47]
Ultimately, although Jung referred to the collective unconscious as an empirical concept, based on evidence, its elusive nature does create a barrier to traditional experimental research. June Singer writes:
But the collective unconscious lies beyond the conceptual limitations of individual human consciousness, and thus cannot possibly be encompassed by them. We cannot, therefore, make controlled experiments to prove the existence of the collective unconscious, for the psyche of man, holistically conceived, cannot be brought under laboratory conditions without doing violence to its nature. ... In this respect, psychology may be compared to astronomy, the phenomena of which also cannot be enclosed within a controlled setting. The heavenly bodies must be observed where they exist in the natural universe, under their own conditions, rather than under conditions we might propose to set for them.[48]
Psychotherapy based on analytical psychology would seek to analyze the relationship between a person's individual consciousness and the deeper common structures which underlie them. Personal experiences both activate archetypes in the mind and give them meaning and substance for the individual.[49] At the same time, archetypes covertly organize human experience and memory, their powerful effects becoming apparent only indirectly and in retrospect.[50][51] Understanding the power of the collective unconscious can help an individual to navigate through life.
In the interpretation of analytical psychologist Mary Williams, a patient who understands the impact of the archetype can help to dissociate the underlying symbol from the real person who embodies the symbol for the patient. In this way, the patient no longer uncritically transfers their feelings about the archetype onto people in everyday life, and as a result, can develop healthier and more personal relationships.[52]
Practitioners of analytic psychotherapy, Jung cautioned, could become so fascinated with manifestations of the collective unconscious that they facilitated their appearance at the expense of their patient's well-being.[52] Individuals with schizophrenia, it is said, fully identify with the collective unconscious, lacking a functioning ego to help them deal with actual difficulties of life.[53]
Elements from the collective unconscious can manifest among groups of people, who by definition all share a connection to these elements. Groups of people can become especially receptive to specific symbols due to the historical situation they find themselves in.[54] The common importance of the collective unconscious makes people ripe for political manipulation, especially in the era of mass politics.[55] Jung compared mass movements to mass psychoses, comparable to demonic possession in which people uncritically channel unconscious symbolism through the social dynamic of the mob and the leader.[56]
Although civilization leads people to disavow their links with the mythological world of uncivilized societies, Jung argued that aspects of the primitive unconscious would nevertheless reassert themselves in the form of superstitions, everyday practices, and unquestioned traditions such as the Christmas tree.[57]
Based on empirical inquiry, Jung felt that all humans, regardless of racial and geographic differences, share the same collective pool of instincts and images, though these manifest differently due to the moulding influence of culture.[58]However, above and in addition to the primordial collective unconscious, people within a certain culture may share additional bodies of primal collective ideas.[59]
Jung called the UFO phenomenon a "living myth", a legend in the process of consolidation.[60] Belief in a messianic encounter with UFOs demonstrated the point, Jung argued, that even if a rationalistic modern ideology repressed the images of the collective unconscious, its fundamental aspects would inevitably resurface. The circular shape of the flying saucer confirms its symbolic connection to repressed but psychically necessary ideas of divinity.[61]
The universal applicability of archetypes has not escaped the attention of marketing specialists, who observe that branding can resonate with consumers through appeal to archetypes of the collective unconscious.
Jung contrasted the collective unconscious with the personal unconscious, the unique aspects of an individual study which Jung says constitute the focus of Sigmund Freud and Alfred Adler.[62] Psychotherapy patients, it seemed to Jung, often described fantasies and dreams which repeated elements from ancient mythology. These elements appeared even in patients who were probably not exposed to the original story. For example, mythology offers many examples of the "dual mother" narrative, according to which a child has a biological mother and a divine mother. Therefore, argues Jung, Freudian psychoanalysis would neglect important sources for unconscious ideas, in the case of a patient with neurosis around a dual-mother image.[63]
This divergence over the nature of the unconscious has been cited as a key aspect of Jung's famous split from Sigmund Freud and his school of psychoanalysis.[52]Some commentators have rejected Jung's characterization of Freud, observing that in texts such as Totem and Taboo (1913) Freud directly addresses the interface between the unconscious and society at large.[42]Jung himself said that Freud had discovered a collective archetype, the Oedipus complex, but that it "was the first archetype Freud discovered, the first and only one".[64]
Probably none of my empirical concepts has been met with so much misunderstanding as the idea of the collective unconscious.
Jung also distinguished the collective unconscious from collective consciousness, between which lay "an almost unbridgeable gulf over which the subject finds himself suspended". According to Jung, collective consciousness (meaning something along the lines of consensus reality) offered only generalizations, simplistic ideas, and the fashionable ideologies of the age. This tension between collective unconscious and collective consciousness corresponds roughly to the "everlasting cosmic tug of war between good and evil" and has worsened in the time of the mass man.[66][67]
Organized religion, exemplified by the Catholic Church, lies more with the collective consciousness; but through its all-encompassing dogma it channels and molds the images which inevitably pass from the collective unconscious into the minds of people.[68][69](Conversely, religious critics including Martin Buber accused Jung of wrongly placing psychology above transcendental factors in explaining human experience.)[70]
In a minimalist interpretation of what would then appear as "Jung's much misunderstood idea of the collective unconscious", his idea was "simply that certain structures and predispositions of the unconscious are common to all of us ... [on] an inherited, species-specific, genetic basis".[71]Thus "one could as easily speak of the 'collective arm' – meaning the basic pattern of bones and muscles which all human arms share in common."[72]
Others point out, however, that "there does seem to be a basic ambiguity in Jung's various descriptions of the Collective Unconscious. Sometimes he seems to regard the predisposition to experience certain images as understandable in terms of some genetic model"[73] – as with the collective arm. However, Jung was "also at pains to stress the numinous quality of these experiences, and there can be no doubt that he was attracted to the idea that the archetypes afford evidence of some communion with some divine or world mind", and perhaps "his popularity as a thinker derives precisely from this"[74] – the maximal interpretation.
Marie-Louise von Franz accepted that "it is naturally very tempting to identify the hypothesis of the collective unconscious historically and regressively with the ancient idea of an all-extensive world-soul."[75]New Age writer Sherry Healy goes further, claiming that Jung himself "dared to suggest that the human mind could link to ideas and motivations called the collective unconscious ... a body of unconscious energy that lives forever."[76]This is the idea of monopsychism.
Other researchers, including Alexander Fowler, have proposed taking the minimal interpretation of Jung's work and incorporating it into the theory of biological evolution (i.e., sexual selection), or using it to unify disparate theoretical orientations within psychology, such as neuropsychology, evolutionary psychology and analytical psychology, since Jung's postulation of an evidenced mechanism for the genetic transmission of information through sexual selection provides a single explanation for questions left unanswered by those of varied theoretical orientations.[77][78]
|
https://en.wikipedia.org/wiki/Collective_unconscious
|
Hyperconnectivity is a term invented by Canadian social scientists Anabel Quan-Haase and Barry Wellman, arising from their studies of person-to-person and person-to-machine communication in networked organizations and networked societies.[1]The term refers to the use of multiple means of communication, such as email, instant messaging, telephone, face-to-face contact and Web 2.0 information services.[2]
Hyperconnectivity is also a trend in computer networking in which all things that can or should communicate through the network will communicate through the network. This encompasses person-to-person, person-to-machine and machine-to-machine communication. The trend is fueling large increases in bandwidth demand and changes in communications because of the complexity, diversity and integration of new applications and devices using the network.
The communications equipment maker Nortel has recognized hyperconnectivity as a pervasive and growing market condition that is at the core of their business strategy. CEO Mike Zafirovski and other executives have been quoted extensively in the press referring to the hyperconnected era.
Apart from network-connected devices such as landline telephones, mobile phones and computers, newly connectable devices range from mobile devices such as PDAs, MP3 players, GPS receivers and cameras through to an ever wider collection of machines including cars,[3][4] refrigerators[5][6] and coffee makers,[7] all equipped with embedded wireline or wireless[8] networking capabilities.[9]IP-enabling every device runs up against a fundamental limitation of IP version 4, its constrained address space; IPv6 is the enabling technology that supports the massive expansion of addresses this requires.
There are other, independent, uses of the term:
Some examples to support the existence of this accelerating trend to hyperconnectivity include the following facts and assertions:
|
https://en.wikipedia.org/wiki/Hyperconnectivity
|
Media intelligence uses data mining and data science to analyze public, social and editorial media content. It refers to marketing systems that synthesize billions of online conversations into relevant information. This allows organizations to measure and manage content performance, understand trends, and drive communications and business strategy.
Media intelligence can include software as a service using big data terminology.[1]This includes questions about messaging efficiency, share of voice, audience geographical distribution, message amplification, influencer strategy, journalist outreach, creative resonance, and competitor performance in all these areas.
Media intelligence differs from business intelligence in that it uses and analyzes data outside company firewalls. Examples of such data are user-generated content on social media sites, blogs, comment fields, and wikis. It may also include other public data sources like press releases, news, blogs, legal filings, reviews and job postings.
Media intelligence may also include competitive intelligence, wherein information gathered from publicly available sources such as social media, press releases, and news announcements is used to better understand the strategies and tactics deployed by competing businesses.[2]
Media intelligence is enhanced by means of emerging technologies like ambient intelligence, machine learning, semantic tagging, natural language processing, sentiment analysis and machine translation.
Different media intelligence platforms use different technologies for monitoring, curating content, engaging with content, data analysis and measurement of communications and marketing campaign success. These technology providers may obtain content by scraping it directly from websites or by connecting to the APIs provided by social media or other content platforms, which are created for third-party developers to build their own applications and services that access data. Technology companies may also get data from a data reseller.
Some social media monitoring and analytics companies use calls to data providers each time an end-user develops a query. Others archive and index social media posts to provide end users with on-demand access to historical data and enable methodologies and technologies leveraging network and relational data. Additional monitoring companies use crawlers and spidering technology to find keyword references, known as semantic analysis or natural language processing. Basic implementation involves curating data from social media on a large scale and analyzing the results to make sense out of it.[3]
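The keyword-reference step that crawler-based monitoring performs can be illustrated with a minimal sketch. The document collection, keyword list, and function name below are hypothetical; a production system would add language detection, deduplication, and entity disambiguation on top of this.

```python
import re
from collections import Counter

def find_keyword_mentions(documents, keywords):
    """Scan a collection of documents for keyword references, the basic
    operation behind crawler- and spider-based media monitoring."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, keywords)) + r")\b",
        re.IGNORECASE,
    )
    mentions = Counter()
    for doc_id, text in documents.items():
        for match in pattern.findall(text):
            mentions[match.lower()] += 1
    return mentions

docs = {
    "post1": "The new phone's battery life is great, but the battery drains fast on video.",
    "post2": "Screen quality is superb; battery is average.",
}
print(find_keyword_mentions(docs, ["battery", "screen"]))
# Counter({'battery': 3, 'screen': 1})
```

Archiving these counts per day and per source is what turns raw mentions into the on-demand historical data described above.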
|
https://en.wikipedia.org/wiki/Media_intelligence
|
Sentiment analysis (also known as opinion mining or emotion AI) is the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. Sentiment analysis is widely applied to voice-of-the-customer materials such as reviews and survey responses, online and social media, and healthcare materials for applications that range from marketing to customer service to clinical medicine. With the rise of deep language models such as RoBERTa, more difficult data domains can also be analyzed, e.g., news texts, where authors typically express their opinion/sentiment less explicitly.[1]
A basic task in sentiment analysis is classifying the polarity of a given text at the document, sentence, or feature/aspect level—whether the expressed opinion in a document, a sentence or an entity feature/aspect is positive, negative, or neutral. Advanced, "beyond polarity" sentiment classification looks, for instance, at emotional states such as enjoyment, anger, disgust, sadness, fear, and surprise.[2]
Precursors to sentiment analysis include the General Inquirer,[3]which provided hints toward quantifying patterns in text and, separately, psychological research that examined a person's psychological state based on analysis of their verbal behavior.[4]
Subsequently, the method described in a patent by Volcani and Fogel[5]looked specifically at sentiment and identified individual words and phrases in text with respect to different emotional scales. A current system based on their work, called EffectCheck, presents synonyms that can be used to increase or decrease the level of evoked emotion in each scale.
Many other subsequent efforts were less sophisticated, using a mere polar view of sentiment, from positive to negative, such as work by Turney[6]and Pang,[7]who applied different methods for detecting the polarity of product reviews and movie reviews respectively. This work is at the document level. One can also classify a document's polarity on a multi-way scale, which was attempted by Pang[8]and Snyder[9]among others: Pang and Lee[8]expanded the basic task of classifying a movie review as either positive or negative to predict star ratings on either a 3- or a 4-star scale, while Snyder[9]performed an in-depth analysis of restaurant reviews, predicting ratings for various aspects of the given restaurant, such as the food and atmosphere (on a five-star scale).
First steps to bringing together various approaches—learning, lexical, knowledge-based, etc.—were taken in the 2004 AAAI Spring Symposium, where linguists, computer scientists, and other interested researchers first aligned interests and proposed shared tasks and benchmark data sets for the systematic computational research on affect, appeal, subjectivity, and sentiment in text.[10]
Even though in most statistical classification methods the neutral class is ignored, under the assumption that neutral texts lie near the boundary of the binary classifier, several researchers suggest that, as in every polarity problem, three categories must be identified. Moreover, it can be proven that specific classifiers such as Max Entropy[11]and SVMs[12]can benefit from the introduction of a neutral class and improve the overall accuracy of the classification. There are in principle two ways of operating with a neutral class: either the algorithm first identifies the neutral language, filters it out, and then assesses the rest in terms of positive and negative sentiments, or it builds a three-way classification in one step.[13]The second approach often involves estimating a probability distribution over all categories (e.g. naive Bayes classifiers as implemented by the NLTK). Whether and how to use a neutral class depends on the nature of the data: if the data is clearly clustered into neutral, negative and positive language, it makes sense to filter the neutral language out and focus on the polarity between positive and negative sentiments. If, in contrast, the data are mostly neutral with small deviations towards positive and negative affect, this strategy would make it harder to clearly distinguish between the two poles.
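The two ways of operating with a neutral class can be sketched with a toy lexicon scorer. The word lists and scoring rule below are illustrative stand-ins for a trained classifier, not any published lexicon.

```python
# Hypothetical miniature valence lexicon standing in for a trained model.
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def score(text):
    """Crude polarity score: positive word count minus negative word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def one_step(text):
    """Strategy 2: a single three-way decision over all categories."""
    s = score(text)
    if s > 0:
        return "positive"
    if s < 0:
        return "negative"
    return "neutral"

def filter_then_classify(sentences):
    """Strategy 1: first identify and filter out neutral language, then
    assess the remaining polar sentences as positive or negative overall."""
    polar = [s for s in sentences if score(s) != 0]
    total = sum(score(s) for s in polar)
    if not polar or total == 0:
        return "neutral"
    return "positive" if total > 0 else "negative"

review = ["I love the acting.", "Great soundtrack.", "Terrible ending."]
print(filter_then_classify(review))   # positive
```

Which strategy works better depends, as noted above, on whether the data is clearly clustered into the three classes or mostly neutral with small polar deviations.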
A different method for determining sentiment is the use of a scaling system whereby words commonly associated with having a negative, neutral, or positive sentiment are given an associated number on a −10 to +10 scale (most negative up to most positive) or simply from 0 to a positive upper limit such as +4. This makes it possible to adjust the sentiment of a given term relative to its environment (usually on the level of the sentence). When a piece of unstructured text is analyzed using natural language processing, each concept in the specified environment is given a score based on the way sentiment words relate to the concept and its associated score.[14][15]This allows movement to a more sophisticated understanding of sentiment, because it is now possible to adjust the sentiment value of a concept relative to modifications that may surround it. Words, for example, that intensify, relax or negate the sentiment expressed by the concept can affect its score. Alternatively, texts can be given a positive and negative sentiment strength score if the goal is to determine the sentiment in a text rather than the overall polarity and strength of the text.[16]
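A scaled lexicon with intensifying and negating modifiers might be sketched as follows; the lexicon entries, multiplier values, and function name are hypothetical choices for illustration.

```python
# Hypothetical miniature valence lexicon on the -10..+10 scale.
LEXICON = {"awful": -8, "bad": -5, "okay": 1, "good": 5, "excellent": 9}
INTENSIFIERS = {"very": 1.5, "extremely": 2.0}   # multiply the next valence
NEGATORS = {"not", "never"}                      # flip the next valence

def sentence_score(sentence):
    """Sum word valences, letting modifiers that intensify or negate
    adjust the score of the sentiment word they precede."""
    words = sentence.lower().replace(".", "").split()
    total, factor, negated = 0.0, 1.0, False
    for w in words:
        if w in INTENSIFIERS:
            factor *= INTENSIFIERS[w]
        elif w in NEGATORS:
            negated = not negated
        elif w in LEXICON:
            v = LEXICON[w] * factor
            total += -v if negated else v
            factor, negated = 1.0, False   # modifiers apply once
    return total

print(sentence_score("The food was very good"))    # 7.5
print(sentence_score("The service was not good"))  # -5.0
```

The two outputs show the point made above: "very" raises the valence of "good" from 5 to 7.5, while "not" flips it to −5.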
There are various other types of sentiment analysis, such as aspect-based sentiment analysis, grading sentiment analysis (positive, negative, neutral), multilingual sentiment analysis and detection of emotions.
This task is commonly defined as classifying a given text (usually a sentence) into one of two classes: objective or subjective.[17]This problem can sometimes be more difficult than polarity classification.[18]The subjectivity of words and phrases may depend on their context and an objective document may contain subjective sentences (e.g., a news article quoting people's opinions). Moreover, as mentioned by Su,[19]results are largely dependent on the definition of subjectivity used when annotating texts. However, Pang[20]showed that removing objective sentences from a document before classifying its polarity helped improve performance.
Subjective and objective identification is an emerging subtask of sentiment analysis that uses syntactic and semantic features, together with machine learning, to identify whether a sentence or document contains facts or opinions. Awareness of the distinction between factual and opinionated text is not recent; it was possibly first presented by Carbonell at Yale University in 1979.
The term objective refers to text carrying factual information.[21]
The term subjective describes text containing non-factual information in various forms, such as personal opinions, judgments, and predictions, also known as 'private states'.[22]The phrase 'We Americans', for instance, reflects such a private state. Moreover, the target entity commented on by an opinion can take several forms, from a tangible product to an intangible topic matter, as stated in Liu (2010).[23]Furthermore, Liu (2010) observed three types of attitudes: (1) positive opinions, (2) neutral opinions, and (3) negative opinions.[23]
This analysis is a classification problem.[24]
Collections of word or phrase indicators are defined for each class in order to locate desirable patterns in unannotated text. For subjective expressions, a different word list has been created. Lists of subjective indicators in words or phrases have been developed by multiple researchers in the linguistics and natural language processing fields, as stated in Riloff et al. (2003).[25]A dictionary of extraction rules has to be created for measuring given expressions. Over the years, subjectivity detection has progressed from hand-curated features to automated feature learning. Automated learning methods can be further divided into supervised and unsupervised machine learning. Pattern extraction with machine learning on annotated and unannotated text has been explored extensively by academic researchers.
However, researchers recognized several challenges in developing fixed sets of rules for expressions. Much of the challenge in rule development stems from the nature of textual information. Six challenges have been recognized by several researchers: (1) metaphorical expressions, (2) discrepancies in writing, (3) context sensitivity, (4) words with few usages, (5) time sensitivity, and (6) ever-growing volume.
Previously, research mainly focused on document-level classification. However, document-level classification suffers from lower accuracy, as an article may contain diverse types of expression. Research evidence suggests that news articles, which would be expected to be dominated by objective expression, can in fact consist of over 40% subjective expression.[21]
To overcome those challenges, researchers conclude that classifier efficacy depends on the precision of the pattern learner, and that learners fed large volumes of annotated training data outperform those trained on less comprehensive subjective features. However, one of the main obstacles to executing this type of work is generating a big dataset of annotated sentences manually. The manual annotation method has been less favored than automatic learning for three reasons:
All these mentioned reasons can impact the efficiency and effectiveness of subjective and objective classification. Accordingly, two bootstrapping methods were designed to learn linguistic patterns from unannotated text data. Both methods start with a handful of seed words and unannotated textual data.
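A single iteration of seed-word bootstrapping might look like the following sketch. The seed list, stopword list, and promotion threshold are hypothetical, and this is a simplification of Riloff-style pattern bootstrapping rather than either published method; the noisy candidates it promotes (e.g. product nouns) illustrate why the precision of the pattern learner matters.

```python
from collections import Counter

SEEDS = {"terrible", "wonderful", "hate", "love"}       # seed subjective words
STOPWORDS = {"the", "a", "is", "and", "i", "this"}      # ignored function words

def bootstrap_once(sentences, seeds, min_count=2):
    """Label sentences containing a seed word as subjective, then promote
    words that co-occur with seeds often enough to new indicators."""
    candidates = Counter()
    for s in sentences:
        words = set(s.lower().replace("!", "").replace(".", "").split())
        if words & seeds:                       # sentence treated as subjective
            for w in words - seeds - STOPWORDS:
                candidates[w] += 1
    # expand the indicator set with frequently co-occurring words
    return seeds | {w for w, c in candidates.items() if c >= min_count}

corpus = [
    "I love this amazing camera!",
    "The camera is amazing and wonderful.",
    "The manual lists technical specifications.",
]
expanded = bootstrap_once(corpus, SEEDS)
print("amazing" in expanded)   # True: promoted by co-occurring with seeds
```

Repeating this loop with the expanded set lets the learner grow its pattern dictionary from unannotated text, at the cost of gradually admitting noise.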
Overall, these algorithms highlight the need for automatic pattern recognition and extraction in subjective and objective tasks.
Subjective and objective classifiers can enhance several applications of natural language processing. One of the classifier's primary benefits is that it has popularized the practice of data-driven decision-making in various industries. According to Liu, the applications of subjective and objective identification have been implemented in business, advertising, sports, and social science.[30]
Feature/aspect-based sentiment analysis refers to determining the opinions or sentiments expressed on different features or aspects of entities, e.g., of a cell phone, a digital camera, or a bank.[35]A feature or aspect is an attribute or component of an entity, e.g., the screen of a cell phone, the service for a restaurant, or the picture quality of a camera. The advantage of feature-based sentiment analysis is the possibility to capture nuances about objects of interest. Different features can generate different sentiment responses; for example, a hotel can have a convenient location, but mediocre food.[36]This problem involves several sub-problems, e.g., identifying relevant entities, extracting their features/aspects, and determining whether an opinion expressed on each feature/aspect is positive, negative or neutral.[37]The automatic identification of features can be performed with syntactic methods, with topic modeling,[38][39]or with deep learning.[40][41]More detailed discussions about this level of sentiment analysis can be found in Liu's work.[23]
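A minimal sketch of the aspect-level task attaches the nearest opinion word to each aspect term. The aspect and opinion lexicons, the proximity heuristic, and the function name are hypothetical simplifications; real systems use syntactic parsing, topic modeling, or deep learning as noted above.

```python
# Hypothetical aspect and opinion lexicons for illustration.
ASPECTS = {"location", "food", "service", "battery"}
OPINIONS = {"convenient": "positive", "mediocre": "negative",
            "great": "positive", "slow": "negative"}

def aspect_sentiments(sentence):
    """Assign each aspect term the polarity of the nearest opinion word,
    a crude stand-in for grammatical dependency analysis."""
    words = sentence.lower().replace(",", "").replace(".", "").split()
    result = {}
    for i, w in enumerate(words):
        if w in ASPECTS:
            nearest = min((j for j, u in enumerate(words) if u in OPINIONS),
                          key=lambda j: abs(j - i), default=None)
            if nearest is not None:
                result[w] = OPINIONS[words[nearest]]
    return result

print(aspect_sentiments("The hotel has a convenient location, but mediocre food."))
# {'location': 'positive', 'food': 'negative'}
```

The hotel example from the text comes out as intended: one entity, two aspects, opposite polarities.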
Emotions and sentiments are subjective in nature. The degree of emotion/sentiment expressed in a given text at the document, sentence, or feature/aspect level—the intensity expressed in the opinion of a document, a sentence or an entity—differs on a case-to-case basis.[42]However, predicting only the emotion and sentiment does not always convey complete information. The degree or level of emotions and sentiments often plays a crucial role in understanding the exact feeling within a single class (e.g., 'good' versus 'awesome'). Some methods leverage a stacked ensemble method[43]for predicting intensity for emotion and sentiment by combining the outputs obtained and using deep learning models based on convolutional neural networks,[44]long short-term memory networks and gated recurrent units.[45]
Existing approaches to sentiment analysis can be grouped into three main categories: knowledge-based techniques, statistical methods, and hybrid approaches.[46]Knowledge-based techniques classify text by affect categories based on the presence of unambiguous affect words such as happy, sad, afraid, and bored.[47]Some knowledge bases not only list obvious affect words, but also assign arbitrary words a probable "affinity" to particular emotions.[48]Statistical methods leverage elements from machine learning such as latent semantic analysis, support vector machines, "bag of words", "Pointwise Mutual Information" for Semantic Orientation,[6]semantic space models or word embedding models,[49]and deep learning. More sophisticated methods try to detect the holder of a sentiment (i.e., the person who maintains that affective state) and the target (i.e., the entity about which the affect is felt).[50]To mine the opinion in context and get the feature about which the speaker has opined, the grammatical relationships of words are used. Grammatical dependency relations are obtained by deep parsing of the text.[51]Hybrid approaches leverage both machine learning and elements from knowledge representation such as ontologies and semantic networks in order to detect semantics that are expressed in a subtle manner, e.g., through the analysis of concepts that do not explicitly convey relevant information, but which are implicitly linked to other concepts that do so.[52]
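The pointwise-mutual-information approach mentioned above (Turney's semantic orientation) scores a word by how much more it co-occurs with a positive anchor like "excellent" than with a negative anchor like "poor". The co-occurrence counts below are invented toy data; the smoothing constant and corpus size are likewise hypothetical.

```python
import math

# Hypothetical co-occurrence counts from a toy corpus of N text windows.
N = 10_000
hits = {"sluggish": 120, "responsive": 150, "excellent": 400, "poor": 300}
cooc = {("sluggish", "excellent"): 2, ("sluggish", "poor"): 30,
        ("responsive", "excellent"): 25, ("responsive", "poor"): 3}

def pmi(a, b):
    """Pointwise mutual information: log2 p(a,b) / (p(a) p(b))."""
    p_ab = cooc.get((a, b), 0.5) / N      # 0.5 smooths zero counts
    return math.log2(p_ab / ((hits[a] / N) * (hits[b] / N)))

def semantic_orientation(word):
    """SO(word) = PMI(word, 'excellent') - PMI(word, 'poor')."""
    return pmi(word, "excellent") - pmi(word, "poor")

print(semantic_orientation("responsive") > 0)   # True: leans positive
print(semantic_orientation("sluggish") < 0)     # True: leans negative
```

With counts harvested from a large corpus or search-engine hits, the same two-line formula yields a continuous positive-to-negative score for arbitrary phrases.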
Open source software tools, as well as a range of free and paid sentiment analysis tools, deploy machine learning, statistics, and natural language processing techniques to automate sentiment analysis on large collections of texts, including web pages, online news, internet discussion groups, online reviews, web blogs, and social media.[53]Knowledge-based systems, on the other hand, make use of publicly available resources to extract the semantic and affective information associated with natural language concepts. The system can help perform affective commonsense reasoning.[54]Sentiment analysis can also be performed on visual content, i.e., images and videos (see Multimodal sentiment analysis). One of the first approaches in this direction is SentiBank,[55]utilizing an adjective–noun pair representation of visual content. In addition, the vast majority of sentiment classification approaches rely on the bag-of-words model, which disregards context, grammar and even word order. Approaches that analyze sentiment based on how words compose the meaning of longer phrases have shown better results,[56]but they incur an additional annotation overhead.
A human analysis component is required in sentiment analysis, as automated systems are not able to analyze the historical tendencies of the individual commenter or the platform, and comments are often classified incorrectly in their expressed sentiment. Automation impacts approximately 23% of comments that are correctly classified by humans.[57]However, humans often disagree, and it is argued that inter-human agreement provides an upper bound that automated sentiment classifiers can eventually reach.[58]
The accuracy of a sentiment analysis system is, in principle, how well it agrees with human judgments. This is usually measured by variant measures based on precision and recall over the two target categories of negative and positive texts. However, according to research, human raters typically only agree about 80%[59]of the time (see Inter-rater reliability). Thus, a program that achieves 70% accuracy in classifying sentiment is doing nearly as well as humans, even though such accuracy may not sound impressive. If a program were "right" 100% of the time, humans would still disagree with it about 20% of the time, since they disagree that much about any answer.[citation needed]
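For concreteness, precision and recall over one of the two target categories can be computed from a confusion matrix; the counts below are invented for illustration, and the F1 combination is a standard summary rather than something the text prescribes.

```python
def precision_recall(tp, fp, fn):
    """Precision = tp/(tp+fp); recall = tp/(tp+fn), for one target class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical confusion counts for the "positive" class:
# 320 true positives, 80 false positives, 60 false negatives.
p, r = precision_recall(tp=320, fp=80, fn=60)
f1 = 2 * p * r / (p + r)
print(round(p, 2), round(r, 2), round(f1, 2))   # 0.8 0.84 0.82
```

Scores like these are what get compared against the roughly 80% inter-rater agreement ceiling discussed above.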
On the other hand, computer systems will make very different errors than human assessors, and thus the figures are not entirely comparable. For instance, a computer system will have trouble with negations, exaggerations, jokes, or sarcasm, which typically are easy to handle for a human reader: some errors a computer system makes will seem overly naive to a human. In general, the utility for practical commercial tasks of sentiment analysis as it is defined in academic research has been called into question, mostly since the simple one-dimensional model of sentiment from negative to positive yields rather little actionable information for a client worrying about the effect of public discourse on e.g. brand or corporate reputation.[60][61][62]
To better fit market needs, evaluation of sentiment analysis has moved to more task-based measures, formulated together with representatives from PR agencies and market research professionals. The focus in, e.g., the RepLab evaluation data set is less on the content of the text under consideration and more on the effect of the text in question on brand reputation.[63][64][65]
Because evaluation of sentiment analysis is becoming more and more task based, each implementation needs a separate training model to get a more accurate representation of sentiment for a given data set.
The rise of social media such as blogs and social networks has fueled interest in sentiment analysis. With the proliferation of reviews, ratings, recommendations and other forms of online expression, online opinion has turned into a kind of virtual currency for businesses looking to market their products, identify new opportunities and manage their reputations. As businesses look to automate the process of filtering out the noise, understanding the conversations, identifying the relevant content and actioning it appropriately, many are now looking to the field of sentiment analysis.[66]Further complicating the matter is the rise of anonymous social media platforms such as 4chan and Reddit.[67]If web 2.0 was all about democratizing publishing, then the next stage of the web may well be based on democratizing data mining of all the content that is getting published.[68]
One step towards this aim is accomplished in research. Several research teams in universities around the world currently focus on understanding the dynamics of sentiment in e-communities through sentiment analysis.[69]
The problem is that most sentiment analysis algorithms use simple terms to express sentiment about a product or service. However, cultural factors, linguistic nuances, and differing contexts make it extremely difficult to turn a string of written text into a simple pro or con sentiment.[66]The fact that humans often disagree on the sentiment of text illustrates how big a task it is for computers to get this right. The shorter the string of text, the harder it becomes.
Even though short text strings might be a problem, sentiment analysis within microblogging has shown that Twitter can be seen as a valid online indicator of political sentiment. Tweets' political sentiment demonstrates close correspondence to parties' and politicians' political positions, indicating that the content of Twitter messages plausibly reflects the offline political landscape.[70]Furthermore, sentiment analysis on Twitter has also been shown to capture the public mood behind human reproduction cycles globally,[71]as well as other problems of public-health relevance such as adverse drug reactions.[72]
While sentiment analysis has been popular for domains where authors express their opinion rather explicitly ("the movie is awesome"), such as social media and product reviews, only recently robust methods were devised for other domains where sentiment is strongly implicit or indirect. For example, in news articles - mostly due to the expected journalistic objectivity - journalists often describe actions or events rather than directly stating the polarity of a piece of information. Earlier approaches using dictionaries or shallow machine learning features were unable to catch the "meaning between the lines", but recently researchers have proposed a deep learning based approach and dataset that is able to analyze sentiment in news articles.[1]
Scholars have utilized sentiment analysis to analyse construction health and safety tweets (Twitter is now called X). The research revealed a positive correlation between favorites and retweets in terms of sentiment valence. Others have examined the impact of YouTube on the dissemination of construction health and safety knowledge, investigating through semantic analysis how emotions influence users' viewing and commenting behaviors. In another study, positive sentiment accounted for an overwhelming 85% of knowledge sharing about construction safety and health via Instagram.[73]
For a recommender system, sentiment analysis has been proven to be a valuable technique. A recommender system aims to predict the preference for an item of a target user. Mainstream recommender systems work on explicit data sets. For example, collaborative filtering works on the rating matrix, and content-based filtering works on the meta-data of the items.
In many social networking services or e-commerce websites, users can provide text reviews, comments or feedback on items. These user-generated texts provide a rich source of users' sentiments and opinions about numerous products and items. Potentially, for an item, such text can reveal both the related features/aspects of the item and the users' sentiments on each feature.[74]The item's features/aspects described in the text play the same role as the meta-data in content-based filtering, but the former are more valuable for the recommender system. Since these features are broadly mentioned by users in their reviews, they can be seen as the most crucial features that can significantly influence the user's experience with the item, while the meta-data of the item (usually provided by the producers instead of consumers) may ignore features that concern users. For different items with common features, a user may give different sentiments. Also, a feature of the same item may receive different sentiments from different users. Users' sentiments on the features can be regarded as a multi-dimensional rating score, reflecting their preference for the items.
Based on the features/aspects and the sentiments extracted from user-generated text, a hybrid recommender system can be constructed.[75]There are two types of motivation to recommend a candidate item to a user. The first motivation is that the candidate item has numerous common features with the user's preferred items,[76]while the second motivation is that the candidate item receives high sentiment on its features. For a preferred item, it is reasonable to believe that items with the same features will have a similar function or utility, so these items will also likely be preferred by the user. On the other hand, for a feature shared by two candidate items, other users may give positive sentiment to one of them while giving negative sentiment to the other. Clearly, the highly evaluated item should be recommended to the user. Based on these two motivations, a combined ranking score of similarity and sentiment rating can be constructed for each candidate item.[75]
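One plausible way to combine the two motivations into a single ranking score is a weighted blend of feature overlap (motivation one) and aggregate feature sentiment (motivation two). The function name, mixing weight, and data below are hypothetical, not a published formula.

```python
def combined_rank_score(preferred_features, sentiment_by_feature, alpha=0.5):
    """Blend feature overlap with the user's preferred items and the
    mean sentiment the candidate's features receive from other users.
    alpha is a hypothetical mixing weight in [0, 1]."""
    features = set(sentiment_by_feature)
    overlap = len(features & preferred_features) / max(len(features), 1)
    # sentiments assumed normalized to [-1, 1]; rescale their mean to [0, 1]
    mean_sent = sum(sentiment_by_feature.values()) / max(len(features), 1)
    sentiment = (mean_sent + 1) / 2
    return alpha * overlap + (1 - alpha) * sentiment

# Candidate camera: shares "zoom" and "battery" with the user's preferred
# items; reviewers rate its zoom highly but its battery poorly.
score = combined_rank_score(
    preferred_features={"zoom", "battery"},
    sentiment_by_feature={"zoom": 0.8, "battery": -0.2, "weight": 0.6},
)
print(round(score, 2))   # 0.68
```

Ranking all candidate items by such a score realizes the idea that an item should be recommended both for matching the user's preferred features and for being well regarded on them.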
Apart from the difficulty of the sentiment analysis itself, applying sentiment analysis to reviews or feedback also faces the challenge of spam and biased reviews. One direction of work focuses on evaluating the helpfulness of each review.[77]A poorly written review or piece of feedback is hardly helpful for a recommender system. Besides, a review can be designed to hinder sales of a target product, and thus be harmful to the recommender system even if it is well written.
Researchers have also found that long and short forms of user-generated text should be treated differently. An interesting result is that short-form reviews are sometimes more helpful than long-form ones,[78] because it is easier to filter out the noise in a short text. For long-form text, growing length does not always bring a proportionate increase in the number of features or sentiments expressed.
Lamba & Madhusudhan[79] introduce a new way to cater to the information needs of today's library users by repackaging the results of sentiment analysis of social media platforms like Twitter and providing them as a consolidated time-based service in different formats. They further propose a new way of conducting marketing in libraries using social media mining and sentiment analysis.
Issues such as privacy, consent, and bias are crucial since sentiment analysis regularly analyzes personal data without explicit user consent. The potential for misinterpretation and misuse of sentiment data can significantly impact societal norms. Furthermore, the development of ethical frameworks, as seen in projects like SEWA, where Ethical and Industrial Valorisation Advisory Boards are established, is essential for addressing these challenges. These boards help ensure that sentiment analysis technologies are used responsibly, especially in applications involving the recognition of human emotions and behaviors. Such frameworks are vital for guiding the responsible use of sentiment analysis tools, ensuring they promote equity and respect user autonomy, and effectively address both routine and complex ethical issues.[80]
|
https://en.wikipedia.org/wiki/Sentiment_analysis
|
Social cloud computing, also called peer-to-peer social cloud computing, is an area of computer science that generalizes cloud computing to include the sharing, bartering and renting of computing resources across peers whose owners and operators are verified through a social network or reputation system.[1][2] It expands cloud computing past the confines of formal commercial data centers operated by cloud providers to include anyone interested in participating in the cloud services sharing economy. This in turn leads to more options and greater economies of scale, while bearing the additional advantage of hosting data and computing services closer to the edge, where they may be needed most.[3][4]
Peer-to-peer (P2P) computing and networking to enable decentralized cloud computing has been an area of research for some time.[5] Social cloud computing intersects peer-to-peer cloud computing with social computing to verify peer and peer-owner reputation, thus providing security and quality-of-service assurances to users. On-demand computing environments may be constructed and altered statically or dynamically across peers on the Internet, based on their available resources and verified reputation, to provide such assurances.
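Constructing such an on-demand environment amounts to selecting peers whose verified reputation clears a threshold. A minimal sketch follows; the names (`Peer`, `select_peers`) and the greedy highest-reputation-first policy are assumptions for illustration, and a real system would also weigh latency, trust decay, and social ties.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Peer:
    peer_id: str
    reputation: float   # aggregated from a social network / reputation system, in [0, 1]
    free_cores: int     # compute resources the peer currently offers

def select_peers(peers: List[Peer], cores_needed: int,
                 min_reputation: float = 0.7) -> Optional[List[Peer]]:
    """Greedily assemble a compute pool from peers whose verified
    reputation meets a threshold, preferring the most reputable first.
    Returns None if the eligible peers cannot cover the demand."""
    eligible = sorted((p for p in peers if p.reputation >= min_reputation),
                      key=lambda p: p.reputation, reverse=True)
    pool, got = [], 0
    for p in eligible:
        if got >= cores_needed:
            break
        pool.append(p)
        got += p.free_cores
    return pool if got >= cores_needed else None
```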
Social cloud computing has been highlighted as a potential benefit to large-scale computing, video gaming, and media streaming.[6] Its tenets have been most famously employed in the Berkeley Open Infrastructure for Network Computing (BOINC), making that service the largest computing grid in the world.[7] Another service that uses social cloud computing is Subutai, which allows peer-to-peer sharing of computing resources globally or within a select permissioned network.[8]
Many challenges arise when moving from a traditional cloud infrastructure to a social cloud environment.[9]
In traditional cloud computing, availability on demand is essential for many cloud customers. Social cloud computing cannot guarantee this availability because, in a P2P environment, peers are mobile devices that may enter or leave the P2P network at any time, or PCs whose primary purpose can preempt the P2P computation at any time. The only relatively successful use cases in recent years have been those that do not require real-time results, only computation power for a small subset or module of a larger algorithm or data set.
Unlike large-scale data centers with established brand images, individual peers may be trusted less than a large company like Google or Amazon. Any computation involving sensitive information would need to be properly encrypted, and the overhead of that encryption may reduce the usefulness of P2P offloading. Moreover, when resources are distributed in small pieces to many peers for computation, inherent trust must be placed in each client, regardless of the encryption promised to it.
Similar to availability, the reliability of computations must be consistent and uniform. If computations offloaded to a client are continuously interrupted, some mechanism must be in place to detect this, so that the client knows the computation is tainted or needs to be completely re-run. In P2P social computing, reliably predictable computation power is also difficult to achieve, because the speed of a client's calculation depends on how heavily the owner is using the device. One way to mitigate this is to allow computations only at night, or during specified times when the client's resources will not otherwise be in use.
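One standard countermeasure to interrupted or tainted results, used by BOINC-style volunteer grids, is redundant computation: the same work unit is dispatched to several peers and a result is accepted only when a quorum agree. The sketch below uses a hypothetical function name and exact-match comparison; real validators compare results fuzzily and track per-host error rates.

```python
from collections import Counter
from typing import List, Optional

def validate_by_replication(results: List[int], quorum: int = 2) -> Optional[int]:
    """Accept a replicated work unit's result only if at least `quorum`
    peers returned the same value; otherwise flag it for re-dispatch
    by returning None."""
    if not results:
        return None
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None
```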
|
https://en.wikipedia.org/wiki/Social_cloud_computing
|
Social media optimization (SMO) is the use of online platforms to generate income or publicity to increase the awareness of a brand, event, product or service. Types of social media involved include RSS feeds, blogging sites, social bookmarking sites, social news websites, video sharing websites such as YouTube, and social networking sites such as Facebook, Instagram, TikTok and X (Twitter). SMO is similar to search engine optimization (SEO) in that the goal is to drive web traffic and draw attention to a company or creator. SMO's focal point is on gaining organic links to social media content; in contrast, SEO's core is about reaching the top of the search engine hierarchy.[1] In general, social media optimization refers to optimizing a website and its content to encourage more users to use and share links to the website across social media and networking sites.[2]
SMO is used to strategically create online content, ranging from well-written text to eye-catching digital photos or video clips, that encourages and entices people to engage with a website. Users share this content, via its weblink, with social media contacts and friends. Common examples of social media engagement are "liking and commenting on posts, retweeting, embedding, sharing, and promoting content".[3] Social media optimization is also an effective way of implementing online reputation management (ORM), meaning that if someone posts bad reviews of a business, an SMO strategy can ensure that the negative feedback is not the first link to come up in a list of search engine results.[4]
In the 2010s, with social media sites overtaking TV as a source of news for young people, news organizations became increasingly reliant on social media platforms for generating web traffic. Publishers such as The Economist employ large social media teams to optimize their online posts and maximize traffic,[5] while other major publishers now use advanced artificial intelligence (AI) technology to generate higher volumes of web traffic.[6]
Social media optimization is an increasingly important factor in search engine optimization, the process of designing a website so that it ranks as highly as possible on search engines. Search engines increasingly utilize the recommendations of users of social networks such as Reddit, Facebook, Tumblr, Twitter, YouTube, LinkedIn, Pinterest and Instagram to rank pages in the search engine result pages.[7] The implication is that when a webpage is shared or "liked" by a user on a social network, it counts as a "vote" for that webpage's quality, and search engines can use such votes to rank websites in search results accordingly. Furthermore, since it is more difficult to tip the scales or influence the search engines in this way, search engines are putting more stock into social search.[7] This, coupled with increasingly personalized search based on interests and location, has significantly increased the importance of a social media presence in search engine optimization. Due to personalized search results, location-based social media presences on websites such as Yelp, Google Places, Foursquare, and Yahoo! Local have become increasingly important. While social media optimization is related to search engine marketing, it differs in several ways. Primarily, SMO focuses on driving web traffic from sources other than search engines, though improved search engine ranking is also a benefit of successful social media optimization. Further, SMO is helpful for targeting particular geographic regions in order to reach potential customers. This helps in lead generation (finding new customers) and contributes to high conversion rates (i.e., converting previously uninterested individuals into people who are interested in a brand or organization).
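The "votes" idea above can be made concrete with a toy ranking function that folds social-share counts into a page's relevance score. This is purely hypothetical: the weights, the logarithmic damping, and the function name are assumptions for illustration, not any search engine's actual formula.

```python
import math

def social_adjusted_rank(base_relevance: float, shares: int, likes: int) -> float:
    """Illustrative only: treat shares and likes as quality 'votes' and
    fold them into a page's base relevance score. Log-damping keeps a
    viral page from overwhelming the underlying relevance signal."""
    social_signal = math.log1p(shares) + 0.5 * math.log1p(likes)
    return base_relevance + 0.1 * social_signal
```

With no social activity the score is just the base relevance; each additional share or like raises the score, but with diminishing returns.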
Social media optimization is in many ways connected to the technique of viral marketing or "viral seeding", where word of mouth is created through the use of networking in social bookmarking, video and photo sharing websites. An effective SMO campaign can harness the power of viral marketing; for example, 80% of activity on Pinterest is generated through "repinning."[citation needed] Furthermore, by following social trends and utilizing alternative social networks, websites can retain existing followers while also attracting new ones. This allows businesses to build an online following and presence, all linking back to the company's website for increased traffic. For example, with an effective social bookmarking campaign, not only can website traffic be increased, but a site's rankings can also be increased. In a similar way, engagement with blogs creates a similar result by sharing content through the use of RSS in the blogosphere. Social media optimization is considered an integral part of an online reputation management (ORM) or search engine reputation management (SERM) strategy for organizations or individuals who care about their online presence.[8] SMO is one of six key influencers that affect the Social Commerce Construct (SCC). Online activities such as consumers' evaluations of and advice on products and services constitute part of what creates a Social Commerce Construct.[citation needed]
Social media optimization is not limited to marketing and brand building. Increasingly, smart businesses are integrating social media participation as part of their knowledge management strategy (i.e., product/service development, recruiting, employee engagement and turnover, brand building, customer satisfaction and relations, business development and more). Additionally, social media optimization can be implemented to foster a community around the associated site, allowing for a healthy business-to-consumer (B2C) relationship.[9]
According to technologist Danny Sullivan, the term "social media optimization" was first used and described by marketer Rohit Bhargava[10][11] on his marketing blog in August 2006. In the same post, Bhargava established the five important rules of social media optimization. Bhargava believed that by following his rules, anyone could influence the levels of traffic and engagement on their site, increase popularity, and ensure that it ranks highly in search engine results. An additional 11 SMO rules have since been added to the list by other marketing contributors.
The 16 rules of SMO, according to one source, are as follows:[12]
Bhargava's initial five rules were more specifically designed to SMO, while the list is now much broader and addresses everything that can be done across different social media platforms. According to author and CEO of TopRank Online Marketing, Lee Odden, a Social Media Strategy is also necessary to ensure optimization. This is a similar concept to Bhargava's list of rules for SMO.
The Social Media Strategy may consider:[13]
According to Lon Safko and David K. Brake in The Social Media Bible, it is also important to act like a publisher by maintaining an effective organizational strategy, to have an original concept and unique "edge" that differentiates one's approach from competitors, and to experiment with new ideas if things do not work the first time.[4] If a business is blog-based, an effective method of SMO is using widgets that allow users to share content to their personal social media platforms. This will ultimately reach a wider target audience and drive more traffic to the original post. Blog widgets and plug-ins for post-sharing are most commonly linked to Facebook, LinkedIn and x.com. They occasionally also link to social media platforms such as Tumblr and Pinterest. Many sharing widgets also include user counters which indicate how many times the content has been liked and shared across different social media pages. This can influence whether or not new users will engage with the post, and also gives businesses an idea of what kinds of posts are most successful at engaging audiences. By using relevant and trending keywords in titles and throughout blog posts, a business can also increase search engine optimization and the chances of its content being read and shared by a large audience.[13] The root of effective SMO is the content that is being posted, so professional content creation tools can be very beneficial. These can include editing programs such as Photoshop, GIMP, Final Cut Pro, and Dreamweaver. Many websites also offer customization options such as different layouts to personalize a page and create a point of difference.[4]
With social media sites overtaking TV as a source of news for young people, news organizations have become increasingly reliant on social media platforms for generating traffic. A report by the Reuters Institute for the Study of Journalism described how a 'second wave of disruption' had hit news organizations,[14] with publishers such as The Economist having to employ large social media teams to optimize their posts and maximize traffic.[5] Within the publishing industry, even professional fields are utilizing SMO: because doctors want to maximize exposure to their research findings, SMO has also found a place in the medical field.[15]
Today, 3.8 billion people globally use some form of social media.[citation needed] People frequently obtain health-related information from online social media platforms like Twitter and Facebook. Healthcare professionals and scientists can communicate with medical counterparts to discuss research and findings through social media platforms. These platforms provide researchers with data sets and surveillance that help detect patterns and behavior in preventing, informing about, and studying global disease, such as COVID-19. Additionally, researchers utilize SMO to reach and recruit hard-to-reach patients, narrowing to the specified demographics that yield the necessary data in a given study.[citation needed]
Social media gaming is online gaming activity performed through social media sites with friends, as well as online gaming activity that promotes social media interaction. Examples of the former include FarmVille, Clash of Clans, Clash Royale, FrontierVille, and Mafia Wars. In these games a player's social network is exploited to recruit additional players and allies. An example of the latter is Empire Avenue, a virtual stock exchange where players buy and sell shares of each other's social network worth. Nielsen Media Research estimates that, as of June 2010, social networking and playing online games account for about one-third of all online activity by Americans.[16]
Facebook has in recent years become a popular channel for advertising, alongside traditional forms such as television, radio, and print. With over 1 billion active users, 50% of whom log into their accounts every day,[17] it is an important communication platform that businesses can utilize and optimize to promote their brand and drive traffic to their websites. There are three commonly used strategies to increase advertising reach on Facebook:
Improving effectiveness and increasing network size are organic approaches, while buying more reach is a paid approach which does not require any further action.[18] Most businesses will attempt an "organic" approach to gaining a significant following before considering a paid approach. Because Facebook requires a login, it is important that posts are public so that they reach the widest possible audience. Posts that have been heavily shared and interacted with by users are displayed as 'highlighted posts' at the top of newsfeeds. To achieve this status, posts need to be engaging, interesting, or useful. This can be achieved by being spontaneous, asking questions, addressing current events and issues, and optimizing trending hashtags and keywords. The more engagement a post receives, the further it spreads and the more likely it is to feature first in search results.
Another organic approach to Facebook optimization is cross-linking different social platforms. By posting links to websites or social media sites in the profile 'about' section, it is possible to direct traffic and ultimately increase search engine optimization. Another option is to share links to relevant videos and blog posts.[13]Facebook Connect is a functionality that launched in 2008 to allow Facebook users to sign up to different websites, enter competitions, and access exclusive promotions by logging in with their existing Facebook account details. This is beneficial to users as they don't have to create a new login every time they want to sign up to a website, but also beneficial to businesses as Facebook users become more likely to share their content. Often the two are interlinked, where in order to access parts of a website, a user has to like or share certain things on their personal profile or invite a number of friends to like a page. This can lead to greater traffic flow to a website as it reaches a wider audience. Businesses have more opportunities to reach their target markets if they choose a paid approach to SMO. When Facebook users create an account, they are urged to fill out their personal details such as gender, age, location, education, current and previous employers, religious and political views, interests, and personal preferences such as movie and music tastes. Facebook then takes this information and allows advertisers to use it to determine how to best market themselves to users that they know will be interested in their product. This can also be known as micro-targeting. If a user clicks on a link to like a page, it will show up on their profile and newsfeed. This then feeds back into organic social media optimization, as friends of the user will see this and be encouraged to click on the page themselves. Although advertisers are buying mass reach, they are attracting a customer base with a genuine interest in their product. 
Once a customer base has been established through a paid approach, businesses will often run promotions and competitions to attract more organic followers.[12]
The number of businesses that use Facebook to advertise is also significant. In 2017, three million businesses advertised on Facebook,[19] making it the world's largest platform for social media advertising. Also notable is the amount of money leading businesses spend on Facebook advertising alone: Procter & Gamble spends $60 million every year on Facebook advertising.[20] Other advertisers on Facebook include Microsoft, with a yearly spend of £35 million, and Amazon, Nestlé and American Express, each with yearly expenditures above £25 million.
Furthermore, the number of small businesses advertising on Facebook is relevant. This number has grown rapidly in recent years and demonstrates how important social media advertising has become: currently 70% of the UK's small businesses use Facebook advertising.[21] This is a substantial number of advertisers. Almost half of the world's small businesses use some form of social media marketing product. This demonstrates the impact that social media has had on the current digital marketing era.
The engagement rate (ER) represents the activity of users on a specific profile on Facebook, Instagram, TikTok or any other social medium. A common way to calculate it is the following:
ER = \frac{\overline{interactions}}{followers} \times 100\%
In the above formula, followers is the total number of followers (friends, subscribers, etc.), and interactions stands for the number of interactions, such as likes, comments, personal messages, and shares. The latter is averaged over a certain period of time, which should normally be short enough that the variance in follower count during the period is negligible.
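The formula translates directly into code. A minimal sketch follows; the function name is hypothetical, and `interactions` is assumed to hold one per-post total (likes + comments + shares + messages) for the chosen period.

```python
from typing import List

def engagement_rate(interactions: List[int], followers: int) -> float:
    """Engagement rate as defined above: mean interactions per post
    over a short period, divided by follower count, as a percentage."""
    if followers <= 0 or not interactions:
        raise ValueError("need a positive follower count and at least one post")
    mean_interactions = sum(interactions) / len(interactions)
    return mean_interactions / followers * 100.0
```

For a profile with 1,000 followers whose two posts in the period drew 50 and 150 interactions, the ER is 10%.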
|
https://en.wikipedia.org/wiki/Social_media_optimization
|
Altruism is the concern for the well-being of others, independently of personal benefit or reciprocity.
The word altruism was popularised (and possibly coined) by the French philosopher Auguste Comte in French, as altruisme, as an antonym of egoism.[1] He derived it from the Italian altrui, which in turn was derived from Latin alteri, meaning "other people" or "somebody else".[2] Altruism may be considered a synonym of selflessness, the opposite of self-centeredness.
Altruism is an important moral value in many cultures and religions. It can expand beyond care for humans to include other sentient beings and future generations.[3]
Altruism, as observed in populations of organisms, is when an individual performs an action at a cost to itself (in terms of e.g. pleasure and quality of life, time, probability of survival or reproduction) that benefits, directly or indirectly, another individual, without the expectation of reciprocity or compensation for that action.[4]
The theory of psychological egoism suggests that no act of sharing, helping, or sacrificing can be "truly" altruistic, as the actor may receive an intrinsic reward in the form of personal gratification. The validity of this argument depends on whether such intrinsic rewards qualify as "benefits".[5][6]
The term altruism can also refer to an ethical doctrine that claims that individuals are morally obliged to benefit others. Used in this sense, it is usually contrasted with egoism, which claims individuals are morally obligated to serve themselves first.[7]
Effective altruism is the use of evidence and reason to determine the most effective ways to benefit others.[8]
The concept of altruism has a history in philosophical and ethical thought. The term was coined in the 19th century by the founding sociologist and philosopher of science Auguste Comte, and has become a major topic for psychologists (especially evolutionary psychology researchers), evolutionary biologists, and ethologists. While ideas about altruism from one field can affect the others, the different methods and focuses of these fields lead to different perspectives on altruism. In simple terms, altruism is caring about the welfare of other people and acting to help them, above oneself.
Cross-cultural perspectives on altruism show that how we view and experience helping others depends heavily on where we come from. In individualistic cultures, like many Western countries, acts of altruism often bring personal joy and satisfaction, as they align with values that emphasize individual achievement and self-fulfillment. On the other hand, in collectivist cultures, common in many Eastern societies, altruism is often seen as a responsibility to the group rather than a personal choice. This difference means that people in collectivist cultures might not feel the same personal happiness from helping others, as the act is more about fulfilling social obligations. Ultimately, these variations highlight how deeply cultural norms shape the way we approach and experience altruism.[9]
Marcel Mauss's essay The Gift contains a passage called "Note on alms". This note describes the evolution of the notion of alms (and by extension of altruism) from the notion of sacrifice. In it, he writes:
Alms are the fruits of a moral notion of the gift and of fortune on the one hand, and of a notion of sacrifice, on the other. Generosity is an obligation, because Nemesis avenges the poor and the gods for the superabundance of happiness and wealth of certain people who should rid themselves of it. This is the ancient morality of the gift, which has become a principle of justice. The gods and the spirits accept that the share of wealth and happiness that has been offered to them and had been hitherto destroyed in useless sacrifices should serve the poor and children.
In ethology (the scientific study of animal behaviour), and more generally in the study of social evolution, altruism refers to behavior by an individual that increases the fitness of another individual while decreasing the fitness of the actor.[10] In evolutionary psychology this term may be applied to a wide range of human behaviors such as charity, emergency aid, help to coalition partners, tipping, courtship gifts, production of public goods, and environmentalism.[11]
The need for an explanation of altruistic behavior that is compatible with evolutionary origins has driven the development of new theories. Two related strands of research on altruism have emerged from traditional evolutionary analyses and evolutionary game theory: mathematical models and analyses of behavioral strategies.
Some of the proposed mechanisms are:
Such explanations do not imply that humans consciously calculate how to increase their inclusive fitness when performing altruistic acts. Instead, evolution has shaped psychological mechanisms, such as emotions, that promote certain altruistic behaviors.[11]
The benefits for the altruist may be increased, and the costs reduced, by being more altruistic towards certain groups. Research has found that people are more altruistic to kin than to non-kin, to friends than to strangers, to the attractive than to the unattractive, to non-competitors than to competitors, and to in-group members than to out-group members.[11]
The study of altruism was the initial impetus behind George R. Price's development of the Price equation, a mathematical equation used to study genetic evolution. An interesting example of altruism is found in the cellular slime moulds, such as Dictyostelium mucoroides. These protists live as individual amoebae until starved, at which point they aggregate and form a multicellular fruiting body in which some cells sacrifice themselves to promote the survival of other cells in the fruiting body.[22]
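The Price equation referenced above, in its standard form, partitions the change in the population mean of a trait:

\bar{w}\,\Delta\bar{z} = \operatorname{Cov}(w_i, z_i) + \operatorname{E}(w_i\,\Delta z_i)

Here z_i is the value of the trait (e.g., the degree of altruism) in individual i, w_i is that individual's fitness, and bars denote population averages. The covariance term captures selection: if altruists bear a fitness cost, Cov(w_i, z_i) is negative at the individual level, which is precisely why explaining the persistence of altruism (e.g., through group-structured populations) required this framework.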
Selective investment theory proposes that close social bonds, and associated emotional, cognitive, and neurohormonal mechanisms, evolved to facilitate long-term, high-cost altruism between those closely depending on one another for survival and reproductive success.[23]
Such cooperative behaviors have sometimes been seen as arguments for left-wing politics, for example by the Russian zoologist and anarchist Peter Kropotkin in his 1902 book Mutual Aid: A Factor of Evolution and by the moral philosopher Peter Singer in his book A Darwinian Left.
Jorge Moll and Jordan Grafman, neuroscientists at the National Institutes of Health and LABS-D'Or Hospital Network, provided the first evidence for the neural bases of altruistic giving in normal healthy volunteers, using functional magnetic resonance imaging. In their research,[24] they showed that both pure monetary rewards and charitable donations activated the mesolimbic reward pathway, a primitive part of the brain that usually responds to food and sex. However, when volunteers generously placed the interests of others before their own by making charitable donations, another brain circuit was also selectively activated: the subgenual cortex/septal region. These structures are related to social attachment and bonding in other species. The experiment suggested that altruism is not a higher moral faculty overpowering innate selfish desires, but a fundamental, ingrained, and enjoyable trait of the brain.[25] One brain region, the subgenual anterior cingulate cortex/basal forebrain, contributes to learning altruistic behavior, especially in people with a propensity for empathy.[26][27][28]
Bill Harbaugh, a University of Oregon economist, reached the same conclusions as Moll and Grafman about charitable giving in an fMRI study conducted with his psychologist colleague Ulrich Mayr, although they were able to divide the study group into "egoists" and "altruists". One of their discoveries was that, though rarely, even some of those classified as "egoists" gave more than expected when doing so would help others, leading to the conclusion that other factors, such as a person's environment and values, also play a role in charity.[27]
A recent meta-analysis of fMRI studies conducted by Shawn Rhoads, Jo Cutler, and Abigail Marsh analyzed the results of prior studies of generosity in which participants could freely choose to give or not give resources to someone else.[29] The results confirmed that altruism is supported by distinct mechanisms from giving motivated by reciprocity or by fairness. The study also confirmed that the right ventral striatum is recruited during altruistic giving, as are the ventromedial prefrontal cortex, bilateral anterior cingulate cortex, and bilateral anterior insula, regions previously implicated in empathy.
Abigail Marsh has conducted studies of real-world altruists that have also identified an important role for the amygdala in human altruism. In real-world altruists, such as people who have donated kidneys to strangers, the amygdala is larger than in typical adults. Altruists' amygdalas are also more responsive than those of typical adults to the sight of others' distress, which is thought to reflect an empathic response to distress.[30][31] This structure may also be involved in altruistic choices due to its role in encoding the value of outcomes for others.[32] This is consistent with research in non-human animals, which has identified neurons within the amygdala that specifically encode the value of others' outcomes, activity in which appears to drive altruistic choices in monkeys.[33][34]
The International Encyclopedia of the Social Sciences defines psychological altruism as "a motivational state to increase another's welfare". Psychological altruism is contrasted with psychological egoism, which refers to the motivation to increase one's own welfare.[35] In keeping with this, research on real-world altruists, including altruistic kidney donors, bone marrow donors, humanitarian aid workers, and heroic rescuers, finds that these altruists are primarily distinguished from other adults by unselfish traits and decision-making patterns. This suggests that human altruism reflects a genuinely high valuation of others' outcomes.[36]
There has been some debate on whether humans are capable of psychological altruism.[37] Some definitions specify a self-sacrificial nature to altruism and a lack of external rewards for altruistic behaviors.[38] However, because altruism ultimately benefits the self in many cases, the selflessness of altruistic acts is difficult to prove. The social exchange theory postulates that altruism only exists when the benefits outweigh the costs to the self.[39]
Daniel Batson, a psychologist, examined this question and argued against the social exchange theory. He identified four significant motives: to ultimately benefit the self (egoism), to ultimately benefit the other person (altruism), to benefit a group (collectivism), or to uphold a moral principle (principlism). Altruism that ultimately serves selfish gains is thus differentiated from selfless altruism, but the general conclusion has been that empathy-induced altruism can be genuinely selfless.[40] The empathy-altruism hypothesis states that psychological altruism exists and is evoked by the empathic desire to help someone suffering. Feelings of empathic concern are contrasted with personal distress, which compels people to reduce their unpleasant emotions and increase their positive ones by helping someone in need. Empathy is thus not selfless, on this view, since altruism works either as a way to avoid those negative, unpleasant feelings and have positive, pleasant feelings when triggered by others' need for help, or as a way to gain social reward or avoid social punishment by helping. People with empathic concern help others in distress even when exposure to the situation could easily be avoided, whereas those lacking empathic concern avoid helping unless exposure to another's suffering is difficult or impossible to avoid.[35]
Helping behavior is seen in humans from about two years of age, when toddlers can understand subtle emotional cues.[41]
In psychological research on altruism, studies often observe altruism as demonstrated through prosocial behaviors such as helping, comforting, sharing, cooperation, philanthropy, and community service.[38] People are most likely to help if they recognize that a person is in need and feel personal responsibility for reducing that person's distress. The number of bystanders witnessing pain or suffering affects the likelihood of helping (the bystander effect): larger numbers of bystanders decrease individual feelings of responsibility.[35][42] However, a witness with a high level of empathic concern is likely to assume personal responsibility regardless of the number of bystanders.[35]
Many studies have observed the effects of volunteerism (as a form of altruism) on happiness and health and have consistently found that those who exhibit volunteerism also have better current and future health and well-being.[43][44] In a study of older adults, those who volunteered had higher life satisfaction and will to live, and less depression, anxiety, and somatization.[45] Volunteerism and helping behavior have been shown to improve not only mental health but also physical health and longevity, attributable to the activity and social integration they encourage.[43][46][47] One study examined the physical health of mothers who volunteered over 30 years and found that 52% of those who did not belong to a volunteer organization experienced a major illness, while only 36% of those who did volunteer experienced one.[48] A study of adults aged 55 and older found that during the four-year study period, people who volunteered for two or more organizations had a 63% lower likelihood of dying. After controlling for prior health status, it was determined that volunteerism accounted for a 44% reduction in mortality.[49] Merely being aware of kindness in oneself and others is also associated with greater well-being. A study that asked participants to count each act of kindness they performed for one week found that doing so significantly enhanced their subjective happiness; the study suggests that happier people are kinder and more grateful, kinder people are happier and more grateful, and more grateful people are happier and kinder.[50]
While research supports the idea that altruistic acts bring about happiness, the relationship has also been found to work in the opposite direction: happier people are also kinder. The relationship between altruistic behavior and happiness is thus bidirectional. Studies found that generosity increases linearly from sad to happy affective states.[51]
Feeling over-taxed by the needs of others has negative effects on health and happiness.[47]For example, one study on volunteerism found that feeling overwhelmed by others' demands had an even stronger negative effect on mental health than helping had a positive one (although positive effects were still significant).[52]
Older adults have been found to exhibit higher levels of altruism.[53]
Both genetics and environment have been implicated in influencing pro-social or altruistic behavior.[54] Candidate genes include OXTR (polymorphisms in the oxytocin receptor),[55][56][57] CD38, COMT, DRD4, DRD5, IGF2, AVPR1A,[58] and GABRB2.[59] It is theorized that some of these genes influence altruistic behavior by modulating levels of neurotransmitters such as serotonin and dopamine.
According toChristopher Boehm, altruistic behaviour evolved as a way of surviving within a group.[60]
"Sociologists have long been concerned with how to build the good society".[61] The structure of our societies and how individuals come to exhibit charitable, philanthropic, and other pro-social, altruistic actions for the common good is a commonly researched topic within the field. The American Sociological Association (ASA) acknowledges public sociology, saying, "The intrinsic scientific, policy, and public relevance of this field of investigation in helping to construct 'good societies' is unquestionable".[61] This type of sociology seeks contributions that aid popular and theoretical understandings of what motivates altruism and how it is organized, and promotes an altruistic focus in order to benefit the world and the people it studies.
How altruism is framed, organized, carried out, and what motivates it at the group level is an area of focus that sociologists investigate in order to contribute back to the groups they study and "build the good society". The motivation of altruism is also a focus of study; for example, one study links the occurrence of moral outrage to altruistic compensation of victims.[62] Studies show that generosity in laboratory and online experiments is contagious: people imitate the generosity they observe in others.[63]
Most, if not all, of the world's religions promote altruism as a very important moral value. Buddhism, Christianity, Hinduism, Islam, Jainism, Judaism, and Sikhism, among others, place particular emphasis on altruistic morality.
Altruism figures prominently in Buddhism. Love and compassion are components of all forms of Buddhism, and are focused on all beings equally: love is the wish that all beings be happy, and compassion is the wish that all beings be free from suffering. "Many illnesses can be cured by the one medicine of love and compassion. These qualities are the ultimate source of human happiness, and the need for them lies at the very core of our being" (Dalai Lama).[64][65]
The notion of altruism is modified in such a world-view, since the belief is that such a practice promotes the practitioner's own happiness: "The more we care for the happiness of others, the greater our own sense of well-being becomes" (Dalai Lama).[64]
In Buddhism, a person's actions cause karma, which consists of consequences proportional to the moral implications of their actions. Deeds considered to be bad are punished, while those considered to be good are rewarded.[66]
The fundamental principles of Jainism revolve around altruism, not only for other humans but for all sentient beings. Jainism preaches ahimsa – to live and let live, not harming sentient beings, i.e. uncompromising reverence for all life. The first Tirthankara, Rishabhdev, introduced the concept of altruism for all living beings, from extending knowledge and experience to others to donation, giving oneself up for others, non-violence, and compassion for all living things.[citation needed]
The principle of nonviolence seeks to minimize karmas which limit the capabilities of the soul. Jainism views every soul as worthy of respect because it has the potential to become Siddha (God in Jainism). Because all living beings possess a soul, great care and awareness is essential in one's actions. Jainism emphasizes the equality of all life, advocating harmlessness towards all, whether the creatures are great or small. This policy extends even to microscopic organisms. Jainism acknowledges that every person has different capabilities and capacities to practice and therefore accepts different levels of compliance for ascetics and householders.[67]
Thomas Aquinas interprets the biblical phrase "You should love your neighbour as yourself"[68] as meaning that love for ourselves is the exemplar of love for others.[69] Considering that "the love with which a man loves himself is the form and root of friendship", he quotes Aristotle that "the origin of friendly relations with others lies in our relations to ourselves".[70] Aquinas concluded that though we are not bound to love others more than ourselves, we naturally seek the common good, the good of the whole, more than any private good, the good of a part. However, he thought we should love God more than ourselves and our neighbours, and more than our bodily life, since the ultimate purpose of loving our neighbour is to share in eternal beatitude: a more desirable thing than bodily well-being. In coining the word "altruism", as stated above, Comte was probably opposing this Thomistic doctrine, which is present in some theological schools within Catholicism. The aim and focus of Christian life is a life that glorifies God while obeying Christ's command to treat others equally, caring for them and understanding that eternity in heaven is the purpose of Jesus' sacrifice at Calvary.
Many biblical authors draw a strong connection between love of others and love of God. 1 John 4 states that for one to love God one must love his fellow man, and that hatred of one's fellow man is the same as hatred of God. Thomas Jay Oord has argued in several books that altruism is but one possible form of love. An altruistic action is not always a loving action. Oord defines altruism as acting for the other's good, and he agrees with feminists who note that sometimes love requires acting for one's own good when the other's demands undermine overall well-being.
German philosopher Max Scheler distinguishes two ways in which the strong can help the weak. One way is a sincere expression of Christian love, "motivated by a powerful feeling of security, strength, and inner salvation, of the invincible fullness of one's own life and existence".[71]: 88–89 Another way is merely "one of the many modern substitutes for love, ... nothing but the urge to turn away from oneself and to lose oneself in other people's business".[71]: 95–96 At its worst, Scheler says, "love for the small, the poor, the weak, and the oppressed is really disguised hatred, repressed envy, an impulse to detract, etc., directed against the opposite phenomena: wealth, strength, power, largesse."[71]: 96–97
In the Arabic language, "'iythar" (إيثار) means "preferring others to oneself".[72]
On the topic of donating blood to non-Muslims (a controversial topic within the faith), the Shia religious professor Fadhil al-Milani has provided theological evidence that makes it positively justifiable. In fact, he considers it a form of religious sacrifice and ithar (altruism).[73]
For Sufis, 'iythar means devotion to others through complete forgetfulness of one's own concerns, where concern for others is deemed a demand made by God on the human body, considered to be the property of God alone. The importance of 'iythar (also known as īthār) lies in sacrifice for the sake of the greater good; Islam considers those practicing īthār as abiding by the highest degree of nobility.[74] This is similar to the notion of chivalry. A constant concern for God results in a careful attitude towards people, animals, and other things in this world.[75]
Judaism defines altruism as the desired goal of creation.[citation needed] Rabbi Abraham Isaac Kook stated that love is the most important attribute in humanity.[76] Love is defined as bestowal, or giving, which is the intention of altruism. This can be altruism towards humanity that leads to altruism towards the creator or God. Kabbalah defines God as the force of giving in existence. Rabbi Moshe Chaim Luzzatto focused on the "purpose of creation" and how the will of God was to bring creation into perfection and adhesion with this force of giving.[77]
Modern Kabbalah developed by Rabbi Yehuda Ashlag, in his writings about the future generation, focuses on how society could achieve an altruistic social framework.[78]: 120–130 Ashlag proposed that such a framework is the purpose of creation, and everything that happens is to raise humanity to the level of altruism, love for one another. Ashlag focused on society and its relation to divinity.[78]: 175–180
Altruism is essential to the Sikh religion. The central faith in Sikhism is that the greatest deed anyone can do is to imbibe and live the godly qualities such as love, affection, sacrifice, patience, harmony, and truthfulness. Sevā, or selfless service to the community for its own sake, is an important concept in Sikhism.[79]
The fifth Guru, Guru Arjun, sacrificed his life to uphold "22 carats of pure truth, the greatest gift to humanity", according to the Guru Granth Sahib. The ninth Guru, Tegh Bahadur, sacrificed his life to protect weak and defenseless people against atrocity.
In the late seventeenth century, Guru Gobind Singh (the tenth Guru in Sikhism) was at war with the Mughal rulers to protect the people of different faiths when a fellow Sikh, Bhai Kanhaiya, attended to the troops of the enemy.[80] He gave water to both friends and foes who were wounded on the battlefield. Some of the enemy began to fight again, and some Sikh warriors were annoyed with Bhai Kanhaiya for helping their enemy. Sikh soldiers brought Bhai Kanhaiya before Guru Gobind Singh and complained of his action, which they considered counterproductive to their struggle on the battlefield. "What were you doing, and why?" asked the Guru. "I was giving water to the wounded because I saw your face in all of them", replied Bhai Kanhaiya. The Guru responded, "Then you should also give them ointment to heal their wounds. You were practicing what you were coached in the house of the Guru."
Under the tutelage of the Guru, Bhai Kanhaiya subsequently founded a volunteer corps for altruism, which is still engaged today in doing good to others and in training new recruits for this service.[81]
In Hinduism, selflessness (Atmatyag), love (Prema), kindness (Daya), and forgiveness (Kshama) are considered the highest acts of humanity or "Manushyattva". Giving alms to beggars or poor people is considered a divine act or "Punya", and Hindus believe it will free their souls from guilt or "Paapa" and will lead them to heaven or "Swarga" in the afterlife. Altruism is also a central theme of various Hindu myths and religious poems and songs. Mass donation of clothes to poor people (Vastraseva), blood donation camps, and mass food donation (Annaseva) for poor people are common in various Hindu religious ceremonies.[citation needed]
The Bhagavad Gita supports the doctrine of karma yoga (achieving oneness with God through action) and Nishkama Karma, or action without expectation or desire for personal gain, which can be said to encompass altruism. Altruistic acts are generally celebrated and well received in Hindu literature and are central to Hindu morality.[82]
There is a wide range of philosophical views on humans' obligations or motivations to act altruistically. Proponents of ethical altruism maintain that individuals are morally obligated to act altruistically.[83] The opposing view is ethical egoism, which maintains that moral agents should always act in their own self-interest. Both ethical altruism and ethical egoism contrast with utilitarianism, which maintains that each agent should act so as to maximize overall well-being, counting their own interests and everyone else's equally.
A related concept in descriptive ethics is psychological egoism, the thesis that humans always act in their own self-interest and that true altruism is impossible. Rational egoism is the view that rationality consists in acting in one's self-interest (without specifying how this affects one's moral obligations).
In his book I am You: The Metaphysical Foundations for Global Ethics, Daniel Kolak argues that open individualism provides a rational basis for altruism.[84]: 552 According to Kolak, egoism is incoherent because the concept of a future self is incoherent, similar to the idea of anattā in Buddhist philosophy, and everyone is in reality the same being. Derek Parfit made similar arguments in the book Reasons and Persons, using thought experiments such as the teletransportation paradox to illustrate the philosophical problems with personal identity.[85]
Effective altruism is a philosophy and social movement that uses evidence and reasoning to determine the most effective ways to benefit others.[86] Effective altruism encourages individuals to consider all causes and actions and to act in the way that brings about the greatest positive impact, based upon their values.[87] It is this broad, evidence-based, and cause-neutral approach that distinguishes effective altruism from traditional altruism or charity.[88] Effective altruism is part of the larger movement towards evidence-based practices.
While a substantial proportion of effective altruists have focused on the nonprofit sector, the philosophy of effective altruism applies more broadly to prioritizing the scientific projects, companies, and policy initiatives which can be estimated to save lives, help people, or otherwise have the biggest benefit.[89] People associated with the movement include philosopher Peter Singer,[90] Facebook co-founder Dustin Moskovitz,[91] Cari Tuna,[92] Oxford-based researchers William MacAskill[93] and Toby Ord,[94] and professional poker player Liv Boeree.[95]
Pathological altruism is altruism taken to an unhealthy extreme, such that it either harms the altruistic person or the person's well-intentioned actions cause more harm than good.
The term "pathological altruism" was popularised by the book Pathological Altruism.
Examples include depression and burnout seen in healthcare professionals, an unhealthy focus on others to the detriment of one's own needs, animal hoarding, and ineffective philanthropic and social programs that ultimately worsen the situations they are meant to aid.[96]
Extreme altruism, also known as costly altruism, extraordinary altruism, or heroic behaviour (to be distinguished from heroism), refers to selfless acts directed at a stranger which significantly exceed normal altruistic behaviour, often involving risks or great cost to the altruists themselves.[30] Since acts of extreme altruism are often directed towards strangers, many commonly accepted models of simple altruism appear inadequate in explaining this phenomenon.[97]
One of the initial concepts was introduced by Wilson in 1976, which he referred to as "hard-core" altruism.[98] This form is characterised by impulsive actions directed towards others, typically strangers, and lacking incentives for reward. Since then, several papers have mentioned the possibility of such altruism.[99][100]
In the 21st century, progress in the field slowed due to the adoption of ethical guidelines that restrict exposing research participants to costly or risky decisions (see Declaration of Helsinki). Consequently, much research has based its studies on living organ donations and the actions of Carnegie Hero Medal recipients, actions which involve high risk and high cost and occur infrequently.[101] A typical example of extreme altruism would be non-directed kidney donation: a living person donating one of their kidneys to a stranger without any benefit and without knowing the recipient.
However, current research can only be carried out on the small population that meets the requirements of extreme altruism. Most of this research also relies on self-report, which could lead to self-report biases.[102] Due to these limitations, the gap between high-stakes and everyday altruism remains poorly understood.[103]
In 1970, Schwartz hypothesised that extreme altruism is positively related to a person's moral norms and is not influenced by the cost associated with the action.[103]This hypothesis was supported in the same study examining bone marrow donors. Schwartz discovered that individuals with strong personal norms and those who attribute more responsibility to themselves are more inclined to participate in bone marrow donation.[103]Similar findings were observed in a 1986 study by Piliavin and Libby focusing on blood donors.[104]These studies suggest that personal norms lead to the activation of moral norms, leading individuals to feel compelled to help others.[103]
Abigail Marsh has described psychopaths as the "opposite" group of people to extreme altruists[104] and has conducted several studies comparing these two groups. Utilising techniques such as brain imaging and behavioural experiments, Marsh's team observed that kidney donors tend to have larger amygdalae and exhibit better abilities in recognizing fearful expressions compared to psychopathic individuals.[30] Furthermore, an improved ability to recognize fear has been associated with an increase in prosocial behaviours, including greater charitable contribution.[105]
Rand and Epstein explored the behaviours of 51 Carnegie Hero Medal recipients, demonstrating how extreme altruistic behaviours often stem from System 1 of the dual process theory, which leads to rapid and intuitive behaviours.[106] Additionally, a separate study by Carlson et al. indicated that such prosocial behaviours are prevalent in emergencies where immediate action is required.[107]
This discovery has led to ethical debates, particularly in the context of living organ donation, where laws regarding this issue differ by country.[108]As observed in extreme altruists, these decisions are made intuitively, which may reflect insufficient consideration. Critics are concerned about whether this rapid decision encompasses a thorough cost-benefit analysis and question the appropriateness of exposing donors to such risk.[109]
One finding suggests that extreme altruists exhibit lower levels of social discounting than others, meaning that extreme altruists place a higher value on the welfare of strangers than a typical person does.[36][110]
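Social discounting in this literature is commonly modeled as hyperbolic decay: the value a person assigns to a reward for someone at social distance N falls as V/(1 + kN), so a lower discount rate k corresponds to weighting distant strangers more heavily. A minimal sketch of this model follows; the k values and distance are purely illustrative, not figures from the cited studies:

```python
# Hyperbolic social discounting: v = V / (1 + k * N), where N is the social
# distance to the beneficiary and k is the individual's discount rate.
# The k values below are hypothetical, chosen only to illustrate the reported
# contrast between typical adults and extreme altruists (lower k).

def discounted_value(V, k, N):
    """Subjective value of a reward of size V for a person at social distance N."""
    return V / (1 + k * N)

typical_k, altruist_k = 0.05, 0.01  # illustrative discount rates
stranger_distance = 100             # a stranger is socially "far away"

v_typical = discounted_value(75, typical_k, stranger_distance)
v_altruist = discounted_value(75, altruist_k, stranger_distance)
print(v_typical, v_altruist)  # 12.5 vs 37.5: the low-k profile weights the stranger more
```

The shallower the discount curve, the smaller the drop in subjective value as social distance grows, which is the quantitative sense in which extreme altruists "discount less".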
Analysis of 676 Carnegie Hero Award recipients[111] and another study on 243 rescuing acts[112] reveal that a significant proportion of rescuers come from lower socio-economic backgrounds. Johnson attributes this distribution to the high-risk occupations that are more prevalent among lower socioeconomic groups.[111] Another hypothesis, proposed by Lyons, is that individuals from these groups may perceive they have less to lose when engaging in high-risk extreme altruistic behaviours.[112]
Evolutionary theories such as kin selection, reciprocity, vested interest, and punishment either contradict or do not fully explain the concept of extreme altruism.[113] As a result, considerable research has sought a separate explanation for this behaviour.
Research suggests that males are more likely to engage in heroic and risk-taking behaviours due to a preference among females for such traits.[114] These extreme altruistic behaviours could act as an unconscious "signal" showcasing superior power and ability compared to ordinary individuals.[113] When an extreme altruist survives a high-risk situation, they send an "honest signal" of quality.[113] Three qualities hypothesized to be exhibited by extreme altruists, which could be interpreted as "signals", are: (1) traits that are difficult to fake, (2) a willingness to help, and (3) generous behaviours.[113]
The empathy-altruism hypothesis appears to align with the concept of extreme altruism without contradiction. The hypothesis has been supported by further brain-scanning research, which indicates that this group of people demonstrates a higher level of empathic concern. This empathic concern triggers activation in specific brain regions, urging the individual to engage in heroic behaviours.[115]
While most altruistic behaviours offer some form of benefit, extreme altruism may sometimes result from a mistake where the victim does not reciprocate.[113] Considering the impulsive characteristics of extreme altruists, some researchers suggest that these individuals have made a wrong judgement during the cost-benefit analysis.[106] Furthermore, extreme altruism might be a rare variation of altruism that lies toward the end of a normal distribution.[113] In the US, the annual prevalence rate per capita is less than 0.00005%, which shows the rarity of such behaviours.[36]
Digital altruism is the notion that some are willing to freely share information based on the principle of reciprocity and in the belief that, in the end, everyone benefits from sharing information via the Internet.[116]
There are three types of digital altruism: (1) "everyday digital altruism", involving expedience, ease, moral engagement, and conformity; (2) "creative digital altruism", involving creativity, heightened moral engagement, and cooperation; and (3) "co-creative digital altruism" involving creativity, moral engagement, and meta cooperative efforts.[116]
https://en.wikipedia.org/wiki/Altruism
Community Notes, formerly known as Birdwatch, is a feature on X (formerly Twitter) where contributors can add context, such as fact-checks, under a post, image or video. It is a community-driven content moderation program, intended to provide helpful and informative context, based on a crowd-sourced system. Notes are applied to potentially misleading content by a bridging-based algorithm based not on majority rule but on agreement from users on different sides of the political spectrum.
The program launched on Twitter in 2021 and became widespread on X in 2023. Initially shown to U.S. users only, notes were popularized in March 2022 over misinformation in the Russian invasion of Ukraine, followed by COVID-19 misinformation in October. Birdwatch was then rebranded to Community Notes and expanded in November 2022. As of November 2023, it had approximately 133,000 contributors; notes reportedly receive tens of millions of views per day, with the goal of countering propaganda and misinformation. According to investigations and studies, the vast majority of users do not see notes correcting content.[a] In May 2024, a study found that notes on COVID-19 vaccine posts were deemed accurate 97% of the time.[3][4]
Critics have also highlighted how it has spread disinformation, is vulnerable to manipulation, and has been inconsistent in its application of notes, as well as in its efforts at combating misinformation. Some suggest that structurally the system "lacks critical reflection on the potential for content to harm".[5][b] Elon Musk, the owner of X, considers the program a game changer with considerable potential. However, after a post by Musk received a Community Note, he claimed the program had been manipulated by state actors.[6]
In February 2020, Twitter began introducing labels and warning messages intended to limit potentially harmful and misleading content.[7] In August 2020, development of Birdwatch was announced, initially described as a moderation tool. Twitter first launched the Birdwatch program in January 2021, intended as a way to debunk misinformation and propaganda, with a pilot program of 1,000 contributors,[8][9] weeks after the January 6 United States Capitol attack.[10] The aim was to "build Birdwatch in the open, and have it shaped by the Twitter community." In November 2021, Twitter updated the Birdwatch moderation tool to limit the visibility of contributors' identities by creating aliases for their accounts, in an attempt to limit bias towards the authors of notes.[9][11]
Twitter then expanded access to notes made by the Birdwatch contributors in March 2022, giving a randomized set of US users the ability to view notes attached to tweets and rate them,[12] with a pilot of 10,000 contributors.[13] On average, contributors were writing 43 notes a day in 2022 prior to the Russian invasion of Ukraine. This increased to 156 on the day of the invasion, estimated to be a very small portion of the misleading posts on the platform. By March 1, only 359 of 10,000 contributors had proposed notes in 2022, while a Twitter spokeswoman described plans to scale up the program, with the focus on "ensuring that Birdwatch is something people find helpful and can help inform understanding".[14][15]
By September 2022, the program had expanded to 15,000 users.[16] In October 2022, the most commonly published notes were related to COVID-19 misinformation, based on historical usage.[17] In November 2022, at the request of new owner Elon Musk, Birdwatch was rebranded to Community Notes, taking an open-source approach to dealing with misinformation,[18] and expanded to Europe and countries outside of the US.[19][20][21]
Community Notes was then extended to include notes on misleading images in May 2023[22] and in September 2023 further extended to videos, but only for a group of power users referred to as "Top Writers".[23] Twitter subsequently ended the ability to report misleading posts, instead relying exclusively on Community Notes,[24] with contributors proposing over 21,200 notes on the platform.[25]
In October 2023, Elon Musk announced that posts "corrected" by Community Notes would no longer be eligible for ad revenue, in order to "maximize the incentive for accuracy over sensationalism" and to discourage the spread of misinformation and disinformation on the platform. The move was criticised by some users and applauded by others.[26][27] As of November 2023, the program has expanded to over 50 countries, with approximately 133,000 contributors.[28]
In November 2024, Musk said "Community Notes is awesome. Everybody gets checked. Including me."[29] In December 2024, he wrote that Community Notes' "system is completely decentralized and open source, both code and data. Any manipulation would show up like a neon sore thumb!"[30] Musk argued in February 2025 that Community Notes "is increasingly being gamed by governments & legacy media", and that he was taking steps to "fix" it.[30][31] The statement came after his own claims about astronauts and legacy media were contradicted by Community Notes, with Musk describing the Community Note on the astronauts as false; that note vanished within a week.[29] Musk's February statement also claimed that Ukrainian President Volodymyr Zelenskyy "is despised by the people of Ukraine", after Community Notes contradicted claims that Zelenskyy was unpopular among Ukrainians.[30]
In January 2025, Mark Zuckerberg announced that Meta would remove fact-checkers from Facebook, Instagram, and Threads, replacing them with a community-oriented system similar to Community Notes. According to Meta, the feature will initially be launched for U.S. users.[32][33]
The Community Notes algorithm publishes notes based on agreement from contributors who have a history of disagreeing.[21] Rather than relying on majority rule,[35] the program's algorithm prioritizes notes that receive ratings from a "diverse range of perspectives".[28][36] For a note to be published, a contributor must first propose a note under a tweet.[21] The program assigns different values to contributors' ratings, categorising users with similar rating histories as a form of "opinion classification", determined by a rough alignment with the left and right wings of the political spectrum. The bridging-based machine-learning algorithm requires ratings from both sides of the spectrum in order to publish notes, which can have the intended effect of decreasing interaction with such content.[36][37][38]
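The open-sourced Community Notes ranking model is a matrix factorization in which each rating is explained by a note intercept plus a user-factor × note-factor interaction; the interaction term absorbs viewpoint-aligned agreement, so only notes whose intercept remains high (helpfulness that "bridges" viewpoints) are shown. The toy sketch below illustrates only this core idea; the data, hyperparameters, and publication threshold are invented for the example, and X's production system adds many further terms and rules:

```python
import random

# Toy bridging-based ranking: model each rating as
#   rating ≈ note_intercept + user_factor * note_factor
# and publish only notes whose intercept exceeds a threshold, i.e. notes
# whose helpfulness is not explained away by viewpoint alignment.

def fit(ratings, n_users, n_notes, epochs=2000, lr=0.05, reg=0.03, seed=0):
    rng = random.Random(seed)
    f = [rng.uniform(-0.1, 0.1) for _ in range(n_users)]  # latent user factors
    g = [rng.uniform(-0.1, 0.1) for _ in range(n_notes)]  # latent note factors
    b = [0.0] * n_notes                                   # note intercepts
    for _ in range(epochs):  # stochastic gradient descent on squared error
        for u, n, r in ratings:
            err = (b[n] + f[u] * g[n]) - r
            b[n] -= lr * err
            f[u] -= lr * (err * g[n] + reg * f[u])
            g[n] -= lr * (err * f[u] + reg * g[n])
    return b

# Ratings are (user, note, helpful?) triples.
# Note 0: rated helpful by everyone (cross-viewpoint agreement).
# Note 1: rated helpful only by users 0 and 1 (one-sided agreement).
ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
           (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0)]
intercepts = fit(ratings, n_users=4, n_notes=2)
published = [i for i, bi in enumerate(intercepts) if bi > 0.8]
print(intercepts, published)  # note 0 clears the threshold, note 1 does not
```

The design choice this illustrates: a simple vote count would publish note 1 whenever its supporters outnumber its detractors, whereas factoring out the interaction term leaves note 1 with an intercept near 0.5, below the publication bar, while unanimously helpful note 0 keeps an intercept near 1.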
Contributors are volunteers with access to an interface from which they can monitor tweets and replies that may be misleading.[21][9][39] Notes in need of ratings by contributors are located under a "Needs your help" section of the interface. Other contributors then give their opinion on the usefulness of the note, marking it as "Helpful" or "Not Helpful".[21][40] A contributor gains points when their notes are validated, a score known as "Rating Impact" that reflects how helpful a contributor's ratings have been.[40][42][43] X users are able to vote on whether they find notes helpful or not,[18] but must apply to become contributors in order to write notes, the latter being gated by "Rating Impact" as well as the Community Notes guidelines.[40][42]
Since 2023, Community Notes have often been attached to shared articles missing context, misleading advertisements, or political tweets with false arguments,[20] typically on content receiving widespread attention.[44]
Notes have appeared on posts by government accounts and various politicians: the White House,[45][44] the Federal Bureau of Investigation,[46] and U.S. President Joe Biden;[47] UK Prime Ministers Rishi Sunak[48] and Liz Truss;[49] former U.S. speakers of the House[50] and presidential candidates Ron DeSantis and Vivek Ramaswamy;[51] U.S. representatives,[52] senators,[53][54] and Australian ministers;[55] as well as X owner Elon Musk multiple times,[45][53][6][56] which in February 2024 led to Musk arguing with the program.[57]
The feature does not directly mention fact-checking but instead indicates that "readers added context".[41] Notes can also indicate when an image is digitally altered or AI-generated.[20][58] X allows contributors to add Community Notes to adverts, which the Financial Times noted was good for consumers but not for advertisers.[41] As a result, brands such as Apple, Samsung, Uber and Evony received notes on their adverts and were accused of false or misleading posts, with some advertisers deleting posts that received notes or modifying content for future advertisements.[25]
A source is attached to each note so the information can be verified, in a similar manner to Wikipedia,[21][20] and notes reportedly received tens of millions of views per day.[10] Elon Musk, the owner of X, considers the program a "gamechanger for combating wrong information"[18] with "incredible potential for improving information accuracy".[10] In December 2023, after receiving a note on one of his posts, Musk thanked contributors for "jumping in the honey pot" after stating that the system had been "gamed by state actors", with the intent of detecting so-called bad actors.[6]
In July 2024, as part of a pilot program, X announced the ability for eligible users to request Community Notes on certain posts; requests would be directed to "Top Writers" of the program, and a threshold of five requests within 24 hours would determine whether a note was published.[59]
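The threshold check described for the pilot can be sketched as a sliding-window count. This is a hypothetical illustration of the stated rule (five requests within 24 hours); the function name and window semantics are assumptions, not X's implementation.

```python
# Sketch of the pilot's eligibility rule: a post qualifies for a note
# once any 24-hour window contains at least five note requests.
from datetime import datetime, timedelta

def eligible_for_note(request_times, threshold=5, window=timedelta(hours=24)):
    """Return True if any window-sized span contains >= threshold requests."""
    times = sorted(request_times)
    for i, start in enumerate(times):
        # count requests (including this one) that fall within the window
        in_window = sum(1 for t in times[i:] if t - start <= window)
        if in_window >= threshold:
            return True
    return False

base = datetime(2024, 7, 1)
requests = [base + timedelta(hours=h) for h in (0, 2, 5, 9, 20)]
eligible_for_note(requests)  # True: five requests arrive within 24 hours
```

Five requests spread over several days would not qualify, since no single 24-hour window ever reaches the threshold.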
Yoel Roth, former head of Twitter's Trust and Safety, has since expressed concern over the effectiveness of the system in the early stages of the program, stating that Birdwatch was never supposed to replace the curation team but was instead intended to complement it. Another former employee called it "an imperfect replacement for Trust and Safety staff". An April 2022 study by MIT researchers found that users overwhelmingly prioritised political content, even though 80% of it was correctly considered misleading.[10]
Wired noted that in the backend of the database most notes remain unpublished, and that numerous contributors engage in "conspiracy-fueled" discussions.[10] According to Musk, anyone trying to "weaponize Community Notes to demonetize people will be immediately obvious", due to the open-source nature of the code and data.[27]
Regarding the situation in Israel and Gaza, with the difficulty of identifying accurate information and the number of unknown factors, MIT professor David Rand said of the crowd-sourced system, "what I expect the crowd to produce is a lot of noise". A contributor likewise described the system as "not really scalable for the amount of media that's being consumed or posted in any given day", while X states that the program is having a "significant impact on tackling disinformation on the platform".[10]
In October 2023, Community Notes experienced multi-day delays in publishing notes on misinformation in the Gaza war, or failed to do so entirely. A study by NBC News found that, in the case of a fake White House press release claiming the destruction of the St. Porphyrius Orthodox Church – a week before the destruction – only 8% of posts had notes published and 26% had unpublished notes, while the majority had no proposed notes.[60] Analysis from NewsGuard of 250 of the most-engaged posts spreading the most common unsubstantiated claims about the Gaza war, viewed more than 100 million times, found they failed to receive notes 68% of the time. The report found Community Notes were "inconsistently applied to top myths relating to the conflict."[61] The fact-checking website Snopes discovered three posts from verified users, who had shared a video of a hospitalized man from Gaza with false captions claiming it showed "crisis actors", that had failed to receive any Community Notes after 24 hours.[62] Bellingcat found the program spread false information in reference to Taylor Swift's bodyguard.[35] Wired has documented that Community Notes is susceptible to disinformation: a graphic Hamas video shared by Donald Trump Jr. was falsely flagged as being a year old, but was found to be part of the recent conflict.[63] The original note was later replaced with another citing the report from Wired.[10]
In November 2023, the Atlantic Council conducted an interactive study of Community Notes highlighting how the system operated slowly and inconsistently on Israel and Gaza misinformation. In one example, an image received a Community Note but continued to spread regardless, receiving over 3 million views after a week. Hundreds of viral posts from the notes' public database were analyzed, and according to researchers, fast-moving breaking news wasn't labeled. Across 400 posts of misinformation, a note took on average 7 hours to appear, while some took 70 hours. The analysis did show, however, that over 50% of the posts received a note within 8 hours, with only a few taking longer than 2 days. The study included 100 tweets from 83 users who had signed up to X Premium in the past 4 months, along with 42 tweets from 25 accounts that had been reinstated by Elon Musk, including Laura Loomer. The study also included Jackson Hinkle, who appeared multiple times.[64]
Another NewsGuard report found advertising appearing on 15 posts with Community Notes attached in the week of November 13, 2023, indicating that "misinformation super-spreaders" may still be eligible for ad revenue, despite posts with notes attached being ineligible according to Musk.[65] On November 30, a Mashable investigation found most users never see published notes, with examples of notes seen by less than 1% to 5% of users who viewed misinformation content and, overall, a disproportionate number of views on posts compared to the attached notes.[1]
A 2024 study on fact-checking of COVID-19 vaccines sampled 205 Community Notes and found the information accurate in 97% of notes, while 49% of cited sources were of high quality. The lead author stated that only a small percentage of misinformation received a note, though published notes were among the most viral content.[3][4] Another study found that users presented with Community Notes trusted them more than simple misinformation flags.[66]
In July 2024, after the attempted assassination of Donald Trump, the Center for Countering Digital Hate (CCDH) published a report finding that of the 100 most popular conspiratorial posts on X about the shooting, only five had Community Notes published to counter the false claims.[67] In October 2024, the CCDH reported that 74% of misinformation about the 2024 United States elections failed to receive notes, based on a sample of 283 posts. Where notes were published, they received 13 times fewer views than the original post, according to the group.[2]
https://en.wikipedia.org/wiki/Community_Notes
AOL (formerly a company known as AOL Inc. and originally known as America Online)[1] is an American web portal and online service provider based in New York City, and a brand marketed by Yahoo! Inc.
The service traces its history to an online service known as PlayNET. PlayNET licensed its software to Quantum Link (Q-Link), which went online in November 1985. A new IBM PC client was launched in 1988, and eventually renamed America Online in 1989. AOL grew to become the largest online service, displacing established players like CompuServe and The Source. By 1995, AOL had about three million active users.[2]
AOL was at one point the most recognized brand on the Web in the United States. AOL once provided a dial-up Internet service to millions of Americans and pioneered instant messaging and chat rooms with AOL Instant Messenger (AIM). In 1998, AOL purchased Netscape for US$4.2 billion. By 2000, AOL was providing internet service to over 20 million consumers, dominating the market of Internet service providers (ISPs).[3] In 2001, at the height of its popularity, it purchased the media conglomerate Time Warner in the largest merger in US history. AOL shrank rapidly thereafter, partly due to the decline of dial-up and the rise of broadband.[4] AOL was eventually spun off from Time Warner in 2009, with Tim Armstrong appointed the new CEO. Under his leadership, the company invested in media brands and advertising technologies.
On June 23, 2015, AOL was acquired by Verizon Communications for $4.4 billion.[5][6] On May 3, 2021, Verizon announced it would sell Yahoo and AOL to private equity firm Apollo Global Management for $5 billion.[7] On September 1, 2021, AOL became part of the new Yahoo! Inc.
AOL began in 1983 as a short-lived venture called Control Video Corporation (CVC), founded by William von Meister. Its sole product was an online service called GameLine for the Atari 2600 video game console, created after von Meister's idea of buying music on demand was rejected by Warner Bros.[8] Subscribers bought a modem from the company for $49.95 and paid a one-time $15 setup fee. GameLine permitted subscribers to temporarily download games and keep track of high scores, at a cost of $1 per game.[9] After the telephone connection was dropped, the downloaded game remained in GameLine's Master Module, playable until the user turned off the console or downloaded another game.
In January 1983, Steve Case was hired as a marketing consultant for Control Video on the recommendation of his brother, investment banker Dan Case. In May 1983, Jim Kimsey became a manufacturing consultant for Control Video, which was near bankruptcy. Kimsey was brought in by his West Point friend Frank Caufield, an investor in the company.[8] In early 1985, von Meister left the company.[10]
On May 24, 1985, Quantum Computer Services, an online services company, was founded by Kimsey from the remnants of Control Video, with Kimsey as chief executive officer and Marc Seriff as chief technology officer. The technical team consisted of Seriff, Tom Ralston, Ray Heinrich, Steve Trus, Ken Huntsman, Janet Hunter, Dave Brown, Craig Dykstra, Doug Coward, and Mike Ficco. In 1987, Case was promoted again to executive vice-president. Kimsey soon began to groom Case to take over the role of CEO, which he did when Kimsey retired in 1991.[10]
Kimsey changed the company's strategy, and in 1985, launched a dedicated online service for Commodore 64 and 128 computers, originally called Quantum Link ("Q-Link" for short).[9] The Quantum Link software was based on software licensed from PlayNet, Inc., which was founded in 1983 by Howard Goldberg and Dave Panzl. The service differed from other online services in that it used the computing power of the Commodore 64 and the Apple II rather than just a "dumb" terminal; it passed tokens back and forth and provided a fixed-price service tailored for home users. In May 1988, Quantum and Apple launched AppleLink Personal Edition for Apple II[11] and Macintosh computers. In August 1988, Quantum launched PC Link, a service for IBM-compatible PCs developed in a joint venture with the Tandy Corporation. After the company parted ways with Apple in October 1989, Quantum changed the service's name to America Online.[12][13] Case promoted and sold AOL as the online service for people unfamiliar with computers, in contrast to CompuServe, which was well established in the technical community.[10]
From the beginning, AOL included online games in its mix of products; many classic and casual games were included in the original PlayNet software system, and the company went on to introduce many innovative online interactive titles and games.
In February 1991, AOL for DOS was launched using a GeoWorks interface; it was followed a year later by AOL for Windows.[9] This coincided with growth in pay-based online services, like Prodigy, CompuServe, and GEnie.
During the early 1990s, the average subscription lasted for about 25 months and accounted for $350 in total revenue. Advertisements invited modem owners to "Try America Online FREE", promising free software and trial membership.[14] AOL discontinued Q-Link and PC Link in late 1994. In September 1993, AOL added Usenet access to its features.[15] This is commonly referred to as the "Eternal September", as Usenet's cycle of new users had previously been dominated by smaller numbers of college and university freshmen gaining access in September and taking a few weeks to acclimate. The change coincided with a new "carpet bombing" marketing campaign by CMO Jan Brandt to distribute as many free AOL trial disks as possible through nonconventional distribution partners. At one point, 50% of the CDs produced worldwide had an AOL logo.[16] AOL quickly surpassed GEnie, and by the mid-1990s it passed Prodigy (which for several years allowed AOL advertising) and CompuServe.[10] In November 1994, AOL purchased Booklink for its web browser, to give its users web access.[17] In 1996, AOL replaced Booklink with a browser based on Internet Explorer, reportedly in exchange for inclusion of AOL in Windows.[18]
AOL launched services with the National Education Association, the American Federation of Teachers, National Geographic, the Smithsonian Institution, the Library of Congress, Pearson, Scholastic, ASCD, NSBA, NCTE, Discovery Networks, Turner Education Services (CNN Newsroom), NPR, The Princeton Review, Stanley Kaplan, Barron's, Highlights for Kids, the US Department of Education, and many other education providers. AOL offered the first real-time homework help service (the Teacher Pager, 1990; prior to this, AOL provided homework help bulletin boards), the first service by children, for children (Kids Only Online, 1991), the first online service for parents (the Parents Information Network, 1991), the first online courses (1988), the first omnibus service for teachers (the Teachers' Information Network, 1990), the first online exhibit (Library of Congress, 1991), the first parental controls, and many other online education firsts.[19]
AOL purchased the search engine WebCrawler in 1995, but sold it to Excite the following year; the deal made Excite the sole search and directory service on AOL.[20] After the deal closed in March 1997, AOL launched its own branded search engine, based on Excite, called NetFind. This was renamed AOL Search in 1999.[21]
AOL charged its users an hourly fee until December 1996,[22] when the company changed to a flat monthly rate of $19.95.[9] During this time, AOL connections were flooded with users trying to connect, and many canceled their accounts due to constant busy signals. A commercial was made featuring Steve Case telling people AOL was working day and night to fix the problem. Within three years, AOL's user base grew to 10 million people. In 1995, AOL was headquartered at 8619 Westwood Center Drive in the Tysons Corner CDP in unincorporated Fairfax County, Virginia, in the Washington, D.C. metropolitan area,[23][24] near the Town of Vienna.[25]
By October 1996, AOL was quickly running out of room for its network at the Fairfax County campus. In mid-1996, AOL moved to 22000 AOL Way in Dulles, unincorporated Loudoun County, Virginia, to provide room for future growth.[26] In a five-year landmark agreement, AOL was bundled with Windows, the most popular operating system.[27]
On March 31, 1996, the short-lived eWorld was purchased by AOL. In 1997, about half of all US homes with Internet access had it through AOL.[28] During this time, AOL's content channels, under Jason Seiken, including News, Sports, and Entertainment, experienced their greatest growth as AOL became the dominant online service internationally, with more than 34 million subscribers.
In February 1998, AOL acquired CompuServe Interactive Services (CIS) via WorldCom (later Verizon), which kept CompuServe's networking business.[29]
In November 1998, AOL announced it would acquire Netscape, best known for its web browser, in a major $4.2 billion deal.[9] The deal closed on March 17, 1999. Another large acquisition, in December 1999, was that of MapQuest, for $1.1 billion.[30]
In January 2000, as new broadband technologies were being rolled out around the New York City metropolitan area and elsewhere across the United States, AOL and Time Warner Entertainment announced plans to merge, forming AOL Time Warner, Inc. The terms of the deal called for AOL shareholders to own 55% of the new, combined company. The deal closed on January 11, 2001. The new company was led by executives from AOL, SBI, and Time Warner. Gerald Levin, who had served as CEO of Time Warner, was CEO of the new company. Steve Case served as chairman, J. Michael Kelly (from AOL) was the chief financial officer, and Robert W. Pittman (from AOL) and Dick Parsons (from Time Warner) served as co-chief operating officers.[31] In 2002, Jonathan Miller became CEO of AOL.[32] The following year, AOL Time Warner dropped the "AOL" from its name. It was the largest merger in history when completed, with the combined value of the companies at $360 billion. This value fell sharply, to as low as $120 billion, as markets repriced AOL's valuation more modestly as a pure internet firm combined with a traditional media and cable business. This status did not last long, and the company's value rose again within three months. By the end of that year, the tide had turned against "pure" internet companies, with many collapsing under falling stock prices, and even the strongest companies in the field losing up to 75% of their market value. The decline continued through 2001, but even with the losses, AOL was among the internet giants that continued to outperform brick-and-mortar companies.[33]
In 2004, along with the launch of AOL 9.0 Optimized, AOL also made available the option of personalized greetings, which would enable the user to hear his or her name while accessing basic functions and mail alerts, or while logging in or out. In 2005, AOL broadcast the Live 8 concert live over the Internet, and thousands of users downloaded clips of the concert over the following months.[34] In late 2005, AOL released AOL Safety & Security Center, a bundle of McAfee Antivirus, CA anti-spyware, and proprietary firewall and phishing protection software.[35] News reports in late 2005 identified companies such as Yahoo!, Microsoft, and Google as candidates for turning AOL into a joint venture.[36] Those plans were abandoned when it was revealed on December 20, 2005, that Google would purchase a 5% share of AOL for $1 billion.[37]
On April 3, 2006, AOL announced that it would retire the full name America Online. The official name of the service became AOL, and the full name of the Time Warner subdivision became AOL LLC.[38] On June 8, 2006,[39] AOL offered a new program called AOL Active Security Monitor, a diagnostic tool to monitor and rate PC security status, which recommended additional security software from AOL or Download.com. Two months later,[40] AOL released AOL Active Virus Shield, a free product developed by Kaspersky Lab that did not require an AOL account, only an internet email address. The ISP side of AOL UK was bought by Carphone Warehouse in October 2006 to take advantage of its 100,000 LLU customers, making Carphone Warehouse the largest LLU provider in the UK.[41]
In August 2006, AOL announced that it would offer email accounts and software previously available only to its paying customers, provided that users accessed AOL or AOL.com through an access method not owned by AOL (otherwise known as "third party transit", "bring your own access" or "BYOA"). The move was designed to reduce costs associated with the "walled garden" business model by reducing usage of AOL-owned access points and shifting members with high-speed internet access from client-based usage to the more lucrative advertising provider AOL.com.[42] The change from paid to free access was also designed to slow the rate at which members canceled their accounts and defected to Microsoft Hotmail, Yahoo! or other free email providers. Several other services were also made free.[43]
Also in August, AOL informed its US customers of an increase in the price of its dial-up access to $25.90. The increase was part of an effort to migrate the service's remaining dial-up users to broadband, as the increased price was the same as that of its monthly DSL access.[51] However, AOL subsequently began offering unlimited dial-up access for $9.95 a month.[52]
On November 16, 2006, Randy Falco succeeded Jonathan Miller as CEO.[53] In December 2006, AOL closed its last remaining call center in the United States, "taking the America out of America Online," according to industry pundits. Service centers based in India and the Philippines continued to provide customer support and technical assistance to subscribers.[54]
On September 17, 2007, AOL announced the relocation of one of its corporate headquarters from Dulles, Virginia, to New York City[55][56] and the combination of its advertising units into a new subsidiary called Platform A. This action followed several advertising acquisitions, most notably Advertising.com, and highlighted the company's new focus on advertising-driven business models. AOL management stressed that "significant operations" would remain in Dulles, which included the company's access services and modem banks.
In October 2007, AOL announced the relocation of its other headquarters from Loudoun County, Virginia, to New York City, while continuing to operate its Virginia offices.[57] As part of the move to New York and the restructuring of responsibilities at the Dulles headquarters complex after the Reston move, Falco announced on October 15, 2007, plans to lay off 2,000 employees worldwide by the end of 2007, beginning "immediately".[58] The result was a layoff of approximately 40% of AOL's employees. Most compensation packages associated with the October 2007 layoffs included a minimum of 120 days of severance pay, 60 of which were offered in lieu of the 60-day advance notice required by provisions of the 1988 federal WARN Act.[58]
By November 2007, AOL's customer base had been reduced to 10.1 million subscribers,[59] slightly more than the number of subscribers of Comcast and AT&T Yahoo!. According to Falco, as of December 2007, the conversion rate of accounts from paid access to free access was more than 80%.[60]
On January 3, 2008, AOL announced the closing of its Reston, Virginia, data center, which was sold to CRG West.[61] On February 6, Time Warner CEO Jeff Bewkes announced that Time Warner would divide AOL's internet-access and advertising businesses, with the possibility of later selling the internet-access division.[62]
On March 13, 2008, AOL purchased the social networking site Bebo for $850 million (£417 million).[63] On July 25, AOL announced that it was shuttering Xdrive, AOL Pictures and BlueString to save on costs and focus on its core advertising business.[50] AOL Pictures was closed on December 31. On October 31, AOL Hometown (a web-hosting service for the websites of AOL customers) and the AOL Journal blog hosting service were eliminated.[64]
On March 12, 2009, Tim Armstrong, formerly with Google, was named chairman and CEO of AOL.[65] On May 28, Time Warner announced that it would position AOL as an independent company after Google's shares ceased at the end of the fiscal year.[66] On November 23, AOL unveiled a new brand identity with the wordmark "Aol." superimposed onto canvases created by commissioned artists. The new identity, designed by Wolff Olins,[67] was integrated with all of AOL's services on December 10, the date upon which AOL traded independently for the first time since the Time Warner merger, on the New York Stock Exchange under the symbol AOL.[68]
On April 6, 2010, AOL announced plans to shutter or sell Bebo.[69] On June 16, the property was sold to Criterion Capital Partners for an undisclosed amount, believed to be approximately $10 million.[70] In December, AIM eliminated access to AOL chat rooms, noting a marked decline in usage in recent months.[71]
Under Armstrong's leadership, AOL followed a new business direction marked by a series of acquisitions. It announced the acquisition of Patch Media, a network of community-specific news and information sites focused on towns and communities.[72] On September 28, 2010, at the San Francisco TechCrunch Disrupt Conference, AOL signed an agreement to acquire TechCrunch.[73][74] On December 12, 2010, AOL acquired about.me, a personal profile and identity platform, four days after the platform's public launch.[75]
On January 31, 2011, AOL announced the acquisition of the European video distribution network goviral.[76] In March 2011, AOL acquired HuffPost for $315 million.[77][78] Shortly after the acquisition was announced, Huffington Post co-founder Arianna Huffington replaced AOL content chief David Eun, assuming the role of president and editor-in-chief of the AOL Huffington Post Media Group.[79] On March 10, AOL announced that it would cut approximately 900 workers following the HuffPost acquisition.[80]
On September 14, 2011, AOL formed a strategic ad-selling partnership with two of its largest competitors, Yahoo and Microsoft. The three companies would begin selling inventory on each other's sites. The strategy was designed to help the companies compete with Google and advertising networks.[81]
On February 28, 2012, AOL partnered with PBS to launch MAKERS, a digital documentary series focusing on high-achieving women in industries perceived as male-dominated, such as war, comedy, space, business, Hollywood and politics.[82][83][84] Subjects for MAKERS episodes have included Oprah Winfrey, Hillary Clinton, Sheryl Sandberg, Martha Stewart, Indra Nooyi, Lena Dunham and Ellen DeGeneres.
On March 15, 2012, AOL announced the acquisition of Hipster, a mobile photo-sharing app, for an undisclosed amount.[85] On April 9, 2012, AOL announced a deal to sell 800 patents to Microsoft for $1.056 billion. The deal included a perpetual license for AOL to use the patents.[86]
In April, AOL took several steps to expand its ability to generate revenue through online video advertising. The company announced that it would offer a gross rating point (GRP) guarantee for online video, mirroring the television-ratings system and guaranteeing audience delivery for online-video advertising campaigns bought across its properties.[87] This announcement came just days before the Digital Content NewFront (DCNF), a two-week event held by AOL, Google, Hulu, Microsoft, Vevo and Yahoo to showcase the participating sites' digital video offerings. The DCNF was conducted in advance of the traditional television upfronts in the hope of diverting more advertising money into the digital space.[88] On April 24, the company launched the AOL On network, a single website for its video output.[89]
In February 2013, AOL reported its fourth quarter revenue of $599.5 million, its first growth in quarterly revenue in eight years.[90]
In August 2013, Armstrong announced that Patch Media would scale back or sell hundreds of its local news sites.[91] Not long afterward, layoffs began, with up to 500 out of 1,100 positions initially impacted.[92] On January 15, 2014, Patch Media was spun off, with majority ownership held by Hale Global.[93] By the end of 2014, AOL controlled 0.74% of the global advertising market, well behind industry leader Google's 31.4%.[94]
On January 23, 2014, AOL acquired Gravity, a software startup that tracked users' online behavior and tailored ads and content based on their interests, for $83 million.[95]The deal, which included approximately 40 Gravity employees and the company's personalization technology, was Armstrong's fourth-largest deal since taking command in 2009. Later that year, AOL acquired Vidible, a company that developed technology to help websites run video content from other publishers, and help video publishers sell their content to these websites. The deal, which was announced December 1, 2014, was reportedly worth roughly $50 million.[96]
On July 16, 2014, AOL earned an Emmy nomination for the AOL original series The Future Starts Here in the News and Documentary category.[97] This came days after AOL earned its first Primetime Emmy Award nomination and win for Park Bench with Steve Buscemi in the Outstanding Short Form Variety Series category.[98] Created and hosted by Tiffany Shlain, the series focused on humans' relationship with technology and featured episodes such as "The Future of Our Species", "Why We Love Robots" and "A Case for Optimism".
On May 12, 2015, Verizon announced plans to buy AOL for $50 per share in a deal valued at $4.4 billion. The transaction was completed on June 23. Armstrong, who continued to lead the firm following regulatory approval, called the deal the logical next step for AOL. "If you look forward five years, you're going to be in a space where there are going to be massive, global-scale networks, and there's no better partner for us to go forward with than Verizon," he said. "It's really not about selling the company today. It's about setting up for the next five to 10 years."[5]
Analyst David Bank said he thought the deal made sense for Verizon.[5] The deal would broaden Verizon's advertising sales platforms and increase its video production capability through websites such as HuffPost, TechCrunch, and Engadget.[94] However, Craig Moffett said it was unlikely the deal would make a big difference to Verizon's bottom line.[5] AOL had about two million dial-up subscribers at the time of the buyout.[94] The announcement caused AOL's stock price to rise 17%, while Verizon's stock price dropped slightly.[5]
Shortly before the Verizon purchase, on April 14, 2015, AOL launched ONE by AOL, a digital marketing programmatic platform that unifies buying channels and audience management platforms to track and optimize campaigns over multiple screens.[99]Later that year, on September 15, AOL expanded the product with ONE by AOL: Creative, which is geared towards creative and media agencies to similarly connect marketing and ad distribution efforts.[100]
On May 8, 2015, AOL reported its first-quarter revenue of $625.1 million, $483.5 million of which came from advertising and related operations, marking a 7% increase from Q1 2014. Over that year, the AOL Platforms division saw a 21% increase in revenue, but a drop in adjusted OIBDA due to increased investments in the company's video and programmatic platforms.[101]
On June 29, 2015, AOL announced a deal with Microsoft to take over the majority of its digital advertising business. Under the pact, as many as 1,200 Microsoft employees involved with the business would be transferred to AOL, and the company would take over the sale of display, video, and mobile ads on various Microsoft platforms in nine countries, including Brazil, Canada, the United States, and the United Kingdom. Additionally, Google Search would be replaced on AOL properties with Bing, which would display advertising sold by Microsoft. Both advertising deals were subject to affiliate marketing revenue sharing.[102][103]
On July 22, 2015, AOL received two News and Documentary Emmy nominations: one for MAKERS in the Outstanding Historical Programming category, and the other for True Trans With Laura Jane Grace, which documented the story of Laura Jane Grace, a transgender musician best known as the founder, lead singer, songwriter and guitarist of the punk rock band Against Me!, and her decision to come out publicly and her overall transition experience.[104]
On September 3, 2015, AOL agreed to buy Millennial Media for $238 million.[105] On October 23, 2015, AOL completed the acquisition.[106]
On October 1, 2015, Go90, a free ad-supported mobile video service aimed at young adult and teen viewers that Verizon owned and AOL oversaw and operated, launched its content publicly after months of beta testing.[107][108] The initial launch line-up included content from Comedy Central, HuffPost, Nerdist News, Univision News, Vice, ESPN and MTV.[107]
On April 20, 2016, AOL acquired the virtual reality studio RYOT to bring immersive 360-degree video and VR content to HuffPost's global audience across desktop, mobile, and apps.[109]
In July 2016, Verizon Communications announced its intent to purchase the core internet business of Yahoo!. Verizon merged AOL with Yahoo into a new company called Oath Inc., which in January 2019 rebranded itself as Verizon Media.[110]
In April 2018, Oath Inc. sold Moviefone to MoviePass parent Helios and Matheson Analytics.[111][112][113]
In November 2020, the Huffington Post was sold to BuzzFeed in a stock deal.[114]
On May 3, 2021, Verizon announced it would sell 90 percent of its Verizon Media division to Apollo Global Management for $5 billion. The division became the second incarnation of Yahoo! Inc.[7]
As of September 1, 2021, the following media brands became subsidiaries of AOL's parent, Yahoo Inc.[115]
AOL's content contributors consist of over 20,000 bloggers, including politicians, celebrities, academics, and policy experts, who contribute on a wide range of newsworthy topics.[119]
In addition to mobile-optimized web experiences, AOL produces mobile applications for existing AOL properties like Autoblog, Engadget, The Huffington Post, TechCrunch, and products such as Alto, Pip, and Vivv.
AOL has a global portfolio of media brands and advertising services across mobile, desktop, and TV. Services include brand integration and sponsorships through its in-house branded content arm, Partner Studio by AOL, as well as data and programmatic offerings through its ad technology stack, ONE by AOL.
AOL acquired a number of businesses and technologies that helped to form ONE by AOL. These acquisitions included AdapTV in 2013 and Convertro, Precision Demand, and Vidible in 2014.[120] ONE by AOL is further broken down into ONE by AOL for Publishers (formerly Vidible, AOL On Network and Be On for Publishers) and ONE by AOL for Advertisers, each of which has several sub-platforms.[121][122]
On September 10, 2018, AOL's parent company Oath consolidated BrightRoll, One by AOL and Yahoo Gemini to 'simplify' its adtech services by launching a single advertising proposition dubbed Oath Ad Platforms, now Yahoo! Ad Tech.[123]
AOL offers a range of integrated products and properties including communication tools, mobile apps and services and subscription packages.
In 2017, before the discontinuation of AIM, "billions of messages" were sent "daily" on it and AOL's other chat services.[1]
AOL Desktop is an internet suite produced by AOL from 2007[132][133] that integrates a web browser, a media player and an instant messenger client.[130] Version 10.X was based on AOL OpenRide,[134] and served as an upgrade to it.[135] The macOS version is based on WebKit.
AOL Desktop version 10.X was different from previous AOL browsers and AOL Desktop versions. Its features are focused on web browsing as well as email. For instance, one does not have to sign into AOL in order to use it as a regular browser. In addition, non-AOL email accounts can be accessed through it. Primary buttons include "MAIL", "IM", and several shortcuts to various web pages. The first two require users to sign in, but the shortcuts to web pages can be used without authentication. AOL Desktop version 10.X was later marked as unsupported in favor of supporting the AOL Desktop 9.X versions.
Version 9.8 was released, replacing the Internet Explorer components of the web browser with CEF[131] (Chromium Embedded Framework) to give users an improved web browsing experience closer to that of Chrome.
Version 11 of AOL Desktop was a total rewrite but maintained a similar user interface to the previous 9.8.X series of releases.[131]
In 2017, a new paid version called AOL Desktop Gold was released, available for $4.99 per month after trial. It replaced the previous free version.[136]After the shutdown of AIM in 2017, AOL's original chat rooms continued to be accessible through AOL Desktop Gold, and some rooms remained active during peak hours. That chat system was shut down on December 15, 2020.[137]
In addition to AOL Desktop, the company also offered a browser toolbar plug-in, AOL Toolbar, for several web browsers (including Mozilla-based browsers) that provided quick access to AOL services. The toolbar was available from 2007 until 2018.
In its earlier incarnation as a "walled garden" community and service provider, AOL received criticism for its community policies, terms of service, and customer service. Prior to 2006, AOL was known for its direct mailing of CD-ROMs and 3.5-inch floppy disks containing its software. The disks were distributed in large numbers; at one point, half of the CDs manufactured worldwide had AOL logos on them.[16] The marketing tactic was criticized for its environmental cost, and AOL CDs were recognized as PC World's most annoying tech product.[138][139]
AOL used a system of volunteers to moderate its chat rooms, forums and user communities. The program dated back to AOL's early days, when it charged by the hour for access and one of its highest billing services was chat. AOL provided free access to community leaders in exchange for moderating the chat rooms, and this effectively made chat very cheap to operate, and more lucrative than AOL's other services of the era. There were 33,000 community leaders in 1996.[140]All community leaders received hours of training and underwent a probationary period. While most community leaders moderated chat rooms, some ran AOL communities and controlled their layout and design, with as much as 90% of AOL's content being created or overseen by community managers until 1996.[141]
By 1996, ISPs were beginning to charge flat rates for unlimited access, which they could do at a profit because they only provided internet access. Even though AOL would lose money with such a pricing scheme, it was forced by market conditions to offer unlimited access in October 1996. In order to return to profitability, AOL rapidly shifted its focus from content creation to advertising, resulting in less of a need to carefully moderate every forum and chat room to keep users willing to pay by the minute to remain connected.[142]
After the switch to unlimited access, AOL considered scrapping the program entirely, but continued it with a reduced number of community leaders, with scaled-back roles in creating content.[141] Although community leaders continued to receive free access, after 1996 they were motivated more by the prestige of the position and by access to moderator tools and restricted areas within AOL.[140][141] By 1999, there were over 15,000 volunteers in the program.[143]
In May 1999, two former volunteers filed a class-action lawsuit alleging AOL violated the Fair Labor Standards Act by treating volunteers like employees. Volunteers had to apply for the position, commit to working for at least three to four hours a week, fill out timecards and sign a non-disclosure agreement.[144] On July 22, AOL ended its youth corps, which consisted of 350 underage community leaders.[140] At this time, the United States Department of Labor began an investigation into the program, but it came to no conclusions about AOL's practices.[144]
AOL ended its community leader program on June 8, 2005. The class action lawsuit dragged on for years, even after AOL ended the program and AOL declined as a major internet company. In 2010, AOL finally agreed to settle the lawsuit for $15 million.[145] The community leader program was described as an example of co-production in a 2009 article in the International Journal of Cultural Studies.[141]
AOL has faced a number of lawsuits over claims that it has been slow to stop billing customers after their accounts have been canceled, either by the company or the user. In addition, AOL changed its method of calculating used minutes in response to a class action lawsuit. Previously, AOL would add 15 seconds to the time a user was connected to the service and round up to the next whole minute (thus, a person who used the service for 12 minutes and 46 seconds would be charged for 14 minutes).[146][147] AOL claimed this was to account for sign on/sign off time, but because this practice was not made known to its customers, the plaintiffs won (some also pointed out that signing on and off did not always take 15 seconds, especially when connecting via another ISP). AOL disclosed its connection-time calculation methods to all of its customers and credited them with extra free hours. In addition, the AOL software would notify the user of exactly how long they were connected and how many minutes they were being charged.
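The rounding scheme described above can be sketched in a few lines of Python. This is a hypothetical illustration of the arithmetic as reported (add 15 seconds, then round up to the whole minute), not AOL's actual billing code:

```python
import math


def billed_minutes(connected_seconds: int) -> int:
    """Pre-settlement AOL billing as described: add 15 seconds for
    sign-on/sign-off, then round up to the next whole minute."""
    return math.ceil((connected_seconds + 15) / 60)


# A session of 12 minutes 46 seconds becomes 13:01 after the 15-second
# surcharge, which rounds up to 14 billable minutes.
print(billed_minutes(12 * 60 + 46))  # → 14
```

Note how the 15-second surcharge can push a session across a minute boundary: 12:45 of connection time bills as 13 minutes, while 12:46 bills as 14.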
AOL was sued by the Ohio Attorney General in October 2003 for improper billing practices. The case was settled on June 8, 2005. AOL agreed to resolve any consumer complaints filed with the Ohio AG's office. In December 2006, AOL agreed to provide restitution to Florida consumers to settle the case filed against them by the Florida Attorney General.[148]
Many customers complained that AOL personnel ignored their demands to cancel service and stop billing. In response to approximately 300 consumer complaints, the New York Attorney General's office began an inquiry of AOL's customer service policies.[citation needed] The investigation revealed that the company had an elaborate scheme for rewarding employees who purported to retain or "save" subscribers who had called to cancel their Internet service. In many instances, such retention was done against subscribers' wishes, or without their consent. Under the scheme, customer service personnel received bonuses worth tens of thousands of dollars if they could successfully dissuade or "save" half of the people who called to cancel service.[citation needed] For several years, AOL had instituted minimum retention or "save" percentages, which consumer representatives were expected to meet. These bonuses, and the minimum "save" rates accompanying them, had the effect of employees not honoring cancellations, or otherwise making cancellation unduly difficult for consumers.
On August 24, 2005, America Online agreed to pay $1.25 million to the state of New York and reformed its customer service procedures. Under the agreement, AOL would no longer require its customer service representatives to meet a minimum quota for customer retention in order to receive a bonus.[148] However, the agreement only covered people in the state of New York.[149]
On June 13, 2006, Vincent Ferrari documented his account cancellation phone call in a blog post,[150]stating he had switched to broadband years earlier. In the recorded phone call, the AOL representative refused to cancel the account unless the 30-year-old Ferrari explained why AOL hours were still being recorded on it. Ferrari insisted that AOL software was not even installed on the computer. When Ferrari demanded that the account be canceled regardless, the AOL representative asked to speak with Ferrari's father, for whom the account had been set up. The conversation was aired on CNBC. When CNBC reporters tried to have an account on AOL cancelled, they were hung up on immediately and it ultimately took more than 45 minutes to cancel the account.[151]
On July 19, 2006, AOL's entire retention manual was released on the Internet.[152] On August 3, 2006, Time Warner announced that it would be dissolving AOL's retention centers, as its profits hinged on $1 billion in cost cuts. The company estimated that it would lose more than six million subscribers over the following year.[153]
Prior to 2006, AOL often sent unsolicited mass direct mail of 3½-inch floppy disks and CD-ROMs containing its software. It was the most frequent user of this marketing tactic, and received criticism for the environmental cost of the campaign.[154] According to PC World, in the 1990s "you couldn't open a magazine (PC World included) or your mailbox without an AOL disk falling out of it".[149]
The mass distribution of these disks was seen as wasteful by the public and led to protest groups. One such was No More AOL CDs, a web-based effort by two IT workers[155]to collect one million disks with the intent to return the disks to AOL.[156]The website was started in August 2001, and an estimated 410,176 CDs were collected by August 2007 when the project was shut down.[156]
In 2000, AOL was served with an $8 billion lawsuit alleging that its AOL 5.0 software caused significant difficulties for users attempting to use third-party Internet service providers. The lawsuit sought damages of up to $1000 for each user that had downloaded the software cited at the time of the lawsuit. AOL later agreed to a settlement of $15 million, without admission of wrongdoing.[157] The AOL software was then given a feature called AOL Dialer, or AOL Connect on Mac OS X. This feature allowed users to connect to the ISP without running the full interface, so that they could run only the applications they wished to use, especially if they did not favor the AOL Browser.
AOL 9.0 was once identified by Stopbadware as being under investigation[158] for installing additional software without disclosure, and modifying browser preferences, toolbars, and icons. However, as of the release of AOL 9.0 VR (Vista Ready) on January 26, 2007, it was no longer considered badware due to changes AOL made in the software.[159]
When AOL gave clients access to Usenet in 1993, they hid at least one newsgroup in standard list view: alt.aol-sucks. AOL did list the newsgroup in the alternative description view, but changed the description to "Flames and complaints about America Online". With AOL clients swarming Usenet newsgroups, the old, existing user base started to develop a strong distaste for both AOL and its clients, referring to the new state of affairs as Eternal September.[160]
AOL discontinued access to Usenet on June 25, 2005.[161] No official details were provided as to the cause, beyond a suggestion that users access Usenet services from a third party, Google Groups. AOL then provided community-based message boards in lieu of Usenet.
AOL has a detailed set of guidelines and expectations for users on their service, known as the Terms of Service (TOS, also known as Conditions of Service (COS) in the UK). It is separated into three different sections: Member Agreement, Community Guidelines and Privacy Policy.[162][163] All three agreements are presented to users at time of registration and digital acceptance is achieved when they access the AOL service. During the period when volunteer chat room hosts and board monitors were used, chat room hosts were given a brief online training session and test on Terms of Service violations.
There have been many complaints over the rules that govern an AOL user's conduct. Some users disagree with the TOS, arguing that the guidelines are too strict to follow and that the TOS may change without users being made aware. A considerable cause for complaint was the alleged censorship of user-generated content during AOL's earlier years of growth.[164][165][166][167]
In early 2005, AOL stated its intention to implement a certified email system called Goodmail, which would allow companies to send email to users with whom they had pre-existing business relationships, with a visual indication that the email was from a trusted source and without the risk that the messages might be blocked or stripped by spam filters.
This decision drew fire from MoveOn, which characterized the program as an "email tax", and the Electronic Frontier Foundation (EFF), which characterized it as a shakedown of non-profits.[168] A website called Dearaol.com[169] was launched, with an online petition and a blog that garnered hundreds of signatures from people and organizations expressing their opposition to AOL's use of Goodmail.
Esther Dyson defended the move in an editorial in The New York Times, saying "I hope Goodmail succeeds, and that it has lots of competition. I also think it and its competitors will eventually transform into services that more directly serve the interests of mail recipients. Instead of the fees going to Goodmail and AOL, they will also be shared with the individual recipients."[170]
Tim Lee of the Technology Liberation Front[171] posted an article that questioned the Electronic Frontier Foundation's adopting a confrontational posture when dealing with private companies. Lee's article cited a series of discussions[172] on Declan McCullagh's Politechbot mailing list on this subject between the EFF's Danny O'Brien and antispammer Suresh Ramasubramanian, who has also compared[173] the EFF's tactics in opposing Goodmail to tactics used by Republican political strategist Karl Rove. SpamAssassin developer Justin Mason posted some criticism of the EFF's and MoveOn's "going overboard" in their opposition to the scheme.
The dearaol.com campaign lost momentum and disappeared; the last post to the now-defunct dearaol.com blog, "AOL starts the shakedown", was made on May 9, 2006.
Comcast, which also used the service, announced on its website that Goodmail had ceased operations and that, as of February 4, 2011, it no longer used the service.[174]
On August 4, 2006, AOL released a compressed text file on one of its websites containing 20 million search keywords for over 650,000 users over a three-month period between March 1 and May 31, 2006, intended for research purposes. AOL pulled the file from public access by August 7, but not before its wide distribution on the Internet by others. Derivative research, titled A Picture of Search,[175] was published by authors Pass, Chowdhury and Torgeson for The First International Conference on Scalable Information Systems.[176]
The data were used by websites such as AOLstalker[177]for entertainment purposes, where users of AOLstalker are encouraged to judge AOL clients based on the humorousness of personal details revealed by search behavior.
In 2003, Jason Smathers, an AOL employee, stole 92 million America Online screen names and sold them to a known spammer. In 2005, Smathers pled guilty to conspiracy charges and to violations of the US CAN-SPAM Act of 2003.[178][179][180] He was sentenced in August 2005 to 15 months in prison; the sentencing judge also recommended Smathers be forced to pay $84,000 in restitution, triple the $28,000 for which he sold the addresses.[178]
On February 27, 2012, a class action lawsuit was filed against Support.com, Inc. and partner AOL, Inc. The lawsuit alleged Support.com and AOL's Computer Checkup "scareware" (which uses software developed by Support.com) misrepresented that their software programs would identify and resolve a host of technical problems with computers, and offered to perform a free "scan", which often found problems with users' computers. The companies then offered to sell software—for which AOL allegedly charged $4.99 a month and Support.com $29—to remedy those problems.[181] Both AOL, Inc. and Support.com, Inc. settled on May 30, 2013, for $8.5 million. This included $25.00 to each valid class member and $100,000 each to Consumer Watchdog and the Electronic Frontier Foundation.[182] Judge Jacqueline Scott Corley wrote: "Distributing a portion of the [funds] to Consumer Watchdog will meet the interests of the silent class members because the organization will use the funds to help protect consumers across the nation from being subject to the types of fraudulent and misleading conduct that is alleged here," and "EFF's mission includes a strong consumer protection component, especially in regards to online protection."[181]
AOL continues to market Computer Checkup.[183]
Following media reports about PRISM, the NSA's massive electronic surveillance program, in June 2013, several technology companies were identified as participants, including AOL. According to the leaks, AOL joined the PRISM program in 2011.[184]
At one time, most AOL users had an online "profile" hosted by the AOL Hometown service. When AOL Hometown was discontinued, users had to create a new profile on Bebo. This was an unsuccessful attempt to create a social network that would compete with Facebook. When the value of Bebo decreased to a tiny fraction of the $850 million AOL paid for it, users were forced to recreate their profiles yet again, on a new service called AOL Lifestream.
AOL decided to shut down Lifestream on February 24, 2017, and gave users one month's notice to save photos and videos that had been uploaded to Lifestream.[185]Following the shutdown, AOL no longer provides any option for hosting user profiles.
During the Hometown/Bebo/Lifestream era, another user's profile could be displayed by clicking the "Buddy Info" button in the AOL Desktop software. After the shutdown of Lifestream, this was no longer supported; the button instead opened the AIM home page (www.aim.com), which also became defunct, redirecting to AOL's home page.
|
https://en.wikipedia.org/wiki/Criticism_of_AOL#Community_leaders
|
This list of volunteering awards is an index to articles about notable awards issued by organisations and governmental bodies honoring the contributions of volunteers (unpaid staff). Whereas many organisations honor volunteers who serve within those individual organisations, the awards listed here recognize volunteers contributing to a variety of nonprofit organisations, non-governmental organisations, charities, civil society organizations and communities.
|
https://en.wikipedia.org/wiki/List_of_volunteer_awards
|
Slacktivism (a blend of slacker and activism) is the practice of supporting a political or social cause by means such as social media or online petitions, characterized as involving very little effort or commitment.[1] Additional forms of slacktivism include engaging in online activities such as liking, sharing or tweeting about a cause on social media, signing an Internet petition,[2] copying and pasting a status or message in support of the cause, sharing specific hashtags associated with the cause, or altering one's profile photo or avatar on social network services to indicate solidarity.
Critics of slacktivism suggest that it fails to make a meaningful contribution to an overall cause because a low-stakes show of support, whether online or offline, is superficial and ineffective, draws off energy that might be used more constructively, serves as a substitute for more substantive forms of activism rather than supplementing them, and might in fact be counter-productive.[3] As groups increasingly use social media to facilitate civic engagement and collective action,[4][5] proponents of slacktivism have pointed out that it can lead to engagement and help generate support for lesser-known causes.[6][7][8]
The term was coined by Dwight Ozard and Fred Clark in 1995 at the Cornerstone Festival. The term was meant to shorten the phrase slacker activism, which refers to bottom-up activities by young people to affect society on a small, personal scale (such as planting a tree, as opposed to participating in a protest). The term originally had a positive connotation.[9]
Monty Phan, staff writer for Newsday, was an early user of the term in his 2001 article titled "On the Net, 'Slacktivism'/Do-Gooders Flood In-Boxes."[10]
An early example of using the term "slacktivism" appeared in Barnaby Feder's article in The New York Times called "They Weren't Careful What They Hoped For." Feder quoted anti-scam crusader Barbara Mikkelson of Snopes, who described activities such as those listed above. "It's all fed by slacktivism ... the desire people have to do something good without getting out of their chair."[11]
Another example of the term "slacktivism" appeared in Evgeny Morozov's book, Net Delusion: The Dark Side of Internet Freedom (2011). In it, Morozov relates slacktivism to the Colding-Jørgensen experiment. In 2009, a Danish psychologist named Anders Colding-Jørgensen created a fictitious Facebook group as part of his research. On the page, he posted an announcement suggesting that the Copenhagen city authorities would be demolishing the historical Stork Fountain. Within the first day, 125 Facebook members joined Colding-Jørgensen's group. The number of fans began to grow at a staggering rate, eventually reaching 27,500.[12] Morozov argues the Colding-Jørgensen experiment reveals a key component of slacktivism: "When communication costs are low, groups can easily spring into action."[13] Clay Shirky similarly characterized slacktivism as "ridiculously easy group forming".[13]
Various people and groups express doubts about the value and effectiveness of slacktivism. Particularly, some skeptics argue that it entails an underlying assumption that all problems can be seamlessly fixed using social media, and while this may be true for local issues, slacktivism could prove ineffective for solving global predicaments.[14] A 2009 NPR piece by Morozov asked whether "the publicity gains gained through this greater reliance on new media [are] worth the organizational losses that traditional activist entities are likely to suffer, as ordinary people would begin to turn away from conventional (and proven) forms of activism."[15]
Criticism of slacktivism often involves the idea that internet activities are ineffective, and/or that they prevent or lessen political participation in real life. However, as many studies on slacktivism relate only to a specific case or campaign, it is difficult to find an exact percentage of slacktivist actions that reach a stated goal. Furthermore, many studies also focus on such activism in democratic or open contexts, whereas the act of publicly liking, RSVPing or adopting an avatar or slogan as one's profile picture can be a defiant act in authoritarian or repressive countries.
Micah White has argued that although slacktivism is typically the easiest route to participation in movements and changes, the novelty of online activism wears off as people begin to realize that their participation created virtually no effect, leading people to lose hope in all forms of activism.[16]
Malcolm Gladwell, in his October 2010 New Yorker article, lambasted those who compare social media "revolutions" with actual activism that challenges the status quo ante.[17] He argued that today's social media campaigns cannot compare with activism that takes place on the ground, using the Greensboro sit-ins as an example of what real, high-risk activism looks like.[17]
A 2011 study of college students found only a small positive correlation between those who engage in politics on Facebook and those who engage offline. Those who did engage did so only by posting comments and other low-effort forms of political participation, helping to confirm the slacktivist theoretical model.[18]
The New Statesman analyzed the outcomes of the ten most-shared petitions and listed all of them as unsuccessful.[19]
Brian Dunning, in his 2014 podcast Slacktivism: Raising Awareness, argues that the internet activities that slacktivism is associated with are a waste of time at their best and at their worst are ways to "steal millions of dollars from armchair activists who are persuaded to donate actual money to what they're told is some useful cause."[20] He says that most slacktivism campaigns are "based on bad information, bad science, and are hoaxes as often as not".[20]
He uses the Kony 2012 campaign as an example of how slacktivism can be used to exploit others. The movie asked viewers to send money to the filmmakers rather than to African law enforcement. Four months after the movie was released, Invisible Children, the charity that created the film, reported $31.9 million in gross receipts. In the end, the money was not used to stop Kony, but rather to make another movie about stopping Kony. Dunning goes as far as to say that raising awareness of Kony was not even useful, as law enforcement groups had been after him for years.
Dunning does state, however, that today slacktivism is generally more benign. He cites Change.org as an example. The site hosts hundreds of thousands of petitions. A person signing one of these online petitions may feel good about themselves, but the petitions are generally not binding, nor do they lead to any major change. Dunning suggests that before donating to, or even "liking", a cause, one should research the issue and the organization to ensure nothing is misattributed, exaggerated, or wrong.[20]
An example of a campaign against slacktivism is the advertisement series "Liking Isn't Helping", created by the international advertising agency Publicis Singapore for a relief organization, Crisis Relief Singapore (CRS). This campaign features images of people struggling or in need, surrounded by many people giving a thumbs up, with the caption "Liking isn't helping". Though the campaign lacked critical components that would generate success, it made viewers stop and think about their activism habits and question the effect that slacktivism really has.[21]
In response to Gladwell's criticism of slacktivism in the New Yorker (see above), journalist Leo Mirani argues that he might be right if activism is defined only as sit-ins, taking direct action, and confrontations on the streets. However, if activism is about arousing awareness of people, changing people's minds, and influencing opinions across the world, then the revolution will indeed be "tweeted",[22] "hashtagged",[23] and "YouTubed."[24] In a March 2012 Financial Times article, referring to efforts to address the ongoing violence related to the Lord's Resistance Army, Matthew Green wrote that the slacktivists behind the Kony 2012 video had "achieved more with their 30-minute video than battalions of diplomats, NGO workers and journalists have since the conflict began 26 years ago."[25]
Although slacktivism has often been used pejoratively, some scholars point out that activism within the digital space is a reality.[26][27] These scholars suggest that slacktivism may have its deficiencies, but it can be a positive contributor to activism, and it is inescapable in the current digital climate.[26][27] A 2011 correlational study conducted by Georgetown University entitled "The Dynamics of Cause Engagement" determined that so-called slacktivists are indeed "more likely to take meaningful actions".[28] Notably, slacktivists "participate in more than twice as many activities as people who don't engage in slacktivism", and their actions "have a higher potential to influence others".[28] Cited benefits of slacktivism in achieving clear objectives include creating a secure, low-cost, effective means of organizing that is environmentally friendly.[29] These "social champions" have the ability to directly link social media engagement with responsiveness, leveraging their transparent dialogue into economic, social or political action.[7] Along this line of thinking is Andrew Leonard, a staff writer at Salon, who published an article on the ethics of smartphones and how we use them. Though the means of producing these products go against ethical human rights standards, Leonard encourages the use of smartphones on the basis that the technology they provide can be utilized as a means of changing the problematic situation of their manufacture. The ability to communicate quickly and on a global scale enables the spread of knowledge, such as the conditions that corporations provide to the workers they employ, and the effect their widespread manufacturing has on globalization. Leonard argues that phones and tablets can be effective tools in bringing about change through slacktivism, because they allow us to spread knowledge, donate money, and more effectively speak our opinions on important matters.[30]
Others keep a slightly optimistic outlook on the possibilities of slacktivism while still acknowledging the pitfalls that come with this digital form of protest. Zeynep Tufekci, an assistant professor at the University of North Carolina and a faculty associate at the Berkman Center for Internet & Society, analyzed the capacity of slacktivism to influence collective group action in a variety of different social movements in a segment of the Berkman Luncheon Series. She acknowledges that digital activism is a great enabler of rising social and political movements, and it is an effective means of enabling differential capacity building for protest. A 2015 study describes how slacktivism can contribute to a quicker growth of social protests, by propagation of information through peripheral nodes in social networks. The authors note that although slacktivists are less active than committed minorities, their power lies in their numbers: "their aggregate contribution to the spread of protest messages is comparable in magnitude to that of core participants".[31] However, Tufekci argues that the enhanced ability to rally protest is accompanied by a weakened ability to actually make an impact, as slacktivism can fail to reach the level of protest required in order to bring about change.[32]
The Black Lives Matter movement calls for the end of systemic racism.[33] The movement has been inextricably linked with social media since 2014, in particular with Twitter through the hashtags #blacklivesmatter and #BLM.[33] Much of the support for and awareness of this movement has been made possible through social media. Studies show that the slacktivism commonly present within the movement has been linked with a positive effect on active participation in it.[34] The fact that participants in this movement were able to contribute from their phones increased awareness of and participation by the public, particularly in the United States.[34]
The Western-centric nature of the critique of slacktivism discounts the impact it can have in authoritarian or repressive contexts.[35][36] Journalist Courtney C. Radsch argues that even such a low level of engagement was an important form of activism for Arab youth before and during the Arab Spring because it was a form of free speech and could successfully spark mainstream media coverage, such as when a hashtag becomes "a trending topic [it] helps generate media attention, even as it helps organize information....The power of social media to help shape the international news agenda is one of the ways in which they subvert state authority and power."[37] In addition, studies suggest that "fears of Internet activities supplanting real-life activity are unsubstantiated," in that they cause neither a negative nor a positive effect on political participation.[38]
The Human Rights Campaign (HRC) on Marriage Equality offers another example of how slacktivism can be used to make a notable difference.[39] The campaign urged Facebook users to change their profile pictures to a red image with an equals sign (=) in the middle.[39] The logo symbolized equality, and Facebook users who adopted the image as their profile photo signaled their support for marriage equality.[39] The campaign was credited with raising positive awareness and cultivating an environment of support for the marriage equality cause.[39] One study concluded that, although the act of changing one's profile photo is small, social media campaigns such as this ultimately make a cumulative difference over time.[39]
The term "clicktivism" is used to describe forms of internet-based slacktivism such as signing online petitions or signing and sending form-letter emails to politicians or corporate CEOs.[16] For example, the British group UK Uncut uses Twitter and other websites to organise protests and direct action against companies accused of tax avoidance.[40] Clicktivism allows organizations to quantify their success by keeping track of how many people "clicked" on their petition or other call to action.
The idea behind clicktivism is that social media allow for a quick and easy way to show support for an organization or cause.[41]The main focus of digital organizations has become inflating participation rates by asking less and less of their members/viewers.[16]
Clicktivism can also be demonstrated by monitoring the success of a campaign by how many "likes" it receives.[42] Clicktivism strives to quantify support, presence, and outreach without putting emphasis on real participation.[42] The act of "liking" a photo on Facebook or clicking a petition is in itself symbolic, because it demonstrates that the individual is aware of the situation and shows their peers their opinions and thoughts on certain subject matters.[43]
Critics of clicktivism state that this new phenomenon makes social movements resemble advertising campaigns, in which messages are tested, clickthrough rates are recorded, and A/B testing is often done. To improve these metrics, messages are reduced to make their "asks easier and actions simpler". This in turn reduces social action to a membership list of email addresses rather than a body of engaged people.[44][16]
Charity slacktivism is an action in support of a cause that takes little effort on the part of the individual. Examples of online charity slacktivism include posting a Facebook status to support a cause, "liking" a charity organization's cause on Facebook, tweeting or retweeting a charity organization's request for support on Twitter, signing Internet petitions, and posting and sharing YouTube videos about a cause. It can be argued that a person "likes" a photo not in order to help the person in need, but to feel better about themselves and to feel that they have done something positive for the person or scene depicted. This phenomenon has become increasingly popular with individuals, whether by going on trips to help less fortunate people or by "liking" many posts on Facebook in order to "help" the person in the picture. Examples include the Kony 2012 campaign that exploded briefly in social media in March 2012.[45]
Examples of offline charity slacktivism include awareness wristbands and paraphernalia in support of causes, such as the Livestrong wristband, as well as bumper stickers and mobile donating. In 2020, during the COVID-19 pandemic, Clap for Our Carers gained traction in several countries.
The term slacktivism is often used to describe the world's reaction to the 2010 Haiti earthquake. The Red Cross managed to raise $5 million in two days via text-message donations.[46] Social media outlets were used to spread the word about the earthquake. The day after the earthquake, CNN reported that four of Twitter's top topics were related to the Haitian earthquake.[46]
This is the act of purchasing products that highlight support for a particular cause and advertise that a percentage of the cost of the good will go to the cause. In some instances the donated funds are spread across various entities within one foundation, which in theory helps several deserving areas of the cause. Criticism tends to highlight the thin spread of the donation.[citation needed] An example of this is the Product Red campaign, whereby consumers can buy Red-branded variants of common products, with a proportion of proceeds going towards fighting AIDS.
Slacktivists may also purchase a product from a company because it has a history of donating funds to charity, as a way to indirectly support a cause. For example, a slacktivist may buy Ben and Jerry's ice cream because its founders invested in the nation's children or promoted social and environmental concerns.[47]
Certain forms of slacktivism have political goals in mind, such as gaining support for a presidential campaign, or signing an internet petition that aims to influence governmental action.
The online petition website Change.org claimed it was attacked by Chinese hackers and brought down in April 2011. Change.org stated that the fact that hackers "felt the need to bring down the website must be seen as a testament to Change.org's fast-growing success and a vindication of one particular petition: A Call for the Release of Ai Weiwei."[48] Ai Weiwei, a noted human rights activist who had been arrested by Chinese authorities in April 2011, was released from detention in Beijing on June 22, 2011, which Change.org deemed a victory for its online campaign and petition demanding Ai's release.
Sympathy slacktivism can be observed on social media networks such as Facebook, where users can like pages to support a cause or show support to people in need. Also common in this type of slacktivism is for users to change their profile pictures to one that shows their peers that they care about the topic.[49] This can be considered a virtual counterpart of wearing a pin to display one's sympathies; however, acquiring such a pin often requires a monetary donation to the cause, while changing a profile picture does not.
In sympathy slacktivism, images of young children, animals, and people seemingly in need are often used to give a sense of credibility to viewers, making the campaign resonate longer in their memory. Using children in campaigns is often the most effective way of reaching a larger audience, because most adults, when exposed to the ad, cannot ignore a child in need.
An example of sympathy slacktivism is the Swedish newspaper Aftonbladet's campaign "Vi Gillar Olika" (literally, "We like different").[50] This campaign was launched against xenophobia and racism, a hot topic in Sweden in 2010. The main icon of the campaign was an open hand with the text "Vi Gillar Olika," an icon adopted from the French organisation SOS Racisme's 1985 campaign Touche pas à mon Pote ("Hands off my buddy").[51]
Another example was when Facebook users added a Norwegian flag to their profile pictures after the 2011 Norway attacks, in which 77 people were killed. This campaign received attention from the Swedish Moderate Party, which encouraged its supporters to update their profile pictures.[52]
Kony 2012 was a campaign created by Invisible Children in the form of a 28-minute video about the dangerous situation of many children in Africa at the hands of Joseph Kony, the leader of the Lord's Resistance Army (LRA). The LRA is said to have abducted a total of nearly 60,000 children, brainwashing the boys to fight for them and turning the girls into sex slaves.[53]
The campaign was used as an experiment to see whether an online video could reach such a large audience that it would make a war criminal, Joseph Kony, famous. It became the fastest-growing viral video of all time, reaching 100 million views in six days.[54] The campaign generated unprecedented awareness, appealing to international leaders as well as the general population.
The reaction to and participation in this campaign demonstrate charity slacktivism because of the way many viewers responded. The campaign's success has been measured mostly by how many people viewed the video rather than by the donations received. After watching the video, many viewers felt compelled to take action. This action, however, mostly took the form of sharing the video and potentially pledging support.[55]
As described by Sarah Kendzior of Al Jazeera:
The video seemed to embody the slacktivist ethos: viewers oblivious to a complex foreign conflict are made heroic by watching a video, buying a bracelet, hanging a poster. Advocates of Invisible Children's campaign protested that their desire to catch Kony was sincere, their emotional response to the film genuine—and that the sheer volume of supporters calling for the capture of Joseph Kony constituted a meaningful shift in human rights advocacy.[56]
In the weeks following the kidnapping of hundreds of schoolgirls by the organization Boko Haram, the hashtag #BringBackOurGirls began to trend globally on Twitter as the story continued to spread,[57] and by May 11 it had attracted 2.3 million tweets. One notable contribution came from the First Lady of the United States, Michelle Obama, who posted a photo of herself holding a sign displaying the hashtag to her official Twitter account, helping to spread awareness of the kidnapping.[58] Comparisons have been made between the #BringBackOurGirls campaign and the Kony 2012 campaign.[59] The campaign was labeled slacktivism by some critics, particularly as weeks and months passed with no progress in recovering the kidnapped girls.[60][61]
According to Mkeki Mutah, uncle of one of the kidnapped girls:
There is a saying: "Actions speak louder than words." Leaders from around the world came out and said they would assist to bring the girls back, but now we hear nothing. The question I wish to raise is: why? If they knew they would not do anything, they wouldn't have even made that promise at all. By just coming out to tell the world, I see that as a political game, which it shouldn't be so far as the girls are concerned.[62]
https://en.wikipedia.org/wiki/Slacktivism
A virtual assistant (typically abbreviated to VA, also called a virtual office assistant)[1] is generally self-employed and provides professional administrative, technical, or creative (social) assistance to clients remotely from a home office.[2] Because virtual assistants are independent contractors rather than employees, clients are not responsible for any employee-related taxes, insurance, or benefits, except insofar as those indirect expenses are included in the VA's fees. Clients also avoid the logistical problem of providing extra office space, equipment, or supplies. Clients pay for 100% productive work and can work with virtual assistants individually or through multi-VA firms to meet their exact needs. Virtual assistants usually work for other small businesses[3] but can also support busy executives. It is estimated that there are as few as 5,000–10,000 or as many as 25,000 virtual assistants worldwide. The profession is growing in centralized economies with "fly-in fly-out" staffing practices.[4][5][6]
In terms of pay, according to Glassdoor, the annual salary for virtual assistants in the US is $35,922.[7] However, worldwide, many virtual assistants work as freelancers for an hourly wage. One recent survey of 400 virtual assistants on the popular freelancer site Upwork shows a huge discrepancy in the hourly pay commanded by virtual assistants in different countries.[8]
Common modes of communication and data delivery include the internet, e-mail and phone-call conferences,[9] online workspaces, and fax. Increasingly, virtual assistants are utilizing technology such as Skype, Zoom, and Slack, as well as Google Voice. Professionals in this business work on a contractual basis, and long-lasting cooperation is standard. Typically, office administrative experience is expected in positions such as executive assistant, office manager/supervisor, secretary, legal assistant, paralegal, legal secretary, real estate assistant, and information technology.
In recent years, virtual assistants have also worked their way into many mainstream businesses, and with the advent of VOIP services such as Skype and Zoom, it has become possible to have a virtual assistant who can answer phone calls remotely, without the end user's knowledge. This allows businesses to add a personal touch in the form of a receptionist without the additional cost of hiring someone.[citation needed]
Virtual assistants consist of individuals as well as companies who work remotely as independent professionals, providing a wide range of products and services to both businesses and consumers. Virtual assistants perform many different roles, including typical secretarial work, website editing, social media marketing, customer service, data entry, accounting (MYOB, QuickBooks), and many other remote tasks.
Virtual assistants come from a variety of business backgrounds, but most have several years' experience earned in the "real" (non-virtual) business world, or several years' experience working online or remotely.
A dedicated virtual assistant is someone working in an office under the management of a company. The facility, internet connection, and training are provided by the company, though not in all cases. A home-based virtual assistant works either in an office-sharing environment or from home. General VAs are sometimes called online administrative assistants, online personal assistants, or online sales assistants. Virtual webmaster assistants, virtual marketing assistants, and virtual content-writing assistants are specific professionals who are usually experienced employees from corporate environments who have set up their own virtual offices.
Virtual assistants were an integral part of the 2007 bestselling book The 4-Hour Workweek by Tim Ferriss.[10] Ferriss claimed to have hired virtual assistants to check his email, pay his bills, and run parts of his company.[11]
https://en.wikipedia.org/wiki/Virtual_assistant_(occupation)
Virtual management is the supervision, leadership, and maintenance of virtual teams—dispersed work groups that rarely meet face to face. As the number of virtual teams has grown, facilitated by the Internet, globalization, outsourcing, and remote work, the need to manage them has also grown. The challenging task of managing these teams has been made much easier by the availability of online collaboration tools, adaptive project management software, efficient time-tracking programs, and other related systems and tools. This article provides information concerning some of the important management factors involved with virtual teams and the life cycle of managing a virtual team.
Due to developments in information technology within the workplace, along with a need to compete globally and address competitive demands, organizations have embraced virtual management structures.[1] As in face-to-face teams, management of virtual teams is a crucial component in the effectiveness of the team. However, compared to leaders of face-to-face teams, virtual team leaders face the following difficulties: (a) logistical problems, including coordination of work across different time zones and physical distances; (b) interpersonal issues, including the ability to establish effective working relationships in the absence of frequent face-to-face communication; and (c) technological difficulties, including finding and learning to use appropriate technology.[2] In global virtual teams, there is the additional dimension of cultural differences, which affect a virtual team's functioning.
For the team to reap the benefits mentioned above, the manager must consider the following factors.
A virtual team leader must ensure a feeling of trust among all team members—something all team members have an influence on and must be aware of. However, the team leader is responsible for this in the first place. Team leaders must ensure a sense of psychological safety within a team by allowing all members to speak honestly and directly, but respectfully, to each other.[3]
For a team to succeed, the manager must schedule meetings to ensure participation. This carries over to the realm of virtual teams, but in this case these meetings are also virtual. Due to the difficulties of communicating in a virtual team, it is imperative that team members attend meetings. The first team meeting is crucial and establishes lasting precedents for the team.[4] Furthermore, there are numerous features of a virtual team environment that may affect the development of follower trust. The team members have to trust that the leader is allocating work fairly and evaluating team members equally.[5]
An extensive study conducted over eight years[6] examined what factors increase leader effectiveness in virtual teams. One such factor is that virtual team leaders need to spend more time than their conventional counterparts being explicit about expectations, because the patterns of behavior and dynamics of interaction in virtual teams are unfamiliar. Moreover, even in information-rich virtual teams using video conferencing, it is hard to replicate the rapid exchange of information and cues available in face-to-face discussions. To develop role clarity within virtual teams, leaders should focus on developing: (a) clear objectives and goals for tasks; (b) comprehensive milestones for deliverables; and (c) communication channels for seeking feedback on unclear role guidance.
When determining an effective leadership style for a culturally diverse team, there are various options: directive (ranging from directive to participatory), transactional (reward-based), or transformational influence. Leadership must ensure effective communication and understanding, clear and shared plans and task assignments, and a collective sense of belonging in the team. Further, the role of a team leader is to coordinate tasks and activities, motivate team members, facilitate collaboration, and resolve conflicts when needed. This shows that a team leader's role is crucial in effective virtual team management and in creating a knowledge-sharing environment.[7]
Virtual team leaders must become virtually present so they can closely monitor team members and note changes that might affect their ability to undertake their tasks. Due to the distributed nature of virtual teams, team members have less awareness of the wider situation of the team or dynamics of the overall team environment. Consequently, as situations change in a virtual team environment, such as adjustments to task requirements, modification of milestones, or changes to the goals of the team, it is important that leaders monitor followers to ensure they are aware of these changes and make amendments as required.[8] The leaders of virtual teams do not possess the same powers of physical observation, and have to be creative in setting up structures and processes so that variations from expectations can be observed well virtually (for instance, virtual team leaders have to sense when "electronic" silence means acquiescence rather than inattention). At the same time, leaders of virtual teams cannot assume that members are prepared for virtual meetings and also have to ensure that the unique knowledge of each distributed person on the virtual team is fully utilized.[9] Virtual team leaders should be aware that information overload may result in situations when a leader has provided too much information to a team member.[10]
Finally, when examining virtual teams, it is crucial to consider that they differ in their degree of virtuality. Virtuality refers to a continuum of how "virtual" a team is.[11] There are three predominant factors that contribute to virtuality, namely: (a) the richness of communication media; (b) distance between team members, both in time zones and geographical dispersion; and (c) organizational and cultural diversity.
In the field of managing virtual research and development (R&D) teams, certain detriments have arisen in the management decisions made when leading a team.[12] The first of these detriments is the lack of potential for radical innovation, which is brought about by the lack of affinity with certain technologies or processes. This causes a decrease in certainty about the feasibility of execution. As a result, virtual R&D teams focus on incremental innovations. The second detriment is that the nature of the project may need to change: depending on how interdependent each step is, the ability of a virtual team to successfully complete the project varies at each step. Thirdly, the sharing of knowledge, which was identified above as an important ingredient in managing a virtual team, becomes even more important, albeit difficult. Some knowledge and information is simple and easy to explain and share, but other knowledge may be more content- or domain-specific and not so easy to explain. In a face-to-face group this can be done by walking a team member through the topic slowly during a lunch break, but in a virtual team this is no longer possible, and the information is at risk of being misunderstood, leading to setbacks in the project. Finally, the distribution and bundling of resources is also much altered by the move from collocation to virtual space. Where once the team was all in one place and resources could be split there as needed, now the team can be anywhere, and the same resources still need to get to the correct people. This takes time, effort, and coordination to avoid potential setbacks or conflicts.[12]
To effectively use the management factors described above, it is important to know when in the life cycle of a virtual team they would be most useful. According to one model,[13] the life cycle of virtual team management includes five stages:
The initial task during the implementation of a team is the definition of the team's general purpose, together with the determination of the level of virtuality that might be appropriate to achieve these goals. These decisions are usually determined by strategic factors such as mergers, increase of market span, cost reductions, flexibility and reactivity to the market, etc. Management-related activities that should take place during the preparation phase include writing a mission statement, personnel selection, task design, rewards-system design, choosing appropriate technology, and organizational integration.[13]
In regards to personnel selection, virtual teams have an advantage. To maximize outcomes, management wants the best team it can have. Before virtual teams, they did this by gathering the "best available" workers and forming a team. These teams did not contain the best workers of the field, because they were busy with their own projects, or were too far away to meet the group. With virtual teams, managers can select personnel from anywhere in the world, and so from a wider pool.[14]
It is highly recommended that, at the beginning of virtual teamwork, all members meet each other face to face. Crucial elements of such a “kick-off” workshop are getting acquainted with the other team members, clarifying the team goals, clarifying the roles and functions of the team members, providing information and training on how communication technologies can be used efficiently, and developing general rules for the teamwork. As a consequence, “kick-off” workshops are expected to promote clarification of team processes, trust building, building of a shared interpretive context, and high identification with the team.[13]
Getting acquainted, goal clarification, and development of intra-team rules should also be accomplished during this phase. Initial field data comparing virtual teams with and without such “kick-off” meetings confirm a generally positive effect on team effectiveness, although more differentiated research is necessary. Experimental studies demonstrate that getting acquainted before the start of computer-mediated work facilitates cooperation and trust.[13]
One of the manager's roles during launch is to create activities or events that allow for team building. These kickoff events should serve three major goals: everyone on the team is well versed in the technology involved, everyone knows what is expected of them and when it is expected, and finally have everyone get to know one another. By meeting all three goals the virtual team may be far more successful, and it lightens everyone's load.[15]
After the launch of a virtual team, work effectiveness and a constructive team climate have to be maintained using performance management strategies. These comprehensive management strategies arise from the widely acknowledged difficulty of working in virtual teams.[16] Research shows that constructs and expectations of team membership, leadership, goal setting, social loafing, and conflict differ across cultural groups and therefore strongly affect team performance. In the early team formation process, one thing to agree on within a team is the meaning of leadership and role differentiation for the team leader and other team members. To apply this, the leader must show active leadership to create a shared conceptualization of the team's meaning, focus, and function.[7]
The following discussion is again restricted to issues on which empirical results are already available. These issues are leadership, communication within virtual teams, team members' motivation, and knowledge management.[13]
Leadership is a central challenge in virtual teams. Particularly, all kinds of direct control are difficult when team managers are not at the same location as the team members. As a consequence, delegative management principles are considered that shift parts of classic managerial functions to the team members. However, team members only accept and fulfill such managerial functions when they are motivated and identify with the team and its goals, which is again more difficult to achieve in virtual teams. Next, empirical results on three leadership approaches are summarized that differ in the degree of autonomy of the team members: Electronic monitoring as an attempt to realize directive leadership over distance, management by objectives (MBO) as an example for delegative leadership principles, and self-managing teams as an example for rather autonomous teamwork.[13]
One way to maintain control over a virtual team is through motivators and incentives. Both are common techniques implemented by managers for collocated teams, but with slight adjustments they can be used effectively for virtual teams as well. A commonly held belief is that working online is not particularly important or impactful. This belief can be changed by notifying employees that their work is being sent to the managers. This attaches the importance of career prospects to the work and makes it more meaningful for the workers.[15]
Communication processes are perhaps the most frequently investigated variables relevant to the regulation of virtual teamwork. By definition, communication in virtual teams is predominantly based on electronic media such as e-mail, telephone, video conference, etc. The main concern here is that electronic media reduce the richness of information exchange compared to face-to-face communication.[13][15] This difference in richness of information is an idea shared by multiple researchers, and there are some methods to work around the drop created by working in a virtual environment. One such method is to use the anonymity provided by working digitally: it lets people share concerns without worrying about being identified.[15] This helps overcome the lack of richness by providing a safe method to honestly provide feedback and information. Predominant research issues have been conflict escalation and disinhibited communication (“flaming”), the fit between communication media and communication contents, and the role of non-job-related communication.[13] These research issues revolve around the idea that people become more hostile over a virtual medium, making the working environment unhealthy.[17] These findings were quickly dismissed in the context of virtual teams, because virtual teams carry the expectation that members will work together for a longer time, and the level of anonymity differs from a one-off online interaction.[18] One of the important needs for successful communication is the ability to have every member of the group together repeatedly over time. Effective dispersed groups show spikes in presence during communication over time, while ineffective groups do not have such dramatic spikes.[19]
For the management of motivational and emotional processes, three groups of such processes have been addressed in empirical investigations so far: motivation and trust, team identification and cohesion, and satisfaction of the team members. Since most of these variables originate within the person, they can vary considerably among the members of a team, requiring appropriate aggregation procedures for multilevel analyses (e.g. motivation may be mediated by interpersonal trust[20]).[13]
Systematic research is needed on the management of knowledge and the development of shared understanding within the teams, particularly since theoretical analyses sometimes lead to conflicting expectations. The development of such “common ground” might be particularly difficult in virtual teams because sharing of information and the development of a “transactive memory” (i.e., who knows what in the team) is harder due to the reduced amount of face-to-face communication and the reduced information about individual work contexts.[13]
Virtual teams can be supported by personnel and team development interventions. The development of such training concepts should be based on an empirical assessment of the needs and/or deficits of the team and its members, and the effectiveness of the trainings should be evaluated empirically.[21] The steps of team development include assessment of needs/deficits, individual and team training, and evaluation of training effects.[13]
One such development intervention is to have the virtual team self-facilitate. Normally, a team brings in an outside facilitator to ensure that the team is correctly using the technology. This is a costly method of developing the team, but virtual teams can self-facilitate. This lessens the need for an outside facilitator, and saves the team time, effort, and resources.[15]
Finally, the disbanding of virtual teams and the re-integration of the team members is an important issue that has been neglected not only in empirical but also in most of the conceptual work on virtual teams. However, particularly when virtual project teams have only a short life-time and reform again quickly, careful and constructive disbanding is mandatory to maintain high motivation and satisfaction among the employees. Members of transient project teams anticipate the end of the teamwork in the foreseeable future, which in turn overshadows the interaction and shared outcomes. The final stage of group development should be a gradual emotional disengagement that includes both sadness about separation and (at least in successful groups) joy and pride in the achievements of the team.[13]
Although many organizations were actively shifting toward remote work even before the COVID-19 pandemic, the pandemic further popularized the virtual team concept. According to market sources, around 80% of global corporate remote-work policies shifted to virtual and mixed forms of team collaboration during the pandemic.[22] With worldwide lockdowns and the time-management challenges they brought, remote work became a necessity for the majority, and virtual management became a way of life for business owners and leaders.
https://en.wikipedia.org/wiki/Virtual_management
Inargumentation theory, anargumentum ad populum(Latinfor 'appeal to the people')[1]is afallacious argumentwhich is based on claiming a truth or affirming something is good or correct because many people think so.[2]
Other names for the fallacy include:
Argumentum ad populumis a type ofinformal fallacy,[1][14]specifically afallacy of relevance,[15][16]and is similar to anargument from authority(argumentum ad verecundiam).[14][4][9]It uses an appeal to the beliefs, tastes, or values of a group of people,[12]stating that because a certain opinion or attitude is held by a majority, or even everyone, it is therefore correct.[12][17]
Appeals to popularity are common in commercial advertising that portrays products as desirable because they are used by many people[9]or associated with popular sentiments[18]instead of communicating the merits of the products themselves.
Theinverseargument, that something that is unpopular must be flawed, is also a form of this fallacy.[6]
The fallacy is similar in structure to certain other fallacies that involve a confusion between the "justification" of a belief and its "widespread acceptance" by a given group of people. When an argument uses the appeal to the beliefs of a group of experts, it takes on the form of an appeal to authority; if the appeal relates to the beliefs of a group of respected elders or the members of one's community over a long time, then it takes on the form of anappeal to tradition.
The philosopherIrving Copidefinedargumentum ad populumdifferently from an appeal to popular opinion itself,[19]as an attempt to rouse the "emotions and enthusiasms of the multitude".[19][20]
Douglas N. Waltonargues that appeals to popular opinion can be logically valid in some cases, such as in political dialogue within ademocracy.[21]
In some circumstances, a person may argue that the fact that Y people believe X to be true implies that X isfalse. This line of thought is closely related to theappeal to spitefallacy given that it invokes a person's contempt for the general populace or something about the general populace to persuade them that most are wrong about X. Thisad populumreversal commits the same logical flaw as the original fallacy given that the idea "X is true" is inherently separate from the idea that "Y people believe X": "Y people believe in X as true, purely because Y people believe in it, and not because of any further considerations. Therefore X must be false." While Y people can believe X to be true for fallacious reasons, X might still be true. Their motivations for believing X do not affect whether X is true or false.
Y = most people, a given quantity of people, people of a particular demographic.
X = a statement that can be true or false.
Examples:
In general, the reversal usually goes: "Most people believe A and B are both true. B is false. Thus, A is false." The similar fallacy ofchronological snobberyis not to be confused with thead populumreversal. Chronological snobbery is the claim that if belief in both X and Y was popularly held in the past and if Y was recently proved to be untrue then X must also be untrue. That line of argument is based on a belief in historical progress and not—like thead populumreversal is—on whether or not X and/or Y is currently popular.
Appeals to public opinion are valid in situations where consensus is the determining factor for the validity of a statement, such as linguistic usage and definitions of words.
Linguistic descriptivistsargue that correct grammar, spelling, and expressions are defined by the language's speakers, especially in languages which do not have a central governing body. According to this viewpoint, if an incorrect expression is commonly used, it becomes correct. In contrast,linguistic prescriptivistsbelieve that incorrect expressions are incorrect regardless of how many people use them.[22]
Special functionsaremathematical functionsthat have well-established names and mathematical notations due to their significance in mathematics and other scientific fields.
There is no formal definition of what makes a function a special function; instead, the termspecial functionis defined by consensus. Functions generally considered to be special functions includelogarithms,trigonometric functions, and theBessel functions.
https://en.wikipedia.org/wiki/Argumentum_ad_populum
Tyranny of the majorityrefers to a situation inmajority rulewhere the preferences and interests of the majority dominate the political landscape, potentially sidelining or repressing minority groups and using majority rule to take non-democratic actions.[1]This idea has been discussed by various thinkers, includingJohn Stuart MillinOn Liberty[2]andAlexis de TocquevilleinDemocracy in America.[3][4]
To reduce the risk of majority tyranny, modern democracies frequently have countermajoritarian institutions that restrict the ability of majorities to repress minorities and stymie political competition.[1][5]In the context of a nation,constitutionallimits on the powers of a legislative body such as abill of rightsorsupermajority clausehave been used.Separation of powersorjudicial independencemay also be implemented.[6]
Insocial choice, a tyranny-of-the-majority scenario can be formally defined as a situation where the candidate or decision preferred by a majority is greatly inferior (hence "tyranny") to the socially optimal candidate or decision according to some measure of excellence such astotal utilitarianismor theegalitarian rule.
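The formal contrast above can be made concrete with a small sketch (an illustrative example of my own; the voter profile and function names are not from any cited source): the option a majority ranks first can differ sharply from the option that maximizes total utility.

```python
from collections import Counter

def majority_winner(profiles):
    """Option ranked first (highest utility) by the most voters."""
    votes = Counter(max(p, key=p.get) for p in profiles)
    return votes.most_common(1)[0][0]

def utilitarian_winner(profiles):
    """Option maximizing total utility summed over all voters."""
    options = profiles[0].keys()
    return max(options, key=lambda o: sum(p[o] for p in profiles))

# Two voters mildly prefer A; a third voter is severely harmed by A.
profiles = [
    {"A": 1.0, "B": 0.9},
    {"A": 1.0, "B": 0.9},
    {"A": 0.0, "B": 0.9},
]
print(majority_winner(profiles))     # "A" wins the vote...
print(utilitarian_winner(profiles))  # ...but "B" is socially optimal
```

Here "A" wins 2–1 while costing the minority voter everything, which is exactly the "greatly inferior" majority outcome the definition describes; under the egalitarian rule (maximize the worst-off voter's utility) the gap is even starker.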
The origin of the term "tyranny of the majority" is commonly attributed toAlexis de Tocqueville, who used it in his bookDemocracy in America. It appears in Part 2 of the book in the title of Chapter 8, "What Moderates the Tyranny of the Majority in the United States' Absence of Administrative Centralization" (French:De ce qui tempère aux États-Unis latyrannie de la majorité[7]) and in the previous chapter in the names of sections such as "The Tyranny of the Majority" and "Effects of the Tyranny of the Majority on American National Character; the Courtier Spirit in the United States".[8]
While the specific phrase "tyranny of the majority" is frequently attributed to variousFounding Fathers of the United States, onlyJohn Adamsis known to have used it, arguing against government by a singleunicameralelected body. Writing in defense of theConstitutionin March 1788,[9]Adams referred to "a single sovereign assembly, each member…only accountable to his constituents; and the majority of members who have been of one party" as a "tyranny of the majority", attempting to highlight the need instead for "amixed government, consisting ofthree branches". Constitutional authorJames Madisonpresented a similar idea inFederalist 10, citing the destabilizing effect of "the superior force of an interested and overbearing majority" on a government, though the essay as a whole focuses on the Constitution's efforts to mitigate factionalism generally.
Later users includeEdmund Burke, who wrote in a 1790 letter that "The tyranny of a multitude is a multiplied tyranny."[10]It was further popularised byJohn Stuart Mill, influenced by Tocqueville, inOn Liberty(1859).Friedrich Nietzscheused the phrase in the first sequel toHuman, All Too Human(1879).[11]Ayn Randwrote that individual rights are not subject to a public vote, and that the political function of rights is precisely to protect minorities from oppression by majorities and "the smallest minority on earth is the individual".[12]InHerbert Marcuse's 1965 essayRepressive Tolerance, he said "tolerance is extended to policies, conditions, and modes of behavior which should not be tolerated because they are impeding, if not destroying, the chances of creating an existence without fear and misery" and that "this sort of tolerance strengthens the tyranny of the majority against which authentic liberals protested".[13]In 1994, legal scholarLani Guinierused the phrase as the title for a collection oflaw reviewarticles.[14]
A term used inClassicalandHellenistic Greecefor oppressive popular rule wasochlocracy("mob rule");tyrannymeant rule by one man—whether undesirable or not.
Herbert Spencer, in "The Right to Ignore the State" (1851), pointed out the problem with the following example:[15]
Suppose, for the sake of argument, that, struck by someMalthusian panic, a legislature duly representing public opinion were to enact that all children born during the next ten years should be drowned. Does anyone think such an enactment would be warrantable? If not, there is evidently a limit to the power of a majority.
Secession of theConfederate States of Americafrom the United States was anchored by a version ofsubsidiarity, found within the doctrines ofJohn C. Calhoun.Antebellum South Carolinautilized Calhoun's doctrines in theOld Southas public policy, adopted from his theory ofconcurrent majority. This "localism" strategy was presented as a mechanism to circumvent Calhoun's perceived tyranny of the majority in the United States. Each state presumptively held the Sovereign power to block federal laws that infringed uponstates' rights, autonomously. Calhoun's policies directly influenced Southern public policy regarding slavery, and undermined theSupremacy Clausepower granted to the federal government. The subsequent creation of theConfederate States of Americacatalyzed theAmerican Civil War.
Nineteenth-century concurrent majority theories offered logical counterbalances to the standard harms of majority tyranny, harms recognized since Antiquity. Essentially, illegitimate or temporary coalitions commanding a majority could, by sheer volume, disproportionately outweigh and harm any significant minority. Calhoun's contemporary doctrine was presented as a limitation within American democracy to prevent traditional tyranny, whether actual or imagined.[16]
Federalist No. 10"The Same Subject Continued: The Union as a Safeguard Against Domestic Faction and Insurrection" (November 23, 1787):[17]
The inference to which we are brought is, that the CAUSES of faction cannot be removed, and that relief is only to be sought in the means of controlling its EFFECTS. If a faction consists of less than a majority, relief is supplied by the republican principle, which enables the majority to defeat its sinister views by regular vote. It may clog the administration, it may convulse the society; but it will be unable to execute and mask its violence under the forms of the Constitution. When a majority is included in a faction, the form of popular government, on the other hand, enables it to sacrifice to its ruling passion or interest both the public good and the rights of other citizens. To secure the public good and private rights against the danger of such a faction, and at the same time to preserve the spirit and the form of popular government, is then the great object to which our inquiries are directed...By what means is this object attainable? Evidently by one of two only. Either the existence of the same passion or interest in a majority at the same time must be prevented, or the majority, having such coexistent passion or interest, must be rendered, by their number and local situation, unable to concert and carry into effect schemes of oppression.
With respect to American democracy, Tocqueville, in his bookDemocracy in America, says:
So what is a majority taken as a whole, if not an individual who has opinions and, most often, interests contrary to another individual called the minority. Now, if you admit that an individual vested with omnipotence can abuse it against his adversaries, why would you not admit the same thing for the majority? Have men, by gathering together, changed character? By becoming stronger, have they become more patient in the face of obstacles? As for me, I cannot believe it; and the power to do everything that I refuse to any one of my fellows, I will never grant to several.[18]
So when I see the right and the ability to do everything granted to whatever power, whether called people or king, democracy or aristocracy, whether exercised in a monarchy or a republic, I say: the seed of tyranny is there and I try to go and live under other laws.[19]
When a man or a party suffers from an injustice in the United States, to whom do you want them to appeal? To public opinion? That is what forms the majority. To the legislative body? It represents the majority and blindly obeys it. To the executive power? It is named by the majority and serves it as a passive instrument. To the police? The police are nothing other than the majority under arms. To the jury? The jury is the majority vested with the right to deliver judgments. The judges themselves, in certain states, are elected by the majority. However iniquitous or unreasonable the measure that strikes you may be, you must therefore submit to it or flee. What is that if not the very soul of tyranny under the forms of liberty?
Robert A. Dahlargues that the tyranny of the majority is a spurious dilemma (p. 171):[21]
Critic: Are you trying to say that majority tyranny is simply an illusion? If so, that is going to be small comfort to a minority whose fundamental rights are trampled on by an abusive majority. I think you need to consider seriously two possibilities; first, that a majority will infringe on the rights of a minority, and second, that a majority may oppose democracy itself.

Advocate: Let's take up the first. The issue is sometimes presented as a paradox. If a majority is not entitled to do so, then it is thereby deprived of its rights; but if a majority is entitled to do so, then it can deprive the minority of its rights. The paradox is supposed to show that no solution can be both democratic and just. But the dilemma seems to be spurious. Of course a majority might have the power or strength to deprive a minority of its political rights. […] The question is whether a majority may rightly use its primary political rights to deprive a minority of its primary political rights. The answer is clearly no. To put it another way, logically it can't be true that the members of an association ought to govern themselves by the democratic process, and at the same time a majority of the association may properly strip a minority of its primary political rights. For, by doing so the majority would deny the minority the rights necessary to the democratic process. In effect therefore the majority would affirm that the association ought not to govern itself by the democratic process. They can't have it both ways.

Critic: Your argument may be perfectly logical. But majorities aren't always perfectly logical. They may believe in democracy to some extent and yet violate its principles. Even worse, they may not believe in democracy and yet they may cynically use the democratic process to destroy democracy. […] Without some limits, both moral and constitutional, the democratic process becomes self-contradictory, doesn't it?

Advocate: That's exactly what I've been trying to show.
Of course democracy has limits. But my point is that these are built into the very nature of the process itself. If you exceed those limits, then you necessarily violate the democratic process.
Regarding recent American politics (specificallyinitiatives), Donovan et al. argue that:
One of the original concerns about direct democracy is the potential it has to allow a majority of voters to trample the rights of minorities. Many still worry that the process can be used to harm gays and lesbians as well as ethnic, linguistic, and religious minorities. … Recent scholarly research shows that the initiative process is sometimes prone to produce laws that disadvantage relatively powerless minorities … State and local ballot initiatives have been used to undo policies – such as school desegregation, protections against job and housing discrimination, and affirmative action – that minorities have secured from legislatures.[22]
The notion that, in a democracy, the greatest concern is that the majority will tyrannise and exploit diverse smaller interests, has been criticised byMancur OlsoninThe Logic of Collective Action, who argues instead that narrow and well organised minorities are more likely to assert their interests over those of the majority. Olson argues that when the benefits of political action (e.g., lobbying) are spread over fewer agents, there is a stronger individual incentive to contribute to that political activity. Narrow groups, especially those who can reward active participation to their group goals, might therefore be able to dominate or distort political process, a process studied inpublic choice theory.
Class studies
Tyranny of the majority has also been prevalent in some class studies. Rahim Baizidi uses the concept of "democratic suppression" to analyze the tyranny of the majority in economic classes. According to this, the majority of the upper and middle classes, together with a small portion of the lower class, form the majority coalition of conservative forces in the society.[23]
Anti-federalists of public choice theory point out thatvote tradingcan protect minority interests from majorities in representative democratic bodies such as legislatures.[citation needed]They continue that direct democracy, such as statewide propositions on ballots, does not offer such protections.[weasel words]
https://en.wikipedia.org/wiki/Tyranny_of_the_majority
Thebandwagon effectis a psychological phenomenon where people adopt certain behaviors, styles, or attitudes simply because others are doing so.[1]More specifically, it is acognitive biasby whichpublic opinionor behaviours can alter due to particular actions and beliefs rallying amongst the public.[2]It is a psychological phenomenon whereby the rate of uptake of beliefs, ideas,fads and trendsincreases with respect to the proportion of others who have already done so.[3]As more people come to believe in something, others also "hop on thebandwagon" regardless of the underlying evidence.[citation needed]
Following others' actions or beliefs can occur because ofconformismor deriving information from others. Much of the influence of the bandwagon effect comes from the desire to 'fit in' with peers; by making similar selections as other people, this is seen as a way to gain access to a particular social group.[4]An example of this isfashion trendswherein the increasing popularity of a certain garment or style encourages more acceptance.[5]When individuals makerationalchoices based on the information they receive from others, economists have proposed thatinformation cascadescan quickly form in which people ignore their personal information signals and follow the behaviour of others.[6]Cascades explain why behaviour is fragile as people understand that their behaviour is based on a very limited amount of information. As a result, fads form easily but are also easily dislodged.[citation needed]The phenomenon is observed in various fields, such aseconomics,political science,medicine, andpsychology.[7]Insocial psychology, people's tendency to align their beliefs and behaviors with a group is known as 'herd mentality' or 'groupthink'.[8]Thereverse bandwagon effect(also known as thesnob effectin certain contexts) is a cognitive bias that causes people to avoid doing something, because they believe that other people are doing it.[9]
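The information-cascade mechanism mentioned above can be sketched with a simple sequential-choice simulation (a minimal illustration under my own assumptions, in the spirit of the standard economic cascade model; the parameters and function name are hypothetical): each agent weighs one noisy private signal against the tally of actions already observed, so once the observed actions outweigh any single signal, later agents imitate regardless of what they privately know.

```python
import random

def cascade(signal_accuracy, n_agents, true_state=1, seed=0):
    """Agents choose 1 or 0 in sequence, each combining one private
    binary signal with the tally of predecessors' actions; ties are
    broken by the agent's own signal."""
    rng = random.Random(seed)
    actions = []
    for _ in range(n_agents):
        # Private signal is correct with probability `signal_accuracy`.
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        ups = sum(actions)
        downs = len(actions) - ups
        # Predecessors' net evidence plus the agent's own +/-1 signal.
        net = (ups - downs) + (1 if signal == 1 else -1)
        actions.append(1 if net > 0 else 0 if net < 0 else signal)
    return actions
```

Once the margin `ups - downs` reaches 2, an agent's own signal can no longer change the sign of `net`, so every later agent copies the crowd. The cascade therefore rests on only the first few signals, which is one way to see why, as noted above, such behaviour is fragile and fads are easily dislodged.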
The phenomenon where ideas become adopted as a result of their popularity has been apparent for some time. However, the metaphorical use of the termbandwagonin reference to this phenomenon began in 1848.[10]A literal "bandwagon" is awagonthat carries amusical ensemble, or band, during a parade, circus, or other entertainment event.[11][12]
The phrase "jump on the bandwagon" first appeared in American politics in 1848 during thepresidential campaignofZachary Taylor.Dan Rice, a famous and popular circus clown of the time, invited Taylor to join his circus bandwagon. As Taylor gained more recognition and his campaign became more successful, people began saying that Taylor's political opponents ought to "jump on the bandwagon" themselves if they wanted to be associated with such success.
Later, during the time of William Jennings Bryan's 1900 presidential campaign, bandwagons had become standard in campaigns,[13] and the phrase "jump on the bandwagon" was used as a derogatory term[when?], implying that people were associating themselves with success without considering what they were associating themselves with.
Despite its emergence in the late 19th century, it was only rather recently that the theoretical background of bandwagon effects has been understood.[12]One of the best-known experiments on the topic is the 1950s'Asch conformity experiment, which illustrates the individual variation in the bandwagon effect.[14][9]Academic study of the bandwagon effect especially gained interest in the 1980s, as scholars studied the effect ofpublic opinion pollson voter opinions.[10]
Individuals are highly influenced by the pressure and norms exerted by groups. As an idea or belief increases in popularity, people are more likely to adopt it; when seemingly everyone is doing something, there is an incredible pressure toconform.[1]Individuals' impressions of public opinion or preference can originate from several sources.
Some individual reasons behind the bandwagon effect include:
Another cause can come from distorted perceptions of mass opinion, known as 'false consensus' or 'pluralistic ignorance'.[failed verification]In politics, bandwagon effects can also come as result of indirect processes that are mediated by political actors. Perceptions of popular support may affect the choice of activists about which parties or candidates to support by donations or voluntary work in campaigns.[12]
The bandwagon effect works through aself-reinforcingmechanism, and can spread quickly and on a large-scale through apositive feedback loop, whereby the more who are affected by it, the more likely other people are to be affected by it too.[7][9]
A new concept that is originally promoted by only a single advocate or a minimal group of advocates can quickly grow and become widely popular, even when sufficient supporting evidence is lacking. What happens is that a new concept gains a small following, which grows until it reaches acritical mass, until for example it begins being covered bymainstream media, at which point a large-scale bandwagon effect begins, which causes more people to support this concept, in increasingly large numbers. This can be seen as a result of theavailability cascade, a self-reinforcing process through which a certain belief gains increasing prominence in public discourse.[9]
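The critical-mass dynamic described above can be illustrated with a Granovetter-style threshold model (a minimal sketch under assumed parameters, not drawn from the article): each person adopts once the fraction of adopters meets their personal threshold, so a single seed can tip the next person, who tips the next, and so on, while one small gap in the threshold distribution stalls the cascade entirely.

```python
def simulate_adoption(thresholds, seed_adopters, steps=200):
    """Spread adoption until no one new crosses their threshold;
    return the final fraction of the population that adopted."""
    n = len(thresholds)
    adopted = set(seed_adopters)
    for _ in range(steps):
        frac = len(adopted) / n
        new = {i for i, t in enumerate(thresholds) if t <= frac} - adopted
        if not new:
            break
        adopted |= new
    return len(adopted) / n

# Evenly spread thresholds: each new adopter tips the next person over.
thresholds = [i / 100 for i in range(100)]
print(simulate_adoption(thresholds, seed_adopters=[0]))  # full cascade: 1.0

# Raise one low threshold slightly and the chain breaks immediately.
gapped = list(thresholds)
gapped[1] = 0.02
print(simulate_adoption(gapped, seed_adopters=[0]))      # stalls at 0.01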
The bandwagon effect can take place in voting:[15] it occurs on an individual scale, where a voter's preference can shift because of the rising popularity of a candidate[16] or a policy position.[17] By changing preference, the voter aims to end up on the "winner's side".[18] Voters are more likely to be persuaded in this way in elections that are non-private or when the vote is highly publicised.[19]
The bandwagon effect has been applied to situations involvingmajority opinion, such as political outcomes, where people alter their opinions to the majority view.[20]Such a shift in opinion can occur because individualsdraw inferences[clarification needed]from the decisions of others, as in aninformational cascade.[21]
Perceptions of popular support may affect the choice of activists about which parties or candidates to support by donations or voluntary work in campaigns. They may strategically funnel these resources to contenders perceived as well supported and thus electorally viable, thereby enabling them to run more powerful, and thus more influential campaigns.[12]
American economist Gary Becker has argued that the bandwagon effect is powerful enough to flip the demand curve to be upward sloping. A typical demand curve is downward sloping: as prices rise, demand falls. According to Becker, however, an upward-sloping demand curve would imply that demand rises even as prices rise.[7]
The bandwagon effect comes about in two ways infinancial markets.
First, throughprice bubbles: these bubbles often happen in financial markets in which the price for a particularly popularsecuritykeeps on rising. This occurs when manyinvestorsline up to buy a securitybiddingup the price, which in return attracts more investors. The price can rise beyond a certain point, causing the security to be highlyovervalued.[7]
Second isliquidityholes: when unexpected news or events occur,market participantswill typically stop trading activity until the situation becomes clear. This reduces the number of buyers and sellers in the market, causing liquidity to decrease significantly. The lack of liquidity leavesprice discoverydistorted and causes massive shifts inasset prices, which can lead to increased panic, which further increases uncertainty, and the cycle continues.[7]
Inmicroeconomics, bandwagon effects may play out in interactions of demand and preference.[22]The bandwagon effect arises when people's preference for a commodity increases as the number of people buying it increases. Consumers may choose their product based on others' preferences believing that it is the superior product. This selection choice can be a result of directly observing the purchase choice of others or by observing the scarcity of a product compared to its competition as a result of the choice previous consumers have made. This scenario can also be seen in restaurants where the number of customers in a restaurant can persuade potential diners to eat there based on the perception that the food must be better than the competition due to its popularity.[4]This interaction potentially disturbs the normal results of the theory ofsupply and demand, which assumes that consumers make buying decisions exclusively based on price and their own personal preference.[7]
Decisions made by medical professionals can also be influenced by the bandwagon effect. Particularly, the widespread use and support of now-disproven medical procedures throughout history can be attributed to their popularity at the time. Layton F. Rikkers (2002),professor emeritusof surgery at theUniversity of Wisconsin–Madison,[23]calls these prevailing practicesmedical bandwagons, which he defines as "the overwhelming acceptance of unproved but popular [medical] ideas."[10]
Medical bandwagons have led to inappropriate therapies for numerous patients, and have impeded the development of more appropriate treatment.[24]
One paper from 1979 on the topic of bandwagons of medicine describes how a new medical concept or treatment can gain momentum and become mainstream, as a result of a large-scale bandwagon effect:[25]
One who supports a particular sports team, despite having shown no interest in that team until it started gaining success, can be considered a "bandwagon fan".[26]
As an increasing number of people begin to use a specific social networking site or application, others are more likely to begin using it too. The bandwagon effect also influences which posts are viewed and shared.[27]
One study used bandwagon effects to examine the comparative impact of two separate bandwagon heuristic cues (quantitative vs. qualitative) on changes in news readers' attitudes in an online comments section. Study 1 demonstrated that qualitative cues had a greater influence on news readers' judgments than quantitative cues. Study 2 confirmed the results of Study 1 and showed that people's attitudes are influenced by apparent public opinion, offering concrete evidence of the influence of digital bandwagon cues.[28]
The bandwagon effect can also affect the way the masses dress and can be responsible for clothing trends. People tend to want to dress in a manner that suits the current trend and are influenced by those they see often, normally celebrities. Such publicised figures typically act as the catalyst for the style of the current period. Once a small group of consumers attempts to emulate a particular celebrity's dress choice, more people tend to copy the style because of the pressure or desire to fit in and be liked by their peers.[citation needed]
https://en.wikipedia.org/wiki/Bandwagon_effect
Groupthinkis a psychologicalphenomenonthat occurs within agroup of peoplein which the desire for harmony orconformityin the group results in an irrational or dysfunctionaldecision-makingoutcome. Cohesiveness, or the desire for cohesiveness, in a group may produce a tendency among its members to agree at all costs.[1]This causes the group to minimize conflict and reach a consensus decision withoutcritical evaluation.[2][3]
Groupthink is a construct ofsocial psychologybut has an extensive reach and influences literature in the fields ofcommunication studies,political science,management, andorganizational theory,[4]as well as important aspects of deviant religiouscultbehaviour.[5][6]
Groupthink is sometimes stated to occur (more broadly) within natural groups within the community, for example to explain the lifelong different mindsets of those with differing political views (such as "conservatism" and "liberalism" in the U.S. political context[7]or the purported benefits of team work vs. work conducted in solitude).[8]However, this conformity of viewpoints within a group does not mainly involve deliberategroup decision-making, and might be better explained by the collectiveconfirmation biasof the individual members of the group.[citation needed]
The term was coined in 1952 byWilliam H. Whyte Jr.[9]Most of the initial research on groupthink was conducted byIrving Janis, a research psychologist fromYale University.[10]Janis published an influential book in 1972, which was revised in 1982.[11][12]Janis used theBay of Pigsdisaster (the failed invasion of Castro's Cuba in 1961) and the Japanese attack on Pearl Harbor in 1941 as his two prime case studies. Later studies have evaluated and reformulated his groupthink model.[13][14]
Groupthink requires individuals to avoid raising controversial issues or alternative solutions, and there is loss of individual creativity, uniqueness and independent thinking. The dysfunctional group dynamics of the "ingroup" produce an "illusion of invulnerability" (an inflated certainty that the right decision has been made). Thus the "ingroup" significantly overrates its own abilities in decision-making and significantly underrates the abilities of its opponents (the "outgroup"). Furthermore, groupthink can produce dehumanizing actions against the "outgroup". Members of a group can often feel under peer pressure to "go along with the crowd" for fear of "rocking the boat" or of how their speaking out will be perceived by the rest of the group. Group interactions tend to favor clear and harmonious agreements, and it is a cause for concern when little to no new innovation, or arguments for better policies, outcomes and structures, are raised (McLeod). Groupthink is often described as a group of "yes men",[citation needed] because group activities and group projects in general make it extremely easy to decline to offer constructive opinions.
Some methods that have been used to counteract groupthink in the past include selecting teams from more diverse backgrounds and mixing men and women in groups (Kamalnath). Many consider groupthink a detriment to companies, organizations, and other work situations. Most senior-level positions require independent thinking, and a positive correlation has been found between outstanding executives and decisiveness (Kelman). Groupthink also prevents an organization from moving forward and innovating if no one ever speaks up to say something could be done differently.
Antecedent factors such as group cohesiveness, faulty group structure, and situational context (e.g., community panic) play into the likelihood of whether or not groupthink will impact the decision-making process.
William H. Whyte Jr. derived the term from George Orwell's Nineteen Eighty-Four, and popularized it in 1952 in Fortune magazine:
Groupthink being a coinage – and, admittedly, a loaded one – a working definition is in order. We are not talking about mere instinctive conformity – it is, after all, a perennial failing of mankind. What we are talking about is a rationalized conformity – an open, articulate philosophy which holds that group values are not only expedient but right and good as well.[9][15]
Groupthink was Whyte's diagnosis of the malaise affecting both the study and practice of management (and, by association, America) in the 1950s. Whyte was dismayed that employees had subjugated themselves to the tyranny of groups, which crushed individuality and were instinctively hostile to anything or anyone that challenged the collective view.[16]
American psychologist Irving Janis (Yale University) pioneered the initial research on the groupthink theory. He does not cite Whyte, but coined the term again by analogy with "doublethink" and similar terms that were part of the newspeak vocabulary in the novel Nineteen Eighty-Four by George Orwell. He initially defined groupthink as follows:
I use the term groupthink as a quick and easy way to refer to the mode of thinking that persons engage in when concurrence-seeking becomes so dominant in a cohesive ingroup that it tends to override realistic appraisal of alternative courses of action. Groupthink is a term of the same order as the words in the newspeak vocabulary George Orwell used in his dismaying world of 1984. In that context, groupthink takes on an invidious connotation. Exactly such a connotation is intended, since the term refers to a deterioration in mental efficiency, reality testing and moral judgments as a result of group pressures.[10]: 43
He went on to write:
The main principle of groupthink, which I offer in the spirit of Parkinson's Law, is this: "The more amiability and esprit de corps there is among the members of a policy-making ingroup, the greater the danger that independent critical thinking will be replaced by groupthink, which is likely to result in irrational and dehumanizing actions directed against outgroups".[10]: 44
Janis set the foundation for the study of groupthink starting with his research in the American Soldier Project, where he studied the effect of extreme stress on group cohesiveness. After this study he remained interested in the ways in which people make decisions under external threats. This interest led Janis to study a number of "disasters" in American foreign policy, such as the failure to anticipate the Japanese attack on Pearl Harbor (1941); the Bay of Pigs Invasion fiasco (1961); and the prosecution of the Vietnam War (1964–67) by President Lyndon Johnson. He concluded that in each of these cases, the decisions occurred largely because of groupthink, which prevented contradictory views from being expressed and subsequently evaluated.
After the publication of Janis' book Victims of Groupthink in 1972,[11] and a revised edition with the title Groupthink: Psychological Studies of Policy Decisions and Fiascoes in 1982,[12] the concept of groupthink was used[by whom?] to explain many other faulty decisions in history. These events included Nazi Germany's decision to invade the Soviet Union in 1941, the Watergate scandal and others. Despite the popularity of the concept of groupthink, fewer than two dozen studies addressed the phenomenon itself between 1972 and 1998, following the publication of Victims of Groupthink.[4]: 107 This was surprising considering how many fields of interest it spans, which include political science, communications, organizational studies, social psychology, management, strategy, counseling, and marketing. This lack of follow-up is most likely explained by the facts that group research is difficult to conduct, that groupthink has many independent and dependent variables, and that it is unclear "how to translate [groupthink's] theoretical concepts into observable and quantitative constructs".[4]: 107–108
Nevertheless, outside of research psychology and sociology, wider culture has come to detect groupthink in observable situations, for example:
To make groupthink testable, Irving Janis devised eight symptoms indicative of groupthink:[20]
Type I: Overestimations of the group — its power and morality
Type II: Closed-mindedness
Type III: Pressures toward uniformity
When a group exhibits most of the symptoms of groupthink, the consequences of a failing decision process can be expected: incomplete analysis of the other options, incomplete analysis of the objectives, failure to examine the risks associated with the favored choice, failure to reevaluate the options initially rejected, poor information search, selection bias in processing the available information, and failure to prepare a contingency plan.[11]
Janis identified three antecedent conditions to groupthink:[11]: 9
Although it is possible for a situation to contain all three of these factors, all three are not always present even when groupthink is occurring. Janis considered a high degree of cohesiveness to be the most important antecedent to producing groupthink, and always present when groupthink was occurring; however, he believed high cohesiveness would not always produce groupthink. A very cohesive group abides by all group norms, but whether or not groupthink arises depends on what the group norms are. If the group encourages individual dissent and alternative strategies to problem solving, it is likely that groupthink will be avoided even in a highly cohesive group. This means that high cohesion will lead to groupthink only if one or both of the other antecedents is present, situational context being slightly more likely than structural faults to produce groupthink.[21]
A 2018 study found that the absence of a tenured project leader can also create conditions for groupthink to prevail. The presence of an "experienced" project manager can reduce the likelihood of groupthink by taking steps like critically analysing ideas, promoting open communication, encouraging diverse perspectives, and raising team awareness of groupthink symptoms.[22]
Among people with a bicultural identity, those with a highly integrated bicultural identity, as opposed to a less integrated one, were found to be more prone to groupthink.[23] A 2022 study in Tanzania drew on Hofstede's cultural dimensions: it observed that in high power distance societies, individuals are hesitant to voice dissent, deferring to leaders' preferences in making decisions. Furthermore, as Tanzania is a collectivist society, community interests supersede those of individuals. The combination of high power distance and collectivism creates optimal conditions for groupthink to occur.[24]
As observed by Aldag and Fuller (1993), the groupthink phenomenon seems to rest on a set of unstated and generally restrictive assumptions:[25]
It has been thought that groups with a strong ability to work together can solve dilemmas more quickly and efficiently than an individual. Groups have greater resources, which allow them to store and retrieve information more readily and to generate more alternative solutions to a problem. There was a recognized downside to group problem solving in that it takes groups more time to come to a decision and requires that people make compromises with each other. However, it was not until the research of Janis appeared that anyone really considered that a highly cohesive group could impair the group's ability to generate quality decisions. Tight-knit groups may appear to make decisions better because they can come to a consensus quickly and at a low energy cost; however, over time this process of decision-making may decrease the members' ability to think critically. It is, therefore, considered by many to be important to combat the effects of groupthink.[21]
According to Janis, decision-making groups are not necessarily destined to groupthink. He devised ways of preventing groupthink:[11]: 209–215
The devil's advocate in a group may provide questions and insight which contradict the majority of the group in order to avoid groupthink decisions.[26] A study by Ryan Hartwig confirms that the devil's advocacy technique is very useful for group problem-solving.[27] It allows conflict to be used in a way that is most effective for finding the best solution, so that members will not have to go back and find a different solution if the first one fails. Hartwig also suggests that the devil's advocacy technique be incorporated with other group decision-making models, such as the functional theory, to find and evaluate alternative solutions. The main idea of the devil's advocacy technique is that somewhat structured conflict can be facilitated not only to reduce groupthink, but also to solve problems.
Diversity of all kinds is also instrumental in preventing groupthink. Individuals with varying backgrounds, thought, and professional and life experiences can offer unique perspectives and challenge assumptions.[28][29] In a 2004 study, a diverse team of problem solvers outperformed a team consisting of the best individual problem solvers, because the latter start to think alike.[30]
Psychological safety, emphasized by Edmondson and Lei[31]and Hirak et al.,[32]is crucial for effective group performance. It involves creating an environment that encourages learning and removes barriers perceived as threats by team members. Edmondson et al.[33]demonstrated variations in psychological safety based on work type, hierarchy, and leadership effectiveness, highlighting its importance in employee development and fostering a culture of learning within organizations.[34]
A situation similar to groupthink is the Abilene paradox, another phenomenon that is detrimental when working in groups. When organizations fall into an Abilene paradox, they take actions in contradiction to what their perceived goals may be and therefore defeat the very purposes they are trying to achieve.[35] Failure to communicate desires or beliefs can cause an Abilene paradox.
The Watergate scandal is an example of this.[citation needed] Before the scandal occurred, a meeting took place at which the issue was discussed. One of Nixon's campaign aides was unsure whether he should speak up and give his input. If he had voiced his disagreement with the group's decision, it is possible that the scandal could have been avoided.[citation needed]
After the Bay of Pigs invasion fiasco, President John F. Kennedy sought to avoid groupthink during the Cuban Missile Crisis using "vigilant appraisal".[12]: 148–153 During meetings, he invited outside experts to share their viewpoints, and allowed group members to question them carefully. He also encouraged group members to discuss possible solutions with trusted members within their separate departments, and he even divided the group up into various sub-groups, to partially break the group cohesion. Kennedy was deliberately absent from the meetings, so as to avoid pressing his own opinion.
Cass Sunstein reports that introverts can sometimes be silent in meetings with extroverts; he recommends explicitly asking for each person's opinion, either during the meeting or afterwards in one-on-one sessions. Sunstein points to studies showing groups with a high level of internal socialization and happy talk are more prone to bad investment decisions due to groupthink, compared with groups of investors who are relative strangers and more willing to be argumentative. To avoid group polarization, where discussion with like-minded people drives an outcome further to an extreme than any of the individuals favored before the discussion, he recommends creating heterogeneous groups which contain people with different points of view. Sunstein also points out that people arguing a side they do not sincerely believe (in the role of devil's advocate) tend to be much less effective than a sincere argument. This can be accomplished by dissenting individuals, or by a group like a Red Team that is expected to pursue an alternative strategy or goal "for real".[36]
Testing groupthink in a laboratory is difficult because synthetic settings remove groups from real social situations, which ultimately changes the variables conducive or inhibitive to groupthink.[37] Because of its subjective nature, researchers have struggled to measure groupthink as a complete phenomenon, instead frequently opting to measure its particular factors. These factors range from causal to effectual[clarification needed] and focus on group and situational aspects.[38][39]
Park (1990) found that "only 16 empirical studies have been published on groupthink", and concluded that they "resulted in only partial support of his [Janis's] hypotheses".[40]: 230 Park concludes, "despite Janis' claim that group cohesiveness is the major necessary antecedent factor, no research has shown a significant main effect of cohesiveness on groupthink."[40]: 230 Park also concludes that research does not support Janis' claim that cohesion and leadership style interact to produce groupthink symptoms.[40] Park presents a summary of the results of the studies analyzed. According to Park, a study by Huseman and Drive (1979) indicates groupthink occurs in both small and large decision-making groups within businesses.[40] This results partly from group isolation within the business. Manz and Sims (1982) conducted a study showing that autonomous work groups are susceptible to groupthink symptoms in the same manner as decision-making groups within businesses.[40][41] Fodor and Smith (1982) produced a study revealing that group leaders with high power motivation create atmospheres more susceptible to groupthink.[40][42] Leaders with high power motivation possess characteristics similar to leaders with a "closed" leadership style—an unwillingness to respect dissenting opinion. The same study indicates that the level of group cohesiveness is insignificant in predicting groupthink occurrence. Park summarizes a study performed by Callaway, Marriott, and Esser (1985) in which groups with highly dominant members "made higher quality decisions, exhibited lowered state of anxiety, took more time to reach a decision, and made more statements of disagreement/agreement".[40]: 232[43] Overall, groups with highly dominant members expressed characteristics inhibitory to groupthink. If highly dominant members are considered equivalent to leaders with high power motivation, the results of Callaway, Marriott, and Esser contradict the results of Fodor and Smith.
A study by Leana (1985) indicates the interaction between level of group cohesion and leadership style is completely insignificant in predicting groupthink.[40][44]This finding refutes Janis' claim that the factors of cohesion and leadership style interact to produce groupthink. Park summarizes a study by McCauley (1989) in which structural conditions of the group were found to predict groupthink while situational conditions did not.[14][40]The structural conditions included group insulation, group homogeneity, and promotional leadership. The situational conditions included group cohesion. These findings refute Janis' claim about group cohesiveness predicting groupthink.
Overall, studies on groupthink have largely focused on the factors (antecedents) that predict groupthink. Groupthink occurrence is often measured by number of ideas/solutions generated within a group, but there is no uniform, concrete standard by which researchers can objectively conclude groupthink occurs.[37]The studies of groupthink and groupthink antecedents reveal a mixed body of results. Some studies indicate group cohesion and leadership style to be powerfully predictive of groupthink, while other studies indicate the insignificance of these factors. Group homogeneity and group insulation are generally supported as factors predictive of groupthink.
Groupthink can have a strong hold on political decisions and military operations, which may result in enormous wastage of human and material resources. Highly qualified and experienced politicians and military commanders sometimes make very poor decisions when in a suboptimal group setting. Scholars such as Janis and Raven attribute political and military fiascoes, such as the Bay of Pigs Invasion, the Vietnam War, and the Watergate scandal, to the effect of groupthink.[12][45] More recently, Dina Badie argued that groupthink was largely responsible for the shift in the U.S. administration's view on Saddam Hussein that eventually led to the 2003 invasion of Iraq by the United States.[46] After the September 11 attacks, "stress, promotional leadership, and intergroup conflict" were all factors that gave rise to the occurrence of groupthink.[46]: 283 Political case studies of groupthink serve to illustrate the impact that the occurrence of groupthink can have in today's political scene.
The United States Bay of Pigs Invasion of April 1961 was the primary case study that Janis used to formulate his theory of groupthink.[10] The invasion plan was initiated by the Eisenhower administration, but when the Kennedy administration took over, it "uncritically accepted" the plan of the Central Intelligence Agency (CIA).[10]: 44 When some people, such as Arthur M. Schlesinger Jr. and Senator J. William Fulbright, attempted to present their objections to the plan, the Kennedy team as a whole ignored these objections and kept believing in the morality of their plan.[10]: 46 Eventually Schlesinger minimized his own doubts, performing self-censorship.[10]: 74 The Kennedy team stereotyped Fidel Castro and the Cubans by failing to question the CIA about its many false assumptions, including the ineffectiveness of Castro's air force, the weakness of Castro's army, and the inability of Castro to quell internal uprisings.[10]: 46
Janis argued the fiasco that ensued could have been prevented if the Kennedy administration had followed the methods for preventing groupthink that it adopted during the Cuban Missile Crisis, which took place just one year later, in October 1962. In the latter crisis, essentially the same political leaders were involved in decision-making, but this time they learned from their previous mistake of seriously under-rating their opponents.[10]: 76
The attack on Pearl Harbor on December 7, 1941, is a prime example of groupthink. A number of factors such as shared illusions and rationalizations contributed to the lack of precaution taken by U.S. Navy officers based in Hawaii. The United States had intercepted Japanese messages, discovering that Japan was arming itself for an offensive attack somewhere in the Pacific Ocean. Washington took action by warning officers stationed at Pearl Harbor, but the warning was not taken seriously. The officers assumed that the Empire of Japan was merely taking measures in the event that its embassies and consulates in enemy territories were seized.
The U.S. Navy and Army in Pearl Harbor also shared rationalizations about why an attack was unlikely. Some of them included:[12]: 83, 85
On January 28, 1986, NASA launched the space shuttle Challenger. The launch was significant because a civilian, non-astronaut high school teacher was to become the first American civilian in space; the space shuttle was perceived to be so safe as to make this possible. NASA's engineering and launch teams relied on teamwork: to launch the shuttle, individual team members had to affirm that each system was functioning nominally. Morton Thiokol engineers who designed and built the Challenger's rocket boosters ignored warnings that cooler temperatures on the day of the launch could result in failure and the death of the crew.[47] The Space Shuttle Challenger disaster grounded space shuttle flights for nearly three years. Ironically, this particular flight was meant to be a demonstration of confidence in the safety of space shuttle technology.
The Challenger case was subject to a more quantitatively oriented test of Janis's groupthink model performed by Esser and Lindoerfer, who found clear signs of positive antecedents to groupthink in the critical decisions concerning the launch of the shuttle.[48] The day of the launch was rushed for publicity reasons: NASA wanted to captivate and hold the attention of America. Having civilian teacher Christa McAuliffe on board to broadcast a live lesson, and the possible mention by President Ronald Reagan in the State of the Union address, were opportunities NASA deemed critical to increasing interest in its potential civilian space flight program. The schedule NASA set out to meet was, however, self-imposed. It seemed incredible to many that an organization with a perceived history of successful management would have locked itself into a schedule it had no chance of meeting.[49]
In the corporate world, ineffective and suboptimal group decision-making can negatively affect the health of a company and cause a considerable amount of monetary loss.
Aaron Hermann and Hussain Rammal illustrate the detrimental role of groupthink in the collapse of Swissair, a Swiss airline company that was thought to be so financially stable that it earned the title of the "Flying Bank".[50] The authors argue that, among other factors, Swissair carried two symptoms of groupthink: the belief that the group is invulnerable and the belief in the morality of the group.[50]: 1056 In addition, before the fiasco, the size of the company board was reduced, subsequently eliminating industrial expertise. This may have further increased the likelihood of groupthink.[50]: 1055 With the board members lacking expertise in the field and having somewhat similar backgrounds, norms, and values, the pressure to conform may have become more prominent.[50]: 1057 This phenomenon is called group homogeneity, which is an antecedent to groupthink. Together, these conditions may have contributed to the poor decision-making process that eventually led to Swissair's collapse.
Another example of groupthink from the corporate world is illustrated in the United Kingdom-based companies Marks & Spencer and British Airways. The negative impact of groupthink took place during the 1990s as both companies released globalization expansion strategies. Researcher Jack Eaton's content analysis of media press releases revealed that all eight symptoms of groupthink were present during this period. The most predominant symptom of groupthink was the illusion of invulnerability, as both companies underestimated potential failure due to years of profitability and success during challenging markets. Until the consequences of groupthink erupted, they were considered blue chips and darlings of the London Stock Exchange. During 1998–1999 the price of Marks & Spencer shares fell from 590 to less than 300, and that of British Airways from 740 to 300. Both companies had previously been prominently featured in the UK press and media for more positive reasons, reflecting national pride in their undeniable sector-wide performance.[51]
Recent literature on groupthink attempts to study the application of this concept beyond the framework of business and politics. One particularly relevant and popular arena in which groupthink is rarely studied is sports. The lack of literature in this area prompted Charles Koerber and Christopher Neck to begin a case-study investigation that examined the effect of groupthink on the decision of the Major League Umpires Association (MLUA) to stage a mass resignation in 1999. The decision was a failed attempt to gain a stronger negotiating stance against Major League Baseball.[52]: 21 Koerber and Neck suggest that three groupthink symptoms can be found in the decision-making process of the MLUA. First, the umpires overestimated the power that they had over the baseball league and the strength of their group's resolve. The union also exhibited some degree of closed-mindedness with the notion that MLB was the enemy. Lastly, there was the presence of self-censorship; some umpires who disagreed with the decision to resign failed to voice their dissent.[52]: 25 These factors, along with other decision-making defects, led to a decision that was suboptimal and ineffective.
Researcher Robert Baron (2005) contends that the connection between certain antecedents which Janis believed necessary has not been demonstrated by the current collective body of research on groupthink. He believes that Janis' antecedents for groupthink are incorrect, and argues that not only are they "not necessary to provoke the symptoms of groupthink, but that they often will not even amplify such symptoms".[53] As an alternative to Janis' model, Baron proposed a ubiquity model of groupthink. This model provides a revised set of antecedents for groupthink, including social identification, salient norms, and low self-efficacy.
Aldag and Fuller (1993) argue that the groupthink concept was based on a "small and relatively restricted sample" that became too broadly generalized.[25] Furthermore, the concept is too rigidly staged and deterministic. Empirical support for it has also not been consistent. The authors compare the groupthink model to findings presented by Maslow and Piaget; they argue that, in each case, the model incites great interest and further research that, subsequently, invalidates the original concept. Aldag and Fuller thus suggest a new model called the general group problem-solving (GGPS) model, which integrates new findings from groupthink literature and alters aspects of groupthink itself.[25]: 534 The primary difference between the GGPS model and groupthink is that the former is more value-neutral and more political.[25]: 544
Later scholars have re-assessed the merit of groupthink by reexamining case studies that Janis originally used to buttress his model. Roderick Kramer (1998) believed that, because scholars today have a more sophisticated set of ideas about the general decision-making process and because new and relevant information about the fiascos has surfaced over the years, a reexamination of the case studies is appropriate and necessary.[54] He argues that new evidence does not support Janis' view that groupthink was largely responsible for President Kennedy's and President Johnson's decisions in the Bay of Pigs Invasion and the escalation of U.S. military involvement in the Vietnam War, respectively. Both presidents sought the advice of experts outside of their political groups more than Janis suggested.[54]: 241 Kramer also argues that the presidents were the final decision-makers of the fiascos; while determining which course of action to take, they relied more heavily on their own construals of the situations than on any group-consenting decision presented to them.[54]: 241 Kramer concludes that Janis' explanation of the two military issues is flawed and that groupthink has much less influence on group decision-making than is popularly believed.
Although groupthink is generally thought of as something to avoid, it may have some positive effects. Choi and Kim[55] found that group identity traits, such as believing in the group's moral superiority, were linked to less concurrence seeking, better decision-making, better team activities, and better team performance. This study also showed that the relationship between groupthink and defective decision-making was insignificant. These findings suggest that, in the right circumstances, groupthink does not always have negative outcomes. They also call into question the original theory of groupthink.
Scholars are challenging the original view of groupthink proposed by Janis.
Whyte (1998) argues that a group's collective efficacy, i.e. confidence in its abilities, can lead to reduced vigilance and a higher risk tolerance, similar to how groupthink was described.[56] McCauley (1998) proposes that the attractiveness of group members might be the most prominent factor in causing poor decisions.[57] Turner and Pratkanis (1991) suggest that, from a social identity perspective, groupthink can be seen as a group's attempt to ward off potentially negative views of the group.[6] Together, the contributions of these scholars have brought about new understandings of groupthink that help reformulate Janis' original model.
According to one theory, many of the basic characteristics of groupthink – e.g., strong cohesion, an indulgent atmosphere, and an exclusive ethos – are the result of a special kind of mnemonic encoding (Tsoukalas, 2007). Members of tightly knit groups have a tendency to represent significant aspects of their community as episodic memories, and this has a predictable influence on their group behavior and collective ideology, as opposed to what happens when those aspects are encoded as semantic memories (which is common in formal and looser group formations).[58]
According to the scientist Todd Rose, collective illusions and groupthink are linked concepts that show how social dynamics affect behavior. Groupthink occurs when individuals who correctly perceive what the group wants conform to the group's consensus. Collective illusions are a specific form of groupthink in which individuals mistakenly assume what the group wants, leading everyone to behave in ways that do not reflect their true preferences. Both concepts involve social influence and conformity.[59]
https://en.wikipedia.org/wiki/Groupthink