The calculus of variations (or variational calculus) is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers.[a] Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations.

A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, which depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action.

Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy the Dirichlet principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in soapy water. Although such experiments are relatively easy to perform, their mathematical formulation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology.

The calculus of variations began with the work of Isaac Newton, such as Newton's minimal resistance problem, which he formulated and solved in 1685 and later published in his Principia in 1687.[2] This was the first problem in the field to be formulated and correctly solved,[2] and was also one of the most difficult problems tackled by variational methods prior to the twentieth century.[3][4][5] It was followed by the brachistochrone curve problem raised by Johann Bernoulli (1696),[6] which was similar to a problem raised by Galileo Galilei in 1638, although Galileo neither solved the problem explicitly nor used methods based on calculus.[3] Johann Bernoulli solved the problem using the principle of least time rather than the calculus of variations, whereas Newton solved it in 1697 with variational methods, and as a result Newton pioneered the field with his work on the two problems.[4] The problem immediately occupied the attention of Jacob Bernoulli and the Marquis de l'Hôpital, but Leonhard Euler first elaborated the subject, beginning in 1733. Joseph-Louis Lagrange was influenced by Euler's work to contribute greatly to the theory. After Euler saw the 1755 work of the 19-year-old Lagrange, Euler dropped his own partly geometric approach in favor of Lagrange's purely analytic approach and renamed the subject the calculus of variations in his 1756 lecture Elementa Calculi Variationum.[7][8][b]

Adrien-Marie Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima. Isaac Newton and Gottfried Leibniz also gave some early attention to the subject.[9] To this discrimination Vincenzo Brunacci (1810), Carl Friedrich Gauss (1829), Siméon Poisson (1831), Mikhail Ostrogradsky (1834), and Carl Jacobi (1837) have been among the contributors. An important general work is that of Pierre Frédéric Sarrus (1842), which was condensed and improved by Augustin-Louis Cauchy (1844).
Other valuable treatises and memoirs have been written by Strauch[which?] (1849), John Hewitt Jellett (1850), Otto Hesse (1857), Alfred Clebsch (1858), and Lewis Buffett Carll (1885), but perhaps the most important work of the century is that of Karl Weierstrass. His celebrated course on the theory is epoch-making, and it may be asserted that he was the first to place it on a firm and unquestionable foundation. The 20th and the 23rd Hilbert problems, published in 1900, encouraged further development.[9]

In the 20th century David Hilbert, Oskar Bolza, Gilbert Ames Bliss, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard, among others, made significant contributions.[9] Marston Morse applied calculus of variations in what is now called Morse theory.[10] Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory.[10] The dynamic programming of Richard Bellman is an alternative to the calculus of variations.[11][12][13][c]

The calculus of variations is concerned with the maxima or minima (collectively called extrema) of functionals. A functional maps functions to scalars, so functionals have been described as "functions of functions." Functionals have extrema with respect to the elements $y$ of a given function space defined over a given domain. A functional $J[y]$ is said to have an extremum at the function $f$ if $\Delta J = J[y] - J[f]$ has the same sign for all $y$ in an arbitrarily small neighborhood of $f$.[d] The function $f$ is called an extremal function or extremal.[e] The extremum $J[f]$ is called a local maximum if $\Delta J \leq 0$ everywhere in an arbitrarily small neighborhood of $f$, and a local minimum if $\Delta J \geq 0$ there. For a function space of continuous functions, extrema of corresponding functionals are called strong extrema or weak extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not.[15]

Both strong and weak extrema of functionals are for a space of continuous functions, but strong extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema.[16] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation.[17][f]

Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions for which the functional derivative is equal to zero.
This leads to solving the associated Euler–Lagrange equation.[g]

Consider the functional
$$J[y] = \int_{x_1}^{x_2} L\left(x, y(x), y'(x)\right) dx\,,$$
where $x_1, x_2$ are constants, $y(x)$ is twice continuously differentiable, $y'(x) = dy/dx$, and $L$ is twice continuously differentiable with respect to its arguments $x$, $y$, and $y'$.

If the functional $J[y]$ attains a local minimum at $f$, and $\eta(x)$ is an arbitrary function that has at least one derivative and vanishes at the endpoints $x_1$ and $x_2$, then for any number $\varepsilon$ close to 0,
$$J[f] \leq J[f + \varepsilon\eta]\,.$$

The term $\varepsilon\eta$ is called the variation of the function $f$ and is denoted by $\delta f$.[1][h]

Substituting $f + \varepsilon\eta$ for $y$ in the functional $J[y]$, the result is a function of $\varepsilon$,
$$\Phi(\varepsilon) = J[f + \varepsilon\eta]\,.$$
Since the functional $J[y]$ has a minimum for $y = f$, the function $\Phi(\varepsilon)$ has a minimum at $\varepsilon = 0$ and thus,[i]
$$\Phi'(0) \equiv \left.\frac{d\Phi}{d\varepsilon}\right|_{\varepsilon=0} = \int_{x_1}^{x_2} \left.\frac{dL}{d\varepsilon}\right|_{\varepsilon=0} dx = 0\,.$$

Taking the total derivative of $L\left[x, y, y'\right]$, where $y = f + \varepsilon\eta$ and $y' = f' + \varepsilon\eta'$ are considered as functions of $\varepsilon$ rather than $x$, yields
$$\frac{dL}{d\varepsilon} = \frac{\partial L}{\partial y}\frac{dy}{d\varepsilon} + \frac{\partial L}{\partial y'}\frac{dy'}{d\varepsilon}$$
and because $\frac{dy}{d\varepsilon} = \eta$ and $\frac{dy'}{d\varepsilon} = \eta'$,
$$\frac{dL}{d\varepsilon} = \frac{\partial L}{\partial y}\eta + \frac{\partial L}{\partial y'}\eta'.$$

Therefore,
$$\begin{aligned}
\int_{x_1}^{x_2} \left.\frac{dL}{d\varepsilon}\right|_{\varepsilon=0} dx
&= \int_{x_1}^{x_2} \left(\frac{\partial L}{\partial f}\eta + \frac{\partial L}{\partial f'}\eta'\right) dx \\
&= \int_{x_1}^{x_2} \frac{\partial L}{\partial f}\eta\, dx + \left.\frac{\partial L}{\partial f'}\eta\right|_{x_1}^{x_2} - \int_{x_1}^{x_2} \eta\,\frac{d}{dx}\frac{\partial L}{\partial f'}\, dx \\
&= \int_{x_1}^{x_2} \left(\frac{\partial L}{\partial f}\eta - \eta\,\frac{d}{dx}\frac{\partial L}{\partial f'}\right) dx\,,
\end{aligned}$$
where $L\left[x, y, y'\right] \to L\left[x, f, f'\right]$ when $\varepsilon = 0$, and we have used integration by parts on the second term. The second term on the second line vanishes because $\eta = 0$ at $x_1$ and $x_2$ by definition.
Also, as previously mentioned, the left side of the equation is zero, so that
$$\int_{x_1}^{x_2} \eta(x) \left(\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'}\right) dx = 0\,.$$

According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e.
$$\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} = 0\,,$$
which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of $J[f]$ and is denoted $\delta J/\delta f(x)$.

In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function $f(x)$. The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum $J[f]$. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum.

In order to illustrate this process, consider the problem of finding the extremal function $y = f(x)$, which is the shortest curve that connects two points $\left(x_1, y_1\right)$ and $\left(x_2, y_2\right)$. The arc length of the curve is given by
$$A[y] = \int_{x_1}^{x_2} \sqrt{1 + [y'(x)]^2}\, dx\,,$$
with
$$y'(x) = \frac{dy}{dx}\,, \quad y_1 = f(x_1)\,, \quad y_2 = f(x_2)\,.$$
Note that assuming $y$ is a function of $x$ loses generality; ideally both should be a function of some other parameter. This approach is good solely for instructive purposes.

The Euler–Lagrange equation will now be used to find the extremal function $f(x)$ that minimizes the functional $A[y]$:
$$\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} = 0$$
with
$$L = \sqrt{1 + [f'(x)]^2}\,.$$

Since $f$ does not appear explicitly in $L$, the first term in the Euler–Lagrange equation vanishes for all $f(x)$ and thus,
$$\frac{d}{dx}\frac{\partial L}{\partial f'} = 0\,.$$
Substituting for $L$ and taking the derivative,
$$\frac{d}{dx}\,\frac{f'(x)}{\sqrt{1 + [f'(x)]^2}} = 0\,.$$

Thus
$$\frac{f'(x)}{\sqrt{1 + [f'(x)]^2}} = c\,,$$
for some constant $c$. Then
$$\frac{[f'(x)]^2}{1 + [f'(x)]^2} = c^2\,,$$
where $0 \leq c^2 < 1$. Solving, we get
$$[f'(x)]^2 = \frac{c^2}{1 - c^2}\,,$$
which implies that $f'(x) = m$ is a constant and therefore that the shortest curve that connects two points $\left(x_1, y_1\right)$ and $\left(x_2, y_2\right)$ is
$$f(x) = mx + b \qquad \text{with} \quad m = \frac{y_2 - y_1}{x_2 - x_1} \quad \text{and} \quad b = \frac{x_2 y_1 - x_1 y_2}{x_2 - x_1}\,,$$
and we have thus found the extremal function $f(x)$ that minimizes the functional $A[y]$ so that $A[f]$ is a minimum.
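As an informal illustration of this computation (not part of the original article), the following sketch uses SymPy's euler_equations to form the Euler–Lagrange equation for the arc-length Lagrangian and confirms that its solutions are straight lines; the function name f and the particular calls used are simply convenient choices.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
f = sp.Function('f')

# Arc-length Lagrangian L = sqrt(1 + f'(x)^2)
L = sp.sqrt(1 + f(x).diff(x)**2)

# Euler-Lagrange equation: dL/df - d/dx (dL/df') = 0
(el,) = euler_equations(L, f(x), x)
el = sp.simplify(el)
print(el)  # reduces to an equation proportional to f''(x) = 0

# Since the denominator (1 + f'(x)^2)^(3/2) is strictly positive,
# the equation forces f''(x) = 0, so f is affine:
print(sp.dsolve(sp.Eq(f(x).diff(x, 2), 0), f(x)))  # f(x) = C1 + C2*x
```

Fixing the boundary values $f(x_1) = y_1$ and $f(x_2) = y_2$ then determines the two constants, recovering the slope $m$ and intercept $b$ given above.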
The equation for a straight line is $y = mx + b$. In other words, the shortest distance between two points is a straight line.[j]

In physics problems it may be the case that $\frac{\partial L}{\partial x} = 0$, meaning the integrand is a function of $f(x)$ and $f'(x)$ but $x$ does not appear separately. In that case, the Euler–Lagrange equation can be simplified to the Beltrami identity[20]
$$L - f'\frac{\partial L}{\partial f'} = C\,,$$
where $C$ is a constant. The left hand side is the Legendre transformation of $L$ with respect to $f'(x)$.

The intuition behind this result is that, if the variable $x$ is actually time, then the statement $\frac{\partial L}{\partial x} = 0$ implies that the Lagrangian is time-independent. By Noether's theorem, there is an associated conserved quantity. In this case, this quantity is the Hamiltonian, the Legendre transform of the Lagrangian, which (often) coincides with the energy of the system. This is (minus) the constant in Beltrami's identity.
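To make the Beltrami identity concrete, here is a small symbolic check (not from the article) for one $x$-independent Lagrangian; the quadratic Lagrangian below is an arbitrary illustrative choice, with the placeholder symbols u and v standing in for $f$ and $f'$.

```python
import sympy as sp

x, u, v = sp.symbols('x u v')   # u stands for f(x), v for f'(x)
f = sp.Function('f')

# An x-independent Lagrangian, chosen purely for illustration
L = sp.Rational(1, 2) * v**2 - sp.Rational(1, 2) * u**2

L_u, L_v = sp.diff(L, u), sp.diff(L, v)

# Substitute the actual function and its derivative for the placeholders
subs = {u: f(x), v: f(x).diff(x)}
L_x, L_u_x, L_v_x = L.subs(subs), L_u.subs(subs), L_v.subs(subs)

beltrami = L_x - f(x).diff(x) * L_v_x        # L - f' * dL/df'
el = L_u_x - sp.diff(L_v_x, x)               # Euler-Lagrange expression

# When dL/dx = 0, d/dx (L - f' dL/df') = f' * (Euler-Lagrange expression),
# so the Beltrami quantity is constant along extremals.
print(sp.simplify(sp.diff(beltrami, x) - f(x).diff(x) * el))  # 0
```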
If $S$ depends on higher derivatives of $y(x)$, that is, if
$$S = \int_a^b f(x, y(x), y'(x), \dots, y^{(n)}(x))\, dx,$$
then $y$ must satisfy the Euler–Poisson equation,[21]
$$\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right) + \dots + (-1)^n \frac{d^n}{dx^n}\left[\frac{\partial f}{\partial y^{(n)}}\right] = 0.$$

The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integral $J$ requires only first derivatives of trial functions. The condition that the first variation vanishes at an extremal may be regarded as a weak form of the Euler–Lagrange equation. The theorem of Du Bois-Reymond asserts that this weak form implies the strong form. If $L$ has continuous first and second derivatives with respect to all of its arguments, and if
$$\frac{\partial^2 L}{\partial f'^2} \neq 0,$$
then $f$ has two continuous derivatives, and it satisfies the Euler–Lagrange equation.

Hilbert was the first to give good conditions for the Euler–Lagrange equations to give a stationary solution. Within a convex area and a positive thrice differentiable Lagrangian the solutions are composed of a countable collection of sections that either go along the boundary or satisfy the Euler–Lagrange equations in the interior.

However, Lavrentiev in 1926 showed that there are circumstances where there is no optimum solution but one can be approached arbitrarily closely by increasing numbers of sections. The Lavrentiev phenomenon identifies a difference in the infimum of a minimization problem across different classes of admissible functions. For instance, the following problem, presented by Manià in 1934:[22]
$$L[x] = \int_0^1 (x^3 - t)^2 x'^6\,,$$
$$A = \{x \in W^{1,1}(0,1) : x(0) = 0,\ x(1) = 1\}.$$

Clearly, $x(t) = t^{\frac{1}{3}}$ minimizes the functional, but we find that any function $x \in W^{1,\infty}$ gives a value bounded away from the infimum.

Examples (in one dimension) are traditionally manifested across $W^{1,1}$ and $W^{1,\infty}$, but Ball and Mizel[23] procured the first functional that displayed Lavrentiev's phenomenon across $W^{1,p}$ and $W^{1,q}$ for $1 \leq p < q < \infty$. There are several results that give criteria under which the phenomenon does not occur - for instance 'standard growth', a Lagrangian with no dependence on the second variable, or an approximating sequence satisfying Cesari's Condition (D) - but results are often particular, and applicable to a small class of functionals. Connected with the Lavrentiev phenomenon is the repulsion property: any functional displaying Lavrentiev's phenomenon will display the weak repulsion property.[24]

For example, if $\varphi(x, y)$ denotes the displacement of a membrane above the domain $D$ in the $x, y$ plane, then its potential energy is proportional to its surface area:
$$U[\varphi] = \iint_D \sqrt{1 + \nabla\varphi \cdot \nabla\varphi}\, dx\, dy.$$
Plateau's problem consists of finding a function that minimizes the surface area while assuming prescribed values on the boundary of $D$; the solutions are called minimal surfaces. The Euler–Lagrange equation for this problem is nonlinear:
$$\varphi_{xx}(1 + \varphi_y^2) + \varphi_{yy}(1 + \varphi_x^2) - 2\varphi_x\varphi_y\varphi_{xy} = 0.$$
See Courant (1950) for details.

It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated by
$$V[\varphi] = \frac{1}{2}\iint_D \nabla\varphi \cdot \nabla\varphi\, dx\, dy.$$
The functional $V$ is to be minimized among all trial functions $\varphi$ that assume prescribed values on the boundary of $D$. If $u$ is the minimizing function and $v$ is an arbitrary smooth function that vanishes on the boundary of $D$, then the first variation of $V[u + \varepsilon v]$ must vanish:
$$\left.\frac{d}{d\varepsilon} V[u + \varepsilon v]\right|_{\varepsilon=0} = \iint_D \nabla u \cdot \nabla v\, dx\, dy = 0.$$
Provided that $u$ has two derivatives, we may apply the divergence theorem to obtain
$$\iint_D \nabla\cdot(v\,\nabla u)\, dx\, dy = \iint_D \nabla u \cdot \nabla v + v\,\nabla\cdot\nabla u\, dx\, dy = \int_C v\,\frac{\partial u}{\partial n}\, ds,$$
where $C$ is the boundary of $D$, $s$ is arclength along $C$ and $\partial u/\partial n$ is the normal derivative of $u$ on $C$. Since $v$ vanishes on $C$ and the first variation vanishes, the result is
$$\iint_D v\,\nabla\cdot\nabla u\, dx\, dy = 0$$
for all smooth functions $v$ that vanish on the boundary of $D$. The proof for the case of one dimensional integrals may be adapted to this case to show that $\nabla\cdot\nabla u = 0$ in $D$.
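The statement that minimizing the Dirichlet-type energy forces $\nabla\cdot\nabla u = 0$ can be checked numerically. The sketch below (an illustration, not from the article) relaxes a discretized membrane toward the minimizer of the discrete energy on a square grid; the grid size, boundary data, and iteration count are arbitrary choices.

```python
import numpy as np

# Minimize the discretized Dirichlet energy 1/2 * sum |grad phi|^2 over a
# square grid with fixed boundary values. The discrete energy is minimized
# exactly when each interior value equals the mean of its four neighbours,
# which is the discrete Laplace equation.
n = 50
phi = np.zeros((n, n))
phi[0, :] = 1.0                     # prescribed boundary values (arbitrary)

for _ in range(5000):               # Jacobi-style relaxation toward the minimizer
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])

# The discrete Laplacian in the interior should now be close to zero.
lap = (phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:]
       - 4.0 * phi[1:-1, 1:-1])
print(np.abs(lap).max())
```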
The difficulty with this reasoning is the assumption that the minimizing function $u$ must have two derivatives. Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy. Riemann named this idea the Dirichlet principle in honor of his teacher Peter Gustav Lejeune Dirichlet. However, Weierstrass gave an example of a variational problem with no solution: minimize
$$W[\varphi] = \int_{-1}^{1} (x\varphi')^2\, dx$$
among all functions $\varphi$ that satisfy $\varphi(-1) = -1$ and $\varphi(1) = 1$. $W$ can be made arbitrarily small by choosing piecewise linear functions that make a transition between −1 and 1 in a small neighborhood of the origin. However, there is no function that makes $W = 0$.[k] Eventually it was shown that Dirichlet's principle is valid, but it requires a sophisticated application of the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998).

A more general expression for the potential energy of a membrane is
$$V[\varphi] = \iint_D \left[\frac{1}{2}\nabla\varphi\cdot\nabla\varphi + f(x,y)\varphi\right] dx\, dy + \int_C \left[\frac{1}{2}\sigma(s)\varphi^2 + g(s)\varphi\right] ds.$$
This corresponds to an external force density $f(x,y)$ in $D$, an external force $g(s)$ on the boundary $C$, and elastic forces with modulus $\sigma(s)$ acting on $C$. The function that minimizes the potential energy with no restriction on its boundary values will be denoted by $u$. Provided that $f$ and $g$ are continuous, regularity theory implies that the minimizing function $u$ will have two derivatives. In taking the first variation, no boundary condition need be imposed on the increment $v$. The first variation of $V[u + \varepsilon v]$ is given by
$$\iint_D \left[\nabla u\cdot\nabla v + fv\right] dx\, dy + \int_C \left[\sigma u v + g v\right] ds = 0.$$
If we apply the divergence theorem, the result is
$$\iint_D \left[-v\,\nabla\cdot\nabla u + v f\right] dx\, dy + \int_C v\left[\frac{\partial u}{\partial n} + \sigma u + g\right] ds = 0.$$
If we first set $v = 0$ on $C$, the boundary integral vanishes, and we conclude as before that
$$-\nabla\cdot\nabla u + f = 0$$
in $D$. Then if we allow $v$ to assume arbitrary boundary values, this implies that $u$ must satisfy the boundary condition
$$\frac{\partial u}{\partial n} + \sigma u + g = 0$$
on $C$. This boundary condition is a consequence of the minimizing property of $u$: it is not imposed beforehand. Such conditions are called natural boundary conditions.

The preceding reasoning is not valid if $\sigma$ vanishes identically on $C$. In such a case, we could allow a trial function $\varphi \equiv c$, where $c$ is a constant. For such a trial function,
$$V[c] = c\left[\iint_D f\, dx\, dy + \int_C g\, ds\right].$$
By appropriate choice of $c$, $V$ can assume any value unless the quantity inside the brackets vanishes.
Therefore, the variational problem is meaningless unless
$$\iint_D f\, dx\, dy + \int_C g\, ds = 0.$$
This condition implies that net external forces on the system are in equilibrium. If these forces are in equilibrium, then the variational problem has a solution, but it is not unique, since an arbitrary constant may be added. Further details and examples are in Courant and Hilbert (1953).

Both one-dimensional and multi-dimensional eigenvalue problems can be formulated as variational problems. The Sturm–Liouville eigenvalue problem involves a general quadratic form
$$Q[y] = \int_{x_1}^{x_2} \left[p(x)y'(x)^2 + q(x)y(x)^2\right] dx,$$
where $y$ is restricted to functions that satisfy the boundary conditions
$$y(x_1) = 0, \quad y(x_2) = 0.$$
Let $R$ be a normalization integral
$$R[y] = \int_{x_1}^{x_2} r(x)y(x)^2\, dx.$$
The functions $p(x)$ and $r(x)$ are required to be everywhere positive and bounded away from zero. The primary variational problem is to minimize the ratio $Q/R$ among all $y$ satisfying the endpoint conditions, which is equivalent to minimizing $Q[y]$ under the constraint that $R[y]$ is constant. It is shown below that the Euler–Lagrange equation for the minimizing $u$ is
$$-(pu')' + qu - \lambda r u = 0,$$
where $\lambda$ is the quotient
$$\lambda = \frac{Q[u]}{R[u]}.$$
It can be shown (see Gelfand and Fomin 1963) that the minimizing $u$ has two derivatives and satisfies the Euler–Lagrange equation. The associated $\lambda$ will be denoted by $\lambda_1$; it is the lowest eigenvalue for this equation and boundary conditions. The associated minimizing function will be denoted by $u_1(x)$. This variational characterization of eigenvalues leads to the Rayleigh–Ritz method: choose an approximating $u$ as a linear combination of basis functions (for example trigonometric functions) and carry out a finite-dimensional minimization among such linear combinations. This method is often surprisingly accurate.

The next smallest eigenvalue and eigenfunction can be obtained by minimizing $Q$ under the additional constraint
$$\int_{x_1}^{x_2} r(x)u_1(x)y(x)\, dx = 0.$$
This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem.
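As a small numerical illustration of the Rayleigh–Ritz idea (not taken from the article), the sketch below estimates the lowest eigenvalue of $-u'' = \lambda u$ on $(0, \pi)$ with $u(0) = u(\pi) = 0$, i.e. $p = r = 1$ and $q = 0$, whose exact value is 1; the polynomial basis and quadrature order are arbitrary choices.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Trial functions: polynomials x^(k+1) * (x - pi) that vanish at both endpoints.
nodes, weights = leggauss(60)                  # Gauss-Legendre rule on [-1, 1]
x = 0.5 * np.pi * (nodes + 1.0)                # map to [0, pi]
w = 0.5 * np.pi * weights

N = 6
basis  = [x**(k + 1) * (x - np.pi) for k in range(N)]
dbasis = [(k + 2) * x**(k + 1) - (k + 1) * np.pi * x**k for k in range(N)]

Q = np.array([[np.sum(w * di * dj) for dj in dbasis] for di in dbasis])  # Q[y] entries
R = np.array([[np.sum(w * bi * bj) for bj in basis]  for bi in basis])   # R[y] entries

# Minimizing Q[y]/R[y] over the span of the basis is the generalized
# eigenvalue problem Q c = lambda R c; the smallest eigenvalue approximates lambda_1.
eigvals = np.linalg.eigvals(np.linalg.solve(R, Q))
print(sorted(eigvals.real)[0])   # close to 1.0
```

Minimizing $Q[y]/R[y]$ over a finite-dimensional span reduces to a small generalized eigenvalue problem, and enlarging the basis improves the estimate from above.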
The variational problem also applies to more general boundary conditions. Instead of requiring that $y$ vanish at the endpoints, we may not impose any condition at the endpoints, and set
$$Q[y] = \int_{x_1}^{x_2} \left[p(x)y'(x)^2 + q(x)y(x)^2\right] dx + a_1 y(x_1)^2 + a_2 y(x_2)^2,$$
where $a_1$ and $a_2$ are arbitrary. If we set $y = u + \varepsilon v$, the first variation for the ratio $Q/R$ is
$$V_1 = \frac{2}{R[u]}\left(\int_{x_1}^{x_2} \left[p(x)u'(x)v'(x) + q(x)u(x)v(x) - \lambda r(x)u(x)v(x)\right] dx + a_1 u(x_1)v(x_1) + a_2 u(x_2)v(x_2)\right),$$
where $\lambda$ is given by the ratio $Q[u]/R[u]$ as previously. After integration by parts,
$$\frac{R[u]}{2}V_1 = \int_{x_1}^{x_2} v(x)\left[-(pu')' + qu - \lambda r u\right] dx + v(x_1)\left[-p(x_1)u'(x_1) + a_1 u(x_1)\right] + v(x_2)\left[p(x_2)u'(x_2) + a_2 u(x_2)\right].$$
If we first require that $v$ vanish at the endpoints, the first variation will vanish for all such $v$ only if
$$-(pu')' + qu - \lambda r u = 0 \quad \text{for} \quad x_1 < x < x_2.$$
If $u$ satisfies this condition, then the first variation will vanish for arbitrary $v$ only if
$$-p(x_1)u'(x_1) + a_1 u(x_1) = 0, \quad \text{and} \quad p(x_2)u'(x_2) + a_2 u(x_2) = 0.$$
These latter conditions are the natural boundary conditions for this problem, since they are not imposed on trial functions for the minimization, but are instead a consequence of the minimization.

Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case. For example, given a domain $D$ with boundary $B$ in three dimensions we may define
$$Q[\varphi] = \iiint_D p(X)\nabla\varphi\cdot\nabla\varphi + q(X)\varphi^2\, dx\, dy\, dz + \iint_B \sigma(S)\varphi^2\, dS,$$
and
$$R[\varphi] = \iiint_D r(X)\varphi(X)^2\, dx\, dy\, dz.$$
Let $u$ be the function that minimizes the quotient $Q[\varphi]/R[\varphi]$, with no condition prescribed on the boundary $B$. The Euler–Lagrange equation satisfied by $u$ is
$$-\nabla\cdot(p(X)\nabla u) + q(X)u - \lambda r(X)u = 0,$$
where
$$\lambda = \frac{Q[u]}{R[u]}.$$
The minimizing $u$ must also satisfy the natural boundary condition
$$p(S)\frac{\partial u}{\partial n} + \sigma(S)u = 0$$
on the boundary $B$. This result depends upon the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998) for details. Many extensions, including completeness results, asymptotic properties of the eigenvalues and results concerning the nodes of the eigenfunctions, are in Courant and Hilbert (1953).

Fermat's principle states that light takes a path that (locally) minimizes the optical length between its endpoints. If the $x$-coordinate is chosen as the parameter along the path, and $y = f(x)$ along the path, then the optical length is given by
$$A[f] = \int_{x_0}^{x_1} n(x, f(x))\sqrt{1 + f'(x)^2}\, dx,$$
where the refractive index $n(x, y)$ depends upon the material.
If we try $f(x) = f_0(x) + \varepsilon f_1(x)$, then the first variation of $A$ (the derivative of $A$ with respect to $\varepsilon$) is
$$\delta A[f_0, f_1] = \int_{x_0}^{x_1} \left[\frac{n(x, f_0)f_0'(x)f_1'(x)}{\sqrt{1 + f_0'(x)^2}} + n_y(x, f_0)f_1\sqrt{1 + f_0'(x)^2}\right] dx.$$

After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation
$$-\frac{d}{dx}\left[\frac{n(x, f_0)f_0'}{\sqrt{1 + f_0'^2}}\right] + n_y(x, f_0)\sqrt{1 + f_0'(x)^2} = 0.$$

The light rays may be determined by integrating this equation. This formalism is used in the context of Lagrangian optics and Hamiltonian optics.

There is a discontinuity of the refractive index when light enters or leaves a lens. Let
$$n(x, y) = \begin{cases} n_{(-)} & \text{if} \quad x < 0, \\ n_{(+)} & \text{if} \quad x > 0, \end{cases}$$
where $n_{(-)}$ and $n_{(+)}$ are constants. Then the Euler–Lagrange equation holds as before in the region where $x < 0$ or $x > 0$, and in fact the path is a straight line there, since the refractive index is constant. At $x = 0$, $f$ must be continuous, but $f'$ may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the form
$$\delta A[f_0, f_1] = f_1(0)\left[n_{(-)}\frac{f_0'(0^-)}{\sqrt{1 + f_0'(0^-)^2}} - n_{(+)}\frac{f_0'(0^+)}{\sqrt{1 + f_0'(0^+)^2}}\right].$$

The factor multiplying $n_{(-)}$ is the sine of the angle of the incident ray with the $x$ axis, and the factor multiplying $n_{(+)}$ is the sine of the angle of the refracted ray with the $x$ axis. Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to vanishing of the first variation of the optical path length.
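The claim that Snell's law follows from the vanishing of the first variation can be checked by direct minimization. The sketch below is illustrative only: the indices, endpoints, and use of scipy.optimize.minimize_scalar are arbitrary choices. It minimizes the optical length of a two-segment path crossing the interface $x = 0$ and compares the sines of the two angles.

```python
import numpy as np
from scipy.optimize import minimize_scalar

n_minus, n_plus = 1.0, 1.5                            # indices for x < 0 and x > 0
A, B = np.array([-1.0, 0.0]), np.array([1.0, 1.0])    # fixed endpoints

def optical_length(y0):
    # Two straight segments A -> (0, y0) -> B, weighted by the refractive indices
    P = np.array([0.0, y0])
    return n_minus * np.linalg.norm(P - A) + n_plus * np.linalg.norm(B - P)

y0 = minimize_scalar(optical_length, bounds=(-5.0, 5.0), method='bounded').x
P = np.array([0.0, y0])

# Sines of the angles each segment makes with the x axis (the interface normal)
sin_in = (P - A)[1] / np.linalg.norm(P - A)
sin_out = (B - P)[1] / np.linalg.norm(B - P)
print(n_minus * sin_in, n_plus * sin_out)             # the two products agree
```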
It is expedient to use vector notation: let $X = (x_1, x_2, x_3)$, let $t$ be a parameter, let $X(t)$ be the parametric representation of a curve $C$, and let $\dot{X}(t)$ be its tangent vector. The optical length of the curve is given by
$$A[C] = \int_{t_0}^{t_1} n(X)\sqrt{\dot{X}\cdot\dot{X}}\, dt.$$

Note that this integral is invariant with respect to changes in the parametric representation of $C$. The Euler–Lagrange equations for a minimizing curve have the symmetric form
$$\frac{d}{dt}P = \sqrt{\dot{X}\cdot\dot{X}}\,\nabla n,$$
where
$$P = \frac{n(X)\dot{X}}{\sqrt{\dot{X}\cdot\dot{X}}}.$$

It follows from the definition that $P$ satisfies
$$P \cdot P = n(X)^2.$$

Therefore, the integral may also be written as
$$A[C] = \int_{t_0}^{t_1} P\cdot\dot{X}\, dt.$$

This form suggests that if we can find a function $\psi$ whose gradient is given by $P$, then the integral $A$ is given by the difference of $\psi$ at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces of $\psi$. In order to find such a function, we turn to the wave equation, which governs the propagation of light. This formalism is used in the context of Lagrangian optics and Hamiltonian optics.

The wave equation for an inhomogeneous medium is
$$u_{tt} = c^2\,\nabla\cdot\nabla u,$$
where $c$ is the velocity, which generally depends upon $X$. Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfy
$$\varphi_t^2 = c(X)^2\,\nabla\varphi\cdot\nabla\varphi.$$

We may look for solutions in the form
$$\varphi(t, X) = t - \psi(X).$$

In that case, $\psi$ satisfies
$$\nabla\psi\cdot\nabla\psi = n^2,$$
where $n = 1/c$. According to the theory of first-order partial differential equations, if $P = \nabla\psi$, then $P$ satisfies
$$\frac{dP}{ds} = n\,\nabla n,$$
along a system of curves (the light rays) that are given by
$$\frac{dX}{ds} = P.$$

These equations for solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identification
$$\frac{ds}{dt} = \frac{\sqrt{\dot{X}\cdot\dot{X}}}{n}.$$

We conclude that the function $\psi$ is the value of the minimizing integral $A$ as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding to the wave equation. Hence, solving the associated partial differential equation of first order is equivalent to finding families of solutions of the variational problem. This is the essential content of the Hamilton–Jacobi theory, which applies to more general variational problems.
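The ray equations $dX/ds = P$, $dP/ds = n\,\nabla n$ can be integrated directly. The following sketch (not from the article) traces a ray through an illustrative gradient-index medium $n(X) = 1 + 0.1\,x_2$ in two dimensions and checks the invariant $P\cdot P = n(X)^2$; the index profile, initial data, and use of scipy.integrate.solve_ivp are assumptions made purely for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def n(X):
    return 1.0 + 0.1 * X[1]          # illustrative gradient-index medium

def grad_n(X):
    return np.array([0.0, 0.1])

def rhs(s, state):
    # state = (X, P); ray equations dX/ds = P, dP/ds = n * grad(n)
    X, P = state[:2], state[2:]
    return np.concatenate([P, n(X) * grad_n(X)])

# Start at the origin, moving along the x1 axis; |P| = n(X0) so that
# the invariant P . P = n(X)^2 holds initially.
X0 = np.array([0.0, 0.0])
P0 = np.array([n(X0), 0.0])
sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([X0, P0]), max_step=0.01)

# The ray bends toward larger n (upward here); P . P - n(X)^2 stays near zero.
X_end, P_end = sol.y[:2, -1], sol.y[2:, -1]
print(X_end, P_end @ P_end - n(X_end)**2)
```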
In classical mechanics, the action, $S$, is defined as the time integral of the Lagrangian, $L$. The Lagrangian is the difference of energies,
$$L = T - U,$$
where $T$ is the kinetic energy of a mechanical system and $U$ its potential energy. Hamilton's principle (or the action principle) states that the motion of a conservative holonomic (integrable constraints) mechanical system is such that the action integral
$$S = \int_{t_0}^{t_1} L(x, \dot{x}, t)\, dt$$
is stationary with respect to variations in the path $x(t)$. The Euler–Lagrange equations for this system are known as Lagrange's equations:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = \frac{\partial L}{\partial x},$$
and they are equivalent to Newton's equations of motion (for such systems).

The conjugate momenta $p$ are defined by
$$p = \frac{\partial L}{\partial \dot{x}}.$$
For example, if
$$T = \frac{1}{2}m\dot{x}^2,$$
then
$$p = m\dot{x}.$$
Hamiltonian mechanics results if the conjugate momenta are introduced in place of $\dot{x}$ by a Legendre transformation of the Lagrangian $L$ into the Hamiltonian $H$ defined by
$$H(x, p, t) = p\,\dot{x} - L(x, \dot{x}, t).$$
The Hamiltonian is the total energy of the system: $H = T + U$. Analogy with Fermat's principle suggests that solutions of Lagrange's equations (the particle trajectories) may be described in terms of level surfaces of some function of $X$. This function is a solution of the Hamilton–Jacobi equation:
$$\frac{\partial\psi}{\partial t} + H\left(x, \frac{\partial\psi}{\partial x}, t\right) = 0.$$
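As a brief illustration (not part of the article), the following SymPy sketch derives Lagrange's equation for a one-dimensional harmonic oscillator; the Lagrangian $L = T - U$ with $T = \tfrac{1}{2}m\dot{x}^2$ and $U = \tfrac{1}{2}kx^2$ is an arbitrary textbook choice, and the printed equation is equivalent to Newton's second law $m\ddot{x} = -kx$.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')

T = sp.Rational(1, 2) * m * x(t).diff(t)**2   # kinetic energy
U = sp.Rational(1, 2) * k * x(t)**2           # potential energy (harmonic)
L = T - U                                     # Lagrangian

# Lagrange's equation: d/dt (dL/dxdot) - dL/dx = 0
(eq,) = euler_equations(L, x(t), t)
print(eq)   # equivalent to m*x''(t) = -k*x(t)

# Conjugate momentum and Hamiltonian (total energy)
p = sp.diff(L, x(t).diff(t))                  # p = m*xdot
H = sp.simplify(p * x(t).diff(t) - L)         # H = T + U
print(p, H)
```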
Further applications of the calculus of variations include the following:

Calculus of variations is concerned with variations of functionals, which are small changes in the functional's value due to small changes in the function that is its argument. The first variation[l] is defined as the linear part of the change in the functional, and the second variation[m] is defined as the quadratic part.[26]

For example, if $J[y]$ is a functional with the function $y = y(x)$ as its argument, and there is a small change in its argument from $y$ to $y + h$, where $h = h(x)$ is a function in the same function space as $y$, then the corresponding change in the functional is[n]
$$\Delta J[h] = J[y + h] - J[y].$$

The functional $J[y]$ is said to be differentiable if
$$\Delta J[h] = \varphi[h] + \varepsilon\|h\|,$$
where $\varphi[h]$ is a linear functional,[o] $\|h\|$ is the norm of $h$,[p] and $\varepsilon \to 0$ as $\|h\| \to 0$. The linear functional $\varphi[h]$ is the first variation of $J[y]$ and is denoted by[30]
$$\delta J[h] = \varphi[h].$$

The functional $J[y]$ is said to be twice differentiable if
$$\Delta J[h] = \varphi_1[h] + \varphi_2[h] + \varepsilon\|h\|^2,$$
where $\varphi_1[h]$ is a linear functional (the first variation), $\varphi_2[h]$ is a quadratic functional,[q] and $\varepsilon \to 0$ as $\|h\| \to 0$. The quadratic functional $\varphi_2[h]$ is the second variation of $J[y]$ and is denoted by[32]
$$\delta^2 J[h] = \varphi_2[h].$$

The second variation $\delta^2 J[h]$ is said to be strongly positive if
$$\delta^2 J[h] \geq k\|h\|^2$$
for all $h$ and for some constant $k > 0$.[33]

Using the above definitions, especially the definitions of first variation, second variation, and strongly positive, the following sufficient condition for a minimum of a functional can be stated.

Sufficient condition for a minimum: The functional $J[y]$ has a minimum at $y = \hat{y}$ if its first variation $\delta J[h] = 0$ at $y = \hat{y}$ and its second variation $\delta^2 J[h]$ is strongly positive at $y = \hat{y}$.
https://en.wikipedia.org/wiki/Calculus_of_variations
Probabilistic latent semantic analysis (PLSA), also known as probabilistic latent semantic indexing (PLSI, especially in information retrieval circles), is a statistical technique for the analysis of two-mode and co-occurrence data. In effect, one can derive a low-dimensional representation of the observed variables in terms of their affinity to certain hidden variables, just as in latent semantic analysis, from which PLSA evolved.

Compared to standard latent semantic analysis, which stems from linear algebra and downsizes the occurrence tables (usually via a singular value decomposition), probabilistic latent semantic analysis is based on a mixture decomposition derived from a latent class model.

Considering observations in the form of co-occurrences $(w, d)$ of words and documents, PLSA models the probability of each co-occurrence as a mixture of conditionally independent multinomial distributions:
$$P(w, d) = \sum_c P(c)\,P(d \mid c)\,P(w \mid c) = P(d)\sum_c P(c \mid d)\,P(w \mid c),$$
with $c$ being the words' topic. Note that the number of topics is a hyperparameter that must be chosen in advance and is not estimated from the data. The first formulation is the symmetric formulation, where $w$ and $d$ are both generated from the latent class $c$ in similar ways (using the conditional probabilities $P(d \mid c)$ and $P(w \mid c)$), whereas the second formulation is the asymmetric formulation, where, for each document $d$, a latent class is chosen conditionally to the document according to $P(c \mid d)$, and a word is then generated from that class according to $P(w \mid c)$. Although we have used words and documents in this example, the co-occurrence of any couple of discrete variables may be modelled in exactly the same way.

So, the number of parameters is equal to $cd + wc$. The number of parameters grows linearly with the number of documents. In addition, although PLSA is a generative model of the documents in the collection it is estimated on, it is not a generative model of new documents. The parameters are learned using the EM algorithm.

PLSA may be used in a discriminative setting, via Fisher kernels.[1] PLSA has applications in information retrieval and filtering, natural language processing, machine learning from text, bioinformatics,[2] and related areas. It is reported that the aspect model used in the probabilistic latent semantic analysis has severe overfitting problems.[3]

This is an example of a latent class model (see references therein), and it is related[6][7] to non-negative matrix factorization. The present terminology was coined in 1999 by Thomas Hofmann.[8]
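A compact EM sketch for the mixture model described above is given below. It is not Hofmann's original implementation; the toy corpus, random initialization, and iteration count are arbitrary choices, and the arrays P_c_given_d and P_w_given_c correspond to $P(c \mid d)$ and $P(w \mid c)$ in the asymmetric formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = rng.poisson(1.0, size=(20, 50))               # toy corpus: 20 docs, 50 words
D, W = N.shape
K = 4                                             # number of topics (hyperparameter)

P_c_given_d = rng.dirichlet(np.ones(K), size=D)   # P(c|d), shape (D, K)
P_w_given_c = rng.dirichlet(np.ones(W), size=K)   # P(w|c), shape (K, W)

for _ in range(100):
    # E step: responsibilities P(c | d, w) proportional to P(c|d) * P(w|c)
    joint = P_c_given_d[:, :, None] * P_w_given_c[None, :, :]   # (D, K, W)
    resp = joint / joint.sum(axis=1, keepdims=True)

    # M step: re-estimate P(w|c) and P(c|d) from expected counts
    expected = N[:, None, :] * resp                              # (D, K, W)
    P_w_given_c = expected.sum(axis=0)
    P_w_given_c /= P_w_given_c.sum(axis=1, keepdims=True)
    P_c_given_d = expected.sum(axis=2)
    P_c_given_d /= P_c_given_d.sum(axis=1, keepdims=True)

# Log-likelihood of the counts under the fitted model
P_w_given_d = P_c_given_d @ P_w_given_c                          # (D, W)
print(np.sum(N * np.log(P_w_given_d + 1e-12)))
```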
https://en.wikipedia.org/wiki/Probabilistic_latent_semantic_indexing
David Meir Blei is a professor in the Statistics and Computer Science departments at Columbia University. Prior to fall 2014 he was an associate professor in the Department of Computer Science at Princeton University. His work is primarily in machine learning. His research interests include topic models and he was one of the original developers of latent Dirichlet allocation, along with Andrew Ng and Michael I. Jordan. As of June 18, 2020, his publications have been cited 109,821 times, giving him an h-index of 97.[1]

Blei received the ACM Infosys Foundation Award in 2013. (This award is given to a computer scientist under the age of 45. It has since been renamed the ACM Prize in Computing.) He was named Fellow of ACM "For contributions to the theory and practice of probabilistic topic modeling and Bayesian machine learning" in 2015.[2]
https://en.wikipedia.org/wiki/David_Blei
Andrew Yan-Tak Ng (Chinese: 吳恩達; born April 18, 1976[2]) is a British-American computer scientist and technology entrepreneur focusing on machine learning and artificial intelligence (AI).[3] Ng was a cofounder and head of Google Brain and was the former Chief Scientist at Baidu, building the company's Artificial Intelligence Group into a team of several thousand people.[4]

Ng is an adjunct professor at Stanford University (formerly associate professor and Director of its Stanford AI Lab or SAIL). Ng has also worked in the field of online education, cofounding Coursera and DeepLearning.AI.[5] He has spearheaded many efforts to "democratize deep learning", teaching over 8 million students through his online courses.[6][3][7] Ng is renowned globally in computer science, recognized in Time magazine's 100 Most Influential People in 2012 and Fast Company's Most Creative People in 2014. His influence extends to being named in the Time100 AI Most Influential People in 2023.[7]

In 2018, he launched and currently heads the AI Fund, initially a $175-million investment fund for backing artificial intelligence startups. He has founded Landing AI, which provides AI-powered SaaS products.[8]

On April 11, 2024, Amazon announced the appointment of Ng to its board of directors.[9]

Ng was born in London, United Kingdom,[10] in 1976 to Ronald Paul Ng, a hematologist and lecturer at UCL Medical School, and Tisa Ho, an arts administrator working at the London Film Festival.[11][12][13] His parents were both immigrants from Hong Kong. He has at least one brother.[12] Ng and his family moved back to Hong Kong and he spent his early years there. At the age of six he began learning the basics of programming through some books. In 1984 he and his family moved to Singapore.[10] Ng attended and graduated from Raffles Institution.[14] During his high school years, he demonstrated exceptional mathematical ability, winning a Silver Medal at the International Mathematical Olympiad.[15]

In 1997, he earned his undergraduate degree with a triple major in computer science, statistics, and economics from Carnegie Mellon University in Pittsburgh, Pennsylvania. Between 1996 and 1998 he also conducted research on reinforcement learning, model selection, and feature selection at the AT&T Bell Labs.[16]

In 1998, Ng earned his master's degree in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. At MIT, he built the first publicly available, automatically indexed web-search engine for research papers on the web. It was a precursor to CiteSeerX/ResearchIndex, but specialized in machine learning.[16]

In 2002, he received his Doctor of Philosophy (Ph.D.) in Computer Science from the University of California, Berkeley, under the supervision of Michael I. Jordan. His thesis is titled "Shaping and policy search in reinforcement learning" and is well-cited to this day.[16][17]

He started working as an assistant professor at Stanford University in 2002 and as an associate professor in 2009.[18]

He currently lives in Los Altos Hills, California. In 2014, he married Carol E. Reiley.[19] They have two children: a daughter born in 2019[20] and a son born in 2021.[21] The MIT Technology Review named Ng and Reiley an "AI power couple".[22][23]

Ng is a professor at Stanford University's departments of Computer Science and Electrical Engineering. He served as the director of the Stanford Artificial Intelligence Laboratory (SAIL), where he taught students and undertook research related to data mining, big data, and machine learning.
His machine learning course CS229 at Stanford is the most popular course offered on campus, with over 1,000 students enrolling some years.[24][25] As of 2020, three of the most popular courses on Coursera are Ng's: Machine Learning (#1), AI for Everyone (#5), Neural Networks and Deep Learning (#6).[26]

In 2008, his group at Stanford was one of the first in the US to start advocating the use of GPUs in deep learning.[citation needed] The rationale was that an efficient computation infrastructure could speed up statistical model training by orders of magnitude, ameliorating some of the scaling issues associated with big data. At the time it was a controversial and risky decision, but since then and following Ng's lead, GPUs have become a cornerstone in the field. Since 2017, Ng has been advocating the shift to high-performance computing (HPC) for scaling up deep learning and accelerating progress in the field.[citation needed]

In 2012, along with Stanford computer scientist Daphne Koller, he cofounded and was CEO of Coursera, a website that offers free online courses to everyone.[3][failed verification] It took off with over 100,000 students registered for Ng's popular CS229A course.[27] Today, several million people have enrolled in Coursera courses, making the site one of the leading massive open online courses (MOOCs) in the world.

From 2011 to 2012, he worked at Google, where he founded and directed the Google Brain Deep Learning Project with Jeff Dean, Greg Corrado, and Rajat Monga. In 2014, he joined Baidu as chief scientist, and carried out research related to big data and AI.[28] There he set up several research teams for things like facial recognition and Melody, an AI chatbot for healthcare.[4] He also developed for the company the AI platform called DuerOS and other technologies that positioned Baidu ahead of Google in the discourse and development of AI.[29] In March 2017, he announced his resignation from Baidu.[3][30]

He soon afterward launched DeepLearning.AI, an online series of deep learning courses (including the AI for Good Specialization).[31] Then Ng launched Landing AI, which provides AI-powered SaaS products.[32]

In January 2018, Ng unveiled the AI Fund, raising $175 million to invest in new startups.[33] In November 2021, Landing AI secured a $57 million round of series A funding led by McRock Capital, to help manufacturers adopt computer vision.[34]

In October 2024, Ng's AI Fund made its first investment in India, backing AI healthcare startup Jivi, which uses AI for diagnoses, treatment recommendations, and administrative tasks. The investment highlights the growth of India's AI sector, expected to reach $22 billion by 2027.[35]

Ng researches primarily in machine learning, deep learning, machine perception, computer vision, and natural language processing, and is one of the world's most famous and influential computer scientists.[36] He has frequently won best paper awards at academic conferences and has had a huge impact on the field of AI, computer vision, and robotics.[37][38]

During graduate school, together with David M. Blei and Michael I.
Jordan, Ng co-authored the influential paper that introduced latent Dirichlet allocation (LDA), alongside his thesis work on reinforcement learning.[39]

His early work includes the Stanford Autonomous Helicopter project, which developed one of the most capable autonomous helicopters in the world.[40][41] He was the leading scientist and principal investigator on the STAIR (Stanford Artificial Intelligence Robot) project,[42] which resulted in Robot Operating System (ROS), a widely used open source software robotics platform. His vision to build an AI robot and put a robot in every home inspired Scott Hassan to back him and create Willow Garage.[43] He is also one of the founding team members for the Stanford WordNet project, which uses machine learning to expand the Princeton WordNet database created by Christiane Fellbaum.[16][44]

In 2011, Ng founded the Google Brain project at Google, which developed large-scale artificial neural networks using Google's distributed computing infrastructure.[45] Among its notable results was a neural network trained using deep learning algorithms on 16,000 CPU cores, which learned to recognize cats after watching only YouTube videos, and without ever having been told what a "cat" is.[46][47] The project's technology is also currently used in the Android operating system's speech recognition system.[48]

In 2011, Stanford launched a total of three massive open online courses (MOOCs) on machine learning (CS229a), databases, and AI, taught by Ng, Peter Norvig, Sebastian Thrun, and Jennifer Widom.[50][51] This has led to the modern MOOC movement. Ng taught machine learning and Widom taught databases. The course on AI taught by Thrun led to the genesis of Udacity.[50]

The seeds of massive open online courses (MOOCs) go back a few years before the founding of Coursera in 2012. Two themes emphasized in the founding of modern MOOCs were scale and availability.[50]

By 2023, Ng has notably expanded access to AI education, with an estimated 8 million individuals worldwide taking his courses via platforms like DeepLearning.AI and Coursera.[7]

Ng started the Stanford Engineering Everywhere (SEE) program, which in 2008 published a number of Stanford courses online for free. Ng taught one of these courses, "Machine Learning", which includes his video lectures, along with the student materials used in the Stanford CS229 class. It offered a similar experience to MIT OpenCourseWare, except it aimed at providing a more "complete course" experience, equipped with lectures, course materials, problems and solutions, etc. The SEE videos were viewed by the millions and inspired Ng to develop and iterate new versions of online tech.[50]

Within Stanford, collaborators and inspirations include Daphne Koller, with her "blended learning experiences" and codesigning a peer-grading system, John Mitchell (Courseware, a Learning Management System), Dan Boneh (using machine learning to sync videos, later teaching cryptography on Coursera), Bernd Girod (ClassX), and others. Outside Stanford, Ng and Thrun credit Sal Khan of Khan Academy as a huge source of inspiration. Ng was also inspired by lynda.com and the design of the forums of Stack Overflow.[50]

Widom, Ng, and others were ardent advocates of Khan-styled tablet recordings, and between 2009 and 2011, several hundred hours of lecture videos were recorded by Stanford instructors and uploaded.
Ng tested some of the original designs with a local high school to figure out the best practices for recording lessons.[50]

In October 2011, the "applied" version of the Stanford class (CS229a) was hosted on ml-class.org and launched, with over 100,000 students registered for its first edition. The course featured quizzes and graded programming assignments and became one of the first and most successful massive open online courses (MOOCs) created by a Stanford professor.[52]

Two other courses on databases (db-class.org) and AI (ai-class.org) were launched. The ml-class and db-class ran on a platform developed by students, including Frank Chen, Jiquan Ngiam, Chuan-Yu Foo, and Yifan Mai. Word spread through social media and popular press. The three courses were 10 weeks long, and over 40,000 "Statements of Accomplishment" were awarded.[50]

His work subsequently led to his founding of Coursera with Koller in 2012. As of 2019, the two most popular courses on the platform were taught and designed by Ng: "Machine Learning" (#1) and "Neural Networks and Deep Learning" (#2). In 2019, Ng launched a new course, "AI for Everyone". This is a non-technical course designed to help people understand AI's impact on society and its benefits and costs for companies, as well as how they can navigate through this technological revolution.[53]

Ng is the chair of the board for Woebot Labs, a psychological clinic that uses data science to provide cognitive behavioral therapy. It provides a therapy chatbot to help treat depression, among other things.[54]

He is also a member of the board of directors for drive.ai, which uses AI for self-driving cars and was acquired by Apple in 2019.[55][56]

Through Landing AI, he also focuses on democratizing AI technology and lowering the barrier for entrance to businesses and developers.[8]

Ng is also the author or co-author of over 300 publications in robotics and related fields.[57] His work in computer vision and deep learning has been featured often in press releases and reviews.[58]

He has co-refereed hundreds of AI publications in venues like NeurIPS. He has also been the editor of the Journal of Artificial Intelligence Research (JAIR), Associate Editor for the IEEE Robotics and Automation Society Conference Editorial Board (ICRA), and much more.[16]

He has given invited talks at NASA, Google, Microsoft, Lockheed Martin, the Max Planck Society, Stanford, Princeton, UPenn, Cornell, MIT, UC Berkeley, and dozens of other universities. Outside of the US, he has lectured in Spain, Germany, Israel, China, Korea, and Canada.[16]

He has also written for Harvard Business Review, HuffPost, Slate, Apple News, and Quora Sessions' Twitter.[citation needed] He also writes a weekly digital newsletter called The Batch. He also wrote a book, Machine Learning Yearning, a practical guide for those interested in machine learning, which he distributed for free.[70] In December 2018, he wrote a sequel called AI Transformation Playbook.[71]

Ng contributed one chapter to Architects of Intelligence: The Truth About AI from the People Building it (2018) by the American futurist Martin Ford.
Ng thinks that the real threat is contemplating the future of work: "Rather than being distracted by evil killer robots, the challenge to labor caused by these machines is a conversation that academia and industry and government should have."[72] He has emphasized the importance of expanding access to AI education, stating that empowering people around the world to use AI tools is essential to building AI applications.[7]

In a December 2023 Financial Times interview, Ng highlighted concerns regarding the impact of potential regulations on open-source AI, emphasizing how reporting, licensing, and liability risks could unfairly burden smaller firms and stifle innovation. He argued that regulating basic technologies like open-source models could hinder progress without markedly enhancing safety. Ng advocated for carefully designed regulations to prevent obstacles to the development and distribution of beneficial AI technologies.[73]

In a June 2024 interview with the Financial Times, Ng expressed concerns about proposed AI legislation in California that would have required developers to implement safety mechanisms such as a "kill switch" for advanced models. He described the bill as creating "massive liabilities for science-fiction risks" and said it "stokes fear in anyone daring to innovate." Other critics argued the bill would impose burdens on open-source developers and smaller AI companies.[74] The bill was ultimately vetoed by Governor Gavin Newsom in September 2024.[75]
https://en.wikipedia.org/wiki/Andrew_Ng
Michael Irwin Jordan ForMemRS[6] (born February 25, 1956) is an American scientist, professor at the University of California, Berkeley, research scientist at Inria Paris, and researcher in machine learning, statistics, and artificial intelligence.[7][8][9] Jordan was elected a member of the National Academy of Engineering in 2010 for contributions to the foundations and applications of machine learning. He is one of the leading figures in machine learning, and in 2016 Science reported him as the world's most influential computer scientist.[10][11][12][13][14][15] In 2022, Jordan won the inaugural World Laureates Association Prize in Computer Science or Mathematics, "for fundamental contributions to the foundations of machine learning and its application."[16][17][18] Jordan received a Bachelor of Science magna cum laude in psychology from Louisiana State University in 1978, a Master of Science in mathematics from Arizona State University in 1980, and a Doctor of Philosophy in cognitive science from the University of California, San Diego in 1985.[19] At UC San Diego, Jordan was a student of David Rumelhart and a member of the Parallel Distributed Processing (PDP) Group in the 1980s. Jordan is the Pehong Chen Distinguished Professor at the University of California, Berkeley, where his appointment is split across EECS and Statistics. He was a professor at the Department of Brain and Cognitive Sciences at MIT from 1988 to 1998.[19] In the 1980s Jordan started developing recurrent neural networks as a cognitive model. In recent years, his work has been driven less by a cognitive perspective and more by the background of traditional statistics. Jordan popularised Bayesian networks in the machine learning community and is known for pointing out links between machine learning and statistics. He was also prominent in the formalisation of variational methods for approximate inference[1] and the popularisation of the expectation–maximization algorithm[20] in machine learning. In 2001, Jordan and others resigned from the editorial board of the journal Machine Learning. In a public letter, they argued for less restrictive access and pledged support for a new open access journal, the Journal of Machine Learning Research, which was created by Leslie Kaelbling to support the evolution of the field of machine learning.[21] Jordan has received numerous awards, including a best student paper award[22] (with X. Nguyen and M. Wainwright) at the International Conference on Machine Learning (ICML 2004), a best paper award (with R. Jacobs) at the American Control Conference (ACC 1991), the ACM-AAAI Allen Newell Award, the IEEE Neural Networks Pioneer Award, and an NSF Presidential Young Investigator Award.
In 2002 he was named an AAAI Fellow "for significant contributions to reasoning under uncertainty, machine learning, and human motor control."[23] In 2004 he was named an IMS Fellow "for contributions to graphical models and machine learning."[24] In 2005 he was named an IEEE Fellow "for contributions to probabilistic graphical models and neural information processing systems."[25] In 2007 he was named an ASA Fellow.[26] In 2010 he was named a Cognitive Science Society Fellow[19][27] and an ACM Fellow "for contributions to the theory and application of machine learning."[28] In 2012 he was named a SIAM Fellow "for contributions to machine learning, in particular variational approaches to statistical inference."[29] In 2014 he was named an International Society for Bayesian Analysis Fellow "for his outstanding research contributions at the interface of statistics, computer sciences and probability, for his leading role in promoting Bayesian methods in machine learning, engineering and other fields, and for his extensive service to ISBA in many roles."[30] Jordan is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.[citation needed] He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics. He received the David E. Rumelhart Prize in 2015 and the ACM/AAAI Allen Newell Award in 2009. He also won the 2020 IEEE John von Neumann Medal. In 2016, Jordan was identified as the "most influential computer scientist", based on an analysis of the published literature by the Semantic Scholar project.[31] In 2019, Jordan argued that the artificial intelligence revolution had not yet happened and that it would require a blending of computer science with statistics.[32] In 2022, Jordan was awarded the inaugural World Laureates Association Prize by the non-governmental, non-profit international organization World Laureates Association, for fundamental contributions to the foundations of machine learning and its application.[33][34] For 2024 he received the BBVA Foundation Frontiers of Knowledge Award in the category of "Information and Communication Technologies".[35]
https://en.wikipedia.org/wiki/Michael_I._Jordan
L2 Syntactic Complexity Analyzer (L2SCA), developed by Xiaofei Lu at the Pennsylvania State University, is a computational tool which produces syntactic complexity indices of written English language texts.[1] Along with Coh-Metrix, the L2SCA is one of the most extensively used computational tools to compute indices of second language writing development. The L2SCA is also widely utilised in the field of corpus linguistics.[2] The L2SCA is available in single and batch modes. The former analyzes a single written text and produces 14 syntactic complexity indices;[3] the latter allows the user to analyze 30 written texts simultaneously. The L2SCA has been used in numerous studies in the field of second language writing development to compute indices of syntactic complexity.[4][5][6] The L2SCA has also been used in various studies in the field of corpus linguistics.[7][8]
https://en.wikipedia.org/wiki/L2_Syntactic_Complexity_Analyzer
Cloaking is a search engine optimization (SEO) technique in which the content presented to the search engine spider is different from that presented to the user's browser. This is done by delivering content based on the IP address or the User-Agent HTTP header of the user requesting the page. When a user is identified as a search engine spider, a server-side script delivers a different version of the web page, one that contains content not present on the visible page, or that is present but not searchable. The purpose of cloaking is sometimes to deceive search engines so they display the page when it would not otherwise be displayed (black-hat SEO). However, it can also be a functional (though antiquated) technique for informing search engines of content they would not otherwise be able to locate because it is embedded in non-textual containers, such as video or certain Adobe Flash components. Since 2006, better methods of accessibility, including progressive enhancement, have been available, so cloaking is no longer necessary for regular SEO.[1] Cloaking is often used as a spamdexing technique to attempt to sway search engines into giving the site a higher ranking. By the same method, it can also be used to trick search engine users into visiting a site that is substantially different from the search engine description, including delivering pornographic content cloaked within non-pornographic search results. Cloaking is a form of the doorway page technique. A similar technique is used on the DMOZ web directory, but it differs in several ways from search engine cloaking. IP delivery can be considered a more benign variation of cloaking, where different content is served based upon the requester's IP address. With cloaking, search engines and people never see the other's pages, whereas, with other uses of IP delivery, both search engines and people can see the same pages. This technique is sometimes used by graphics-heavy sites that have little textual content for spiders to analyze.[2] One use of IP delivery is to determine the requester's location and deliver content specifically written for that country. This isn't necessarily cloaking. For instance, Google uses IP delivery for its AdWords and AdSense advertising programs to target users in different geographic locations. IP delivery is a crude and unreliable method of determining the language in which to provide content. Many countries and regions are multilingual, or the requester may be a foreign national. A better method of content negotiation is to examine the client's Accept-Language HTTP header.
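The mechanics described above can be illustrated with a short, purely hypothetical sketch (it is not taken from any real site). Written as a minimal Python WSGI handler, it shows how a server-side script might branch on the User-Agent header, the pattern that cloaking relies on, and how the more legitimate Accept-Language negotiation mentioned above differs; the bot token list and the page bodies are invented for illustration.

# Minimal, hypothetical sketch of header-based content selection.
# It illustrates the mechanism only; real cloaking scripts and real
# content negotiation are both considerably more involved.

KNOWN_BOT_TOKENS = ("googlebot", "bingbot", "duckduckbot")  # illustrative list

def is_search_spider(environ):
    """Crude User-Agent check of the kind cloaking scripts rely on."""
    user_agent = environ.get("HTTP_USER_AGENT", "").lower()
    return any(token in user_agent for token in KNOWN_BOT_TOKENS)

def preferred_language(environ, supported=("en", "de", "fr"), default="en"):
    """Legitimate content negotiation: pick a language from Accept-Language."""
    header = environ.get("HTTP_ACCEPT_LANGUAGE", "")
    for part in header.split(","):
        lang = part.split(";")[0].strip().lower()[:2]
        if lang in supported:
            return lang
    return default

def app(environ, start_response):
    if is_search_spider(environ):
        # A cloaking script would serve keyword-stuffed, spider-only
        # content here, invisible to ordinary visitors.
        body = b"<html>spider-only version</html>"
    else:
        lang = preferred_language(environ)
        body = f"<html>visitor version ({lang})</html>".encode()
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body]

The sketch can be served locally with Python's built-in wsgiref.simple_server to observe how the response changes with the request headers.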
https://en.wikipedia.org/wiki/Cloaking
A content farm or content mill is an organization focused on generating a large amount of web content, often specifically designed to satisfy algorithms for maximal retrieval by search engines, a practice known as search engine optimization (SEO). Such organizations often employ freelance creators or use artificial intelligence (AI) tools, with the goal of generating a high volume of content at minimal time and cost. The primary goal is to attract as many page views as possible, and thus generate more advertising revenue.[1] The emergence of these media outlets is often tied to the demand for "true market demand" content based on search engine queries.[1] Some content farms produce thousands of articles each month using freelance writers or AI tools. For example, in 2009, Wired reported that Demand Media—owner of eHow—was publishing one million items per month, the equivalent of four English-language Wikipedias annually.[2] Another notable example was Associated Content, purchased by Yahoo! in 2010 for $90 million, which later became Yahoo! Voices before shutting down in 2014.[3][4] Pay scales for writers at content farms are low compared to traditional salaries. For instance, writers may be compensated $3.50 per article, though some prolific contributors can produce enough content to earn a living.[5] Writers are often not experts in the topics they cover.[6] Since the rise of large language models like ChatGPT, content farms have shifted towards AI-generated content. A report by NewsGuard in 2023 identified over 140 internationally recognized brands supporting AI-driven content farms.[7] AI tools allow these sites to generate hundreds of articles daily, often with minimal human oversight.[8] Critics argue that content farms prioritize SEO and ad revenue over factual accuracy and relevance.[9] Critics also highlight the potential for misinformation, such as conspiracy theories and fake product reviews, being spread through AI-generated content.[10] Some have compared content farms to the fast food industry, calling them "fast content" providers that pollute the web with low-value material.[11] The "sponsored" label on search results has also raised questions about the reliability of such sites, since their placement at the top of the results was likely paid for.[12] Search engines like Google have taken steps to limit the influence of content farms. In 2011, Google introduced the Google Panda update to lower the rankings of low-quality websites.[13] Other search engines, like DuckDuckGo, have also implemented measures to block low-quality AI-driven sites.[14]
https://en.wikipedia.org/wiki/Content_farm
Doorway pages (bridge pages, portal pages, jump pages, gateway pages or entry pages) are web pages that are created for the deliberate manipulation of search engine indexes (spamdexing). A doorway page will affect the index of a search engine by inserting results for particular phrases while sending visitors to a different page. Doorway pages that redirect visitors without their knowledge use some form of cloaking. This usually falls under black-hat SEO. If a visitor clicks through to a typical doorway page from a search engine results page, in most cases they will be redirected with a fast meta refresh command to another page. Other forms of redirection include the use of JavaScript and server-side redirection, from the server configuration file. Some doorway pages may be dynamic pages generated by scripting languages such as Perl and PHP. Doorway pages are often easy to identify in that they have been designed primarily for search engines, not for human beings. Sometimes a doorway page is copied from another high-ranking page, but this is likely to cause the search engine to detect the page as a duplicate and exclude it from the search engine listings. Because many search engines give a penalty for using the meta refresh command,[1] some doorway pages just trick the visitor into clicking on a link to get them to the desired destination page, or they use JavaScript for redirection. More sophisticated doorway pages, called content-rich doorways, are designed to gain high placement in search results without using redirection. They incorporate at least a minimum amount of design and navigation similar to the rest of the site to provide a more human-friendly and natural appearance. Visitors are offered standard links as calls to action. Landing pages are regularly misconstrued as equivalent to doorway pages within the literature. The former are content-rich pages to which traffic is directed within the context of pay-per-click campaigns and to maximize SEO campaigns. Doorway pages are also typically used for sites that maintain a blacklist of URLs known to harbor spam, such as Facebook, Tumblr, and DeviantArt. Doorway pages often also employ cloaking techniques for misdirection. Cloaked pages show a version of the page to human visitors that is different from the one provided to crawlers—usually implemented via server-side scripts. The server can differentiate between bots, crawlers, and human visitors based on various flags, including the source IP address or user-agent. Cloaking simultaneously tricks search engines into ranking sites higher for irrelevant keywords, while monetizing any human traffic by showing visitors spammy, often irrelevant content. The practice of cloaking is considered to be highly manipulative and is condemned within the SEO industry and by search engines, and its use can result in significant penalties or the complete removal of sites from the index.[2] Webmasters that use doorway pages would generally prefer that users never actually see these pages and instead be delivered to a "real" page within their sites. To achieve this goal, redirection is sometimes used. This may be as simple as installing a meta refresh tag on the doorway pages. An advanced system might make use of cloaking. In either case, such redirection may make the doorway pages unacceptable to search engines. A content-rich doorway page must be constructed in a search-engine-friendly manner, or it may be construed as search engine spam, possibly resulting in the page being banned from the index for an undisclosed amount of time.
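Because a fast meta refresh is one of the telltale signs described above, a page can be checked for it programmatically. The following rough sketch assumes the third-party requests and BeautifulSoup libraries; the three-second threshold and the example URL are arbitrary illustrative choices, not rules used by any particular search engine.

# Rough sketch: flag pages that use a fast <meta http-equiv="refresh">
# redirect, one of the doorway-page techniques described above.
import requests
from bs4 import BeautifulSoup

def find_meta_refresh(url, max_delay=3):
    """Return (delay, target) if the page meta-refreshes within max_delay seconds."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("meta", attrs={"http-equiv": lambda v: v and v.lower() == "refresh"})
    if tag is None or "content" not in tag.attrs:
        return None
    # The content attribute looks like "0; url=https://example.com/real-page".
    parts = tag["content"].split(";", 1)
    try:
        delay = float(parts[0].strip())
    except ValueError:
        return None
    target = ""
    if len(parts) > 1 and "=" in parts[1]:
        target = parts[1].split("=", 1)[1].strip().strip("'\"")
    return (delay, target) if delay <= max_delay else None

# Hypothetical usage:
# print(find_meta_refresh("https://example.com/suspected-doorway"))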
These types of doorways utilize a variety of techniques. Doorway pages have also been examined as a cultural and political phenomenon, along with spam poetry and flarf.[3]
https://en.wikipedia.org/wiki/Doorway_pages
Hidden text is computer text that is displayed in such a way as to be invisible or unreadable. Hidden text is most commonly achieved by setting the font colour to the same colour as the background, rendering the text invisible unless the user highlights it. Hidden text can serve several purposes. Often, websites use it to disguise spoilers for readers who do not wish to read that text. Hidden text can also be used to hide data from users who are less Internet-experienced or who are not familiar with a particular website. The term may also refer to small messages at the bottom of advertisements that are permitted by some laws to state a particular liability or requirement (also known as fine print). An example of this practice is to display an FTP password in hidden text to reduce the number of users who are able to access downloads and thereby save bandwidth. Parody sites (such as Uncyclopedia) occasionally use the technique as a joke about censorship, with the "censored" text displayed black-on-black in an obvious manner akin to a theatrical stage whisper. It is also used by websites as a spamdexing technique to fill a page with keywords that a search engine will recognize but that are not visible to a visitor. However, Google has taken steps to prevent this by parsing the colour of text as it indexes it and checking whether it is transparent, and it may penalize such pages and give them lower rankings.[1] Conversely, Project Honey Pot uses links intended only to be followed by spambots; the links point to honeypots which detect e-mail address harvesting. A link using rel="nofollow" (to hide it from legitimate search engine spiders) and hidden text (to remove it for human visitors) would remain visible to malicious bots. Compare with metadata, which is usually also hidden but is used for different purposes. Hidden characters are characters that are required for computer text to render properly but which are not a part of the content, so they are hidden. This includes characters such as those used to add a new line of text or to add space between words, commonly referred to as "white space characters".
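As a rough illustration of the spamdexing variant described above, the sketch below scans inline style attributes for text whose colour matches the declared page background. It is a deliberate simplification under stated assumptions: it only handles inline styles with identically written colour values, whereas real pages usually set colours through external CSS, so any serious detector (including the search-engine checks mentioned above) must do far more. BeautifulSoup is a third-party library.

# Simplified sketch: flag elements whose inline text colour equals the
# declared page background colour (one form of hidden text).
import re
from bs4 import BeautifulSoup

COLOR_RE = re.compile(r"(?:^|;)\s*color\s*:\s*([^;]+)", re.I)
BG_RE = re.compile(r"(?:^|;)\s*background(?:-color)?\s*:\s*([^;]+)", re.I)

def hidden_text_candidates(html):
    soup = BeautifulSoup(html, "html.parser")
    body = soup.find("body")
    bg = ""
    if body is not None and body.get("style"):
        match = BG_RE.search(body["style"])
        bg = match.group(1).strip().lower() if match else ""
    candidates = []
    for element in soup.find_all(style=True):
        match = COLOR_RE.search(element["style"])
        if match and bg and match.group(1).strip().lower() == bg:
            candidates.append(element.get_text(strip=True))
    return candidates

sample = '<body style="background-color:#fff"><p style="color:#fff">cheap keywords</p></body>'
print(hidden_text_candidates(sample))  # prints ['cheap keywords']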
https://en.wikipedia.org/wiki/Hidden_text
On the World Wide Web, a link farm is any group of websites that all hyperlink to other sites in the group for the purpose of increasing SEO rankings.[1] In graph-theoretic terms, a link farm is a clique. Although some link farms can be created by hand, most are created through automated programs and services. A link farm is a form of spamming the index of a web search engine (sometimes called spamdexing). Other link exchange systems are designed to allow individual websites to selectively exchange links with other relevant websites, and are not considered a form of spamdexing. Search engines require ways to confirm page relevancy. A known method is to examine one-way links coming directly from relevant websites. The process of building links should not be confused with being listed on link farms, as the latter requires reciprocal return links, which often renders the overall backlink advantage useless. This is due to oscillation, causing confusion over which is the vendor site and which is the promoting site. Link farms were first developed by search engine optimizers (SEOs) in 1999 to take advantage of the Inktomi search engine's dependence upon link popularity. Although link popularity is used by some search engines to help establish a ranking order for search results, the Inktomi engine at the time maintained two indexes. Search results were produced from the primary index, which was limited to approximately 100 million listings. Pages with few inbound links fell out of the Inktomi index on a monthly basis. Inktomi was targeted for manipulation through link farms because it was then used by several independent but popular search engines. Yahoo!, then the most popular search service, also used Inktomi results to supplement its directory search feature. The link farms helped stabilize listings, primarily for online business websites that had few natural links from larger, more stable sites in the Inktomi index. Link farm exchanges were at first handled on an informal basis, but several service companies were founded to provide automated registration, categorization, and link page updates to member websites. When the Google search engine became popular, search engine optimizers learned that Google's ranking algorithm depended in part on a link-weighting scheme called PageRank. Rather than simply counting all inbound links equally, the PageRank algorithm determines that some links may be more valuable than others, and therefore assigns them more weight than others. Link farming was adapted to help increase the PageRank of member pages.[2][3] However, the link farms became susceptible to manipulation by unscrupulous webmasters who joined the services, received inbound linkage, and then found ways to hide their outbound links or to avoid posting any links on their sites at all. Link farm managers had to implement quality controls and monitor member compliance with their rules to ensure fairness. Alternative link farm products emerged, particularly link-finding software that identified potential reciprocal link partners, sent them template-based emails offering to exchange links, and created directory-like link pages for websites, in the hope of building their link popularity and PageRank. These link farms are sometimes considered a spamdexing strategy. Search engines countered the link farm movement by identifying specific attributes associated with link farm pages and filtering those pages from indexing and search results.
In some cases, entire domains were removed from the search engine indexes in order to prevent them from influencing search results. A private blog network (PBN) is a group of blogs that are owned by the same entity. A blog network can either be a group of loosely connected blogs, or a group of blogs that are owned by the same company. The purpose of such a network is usually to promote other sites outside the network and therefore increase the search engine rankings or advertising revenue generated from online advertising on the sites the PBN links to. In September 2014, Google targeted private blog networks (PBNs) with manual action ranking penalties.[4] This served to dissuade search engine optimizers and online marketers from using PBNs to increase their online rankings. The "thin content" warnings are closely tied to Panda, which focuses on thin content and on-page quality. PBNs have a history of being targeted by Google and therefore may not be the safest option. Because Google actively looks for blog networks, the blogs in a PBN are often not linked together; interlinking them could help Google discover the network, and a single exposed blog could reveal the whole network through its outbound links. A blog network may also refer to a central website, such as WordPress, where a user creates an account and is then able to use their own blog. The created blog forms part of a network because it uses either a subdomain or a subfolder of the main domain, although in all other ways it can be entirely autonomous. This is also known as a hosted blog platform and usually uses the free WordPress Multisite software. Hosted blog networks are also known as Web 2.0 networks, since they became more popular with the rise of the second phase of web development.
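The graph-theoretic description above (a link farm as a clique, and its effect on a link-weighting scheme such as PageRank) can be made concrete with a small sketch using the third-party networkx library; the site names and link structure are invented for the example.

# Toy illustration: a link farm as a clique, and its effect on PageRank.
# Site names and link structure are invented for the example.
import networkx as nx

g = nx.DiGraph()

# A few "normal" pages with sparse, one-way links.
g.add_edges_from([("news", "blog"), ("blog", "shop"), ("news", "shop")])

# A link farm: every farm page links to every other one (a clique),
# and all of them point at the page being promoted.
farm = [f"farm{i}" for i in range(5)]
g.add_edges_from((a, b) for a in farm for b in farm if a != b)
g.add_edges_from((a, "promoted") for a in farm)

scores = nx.pagerank(g)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node:10s} {score:.3f}")

# The mutually linking farm pages and the promoted page accumulate a
# disproportionate share of PageRank relative to the sparsely linked
# normal pages, which is why search engines penalize such structures.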
https://en.wikipedia.org/wiki/Link_farm
Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites.[1] Web scraping software may directly access the World Wide Web using the Hypertext Transfer Protocol or a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using a bot or web crawler. It is a form of copying in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis. Scraping a web page involves fetching it and then extracting data from it. Fetching is the downloading of a page (which a browser does when a user views a page). Therefore, web crawling is a main component of web scraping: it fetches pages for later processing. Once a page has been fetched, extraction can take place. The content of a page may be parsed, searched, and reformatted, and its data copied into a spreadsheet or loaded into a database. Web scrapers typically take something out of a page, to make use of it for another purpose somewhere else. An example would be finding and copying names and telephone numbers, companies and their URLs, or e-mail addresses to a list (contact scraping). As well as contact scraping, web scraping is used as a component of applications used for web indexing, web mining and data mining, online price change monitoring and price comparison, product review scraping (to watch the competition), gathering real estate listings, weather data monitoring, website change detection, research, tracking online presence and reputation, web mashup, and web data integration. Web pages are built using text-based mark-up languages (HTML and XHTML), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human end-users and not for ease of automated use. As a result, specialized tools and software have been developed to facilitate the scraping of web pages. Web scraping applications include market research, price comparison, content monitoring, and more. Businesses rely on web scraping services to efficiently gather and utilize such data. Newer forms of web scraping involve monitoring data feeds from web servers. For example, JSON is commonly used as a transport mechanism between the client and the web server. Some websites use methods to prevent web scraping, such as detecting and disallowing bots from crawling (viewing) their pages. In response, web scraping systems use techniques involving DOM parsing, computer vision and natural language processing to simulate human browsing to enable gathering web page content for offline parsing. After the birth of the World Wide Web in 1989, the first web robot,[2] World Wide Web Wanderer, was created in June 1993 and was intended only to measure the size of the web. In December 1993, the first crawler-based web search engine, JumpStation, was launched. As there were fewer websites available on the web, search engines at that time used to rely on human administrators to collect and format links. In comparison, JumpStation was the first WWW search engine to rely on a web robot. In 2000, the first Web API and API crawler were created. An API (Application Programming Interface) is an interface that makes it much easier to develop a program by providing the building blocks. In 2000, Salesforce and eBay launched their own APIs, with which programmers could access and download some of the data available to the public. Since then, many websites offer web APIs for people to access their public databases.
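The fetch-then-extract flow described above can be shown in a few lines. The sketch below assumes the third-party requests and BeautifulSoup libraries and an invented example URL; it downloads a page, parses it, and copies the extracted fields into a CSV file of the kind a spreadsheet or database would later ingest.

# Minimal fetch-and-extract sketch: download a page, pull out link text
# and URLs, and write them to a CSV file for later analysis.
import csv
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/listings"  # hypothetical target page

response = requests.get(URL, timeout=10, headers={"User-Agent": "example-scraper/0.1"})
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
rows = [(a.get_text(strip=True), a["href"]) for a in soup.find_all("a", href=True)]

with open("listings.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh)
    writer.writerow(["text", "url"])
    writer.writerows(rows)

print(f"Saved {len(rows)} rows to listings.csv")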
Web scraping is the process of automatically mining data or collecting information from the World Wide Web. It is a field with active developments sharing a common goal with the semantic web vision, an ambitious initiative that still requires breakthroughs in text processing, semantic understanding, artificial intelligence and human–computer interaction. The simplest form of web scraping is manually copying and pasting data from a web page into a text file or spreadsheet. Sometimes even the best web-scraping technology cannot replace a human's manual examination and copy-and-paste, and sometimes this may be the only workable solution when the websites for scraping explicitly set up barriers to prevent machine automation. A simple yet powerful approach to extract information from web pages can be based on the UNIX grep command or the regular-expression-matching facilities of programming languages (for instance Perl or Python). Static and dynamic web pages can be retrieved by posting HTTP requests to the remote web server using socket programming. Many websites have large collections of pages generated dynamically from an underlying structured source like a database. Data of the same category are typically encoded into similar pages by a common script or template. In data mining, a program that detects such templates in a particular information source, extracts its content, and translates it into a relational form is called a wrapper. Wrapper generation algorithms assume that input pages of a wrapper induction system conform to a common template and that they can be easily identified in terms of a common URL scheme.[3] Moreover, some semi-structured data query languages, such as XQuery and HTQL, can be used to parse HTML pages and to retrieve and transform page content. By using a program such as Selenium or Playwright, developers can control a web browser such as Chrome or Firefox to load, navigate, and retrieve data from websites. This method can be especially useful for scraping data from dynamic sites, since a web browser will fully load each page. Once an entire page is loaded, the DOM can be accessed and parsed using an expression language such as XPath. There are several companies that have developed vertical-specific harvesting platforms. These platforms create and monitor a multitude of "bots" for specific verticals with no "man in the loop" (no direct human involvement), and no work related to a specific target site. The preparation involves establishing the knowledge base for the entire vertical, after which the platform creates the bots automatically. The platform's robustness is measured by the quality of the information it retrieves (usually the number of fields) and its scalability (how quickly it can scale up to hundreds or thousands of sites). This scalability is mostly used to target the long tail of sites that common aggregators find complicated or too labor-intensive to harvest content from. The pages being scraped may embrace metadata or semantic markups and annotations, which can be used to locate specific data snippets. If the annotations are embedded in the pages, as Microformat does, this technique can be viewed as a special case of DOM parsing. In another case, the annotations, organized into a semantic layer,[4] are stored and managed separately from the web pages, so the scrapers can retrieve data schema and instructions from this layer before scraping the pages.
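Two of the lighter-weight techniques mentioned above, regular-expression matching over the raw text and XPath queries over the parsed document tree, look roughly as follows in Python. The HTML fragment, the e-mail pattern, and the XPath expression are illustrative; lxml is a third-party dependency.

# Sketch of two extraction styles mentioned above: regular expressions
# over raw text, and XPath over a parsed document tree.
import re
from lxml import html

page = """
<html><body>
  <div class="card"><span class="name">Ada Lovelace</span>
       <a href="mailto:ada@example.org">contact</a></div>
  <div class="card"><span class="name">Alan Turing</span>
       <a href="mailto:alan@example.org">contact</a></div>
</body></html>
"""

# 1. grep-style: a regular expression pulls out anything that looks like an e-mail.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", page)

# 2. DOM-style: an XPath query selects structured fields from the parsed tree.
tree = html.fromstring(page)
names = tree.xpath('//div[@class="card"]/span[@class="name"]/text()')

print(list(zip(names, emails)))
# [('Ada Lovelace', 'ada@example.org'), ('Alan Turing', 'alan@example.org')]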
There are efforts using machine learning and computer vision that attempt to identify and extract information from web pages by interpreting pages visually as a human being might.[5] Some newer approaches use advanced AI to interpret and process web page content contextually, extracting relevant information, transforming data, and customizing outputs based on the content's structure and meaning. This method enables more intelligent and flexible data extraction, accommodating complex and dynamic web content. The legality of web scraping varies across the world. In general, web scraping may be against the terms of service of some websites, but the enforceability of these terms is unclear.[6] In the United States, website owners can use three major legal claims to prevent undesired web scraping: (1) copyright infringement (compilation), (2) violation of the Computer Fraud and Abuse Act ("CFAA"), and (3) trespass to chattels.[7] However, the effectiveness of these claims relies upon meeting various criteria, and the case law is still evolving. For example, with regard to copyright, while outright duplication of original expression will in many cases be illegal, in the United States the courts ruled in Feist Publications v. Rural Telephone Service that duplication of facts is allowable. U.S. courts have acknowledged that users of "scrapers" or "robots" may be held liable for committing trespass to chattels,[8][9] which involves a computer system itself being considered personal property upon which the user of a scraper is trespassing. The best known of these cases, eBay v. Bidder's Edge, resulted in an injunction ordering Bidder's Edge to stop accessing, collecting, and indexing auctions from the eBay website. This case involved automatic placing of bids, known as auction sniping. However, in order to succeed on a claim of trespass to chattels, the plaintiff must demonstrate that the defendant intentionally and without authorization interfered with the plaintiff's possessory interest in the computer system and that the defendant's unauthorized use caused damage to the plaintiff. Not all cases of web spidering brought before the courts have been considered trespass to chattels.[10] One of the first major tests of screen scraping involved American Airlines (AA) and a firm called FareChase.[11] AA successfully obtained an injunction from a Texas trial court, stopping FareChase from selling software that enables users to compare online fares if the software also searches AA's website. The airline argued that FareChase's websearch software trespassed on AA's servers when it collected the publicly available data. FareChase filed an appeal in March 2003. By June, FareChase and AA agreed to settle and the appeal was dropped.[12] Southwest Airlines has also challenged screen-scraping practices, and has involved both FareChase and another firm, Outtask, in a legal claim. Southwest Airlines charged that the screen scraping is illegal since it is an example of "Computer Fraud and Abuse" and has led to "Damage and Loss" and "Unauthorized Access" of Southwest's site. It also constitutes "Interference with Business Relations", "Trespass", and "Harmful Access by Computer". They also claimed that screen scraping constitutes what is legally known as "Misappropriation and Unjust Enrichment", as well as being a breach of the website's user agreement.
Outtask denied all these claims, claiming that the prevailing law, in this case, should be US copyright law and that under copyright, the pieces of information being scraped would not be subject to copyright protection. Although the cases were never resolved in the Supreme Court of the United States, FareChase was eventually shuttered by parent company Yahoo!, and Outtask was purchased by travel expense company Concur.[13] In 2012, a startup called 3Taps scraped classified housing ads from Craigslist. Craigslist sent 3Taps a cease-and-desist letter and blocked their IP addresses and later sued, in Craigslist v. 3Taps. The court held that the cease-and-desist letter and IP blocking were sufficient for Craigslist to properly claim that 3Taps had violated the Computer Fraud and Abuse Act (CFAA). Although these are early scraping decisions, and the theories of liability are not uniform, it is difficult to ignore an emerging pattern that the courts are prepared to protect proprietary content on commercial sites from uses which are undesirable to the owners of such sites. However, the degree of protection for such content is not settled and will depend on the type of access made by the scraper, the amount of information accessed and copied, the degree to which the access adversely affects the site owner's system, and the types and manner of prohibitions on such conduct.[14] While the law in this area becomes more settled, entities contemplating using scraping programs to access a public website should also consider whether such action is authorized by reviewing the terms of use and other terms or notices posted on or made available through the site. In Cvent Inc. v. Eventbrite Inc. (2010), the United States District Court for the Eastern District of Virginia ruled that the terms of use should be brought to the users' attention in order for a browsewrap contract or license to be enforceable.[15] In a 2014 case filed in the United States District Court for the Eastern District of Pennsylvania,[16] e-commerce site QVC objected to the Pinterest-like shopping aggregator Resultly's scraping of QVC's site for real-time pricing data. QVC alleged that Resultly "excessively crawled" QVC's retail site (allegedly sending 200–300 search requests to QVC's website per minute, sometimes up to 36,000 requests per minute), which caused QVC's site to crash for two days, resulting in lost sales for QVC.[17] QVC's complaint alleges that the defendant disguised its web crawler to mask its source IP address and thus prevented QVC from quickly repairing the problem. This is a particularly interesting scraping case because QVC is seeking damages for the unavailability of its website, which QVC claims was caused by Resultly. On the plaintiff's website during the period of this trial, the terms-of-use link was displayed among all the links of the site, at the bottom of the page, as on most sites on the internet. This ruling contradicts the Irish ruling described below. The court also rejected the plaintiff's argument that the browse-wrap restrictions were enforceable in view of Virginia's adoption of the Uniform Computer Information Transactions Act (UCITA)—a uniform law that many believed was in favor of common browse-wrap contracting practices.[18] In Facebook, Inc. v. Power Ventures, Inc., a district court ruled in 2012 that Power Ventures could not scrape Facebook pages on behalf of a Facebook user. The case is on appeal, and the Electronic Frontier Foundation filed a brief in 2015 asking that it be overturned.[19][20] In Associated Press v.
Meltwater U.S. Holdings, Inc., a court in the US held Meltwater liable for scraping and republishing news information from the Associated Press, but a court in the United Kingdom held in favor of Meltwater. The Ninth Circuit ruled in 2019 that web scraping did not violate the CFAA in hiQ Labs v. LinkedIn. The case was appealed to the United States Supreme Court, which returned it to the Ninth Circuit to reconsider in light of the 2021 Supreme Court decision in Van Buren v. United States, which narrowed the applicability of the CFAA.[21] On this review, the Ninth Circuit upheld its prior decision.[22] The Internet Archive collects and distributes a significant number of publicly available web pages without being considered to be in violation of copyright laws.[citation needed] In February 2006, the Danish Maritime and Commercial Court (Copenhagen) ruled that systematic crawling, indexing, and deep linking by the portal site ofir.dk of the real estate site Home.dk does not conflict with Danish law or the database directive of the European Union.[23] In a February 2010 case complicated by matters of jurisdiction, Ireland's High Court delivered a verdict that illustrates the inchoate state of developing case law. In the case of Ryanair Ltd v Billigfluege.de GmbH, Ireland's High Court ruled Ryanair's "click-wrap" agreement to be legally binding. In contrast to the findings of the United States District Court for the Eastern District of Virginia and those of the Danish Maritime and Commercial Court, Justice Michael Hanna ruled that the hyperlink to Ryanair's terms and conditions was plainly visible, and that placing the onus on the user to agree to terms and conditions in order to gain access to online services is sufficient to comprise a contractual relationship.[24] The decision is under appeal in Ireland's Supreme Court.[25] On April 30, 2020, the French data protection authority (CNIL) released new guidelines on web scraping.[26] The CNIL guidelines made it clear that publicly available data is still personal data and cannot be repurposed without the knowledge of the person to whom that data belongs.[27] In Australia, the Spam Act 2003 outlaws some forms of web harvesting, although this only applies to email addresses.[28][29] Apart from a few cases dealing with IPR infringement, Indian courts have not expressly ruled on the legality of web scraping. However, since all common forms of electronic contracts are enforceable in India, violating terms of use that prohibit data scraping will be a violation of contract law. It will also violate the Information Technology Act, 2000, which penalizes unauthorized access to a computer resource or extracting data from a computer resource. The administrator of a website can use various measures to stop or slow a bot.
https://en.wikipedia.org/wiki/Web_scraping
SmartScreen (officially called Windows SmartScreen, Windows Defender SmartScreen and SmartScreen Filter in different places) is a cloud-based anti-phishing and anti-malware component included in several Microsoft products. SmartScreen as a business unit includes the intelligence platform, backend, serving frontend, UX, policy, expert graders, and closed-loop intelligence (machine learning and statistical techniques) designed to help protect Microsoft customers from safety threats like social engineering and drive-by downloads. SmartScreen was first introduced in Internet Explorer 7, then known as the Phishing Filter. Phishing Filter does not check every website visited by the user, only those that are known to be suspicious.[1] With the release of Internet Explorer 8, the Phishing Filter was renamed SmartScreen and extended to include protection from socially engineered malware. Every website and download is checked against a local list of popular legitimate websites; if the site is not listed, the entire address is sent to Microsoft for further checks.[2] If it has been labeled as an impostor or harmful, Internet Explorer 8 will show a screen prompting that the site is reported harmful and shouldn't be visited. From there the user can either visit their homepage, visit the previous site, or continue to the unsafe page.[3] If a user attempts to download a file from a location reported harmful, then the download is cancelled. The effectiveness of SmartScreen filtering has been reported to be superior to socially engineered malware protection in other browsers.[4] According to Microsoft, the SmartScreen technology used by Internet Explorer 8 was successful against phishing or other malicious sites and in blocking of socially engineered malware.[5] Beginning with Internet Explorer 8, SmartScreen can be enforced using Group Policy. In Internet Explorer 9, SmartScreen added protection against malware downloads by launching SmartScreen Application Reputation to identify both safe and malicious software. The system blocked known malware while warning the user if an executable was not yet known to be safe. The system took into account the download website's reputation based on SmartScreen's phishing filter launched in prior web browser versions Internet Explorer 7 and Internet Explorer 8.[6] Internet Explorer Mobile 10 was the first release of Internet Explorer Mobile to support the SmartScreen Filter.[7] Microsoft Edge (Legacy) was Microsoft's new browser beginning in Windows 10, built on the same Windows web platform powering Internet Explorer. Microsoft Edge was later rebuilt on Google's Chromium browser stack to go cross-platform onto macOS and down-level into Windows 8.1 and below. SmartScreen shipped with each version of Microsoft Edge, mostly with Internet Explorer parity, with progressive versions adding protection improvements targeting new consumer threat classes like tech support scams or adding new enterprise configurability features. In October 2017, criticisms regarding URL submission methods were addressed with the creation of the Report unsafe site URL submission page. Prior to 2017, Microsoft required a user to visit a potentially dangerous website to use the in-browser reporting tool, potentially exposing users to dangerous web content. In 2017, Microsoft reversed that policy by adding the URL submission page, allowing a user to submit an arbitrary URL without having to visit the website. SmartScreen Filter in Microsoft Outlook was previously bypassable due to a data gap in Internet Explorer.
Some phishing attacks use a phishing email linking to a front-end URL unknown to Microsoft; clicking this URL in the inbox opens the URL in Internet Explorer; the loaded website then, using client-side or server-side redirections, redirects the user to the malicious site.[8] In the original implementation of SmartScreen, the "Report this website" option in Internet Explorer only reported the currently open page (the final URL in the redirect chain); the original referrer URL in the phishing attack was not reported to Microsoft and remained accessible. This was mitigated beginning with some versions of Microsoft Edge Legacy by sending the full redirection chain to Microsoft for further analysis. In Microsoft Windows 8, SmartScreen added built-in operating system protection against web-delivered malware, performing reputation checks by default on any file or application downloaded from the Internet, including those downloaded from email clients like Microsoft Outlook or non-Microsoft web browsers like Google Chrome.[9][10] Windows SmartScreen functioned inline at the Windows shell directly prior to execution of any downloaded software. Whereas SmartScreen in Internet Explorer 9 warned against downloading and executing unsafe programs only in Internet Explorer, Windows SmartScreen blocked execution of unsafe programs of any Internet origin. With SmartScreen left at its default settings, administrator privilege would be required to launch and run an unsafe program. Microsoft faced concerns surrounding the privacy, legality and effectiveness of the new system, with critics suggesting that the automatic analysis of files (which involves sending a cryptographic hash of the file and the user's IP address to a server) could be used to build a database of users' downloads online, and that the use of the outdated SSL 2.0 protocol for communication could allow an attacker to eavesdrop on the data. In response, Microsoft later issued a statement noting that IP addresses were only being collected as part of the normal operation of the service and would be periodically deleted, that SmartScreen on Windows 8 would only use SSL 3.0 for security reasons, and that information gathered via SmartScreen would not be used for advertising purposes or sold to third parties.[11] Beginning in Windows 10, Microsoft placed the SmartScreen settings into the Windows Defender Security Center.[12] Further Windows 10 and Windows 11 updates have added more enterprise configurability as part of Microsoft's enterprise endpoint protection product. Outlook.com uses SmartScreen to protect users from unsolicited e-mail messages (spam/junk), fraudulent emails (phishing) and malware spread via e-mail. After its initial review of the body text, the system focuses on the hyperlinks and attachments. To filter spam, SmartScreen Filter uses machine learning from Microsoft Research, which learns from known spam threats and from user feedback when emails are marked as "Spam" by the user. Over time, these preferences help SmartScreen Filter to distinguish between the characteristics of unwanted and legitimate e-mail and can also determine the reputation of senders based on the number of their emails that have been checked. Using these algorithms and the reputation of the sender, an SCL rating (Spam Confidence Level score) is assigned to each e-mail message (the lower the score, the more desirable). A score of -1, 0, or 1 is considered not spam, and the message is delivered to the recipient's inbox. A score of 5, 6, 7, 8, or 9 is considered spam and is delivered to the recipient's Junk Folder.
Scores of 5 or 6 are considered to be suspected spam, while a score of 9 is considered certainly spam.[13] The SCL score of an email can be found in the various x-headers of the received email. SmartScreen Filter also analyses email messages for fraudulent and suspicious web links. If such suspicious characteristics are found in an email, the message is either[clarification needed] sent directly to the Spam folder or delivered with a red information bar at the top of the message which warns of the suspect properties. SmartScreen also protects against spoofed domain names (spoofing) in emails, to verify whether an email is sent by the domain it claims to be sent from. For this, it uses the Sender ID and DomainKeys Identified Mail (DKIM) technologies. SmartScreen Filter also makes email from authenticated senders easier to distinguish by placing a green-shield icon on the subject line of these emails.[14][15] In late 2010, the results of browser malware testing undertaken by NSS Labs were published.[16] The study looked at the browser's capability to prevent users from following socially engineered links of a malicious nature and downloading malicious software. It did not test the browser's ability to block malicious web pages or code. According to NSS Labs, Internet Explorer 9 blocked 99% of malware downloads compared to 90% for Internet Explorer 8, which does not have the SmartScreen Application Reputation feature, as opposed to the 13% achieved by Firefox, Chrome, and Safari, which all use a Google Safe Browsing malware filter. Opera 11 was found to block just 5% of malware.[17][18][19] SmartScreen Filter was also noted for adding legitimate sites to its blocklists almost instantaneously, as opposed to the several hours it took for blocklists to be updated on other browsers. In early 2010, similar tests had given Internet Explorer 8 an 85% passing grade, the 5% improvement being attributed to "continued investments in improved data intelligence".[20] By comparison, the same research showed that Chrome 6, Firefox 3.6 and Safari 5 scored 6%, 19% and 11%, respectively. Opera 10 scored 0%, failing to "detect any of the socially engineered malware samples".[21] In July 2010, Microsoft claimed that SmartScreen on Internet Explorer had blocked over a billion attempts to access sites containing security risks.[22] According to Microsoft, the SmartScreen Filter included in Outlook.com blocks 4.5 billion unwanted e-mails daily from reaching users. Microsoft also claims that only 3% of incoming email is junk mail, but a test by Cascade Insights says that just under half of all junk mail still arrives in the inbox of users.[23][24] In a September 2011 blog post, Microsoft stated that 1.5 billion attempted malware attacks and over 150 million attempted phishing attacks had been stopped.[25] In 2017, Microsoft addressed criticisms about the URL submission process by creating a dedicated page to report unsafe sites, rather than requiring users to visit the potentially dangerous site.[26] Over time, SmartScreen has expanded to protect against new threats like tech support scams, potentially unwanted applications (PUAs) and drive-by attacks that don't require user interaction. Manufacturers of other browsers have criticized the third-party tests which claim Internet Explorer has superior phishing and malware protection compared to that of Chrome, Firefox, or Opera.
Criticisms have focused mostly on the lack of transparency of the URLs tested and the lack of consideration of layered security additional to the browser, with Google commenting that "The report itself clearly states that it does not evaluate browser security related to vulnerabilities in plug-ins or the browsers themselves",[27] and Opera commenting that the results appeared "odd that they received no results from our data providers" and that "social malware protection is not an indicator of overall browser security".[28] SmartScreen builds reputation based on the code-signing certificates that identify the author of the software. This means that once a reputation has been built, new versions of an application can be signed with the same certificate and maintain the same reputation. However, code-signing certificates need to be renewed every two years. SmartScreen does not relate a renewed certificate to an expired one. This means that reputations need to be rebuilt every two years, with users getting frightening messages in the meantime. Extended Validation (EV) certificates seem to avoid this issue, but they are expensive and difficult to obtain for small developers.[29] SmartScreen Filter creates a problem for small software vendors when they distribute an updated version of installation or binary files over the internet.[30] Whenever an updated version is released, SmartScreen responds by stating that the file is not commonly downloaded and can therefore install harmful files on the system. This can be fixed by the author digitally signing the distributed software. Reputation is then based not only on a file's hash but on the signing certificate as well. A common distribution method for authors to bypass SmartScreen warnings is to pack their installation program (for example Setup.exe) into a ZIP archive and distribute it that way, though this can confuse novice users. Another criticism is that SmartScreen increases the cost of non-commercial and small-scale software development. Developers either have to purchase standard code-signing certificates or more expensive extended validation certificates. Extended validation certificates allow the developer to immediately establish reputation with SmartScreen,[31] but are often unaffordable for people developing software either for free or not for immediate profit. The standard code-signing certificates, however, pose a catch-22 for developers: SmartScreen warnings make people reluctant to download software, so getting downloads requires first passing SmartScreen, passing SmartScreen requires an established reputation, and establishing reputation depends on downloads.
https://en.wikipedia.org/wiki/Microsoft_SmartScreen
Microsoft Defender Antivirus (formerly Windows Defender) is an antivirus software component of Microsoft Windows. It was first released as a downloadable free anti-spyware program for Windows XP and was shipped with Windows Vista and Windows 7. It has evolved into a full antivirus program, replacing Microsoft Security Essentials in Windows 8 and later versions.[7] In March 2019, Microsoft announced Microsoft Defender ATP for Mac for business customers to protect their Mac[8] devices from attacks on a corporate network, and a year later, to expand protection for mobile devices, it announced Microsoft Defender ATP for Android[9] and iOS[10] devices, which incorporates Microsoft SmartScreen, a firewall, and malware scanning. The mobile version of Microsoft Defender also includes a feature to block access to corporate data if it detects that a malicious app is installed. As of 2021, Microsoft Defender Antivirus is part of the much larger Microsoft Defender brand, which includes several other software and service offerings. Microsoft Defender Antivirus provides several key features to protect endpoints from computer viruses. In Windows 10, Windows Defender settings are controlled in the Windows Defender Security Center. The Windows 10 Anniversary Update includes several improvements, including a new popup that announces the results of a scan.[20] In the Windows Defender options, the user can configure real-time protection options. Windows 10's Anniversary Update introduced Limited Periodic Scanning, which optionally allows Windows Defender to scan a system periodically if another antivirus app is installed.[20] It also introduced Block at First Sight, which uses machine learning to predict whether a file is malicious.[21] Integration with Internet Explorer and Microsoft Edge enables files to be scanned as they are downloaded to detect malicious software inadvertently downloaded. As of April 2018, Microsoft Defender was also available for Google Chrome via an extension[22] that works in conjunction with Google Safe Browsing, but as of late 2022, this extension is deprecated.[23] A feature released in early 2018, Windows Defender Application Guard is a feature exclusive to Microsoft Edge that allows users to sandbox their current browsing session from the system. This prevents a malicious website or malware from affecting the system and the browser. Application Guard is only available on Windows 10 Pro and Enterprise. In May 2019, Microsoft announced Application Guard for Google Chrome and Firefox. The extension, once installed, opens the current tab's web page in Microsoft Edge with Application Guard enabled. In April 2024, Microsoft announced that Microsoft Defender Application Guard would be deprecated for Edge for Business. The Chrome and Firefox extensions will not be migrating to Manifest V3 and will be deprecated after May 2024.[24] Controlled Folder Access is a feature introduced with the Windows 10 Fall Creators Update to protect a user's important files from the growing threat of ransomware. This feature was released about a year after the Petya family of ransomware first appeared. The feature notifies the user every time a program tries to access these folders, and the attempt is blocked unless the user grants access. Windows will warn the user with a User Account Control popup as a final warning if they opt to "Allow" a program to read Controlled Folders.
Introduced in Windows 10 version 1903,[25] Tamper Protection protects certain security settings, such as antivirus settings, from being disabled or changed by unauthorized programs. Windows Defender was initially based on GIANT AntiSpyware, formerly developed by GIANT Company Software, Inc.[26] The company's acquisition was announced by Microsoft on December 16, 2004.[27][28] While the original GIANT AntiSpyware officially supported older Windows versions, support for the Windows 9x line of operating systems was later dropped by Microsoft. The first beta release of Microsoft AntiSpyware, from January 6, 2005, was a repackaged version of GIANT AntiSpyware.[27] More builds were released in 2005, with the last Beta 1 refresh released on November 21, 2005. At the 2005 RSA Security conference, Bill Gates, the Chief Software Architect and co-founder of Microsoft, announced that Microsoft AntiSpyware would be made available free of charge to users with validly licensed Windows 2000, Windows XP, and Windows Server 2003 operating systems to secure their systems against the increasing malware threat.[29] On November 4, 2005, it was announced that Microsoft AntiSpyware was renamed Windows Defender.[30][31] Windows Defender (Beta 2) was released on February 13, 2006. It featured the program's new name and a redesigned user interface. The core engine was rewritten in C++, unlike the original GIANT-developed AntiSpyware, which was written in Visual Basic.[32] This improved the application's performance. Also, since Beta 2, the program has run as a Windows service, unlike earlier releases, which enables the application to protect the system even when a user is not logged on. Beta 2 also requires Windows Genuine Advantage (WGA) validation. However, Windows Defender (Beta 2) did not contain some of the tools found in Microsoft AntiSpyware (Beta 1). Microsoft removed the System Inoculation, Secure Shredder and System Explorer tools found in MSAS (Beta 1) as well as the Tracks Eraser tool, which allowed users to easily delete many different types of temporary files related to Internet Explorer 6, including HTTP cookies, web cache, and Windows Media Player playback history.[27] German and Japanese versions of Windows Defender (Beta 2) were later released by Microsoft.[33][34] On October 23, 2006, Microsoft released the final version of Windows Defender.[35] It supports Windows XP and Windows Server 2003; however, unlike the betas, it doesn't run on Windows 2000.[36] Some of the key differences from the beta version are improved detection, a redesigned user interface and delivery of definition updates via Automatic Updates.[37] Windows Defender has the ability to remove installed ActiveX software.[38] Windows Defender featured integrated support for Microsoft SpyNet, which allows users to report to Microsoft what they consider to be spyware,[39] and what applications and device drivers they allow to be installed on their systems. Windows Vista included several security functionalities related to Windows Defender, some of which were removed in subsequent versions of Windows.[40] These included security agents which monitor the computer for malicious activities. The Advanced Tools section allows users to discover potential vulnerabilities with a series of Software Explorers. They provide views of startup programs, currently running software, network-connected applications, and Winsock providers (Winsock LSPs). In each Explorer, every element is rated as either "Known", "Unknown" or "Potentially Unwanted".
The first and last categories carry a link to learn more about the particular item, and the second category invites users to submit the program to Microsoft SpyNet for analysis by community members.[41][42] The Software Explorer feature has been removed from Windows Defender in Windows 7.[43]

Windows Defender was released with Windows Vista and Windows 7, serving as their built-in anti-spyware component.[44] In Windows Vista and Windows 7, Windows Defender was superseded by Microsoft Security Essentials, an antivirus product from Microsoft which provided protection against a wider range of malware. Upon installation, Microsoft Security Essentials disabled and replaced Windows Defender.[45][46]

In Windows 8, Microsoft upgraded Windows Defender into an antivirus program very similar to Microsoft Security Essentials for Windows 7,[7] and it also uses the same anti-malware engine and virus definitions as MSE. Microsoft Security Essentials itself does not run on Windows versions beyond 7.[45] In Windows 8 or later, Microsoft Defender Antivirus is on by default; it switches itself off upon installation of a third-party anti-virus package.[47][48] Following the consumer-end launch, Windows Server 2016 was the first version of Windows Server to include Windows Defender.[49]

Until Windows 10 version 1703, Windows Defender had a dedicated GUI similar to Microsoft Security Essentials.[7] Additionally, Windows Security and Maintenance tracked the status of Windows Defender. With the first release of Windows 10, Microsoft removed the "Settings" dialog box from Windows Defender's GUI in favor of a dedicated page in the Settings app. Then, in the 1703 update, Microsoft tried to merge both Windows Defender's GUI and Windows Security and Maintenance into a unified UWP app called Windows Defender Security Center (WDSC).[50] Users could still access the original GUI by alternative methods,[51][52] until the 1803 update, which saw the UI removed altogether.[a] The Security and Maintenance control panel entry, however, is still available in Windows 11; it contains links to reliability and performance monitoring (telemetry features introduced in Windows Vista), which allow detected issues to be examined in depth, as well as to maintenance tools, File History, UAC settings, and Recovery, among others.

With the release of Windows Server 2016, Microsoft introduced a Defender module for PowerShell, which allows interacting with Windows Defender via a command-line interface (CLI).[58] Microsoft continued to decouple the management front-end from the core antivirus. In addition to WDSC and PowerShell, it is possible to manage the antivirus via Windows Admin Center, Group Policy, WMI, Microsoft Endpoint Manager, and Microsoft Intune's "tenant attach" feature.[59]

In Windows 10 version 1703, Microsoft renamed Windows Defender, calling it Windows Defender Antivirus.[60] Windows Firewall and Microsoft SmartScreen also saw their names changed to Windows Defender Firewall and Windows Defender SmartScreen.[61] Microsoft added other components under the "Windows Defender" brand name, including Windows Defender Application Guard (WDAG),[61] Windows Defender Exploit Guard (WDEG),[61] Windows Defender Application Control,[62] and Windows Defender Advanced Threat Protection (Defender ATP).[62] A year later, Microsoft began dissolving the Windows Defender brand in favor of the cloud-oriented "Microsoft Defender" brand.
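The PowerShell module mentioned above exposes the same engine state that the graphical front-ends display. As a rough sketch (assuming a Windows host where Get-MpComputerStatus ships with the Defender module; the selected fields are examples), the antivirus can be queried from a script instead of the UI:

```python
# Illustrative sketch: reading Defender engine status via the PowerShell module
# from Python. Assumes a Windows host where Get-MpComputerStatus is available.
import json
import subprocess

def defender_status() -> dict:
    """Return selected Microsoft Defender status fields as a dictionary."""
    command = (
        "Get-MpComputerStatus | "
        "Select-Object AMEngineVersion, AntivirusSignatureVersion, "
        "RealTimeProtectionEnabled, IsTamperProtected | "
        "ConvertTo-Json"
    )
    completed = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return json.loads(completed.stdout)

if __name__ == "__main__":
    for key, value in defender_status().items():
        print(f"{key}: {value}")
```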
The company removed WDSC from the brand in the 1809 update, renaming it Windows Security Center (WSC).[63] The 2004 update renamed Windows Defender Antivirus, calling it Microsoft Defender Antivirus, as Microsoft extended Defender ATP's capabilities beyond the Windows OS.[64][65]

Windows Defender Offline (formerly known as Standalone System Sweeper)[66] is a stand-alone anti-malware program that runs from bootable removable media (e.g. CD or USB flash drive), designed to scan infected systems while the Windows operating system is offline.[67] Since the Windows 10 Anniversary Update in 2016, the option to boot into Windows Defender Offline can be initiated from within Windows itself, negating the need for a separate boot disk.

Microsoft Defender for Individuals was released to the general public in June 2022 for Windows 10, Windows 11, macOS, Android, and iOS devices.[68][69] On Windows it works alongside first- and third-party antivirus solutions, such as Microsoft Defender Antivirus. Microsoft Defender for Individuals requires a Microsoft 365 personal or family license.[70] It is a stand-alone app that adds central management with visibility of family devices, as well as Identity Theft Monitoring (in supported regions[71]), to the existing anti-malware features on Windows devices. On macOS and Android, the app includes its own anti-malware protection, and on Android and iOS it also includes web protection (malicious link detection).[72] All supported platforms share a common user interface, which is also accessible from a web browser through Microsoft's My Defender portal.

On May 5, 2017, Tavis Ormandy, a vulnerability researcher from Google, discovered a security vulnerability in the JavaScript analysis module (NScript) of the Microsoft Antimalware Engine (MsMpEngine) that impacted Windows Defender, Microsoft Security Essentials and System Center Endpoint Protection. By May 8, 2017, Microsoft had released a patch to all affected systems. Ars Technica commended Microsoft for its unprecedented patching speed and said that the disaster had been averted.[73][74] During a December 2017 test of various anti-malware software carried out by AV-TEST on Windows 10, Windows Defender earned 6 out of 6 points in detection rate of various malware samples, earning its "AV-TEST Certified" seal.[75] During a February 2018 "Real-World Protection Test" performed by AV-Comparatives, Windows Defender achieved a 100% detection rate of malicious URL samples, along with 3 false positive results.[76] An AV-TEST evaluation of Windows Defender in October 2019 demonstrated that it provides excellent protection against both viruses and 0-day malware attacks.[77] On December 1, 2021, AV-TEST gave Defender a maximum protection score of 34 points after it successfully detected ten out of ten ransomware samples in a lab test.[78]

Microsoft Defender has often been subjected to criticism related to privacy concerns, performance issues, and intrusive behavior in recent versions of Microsoft Windows operating systems. Microsoft Defender features cloud file analysis and file submission under the Microsoft SpyNet membership, which eventually became the Microsoft Advanced Protection Service (MAPS); when a user opts in with a basic or advanced membership, the service collects user data and sends it to Microsoft, which raises privacy concerns among users.[79][80] The cloud integration of Microsoft Defender also raised concerns among privacy advocates.
The MsMpEngine process of Microsoft Defender in recent versions of Windows has been found to use high amounts of system resources, especially CPU, when real-time protection and scheduled scans are enabled.[81] This issue is more apparent on PCs with Intel CPUs.[82] By default, Microsoft Defender is configured to use up to 50% of the system's available CPU during scans, although this limit can be changed via the Group Policy Editor, along with running MsMpEngine as a low-priority process during real-time scans and customizing scheduled scans.[83][84] Recent Windows versions also integrate Microsoft Defender deeply with the operating system through mechanisms such as early-launch anti-malware and Tamper Protection, making it almost impossible to remove or uninstall. Although these mechanisms are useful for preventing malware from disabling or removing the antivirus itself, they also lead to frustration among users who prefer third-party alternatives.[85][86][87][88] In late July 2020, Microsoft Defender began to classify modifications of the hosts file that block Microsoft telemetry and data-collection servers as a severe security risk.[89][90]
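The 50% figure above corresponds to Defender's average CPU load factor for scans, which administrators can lower. A hedged sketch of that adjustment follows, assuming the documented ScanAvgCPULoadFactor setting of Set-MpPreference and an elevated prompt; the 20% value is only an example.

```python
# Illustrative sketch: lowering the average CPU load Defender may use while
# scanning. Assumes an elevated prompt on Windows with the Defender cmdlets.
import subprocess

def set_scan_cpu_limit(percent: int) -> None:
    """Cap the average CPU load factor used during Defender scans (5-100)."""
    if not 5 <= percent <= 100:
        raise ValueError("ScanAvgCPULoadFactor accepts values from 5 to 100")
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command",
         f"Set-MpPreference -ScanAvgCPULoadFactor {percent}"],
        check=True,
    )

if __name__ == "__main__":
    set_scan_cpu_limit(20)  # example: throttle scans to roughly 20% average CPU
```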
https://en.wikipedia.org/wiki/Microsoft_Defender
A scraper site is a website that copies content from other websites using web scraping. The content is then mirrored with the goal of creating revenue, usually through advertising and sometimes by selling user data. Scraper sites come in various forms: some provide little if any material or information and are intended to obtain user information such as e-mail addresses to be targeted for spam e-mail. Price aggregation and shopping sites access multiple listings of a product and allow a user to rapidly compare the prices.

Search engines such as Google could be considered a type of scraper site. Search engines gather content from other websites, save it in their own databases, index it and present the scraped content to the search engines' own users. The majority of content scraped by search engines is copyrighted.[1] The scraping technique has been used on various dating websites as well. These sites often combine their scraping activities with facial recognition.[2][3][4][5][6][7][8][9][10][11][excessive citations] Scraping is also used on general image analysis (recognition) websites, as well as websites specifically made to identify images of crops with pests and diseases.[12][13]

Some scraper sites are created to make money by using advertising programs. In such cases, they are called Made for AdSense sites, or MFA. This derogatory term refers to websites that have no redeeming value except to lure visitors to the website for the sole purpose of clicking on advertisements.[14] Made for AdSense sites are considered search engine spam that dilutes the search results with less-than-satisfactory results. The scraped content is redundant compared to the content the search engine would show under normal circumstances, had no MFA website been found in the listings. Some scraper sites link to other sites in order to improve their search engine ranking through a private blog network. Prior to Google's update to its search algorithm known as Panda, a type of scraper site known as an auto blog was quite common among black-hat marketers who used a method known as spamdexing.

Scraper sites may violate copyright law. Even taking content from an open content site can be a copyright violation, if done in a way which does not respect the license. For instance, the GNU Free Documentation License (GFDL)[15] and Creative Commons ShareAlike (CC-BY-SA)[16] licenses used on Wikipedia[17] require that a republisher of Wikipedia inform its readers of the conditions of these licenses, and give credit to the original author.

Depending upon the objective of a scraper, the methods by which websites are targeted differ. For example, sites with large amounts of content such as airlines, consumer electronics, department stores, etc. might be routinely targeted by their competition just to stay abreast of pricing information. Another type of scraper will pull snippets and text from websites that rank high for keywords they have targeted. This way they hope to rank highly in the search engine results pages (SERPs), piggybacking on the original page's page rank. RSS feeds are vulnerable to scrapers. Other scraper sites consist of advertisements and paragraphs of words randomly selected from a dictionary. Often a visitor will click on a pay-per-click advertisement on such a site because it is the only comprehensible text on the page. Operators of these scraper sites gain financially from these clicks.
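Mechanically, a scraper needs little more than an HTTP fetch and an HTML parse. The sketch below is purely illustrative (the URL is a placeholder, and real-world use must respect copyright, licensing, and robots.txt); it pulls a page with Python's standard library and extracts the title and paragraph text that a scraper site would then republish.

```python
# Illustrative sketch of the core scraping step: fetch a page and pull out its
# title and paragraph text. Standard library only; the URL is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen

class ParagraphScraper(HTMLParser):
    """Collects the <title> text and the text of each <p> element."""

    VOID_TAGS = {"br", "img", "hr", "meta", "link", "input"}

    def __init__(self):
        super().__init__()
        self._stack = []          # currently open tags, innermost last
        self.title = ""
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag not in self.VOID_TAGS:
            self._stack.append(tag)
        if tag == "p":
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        # Pop until the matching opening tag is removed (tolerates sloppy HTML).
        while self._stack:
            if self._stack.pop() == tag:
                break

    def handle_data(self, data):
        if "title" in self._stack:
            self.title += data
        elif "p" in self._stack and self.paragraphs:
            self.paragraphs[-1] += data

if __name__ == "__main__":
    html = urlopen("https://example.com/").read().decode("utf-8", "replace")
    scraper = ParagraphScraper()
    scraper.feed(html)
    print("Title:", scraper.title.strip())
    for text in scraper.paragraphs:
        print("-", " ".join(text.split()))
```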
Advertising networks claim to be constantly working to remove these sites from their programs, although these networks benefit directly from the clicks generated at this kind of site. From the advertisers' point of view, the networks don't seem to be making enough effort to stop this problem. Scrapers tend to be associated with link farms and are sometimes perceived as the same thing, when multiple scrapers link to the same target site. A frequently targeted victim site might be accused of link-farm participation, due to the artificial pattern of incoming links pointing to it from multiple scraper sites.

Some programmers who create scraper sites may purchase a recently expired domain name to reuse its SEO power in Google. Whole businesses exist that focus on understanding all[citation needed] expired domains and utilizing them for their historical ranking ability. Doing so allows SEOs to utilize the already-established backlinks to the domain name. Some spammers may try to match the topic of the expired site or copy the existing content from the Internet Archive to maintain the authenticity of the site so that the backlinks don't drop. For example, an expired website about a photographer may be re-registered to create a site about photography tips, or the domain name may be used in a private blog network to power the spammer's own photography site. Services at some expired domain name registration agents provide both the facility to find these expired domains and to gather the HTML that the domain name used to have on its web site.[citation needed]
https://en.wikipedia.org/wiki/Scraper_site
Trademark stuffing is a form of keyword stuffing, an unethical search engine optimization method used by webmasters and Internet marketers in order to manipulate search engine ranking results served by websites such as Google, Yahoo! and Microsoft Bing. A key characteristic of trademark stuffing is the intent of the infringer to confuse search engines and Internet users into thinking a website or web page is owned or otherwise authorized by the trademark owner. Trademark stuffing does not include using trademarks on third-party website pages within the boundaries of fair use. When used effectively, trademark stuffing enables infringing websites to capture search engine traffic that may have otherwise been received by an authorized website or trademark owner. Trademark stuffing is most often used to manipulate organic search engine optimization; however, it can also be used with other forms of search engine marketing, such as within the text of pay-per-click advertisements. Using another's trademark or service mark as a keyword without permission is ill-advised; it could constitute trademark infringement and result in other claims.[citation needed]

Trademark stuffing may be accomplished by placing trademarked text in several areas of a web page. By extension, another form of keyword stuffing involves placing trademarks within the anchor text of third-party websites, then pointing the website address within the linked text back to an infringing website. An anchor link signals to Internet users that the link points to a website address relating to the trademark. Additionally, search engines are widely known to use anchor-text linking data within their search engine ranking algorithms. Thus, trademark-stuffed anchor links signal relationship information to the search engines, thereby increasing the chance that an infringing website could achieve higher organic search rankings for a trademark keyword phrase.
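As a rough illustration of why stuffed pages are easy to flag programmatically, the sketch below counts how often a term appears in a page's title, meta description, and anchor text. It is a simplistic heuristic rather than any search engine's actual algorithm, and the "Acme" mark and sample markup are hypothetical.

```python
# Illustrative heuristic: count occurrences of a trademark term in the places
# where stuffing typically happens (title, meta description, anchor text).
# Not a real search-engine signal; the sample page and mark are made up.
import re
from html.parser import HTMLParser

class TrademarkCounter(HTMLParser):
    def __init__(self, mark: str):
        super().__init__()
        self.mark = mark.lower()
        self.counts = {"title": 0, "meta": 0, "anchor": 0}
        self._context = None  # "title" or "anchor" while inside those tags

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._context = "title"
        elif tag == "a":
            self._context = "anchor"
        elif tag == "meta" and attrs.get("name") == "description":
            self.counts["meta"] += self._count(attrs.get("content", ""))

    def handle_endtag(self, tag):
        if tag in ("title", "a"):
            self._context = None

    def handle_data(self, data):
        if self._context:
            self.counts[self._context] += self._count(data)

    def _count(self, text: str) -> int:
        return len(re.findall(re.escape(self.mark), text.lower()))

if __name__ == "__main__":
    page = '<title>Acme Acme Acme deals</title><a href="/">Acme</a>'
    counter = TrademarkCounter("acme")
    counter.feed(page)
    print(counter.counts)  # {'title': 3, 'meta': 0, 'anchor': 1}
```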
https://en.wikipedia.org/wiki/Trademark_stuffing
White fonting is the practice of inserting hidden keywords into the body of an electronic document in order to influence the actions of a search program reviewing that document. The name white fonting comes from the practice of adding keywords to a webpage using a white font on a white background, in an effort to hide the additional keywords from sight. White fonting can be used as a search engine optimization (SEO) technique in webpages, by inserting the same keyword multiple times to create a higher ranking in search engine results where keyword density is a factor.[1] White fonting can also add unrelated, but often searched, keywords to a webpage in order to gain additional traffic. White fonting is considered to be an unethical search engine optimization technique, because pages containing white fonting are not indexed and ranked on the basis of page content viewable by human readers. Sites perceived to contain hidden text and links that are deceptive in intent may be removed from search engine listings such as Google, and will not appear in search results pages.[2] White fonting can also be used to boost the visibility of other electronic documents evaluated by search engines or automated tracking software, such as resumes or reports.
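Both halves of the technique are straightforward to express in code, which is partly why crawlers can detect and penalize it. The sketch below is a simplified heuristic, not any search engine's actual check: it computes keyword density for a block of text and flags inline styles whose font color suggests white-on-white text; the sample strings are made up.

```python
# Simplified heuristics illustrating the two ideas above: keyword density and
# white-on-white inline styles. Not any search engine's actual detection logic.
import re

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of words in `text` that match `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

WHITE = {"#fff", "#ffffff", "white", "rgb(255,255,255)"}

def looks_white_fonted(html: str) -> bool:
    """Flag inline styles that set the font color to white (a crude signal)."""
    for style in re.findall(r'style\s*=\s*"([^"]*)"', html, flags=re.I):
        match = re.search(r"(?:^|;)\s*color\s*:\s*([^;]+)", style, flags=re.I)
        if match and match.group(1).strip().lower().replace(" ", "") in WHITE:
            return True
    return False

if __name__ == "__main__":
    body = "cheap tickets cheap tickets cheap tickets buy now"
    print(round(keyword_density(body, "cheap"), 2))                        # 0.38
    print(looks_white_fonted('<p style="color: #fff">cheap tickets</p>'))  # True
```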
https://en.wikipedia.org/wiki/White_fonting
Open-source governance(also known asopen governanceandopen politics) is apolitical philosophywhich advocates the application of the philosophies of theopen-sourceandopen-contentmovements todemocraticprinciples to enable any interested citizen to add to the creation of policy, as with awikidocument. Legislation is democratically opened to the general citizenry, employing theircollective wisdomto benefit the decision-making process and improve democracy.[1] Theories on how to constrain, limit or enable this participation vary. Accordingly, there is no one dominant theory of how to go about authoring legislation with this approach. There are a wide array of projects and movements which are working on building open-source governance systems.[2] Manyleft-libertarianandradical centristorganizations around the globe have begun advocating open-source governance and its related political ideas as a reformist alternative to current governance systems. Often, these groups have their origins indecentralizedstructures such as the Internet and place particular importance on the need for anonymity to protect an individual's right to free speech in democratic systems. Opinions vary, however, not least because the principles behind open-source government are still very loosely defined.[3] In practice, several applications have evolved and been used by democratic institutions:[4] Some models are significantly more sophisticated than a plain wiki, incorporating semantic tags, levels of control or scoring to mediate disputes – however this always risks empowering a clique of moderators more than would be the case given their trust position within the democratic entity – a parallel to the common wiki problem ofofficial vandalismby persons entrusted with power by owners or publishers (so-called "sysop vandalism" or "administrative censorship"). Some advocates of these approaches, by analogy to software code, argue[citation needed]for a "central codebase" in the form of a set of policies that are maintained in a public registry and that areinfinitely reproducible. "Distributions" of this policy-base are released (periodically or dynamically) for use in localities, which can apply "patches" to customize them for their own use. Localities are also able to cease subscribing to the central policy-base and "fork" it or adopt someone else's policy-base. In effect, the government stems from emergent cooperation and self-correction among members of a community. As the policies are put into practice in a number of localities, problems and issues are identified and solved, and where appropriate communicated back to the core. These goals for instance were cited often during theGreen Party of Canada's experiments with open-political-platform development.[citation needed]As one of over a hundred nationalGreen partyentities worldwide and the ability to co-ordinate policy among provincial and municipal equivalents within Canada, it was in a good position to maintain just such a central repository of policy, despite being legally separate from those other entities. Open-source governance differs from previous open-government initiatives in its broader emphasis on collaborative processes. After all... ...simply publishing snapshots of government information is not enough to make it open. The "Imagine Halifax" (IH) project was designed to create a citizens' forum for elections inHalifax, Nova Scotiain fall 2004. 
Founded by Angela Bischoff, the widow ofTooker Gomberg, a notable advocate of combiningdirect actionwith open politics methods, IH brought a few dozen activists together to compile a platform (using live meetings and email and seedwiki followup). When it became clear that candidates could not all endorse all elements of the platform, it was then turned into questions for candidates in the election. The best ideas from candidates were combined with the best from activists – the final scores reflected a combination of convergence and originality. In contrast to most such questionnaires, it was easier for candidates to excel by contributing original thought than by simply agreeing. One high scorer,Andrew Younger, had not been involved with the project originally but was elected and appeared on TV with project leaderMartin Willison. The project had not only changed its original goal from a partisan platform to a citizen questionnaire, but it had recruited a previously uninvolved candidate to its cause during the election. A key output of this effort was aglossaryof about 100 keywords relevant to municipal laws. The 2004–05Green Party of Canada Living Platformwas a much more planned and designed effort at open politics. As it prepared itself for an electoral breakthrough in the2004 federal election, theGreen Party of Canadabegan to compile citizen, member and expert opinions in preparation of its platform. During the election, it gathered input even fromInternet trollsincluding supporters of other parties, with no major problems:anonymitywas respected and, if they were within the terms of use, comments remained intact. Despite, or perhaps because of, its early success, it was derailed byJim Harris, the party's leader, when he discovered that it was a threat to his status as aparty boss.[citation needed]The Living Platform split off as another service entirely out of GPC control and eventually evolved into OpenPolitics.ca[11]and a service to promote wiki usage among citizens and political groups. TheLiberal Party of Canadaalso attempted a deep policy renewal effort in conjunction with its leadership race in 2006.[12][13]While candidates in that race, notablyCarolyn Bennett,Stéphane DionandMichael Ignatieff, all made efforts to facilitate web-threaded policy-driven conversations between supporters, all failed to create lateral relationships and thus also failed to contribute much to the policy renewal effort. Numerous very different projects related to open-source governance collaborate under the umbrella of theMetagovernmentproject;[14]Metagovernment uses the term "collaborative governance",[15]most of which are building platforms of open-source governance. Future Melbourne is a wiki-based collaborative environment for developing Melbourne's 10-year plan. During public consultation periods, it enables the public to edit the plan with the same editing rights as city personnel and councilors.[21] The New Zealand Police Act Review was a wiki used to solicit public commentary during the public consultation period of the acts review.[22] Atlinux.conf.auon January 14, 2015, inAuckland,New Zealand,AustralianAudrey Lobo-PulopresentedEvaluating Government Policies Using Open Source Models, agitating for government policy related knowledge, data and analysis to be freely available to everyone to use, modify and distribute without restriction — "a parallel universe where public policy development and analysis is a dynamic, collaborative effort between government and its citizens". 
Audrey reported that the motivation for her work was personal uncertainty about the nature and accuracy of models, estimates and assumptions used to prepare policies released with the 2014 Australian Federal Government Budget, and whether and to what extent their real world impact is assessed following implementation.[23]A white paper on "Evaluating Government Policies using Open Source Models" was released on September 10, 2015.[24] The open-politics theory, a narrow application of open-source governance, combines aspects of thefree softwareandopen-contentmovements, promotingdecision-makingmethods claimed to be more open, less antagonistic, and more capable of determining what is in thepublic interestwith respect topublic policyissues. It takes special care for instance to deal with equity differences, geographic constraints, defamation versus free political speech, accountability to persons affected by decisions, and the actual standing law and institutions of a jurisdiction. There is also far more focus on compiling actual positions taken by real entities than developing theoretical "best" answers or "solutions". One example,DiscourseDB, simply lists articles pro and con a given position without organizing their argument or evidence in any way. While some interpret it as an example of "open-source politics", open politics is not a top–down theory but a set of best practices fromcitizen journalism,participatory democracyanddeliberative democracy, informed bye-democracyandnetrootsexperiments, applying argumentation framework for issue-based argument as they evolved in academic and military use through the 1980s to present. Some variants of it draw on the theory ofscientific methodandmarket methods, includingprediction marketsandanticipatory democracy. Its advocates often engage in legal lobbying and advocacy to directly change laws in the way of the broader application of the technology, e.g. opposingpolitical libelcases in Canada, fightinglibel chillgenerally, and calling for clarification of privacy and human rights law especially as they relate to citizen journalism. They are less focused on tools although thesemantic mediawikiandtikiwikiplatforms seem to be generally favored above all others.
https://en.wikipedia.org/wiki/Open_politics
Symbolic interactionism is a sociological theory that develops from practical considerations and alludes to humans' particular use of shared language to create common symbols and meanings, for use in both intra- and interpersonal communication.[1] It is particularly important in microsociology and social psychology. It is derived from the American philosophy of pragmatism and particularly from the work of George Herbert Mead, as a pragmatic method to interpret social interactions.[2][3]

According to Mead, symbolic interactionism is "The ongoing use of language and gestures in anticipation of how the other will react; a conversation".[4] Symbolic interactionism is "a framework for building theory that sees society as the product of everyday interactions of individuals". In other words, it is a frame of reference to better understand how individuals interact with one another to create symbolic worlds, and in return, how these worlds shape individual behaviors.[5] It is a framework that helps understand how society is preserved and created through repeated interactions between individuals. The interpretation process that occurs between interactions helps create and recreate meaning. It is the shared understanding and interpretations of meaning that affect the interaction between individuals. Individuals act on the premise of a shared understanding of meaning within their social context. Thus, interaction and behavior are framed through the shared meaning that objects and concepts have attached to them. Symbolic interactionism refers to both verbal and nonverbal communication. From this view, people live in both natural and symbolic environments.

Symbolic interaction was conceived by George Herbert Mead and Charles Horton Cooley. Mead was born in South Hadley, Massachusetts, in 1863. He was influenced by many theoretical and philosophical traditions, such as utilitarianism, evolutionism, pragmatism, behaviorism, and the looking-glass self. Mead was a social constructionist.[6] Mead argued that people's selves are social products, but that these selves are also purposive and creative, and believed that the true test of any theory was that it was "useful in solving complex social problems".[7] Mead's influence was said to be so powerful that sociologists regard him as the one "true founder" of the symbolic interactionism tradition. Although Mead taught in a philosophy department, he is best known by sociologists as the teacher who trained a generation of the best minds in their field. Strangely, he never set forth his wide-ranging ideas in a book or systematic treatise.
Mead began his teaching career at the University of Michigan and then moved to the University of Chicago.[8] After his death in 1931, his students pulled together class notes and conversations with their mentor and published Mind, Self and Society in his name.[7] It is a common misconception that John Dewey was the leader of this sociological theory; according to The Handbook of Symbolic Interactionism, Mead was undoubtedly the individual who "transformed the inner structure of the theory, moving it to a higher level of theoretical complexity."[9]

Mind, Self and Society is the book published by Mead's students based on his lectures and teaching, and the title of the book highlights the core concepts of symbolic interactionism. Mind refers to an individual's ability to use symbols to create meanings for the world around the individual – individuals use language and thought to accomplish this goal. Self refers to an individual's ability to reflect on the way that the individual is perceived by others. Finally, society, according to Mead, is where all of these interactions take place. A general reading of Mead's writings portrays how external social structures, classes, power, and abuse affect the development of self and identity for groups historically denied the ability to define themselves.[10]

Herbert Blumer, a student and interpreter of Mead, coined the term and put forward an influential summary: people act a certain way towards things based on the meaning those things already have, and these meanings are derived from social interaction and modified through interpretation.[11] Blumer was a social constructionist, and was influenced by John Dewey; as such, this theory is very phenomenologically based. Given that Blumer was the first to use symbolic interaction as a term, he is known as the founder of symbolic interaction.[12] He believed that the "most human and humanizing activity that people engage in is talking to each other."[7] According to Blumer, human groups are created by people and it is only the actions between them that define a society.[13] He argued that with interaction and through interaction individuals are able to "produce common symbols by approving, arranging, and redefining them."[13] Having said that, interaction is shaped by a mutual exchange of interpretation, the ground of socialization.[2]

While their work was less influential in the discipline, Charles Horton Cooley and William Isaac Thomas are considered to be influential representatives of the theory. Cooley's work on connecting society and the individual influenced Mead's further work. Cooley felt society and the individual could only be understood in relationship to each other. Cooley's concept of the "looking-glass self" influenced Mead's theory of self and symbolic interactionism.[14] W. I. Thomas is also known as a representative of symbolic interactionism. His main work was a theory of human motivation addressing interactions between individuals and the "social sources of behaviors."[15] He attempted to "explain the proper methodological approach to social life; develop a theory of human motivation; spell out a working conception of adult socialization; and provide the correct perspective on deviance and disorganization."[16] A majority of scholars agree with Thomas.[17]

Two other theorists who have influenced symbolic interaction theory are Yrjö Engeström and David Middleton.
Engeström and Middleton explained the usefulness of symbolic interactionism in the communication field in a variety of work settings, including "courts of law, health care, computer software design, scientific laboratory, telephone sales, control, repair, and maintenance of advanced manufacturing systems".[18] Other scholars credited for their contribution to the theory are Thomas, Park, James, Horton Cooley, Znaniecki, Baldwin, Redfield, and Wirth.[13] Unlike other social sciences, symbolic interactionism places great emphasis on the idea of action instead of culture, class and power. Alongside behaviorism, Darwinism, and pragmatism, as well as the work of Max Weber, action theory contributed significantly to the formation of symbolic interactionism as a theoretical perspective in communication studies.[2]

Most symbolic interactionists believe a physical reality does indeed exist by an individual's social definitions, and that social definitions do develop in part or in relation to something "real". People thus do not respond to this reality directly, but rather to the social understanding of reality; i.e., they respond to this reality indirectly through a kind of filter which consists of individuals' different perspectives. This means that humans exist not in the physical space composed of realities, but in the "world" composed only of "objects". According to Erving Goffman, what motivates humans to position their body parts in certain manners and the desire to capture and examine those moments are two of the elements that constitute the composition of a social reality made up of various individuals' perceptions, and it is crucial to examine how these two elements occur. This has led symbolic interactionists to shift more emphasis onto the realistic aspect of their empirical observation and theorizing.[19]

Three assumptions frame symbolic interactionism.[5] Having defined some of the underlying assumptions of symbolic interactionism, it is necessary to address the premises that each assumption supports. According to Blumer (1969), there are three premises that can be derived from the assumptions above.[13] 1) "Humans act toward things on the basis of the meanings they ascribe to those things."[13] The first premise includes everything that a human being may note in their world, including physical objects, actions and concepts. Essentially, individuals behave towards objects and others based on the personal meanings that the individual has already given these items. Meaning is not automatically associated; it is ascribed through interactions.[20] Blumer was trying to put emphasis on the meaning behind individual behaviors, specifically speaking, psychological and sociological explanations for those actions and behaviors.[21] 2) "The meaning of such things is derived from, or arises out of, the social interaction that one has with others and the society."[13] The second premise explains that the meaning of such things is derived from, or arises out of, the social interaction that one has with other humans. Blumer, following Mead, claimed people interact with each other by interpreting or defining each other's actions instead of merely reacting to each other's actions. Their "response" is not made directly to the actions of one another but instead is based on the meaning which they attach to such actions. Thus, human interaction is mediated by the use of symbols and signification, by interpretation, or by ascertaining the meaning of one another's actions.
Mead believed not in stimulus-response, but in stimulus-interpretation-response. The meaning we assign to our communication is what is important.[20]Meaningis either taken for granted and pushed aside as an unimportant element which need not to be investigated, or it is regarded as a mere neutral link or one of thecausal chainsbetween the causes or factors responsible for human behavior and this behavior as the product of such factors. 3)"The Meanings are handled in, and modified through, an interpretative process used by the person in dealing with the things he/she [sic] encounters." Symbolic interactionists describe thinking as aninner conversation.[7]Mead called this inner dialogueminding, which is the delay in one's thought process that happens when one thinks about what they will do next.[20]These meanings are handled in, and modified through, an interpretive process[a][22]used by the person in dealing with the things that they encounter. We naturally talk to ourselves in order to sort out the meaning of a difficult situation. But first, we need language. Before we can think, we must be able to interact symbolically.[7]The emphasis on symbols, negotiated meaning, and social construction of society brought attention to therolespeople play.Role-takingis a key mechanism that permits people to see another person's perspective to understand what an action might mean to another person. Role-taking is a part of our lives at an early age, for instance, playing house and pretending to be someone else. There is an improvisational quality to roles; however,actorsoften take on a script that they follow. Because of the uncertainty of roles in social contexts, the burden of role-making is on the person in the situation. In this sense, we are proactive participants in our environment.[23] Some theorists have proposed an additional fourth premise: 4)"It's the inherent human desire to acquire potential psychological rewards from interacting with others that motivates us to establish realities filtered through social interactions" Some symbolic interactionists point out the ineradicable nexus of the desire for potential psychological reward between individuals and their respective socially constructed realities that is commonly known as the "society", these experts have confirmed that one crucial premise for analyzing and dissecting symbolic interactionism is the psychological reward that drives individuals to connect with others and create meanings via social interactions.[24]We as humans instinctively discern individuals whom we want to be associated with, before we initiate an interaction with them, we would experience an internal emotional rush biologically that encourages us to initiate the interaction, thus beginning to form various socially constructed realities that enables symbolic interactionism to examine, namely it's our desires for emotional rewards that makes the theory of symbolic interactionism possible and viable.[25] The majority of interactionist research usesqualitative researchmethods, likeparticipant observation, to study aspects ofsocial interaction, and/or individuals' selves. Participant observation allows researchers to access symbols and meanings, as inHoward Becker'sArt WorldsandArlie Hochschild'sThe Managed Heart.[26]They argue that close contact and immersion in the everyday activities of the participants is necessary for understanding the meaning of actions, defining situations and the process that actors construct the situation through their interaction. 
Because of this close contact, interactionists cannot remain completely free of value commitments. In most cases, they make use of their values in choosing what to study; however, they seek to be objective in how they conduct the research. Therefore, the symbolic-interaction approach is a micro-level orientation focusing on human interaction in specific situations.[27]

There are five central ideas to symbolic interactionism according to Joel M. Charon (2004).[28] From Blumer's conceptual perspective, these can be put into three core propositions: that people act toward things, including each other, on the basis of the meanings they have for them; that these meanings are derived through social interaction with others; and that these meanings are managed and transformed through an interpretive process that people use to make sense of and handle the objects that constitute their social worlds. This perspective can also be described as three core principles – meaning, language, and thinking – in which social constructs are formed. The principle of meaning is the center of human behavior. Language provides meaning by providing means to symbols. These symbols differentiate social relations of humans from that of animals. By giving meaning to symbols, humans can express these things with language. In turn, symbols form the basis of communication. Symbols become imperative components for the formation of any kind of communicative act. Thinking then changes the interpretation of individuals as it pertains to symbols.[29]

Keeping Blumer's earlier work in mind, David A. Snow, professor of sociology at the University of California, Irvine, suggests four broader and even more basic orienting principles: human agency, interactive determination, symbolization, and emergence. Snow uses these four principles as the thematic bases for identifying and discussing contributions to the study of social movements.
Symbolic interaction can be used to explain one's identity in terms of roles being "ideas and principles on 'what to do' in a given situation," as noted by Hewitt.[31][32]Symbolic Interactionist identity presents in 3 categories- situated, personal and social. Situated identity refers to the ability to view themselves as others do. This is often a snapshot view in that it is short, but can be very impactful. From this experience, one wishes to differentiate themselves from others and the personal identity comes to exist. This view is when one wishes to make themselves known for who they truly are, not the view of others. From the personal identity taking place, comes the social identity where connections and likeness are made with individuals sharing similar identities or identity traits.[31] This viewpoint of symbolic interactionism can be applied to the use of social networking sites and how one's identity is presented on those sites. With social networking sites, one can boast (or post) their identity through their newsfeed. The personal identity presents itself in the need for individuals to post milestones that one has achieved, in efforts to differentiate themselves. The social identity presents itself when individuals "tag" others in their posts, pictures, etc.[31]Situated identities may be present in the need to defend something on social media or arguments that occur in comments, where one feels it necessary to "prove" themselves. Coming from the viewpoint that we learn, or at least desire, how to expect other people's reactions/responses to things, Bruce Link and his colleagues studied how expectations of the reactions of others can affect the mental illness stigma. The participants of the study were individuals with psychosis who answered questions relating to discrimination, stigma, and rejection. The goal of the study was to determine whether others' expectations affect the participants' internalized stigmas, anticipated rejection, concerns with staying in, and other. Results found that high levels of internalized stigma were only present in the minority, however, anticipation of rejection, stigma consciousness, perceived devaluation discrimination and concerns with staying in were found to be more prevalent in participants. These perceptions were correlated with the outcomes of withdrawal, self-esteem and isolation from relatives. The study found that anticipation of rejection played the largest role in internalized stigmas.[33] Applications on social roles Symbolic interactionism can be used to dissect the concept of social role[34]and further study relations between friends.[35]A social role begins to exist when an individual initiates interaction with other people who would comprise a social circle in which the initiator is the central terminal, the accumulated proceedings of duties and rights performed by the central person and all the other participants in this social circle reinforces this dynamic circle. 
Apart from the central role, such social groups are constituted of participants who benefit from the central figure and those who are eligible and capable of helping the central role to achieve its envisioned objectives.[34]The roles in the social role dynamic aren't preordained although the prevalent culture of a specific society usually possesses a default structure to most social roles.[34]Despite the fact that the predominant culture of a certain society typically exerts large amount of influence on the instinctive formation of the structures in social groups, the roles in social groups are eventually formed based on the interactions occurred between the central figure and other potential participants in this role.[34]For illustration, if a central person of the social role is a police officer, then this social role can contain victims, teammates, operators, the dispatch, potential suspects, lieutenant. Social roles could be formulated by happenstances, but it can't escape the inexorable reconfiguration of multilateral exchanges of each role's obligations in a social role. (Lopata 1964). Through this lens, the examination of various social roles becomes more receptive and accessible, which also possesses the same effects on examining friendship and other vocations.[35] Symbolic interactionists are often criticized for being overly impressionistic in their research methods and somewhat unsystematic in their theories. It is argued that the theory is not one theory, but rather, theframeworkfor many different theories. Additionally, some theorists have a problem with symbolic interaction theory due to its lack oftestability. These objections, combined with the fairly narrow focus of interactionist research on small-group interactions and other social psychological issues, have relegated the interactionist camp to a minority position among sociologists (albeit a fairly substantial minority). Much of this criticism arose during the 1970s in the U.S. whenquantitativeapproaches to sociology were dominant, and perhaps the best known of these is byAlvin Gouldner.[36] Some critiques of symbolic interactionism are based on the assumption that it is atheory, and the critiques apply the criteria for a "good" theory to something that does not claim to be a theory. Some critics find the symbolic interactionist framework too broad and general when they are seeking specific theories. Symbolic interactionism is a theoreticalframeworkrather than a theory[b][37]and can be assessed on the basis of effective conceptualizations. The theoretical framework, as with any theoretical framework, is vague when it comes to analyzingempirical dataor predicting outcomes in social life. As a framework rather than a theory, many scholars find it difficult to use. Interactionism being a framework rather than a theory makes it impossible to test interactionism in the manner that a specific theoretical claim about the relationship between specific variables in a given context allows. Unlike the symbolic interactionist framework, the many theories derived from symbolic interactionism, such asrole theoryand the versions of identity theory developed bySheldon Stryker,[38][39]as well as Peter Burke and colleagues,[40][41]clearly define concepts and the relationships between them in a given context, thus allowing for the opportunity to develop and test hypotheses. 
Further, especially among Blumerian processual interactionists, a great number of very useful conceptualizations have been developed and applied in a very wide range of social contexts, types of populations, types of behaviors, and cultures and subcultures. Symbolic interactionism is often related and connected with social structure. This concept suggests that symbolic interactionism is a construction of people's social reality.[38]It also implies that from a realistic point of view, the interpretations that are being made will not make much difference. When the reality of a situation is defined, the situation becomes a meaningful reality. This includes methodological criticisms, and critical sociological issues. A number of symbolic interactionists have addressed these topics, the best known being Stryker's structural symbolic interactionism[38][42]and the formulations of interactionism heavily influenced by this approach (sometimes referred to as the "Indiana School" of symbolic interactionism), including the works of key scholars in sociology and psychology using different methods and theories applying astructuralversion of interactionism that are represented in a 2003 collection edited by Burkeet al.[43]Another well-known structural variation of symbolic interactionism that applies quantitative methods is Manford H. Kuhn's formulation which is often referred to in sociological literature as the "Iowa School."Negotiated order theoryalso applies a structural approach.[44] Language is viewed as the source of all meaning.[23]Blumer illuminates several key features about social interactionism. Most people interpret things based on assignment and purpose. The interaction occurs once the meaning of something has become identified. This concept of meaning is what starts to construct the framework of social reality. By aligning social reality, Blumer suggests that language is the meaning of interaction. Communication, especially in the form of symbolic interactionism is connected with language. Language initiates all forms of communication, verbal and non-verbal. Blumer defines this source of meaning as a connection that arises out of the social interaction that people have with each other.[45] According tosocial theoristPatricia Burbank, the concepts of synergistic and diverging properties are what shape the viewpoints of humans as social beings. These two concepts are different in a sense because of their views of human freedom and their level of focus. According to Burbank, actions are based on the effects of situations that occur during the process of social interaction. Another important factor in meaningful situations is the environment in which the social interaction occurs. The environment influences interaction, which leads to a reference group and connects with perspective, and then concludes to a definition of the situation. This illustrates the proper steps to define a situation. An approval of the action occurs once the situation is defined. An interpretation is then made upon that action, which may ultimately influence the perspective, action, and definition. Stryker emphasizes that the sociology world at large is the most viable and vibrant intellectual framework.[38]By being made up of our thoughts and self-belief, the social interactionism theory is the purpose of all human interaction, and is what causes society to exist. 
This fuels criticisms of the symbolic interactionist framework for failing to account for social structure, as well as criticisms that interactionist theories cannot be assessed via quantitative methods, and cannot be falsified or tested empirically. The framework is important for symbolic interaction theory because, in order for social structure to form, certain bonds of communication need to be established to create the interaction. Many of the symbolic interactionist framework's basic tenets can be found in a very wide range of sociological and psychological work, without being explicitly cited as interactionist, making the influence of symbolic interactionism difficult to recognize given this general acceptance of its assumptions as "common knowledge."[46]

Another problem with this model is two-fold, in that it 1) does not take into account human emotions very much, implying that symbolic interaction is not completely psychological; and 2) is interested in social structure only to a limited extent, implying that symbolic interaction is not completely sociological. These shortcomings frame meaning as something that occurs naturally within an interaction under a certain condition, rather than taking into account the basic social context in which interaction is positioned. From this view, meaning has no source, and the approach does not perceive a social reality beyond what humans create with their own interpretations.[47]

Another criticism of symbolic interactionism is directed more at the scholars themselves. They are noted for not taking interest in the history of this sociological approach, which can produce shallow understanding and can make the subject "hard to teach" given the lack of organization in relating its teachings to other theories or studies.[48]

Some symbolic interactionists, like Goffman, pointed out obvious defects of the pioneering Mead concept upon which contemporary symbolic interactionism is built; this has influenced modern symbolic interactionism to be more conducive to conceiving "social-psychological concerns rather than sociological concerns".[24] For instance, when analyzing symbolic interactionism, the participants' emotional fluctuations that are inexorably entailed are often ignored because they are too sophisticated and volatile to measure.[24] When participants are selected to take part in activities that are not part of their normal daily routine, this will inevitably disrupt them psychologically, causing spontaneous thoughts to flow that are very likely to make the participants veer away from their normal behaviors. These psychological changes can result in emotional fluctuations that manifest themselves in the participants' reactions, thereby manufacturing biases that compound the concerns mentioned above. This critique unveiled the lack of scrutiny of participants' internal subjective processing of their environment, which initiates the reasoning and negotiating faculties, a lack that contemporary symbolic interactionism also reflects.[24] Hence, prejudice is not a purely psychological phenomenon; instead it can be interpreted from a symbolic interactionist standpoint,[24] taking individuals' construction of social reality into account.

The Society for the Study of Symbolic Interaction (SSSI)[49] is an international professional organization for scholars who are interested in the study of symbolic interaction.
SSSI holds a conference in conjunction with the meeting of theAmerican Sociological Association(ASA) andthe Society for the Study of Social Problems. This conference typically occurs in August and sponsors the SSSI holds the Couch-Stone Symposium each spring. The Society provides travel scholarships for student members interested in attending the annual conference.[50]At the annual conference, the SSSI sponsors yearly awards in different categories of symbolic interaction. Additionally, some of the awards are open to student members of the society. The Ellis-Bochner Autoethnography and Personal Narrative Research Award is given annually by the SSSI affiliate of theNational Communication Associationfor the best article, essay, or book chapter inautoethnographyandpersonal narrativeresearch. The award is named after renowned autoethnographersCarolyn EllisandArt Bochner. The society also sponsors a quarterly journal,Symbolic Interaction,[51]and releases a newsletter,SSSI Notes.[50] SSSI also has a European branch,[52]which organizes an annual conference that integrates European symbolic interactionists.
https://en.wikipedia.org/wiki/Symbolic_interactionism
Acommunity of practice(CoP) is a group of people who "share a concern or a passion for something they do and learn how to do it better as they interact regularly".[1]The concept was first proposed bycognitive anthropologistJean Laveand educational theoristEtienne Wengerin their 1991 bookSituated Learning.[2]Wenger significantly expanded on this concept in his 1998 bookCommunities of Practice.[3]A CoP can form around members' shared interests or goals. Through being part of a CoP, the members learn from each other and develop their identities.[2] CoP members can engage with one another in physical settings (for example, in a lunchroom at work, an office, a factory floor), but CoP members are not necessarily co-located.[3]They can form avirtual community of practice(VCoP)[4]where the CoP is primarily located in anonline communitysuch as a discussion board, newsgroup, or on asocial networking service. Communities of practice have existed for as long as people have been learning and sharing their experiences through storytelling. The idea is rooted inAmerican pragmatism, especiallyC. S. Peirce's concept of the "community of inquiry",[5]as well asJohn Dewey's principle of learning through occupation.[6] ForEtienne Wenger,learningin a CoP is central toidentitybecause learning is conceptualized as social participation – the individual actively participates in the practices of social communities, thus developing their role and identity within the community.[7]In this context, a community of practice is a group of individuals with shared interests or goals who develop both their individual and shared identities through community participation. The structural characteristics of a community of practice are redefined to a domain of knowledge, a notion of community and a practice: In many organizations, communities of practice are integral to the organization structure.[8]These communities take on knowledge stewarding tasks that were previously covered by more formal organizational structures. Both formal and informal communities of practice may be established in an organization. There is a great deal of interest within organizations to encourage, support, and sponsor communities of practice to benefit from shared knowledge that may lead to higher productivity.[citation needed]Communities of practice are viewed by many within business settings as a means to explicatetacit knowledge, or the "know-how" that is difficult to articulate. An important aspect and function of communities of practice is increasing organization performance. Lesser and Storck identify four areas of organizational performance that can be affected by communities of practice:[9] Collaboration constellations differ in various ways. Some are under organizational control (e.g., teams), whereas others, like CoPs, are self-organized or under the control of individuals. Researchers have studied how collaboration types vary in their temporal or boundary focus, and the basis of their members' relationships.[10] Aproject teamdiffers from a community of practice in several ways.[citation needed] By contrast, In some cases, it may be useful to differentiate CoP from acommunity of inquiry(CoI). Social capitalis a multi-dimensional concept with public and private facets.[11]That is, social capital may provide value to both the individual and the group as a whole. 
As participants build informal connections in their community of practice, they also share their expertise, learn from others, participate in the group, and demonstrate their expertise - all of which can be viewed as acquiringsocial capital. Wasko and Faraj describe three kinds of knowledge: knowledge as object, knowledge embedded within individuals, and knowledge embedded in a community.[12]CoPs are associated with finding, sharing, transferring, and archiving knowledge, as well as making explicit "expertise", or articulatingtacit knowledge. Tacit knowledge is considered to be valuable context-based experiences that cannot easily be captured, codified and stored.[13][14] Becauseknowledge managementis seen "primarily as a problem of capturing, organizing, and retrieving information, evoking notions of databases, documents, query languages, and data mining",[15]the community of practice is viewed as a potential rich source for helpful information in the form of actual experiences; in other words,best practices. Thus, for knowledge management, if community practices within a CoP can be codified and archived, they provide rich content and contexts that can be accessed for future use. Members of CoPs are thought to be more efficient and effective conduits of information and experiences. While organizations tend to provide manuals to meet employee training needs, CoPs help foster the process of storytelling among colleagues, which helps them strengthen their skills.[16] Studies have shown that workers spend a third of their time looking for information and are five times more likely to turn to a co-worker than an explicit source of information (book, manual, or database).[13]Conferring with CoP members saves time because community members havetacit knowledge, which can be difficult to store and retrieve for people unfamiliar with the CoP. For example, someone might share one of their best ways of responding to a situation based on their experiences, which may enable another person to avoid mistakes, thus shortening the learning curve. In a CoP, members can openly discuss and brainstorm about a project, which can lead to new capabilities. The type of information that is shared and learned in a CoP is boundless.[17]Paul Duguid distinguishestacit knowledge(knowinghow) fromexplicit knowledge(knowingwhat).[18]Performing optimally in a job requires the application of theory into practice. CoPs help individuals bridge the gap between knowingwhatand knowinghow.[18] As members of CoPs, individuals report increased communication with people (professionals, interested parties, hobbyists), less dependence on geographic proximity, and the generation of new knowledge.[19]This assumes that interactions occur naturally when individuals come together. Social and interpersonal factors play a role in the interaction, and research shows that some individuals share or withhold knowledge and expertise from others because their knowledge relates to their professional identities, position, and interpersonal relationships.[20][21] Communicating with others in a CoP involves creatingsocial presence. Chih-Hsiung defines social presence as "the degree of salience of another person in an interaction and the consequent salience of an interpersonal relationship".[22]Social presence may affect the likelihood for an individual to participate in a CoP (especially in online environments andvirtual communities of practice).[22]CoP management often encounter barriers that inhibit knowledge exchange between members. 
Reasons for these barriers may include egos and personal attacks, large overwhelming CoPs, and time constraints.[12]

Motivation to share knowledge is critical to success in communities of practice. Studies show that members are motivated to become active participants in a CoP when they view knowledge as a public good, a moral obligation and/or a community interest.[19]CoP members can also be motivated to participate through tangible returns (promotion, raises or bonuses), intangible returns (reputation, self-esteem) and community interest (exchange of practice-related knowledge, interaction).

Collaboration is essential to ensure that communities of practice thrive. In a study on knowledge exchange in a business network, Sveiby and Simons found that more seasoned colleagues tend to foster a more collaborative culture.[23]Additionally, they noted that a higher educational level predicted a tendency to favor collaboration. What makes a community of practice succeed depends on the purpose and objective of the community as well as the interests and resources of community members. Wenger identified seven actions to cultivate communities of practice.

Since the publication of "Situated Learning: Legitimate Peripheral Participation",[2]communities of practice have been the focus of attention, first as a theory of learning and later as part of the field of knowledge management.[24]Andrew Cox offers a more critical view of the different ways in which the term communities of practice can be interpreted.[25]

To understand how learning occurs outside the classroom, Lave and Wenger studied how newcomers or novices become established community members within an apprenticeship.[2]Lave and Wenger first used the term communities of practice to describe learning through practice and participation, which they described assituated learning. The process by which a community member becomes part of a community occurs throughlegitimate peripheral participation. Legitimation and participation define ways of belonging to a community, whereas peripherality and participation are concerned with location and identity in the social world.[2]

Lave and Wenger's research examined how a community and its members learn within apprenticeships. When newcomers join an established community, they initially observe and perform simple tasks in basic roles while they learn community norms and practices. For example, an apprentice electrician might watch and learn through observation before doing any electrical work, but would eventually take on more complicated electrical tasks. Lave and Wenger described this socialization process as legitimate peripheral participation. Lave and Wenger referred to a "community of practice" as a group that shares a common interest and desire to learn from and contribute to the community.[2]

In his later work, Wenger shifted his focus from legitimate peripheral participation toward tensions that emerge fromdualities.[3]He identifies four dualities that exist in communities of practice: participation-reification, designed-emergent, identification-negotiability and local-global. The participation-reification duality has been a particular focus in the field ofknowledge management.
Wenger describes three dimensions of practice that support community cohesion:mutual engagement, negotiation of a joint enterprise and shared repertoire.[3]

The communities Lave and Wenger studied were naturally forming as practitioners of craft and skill-based activities met to share experiences and insights.[2]Lave and Wenger observed situated learning within a community of practice among Yucatánmidwives, Liberian tailors, navy quartermasters and meat cutters,[2]and insurance claims processors.[3]Other fields have used the concept of CoPs in education,[26]sociolinguistics, material anthropology,medical education,second language acquisition,[27]Parliamentary Budget Offices,[28]health care and business sectors,[29]research data,[30][31]and child mental health practice (AMBIT).

A famous example of a community of practice within an organization is theXeroxcustomer service representatives who repaired machines.[32]The Xerox reps began exchanging repair tips and tricks in informal meetings over breakfast or lunch. Eventually, Xerox saw the value of these interactions and created the Eureka project, which allowed these interactions to be shared across its global network of representatives. The Eureka database is estimated to have saved the corporation $100 million. Large virtual CoPs also exist online.
https://en.wikipedia.org/wiki/Community_of_practice
Acadastreorcadaster(/kəˈdæstər/kə-DAS-tər) is a comprehensive recording of thereal estateorreal property'smetes-and-boundsof a country.[1][2]Often it is represented graphically in acadastral map. In most countries, legal systems have developed around the original administrative systems and use the cadastre to define the dimensions and location of land parcels described in legal documentation.Aland parcelorcadastral parcelis defined as "a continuous area, or more appropriately volume, that is identified by a unique set of homogeneous property rights".[3] Cadastral surveys document theboundariesof land ownership, by the production of documents, diagrams, sketches, plans (platsin the US), charts, and maps. They were originally used to ensure reliable facts for land valuation and taxation. An example from earlyEnglandis theDomesday Bookin 1086.Napoleonestablished a comprehensive cadastral system for France that is regarded as theforerunnerof most modern versions. Cadastral survey information is often a base element inGeographic Information Systems(GIS) orLand Information Systems(LIS) used to assess and manage land and built infrastructure. Such systems are also employed on a variety of other tasks, for example, to track long-term changes over time for geological or ecological studies, where land tenure is a significant part of the scenario. The cadastre is a fundamental source of data in disputes andlawsuitsbetween landowners.Land registrationand cadastre are both types of land recording and complement each other.[2] By clearly assigning property rights and demarcating land, cadasters have been attributed with strengthening state fiscal capacity and economic growth.[4] The wordcadastrecame intoEnglishthrough French from theGreekkatástikhon(κατάστιχον), a list or register, fromkatà stíkhon(κατὰ στίχον)—literally, "(organised) line by line".[5] A cadastre commonly includes details of theownership, thetenure, the precise location, the dimensions (and area), the cultivations if rural, and thevalueof individual parcels of land. Cadastres are used by many nations around the world, some in conjunction with other records, such as a title register.[1] TheInternational Federation of Surveyorsdefines cadastre as follows:[6] A Cadastre is normally a parcel-based, and up-to-date land information system containing a record of interests in land (e.g. rights, restrictions and responsibilities). It usually includes a geometric description of land parcels linked to other records describing the nature of the interests, the ownership or control of those interests, and often the value of the parcel and its improvements. Some of the earliest cadastres were ordered byRoman Emperorsto recover state owned lands that had been appropriated by private individuals, and thereby recover income from such holdings. One such cadastre was done in AD 77 in Campania, a surviving stone marker of the survey reads "The EmperorVespasian, in the eighth year of his tribunician power, so as to restore the state lands which the EmperorAugustushad given to the soldiers of Legion II Gallica, but which for some years had been occupied by private individuals, ordered a survey map to be set up with a record on each 'century' of the annual rental".[7][8]In this way Vespasian was able to reimpose taxation formerly uncollected on these lands.[7] With the fall of Rome, the use of cadastral maps effectively discontinued. Medieval practice used written descriptions of the extent of land rather than using more precise surveys. 
Only in the sixteenth and early seventeenth centuries did the use of cadastral maps resume, beginning in the Netherlands. With the emergence ofcapitalisminRenaissanceEurope, the need for cadastral maps reemerged as a tool to determine and express control of land as a means of production. This took place first privately in land disputes and later spread to governmental practice as a means of more precise tax assessment.[7]

Acadastral mapis amapthat shows theboundariesand ownership of land parcels. Some cadastral maps show additional details, such as survey district names, unique identifying numbers for parcels, certificate of title numbers, positions of existing structures, section or lot numbers and their respective areas, adjoining and adjacent street names, selected boundary dimensions and references to prior maps. James C. Scott, inSeeing Like a State, argues that all maps, but particularly cadastral maps, are designed to make local situations legible to an outsider, and in doing so, enable states to collect data on their subjects. He sees the origins of this inEarly Modern Europe, wheretaxationbecame more complex. Cadastral maps, he argues, are always a great simplification, but they in themselves help change reality.[10]

Cadastral documentation comprises the documentary materials submitted to cadastre or land administration offices for renewal of cadastral recordings. Cadastral documentation is kept in paper and/or electronic form.[11]Jurisdiction statutes and further provisions specify the content and form of the documentation,[12]as well as the person(s) authorized to prepare and sign the documentation, including concerned parties (owner, etc.), licensedsurveyorsand legal advisors. The office concerned reviews the submitted information; if the documentation does not comply with stated provisions, the office may set a deadline for the applicant to submit complete documentation.[13][14]

The concept of cadastral documentation emerged late in the English language, as the institution of cadastre developed outside English-speaking countries. In a Danish textbook, one out of fifteen chapters regards the form and content of documents concerning subdivision and other land matters.[15]Early textbooks of international scope focused on recordings in terms ofland registrationand technical aspects ofcadastral survey, yet note that 'cadastral surveying has been carried out within a tight framework of legislation'.[16][3]With a view to assessing transaction costs, a European project, Modelling real property transactions (2001–2005), charted procedures for the transfer of ownership and other rights in land and buildings.[17]Cadastral documentation is described, e.g. for Finland, as follows: '8. Surveyor draws up cadastral map and cadastral documents … 10. Surveyor sends cadastral documents to cadastral authority.'[18]In Australia, similar activities are referred to as 'lodgement of plans of subdivision at land titles offices'.[19]

The term cadastre management has been used by the software industry since at least 2005.[20][21]It mainly refers to the use of technology for management of cadastre and land information ingeographic information systems,spatial data infrastructuresandsoftware architecture, rather than to general management issues of cadastral and other land information agencies.[16]

In 1836,Colonel Robert Dawsonof theRoyal Engineersproposed that a cadastre be implemented in light of his experiences while on secondment to theTithe Commission.
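The parcel-centred record keeping described above (a geometric description of each parcel linked to records of interests, ownership and value) can be illustrated with a minimal, hedged sketch; the Parcel class and its field names below are illustrative assumptions rather than any standard cadastral schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Parcel:
    """Illustrative cadastral parcel record, loosely following the FIG description
    quoted above; the field names are assumptions, not a standard schema."""
    parcel_id: str                       # unique cadastral identifier
    owner: str                           # holder of the recorded interest
    tenure: str                          # e.g. "freehold", "leasehold"
    boundary: List[Tuple[float, float]]  # polygon vertices (x, y) in a projected CRS, metres
    assessed_value: float                # value recorded for taxation
    rights_restrictions: List[str] = field(default_factory=list)  # e.g. easements, mortgages

    def area_m2(self) -> float:
        """Planar (shoelace) area of the boundary polygon, in square metres."""
        pts = self.boundary
        s = 0.0
        for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0


p = Parcel("LOT-14", "A. Example", "freehold",
           [(0.0, 0.0), (40.0, 0.0), (40.0, 25.0), (0.0, 25.0)],
           assessed_value=125000.0, rights_restrictions=["drainage easement"])
print(p.parcel_id, p.area_m2())  # LOT-14 1000.0
```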
In theUnited States, cadastral survey within theBureau of Land Management(BLM) maintains records of allpublic lands. Such surveys often require detailed investigation of the history of land use, legal accounts, and other documents. ThePublic Lands Survey Systemis a cadastral survey of the United States originating in legislation from 1785, afterinternational recognitionof the United States. TheDominion Land Surveyis a similar cadastral survey conducted in Western Canada, begun in 1871 after the creation of the Dominion of Canada in 1867. Both cadastral surveys are made relative toprincipal meridiansandbaselines. These cadastral surveys divided the surveyed areas intotownships. Some much earlier surveys in Ohio created 25-square-mile townships when the design of the system was being explored. Later, the design became square land areas of approximately 36 square miles (six miles by six miles). These townships are divided intosections, each approximately one mile square. Unlike in Europe, this cadastral survey largely preceded settlement and as a result greatly influenced settlement patterns. Properties are generally rectangular, boundary lines often run on cardinal bearings, and parcel dimensions are often in fractions or multiples ofchains.Land descriptionsin Western North America are principally based on these land surveys.

Extensions of the conventional cadastre concept include the3D cadastre, considering the vertical domain;[22]and themultipurpose cadastre, considering non-parcel data.[23]

According to theUN Economic Commission for Europe, a "Marine Cadastredescribes the location and spatial extent of rights, restrictions and responsibilities in the marine environment".[24]Marine cadastres apply the same governance principles to the water.[25]They help further conservation and sustainability efforts.[25]This is especially a concern in Europe's large aquatic market.[26][24]In Australia, they are used by many parties to plan around legal, technical, and institutional considerations.[27]A related concept is that ofmarinespatial data infrastructures.[28]
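Because the survey geometry just described is so regular (an idealised township of six miles by six miles containing 36 sections of roughly one square mile each), rough acreages can be computed directly. The sketch below assumes that idealised layout and the conventional aliquot-part notation; real townships contain survey corrections, so the figures are approximations.

```python
# Idealised Public Land Survey System arithmetic: a township is nominally
# 6 mi x 6 mi = 36 sections, each section nominally 1 mi x 1 mi = 640 acres.
# Real surveys include corrections, so these figures are approximations.
ACRES_PER_SQ_MILE = 640

def township_area_acres(side_miles: float = 6.0) -> float:
    """Nominal acreage of a square township with the given side length."""
    return side_miles * side_miles * ACRES_PER_SQ_MILE

def aliquot_acres(section_acres: float, aliquot: str) -> float:
    """Nominal acreage of an aliquot part such as 'NW 1/4' or 'N 1/2 SE 1/4':
    each quarter divides the area by four, each half divides it by two."""
    acres = section_acres
    for token in aliquot.split():
        if token == "1/4":
            acres /= 4
        elif token == "1/2":
            acres /= 2
    return acres

print(township_area_acres())               # 23040.0 acres in an ideal township
print(aliquot_acres(640, "NW 1/4"))        # 160.0 acres
print(aliquot_acres(640, "N 1/2 SE 1/4"))  # 80.0 acres
```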
https://en.wikipedia.org/wiki/Cadastral_map
This is a list of notable commercialsatellite navigationsoftware (also known asGPSsoftware) for various devices, with a specific focus onmobile phones, tablets, and tablet PCs (Android, iOS, Windows).
https://en.wikipedia.org/wiki/Comparison_of_commercial_GPS_software
https://en.wikipedia.org/wiki/Comparison_of_web_map_services
Counter-mappingis creating maps that challenge "dominant power structures, to further seemingly progressive goals".[1]Counter-mapping is used in multiple disciplines to reclaim colonized territory. Counter-maps are prolific in indigenous cultures; at the same time, "counter-mapping may reify, reinforce, and extend settler boundaries even as it seeks to challenge dominant mapping practices; and still, counter-mapping may simultaneously create conditions of possibility for decolonial ways of representing space and place."[2]The term came into use in the United States whenNancy Pelusoused it in 1995 to describe the commissioning of maps by forest users inKalimantan,Indonesia, to contest government maps of forest areas that underminedindigenousinterests.[3]The resultant counter-hegemonic maps strengthen forest users' resource claims.[3]There are numerous expressions closely related to counter-mapping: ethnocartography, alternative cartography, mapping-back, counter-hegemonic mapping,deep mapping[4]and public participatory mapping.[5]Moreover, the termscritical cartography,subversive cartography,bio-regional mapping, andremappingare sometimes used interchangeably withcounter-mapping, but in practice encompass much more.[5]

Whilst counter-mapping still refers to indigenous mapping, it is increasingly being applied to non-indigenous mapping in economically developed countries.[5]Such counter-mapping has been facilitated by processes ofneoliberalism,[6]and technologicaldemocratisation.[3]Examples of counter-mapping include attempts to demarcate and protect traditional territories, community mapping,public participation geographic information systems, and mapping by a relatively weak state to counter the resource claims of a stronger state.[7]The power of counter-maps to advocate policy change in abottom-upmanner led commentators to affirm that counter-mapping should be viewed as a tool ofgovernance.[8]

Despite its emancipatory potential, counter-mapping has not gone without criticism. There is a tendency for counter-mapping efforts to overlook the knowledge of women, minorities, and other vulnerable, disenfranchised groups.[9]From this perspective, counter-mapping is only empowering for a small subset of society, whilst others become further marginalised.[10]

Nancy Peluso, professor of forest policy, coined the term 'counter-mapping' in 1995, having examined the implementation of two forest mapping strategies inKalimantan. One set of maps belonged to state forest managers, and theinternational financial institutionsthat supported them, such as theWorld Bank.
This strategy recognised mapping as a means of protecting local claims to territory and resources against a government that had previously ignored them.[3]The other set of maps had been created by IndonesianNGOs, who often contract international experts to assist with mapping village territories.[3]The goal of the second set of maps was to co-opt the cartographic conventions of the Indonesian state, to legitimise the claims by theDayakpeople, indigenous to Kalimantan, to the rights to forest use.[5]Counter-mappers in Kalimantan have acquiredGIStechnologies, satellite technology, and computerisedresource managementtools, consequently making the Indonesian state vulnerable to counter-maps.[3]As such, counter-mapping strategies in Kalimantan have led to successful community action to block, and protest against, oil palm plantations and logging concessions imposed by the central government.[3]

It must, however, be recognised that counter-mapping projects existed long before coinage of the term.[5]Counter-maps are rooted in map art practices that date to the early 20th century; in themental mapsmovement of the 1960s; in indigenous and bioregional mapping; and in parish mapping.[11]

In 1985, the charityCommon Groundlaunched theParish Maps Project, abottom-upinitiative encouraging local people to map elements of the environment valued by their parish.[12]Since then, more than 2,500 English parishes have made such maps.[11]Parish mapping projects aim to put every local person in an 'expert' role.[13]Clifford[14]exemplifies this notion, affirming: "making a parish map is about creating a community expression of values, and about beginning to assert ideas for involvement. It is about taking the place in your own hands". The final map product is typically an artistic artefact, usually painted, and often displayed in village halls or schools.[15]By questioning the biases of cartographic conventions and challenging predominant power effects of mapping,[16]the Parish Maps Project is an early example of what Peluso[3]went on to term 'counter-mapping'.

The development of counter-mapping can be situated within theneoliberalpolitical-economic restructuring of the state.[17]Prior to the 1960s, equipping a map-making enterprise was chiefly the duty of a single agency, funded by the national government.[18]In this sense, maps have conventionally been the products of privileged knowledges.[19]However, processes ofneoliberalism, predominantly since the late 1970s, have reconfigured the state's role in the cartographic project.[6]Neoliberalism denotes an emphasis on markets and minimal states, whereby individual choice is perceived to have replaced the mass-production of commodities.[20]The fact that citizens are now performing cartographic functions that were once exclusively state-controlled can be partially explained through a shift from "roll-back neoliberalism", in which the state dismantled some of its functions, to "roll-out neoliberalism", in which new modes of operating have been constructed.[21]In brief, the state can be seen to have "hollowed out" and delegated some of its mapping power to citizens.[22]

Governmentalityrefers to a particular form of state power that is exercised when citizens self-discipline by acquiescing to state knowledge.[23]Historically,cartographyhas been a fundamental governmentality strategy,[24]a technology of power, used for surveillance and control.[25]Competing claimants and boundaries made no appearance on state-led maps.[25]This links toFoucault's[26]notion of "subjugated
knowledges" - ones that did not rise to the top, or were disqualified.[24]However, through neoliberalising processes, the state has retracted from performing some of its cartographic functions.[17]Consequently, rather than being passive recipients of top-down map distribution, people now have the opportunity to claim sovereignty over the mapping process.[27]In this new regime of neoliberal cartographic governmentality the "insurrection of subjugated knowledges" occurs,[26]as counter-mapping initiatives incorporate previously marginalised voices. In response to technological change, predominantly since the 1980s, cartography has increasingly been democratised.[28]The wide availability of high-quality location information has enabled mass-market cartography based onGlobal Positioning Systemreceivers, home computers, and the internet.[29]The fact that civilians are using technologies which were once elitist led Brosiuset al..[30]to assert that counter-mapping involves "stealing the master's tools". Nevertheless, numerous early counter-mapping projects successfully utilised manual techniques, and many still use them. For instance, in recent years, the use of simple sketch mapping approaches has been revitalised, whereby maps are made on the ground, using natural materials.[31]Similarly, the use of scale model constructions and felt boards, as means of representing cartographic claims of different groups, have become increasingly popular.[9]Consequently,Woodet al.[11]assert that counter-mappers can "make gateau out of technological crumbs". In recent years,Public Participation Geographical Information Systems(PPGIS) have attempted to take the power of the map out of the hands of the cartographic elite, putting it into the hands of the people. For instance, Kyem[32]designed a PPGIS method termed Exploratory Strategy for Collaboration, Management, Allocation, and Planning (ESCMAP). The method sought to integrate the concerns and experiences of three rural communities in the Ashanti Region of Ghana into officialforest managementpractices.[32]Kyem[32]concluded that, notwithstanding the potential of PPGIS, it is possible that the majority of the rich and powerful people in the area would object to some of the participatory uses ofGIS. For example, loggers in Ghana affirmed that the PPGIS procedures were too open and democratic.[32]Thus, despite its democratising potential, there are barriers to its implementation. More recently,Woodet al..[11]disputed the notion of PPGIS entirely, affirming that it is "scarcelyGIS, intensely hegemonic, hardly public, and anything but participatory". Governancemakes problematic state-centric notions of regulation, recognising that there has been a shift to power operating across severalspatial scales.[33]Similarly, counter-mapping complicates state distribution ofcartography, advocatingbottom-upparticipatory mapping projects (seeGIS and environmental governance). Counter-mapping initiatives, often without state assistance, attempt to exert power. As such, counter-mapping conforms toJessop's[22]notion of "governance without government". Another characteristic of governance is its "purposeful effort to steer, control or manage sectors or facets of society" towards a common goal.[34]Likewise, as maps exude power and authority,[35]they are a trusted medium[36]with the ability to 'steer' society in a particular direction. 
In brief,cartography, once the tool of kings and governments,[37]is now being used as a tool of governance - to advocate policy change from thegrassroots.[8]The environmental sphere is one context in which counter-mapping has been utilised as a governance tool.[8]

In contrast to expert knowledges, lay knowledges are increasingly valuable to decision-makers, in part due to the scientific uncertainty surrounding environmental issues.[38]Participatorycounter-mapping projects are an effective means of incorporating lay knowledges[39]into issues surroundingenvironmental governance. For instance, counter-maps depicting traditional use of areas now protected for biodiversity have been used to allow resource use, or to promote public debate about the issue, rather than forcing relocation.[8]For example, theWorld Wide Fund for Natureused the results of counter-mapping to advocate for the reclassification of several strictly protected areas into Indonesian national parks, including Kayan Mentarang and Gunung Lorentz.[8]The success of such counter-mapping efforts led Alcorn[8]to affirm thatgovernance(grassrootsmapping projects), rather than government (top-downmap distribution), offers the best hope for goodnatural resource management. In short, it can be seen that "maps are powerful political tools in ecological and governance discussions".[8]

Numerous counter-mapping types exist, for instance: protest maps, map art, counter-mapping for conservation, andPPGIS. In order to emphasise the wide scope of what has come to be known as counter-mapping, three contrasting counter-mapping examples are elucidated in this section: indigenous counter-mapping, community mapping, and state counter-mapping, respectively.

Counter-mapping has been undertaken predominantly in under-represented communities.[15]Indigenous peoplesare increasingly turning toparticipatorymapping, appropriating both the state's techniques and manner of representation.[40]Counter-mapping is a tool for indigenous identity-building,[41]and for bolstering the legitimacy of customary resource claims.[3]The success of counter-mapping in realising indigenous claims can be seen through Nietschmann's[42]assertion: "More indigenous territory has been claimed by maps than by guns. And more Indigenous territory can be reclaimed and defended by maps than by guns."

The power of indigenous counter-mapping can be exemplified through the creation ofNunavut. In 1967,Frank Arthur Calderand the Nisga'a Nation Tribal Council brought an action against theProvince of British Columbiafor a declaration thataboriginaltitle to specified land had not been lawfully extinguished. In 1973, theCanadian Supreme Courtfound that there was, in fact, an aboriginal title. The Canadian government attempted to extinguish such titles by negotiatingtreatieswith the people who had not signed them.[11]As a first step, theInuit Tapirisat of CanadastudiedInuitland occupancy in the Arctic, resulting in the publication of theInuit Land Use and Occupancy Project.[43]Diverse interests, such as those of hunters, trappers, fishermen and berry-pickers, mapped out the land they had used during their lives.[11]As Usher[44]noted: "We were no longer mapping the 'territories' of Aboriginal people based on the cumulative observations of others of where they were…but instead, mapping the Aboriginal peoples' own recollections of their own activities."
These maps played a fundamental role in the negotiations that enabled the Inuit to assert an aboriginal title to the 2 million km² in Canada, today known as Nunavut.[11]Counter-mapping is a tool by which indigenous groups can re-present the world in ways which destabilise dominant representations.[45]

Indigenous peoples have begun remapping areas of the world that were once occupied by their ancestors as an act of reclamation of land stolen from them by country governments. Indigenous peoples have begun this process all over the world, from the Indigenous peoples of the United States to Aboriginal peoples of Australia and Amazonian peoples of Brazil. The people of the lands have begun creating their own maps of the land in terms of the borders of the territory and pathways around the territory. When Native peoples first began this process it was done by hand, but presently GPS systems and other technological mapping devices are used.[46]Indigenous maps are reconceptualizing the "average" map and creatively representing space as well as the culture of those who live in the space. Indigenous people are creating maps that are for their own power and social benefit instead of the ones forced on them through outside titling and description. Indigenous peoples are also creating maps to adjust to the contamination and pollution that is present in their land. Specifically in Peru, Indigenous peoples are using mapping to identify problem areas and to create strategies to combat these risks in the future.[47]

White colonists saw land as property and a commodity to be possessed. As a result, as settlers grew in numbers and journeyed west, land was claimed and sold for profit. White colonists would “develop” the land and take ownership of it, believing the land was theirs to own. Indigenous peoples, on the other hand, saw themselves as spiritually connected with the land and believed that the land, instead, owned them. Land to Aboriginal people is a major part of their identity and spirituality. They saw the land as being sacred and needing to be protected. Indigenous peoples believe it is their responsibility to take care of the land. As Marion Kickett states in her research, “Land is very important to Aboriginal people with the common belief of 'we don't own the land, the land owns us'. Aboriginal people have always had a spiritual connection to their land...[48]” These differing perspectives on land caused many disputes during the era of Manifest Destiny, as white settler populations began to increase and move into Indigenous peoples’ territory. The Indigenous people believed they were to serve the land while white colonists believed the land should serve them. As a result, when the two sides came in contact, they disputed over how to "claim" land. The height of this conflict occurred duringManifest Destinyas the white colonist population began to grow and move westward into more parts of Indigenous lands and communities.

Maps represent and reflect how an individual or society names and projects themselves onto nature, literally and symbolically. Mapping, while seemingly objective, is political and a method of control over territory.[46]Mapmaking has thus both socio-cultural (myth-making) and technical (utilitarian and economic) functions and traditions.[49]The boundaries and territories made by White colonists and Indigenous people were vastly different, and expressed their differing views of the land and nature.
Indigenous peoples’ territories often ended at rivers, mountains, and hills or were defined by relationships between different tribes, resources, and trade networks. The relationships between tribes would determine the access to the land and its resources. Instead of being hard edges like the United States’ borders, the borders of Indigenous peoples’ lands were more fluid and would change based on marriages between chiefs and their family members, hunting clans, and heredity. In Indigenous maps the landmarks would be drawn on paper and in some cases described. Detailed knowledge of the thickness of ice, places of shelter and predators was placed in maps to inform the user what to look for when in the territory.

Maps made by White colonists in America were first based on populations, creating territories based on the edges of civilization. After the creation of the United States government, state land was designated by Congress and intended to be divided equally by latitude and longitude coordinates. The endings of railroad tracks and crossings also marked where one state ended and another began, creating a fence-like boundary. In a special case, after the acquisition of theLouisiana Purchase, the United States had to decide between the territory where slavery was legal and where it was not. TheMissouri Compromisewas birthed as a result and a boundary line was created at the latitude line of 36°30′. The states were documented by their coordinates and borders were made at the numbered locations. These numbered locations would stretch for miles and encompass all in that territory even if it belonged to Indigenous peoples. That is often how land would be stolen from Indigenous peoples. The land that would be "claimed" by the United States Government would stretch across Indigenous lands without consideration of their borders. Indigenous peoples’ lands were absorbed by the borders of America's newly mapped states, and the peoples were forced out as a result. Their livelihoods and mythology tied to the land were also destroyed. White colonists claimed the land as their own and Indigenous peoples were no longer allowed to occupy the space. Another factor was the difference in the way each group mapped the land. The United States Government would not recognize a tribe's territory without a map, and most tribes did not have maps in the style of European maps; therefore they were ignored.

Community mapping can be defined as "local mapping, produced collaboratively, by local people and often incorporating alternative local knowledge".[15]OpenStreetMap is an example of a community mapping initiative, with the potential to counter the hegemony of state-dominated map distribution.[50]

OpenStreetMap(OSM), a citizen-led spatial data collection website, was founded bySteve Coastin 2004. Data are collected from diverse public domain sources, of whichGPStracks, collected by volunteers with GPS receivers, are the most important.[15]As of 10 January 2011, there were 340,522 registered OSM users, who had uploaded 2.121 billion GPS points onto the website.[51]The process of map creation explicitly relies upon sharing and participation; consequently, every registered OSM user can edit any part of the map.
Moreover, 'map parties' – social events which aim to fill gaps in coverage – help foster a community ethos.[52]In short, thegrassrootsOSM project can be seen to represent aparadigm shiftin who creates and shares geographic information - from the state, to society.[53]However, rather than countering the state-dominated cartographic project, some commentators have affirmed that OSM merely replicates the 'old' socio-economic order.[54]For instance, Haklay[54]affirmed that OSM users in the United Kingdom tend not to map council estates; consequently, middle-class areas are disproportionately mapped. Thus, in opposition to notions that OSM is a radical cartographic counter-culture,[55]are contentions that OSM "simply recreates a mirror copy of existingtopographic mapping".[56]

What has come to be known as counter-mapping is not limited to the activities ofnon-stateactors within a particular nation-state; relatively weak states also engage in counter-mapping in an attempt to challenge other states.[57]East Timor's ongoing effort to gain control of gas and oil resources from Australia, which it perceives as its own, is a form of counter-mapping. This dispute involves a cartographic contestation of Australia's mapping of the seabed resources between the two countries.[57]As Nevins contends: whilst Australia's map is based on the status quo – a legacy of a 1989 agreement between Australia and the Indonesian occupier of East Timor at that time – East Timor's map represents an enlarged notion of what its sea boundaries should be, thereby entailing a redrawing of the map.[57]This form of counter-mapping thus represents a claim by a relatively weak state, East Timor, to territory and resources that are controlled by a stronger state, Australia.[5]However, Nevins notes that there is limited potential for realising a claim through East Timor's counter-map: counter-mapping is an effective strategy only when combined with broader legal and political strategies.[57]

Counter-mapping's claim to incorporate counter-knowledges, and thereby empower traditionally disempowered people, has not gone uncontested.[58]A range of criticisms has been leveled at counter-mapping. To summarise, whilst counter-mapping has the potential to transform map-making from "a science of princes",[63]the investment required to create a map with the ability to challenge state-produced cartography means that counter-mapping is unlikely to become a "science of the masses".[3]
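The OpenStreetMap account above notes that GPS tracks contributed by volunteers are the project's most important raw input. As a hedged illustration of what such a contribution involves (not a description of OSM's own tooling), the sketch below reads a GPX file, the common interchange format for GPS tracks, with the Python standard library and sums the track length; the file name is hypothetical.

```python
import math
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"  # GPX 1.1 XML namespace

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def track_length_km(gpx_path):
    """Sum the distance along all <trkpt> points in a GPX 1.1 file."""
    root = ET.parse(gpx_path).getroot()
    pts = [(float(p.get("lat")), float(p.get("lon")))
           for p in root.iter(GPX_NS + "trkpt")]
    return sum(haversine_km(a[0], a[1], b[0], b[1]) for a, b in zip(pts, pts[1:]))

# print(track_length_km("survey_walk.gpx"))  # hypothetical volunteer-collected track
```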
https://en.wikipedia.org/wiki/Counter-mapping
This article contains a list of notablewikis, which arewebsitesthat usewiki software, allowing users to collaboratively edit content and view old versions of the content. These websites useseveral different wiki software packages.
https://en.wikipedia.org/wiki/List_of_wikis
Neogeography(literally "new geography") is the use of geographical techniques and tools for personal and community activities or by a non-expert group of users.[1]Application domains of neogeography are typically not formal or analytical.[2]

From the point of view of human geography,neogeographycould also be defined as the use of new specific information society tools, especially the Internet, to the aims and purposes of geography as an academic discipline; in all branches of geographical thought and incorporating contributions from outside of geography performed by non-specialist users in this discipline through the use of specific geographic ICT tools. This new definition, complementing previous ones, restores to academic geography the leading role proponents claim it should play when considering a renewal of the discipline with the rigor and right granted by its centuries of existence, but also includes the interesting social phenomenon of citizen participation in geographical knowledge in its dual role: as an undoubted possibility of enrichment for geography and as a social phenomenon of geographic interest.[citation needed]

The termneogeographyhas been used since at least 1922. In the early 1950s in the U.S. it was a term used in thesociologyof production & work. The French philosopherFrançois Dagognetused it in the title of his 1977 bookUne Epistemologie de l'espace concret: Neo-geographie. The word was first used in relation to the study of online communities in the 1990s by Kenneth Dowling, the Librarian of the City and County ofSan Francisco.[3]Immediate precursor terms in the industry press were: "the geospatial Web" and "the geoaware Web" (both 2005); "Where 2.0" (2005); "a dissident cartographic aesthetic" and "mapping and counter-mapping" (2006).[3]These terms arose with the concept ofWeb 2.0, around the increased public appeal of mapping andgeospatialtechnologies that occurred with the release of "slippy map" tools such asGoogle MapsandGoogle Earth, and also with the decreased cost of geolocated mobile devices such asGPSunits. Subsequently, the use of geospatial technologies began to see increased integration with non-geographically focused applications.

The term neogeography was first defined in its contemporary sense by Randall Szott in 2006. He argued for a broad scope, to include artists,psychogeography, and more. The technically oriented aspects of the field, far more tightly defined than in Szott's definition, were outlined by Andrew Turner in hisIntroduction to Neogeography(O'Reilly, 2006). The contemporary use of the term, and the field in general, owes much of its inspiration to thelocative mediamovement that sought to expand the use of location-based technologies to encompass personal expression and society.[3]

TraditionalGeographic Information Systemshistorically have developed tools and techniques targeted towards formal applications that require precision and accuracy. By contrast, neogeography tends to apply to the areas of approachable, colloquial applications. The two realms can have overlap as the same problems are presented to different sets of users: experts and non-experts.[citation needed]

Neogeography has also been connected[4]with the increase in user-generated geographic content, closely related toVolunteered Geographic Information.[5]This can be an active collection of data such asOpenStreetMapor passive collection of user-data such as Flickr tags for folksonomic toponyms.
While the data creation process involves non-trained volunteers, research indicates that users perceive volunteered geographic information as highly valuable and trustworthy.[6][7][8]

There is currently much debate about the scope and application of neogeography in the web mapping, geography, and GIS fields. Some of this discussion considers neogeography to be the ease of use of geographic tools and interfaces while other points focus on the domains of application. Neogeography is not limited to a specific technology and is not strictly web-based, so it is not synonymous withweb mappingthough it is commonly conceived as such. A number of geographers and geoinformatics scientists (such as Mike Goodchild[9]) have expressed strong reservations about the term "neogeography". They say thatgeographyis an established scientific discipline; uses such as mashups and tags in Google Earth are not scientific works, but are better described asVolunteered Geographic Information. There are also a great many artists and inter-disciplinary practitioners involved in an engagement with new forms of mapping and locative art.[10]It is thus far wider than simplyweb mapping.
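The "slippy maps" mentioned above serve pre-rendered image tiles addressed by a zoom level and x/y indices. As a minimal illustration (using the standard Web Mercator tile-numbering scheme employed by OpenStreetMap-style tile servers, not any particular vendor's API), the sketch below converts a latitude/longitude pair to tile indices.

```python
import math

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int):
    """Convert a WGS84 coordinate to slippy-map tile indices (x, y) at a zoom level,
    using the standard Web Mercator tiling scheme (2**zoom tiles per axis)."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

# Tile indices for a point in central London at zoom level 12.
print(latlon_to_tile(51.5074, -0.1278, 12))
```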
https://en.wikipedia.org/wiki/Neogeography
Participatory 3D modelling(P3DM) is a community-basedmappingmethod which integrates local spatial knowledge with data on elevation of the land and depth of the sea to produce stand-alone, scaled andgeo-referencedrelief models. Drawing essentially on local spatial knowledge, informants depict land use and cover and other features on the model by the use of pushpins (points), yarns (lines) and paints (polygons). On completion, a scaled and geo-referenced grid is applied to facilitate data extraction or importation. Data depicted on the model are extracted, digitised and plotted. On completion of the exercise the model remains with the community.[1][2][3]

On November 5, 2007, at a ceremony which took place during the Global Forum 2007 at theFondazione Giorgio Ciniin Venice, Italy, the CTA-supported projectParticipatory 3D Modelling (P3DM) for Resource Use, Development Planning and Safeguarding Intangible Cultural Heritage in Fiji[4]was granted theWorld Summit Award2007 in the category e-culture. The product, based on the use of P3DM, has been considered one of the 40 best practice examples of quality e-Content in the world.[5]The product has been delivered by the following organizations: Fiji Locally-Managed Marine Area (FLMMA) Network,WWF South Pacific Programme, Native Lands Trust Board,Secretariat of the Pacific Community,National Trust of Fiji, Lomaiviti Provincial Council and theTechnical Centre for Agricultural and Rural Cooperation ACP-EU (CTA).
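Because the models described above are built to a stated scale and carry a geo-referenced grid, a point measured on the model can be converted to ground coordinates by a simple scaling from a known grid origin. The sketch below is a hedged illustration of that arithmetic; the 1:10,000 horizontal scale and the vertical exaggeration factor are illustrative assumptions, not fixed P3DM parameters.

```python
def model_to_ground(x_cm: float, y_cm: float, z_cm: float,
                    origin_easting: float, origin_northing: float, base_elev_m: float,
                    h_scale: float = 10000.0, v_exaggeration: float = 2.0):
    """Convert a point measured on a scaled relief model (centimetres from the
    grid origin) to projected ground coordinates and elevation in metres.
    h_scale (1:10,000) and v_exaggeration are illustrative assumptions."""
    easting = origin_easting + (x_cm / 100.0) * h_scale    # cm on the model -> m on the ground
    northing = origin_northing + (y_cm / 100.0) * h_scale
    elevation = base_elev_m + (z_cm / 100.0) * h_scale / v_exaggeration
    return easting, northing, elevation

# A pushpin placed 12.5 cm east and 8.0 cm north of the model's grid origin, 1.2 cm above the base:
print(model_to_ground(12.5, 8.0, 1.2, origin_easting=500000.0, origin_northing=9100000.0, base_elev_m=0.0))
```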
https://en.wikipedia.org/wiki/Participatory_3D_modelling
Participatory GIS(PGIS) orpublic participation geographic information system(PPGIS) is aparticipatoryapproach tospatial planningand spatial information andcommunications management.[1][2] PGIS combinesParticipatory Learning and Action(PLA) methods withgeographic information systems(GIS).[3]PGIS combines a range of geo-spatial information management tools and methods such as sketch maps,participatory 3D modelling(P3DM),aerial photography,satellite imagery, andglobal positioning system(GPS) data to represent peoples' spatial knowledge in the forms of (virtual or physical) two- or three-dimensional maps used as interactive vehicles for spatial learning, discussion, information exchange, analysis, decision making and advocacy.[4]Participatory GIS implies making geographic technologies available to disadvantaged groups in society in order to enhance their capacity in generating, managing, analysing and communicating spatial information. PGIS practice is geared towards community empowerment through measured, demand-driven, user-friendly and integrated applications of geo-spatial technologies.[citation needed]GIS-based maps and spatial analysis become major conduits in the process. A good PGIS practice is embedded into long-lasting spatial decision-making processes, is flexible, adapts to different socio-cultural and bio-physical environments, depends on multidisciplinary facilitation and skills and builds essentially on visual language. The practice integrates several tools and methods whilst often relying on the combination of 'expert' skills with socially differentiated local knowledge. It promotes interactive participation of stakeholders in generating and managing spatial information and it uses information about specific landscapes to facilitate broadly-based decision making processes that support effective communication and community advocacy. If appropriately utilized,[5]the practice could exert profound impacts on community empowerment, innovation and social change.[6]More importantly, by placing control of access and use ofculturally sensitivespatial information in the hands of those who generated them, PGIS practice could protect traditional knowledge and wisdom from external exploitation. PPGIS is meant to bring the academic practices ofGISand mapping to the local level in order to promote knowledge production by local and non-governmental groups.[7]The idea behind PPGIS is empowerment and inclusion of marginalized populations, who have little voice in the public arena, through geographic technology education and participation. PPGIS uses and produces digital maps, satellite imagery, sketch maps, and multiple other spatial and visual tools, to change geographic involvement and awareness on a local level. The term was coined in 1996 at the meetings of theNational Center for Geographic Information and Analysis(NCGIA).[8][9][10] Attendees to theMapping for Change International Conference on Participatory Spatial Information Management and Communicationconferred to at least three potential implications of PPGIS; it can: (1) enhance capacity in generating, managing, and communicating spatial information; (2) stimulate innovation; and ultimately; (3) encourage positive social change.[11][12]This reflects on the rather nebulous definition of PPGIS as referenced in theEncyclopedia of GIS[13]which describes PPGIS as having a definition problem. There are a range of applications for PPGIS. 
The potential outcomes can be applied from community andneighborhood planningand development to environmental andnatural resource management. Marginalized groups, fromgrassrootsorganizations toindigenous populations, could benefit from GIS technology. Governments,non-government organizationsandnon-profit groupsare a big force behind multiple programs. The current extent of PPGIS programs in the US has been evaluated by Sawicki and Peterman.[14]They catalog over 60 PPGIS programs that aid in "public participation in community decision making by providing local-area data to community groups," in the United States.[15]: 24The organizations providing these programs are mostly universities, localchambers of commerce, and non-profitfoundations. In general, neighborhood empowerment groups can form and gain access to information that is normally easy for the official government and planning offices to obtain. It is easier for this to happen than for individuals of lower-income neighborhoods just working by themselves. There have been several projects where university students help implement GIS in neighborhoods and communities. It is believed[by whom?]that access to information is the doorway to more effective government for everybody and to community empowerment.

In a case study of a group in Milwaukee, residents of aninner cityneighborhood became active participants in building a community information system, learning to access public information and create and analyze new databases derived from their own surveys, all with the purpose of making these residents useful actors in city management and in the formation of public policy.[16]In a number of cases, there are providers of data for community groups, but the groups may not know that such entities exist. Getting the word out would be beneficial.[citation needed]Some of the spatial data that the neighborhood wanted was information on abandoned or boarded-up buildings and homes, vacant lots, and properties that contained garbage, rubbish and debris that contributed to health and safety issues in the area. They also appreciated being able to find landlords that were not keeping up the properties. The university team and the community were able to build databases and make maps that would help them find these areas and perform the spatial analysis that they needed. Community members learned how to use the computer resources, ArcView 1.0, and build a theme or land use map of the surrounding area. They were able to perform spatial queries and analyze neighborhood problems. Some of these problems included finding absentee landlords and finding code violations for the buildings on the maps.[16]

There are two approaches to PPGIS use and application. These two perspectives, top–down and bottom–up, are the currently debated schism in PPGIS. According to Sieber (2006), PPGIS was first envisioned as a means of mapping individuals by multiple social and economic demographic factors in order to analyze the spatial differences in access to social services. She refers to this kind of PPGIS astop-down, since it is less hands-on for the public but theoretically serves the public by making adjustments for deficiencies and improvements in public management.[17]A current trend in academic involvement with PPGIS is researching existing programs, or starting programs, in order to collect data on the effectiveness of PPGIS.
Elwood (2006), inThe Professional Geographer, talks in depth about the "everyday inclusions, exclusions, and contradictions of Participatory GIS research."[18]The research is being conducted in order to evaluate if PPGIS is involving the public equally. In contrast to Sieber's top-down PPGIS, this is a counter method, referred to asbottom-upPPGIS. Its purpose is to work with the public to let them learn the technologies and then produce their own GIS.

Public participation GIS is defined by Sieber as the use of geographic information systems to broaden public involvement in policymaking, as well as the value of GIS in promoting the goals of nongovernmental organizations, grassroots groups and community-based organizations.[17]It would seem on the surface that PPGIS, as it is commonly referred to, in this sense would be of a beneficial nature to those in the community or area that is being represented. But in truth only certain groups or individuals will be able to obtain the technology and use it. Is PPGIS becoming more available to the underprivileged sector of the community? The question of "who benefits?" should always be asked, as should the question of whether PPGIS harms a community or group of individuals.

The local, participatory management of urban neighborhoods usually follows on from 'claiming the territory', and has to be made compatible with national or local authority regulations on administering, managing and planning urban territory.[19]PPGIS applied to participatory community/neighborhood planning has been examined by many others.[20][21][22]Specific attention has been given to applications such as housing issues[23]or neighborhood revitalization.[24]Spatial databases along with the P-mapping are used to maintain a public records GIS or community land information systems.[25]These are just a few of the uses of GIS in the community.

Public participation in decision making processes works not only to identify areas of common values or variability, but also as an illustrative and instructional tool. One example of effective dialogue and building trust between the community and decision makers comes from pre-planning for development in the United Kingdom. It involves using GIS andmulti-criteria decision analysis(MCDA) to make a decision about wind farm siting. This method hinges upon taking all stakeholder perspectives into account to improve the chances of reaching consensus. This also creates a more transparent process and adds weight to the final decision by building upon traditional methods such as public meetings and hearings, surveys, focus groups, and deliberative processes, giving participants more insight and more informed opinions on environmental issues.[26]Collaborative processes that consider objective and subjective inputs have the potential to efficiently address some of the conflict between development and nature, as they involve a fuller justification by wind farm developers for location, scale, and design. Spatial tools such as the creation of 3D viewsheds offer participants new ways of assessing visual intrusion to make a more informed decision. Higgs et al.[26]make a telling statement when analyzing the success of this project – "the only way of accommodating people's landscape concerns is to site wind farms in places that people find more acceptable".
This implies that developers recognize the validity of citizens' concerns and are willing to compromise in identifying sites where wind farms will be successful not only financially, but also politically and socially. This creates greater accountability and facilitates the incorporation of stakeholder values to resolve differences and gain public acceptance for vital development projects. In another planning example, Simao et al.[27] analyzed how to create sustainable development options with widespread community support. They determined that stakeholders need to learn the likely outcomes that result from their stated preferences, which can be supported through enhanced access to information and incentives to increase public participation. Through a multi-criteria spatial decision support system, stakeholders were able to voice concerns and work towards a compromise solution whose final outcome was accepted by a majority when siting wind farms. This differs from the work of Higgs et al. in that the focus was on allowing users to learn from the collaborative process, both interactively and iteratively, about the nature of the problem and their own preferences for the desirable characteristics of a solution. This stimulated the sharing of opinions and discussion of the interests behind preferences. After understanding the problem more fully, participants could discuss alternative solutions and interact with other participants to come to a compromise solution.[27] Similar work has been done to incorporate public participation in spatial planning for transportation system development,[28] and this method of two-way benefits is beginning to move towards web-based mapping services to further simplify and extend the process into the community.[29]
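The weighting logic behind GIS-based MCDA exercises of the kind described above can be illustrated with a minimal sketch. The criterion layers, weights and threshold below are invented for illustration and do not reproduce the models of Higgs et al. or Simao et al.; real applications derive the layers from GIS analysis (viewsheds, buffers, wind resource maps) and the weights from stakeholder input.

```python
# Minimal illustration of multi-criteria decision analysis (MCDA) by weighted linear
# combination. All rasters and weights are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)  # a small hypothetical study-area grid

# Criterion layers, each already normalized to 0..1 (1 = more suitable for a wind farm).
criteria = {
    "wind_speed":        rng.random(shape),   # e.g. rescaled mean wind speed
    "distance_to_homes": rng.random(shape),   # larger buffered distance -> higher score
    "low_visual_impact": rng.random(shape),   # e.g. derived from a viewshed analysis
}

# Stakeholder-derived weights (summing to 1); different groups would supply different sets.
weights = {"wind_speed": 0.5, "distance_to_homes": 0.3, "low_visual_impact": 0.2}

# Weighted linear combination: overall suitability per grid cell.
suitability = sum(weights[name] * layer for name, layer in criteria.items())

# A simple shortlist of candidate cells that could be discussed with participants.
threshold = np.quantile(suitability, 0.95)
candidate_cells = np.argwhere(suitability >= threshold)
print(f"{len(candidate_cells)} candidate cells above the 95th-percentile suitability score")
```

Re-running the combination with different weight sets is one simple way to show participants how their stated preferences change the map of candidate sites.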
https://en.wikipedia.org/wiki/Participatory_GIS
Participatory GIS (PGIS) or public participation geographic information system (PPGIS) is a participatory approach to spatial planning and to spatial information and communications management.[1][2] PGIS combines Participatory Learning and Action (PLA) methods with geographic information systems (GIS).[3] PGIS combines a range of geo-spatial information management tools and methods, such as sketch maps, participatory 3D modelling (P3DM), aerial photography, satellite imagery, and global positioning system (GPS) data, to represent peoples' spatial knowledge in the form of (virtual or physical) two- or three-dimensional maps used as interactive vehicles for spatial learning, discussion, information exchange, analysis, decision making and advocacy.[4] Participatory GIS implies making geographic technologies available to disadvantaged groups in society in order to enhance their capacity for generating, managing, analysing and communicating spatial information. PGIS practice is geared towards community empowerment through measured, demand-driven, user-friendly and integrated applications of geo-spatial technologies.[citation needed] GIS-based maps and spatial analysis become major conduits in the process. A good PGIS practice is embedded in long-lasting spatial decision-making processes, is flexible, adapts to different socio-cultural and bio-physical environments, depends on multidisciplinary facilitation and skills, and builds essentially on visual language. The practice integrates several tools and methods whilst often relying on the combination of 'expert' skills with socially differentiated local knowledge. It promotes interactive participation of stakeholders in generating and managing spatial information, and it uses information about specific landscapes to facilitate broadly based decision-making processes that support effective communication and community advocacy. If appropriately utilized,[5] the practice could exert profound impacts on community empowerment, innovation and social change.[6] More importantly, by placing control of access to and use of culturally sensitive spatial information in the hands of those who generated it, PPGIS practice could protect traditional knowledge and wisdom from external exploitation. PPGIS is meant to bring the academic practices of GIS and mapping to the local level in order to promote knowledge production by local and non-governmental groups.[7] The idea behind PPGIS is empowerment and inclusion of marginalized populations, who have little voice in the public arena, through geographic technology education and participation. PPGIS uses and produces digital maps, satellite imagery, sketch maps, and many other spatial and visual tools to change geographic involvement and awareness at the local level. The term was coined in 1996 at the meetings of the National Center for Geographic Information and Analysis (NCGIA).[8][9][10] Attendees at the Mapping for Change International Conference on Participatory Spatial Information Management and Communication agreed on at least three potential implications of PPGIS: it can (1) enhance capacity in generating, managing, and communicating spatial information; (2) stimulate innovation; and, ultimately, (3) encourage positive social change.[11][12] This reflects the rather nebulous definition of PPGIS; the Encyclopedia of GIS[13] describes PPGIS as having a definition problem. There are a range of applications for PPGIS.
https://en.wikipedia.org/wiki/Public_participation_GIS
A virtual globe is a three-dimensional (3D) software model or representation of Earth or another world. A virtual globe provides the user with the ability to freely move around in the virtual environment by changing the viewing angle and position. Compared to a conventional globe, virtual globes have the additional capability of representing many different views of the surface of Earth.[1] These views may be of geographical features, man-made features such as roads and buildings, or abstract representations of demographic quantities such as population. On November 20, 1997, Microsoft released an offline virtual globe in the form of Encarta Virtual Globe 98, followed by Cosmi's 3D World Atlas in 1999. The first widely publicized online virtual globes were NASA WorldWind (released in mid-2004) and Google Earth (mid-2005). Virtual globes may be used for study or navigation (by connecting to a GPS device), and their design varies considerably according to their purpose. Those wishing to portray a visually accurate representation of the Earth often use satellite image servers and are capable not only of rotation but also of zooming and sometimes horizon tilting. Very often such virtual globes aim to provide as true a representation of the world as is possible, with worldwide coverage up to a very detailed level. When this is the case, the interface often has the option of providing simplified graphical overlays to highlight man-made features, since these are not necessarily obvious from a photographic aerial view. Another issue raised by the availability of such detail is security, with some governments having raised concerns about the ease of access to detailed views of sensitive locations such as airports and military bases. Another type of virtual globe exists whose aim is not the accurate representation of the planet, but instead a simplified graphical depiction. Most early computerized atlases were of this type and, while displaying less detail, these simplified interfaces are still widespread since they are faster to use because of the reduced graphics content and the speed with which the user can understand the display. As more and more high-resolution satellite imagery and aerial photography become accessible for free, many of the latest online virtual globes are built to fetch and display these images. As well as the availability of satellite imagery, online public domain factual databases such as the CIA World Factbook have been incorporated into virtual globes. In 1993 the German company ART+COM developed a first interactive virtual globe, the Terravision project, supported by the Deutsche Post as a "networked virtual representation of the Earth based on satellite images, aerial shots, altitude data and architectural data".[2] The use of virtual globe software was widely popularized by (and may have been first described in) Neal Stephenson's famous science fiction novel Snow Crash. In the metaverse in Snow Crash, there is a piece of software called Earth made by the Central Intelligence Corporation (CIC). The CIC uses its virtual globe as a user interface for keeping track of all its geospatial data, including maps, architectural plans, weather data, and data from real-time satellite surveillance. Virtual globes (along with all hypermedia and virtual reality software) are distant descendants of the Aspen Movie Map project, which pioneered the concept of using computers to simulate distant physical environments (though the Movie Map's scope was limited to the city of Aspen, Colorado).
Many of the functions of virtual globes were anticipated by Buckminster Fuller, who in 1962 envisioned the creation of a Geoscope: a giant globe connected by computers to various databases. It would be used as an educational tool to display large-scale global patterns related to topics such as economics, geology, and natural resource use.[3]
https://en.wikipedia.org/wiki/Virtual_globe
Volunteered geographic information(VGI) is the harnessing of tools to create, assemble, and disseminate geographic data provided voluntarily by individuals.[1][2]VGI is a special case of the larger phenomenon known asuser-generated content,[3]and allows people to have a more active role in activities such asurban planningand mapping.[4] VGI can be seen as an extension of critical and participatory approaches togeographic information systems.[5]Some examples of this phenomenon areWikiMapia,OpenStreetMap, andYandex.Map editor. These sites provide general base map information and allow users to create their own content by marking locations where various events occurred or certain features exist, but aren't already shown on the base map. Other examples include 311-style request systems[6]and 3D spatial technology.[7]Additionally, VGI commonly populates the content offered throughlocation-based servicessuch as the restaurant review siteYelp.[8] One of the most important elements of VGI in contrast to standard user-generated content is the geographic element, and its relationship withcollaborative mapping. The information volunteered by the individual is linked to a specific geographic region. While this is often taken to relate to elements of traditional cartography, VGI offers the possibility of including subjective, emotional, or other non-cartographic information.[9] Geo-referenced data produced within services such asTrip Advisor,Flickr,Twitter,[10]Instagram[11]andPanoramiocan be considered as VGI. VGI has attracted concerns aboutdata quality, and specifically about itscredibility[12]and the possibility ofvandalism.[13] The term VGI has been criticized for poorly representing common variations in the data ofOpenStreetMapand other sites: that some of the data is paid, in the case of CloudMade's ambassadors, or generated by another entity, as in US Census data.[14] Because it is gathered by individuals with no formal training, the quality and reliability of VGI is a topic of much debate.[15]Some methods of quality assurance have been tested, namely, the use of control data to verify VGI accuracy.[16] While there is concern over the authority of the data,[17]VGI may provide benefits beyond that of professional geographic information (PGI),[18][19]partly due to its ability to collect and present data not collected or curated by traditional/professional sources.[20][21][22]Additionally, VGI provides positive emotional value to users in functionality, satisfaction, social connection and ethics.[23][24]
https://en.wikipedia.org/wiki/Volunteered_geographic_information
OpenStreetMap(abbreviatedOSM) is a free,openmap databaseupdated and maintained by a community ofvolunteersviaopen collaboration.[5]Contributors collect data fromsurveys, trace fromaerial photo imageryorsatellite imagery, and import from other freely licensedgeodatasources. OpenStreetMap isfreely licensedunder theOpen Database Licenseand is commonly used to make electronicmaps, informturn-by-turn navigation, and assist inhumanitarian aidanddata visualisation. OpenStreetMap uses its own data model to storegeographical featureswhich can then be exported into otherGIS file formats. The OpenStreetMap website itself is anonline map, geodatasearch engine, and editor. OpenStreetMap was created bySteve Coastin response to theOrdnance Survey, the United Kingdom's national mapping agency, failing to release its data to the public under free licences in 2004. Initially, maps in OSM were created only viaGPS traces, but it was quickly populated by importingpublic domaingeographical data such as the U.S.TIGERand by tracing imagery as permitted by source. OpenStreetMap's adoption was accelerated byGoogle Maps's introduction of pricing in 2012 and the development of supporting software and applications. The database is hosted by theOpenStreetMap Foundation, a non-profit organisation registered inEngland and Walesand is funded mostly via donations. Steve Coastfounded the project in 2004 while attendingUniversity College London, initially focusing on mapping the United Kingdom.[4]In the UK and elsewhere, government-run and tax-funded projects like theOrdnance Surveycreated massivedatasetsbut declined to freely and widely distribute them. The first contribution was a street that Coast entered in December 2004 after cycling aroundRegent's ParkinLondonwith aGPS tracking unit.[6][7][8]In April 2006, theOpenStreetMap Foundationwas established to encourage the growth, development and distribution of freegeospatialdata and provide geospatial data for anybody to use and share. In April 2007,Automotive Navigation Data(AND) donated a complete road data set for theNetherlandsandtrunk roaddata forIndiaandChinato the project.[9]By July 2007, when the first "The State of the Map" (SotM) conference[10]was held, there were 9,000 registered users. 
In October 2007, OpenStreetMap completed the import of a US Census TIGER road dataset.[11] In December 2007, Oxford University became the first major organisation to use OpenStreetMap data on its main website.[12] Ways to import and export data have continued to grow – by 2008, the project had developed tools to export OpenStreetMap data to power portable GPS units, replacing their existing proprietary and out-of-date maps.[13] In March 2008, two founders of CloudMade, a commercial company that uses OpenStreetMap data, announced that they had received venture capital funding of €2.4 million.[14] In 2010, AOL launched an OSM-based version of MapQuest and committed $1 million to increasing OSM's coverage of local communities for its Patch website.[15] In 2012, the launch of pricing for Google Maps led several prominent websites to switch from their service to OpenStreetMap and other competitors.[16] Chief among these were Foursquare and Craigslist, which adopted OpenStreetMap, and Apple, which ended a contract with Google and launched a self-built mapping platform using TomTom and OpenStreetMap data.[17] As of 2025, TomTom, Microsoft, Esri and Meta are the highest-tier corporate sponsors of the OpenStreetMap Foundation.[18] The OSM project aims to collect data about stationary objects throughout the world, including infrastructure and other aspects of the built environment, points of interest, land use and cover classifications, and topography. Map features range in scale from international boundaries to hyperlocal details such as shops and street furniture. Although historically significant features and ongoing construction projects are routinely included in the database, the project's scope is limited to the present day, as opposed to the past or future.[19] OSM's data model differs markedly from that of a conventional GIS or CAD system. It is a topological data structure without the formal concept of a layer, allowing thematically diverse data to commingle and interconnect. A map feature or element is modelled as one of three geometric primitives: nodes, ways, and relations.[20][21] The OpenStreetMap data primitives are stored and processed in different formats. The OpenStreetMap server uses a PostgreSQL database, with one table for each data primitive and individual objects stored as rows.[23][24] The data structure is defined as part of the OSM API. The current version of the API, v0.6, was released in 2009. A 2023 study found that this version's changes to the relation data structure had the effect of reducing the total number of relations; however, it simultaneously lowered the barrier to creating new relations and spurred the application of relations to new use cases.[25] OSM manages metadata as a folksonomy. Each element contains key-value pairs, called tags, that identify and describe the feature.[22] A recommended ontology of map features (the meaning of tags) is maintained on a wiki. New tagging schemes can always be proposed by a popular vote on a written proposal in the OpenStreetMap wiki; however, there is no requirement to follow this process: editors are free to use any tags they like to describe a feature. There are over 89 million different kinds of tags in use as of June 2017.[26] OpenStreetMap data has been favourably compared with proprietary data sources,[27] although as of 2009[update] data quality varied across the world.[28][29] A study in 2011 compared OSM data with TomTom for Germany.
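The element-and-tag model described earlier in this section (nodes, ways, and relations carrying free-form key-value tags) can be sketched in a few lines. This is a simplified illustration only; the actual server schema and API types differ, and the tag values shown are invented examples following common community conventions.

```python
# Simplified sketch of OpenStreetMap's three data primitives and free-form key-value tags.
from dataclasses import dataclass, field

@dataclass
class Node:                                   # a point with coordinates
    id: int
    lat: float
    lon: float
    tags: dict[str, str] = field(default_factory=dict)

@dataclass
class Way:                                    # an ordered list of node references (line or area)
    id: int
    node_refs: list[int]
    tags: dict[str, str] = field(default_factory=dict)

@dataclass
class Relation:                               # ordered members (nodes, ways, relations) with roles
    id: int
    members: list[tuple[str, int, str]]       # (member type, member id, role)
    tags: dict[str, str] = field(default_factory=dict)

# A corner shop modelled as a tagged node; tags follow conventions, not a fixed schema.
shop = Node(id=1, lat=52.5163, lon=13.3777,
            tags={"shop": "convenience", "name": "Example Store",
                  "opening_hours": "Mo-Sa 08:00-20:00"})

# A short residential street modelled as a way referencing two (hypothetical) nodes.
street = Way(id=2, node_refs=[1001, 1002],
             tags={"highway": "residential", "name": "Example Street"})
```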
For car navigation TomTom has 9% more information, while for the entire street network, OSM has 27% more information.[30]In 2011,TriMet, which serves thePortland, Oregon, metropolitan area, found that OSM's street data, consumed through the routing engineOpenTripPlannerand the search engineApache Solr, yields better results than analogous GIS datasets managed by local government agencies.[31] A 2021 study compared theOpenStreetMap Cartostyle's symbology to that of theSoviet Union's comprehensivemilitary mapping programme, finding that OSM matched the Soviet maps in coverage of some features such as road infrastructure but gave less prominence to the natural environment.[32] A study from 2021 found the mean completeness of shop data in the German regions Baden-Württemberg and Saxony to be 88% and 82% respectively. Instead of comparing OSM data to other datasets, the authors looked at how the number of shops developed over time. They then determined the expected number of shops by estimating the saturation level.[33] According to a 2024 study usingPyPSA, OSM has the most detailed and up-to-date publicly available coverage of the European high-voltage electrical grid, comparable to official data from theEuropean Network of Transmission System Operators for Electricity.[34] All data added to the project needs to have a licence compatible with the Open Data CommonsOpen Database Licence(ODbL). This can include out-of-copyright information, public domain or other licences. Software used in the production and presentation of OpenStreetMap data may have separate licensing terms. OpenStreetMap data and derived tiles were originally published under theCreative CommonsAttribution-ShareAlike licence (CC BY-SA) with the intention of promoting free use and redistribution of the data. In September 2012, the licence was changed to the ODbL in order to define its bearing on data rather than representation more specifically.[35][36]As part of this relicensing process, some of the map data was removed from the public distribution. This included all data contributed by members that did not agree to the new licensing terms, as well as all subsequent edits to those affected objects. It also included any data contributed based on input data that was not compatible with the new terms. Estimates suggested that over 97% of data would be retained globally, but certain regions would be affected more than others, such as in Australia where 24 to 84% of objects would be retained, depending on the type of object.[37]Ultimately, more than 99% of the data was retained, with Australia and Poland being the countries most severely affected by the change.[38]The license change and resulting deletions prompted a group of dissenting mappers to establish Free Open Street Map (FOSM), aforkof OSM that remained under the previous license.[39] Map tiles provided by the OpenStreetMap project were licensed underCC-BY-SA-2.0until 1 August 2020. The ODbL license requires attribution to be attached to maps produced from OpenStreetMap data, but does not require that any particular license be applied to those maps. "©OpenStreetMap Contributors" with link to ODbL copyright page as attribution requirement is used on the site.[40] OSM publishes officialdatabase dumpsof the entire "planet" for reuse on minutely and weekly intervals, formatted asXMLor binaryProtocol Buffers. Alternative third-party distributions provide access to OSM data in other formats or to more manageable subsets of the data. 
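The planet dumps and regional extracts described above are typically consumed programmatically. A minimal sketch using the pyosmium library (the "osmium" package on PyPI) is shown below; the extract file name is hypothetical and would be downloaded beforehand from one of the distributors mentioned in this section.

```python
# Stream through a downloaded .osm.pbf extract and count nodes tagged amenity=cafe.
import osmium

class CafeCounter(osmium.SimpleHandler):
    """Counts cafe nodes while streaming through an OSM PBF file."""
    def __init__(self):
        super().__init__()
        self.cafes = 0

    def node(self, n):
        # n.tags behaves like a mapping of the element's key-value tags.
        if "amenity" in n.tags and n.tags["amenity"] == "cafe":
            self.cafes += 1

counter = CafeCounter()
counter.apply_file("bremen-latest.osm.pbf")   # hypothetical regional extract
print(f"cafes found: {counter.cafes}")
```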
Geofabrik publishes extracts of the database in OSM and shapefile formats for individual countries and political subdivisions. Amazon Web Services publishes the planet on S3 for querying in Athena.[41] As part of the QLever project, the University of Freiburg publishes Turtle dumps suitable for linked data systems.[42] From 2020 to 2024, Meta published the Daylight Map Distribution, which applied quality assurance processes and added some external datasets to OSM data to make it more production-ready.[43] OSM data also forms a major part of the Overture Maps Foundation's dataset and of commercial datasets from Mapbox and MapTiler.[44] Map data is collected through ground surveys, personal knowledge, digitizing from imagery, and government data. Ground survey data is collected by volunteers, traditionally using tools such as a handheld GPS unit, a notebook, a digital camera and a voice recorder. Software applications on smartphones (mobile devices) have made it easy for anybody to survey. The data is then entered into the OpenStreetMap database using a number of software tools including JOSM and Merkaartor.[45] More recently, apps such as StreetComplete offer "quests" in the user's vicinity, allowing them to add metadata to specific points of interest (such as, for example, the opening hours of a restaurant or whether or not a particular crosswalk has tactile paving). Mapathon competition events are also held by local OpenStreetMap teams and by non-profit organisations and local governments to map a particular area. The availability of aerial photography and other data from commercial and government sources has added important sources of data for manual editing and automated imports. Special processes are in place to handle automated imports and avoid legal and technical problems. Ground surveys are performed by a mapper on foot, bicycle, or in a car, motorcycle, or boat. Map data is typically recorded on a GPS unit or on a smartphone with a mapping app; a common file format is GPX. Once the data has been collected, it is entered into the database by uploading it onto the project's website together with appropriate attribute data. As collecting and uploading data may be separated from editing objects, contribution to the project is possible without using a GPS unit, such as by using paper mapping. Similar to users contributing data using a GPS unit, corporations (e.g. Amazon) with large vehicle fleets may use telemetry data from their vehicles to contribute data to OpenStreetMap.[46] Some committed contributors adopt the task of mapping whole towns and cities, or organising mapping parties to gather the support of others to complete a map area. A large number of less-active users contribute corrections and small additions to the map.[citation needed] Maxar,[47] Bing,[48] ESRI, and Mapbox are some of the providers of aerial/satellite imagery used as a backdrop for map production. Yahoo! (2006–2011),[49][50] Bing (2010–present),[48] and DigitalGlobe (2017[47]–2023[51]) allowed their aerial photography and satellite imagery to be used as a backdrop for map production. For a period from 2009 to 2011, NearMap Pty Ltd made their high-resolution PhotoMaps (of major Australian cities, plus some rural Australian areas) available under a CC BY-SA licence.[52] Data from several street-level image platforms are available as map data photo overlays. Bing Streetside provides 360° image tracks, and the open, crowdsourced Mapillary and KartaView platforms generally provide smartphone and dashcam images.
A Mapillary traffic sign data layer, a product of user-submitted images, is also available.[53] Some government agencies have released official data on appropriate licences. This includes the United States, where works of the federal government are in the public domain. In the United States, most roads originate from the Census Bureau's TIGER data.[54] Geographic names were initially sourced from the Geographic Names Information System, and some areas contain water features from the National Hydrography Dataset. In the UK, some Ordnance Survey OpenData is imported. In Canada, Natural Resources Canada's CanVec vector data and GeoBase provide land cover and streets.[citation needed] Globally, OpenStreetMap initially used the prototype global shoreline from NOAA. Because it was oversimplified and crude, it has largely been replaced by other government sources or manual tracing.[citation needed] Out-of-copyright maps can be good sources of information about features that do not change frequently. Copyright periods vary, but in the UK, Crown copyright expires after 50 years, so old Ordnance Survey maps can legally be used. A complete set of UK 1 inch/mile maps from the late 1940s and early 1950s has been collected, scanned, and made available online as a resource for contributors.[55] The map data can be edited in a number of editing applications that provide aids including satellite and aerial imagery, street-level imagery, GPS traces, and photo and voice annotations. By default, the official OSM website directs contributors to the Web-based iD editor.[56][57] Meta develops a fork of this editor, Rapid, that provides access to external datasets, including some derived from machine learning detections.[58] For complex or large-scale changes, experienced users often turn to more powerful desktop editing applications such as JOSM and Potlatch. Several mobile applications can also edit OSM. Go Map!! and Vespucci are the primary full-featured editors for iOS and Android, respectively. StreetComplete is an Android application designed for laypeople, built around a guided question-and-answer format. Every Door, Maps.me, Organic Maps, and OsmAnd include basic functionality for editing points of interest. Between 2018 and 2023, the top five editing tools by number of edits were JOSM, iD, StreetComplete, Rapid, and Potlatch.[59] OSM accepts contributions from the general public. Changesets submitted through editors and the OSM API immediately enter the database and are quickly published for reuse, without going through peer review beforehand. The API validates changes only for basic well-formedness, not for topological or logical consistency or for adherence to community norms. As a crowdsourced project, OSM is susceptible to several forms of data vandalism, including copyright infringement, graffiti, and spam.[60] Overall, vandalism accounts for an estimated 0.2% of edits to OSM, which is relatively low compared to vandalism on Wikipedia. Members of the community detect and fix most unintentional errors and vandalism promptly,[61] by monitoring the slippy map and revision history on the main website, as well as by searching for issues using tools like OSMCha, OSM Inspector, and Osmose.
In addition to community vigilance, the OpenStreetMap Foundation's Data Working Group and a group of administrators are responsible for responding to vandals.[60] As of 2022[update], a comprehensive security assessment of the OSM data model has yet to take place.[62] There have been several high-profile incidents of vandalism and other errors in OSM. For example, players of Pokémon Go have been known to vandalize OSM, one of the game's map data sources, in order to manipulate gameplay. However, this vandalism is casual, rarely sustained, and predictable based on the mechanics of the game.[61] The project has a geographically diverse user base, due to its emphasis on local knowledge and the "on-the-ground" situation in the process of data collection.[69] Many early contributors were cyclists who survey with and for bicyclists, charting cycle routes and navigable trails.[70] Others are GIS professionals. Contributors are predominantly men, with only 3–5% being women.[71] By August 2008, shortly after the second State of the Map conference was held, there were over 50,000 registered contributors; by March 2009, there were 100,000, and by the end of 2009 the figure was nearly 200,000. In April 2012, OpenStreetMap cleared 600,000 registered contributors.[72] On 6 January 2013, OpenStreetMap reached one million registered users.[73] Around 30% of users have contributed at least one point to the OpenStreetMap database.[74][75] According to a study conducted in 2011, only 38% of members carried out at least one edit and only 5% of members created more than 1000 nodes. Most members are in Europe (72%).[76] According to another study, when a competing maps platform is launched, OSM attracts fewer new contributors while pre-existing contributors increase their level of contribution, possibly driven by their ideological attachment to the platform; overall, however, there is a negative effect on the total quantity of contributions.[77] Some companies freely license satellite/aerial/street imagery sources from which OpenStreetMap contributors trace roads and features, while other companies make map data available for import. Automotive Navigation Data (AND) provided a complete road data set for the Netherlands and trunk roads data for China and India. Amazon Logistics uses OpenStreetMap for navigation and has a team that revises the map based on GPS traces and feedback from its drivers.[78] In eight Southeast Asian countries, Grab has contributed more than 800,000 kilometres (500,000 mi) of roads based on drivers' GPS traces, including many narrow alleyways that are missing from other mapping platforms.[79] eCourier also contributes its drivers' GPS traces to OSM. According to a study, about 17% of road kilometers were last touched by corporate teams in March 2020.[80] The top 13 corporate contributors during 2014–2020 include Apple, Kaart, Amazon, Facebook, Mapbox, Digital Egypt, Grab, Microsoft, Telenav, Developmentseed, Uber, Lightcyphers and Lyft.[78] According to OpenStreetMap Statistics, the overall percentage of edits from corporations peaked at about 10% in 2020 and 2021 and has since fallen to about 2–3% in 2024.[81] Humanitarian OpenStreetMap Team (HOT) is a nonprofit organisation promoting community mapping across the world. It developed the open-source HOT Tasking Manager for collaboration, and contributed to mapping efforts after the April 2015 Nepal earthquake, the 2016 Kumamoto earthquakes, and the 2016 Ecuador earthquake. The Missing Maps Project, founded by the American Red Cross, Doctors Without Borders, and other NGOs, uses the HOT Tasking Manager.
TheUniversity of Heidelberghosts the Disastermappers Project for training university students in mapping for humanitarian purposes. WhenEbola broke out in 2014, the volunteers mapped 100,000 buildings and hundreds of miles of roads in Guinea in just five days.[82]Local groups such asRamani HuriainDar es Salaamincorporate OSM mapping into theircommunity resilienceprogrammes.Community emergency response teamsinSan Franciscoand elsewhere organize field surveys and mapathons to contribute information aboutfire alarm call boxes,hazard symbols, and other relevant features.[83] Since 2007, the OpenStreetMap community has organised State of the Map (SotM), an annual international conference at which stakeholders present on technical progress and discuss policy issues.[10][85]The conference is held each year in a different city around the world. Various regional editions of State of the Map are also held for each continent, regions such as the Baltics and Latin America, and some countries with especially active local communities, such as France, Germany, and the United States. The official OSM website at openstreetmap.org is the project's main hub for contributors. A reference implementation of aslippy map(featuring a selection of third-party tile layers), arevision log, and integrations with basicgeocodersand route planners facilitate the community's management of the database contents. Logged-in users can access an embedded copy of theiDeditor and shortcuts for desktop editors for contributing to the database, as well as some rudimentarysocial networkingfeatures such as user profiles and diaries. The website's built-inRESTAPI andOAuthauthentication enable third-party applications to programmatically interact with the site's major functionality, including submitting changes. Much of the website runs as aRuby on Railsapplication backed by aPostgreSQLdatabase. Strictly speaking, the OSM project produces only a geographic database, leaving data consumers to handle every aspect of postprocessing the data and presenting it to end users. However, a large ecosystem of command line tools, software libraries, and cloud services has developed around OSM, much of it asfree and open-source software. Two kinds ofsoftware stackshave emerged for rendering OSM data as an interactiveslippy map. In one, a server-side rendering engine such asMapnikprerenders the data as a series ofraster imagetiles, then serves them using a library such asmod_tile. A library such asOpenLayersorLeafletdisplays these tiles on the client side on the slippy map. Alternatively, a server application converts raw OSM data intovector tilesaccording to a schema, such as Mapbox Streets, OpenMapTiles, or Shortbread. These tiles are rendered on the client side by a library such as theMapboxMaps SDK,MapLibre,Mapzen's Tangrams, or OpenLayers. Applications such asMapboxStudio allow designers to author vector styles in an interactive, visual environment.[86]Vector maps are especially common among three-dimensional mapping applications and mobile applications. Plugins are available for embedding slippy maps incontent management systemssuch asWordPress.[87] A geocoder indexes map data so that users can search it by name and address (geocoding) or look up an address based on a given coordinate pair (reverse geocoding). 
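As a concrete illustration of the geocoding workflow just described, the sketch below queries the public Nominatim instance over HTTP. The endpoint and parameters shown are the commonly documented ones; the User-Agent string is a placeholder, and production use should respect Nominatim's usage policy (rate limits, or a self-hosted instance).

```python
# Hedged sketch of forward geocoding against the public Nominatim instance.
import requests

resp = requests.get(
    "https://nominatim.openstreetmap.org/search",
    params={"q": "Brandenburg Gate, Berlin", "format": "jsonv2", "limit": 1},
    headers={"User-Agent": "example-geocoding-demo/0.1 (contact@example.org)"},  # placeholder
    timeout=10,
)
resp.raise_for_status()
results = resp.json()
if results:
    top = results[0]
    # Each result carries a display name and coordinates derived from OSM data.
    print(top["display_name"], top["lat"], top["lon"])
```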
Several geocoders are designed to index OSM data, includingNominatim (from the Latin, 'by name'), which is built into the official OSM website along withGeoNames.[88][89]Komoot's Photonsearch engineprovidesincremental searchfunctionality based on a Nominatim database. The nonprofitSocial Web Foundation's places.pub formats OSM locations asActivityPubobjects, enabling social media applications to enrich geocodes associated withcheck-ins.[90]Element 84'snatural languagegeocoder uses alarge language modelto identify OSM geometries to return.[91] A variety ofroute planninglibraries and services are based on OSM data. OSM's official website has featuredGraphHopper, theOpen Source Routing Machine, and Valhalla since February 2015.[92][93]Other widely deployed routing engines include Openrouteservice andOpenTripPlanner, which specializes in public transport routing. OSM is an important source ofgeographic datain many fields, including transportation, analysis, public services, and humanitarian aid. However, much of its use by consumers is indirect via third-party products, becausecustomer reviewsandaerialandsatellite imageryare not part of the project per se.[44] A variety of applications and services allow users to visualise OSM data in the form of a map. The official OSM website features an interactiveslippy mapinterface so that users can efficiently edit maps and viewchangesets. It presents the general-purposeOpenStreetMap Cartostyle alongside a selection of specialised styles for cycling and public transport. Beyond this reference implementation, community-maintained map applications focus on alternative cartographic representations and specialised use cases. For example,OpenRailwayMapis a detailed online map of the world's railway infrastructure based on OSM data.[94]OpenSeaMapis a worldnautical chartbuilt as amashupof OpenStreetMap, crowdsourced water depth tracks, and third-party weather and bathymetric data. OpenTopoMap uses OSM andSRTMdata to createtopographic maps.[95]Tactile Map Automated Productionprintstactile mapsthat feature embossed streets, paths, and railroads from OSM.[96] On the desktop, applications such asGNOME MapsandMarbleprovide their own interactive styles. GIS suites such asQGISallow users to produce their own custom maps based on the same data. Many commercial and noncommercial websites feature maps powered by OSM data inlocator maps,store locators,infographics, story maps, and othermashups. Locator maps onWikipediaandWikivoyagearticles for cities and points of interest are powered by aMediaWikiextension and the OSM-based Wikimedia Maps service.[97]The locator maps onCraigslist,[98]Facebook,[99]Flickr,[100]Foursquare City Guide,[101]Gurtam's Wialon,[102]andSnapchat[103]are also powered by OSM. 
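The slippy-map pattern behind these OSM-powered maps can be reproduced in a few lines. The sketch below uses the folium Python wrapper around Leaflet to display prerendered OpenStreetMap raster tiles; it is an illustration rather than the OSM website's own implementation, and heavy use of the public tile servers should follow their usage policy. The coordinates and marker are arbitrary examples.

```python
# Client side of the raster-tile stack: a Leaflet slippy map via the folium wrapper.
import folium

m = folium.Map(
    location=[51.5074, -0.1278],   # central London, as an arbitrary example
    zoom_start=14,
    tiles="OpenStreetMap",         # folium's built-in OSM raster tile layer, with attribution
)
folium.Marker([51.5007, -0.1246], popup="Example point of interest").add_to(m)
m.save("slippy_map.html")          # open in a browser; tiles are fetched as you pan and zoom
```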
From 2013 to 2022,GitHubvisualized any uploadedGeoJSONdata atop an OSM-basedMapboxbasemap.[104][105] In 2012,Applequietly switched the locator map iniPhotofromGoogle Mapsto OSM.[106]Interactive OSM-based maps appear in many mobile navigation applications, fitness applications, andaugmented reality games, such asStrava.[107] TheOverpass APIsearches the OSM database for features whose metadata or topology match criteria specified in a structuredquery language.[108]Overpass turbo is anintegrated development environmentfor querying this API.Bellingcatdevelops an alternative Overpass frontend for geolocating photographs.[109] QLeverandSophoxaretriplestoresthat accept standardSPARQLqueries to return facts about the OSM database.Geographic information retrievalsystems such as NLMaps Web[110]and OSCAR[111]answer natural language queries based on OSM data. OSMnx is aPythonpackage for analysing and visualising the OSMroad network.[112] OSM is often a source for realistic, large-scaletransport network analyses[113]because the raw road network data is freely available or because of aspects of coverage that are uncommon in proprietary alternatives. OSM data can be imported into professional-grade traffic simulation frameworks such asAimsunNext,[114]Eclipse SUMO,[115]andMATSim,[116]as well asurban planning–focused simulators such as A/B Street.[117]A team at theVirginia Tech Transportation Institutehas used Valhalla'smap matchingfunction to evaluateadvanced driver-assistance systems.[118]TheUnited States Census Bureauhas analysed routes generated by theOpen Source Routing Machinealong withAmerican Community Surveydata to develop a socioeconomic profile of commuters affected by theFrancis Scott Key Bridge collapse.[119] OSM is also used in conservation andland-use planningresearch. The annualForest Landscape Integrity Indexis based on a comprehensive map of remaining roadless areas derived from OSM's road network.[120][121]OpenSentinelMap is a global land use dataset based on OSM's land use areas andSentinel-2imagery, designed forfeature detectionandimage segmentationusing computer vision.[122] Some newsrooms routinely incorporate OSM data into their workflows anddata journalismprojects. TheChicago Tribunemaintains a dashboard ofcrime in Chicagovisualized against an OSM basemap.[123]The Washington PostandLos Angeles Timesaccompany articles with locator maps and more in-depth visuals that rely on OSM's hyperlocal coverage of places that have less detail in proprietary maps.[124][125] Various groups, including researchers, data journalists, theOpen Knowledge Foundation, andGeochicas, have used OSM in conjunction withWikidatato explore the demographics of people honoured by street names and raise awareness of gender bias in naming decisions.[126][127][128] OSM is a data source for some Web-based map services. In 2010,Bing Mapsintroduced an option to display an OSM-based map[129]and later began including building data from OSM by default.[66]Wheelmap.orgis a portal for discovering wheelchair-accessible places,mashing upOSM data with a separate, crowdsourcedcustomer reviewdatabase. Mobile applications such asCycleStreets,Karta GPS,Komoot,[130]Locus Map,Maps.me,Organic Maps, andOsmAndalso provide offline route planning capabilities.Apple Mapsuses OSM data in many countries.[131]Some ofGarmin's GPS products incorporate OSM data.[132]OSM is a popular source for road data among Iranian navigation applications, such as Balad.[67]Geotab[133]andTeleNav[134]also use OSM data in their in-car navigation systems. 
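The Overpass API mentioned earlier in this section accepts queries written in Overpass QL. A small, hedged example is shown below: it asks one of the community-run public endpoints (subject to fair-use limits) for drinking-water fountains within 800 metres of the Eiffel Tower.

```python
# Query the Overpass API for nodes tagged amenity=drinking_water near a point.
import requests

query = """
[out:json][timeout:25];
node["amenity"="drinking_water"](around:800,48.8584,2.2945);
out;
"""
resp = requests.post(
    "https://overpass-api.de/api/interpreter",   # one public community-run instance
    data={"data": query},
    timeout=30,
)
resp.raise_for_status()
elements = resp.json().get("elements", [])
for el in elements[:5]:
    print(el["id"], el["lat"], el["lon"], el.get("tags", {}))
print(f"total fountains found: {len(elements)}")
```

Libraries such as OSMnx, also mentioned above, wrap similar queries to build road-network graphs directly from OSM data.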
Some public transportation providers rely on OpenStreetMap data in their route planning services and for other analysis needs. OSM data appears in the driver or rider application or powers backend operations forridesharing companiesand related services.[135]In 2022,Grabcompleted a migration from Google Maps andHere Mapsto an in-house, OSM-based navigation solution, reducing trip times by about 90 seconds.[79]In 2024,Olaintroduced a mapping platform partly based on OSM data.[136] In 2019, owners ofTeslacars found that the Smart Summonautomatic valet parkingfeature withinTesla Autopilotrelied on OSM's coverage of parking lot details.[137]Webotsuses OSM data to simulate realistic surroundings for autonomous vehicles.[138] Humanitarianaid agenciesuse OSM data both proactively and reactively. OSM's road and building coverage allow them to discover patterns of disease outbreaks and target interventions such asantimalarial medicationstoward remote villages. After a disaster occurs, they produce large-format printed maps and downloadable maps forGPS tracking unitsfor aid workers to use in the field.[140] The2010 Haiti earthquakeestablished a model for non-governmental organisations (NGOs) to collaborate with international organisations. OpenStreetMap and Crisis Commons volunteers used available satellite imagery to map the roads, buildings and refugee camps ofPort-au-Princein just two days, building "the most complete digital map of Haiti's roads".[141][142]The resulting data and maps have been used by several organisations providing relief aid, such as theWorld Bank, the European Commission Joint Research Centre, theOffice for the Coordination of Humanitarian Affairs,UNOSATand others.[143] After Haiti, the OpenStreetMap community continued mapping to support humanitarian organisations for various crises and disasters. After theNorthern Mali conflict(January 2013),Typhoon Haiyan[144][145]in the Philippines (November 2013), and theEbola virus epidemic in West Africa(March 2014), the OpenStreetMap community in association with the NGO Humanitarian OpenStreetMap Team (HOT) has shown it can play a significant role in supporting humanitarian organisations.[82] OSM is a map data source for manylocation-based gamesthat require broad coverage of local details such as streets and buildings. One of the earliest such games wasHasbro's short-livedMonopoly City Streets(2009), which offered a choice between OSM andGoogle Mapsas the playing board.[146][147]Battlefield 4(2013) used a customized OSM-basedMapboxmap in its leaderboards.[148]In 2013, Ballardia shut down testing ofWorld of the Living Dead: Resurrection, because too many players attempted to use the Google Maps–based game, then relaunched it after switching to OSM, which could handle thousands of players.[149] Flight simulatorscombine OSM's coverage of roads and structures with other sources of natural environment data, acting as sophisticated 3D map renderers, in order to add realism to the ground below.X-Plane10(2011) replacedTIGERandVMAP0with OSM for roads, railways, and some bodies of water.[150][151]Microsoft Flight Simulator(2020) introduced software-generated building models based in part on OSM data.[66]In 2020,FlightGear Flight Simulatorofficially integrated OSM buildings and roads into the official scenery.[152] City-building gamesuse a subset of OSM data as a base layer to take advantage of the player's familiarity with their surroundings. 
In NIMBY Rails (2021), the player develops a railway network that coexists with real-world roads and bodies of water.[153] In Jutsu Games' Infection Free Zone (2024), the player builds fortifications amid a post-apocalyptic world based on OSM streets and buildings.[154] Alternate reality games rely on OSM data to determine where rewards and other elements of the game spawn in the player's presence, such as the 'portals' in Ingress, the 'PokéStops' and 'Pokémon Gyms' in Pokémon Go, and the 'tappables' in Minecraft Earth (2019).[155] In 2017, when Niantic migrated its augmented reality titles, including Ingress and Pokémon Go, from Google Maps to OSM, the overworld maps in these games initially became more detailed for some players but completely blank for others, due to OSM's uneven geographic coverage at the time.[156][157] In the first six weeks after launching in South Korea, Pokémon Go produced a seventeenfold spike in daily OSM contributions within the country.[158] In 2024, Niantic migrated its titles to Overture Maps data, which incorporates some OSM data.[159] OSM and projects based on it have been recognized for their contributions to design and the public good. According to the Open Data Institute, OSM is one of the most successful collaboratively maintained open datasets in existence.[166] A 2020 research report by Accenture estimated the total replacement value of the OSM database, the value of OSM software development effort, and maintenance overhead at $1.67 billion,[167] roughly equivalent to the value of the Linux kernel in 2008.[168] Several startups have turned OSM-based software as a service into a business model, including Carto, Mapbox,[169] MapTiler, and Mapzen. The Overture Maps Foundation incorporates OSM data in some of its GIS layers.[170] Several open collaborative mapping projects are modeled after OSM and rely on OSM software. OpenHistoricalMap is a world historical map that tracks the evolution of human geography over time, from prehistory to the present day. OpenGeofiction focuses on fantasy cartography and worldbuilding. The OSM community sees these projects as complements for aspects of geography that are out of scope for OSM.[171][172][19]
https://en.wikipedia.org/wiki/Humanitarian_OpenStreetMap_Team
Civic intelligence is an "intelligence" that is devoted to addressing public or civic issues. The term has been applied to individuals and, more commonly, to collective bodies, like organizations, institutions, or societies.[1] Civic intelligence can be used in politics by groups of people who are trying to achieve a common goal.[2] Historical social movements and political engagement may have drawn in part on collective thinking and civic intelligence. Education, in its multiple forms, has helped some countries to increase political awareness and engagement by amplifying the civic intelligence of collaborative groups.[3] Increasingly, artificial intelligence and social media, modern innovations of society, are being used by many political entities and societies to tackle problems in politics, the economy, and society at large. Like the term social capital, civic intelligence has been used independently by several people since the beginning of the 20th century. Although there has been little or no direct contact between the various authors, the different meanings associated with the term are generally complementary to each other. The first usage identified was made in 1902 by Samuel T. Dutton,[4] Superintendent of Teachers College Schools, on the occasion of the dedication of the Horace Mann School, when he noted that "increasing civic intelligence" is a "true purpose of education in this country." More recently, in 1985, David Matthews, president of the Kettering Foundation, wrote an article entitled Civic Intelligence in which he discussed the decline of civic engagement in the United States. A still more recent version is Douglas Schuler's "Cultivating Society's Civic Intelligence: Patterns for a New 'World Brain'".[5] In Schuler's version, civic intelligence is applied to groups of people because that is the level where public opinion is formed and decisions are made or at least influenced. It applies to groups, formal or informal, who are working towards civic goals such as environmental amelioration or non-violence among people. This version is related to many other concepts that are currently[when?] receiving a great deal of attention, including collective intelligence, civic engagement, participatory democracy, emergence, new social movements, collaborative problem-solving, and Web 2.0. When Schuler developed the Liberating Voices[6] pattern language for communication revolution, he made civic intelligence the first of 136 patterns.[7] Civic intelligence is similar[1] to John Dewey's "cooperative intelligence" or the "democratic faith" that asserts that "each individual has something to contribute, and the value of each contribution can be assessed only as it entered into the final pooled intelligence constituted by the contributions of all".[8] Civic intelligence is implicitly invoked by the subtitle of Jared Diamond's 2004 book, Collapse: Why Some Societies Choose to Fail or Succeed,[9] and by the question posed in Thomas Homer-Dixon's 2000 book Ingenuity Gap: How Can We Solve the Problems of the Future?,[10] which suggests civic intelligence will be needed if humankind is to stave off problems related to climate change and other potentially catastrophic occurrences. With these meanings, civic intelligence is less a phenomenon to be studied and more a dynamic process or tool to be shaped and wielded by individuals or groups.[1] Civic intelligence, according to this logic, can affect how society is built and how groups or individuals can utilize it as a tool for collective thinking or action.
Civic intelligence sometimes involves large groups of people, but other times it involves only a few individuals. Civic intelligence may be more evident in smaller groups than in larger ones because of their more intimate interactions and group dynamics.[2] Robert Putnam, who is largely responsible for the widespread consideration of "social capital", has written that social innovation often occurs in response to social needs.[11] This resonates with George Basalla's findings related to technological innovation,[12] which simultaneously facilitates and responds to social innovation. The concept of "civic intelligence," an example of social innovation, is a response to a perceived need. The reception it receives will be in proportion to how strongly others perceive that need. Thus, social needs serve as causes for social innovation and collective civic intelligence.[12] Civic intelligence focuses on the role of civil society and the public for several reasons. At a minimum, the public's input is necessary to ratify important decisions made by business or government. Beyond that, however, civil society has originated and provided the leadership for a number of vital social movements. Any inquiry into the nature of civic intelligence is also collaborative and participatory. Civic intelligence is inherently multi-disciplinary and open-ended. Cognitive scientists address some of these issues in the study of "distributed cognition." Social scientists study aspects of it in their work on group dynamics, democratic theory, social systems, and many other subfields. The concept is important in business literature ("organizational learning") and in the study of "epistemic communities" (scientific research communities, notably). Politically, civic intelligence brings people together to form collective thoughts or ideas to solve political problems. Historically, Jane Addams was an activist who reformed Chicago neighborhoods by housing immigrants, hosting lectures on current issues, building the first public playground, and conducting research on the cultural and political life of the communities around her.[2] She is just one example of how civic intelligence can influence society. Historical movements in America such as those related to human rights, the environment, and economic equity have been started by ordinary citizens, not by governments or businesses.[2] To achieve changes in these areas, people of different backgrounds come together to solve both local and global issues. Another example of civic intelligence is how governments came together in Paris in 2015 to formulate a plan to curb greenhouse gas emissions and alleviate some effects of global warming.[2] Politically, no atlas of civic intelligence exists, yet the quantity and quality of examples worldwide are enormous. While a comprehensive "atlas" is not necessarily a goal, people are currently developing online resources to record at least some small percentage of these efforts. The rise in the number of transnational advocacy networks,[13] the coordinated worldwide demonstrations protesting the invasion of Iraq,[14] and the World Social Forums that provided "free space" for thousands of activists from around the world[15] all support the idea that civic intelligence is growing. Although smaller in scope, efforts like the work of the Friends of Nature group to create a "Green Map" of Beijing are also notable. 
Political engagement of citizens sometimes comes from the collective intelligence of engaging local communities through political education.[2] Traditional examples of political engagement include voting, discussing issues with neighbors and friends, working for a political campaign, attending rallies, and forming political action groups. Today, social and economic scientists such as Jason Corburn and Elinor Ostrom continue to analyze how people come together to achieve collective goals such as sharing natural resources, combating diseases, formulating political action plans, and preserving the natural environment.[16] One study suggests that it might be helpful for educational institutions such as colleges or even high schools to educate students on the importance of civic intelligence in politics so that better choices can be made when tackling societal issues through a collective citizen intelligence.[17] Harry C. Boyte, in an article he wrote, argues that schools serve as a sort of "free space" for students to take part in community engagement efforts as described above.[17] Schools, according to Boyte, empower people to take action in their communities, thus rallying an increasing number of people to learn about politics and form political opinions. He argues that this chain reaction is what then leads to civic intelligence and the collective effort to solve specific problems in local communities. One study shows that citizens who are more informed and more attentive to the world of politics around them are more politically engaged at both the local and national levels.[18] One study, aggregating the results of 70 articles about political awareness, finds that political awareness is important in the onset of citizen participation and the voicing of opinion.[18] In recent years, there has been a shift in how citizens stay informed and become attentive to the political world. Although traditional methods of political engagement are still used by most individuals, particularly older people, there is a trend toward social media and the internet in terms of political engagement and civic intelligence.[19] Civic intelligence is involved in economic policymaking and decision-making around the world. According to one article, community members in Olympia, Washington worked with local administrations and experts on affordable housing improvements in the region.[20] This collaboration utilized the tool of civic intelligence. In addition, the article argues that nonprofit organizations can facilitate local citizen participation in discussions about economic issues such as public housing and wage rates.[20] In Europe, according to the RSA's report on the Citizens' Economic Council, democratic participation and discussion have positive impacts on economic issues in society such as poverty, housing, the wage gap, healthcare, education, and food availability. The report emphasizes citizen empowerment, clarity and communication, and building legitimacy around economic development.[21] The RSA's economic council is working towards advancing more crowdsourced economic ideas and increasing the expertise of fellows who will advise policymakers on engaging citizens in the economy. 
The report argues that increasing citizen engagement makes governments more legitimate through increased public confidence, stakeholder engagement, and government political commitment.[21] Ideas such as creating citizen juries, citizen reference panels, and the devolution of policymaking are explored in more depth in the report. The RSA sees collective civic intelligence as a tool for improving economic conditions in society.[21] Globally, civic participation and intelligence interact with the needs of businesses and governments. One study finds that increased local economic concentration is correlated with decreased levels of civic engagement because citizens' voices are drowned out by the needs of corporations.[22] In this situation, governments overvalue the needs of big corporations compared to the needs of groups of individual citizens. This study points out that corporations can negatively impact civic intelligence if citizens are not given enough freedom to voice their opinions regarding economic issues. The study shows that the US has faced civic disengagement in the past three decades due to the monopolization of public opinion by corporations.[22] On the other hand, if a government supports local capitalism and civic engagement equally, there might be beneficial socioeconomic outcomes such as more income equality, less poverty, and less unemployment.[23] The article adds that in a period of global development, local forces of civic intelligence and innovation will likely benefit citizens' lives and distinguish one region from another in terms of socioeconomic status.[23] The concept of civic health is introduced by one study as a key component of the wellbeing of local and national economies. According to the article, civic engagement can increase citizens' professional employment skills, foster a sense of trust in communities, and allow a greater amount of community investment from citizens themselves.[24] One recent prominent example of civic intelligence in the modern world is the creation and improvement of artificial intelligence. According to one article, AI enables people to propose solutions, communicate with each other more effectively, obtain data for planning, and tackle societal issues from across the world.[25] In 2018, at the second annual AI for Good Global Summit, industry leaders, policymakers, research scientists, and AI enthusiasts came together to formulate plans and ideas for using artificial intelligence to solve modern societal issues, including political problems in countries of different backgrounds.[25] The summit proposed ideas regarding how AI can benefit safety, health, and city governance. The article notes that in order for artificial intelligence to be used effectively in society, researchers, policymakers, community members, and technology companies all need to work together to improve it. By this logic, it takes coordinated civic intelligence to make artificial intelligence work. There are some shortcomings to artificial intelligence. According to one report, AI is increasingly being used by governments to limit citizens' civil freedoms through authoritarian rule and restrictive regulations.[26] Powerful governments use technology and automated systems to dismiss civic intelligence. 
There is also concern about losing civic intelligence and human jobs if AI were to replace many sectors of the economy and of political landscapes around the world.[27] AI also carries the dangerous possibility of getting out of control and replicating destructive behaviors that might be detrimental to society.[27] However, according to one article, if world communities work together to form international standards, improve AI regulation policies, and educate people about AI, political and civil freedom might be more easily achieved.[28] Recent shifts towards modern technology, social media, and the internet influence how civic intelligence interacts with politics in the world.[29] New technologies expand the reach of data and information to more people, and citizens can engage with each other or with the government more openly through the internet.[30] Civic intelligence can take the form of an increased presence among groups of individuals, and the speed at which it emerges is intensified as well.[29] The internet and social media play roles in civic intelligence. Social media platforms like Facebook, Twitter, and Reddit have become popular sites for political discovery, and many people, especially younger adults, choose to engage with politics online.[19] There are positive effects of social media on civic engagement. According to one article, social media has connected people in unprecedented ways. People now find it easier to form democratic movements, engage with each other and with politicians, voice opinions, and take action virtually.[3] Social media has been incorporated into people's lives, and many people obtain news and other political ideas from online sources.[3] One study explains that social media increases political participation through more direct forms of democracy and a bottom-up approach to solving political, social, or economic issues.[29] The idea is that social media will lead people to participate politically in novel ways beyond the traditional actions of voting, attending rallies, and supporting candidates in person. The study argues that this leads to new ways of enacting civic intelligence and political participation.[29] Thus, the study points out that social media serves to gather civic intelligence in one place: the internet. A third article, featuring an Italian case study, finds that civic collaboration is important in helping a healthy government function in both local and national communities.[30] The article explains that there seem to be more individualized political actions and efforts when people choose to innovate new ways of political participation. Thus, one group's actions of political engagement might be entirely different from those of another group. However, social media also has some negative effects on civic intelligence in politics and economics. One study explains that even though social media might have increased direct citizen participation in politics and economics, it might have also opened more room for misinformation and echo chambers.[31] More specifically, trolling, the spread of false political information, the theft of personal data, and the use of bots to spread propaganda are all examples of negative consequences of the internet and social media.[3] According to the article, these negative results harm civic intelligence because citizens have trouble distinguishing lies from truths in the political arena. 
Thus, civic intelligence may be misleading or vanish altogether if a group relies on false sources or misleading information.[3] A second article points out that a filter bubble is created through group isolation as a result of group polarization.[31] False information and the deliberate deception of political agendas play a major role in forming citizens' filter bubbles. People are conditioned to believe what they want to believe, so citizens who focus on one-sided political news might form their own filter bubbles.[31] Finally, a research journal found that Twitter increases users' political knowledge while Facebook decreases it.[19] The journal points out that different social media platforms can affect users differently in terms of political awareness and civic intelligence. Social media might therefore have uncertain political effects on civic intelligence.[19]
https://en.wikipedia.org/wiki/Civic_intelligence
Group decision-making(also known ascollaborative decision-makingorcollective decision-making) is a situation faced when individuals collectively make a choice from the alternatives before them. The decision is then no longer attributable to any single individual who is a member of the group. This is because all the individuals andsocial groupprocesses such as social influence contribute to the outcome. The decisions made by groups are often different from those made by individuals. In workplace settings, collaborative decision-making is one of the most successful models to generate buy-in from other stakeholders, build consensus, and encourage creativity. According to the idea ofsynergy, decisions made collectively also tend to be more effective than decisions made by a single individual. In this vein, certain collaborative arrangements have the potential to generate better net performance outcomes than individuals acting on their own.[1]Under normal everyday conditions, collaborative or group decision-making would often be preferred and would generate more benefits than individual decision-making when there is the time for proper deliberation, discussion, and dialogue.[2]This can be achieved through the use of committee, teams, groups, partnerships, or other collaborative social processes. However, in some cases, there can also be drawbacks to this method. In extreme emergencies or crisis situations, other forms of decision-making might be preferable as emergency actions may need to be taken more quickly with less time for deliberation.[2]On the other hand, additional considerations must also be taken into account when evaluating the appropriateness of a decision-making framework. For example, the possibility ofgroup polarizationalso can occur at times, leading some groups to make more extreme decisions than those of its individual members, in the direction of the individual inclinations.[3]There are also other examples where the decisions made by a group are flawed, such as theBay of Pigs invasion, the incident on which thegroupthinkmodel of group decision-making is based.[4] Factors that impact other socialgroup behavioursalso affect group decisions. For example, groups high incohesion, in combination with other antecedent conditions (e.g. ideological homogeneity and insulation from dissenting opinions) have been noted to have a negative effect on group decision-making and hence on group effectiveness.[4]Moreover, when individuals make decisions as part of a group, there is a tendency to exhibit a bias towards discussing shared information (i.e.shared information bias), as opposed to unshared information. Thesocial identity approachsuggests a more general approach to group decision-making than the popular groupthink model, which is a narrow look at situations where group and other decision-making is flawed. Social identity analysis suggests that the changes which occur during collective decision-making are part of rational psychological processes which build on the essence of the group in ways that are psychologically efficient, grounded in the social reality experienced by members of the group, and have the potential to have a positive impact on society.[5] Decision-making in groups is sometimes examined separately as process and outcome. Process refers to the group interactions. Some relevant ideas include coalitions among participants as well as influence and persuasion. 
The use of politics is often judged negatively, but it is a useful way to approach problems when preferences among actors are in conflict, when dependencies exist that cannot be avoided, when there are no super-ordinate authorities, and when the technical or scientific merit of the options is ambiguous. In addition to the different processes involved in making decisions, groupdecision support systems(GDSSs) may have different decision rules. A decision rule is the GDSS protocol a group uses to choose amongscenario planningalternatives. Plurality and dictatorship are less desirable as decision rules because they do not require the involvement of the broader group to determine a choice. Thus, they do not engender commitment to the course of action chosen. An absence of commitment from individuals in the group can be problematic during the implementation phase of a decision. There are no perfect decision-making rules. Depending on how the rules are implemented in practice and the situation, all of these can lead to situations where either no decision is made, or to situations where decisions made are inconsistent with one another over time. Sometimes, groups may have established and clearly defined standards for making decisions, such as bylaws and statutes. However, it is often the case that the decision-making process is less formal, and might even be implicitly accepted. Social decision schemes are the methods used by a group to combine individual responses to come up with a single group decision. There are a number of these schemes, but the following are the most common: There are strengths and weaknesses to each of these social decision schemes. Delegation saves time and is a good method for less important decisions, but ignored members might react negatively. Averaging responses will cancel out extreme opinions, but the final decision might disappoint many members. Plurality is the most consistent scheme when superior decisions are being made, and it involves the least amount of effort.[6]Voting, however, may lead to members feeling alienated when they lose a close vote, or to internal politics, or to conformity to other opinions.[7]Consensus schemes involve members more deeply, and tend to lead to high levels of commitment. But, it might be difficult for the group to reach such decisions.[8] Groups have many advantages and disadvantages when making decisions. Groups, by definition, are composed of two or more people, and for this reason naturally have access to more information and have a greater capacity to process this information.[9]However, they also present a number of liabilities to decision-making, such as requiring more time to make choices and by consequence rushing to a low-quality agreement in order to be timely. Some issues are also so simple that a group decision-making process leads to too many cooks in the kitchen: for such trivial issues, having a group make the decision is overkill and can lead to failure. Because groups offer both advantages and disadvantages in making decisions,Victor Vroomdeveloped a normative model of decision-making[10]that suggests different decision-making methods should be selected depending on the situation. In this model, Vroom identified five different decision-making processes.[9] The idea of using computerized support systems is discussed byJames Reasonunder the heading of intelligent decision support systems in his work on the topic of human error. 
James Reason notes that events subsequent to the Three Mile Island accident have not inspired great confidence in the efficacy of some of these methods. In the Davis-Besse accident, for example, both independent safety parameter display systems were out of action before and during the event.[11] Decision-making software is essential for autonomous robots and for different forms of active decision support for industrial operators, designers and managers. Due to the large number of considerations involved in many decisions, computer-based decision support systems (DSS) have been developed to assist decision-makers in considering the implications of various courses of thinking. They can help reduce the risk of human errors. DSSs which try to realize some human-cognitive decision-making functions are called Intelligent Decision Support Systems (IDSS).[12] An active and intelligent DSS is also an important tool for the design of complex engineering systems and the management of large technological and business projects.[13] With age, cognitive function and decision-making ability decline. Generally speaking, younger groups benefit more from team decision-making; with increasing age, the gap between the team's decision and the best available choice widens. Past experience can influence future decisions. It can be concluded that when a decision produces positive results, people are more likely to make decisions in similar ways in similar situations. On the other hand, people tend to avoid repeating the same mistakes, because future decisions based on past experience are not necessarily the best decisions. Cognitive bias is a phenomenon in which people distort their perceptions of themselves, others, or the external environment for personal or situational reasons. In the decision-making process, cognitive bias influences people by making them over-reliant on, or overly trusting of, expected observations and prior knowledge, while discarding information or observations that are considered uncertain, rather than weighing a broader range of factors.[14] Groups have greater informational and motivational resources, and therefore have the potential to outperform individuals. However, they do not always reach this potential. Groups often lack proper communication skills. On the sender side this means that group members may lack the skills needed to express themselves clearly. On the receiver side this means that miscommunication can result from information processing limitations and the faulty listening habits of human beings. In cases where an individual controls the group, it may prevent others from contributing meaningfully.[15] It is also the case that groups sometimes use discussion to avoid rather than make a decision. Avoidance tactics include the following:[9] Two fundamental "laws" that groups all too often obey: Research using the hidden profiles task shows that lack of information sharing is a common problem in group decision making. This happens when certain members of the group have information that is not known by all of the members in the group. If the members were all to combine their information, they would be more likely to make an optimal decision. But if people do not share all of their information, the group may make a sub-optimal decision. 
Stasser and Titus have shown that partial sharing of information can lead to a wrong decision.[16]And Lu and Yuan found that groups were eight times more likely to correctly answer a problem when all of the group members had all of the information rather than when some information was only known by select group members.[17] Individuals in a group decision-making setting are often functioning under substantial cognitive demands. As a result, cognitive and motivationalbiasescan often affect group decision-making adversely. According to Forsyth,[9]there are three categories of potential biases that a group can fall victim to when engaging in decision-making: The misuse, abuse and/or inappropriate use of information, including: Overlooking useful information. This can include: Relying too heavily onheuristicsthat over-simplify complex decisions. This can include:
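As an illustration of the social decision schemes described above, here is a minimal Python sketch (not from any cited source) of how individual responses might be combined under plurality, averaging, and consensus rules. The candidate options, the numeric estimates, and the two-thirds consensus threshold are hypothetical choices made only for this example.

# A minimal sketch (not from the article) of three social decision schemes:
# plurality, averaging, and consensus. Options and thresholds are illustrative.
from collections import Counter
from statistics import mean

def plurality(votes):
    """Pick the option named most often by group members."""
    winner, _ = Counter(votes).most_common(1)[0]
    return winner

def averaging(estimates):
    """Combine numeric judgments by taking their mean, damping extreme opinions."""
    return mean(estimates)

def consensus(votes, threshold=2 / 3):
    """Return an option only if a supermajority of members agrees on it."""
    winner, n = Counter(votes).most_common(1)[0]
    return winner if n / len(votes) >= threshold else None  # no decision yet

if __name__ == "__main__":
    votes = ["option A", "option B", "option A", "option C", "option A"]
    print(plurality(votes))          # -> "option A"
    print(averaging([10, 12, 30]))   # -> 17.33..., extremes are cancelled out
    print(consensus(votes))          # -> None: 3/5 falls short of 2/3

Run on the sample votes, the plurality rule produces a winner immediately with little effort, while the consensus rule returns no decision because agreement falls short of the threshold, mirroring the trade-off between speed and member commitment noted earlier.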
https://en.wikipedia.org/wiki/Collective_decision-making
Collective effervescenceis a sociological concept coined byÉmile Durkheim. According to Durkheim, a community or society may at times come together and simultaneously communicate the same thought and participate in the same action. Such an event then causes collective effervescence which excites individuals and serves to unify the group.[1] Émile Durkheim'stheory of religion, as presented in his 1912 volumeElementary Forms of Religious Life, is rooted in the concept of collective effervescence. Durkheim argues that the universal religiousdichotomy of profane and sacredresults from the lives of these tribe members: most of their life is spent performing menial tasks such as hunting and gathering, which are profane. However, during the rare occasions when the entire tribe comes together, a sense of heightened energy and unity, "collective effervescence," emerges. This intense communal experience transforms certain physical objects or individuals into sacred symbols, as the energy of the gathering is projected onto them. The force is thus associated with thetotemwhich is the symbol of the clan, mentioned by Durkheim in his study of "elementary forms" of religion in Aboriginal societies. Because it provides the tribe's name, the symbol is present during the gathering of the clan. Through its presence in these gatherings, the totem comes to represent both the scene and the strongly felt emotion, and thus becomes a collective representation of the group.[2] For Durkheim, religion is a fundamentally social phenomenon. The beliefs and practices of the sacred are a method ofsocial organization. This explanation is detailed inElementary Forms"Book 2/The Elementary Beliefs", chapter 7, "Origins of These Beliefs: Origin of the Idea of theTotemicPrinciple orMana". According to Durkheim, "god and society are one of the same…the god of the clan…can be none other than the clan itself, but the clan transfigured and imagined in the physical form of a plant or animal that serves as a totem."[3] The group members experience a feeling of a loss of individuality and unity with the gods and according to Durkheim, thus with the group.[4]
https://en.wikipedia.org/wiki/Collective_effervescence
Collective memoryis the shared pool of memories, knowledge and information of asocial groupthat is significantly associated with the group's identity.[1][2][3]The English phrase "collective memory" and the equivalent French phrase "la mémoire collective" appeared in the second half of the nineteenth century. The philosopher and sociologistMaurice Halbwachsanalyzed and advanced the concept of the collective memory in the bookLes cadres sociaux de la mémoire(1925).[4] Collective memory can be constructed, shared, and passed on by large and small social groups. Examples of these groups can include nations, generations, communities, among others.[1] Collective memory has been a topic of interest and research across a number of disciplines, includingpsychology,sociology,history,philosophy, andanthropology.[5] Collective memory has been conceptualized in several ways and proposed to have certain attributes. For instance, collective memory can refer to a shared body of knowledge (e.g., memory of a nation's past leaders or presidents);[6][7][8]the image, narrative, values and ideas of a social group; or the continuous process by which collective memories of events change.[1] The difference between history and collective memory is best understood when comparing the aims and characteristics of each. A goal of history broadly is to provide a comprehensive, accurate, and unbiased portrayal of past events. This often includes the representation and comparison of multiple perspectives and the integration of these perspectives and details to provide a complete and accurate account. In contrast, collective memory focuses on a single perspective, for instance, the perspective of one social group, nation, or community. Consequently, collective memory represents past events as associated with the values, narratives and biases specific to that group.[9][1] Studies have found that people from different nations can have major differences in their recollections of the past. In one study where American and Russian students were instructed to recall significant events from World War II and these lists of events were compared, the majority of events recalled by the American and Russian students were not shared.[10]Differences in the events recalled and emotional views towards the Civil War, World War II and the Iraq War have also been found in a study comparing collective memory between generations of Americans.[11] The concept of collective memory, initially developed byHalbwachs, has been explored and expanded from various angles – a few of these are introduced below. James E. Young has introduced the notion of 'collected memory' (opposed to collective memory), marking memory's inherently fragmented, collected and individual character, while Jan Assmann[12]develops the notion of 'communicative memory', a variety of collective memory based on everyday communication. This form of memory resembles the exchanges in oral cultures or the memories collected (and made collective) throughoral tradition. As another subform of collective memories, Assmann mentions forms detached from the everyday; they can be particular materialized and fixed points as, e.g. texts and monuments.[13] The theory of collective memory was also discussed by former Hiroshima resident and atomic-bomb survivor,Kiyoshi Tanimoto, in a tour of the United States as an attempt to rally support and funding for the reconstruction of his Memorial Methodist Church in Hiroshima. 
He theorized that the use of the atomic bomb had forever added to the world's collective memory and would serve in the future as a warning against such devices. SeeJohn Hersey's 1946 bookHiroshima.[14] HistorianGuy Beiner(1968- ), an authority on memory and the history of Ireland, has criticized the unreflective use of the adjective "collective" in many studies of memory: The problem is with crude concepts ofcollectivity, which assume a homogeneity that is rarely, if ever, present, and maintain that, since memory is constructed, it is entirely subject to the manipulations of those invested in its maintenance, denying that there can be limits to the malleability of memory or to the extent to which artificial constructions of memory can be inculcated. In practice, the construction of a completely collective memory is at best an aspiration of politicians, which is never entirely fulfilled and is always subject to contestations.[15] In its place, Beiner has promoted the term "social memory"[16]and has also demonstrated its limitations by developing a related concept of "social forgetting".[17] HistorianDavid Riefftakes issue with the term "collective memory", distinguishing between memories of people who were actually alive during the events in question, and people who only know about them from culture or media. Rieff writes in opposition toGeorge Santayana's aphorism "those who cannot remember the past are condemned to repeat it", pointing out that strong cultural emphasis on certain historical events (often wrongs against the group) can prevent resolution of armed conflicts, especially when the conflict has been previously fought to a draw.[18]The sociologist David Leupold draws attention to the problem of structural nationalism inherent in the notion of collective memory, arguing in favor of "emancipating the notion of collective memory from being subjected to the national collective" by employing amulti-collective perspectivethat highlights the mutual interaction of other memory collectives that form around generational belonging, family, locality or socio-political world-views.[19] Pierre Lévyargues that the phenomenon of human collective intelligence undergoes a profound shift with the arrival of theinternetparadigm, as it allows the vast majority of humanity to access and modify a common shared online collective memory.[citation needed] Though traditionally a topic studied in the humanities, collective memory has become an area of interest in psychology. Common approaches taken in psychology to study collective memory have included investigating the cognitive mechanisms involved in the formation and transmission of collective memory; and comparing the social representations of history between social groups.[1][20][21][22][23][24] Research on collective memory has compared how different social groups form their own representations of history and how such collective memories can impact ideals, values, behaviors and vice versa. Research has proposed that groups form social representations of history in order to develop their own social identity, as well as to evaluate the past, often in order to prevent past patterns of conflict and error from being repeated. Research has also compared differences in recollections of historical events, such as the examples given earlier when comparing history and collective memory.[20] Differences in collective memories between social groups, such as nations or states, have been attributed to collective narcissism and egocentric/ethnocentric bias. 
In one related study, participants from 35 countries were asked to estimate their country's contribution to world history as a percentage from 0% to 100%; evidence for collective narcissism was found, as respondents in many countries exaggerated their country's contribution. In another study, where Americans from 50 states were asked similar questions regarding their state's contribution to the history of the United States, patterns of overestimation and collective narcissism were also found.[25][26][27] Certain cognitive mechanisms involved during group recall, and the interactions between these mechanisms, have been suggested to contribute to the formation of collective memory. Below are some of the mechanisms involved when groups of individuals recall collaboratively.[28][20][24] When groups collaborate to recall information, they experience collaborative inhibition, a decrease in performance compared to the pooled memory recall of an equal number of individuals. Weldon and Bellinger (1997) and Basden, Basden, Bryner, and Thomas (1997) provided evidence that retrieval interference underlies collaborative inhibition, as hearing other members' thoughts and discussion about the topic at hand interferes with one's own organization of thoughts and impairs memory.[29][30] The main theoretical account for collaborative inhibition is retrieval disruption. During the encoding of information, individuals form their own idiosyncratic organization of the information. This organization is later used when trying to recall the information. In a group setting, as members exchange information, the information recalled by others disrupts the idiosyncratic organization one had developed. As each member's organization is disrupted, the group recalls less information than the pooled recall of an equal number of participants who recalled individually.[31] Despite the problem of collaborative inhibition, working in groups may benefit an individual's memory in the long run, as group discussion exposes one to many different ideas over time. Working alone initially, prior to collaboration, seems to be the optimal way to increase memory. Early speculation about collaborative inhibition included explanations such as diminished personal accountability, social loafing, and the diffusion of responsibility; however, retrieval disruption remains the leading explanation. Studies have traced collaborative inhibition to sources other than social loafing: offering a monetary incentive, for example, failed to produce an increase in recall for groups.[29] Further evidence from this study suggests that something other than social loafing is at work, as reducing evaluation apprehension – the focus on one's performance amongst other people – assisted individuals' memories but did not produce a gain in memory for groups. Personal accountability – drawing attention to one's own performance and contribution in a group – also did not reduce collaborative inhibition. Therefore, the interference involved in group recall cannot be overcome simply by raising members' motivation.[32] Information exchange among group members often helps individuals to remember things that they would not have remembered had they been working alone. In other words, the information provided by person A may 'cue' memories in person B. This results in enhanced recall. 
During group recall, an individual might not remember as much as they would on their own, because their retrieval cues may be disrupted by other team members. Nevertheless, there are also benefits: cues provided by other members can help an individual recall specific details despite this disruption. Cross-cueing thus plays a role in the formation of group recall (Barber, 2011).[33] In a 2010 study, researchers examined how individuals remembered a bombing that occurred in 1980 at Bologna central station in Italy; the station clock was later set permanently to 10:25 to commemorate the attack (de Vito et al. 2009).[34] When asked whether the clock had remained functioning after the bombing, the individuals said it had not, when in fact the opposite was true (Legge, 2018). There have been many instances in history where people create a false memory. A 2003 study conducted at Claremont Graduate University demonstrated that memories formed during a stressful event and the actual details of the event are handled differently by the brain.[35] Other instances of false memory occur when people remember a detail of an object that is not actually there, or misremember how someone looked at a crime scene (Legge, 2018). It is possible for people to share the same false memories; some call this the "Mandela effect". The name comes from the South African civil rights leader Nelson Mandela, whom many people falsely believed to be dead (Legge, 2018). The Pandora's Box experiment suggests that language further complicates the formation of false memories: language shapes imaginative experiences and can make it harder for people to retain accurate information (Jablonka, 2017).[36] Compared to recalling individually, group members can provide opportunities for error pruning during recall, detecting errors that would otherwise go uncorrected by an individual.[37] Group settings can also provide opportunities for exposure to erroneous information that may be mistaken as correct or previously studied.[38] Listening to group members recall previously encoded information can enhance memory, as it provides a second exposure to the information.[31] Studies have shown that information forgotten and excluded during group recall can promote the forgetting of related information, compared with information unrelated to what was excluded. Selective forgetting has been suggested to be a critical mechanism in the formation of collective memories, determining which details are ultimately included and excluded by group members. This mechanism has been studied using the socially shared retrieval-induced forgetting paradigm, a variation of the retrieval induced forgetting method used with individuals.[39][40][41] The brain contains several regions important for memory, including the cerebral cortex, the fornix, and the structures they contain. These structures are required for acquiring new information, and damage to any of them can produce anterograde or retrograde amnesia (Anastasio et al., p. 26, 2012).[42] Amnesia can result from anything that disrupts memory or affects a person psychologically. Over time, memory loss becomes a natural part of amnesia, and retrograde amnesia can affect memory of recent or more distant events.[43] Bottom-up approaches to the formation of collective memories investigate how cognitive-level phenomena allow people to synchronize their memories through conversational remembering. 
Due to the malleability of human memory, talking with one another about the past results in memory changes that increase the similarity between the interactional partners' memories.[39] When these dyadic interactions occur in a social network, one can understand how large communities converge on a similar memory of the past.[44] Research on larger interactions shows that collective memory in larger social networks can emerge due to cognitive mechanisms involved in small group interactions.[44] With the availability of online data, such as social media and social network data, and with developments in natural language processing and information retrieval, it has become possible to study how online users refer to the past and what they focus on. In an early study from 2010,[45] researchers extracted absolute year references from large collections of news articles retrieved for queries denoting particular countries. This allowed them to plot so-called memory curves, which demonstrate which years are particularly strongly remembered in the context of different countries (commonly, memory curves have an exponential shape, with occasional peaks relating to the commemoration of important past events) and how attention to more distant years declines in the news. Based on topic modelling and analysis, they then detected the major topics portraying how particular years are remembered. Beyond news, Wikipedia has also been a target of analysis.[46][47] Viewership statistics of Wikipedia articles on aircraft crashes were analyzed to study the relation between recent events and past events, particularly for understanding memory-triggering patterns.[48] Other studies have analyzed collective memory in social networks, for example an investigation of over 2 million history-related tweets (both quantitatively and qualitatively) to uncover their characteristics and the ways in which history-related content is disseminated in social networks.[49] Hashtags, as well as tweets, can be classified into several types. The study of digital memorialization, which encompasses the ways in which social and collective memory have shifted after the digital turn, has grown substantially in response to the rising proliferation of memorial content not only on the internet, but also to the increased use of digital formats and tools in heritage institutions, classrooms, and among individual users worldwide.
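As a rough illustration of the year-extraction approach described above, the following Python sketch counts absolute year references in a toy corpus to produce a simple memory curve. The regular expression, the year range, and the sample sentences are assumptions made for illustration and are not the method of the cited study.

# A minimal sketch (not the cited study's code) of extracting year references
# from text and aggregating them into a "memory curve". Corpus and regex are
# illustrative assumptions only.
import re
from collections import Counter

YEAR_PATTERN = re.compile(r"\b(1[5-9]\d{2}|20\d{2})\b")  # years 1500-2099

def memory_curve(documents):
    """Count how often each year is mentioned across a collection of texts."""
    counts = Counter()
    for doc in documents:
        counts.update(int(year) for year in YEAR_PATTERN.findall(doc))
    return dict(sorted(counts.items()))

if __name__ == "__main__":
    corpus = [
        "The country marked the anniversary of its 1918 independence.",
        "Commemorations of 1918 and 1945 dominated the coverage in 2015.",
        "Economic reporting focused on 2008 and its aftermath.",
    ]
    # Peaks at commemorated years (e.g. 1918) stand out against the baseline.
    print(memory_curve(corpus))  # {1918: 2, 1945: 1, 2008: 1, 2015: 1}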
https://en.wikipedia.org/wiki/Collective_memory
Crowd psychology (or mob psychology) is a subfield of social psychology which examines how the psychology of a group of people differs from the psychology of any one person within the group. The study of crowd psychology looks into the actions and thought processes of both the individual members of the crowd and of the crowd as a collective social entity. The behavior of a crowd is much influenced by deindividuation (seen as a person's loss of responsibility[1]) and by the person's impression of the universality of behavior, both of which increase in magnitude with the size of the crowd.[2][3] Notable theorists in crowd psychology include Gustave Le Bon (1841-1931), Gabriel Tarde (1843-1904), and Sigmund Freud (1856-1939).[4] Many of these theories are today tested or used to simulate crowd behavior in normal or emergency situations; one of the main aims of this simulation work is to prevent crowd crushes and stampedes.[5][6] Cesare Lombroso's biological theory of criminology suggests that criminality is inherited and that a "born criminal" can be identified by the way they look.[7] Enrico Ferri expressed his view of crime as a degeneration more profound than insanity, for in most insane persons the primitive moral sense has survived the wreck of their intelligence. Along similar lines were the remarks of Benedickt, Sergi and Marro. In response, the French school put forward an environmental theory of human psychology: M. Anguilli called attention to the importance of the influence of the social environment upon crime, and Professor Alexandre Lacassagne thought that the atavistic and degenerative theories as held by the Italian school were exaggerations and false interpretations of the facts, and that the important factor was the social environment.[8] In Paris during 10–17 August 1889, the Italian school received a stronger rebuke of its biological theories at the 2nd International Congress of Criminal Anthropology. A radical divergence in the views of the Italian and the French schools was reflected in the proceedings. Literature on crowds and crowd behavior had appeared as early as 1841, with the publication of Charles Mackay's book Extraordinary Popular Delusions and the Madness of Crowds.[10] The attitude towards crowds underwent an adjustment with the publication of Hippolyte Taine's six-volume The Origins of Contemporary France (1875). In particular, Taine's work helped to change the opinions of his contemporaries on the actions taken by the crowds during the 1789 Revolution. Many Europeans held him in great esteem. While it is difficult to directly link his works to crowd behavior, it may be said that his thoughts stimulated further study of crowd behavior. However, it was not until the latter half of the 19th century that scientific interest in the field gained momentum. French physician and anthropologist Gustave Le Bon became its most influential theorist.[4][11][12][13][14][15] There is limited research into the types of crowds and crowd membership, and there is no consensus as to the classification of types of crowds. Two recent scholars, Momboisse (1967)[16] and Berlonghi (1995),[17] focused upon purpose of existence to differentiate among crowds. Momboisse developed a system of four types: casual, conventional, expressive, and aggressive. Berlonghi classified crowds as spectator, demonstrator, or escaping, corresponding to the purpose of the gathering. Another approach to classifying crowds is sociologist Herbert Blumer's system of emotional intensity. 
He distinguishes four types of crowds: casual, conventional, expressive, and acting. A group of people who just happen to be at the same location at the same time is known as a casual crowd. This kind of crowd lacks any true identity, long-term goal, or shared connection.[18] A group of individuals who come together for a particular reason is known as a conventional crowd. They could be going to a theater, concert, movie, or lecture. According to Erich Goode, conventional crowds behave in a very conventional and hence somewhat structured manner; as their name suggests, they do not truly act out collective behavior.[18] A group of people who come together solely to show their excitement and feelings is known as an expressive crowd. A political candidate's rally, a religious revival, and celebrations like Mardi Gras are a few examples.[18] An acting crowd goes beyond an expressive crowd by behaving violently or in other damaging ways, such as by looting. One of the main examples of an acting crowd is a mob, an extremely emotional group that either commits violence or is prepared to do so.[18] A crowd changes its level of emotional intensity over time and can therefore be classed in any one of the four types. Generally, researchers in crowd psychology have focused on the negative aspects of crowds,[11] but not all crowds are volatile or negative in nature. For example, at the beginning of the socialist movement, crowds were asked to put on their Sunday dress and march silently down the street. A more modern example involves the sit-ins during the Civil Rights Movement. Crowds can reflect and challenge the ideologies held in their sociocultural environment. They can also serve integrative social functions, creating temporary communities.[2][11] Crowds can be defined as active ("mobs") or passive ("audiences"). Active crowds can be further divided into aggressive, escapist, acquisitive, or expressive mobs.[2] Aggressive mobs are often violent and outwardly focused. Examples are football riots, the Los Angeles riots of 1992, and the 2011 English riots.[19] Escapist mobs are characterized by a large number of people trying to get out of a dangerous situation, as at the November 2021 Astroworld Festival.[20] Incidents involving crowds are often reported by the media as the result of "panic",[21][22] but some experts have criticized the media's implication that panic is a main cause of crowd disasters, noting that actual panic is relatively rare in fire situations, and that the major factors in dangerous crowd incidents are infrastructure design, crowd density and breakdowns in communication.[23][24][25] Acquisitive mobs occur when large numbers of people are fighting for limited resources. An expressive mob is any other large group of people gathering for an active purpose. Civil disobedience, rock concerts, and religious revivals all fall under this category.[2] Gustave Le Bon held that crowds existed in three stages: submergence, contagion, and suggestion.[26] During submergence, the individuals in the crowd lose their sense of individual self and personal responsibility. This is quite heavily induced by the anonymity of the crowd.[26] Contagion refers to the propensity for individuals in a crowd to unquestioningly follow the predominant ideas and emotions of the crowd. In Le Bon's view, this effect is capable of spreading between "submerged" individuals much like a disease.[2] Suggestion refers to the period in which the ideas and emotions of the crowd are primarily drawn from a shared unconscious ideology. 
Crowd members become susceptible to any passing idea or emotion.[26]This behavior comes from an archaic shared unconscious and is therefore uncivilized in nature. It is limited by the moral and cognitive abilities of the least capable members.[26]Le Bon believed that crowds could be a powerful force only for destruction.[11]Additionally, Le Bon and others have indicated that crowd members feel a lessened sense of legal culpability, due to the difficulty in prosecuting individual members of a mob.[2]In short, the individual submerged in the crowd loses self control as the "collective mind" takes over and makes the crowd member capable of violating personal or social norms.[27] Le Bon's idea that crowds foster anonymity and generate emotion has been contested by some critics. Clark McPhail points out studies which show that "the madding crowd" does not take on a life of its own, apart from the thoughts and intentions of members.[28]Norris Johnson, after investigatinga panic at a 1979 The Who concertconcluded that the crowd was composed of many small groups of people mostly trying to help each other. Additionally, Le Bon's theory ignores the socio-cultural context of the crowd, which some theorists argue can disempower social change.[11]R. Brown disputes the assumption that crowds are homogenous, suggesting instead that participants exist on a continuum, differing in their ability to deviate from social norms.[2] Sigmund Freud's crowd behavior theory primarily consists of the idea that becoming a member of a crowd serves to unlock the unconscious mind. This occurs because thesuper-ego, or moral center of consciousness, is displaced by the larger crowd, to be replaced by a charismatic crowd leader. McDougall argues similarly to Freud, saying that simplistic emotions are widespread, and complex emotions are rarer. In a crowd, the overall shared emotional experience reverts to the least common denominator (LCD), leading to primitive levels of emotional expression.[4]This organizational structure is that of the "primal horde"—pre-civilized society—and Freud states that one must rebel against the leader (re-instate the individual morality) in order to escape from it.[4] Theodor Adornocriticized the belief in a spontaneity of the masses: according to him, the masses were an artificial product of "administrated" modern life. TheEgoof the bourgeois subject dissolved itself, giving way to theIdand the "de-psychologized" subject. Furthermore, Adorno stated the bond linking the masses to the leader through the spectacle is feigned: "When the leaders become conscious of mass psychology and take it into their own hands, it ceases to exist in a certain sense. ... Just as little as people believe in the depth of their hearts that the Jews are the devil, do they completely believe in their leader. They do not really identify themselves with him but act this identification, perform their own enthusiasm, and thus participate in their leader's performance. ... It is probably the suspicion of this fictitiousness of their own 'group psychology' which makes fascist crowds so merciless and unapproachable. If they would stop to reason for a second, the whole performance would go to pieces, and they would be left to panic."[29] Deindividuationtheory is largely based on the ideas of Gustave Le Bon[27]and argues that in typical crowd situations, factors such as anonymity, group unity, and arousal can weaken personal controls (e.g. 
guilt, shame, self-evaluating behavior) by distancing people from their personal identities and reducing their concern for social evaluation.[4][11]This lack of restraint increases individual sensitivity to the environment and lessens rational forethought, which can lead to antisocial behavior.[4][11]More recent theories have stated that deindividuation hinges upon a person being unable, due to situation, to have strong awareness of their self as an object of attention. This lack of attention frees the individual from the necessity of normal social behavior.[4] American social psychologist Leon Festinger and colleagues first elaborated the concept of deindividuation in 1952. It was further refined by American psychologistPhilip Zimbardo, who detailed why mental input and output became blurred by such factors as anonymity, lack of social constraints, and sensory overload.[30]Zimbardo'sStanford Prison Experimenthas been presented as a strong argument for the power of deindividuation,[4]although it was later criticised as unscientific.[31]Further experimentation has had mixed results when it comes to aggressive behaviors, and has instead shown that the normative expectations surrounding the situations of deindividuation influence behavior (i.e. if one is deindividuated as aKKKmember, aggression increases, but if it is as a nurse, aggression does not increase).[4] A further distinction has been proposed between public and private deindividuation. When private aspects of self are weakened, one becomes more subject to crowd impulses, but not necessarily in a negative way. It is when one no longer attends to the public reaction and judgement of individual behavior that antisocial behavior is elicited.[4]Philip Zimbardo also did not view deindividuation exclusively as a group phenomenon, and applied the concept to suicide, murder, and interpersonal hostility.[27] Convergence theory[32]holds that crowd behavior is not a product of the crowd, but rather the crowd is a product of the coming together of like-minded individuals.[2][11]Floyd Allportargued that "An individual in a crowd behaves just as he would behave alone, only more so."[33]Convergence theory holds that crowds form from people of similar dispositions, whose actions are then reinforced and intensified by the crowd.[11] Convergence theory claims that crowd behavior is not irrational; rather, people in crowds express existing beliefs and values so that the mob reaction is the rational product of widespread popular feeling. However, this theory is questioned by certain research which found that people involved in the 1970s riots were less likely than nonparticipant peers to have previous convictions.[11] Critics of this theory report that it still excludes the social determination of self and action, in that it argues that all actions of the crowd are born from the individuals' intents.[11] Ralph H. Turnerand Lewis Killian put forth the idea that norms emerge from within the crowd. Emergent norm theory states that crowds have little unity at their outset, but during a period of milling about, key members suggest appropriate actions, and following members fall in line, forming the basis for the crowd's norms.[11] Key members are identified through distinctive personalities or behaviors. 
These garner attention, and the lack of negative response elicited from the crowd as a whole stands as tacit agreement to their legitimacy.[4]The followers form the majority of the mob, as people tend to be creatures ofconformitywho are heavily influenced by the opinions of others.[10]This has been shown in the conformity studies conducted bySherifandAsch.[34]Crowd members are further convinced by the universality phenomenon, described by Allport as the persuasive tendency of the idea that if everyone in the mob is acting in such-and-such a way, then it cannot be wrong.[2] Emergent norm theory allows for both positive and negative mob types, as the distinctive characteristics and behaviors of key figures can be positive or negative in nature. An antisocial leader can incite violent action, but an influential voice of non-violence in a crowd can lead to a mass sit-in.[4]When a crowd described as above targets an individual, anti-social behaviors may emerge within its members. A major criticism of this theory is that the formation and following of new norms indicates a level of self-awareness that is often missing in the individuals in crowds (as evidenced by the study of deindividuation). Another criticism is that the idea of emergent norms fails to take into account the presence of existent sociocultural norms.[4][11]Additionally, the theory fails to explain why certain suggestions or individuals rise to normative status while others do not.[11] Thesocial identity theoryposits that the self is a complex system made up primarily of the concept of membership or non-membership in various social groups. These groups have various moral and behavioral values and norms, and the individual's actions depend on which group membership (or non-membership) is most personally salient at the time of action.[11] This influence is evidenced by findings that when the stated purpose and values of a group changes, the values and motives of its members also change.[34] Crowds are an amalgam of individuals, all of whom belong to various overlapping groups. However, if the crowd is primarily related to some identifiable group (such as Christians or Hindus or Muslims or civil-rights activists), then the values of that group will dictate the crowd's action.[11] In crowds which are more ambiguous, individuals will assume a new social identity as a member of the crowd.[4]This group membership is made more salient by confrontation with other groups – a relatively common occurrence for crowds.[4] The group identity serves to create a set of standards for behavior; for certain groups violence is legitimate, for others it is unacceptable.[4]This standard is formed from stated values, but also from the actions of others in the crowd, and sometimes from a few in leadership-type positions.[4] A concern with this theory is that while it explains how crowds reflect social ideas and prevailing attitudes, it does not explain the mechanisms by which crowds enact to drive social change.[11]
https://en.wikipedia.org/wiki/Crowd_psychology
TheGlobal Consciousness Project(GCP, also called theEGG Project) is aparapsychologyexperiment begun in 1998 as an attempt to detect possible interactions of "globalconsciousness" with physical systems. The project monitors a geographically distributed network ofhardware random number generatorsin a bid to identify anomalous outputs that correlate with widespread emotional responses to sets of world events, or periods of focused attention by large numbers of people. The GCP is privately funded through theInstitute of Noetic Sciences[1]and describes itself as an international collaboration of about 100 research scientists and engineers. Skepticssuch asRobert T. Carroll, Claus Larsen, and others have questioned the methodology of the Global Consciousness Project, particularly how the data are selected and interpreted,[2][3]saying the data anomalies reported by the project are the result of "pattern matching" andselection biaswhich ultimately fail to support a belief inpsior global consciousness.[4]But in analyzing the data for 11 September 2001, May et al. concluded that the statistically significant result given by the published GCP hypothesis was fortuitous, and found that as far as this particular event was concerned an alternative method of analysis gave only chance deviations throughout.[5]: 2 Roger D. Nelsondeveloped the project as an extrapolation of two decades of experiments from the controversialPrinceton Engineering Anomalies Research Lab(PEAR).[6] Nelson began usingrandom event generator(REG) technology in the field to study effects of special states ofgroup consciousness.[7] In an extension of the laboratory research utilizinghardware Random Event Generators(REG)[8]called FieldREG, investigators examined the outputs of REGs in the field before, during and after highly focused or coherent group events. The group events studied included psychotherapy sessions, theater presentations, religious rituals, sports competitions such as theFootball World Cup, and television broadcasts such as theAcademy Awards.[9] FieldREG was extended to global dimensions in studies looking at data from 12 independent REGs in the US and Europe during a web-promoted "Gaiamind Meditation" in January 1997, and then again in September 1997 after thedeath of Diana, Princess of Wales. The project claimed the results suggested it would be worthwhile to build a permanent network of continuously-running REGs.[10][non-primary source needed]This became the EGG project or Global Consciousness Project. Comparing the GCP to PEAR, Nelson, referring to the "field" studies with REGs done by PEAR, said the GCP used "exactly the same procedure... applied on a broader scale."[11][non-primary source needed] The GCP's methodology is based on the hypothesis that events which elicit widespread emotion or draw the simultaneous attention of large numbers of people may affect the output of hardware random number generators in astatistically significantway. The GCP maintains a network ofhardware random number generatorswhich are interfaced to computers at 70 locations around the world. Custom software reads the output of the random number generators and records a trial (sum of 200 bits) once every second. The data are sent to a server in Princeton, creating a database of synchronized parallel sequences of random numbers. The GCP is run as a replication experiment, essentially combining the results of many distinct tests of the hypothesis. The hypothesis is tested by calculating the extent of data fluctuations at the time of events. 
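The per-second trial statistic and the event-level Z scores described above lend themselves to a brief numerical illustration. The sketch below is not the GCP's registered algorithm (the exact calculation is pre-specified for each event, as the protocol below explains); it only shows the general shape of such an analysis under the stated null hypothesis: each trial is a sum of 200 random bits with expected mean 100 and variance 50, an event window is summarized as a Z score, and Z scores from several events are pooled, here with Stouffer's method as one common choice. All numbers and function names are illustrative.

import random
from math import sqrt

TRIAL_BITS = 200          # bits summed per one-second trial
MEAN = TRIAL_BITS * 0.5   # expected trial mean under the null (100)
VAR = TRIAL_BITS * 0.25   # trial variance under the null (50)

def simulate_trials(seconds):
    """Simulate one REG: one trial per second, each the sum of 200 random bits."""
    return [sum(1 for _ in range(TRIAL_BITS) if random.random() < 0.5)
            for _ in range(seconds)]

def event_z(trials):
    """Z score for one event window: deviation of the summed trials from chance."""
    n = len(trials)
    return (sum(trials) - n * MEAN) / sqrt(n * VAR)

def stouffer_z(z_scores):
    """Pool per-event Z scores into an overall result (Stouffer's method)."""
    return sum(z_scores) / sqrt(len(z_scores))

# Three hypothetical events, each a ten-minute window of simulated data.
events = [simulate_trials(600) for _ in range(3)]
zs = [event_z(e) for e in events]
print("per-event Z:", [round(z, 2) for z in zs])
print("combined Z :", round(stouffer_z(zs), 2))

With genuinely random input the combined Z should hover near zero; the GCP's claim is that it departs from zero around registered events, which is exactly the point the critics cited above dispute.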
The procedure is specified by a three-step experimental protocol. In the first step, the event duration and the calculation algorithm are pre-specified and entered into a formal registry.[12][non-primary source needed]In the second step, the event data are extracted from the database and aZ score, which indicates the degree of deviation from the null hypothesis, is calculated from the pre-specified algorithm. In the third step, the event Z-score is combined with the Z-scores from previous events to yield an overall result for the experiment. The remote devices have been dubbedPrinceton Eggs, a reference to the coinageelectrogaiagram(EGG), aportmanteauofelectroencephalogramandGaia.[13][non-primary source needed]Supporters and skeptics have referred to the aim of the GCP as being analogous to detecting "a great disturbance inthe Force."[2][14][15] The GCP has suggested that changes in the level of randomness may have occurred during theSeptember 11, 2001 attackswhen the planes first impacted, as well as in the two days following the attacks.[16][non-primary source needed] Independent scientists Edwin May and James Spottiswoode conducted an analysis of the data around theSeptember 11 attacksand concluded there was no statistically significant change in the randomness of the GCP data during the attacks and the apparent significant deviation reported by Nelson and Radin existed only in their chosen time window.[5]Spikes and fluctuations are to be expected in any random distribution of data, and there is no set time frame for how close a spike has to be to a given event for the GCP to say they have found a correlation.[5]Wolcotte Smith said "A couple of additional statistical adjustments would have to be made to determine if there really was a spike in the numbers," referencing the data related to September 11, 2001.[17]Similarly, Jeffrey D. Scargle believes unless bothBayesianand classicalp-valueanalysis agree and both show the same anomalous effects, the kind of result GCP proposes will not be generally accepted.[18] In 2003, aNew York Timesarticle concluded "All things considered at this point, the stock market seems a more reliable gauge of the national—if not the global—emotional resonance."[19] In 2007,The Agereported that "[Nelson] concedes the data, so far, is not solid enough for global consciousness to be said to exist at all. It is not possible, for example, to look at the data and predict with any accuracy what (if anything) the eggs may be responding to."[20] Robert Matthewssaid that while it was "the most sophisticated attempt yet" to prove psychokinesis existed, the unreliability of significant events to cause statistically significant spikes meant that "the only conclusion to emerge from the Global Consciousness Project so far is that data without a theory is as meaningless as words without a narrative".[21] Peter Bancel reviews the data in a 2017 article and "finds that the data do not support the global consciousness proposal" and rather "All of the tests favor the interpretation of a goal-oriented effect."[22] Roger D. Nelsonis an American parapsychologist and researcher and the director of the GCP.[23]From 1980 to 2002, he was Coordinator of Research at thePrinceton Engineering Anomalies Research(PEAR) laboratory at Princeton University.[24]His professional focus was the study ofconsciousnessandintentionand the role of the mind in the physical world. 
His work integratesscienceandspirituality[citation needed], including research that is directly focused on numinous communal experiences.[25] Nelson's professional degrees are in experimentalcognitive psychology.[25]Until his retirement in 2002, he served as the coordinator of experimental work in thePrinceton Engineering Anomalies Research Lab(PEAR), directed byRobert Jahnin the department ofMechanical and Aerospace Engineering, School of Engineering/Applied Science,Princeton University.[26]
https://en.wikipedia.org/wiki/Global_Consciousness_Project
Group dynamicsis a system of behaviors and psychological processes occurring within asocial group(intragroup dynamics), or between social groups (intergroup dynamics). The study of group dynamics can be useful in understanding decision-making behaviour, tracking the spread of diseases in society, creating effective therapy techniques, and following the emergence and popularity of new ideas and technologies.[1]These applications of the field are studied inpsychology,sociology,anthropology,political science,epidemiology,education,social work,leadership studies, business and managerial studies, as well ascommunication studies. The history of group dynamics (or group processes)[2]has a consistent, underlying premise: "the whole is greater than the sum of its parts." Asocial groupis an entity that has qualities which cannot be understood just by studying the individuals that make up the group. In 1924,GestaltpsychologistMax Wertheimerproposed "There are entities where the behaviour of the whole cannot be derived from its individual elements nor from the way these elements fit together; rather the opposite is true: the properties of any of the parts are determined by the intrinsic structural laws of the whole".[3] As a field of study, group dynamics has roots in both psychology and sociology.Wilhelm Wundt(1832–1920), credited as the founder of experimental psychology, had a particular interest in the psychology of communities, which he believed possessed phenomena (human language, customs, and religion) that could not be described through a study of the individual.[2]On the sociological side,Émile Durkheim(1858–1917), who was influenced by Wundt, also recognized collective phenomena, such as public knowledge. Other key theorists includeGustave Le Bon(1841–1931) who believed that crowds possessed a 'racial unconscious' with primitive, aggressive, and antisocial instincts, andWilliam McDougall (psychologist), who believed in a 'group mind,' which had a distinct existence born from the interaction of individuals.[2] Eventually, thesocial psychologistKurt Lewin(1890–1947) coined the termgroup dynamicsto describe the positive and negative forces within groups of people.[4]In 1945, he establishedThe Group Dynamics Research Centerat theMassachusetts Institute of Technology, the first institute devoted explicitly to the study of group dynamics.[5]Throughout his career, Lewin was focused on how the study of group dynamics could be applied to real-world, social issues. Increasingly, research has appliedevolutionary psychologyprinciples to group dynamics. As human's social environments became more complex, they acquiredadaptationsby way of group dynamics that enhance survival. Examples include mechanisms for dealing with status, reciprocity, identifying cheaters, ostracism, altruism, group decision, leadership, andintergroup relations.[6] Gustave Le Bon was a French social psychologist whose seminal study,The Crowd: A Study of the Popular Mind(1896) led to the development ofgroup psychology. The British psychologist William McDougall in his workThe Group Mind(1920) researched the dynamics of groups of various sizes and degrees of organization. 
InGroup Psychology and the Analysis of the Ego(1922), Sigmund Freud based his preliminary description of group psychology on Le Bon's work, but went on to develop his own, original theory, related to what he had begun to elaborate inTotem and Taboo.Theodor Adornoreprised Freud's essay in 1951 with hisFreudian Theory and the Pattern of Fascist Propaganda, and said that "It is not an overstatement if we say that Freud, though he was hardly interested in the political phase of the problem, clearly foresaw the rise and nature of fascist mass movements in purely psychological categories."[7] Jacob L. Moreno was a psychiatrist, dramatist, philosopher and theoretician who coined the term "group psychotherapy" in the early 1930s and was highly influential at the time. Kurt Lewin (1943, 1948, 1951) is commonly identified as the founder of the movement to study groups scientifically. He coined the termgroup dynamicsto describe the way groups and individuals act and react to changing circumstances.[8] William Schutz (1958, 1966) looked atinterpersonal relationsas stage-developmental, inclusion (am I included?), control (who is top dog here?), and affection (do I belong here?). Schutz sees groups resolving each issue in turn in order to be able to progress to the next stage. Conversely, a struggling group can devolve to an earlier stage, if unable to resolve outstanding issues at its present stage. Schutz referred to these group dynamics as "the interpersonal underworld," group processes which are largely unseen and un-acknowledged, as opposed to "content" issues, which are nominally the agenda of group meetings.[9][10] Wilfred Bion (1961) studied group dynamics from apsychoanalyticperspective, and stated that he was much influenced byWilfred Trotterfor whom he worked atUniversity College HospitalLondon, as did another key figure in the Psychoanalytic movement,Ernest Jones. He discovered several mass group processes which involved the group as a whole adopting an orientation which, in his opinion, interfered with the ability of a group to accomplish the work it was nominally engaged in.[11]Bion's experiences are reported in his published books, especiallyExperiences in Groups.TheTavistock Institutehas further developed and applied the theory and practices developed by Bion. Bruce Tuckman (1965) proposed the four-stage model calledTuckman's Stagesfor a group. Tuckman's model states that the ideal group decision-making process should occur in four stages: forming, storming, norming, and performing. Tuckman later added a fifth stage for the dissolution of a group calledadjourning. (Adjourningmay also be referred to asmourning, i.e. mourning the adjournment of the group). This model refers to the overall pattern of the group, but of course individuals within a group work in different ways. If distrust persists, a group may never even get to the norming stage. M. Scott Peck developed stages for larger-scale groups (i.e., communities) which are similar to Tuckman's stages of group development.[12]Peck describes the stages of a community as pseudocommunity, chaos, emptiness, and true community. Communities may be distinguished from other types of groups, in Peck's view, by the need for members to eliminate barriers to communication in order to be able to form true community. Examples of common barriers are: expectations and preconceptions;prejudices;ideology,counterproductive norms,theologyand solutions; the need to heal, convert, fix or solve and the need to control. A community is born when its members reach a stage of "emptiness" orpeace.
Richard Hackman developed a synthetic, research-based model for designing and managing work groups. Hackman suggested that groups are successful when they satisfy internal and external clients, develop capabilities to perform in the future, and when members find meaning and satisfaction in the group. Hackman proposed five conditions that increase the chance that groups will be successful.[13]These include being a real team (rather than a team in name only), a compelling direction, an enabling structure, a supportive organizational context, and expert coaching. Intragroup dynamics(also referred to as ingroup-, within-group, or commonly just ‘group dynamics’) are the underlying processes that give rise to a set of norms, roles, relations, and common goals that characterize a particularsocial group. Examples of groups include religious, political, military, and environmental groups, sports teams, work groups, and therapy groups. Amongst the members of a group, there is a state of interdependence, through which the behaviours, attitudes, opinions, and experiences of each member are collectively influenced by the other group members.[14]In many fields of research, there is an interest in understanding how group dynamics influence individual behaviour, attitudes, and opinions. The dynamics of a particular group depend on how one defines theboundariesof the group. Often, there are distinctsubgroupswithin a more broadly defined group. For example, one could define U.S. residents (‘Americans’) as a group, but could also define a more specific set of U.S. residents (for example, 'Americans in the South'). For each of these groups, there are distinct dynamics that can be discussed. Notably, on this very broad level, the study of group dynamics is similar to the study ofculture. For example, there are group dynamics in the U.S. South that sustain aculture of honor, which is associated with norms of toughness, honour-related violence, and self-defence.[15][16] Group formation starts with a psychological bond between individuals. Thesocial cohesion approachsuggests that group formation comes out of bonds ofinterpersonal attraction.[2]In contrast, thesocial identity approachsuggests that a group starts when a collection of individuals perceive that they share some social category (‘smokers’, ‘nurses,’ ‘students,’ ‘hockey players’), and that interpersonal attraction only secondarily enhances the connection between individuals.[2]Additionally, from the social identity approach, group formation involves both identifying with some individuals and explicitlynotidentifying with others. That is to say, a level of psychologicaldistinctivenessis necessary for group formation. Through interaction, individuals begin to develop group norms, roles, and attitudes which define the group, and are internalized to influence behaviour.[17] Emergent groupsarise from a relatively spontaneous process of group formation. For example, in response to a natural disaster, anemergent response groupmay form. These groups are characterized as having no preexisting structure (e.g.
group membership, allocated roles) or prior experience working together.[18]Yet, these groups still express high levels of interdependence and coordinate knowledge, resources, and tasks.[18] Joining a group is determined by a number of different factors, including an individual's personal traits;[19]gender;[20]social motives such as need for affiliation,[21]need for power,[22]and need for intimacy;[23]attachment style;[24]and prior group experiences.[25]Groups can offer some advantages to its members that would not be possible if an individual decided to remain alone, including gainingsocial supportin the forms of emotional support,[26]instrumental support,[27]and informational support.[27]It also offers friendship, potential new interests, learning new skills, and enhancing self esteem.[28]However, joining a group may also cost an individual time, effort, and personal resources as they may conform tosocial pressuresand strive to reap the benefits that may be offered by the group.[28] TheMinimax Principleis a part ofsocial exchange theorythat states that people will join and remain in a group that can provide them with the maximum amount of valuable rewards while at the same time, ensuring the minimum amount of costs to themselves.[29]However, this does not necessarily mean that a person will join a group simply because the reward/cost ratio seems attractive. According to Howard Kelley and John Thibaut, a group may be attractive to us in terms of costs and benefits, but that attractiveness alone does not determine whether or not we will join the group. Instead, our decision is based on two factors: our comparison level, and our comparison level for alternatives.[29] In John Thibaut and Harold Kelley'ssocial exchange theory, comparison level is the standard by which an individual will evaluate the desirability of becoming a member of the group and forming new social relationships within the group.[29]This comparison level is influenced by previous relationships and membership in different groups. Those individuals who have experienced positive rewards with few costs in previous relationships and groups will have a higher comparison level than a person who experienced more negative costs and fewer rewards in previous relationships and group memberships. According to thesocial exchange theory, group membership will be more satisfying to a new prospective member if the group's outcomes, in terms of costs and rewards, are above the individual's comparison level. As well, group membership will be unsatisfying to a new member if the outcomes are below the individual's comparison level.[29] Comparison level only predicts how satisfied a new member will be with the social relationships within the group.[30]To determine whether people will actually join or leave a group, the value of other, alternative groups needs to be taken into account.[30]This is called the comparison level for alternatives. This comparison level for alternatives is the standard by which an individual will evaluate the quality of the group in comparison to other groups the individual has the opportunity to join. 
Thibaut and Kelley stated that the "comparison level for alternatives can be defined informally as the lowest level of outcomes a member will accept in the light of available alternative opportunities."[31] Joining and leaving groups is ultimately dependent on the comparison level for alternatives, whereas member satisfaction within a group depends on the comparison level.[30]To summarize, if membership in the group is above the comparison level for alternatives and above the comparison level, the membership within the group will be satisfying and an individual will be more likely to join the group. If membership in the group is above the comparison level for alternatives but below the comparison level, membership will not be satisfactory; however, the individual will likely join the group since no other desirable options are available. When group membership is below the comparison level for alternatives but above the comparison level, membership is satisfying but an individual will be unlikely to join. If group membership is below both the comparison and alternative comparison levels, membership will be dissatisfying and the individual will be less likely to join the group. Groups can vary drastically from one another. For example, three best friends who interact every day as well as a collection of people watching a movie in a theater both constitute a group. Past research has identified four basic types of groups which include, but are not limited to: primary groups, social groups, collective groups, and categories.[30]It is important to define these four types of groups because they are intuitive to most lay people. For example, in an experiment,[32]participants were asked to sort a number of groups into categories based on their own criteria. Examples of groups to be sorted were a sports team, a family, people at a bus stop and women. It was found that participants consistently sorted groups into four categories: intimacy groups, task groups, loose associations, and social categories. These categories are conceptually similar to the four basic types to be discussed. Therefore, it seems that individuals intuitively define aggregations of individuals in this way. Primary groups are characterized by relatively small, long-lasting groups of individuals who share personally meaningful relationships. Since the members of these groups often interact face-to-face, they know each other very well and are unified. Individuals that are a part of primary groups consider the group to be an important part of their lives. Consequently, members strongly identify with their group, even without regular meetings.[30]Cooley[33]believed that primary groups were essential for integrating individuals into their society since this is often their first experience with a group. For example, individuals are born into a primary group, their family, which creates a foundation for them to base their future relationships. Individuals can be born into a primary group; however, primary groups can also form when individuals interact for extended periods of time in meaningful ways.[30]Examples of primary groups include family, close friends, and gangs. A social group is characterized by a formally organized group of individuals who are not as emotionally involved with each other as those in a primary group.
These groups tend to be larger, with shorter memberships compared to primary groups.[30]Further, social groups do not have as stable memberships, since members are able to leave their social group and join new groups. The goals of social groups are often task-oriented as opposed to relationship-oriented.[30]Examples of social groups include coworkers, clubs, and sports teams. Collectives are characterized by large groups of individuals who display similar actions or outlooks. They are loosely formed, spontaneous, and brief.[30]Examples of collectives include a flash mob, an audience at a movie, and a crowd watching a building burn. Categories are characterized by a collection of individuals who are similar in some way.[30]Categories become groups when their similarities have social implications. For example, when people treat others differently because of certain aspects of their appearance or heritage, for example, this creates groups of different races.[30]For this reason, categories can appear to be higher in entitativity and essentialism than primary, social, and collective groups. Entitativity is defined by Campbell[34]as the extent to which collections of individuals are perceived to be a group. The degree of entitativity that a group has is influenced by whether a collection of individuals experience the same fate, display similarities, and are close in proximity. If individuals believe that a group is high in entitativity, then they are likely to believe that the group has unchanging characteristics that are essential to the group, known as essentialism.[35]Examples of categories are New Yorkers, gamblers, and women. The social group is a critical source of information about individual identity.[36]We naturally make comparisons between our own group and other groups, but we do not necessarily make objective comparisons. Instead, we make evaluations that are self-enhancing, emphasizing the positive qualities of our own group (seeingroup bias).[2]In this way, these comparisons give us a distinct and valued social identity that benefits our self-esteem. Our social identity and group membership also satisfies a need to belong.[37]Of course, individuals belong to multiple groups. Therefore, one's social identity can have several, qualitatively distinct parts (for example, one's ethnic identity, religious identity, and political identity).[38] Optimal distinctiveness theorysuggests that individuals have a desire to be similar to others, but also a desire to differentiate themselves, ultimately seeking some balance of these two desires (to obtainoptimal distinctiveness).[39]For example, one might imagine a young teenager in the United States who tries to balance these desires, not wanting to be ‘just like everyone else,’ but also wanting to ‘fit in’ and be similar to others. One's collective self may offer a balance between these two desires.[2]That is, to be similar to others (those who you share group membership with), but also to be different from others (those who are outside of your group). 
In the social sciences, group cohesion refers to the processes that keep members of a social group connected.[4]Terms such as attraction, solidarity, and morale are often used to describe group cohesion.[4]It is thought to be one of the most important characteristics of a group, and has been linked to group performance,[40]intergroup conflict[41]and therapeutic change.[42] Group cohesion, as a scientifically studied property of groups, is commonly associated with Kurt Lewin and his student,Leon Festinger. Lewin defined group cohesion as the willingness of individuals to stick together, and believed that without cohesiveness a group could not exist.[4]As an extension of Lewin's work, Festinger (along withStanley Schachterand Kurt Back) described cohesion as, “the total field of forces which act on members to remain in the group” (Festinger, Schachter, & Back, 1950, p. 37).[4]Later, this definition was modified to describe the forces acting on individual members to remain in the group, termedattraction to the group.[4]Since then, several models for understanding the concept of group cohesion have been developed, including Albert Carron's hierarchical model[43]and several bi-dimensional models (vertical v. horizontal cohesion, task v. social cohesion, belongingness and morale, and personal v. social attraction). Before Lewin and Festinger, there were, of course, descriptions of a very similar group property. For example,Emile Durkheimdescribed two forms of solidarity (mechanical and organic), which created a sense of collective conscious and an emotion-based sense of community.[44] Beliefs within theingroupare based on how individuals in the group see their other members. Individuals tend to upgrade likeable in-group members and deviate from unlikeable group members, making them a separate outgroup. This is called theblack sheepeffect.[45]The way a person judges socially desirable and socially undesirable individuals depends upon whether they are part of the ingroup or outgroup. This phenomenon has been later accounted for by subjective group dynamics theory.[46]According to this theory, people derogate socially undesirable (deviant) ingroup members relative to outgroup members, because they give a bad image of the ingroup and jeopardize people's social identity. In more recent studies, Marques and colleagues[47]have shown that this occurs more strongly with regard to ingroup full members than other members. Whereasnew membersof a group must prove themselves to the full members to become accepted, full members have undergone socialization and are already accepted within the group. They have more privilege than newcomers but more responsibility to help the group achieve its goals.Marginal memberswere once full members but lost membership because they failed to live up to the group's expectations. They can rejoin the group if they go through re-socialization. Therefore, full members' behavior is paramount to define the ingroup's image. Bogart and Ryan surveyed the development of new members' stereotypes about in-groups and out-groups during socialization. Results showed that the new members judged themselves as consistent with the stereotypes of their in-groups, even when they had recently committed to join those groups or existed as marginal members. They also tended to judge the group as a whole in an increasingly less positive manner after they became full members.[48]However, there is no evidence that this affects the way they are judged by other members. 
Nevertheless, depending on theself-esteemof an individual, members of the in-group may experience different private beliefs about the group's activities but will publicly express the opposite—that they actually share these beliefs. One member may not personally agree with something the group does, but to avoid the black sheep effect, they will publicly agree with the group and keep the private beliefs to themselves. If the person is privatelyself-aware, he or she is more likely to comply with the group even if they possibly have their own beliefs about the situation.[49] In situations ofhazingwithinfraternities and sororitieson college campuses, pledges may encounter this type of situation and may outwardly comply with the tasks they are forced to do regardless of their personal feelings about the Greek institution they are joining. This is done in an effort to avoid becoming an outcast of the group.[48]Outcasts who behave in a way that might jeopardize the group tend to be treated more harshly than the likeable ones in a group, creating a black sheep effect. Full members of a fraternity might treat the incoming new members harshly, causing the pledges to decide if they approve of the situation and if they will voice their disagreeing opinions about it. Individual behaviour is influenced by the presence of others.[36]For example, studies have found that individuals work harder and faster when others are present (seesocial facilitation), and that an individual's performance is reduced when others in the situation create distraction or conflict.[36]Groups also influence individual's decision-making processes. These include decisions related toingroup bias, persuasion (seeAsch conformity experiments), obedience (seeMilgram Experiment), andgroupthink. There are both positive and negative implications of group influence on individual behaviour. This type of influence is often useful in the context of work settings, team sports, and political activism. However, the influence of groups on the individual can also generate extremely negative behaviours, evident in Nazi Germany, theMy Lai massacre, and in theAbu Ghraib prison(also seeAbu Ghraib torture and prisoner abuse).[50] A group's structure is the internal framework that defines members' relations to one another over time.[51]Frequently studied elements of group structure include roles, norms, values, communication patterns, and status differentials.[52]Group structure has also been defined as the underlying pattern of roles, norms, and networks of relations among members that define and organize the group.[53] Rolescan be defined as a tendency to behave, contribute and interrelate with others in a particular way. Roles may be assigned formally, but more often are defined through the process of role differentiation.[54]Role differentiation is the degree to which different group members have specialized functions. 
A group with a high level of role differentiation would be categorized as having many different roles that are specialized and narrowly defined.[53]A key role in a group is the leader, but there are other important roles as well, including task roles, relationship roles, and individual roles.[53]Functional (task) roles are generally defined in relation to the tasks the team is expected to perform.[55]Individuals engaged in task roles focus on the goals of the group and on enabling the work that members do; examples of task roles include coordinator, recorder, critic, or technician.[53]A group member engaged in a relationship role (or socioemotional role) is focused on maintaining the interpersonal and emotional needs of the groups' members; examples of relationship role include encourager, harmonizer, or compromiser.[53] Normsare the informal rules that groups adopt to regulate members' behaviour. Norms refer to what should be done and represent value judgments about appropriate behaviour in social situations. Although they are infrequently written down or even discussed, norms have powerful influence on group behaviour.[56][unreliable source?]They are a fundamental aspect of group structure as they provide direction and motivation, and organize the social interactions of members.[53]Norms are said to be emergent, as they develop gradually throughout interactions between group members.[53]While many norms are widespread throughout society, groups may develop their own norms that members must learn when they join the group. There are various types of norms, including: prescriptive, proscriptive, descriptive, and injunctive.[53] Intermember Relationsare the connections among the members of a group, or the social network within a group. Group members are linked to one another at varying levels. Examining the intermember relations of a group can highlight a group's density (how many members are linked to one another), or the degree centrality of members (number of ties between members).[53]Analysing the intermember relations aspect of a group can highlight the degree centrality of each member in the group, which can lead to a better understanding of the roles of certain group (e.g. an individual who is a 'go-between' in a group will have closer ties to numerous group members which can aid in communication, etc.).[53] Valuesare goals or ideas that serve as guiding principles for the group.[57]Like norms, values may be communicated either explicitly or on an ad hoc basis. Values can serve as a rallying point for the team. However, some values (such asconformity) can also be dysfunction and lead to poor decisions by the team. Communication patternsdescribe the flow of information within the group and they are typically described as either centralized or decentralized. With a centralized pattern, communications tend to flow from one source to all group members. Centralized communications allow standardization of information, but may restrict the free flow of information. Decentralized communications make it easy to share information directly between group members. When decentralized, communications tend to flow more freely, but the delivery of information may not be as fast or accurate as with centralized communications. Another potential downside of decentralized communications is the sheer volume of information that can be generated, particularly with electronic media. Status differentialsare the relative differences in status among group members. 
When a group is first formed the members may all be on an equal level, but over time certain members may acquire status and authority within the group; this can create what is known as apecking orderwithin a group.[53]Status can be determined by a variety of factors and characteristics, including specific status characteristics (e.g. task-specific behavioural and personal characteristics, such as experience) or diffuse status characteristics (e.g. age, race, ethnicity).[53]It is important that other group members perceive an individual's status to be warranted and deserved, as otherwise they may not have authority within the group.[53]Status differentials may affect the relative amount of pay among group members and they may also affect the group's tolerance to violation of group norms (e.g. people with higher status may be given more freedom to violate group norms). Forsyth suggests that while many daily tasks undertaken by individuals could be performed in isolation, the preference is to perform with other people.[53] In a study of dynamogenic stimulation for the purpose of explaining pacemaking and competition in 1898,Norman Tripletttheorized that "the bodily presence of another rider is a stimulus to the racer in arousing the competitive instinct...".[58]This dynamogenic factor is believed to have laid the groundwork for what is now known as social facilitation—an "improvement in task performance that occurs when people work in the presence of other people".[53] Further to Triplett's observation, in 1920,Floyd Allportfound that although people in groups were more productive than individuals, the quality of their product/effort was inferior.[53] In 1965,Robert Zajoncexpanded the study of arousal response (originated by Triplett) with further research in the area of social facilitation. In his study, Zajonc considered two experimental paradigms. In the first—audience effects—Zajonc observed behaviour in the presence of passive spectators, and the second—co-action effects—he examined behaviour in the presence of another individual engaged in the same activity.[59] Zajonc observed two categories of behaviours—dominant responsesto tasks that are easier to learn and which dominate other potential responses andnondominant responsesto tasks that are less likely to be performed. In hisTheory of Social Facilitation, Zajonc concluded that in the presence of others, when action is required, depending on the task requirement, either social facilitation or social interference will impact the outcome of the task. If social facilitation occurs, the task will have required a dominant response from the individual resulting in better performance in the presence of others, whereas if social interference occurs the task will have elicited a nondominant response from the individual resulting in subpar performance of the task.[53] Several theories analysing performance gains in groups via drive, motivational, cognitive and personality processes, explain why social facilitation occurs. Zajonc hypothesized thatcompresence(the state of responding in the presence of others) elevates an individual's drive level which in turn triggers social facilitation when tasks are simple and easy to execute, but impedes performance when tasks are challenging.[53] Nickolas Cottrell, 1972, proposed theevaluation apprehension modelwhereby he suggested people associate social situations with an evaluative process. 
Cottrell argued this situation is met with apprehension and it is this motivational response, not arousal/elevated drive, that is responsible for increased productivity on simple tasks and decreased productivity on complex tasks in the presence of others.[53] InThe Presentation of Self in Everyday Life(1959),Erving Goffmanassumes that individuals can control how they are perceived by others. He suggests that people fear being perceived as having negative, undesirable qualities and characteristics by other people, and that it is this fear that compels individuals to portray a positive self-presentation/social image of themselves. In relation to performance gains, Goffman'sself-presentation theorypredicts, in situations where they may be evaluated, individuals will consequently increase their efforts in order to project/preserve/maintain a positive image.[53] Distraction-conflicttheorycontends that when a person is working in the presence of other people, an interference effect occurs splitting the individual's attention between the task and the other person. On simple tasks, where the individual is not challenged by the task, the interference effect is negligible and performance, therefore, is facilitated. On more complex tasks, where drive is not strong enough to effectively compete against the effects of distraction, there is no performance gain. TheStroop task(Stroop effect) demonstrated that, by narrowing a person's focus of attention on certain tasks, distractions can improve performance.[53] Social orientation theoryconsiders the way a person approaches social situations. It predicts that self-confident individuals with a positive outlook will show performance gains through social facilitation, whereas a self-conscious individual approaching social situations with apprehension is less likely to perform well due to social interference effects.[53] Intergroup dynamics(orintergroup relations) refers to the behavioural and psychological relationship between two or more groups. This includes perceptions, attitudes, opinions, and behaviours towards one's own group, as well as those towards another group. In some cases,intergroup dynamicsis prosocial, positive, and beneficial (for example, when multiple research teams work together to accomplish a task or goal). In other cases,intergroup dynamicscan create conflict. For example, Fischer & Ferlie found initially positive dynamics between a clinical institution and its external authorities dramatically changed to a 'hot' and intractable conflict when authorities interfered with its embedded clinical model.[60]Similarly, underlying the 1999Columbine High School shootinginLittleton, Colorado, United States,intergroup dynamicsplayed a significant role inEric Harris’ and Dylan Klebold’s decision to kill a teacher and 14 students (including themselves).[50] According tosocial identity theory, intergroup conflict starts with a process of comparison between individuals in one group (the ingroup) to those of another group (the outgroup).[61]This comparison process is not unbiased and objective. Instead, it is a mechanism for enhancing one's self-esteem.[2]In the process of such comparisons, an individual tends to: Even without anyintergroup interaction(as in theminimal group paradigm), individuals begin to show favouritism towards their own group, and negative reactions towards the outgroup.[62]This conflict can result in prejudice,stereotypes, anddiscrimination. 
Intergroup conflict can be highly competitive, especially for social groups with a long history of conflict (for example, the 1994Rwandan genocide, rooted in group conflict between the ethnic Hutu and Tutsi).[2]In contrast, intergroup competition can sometimes be relatively harmless, particularly in situations where there is little history of conflict (for example, between students of different universities) leading to relatively harmless generalizations and mild competitive behaviours.[2]Intergroup conflict is commonly recognized amidst racial, ethnic, religious, and political groups. The formation of intergroup conflict was investigated in a popular series of studies byMuzafer Sherifand colleagues in 1961, called theRobbers Cave Experiment.[63]The Robbers Cave Experiment was later used to supportrealistic conflict theory.[64]Other prominent theories relating to intergroup conflict includesocial dominance theory, and social-/self-categorization theory. There have been several strategies developed for reducing the tension, bias, prejudice, and conflict between social groups. These include thecontact hypothesis, thejigsaw classroom, and several categorization-based strategies. In 1954,Gordon Allportsuggested that by promoting contact between groups, prejudice can be reduced.[65]Further, he suggested four optimal conditions for contact: equal status between the groups in the situation; common goals; intergroup cooperation; and the support of authorities, law, or customs.[66]Since then, over 500 studies have been done on prejudice reduction under variations of the contact hypothesis, and a meta-analytic review suggests overall support for its efficacy.[66]In some cases, even without the four optimal conditions outlined by Allport, prejudice between groups can be reduced.[66] Under the contact hypothesis, several models have been developed. A number of these models utilize asuperordinate identityto reduce prejudice. That is, a more broadly defined, ‘umbrella’ group/identity that includes the groups that are in conflict. By emphasizing this superordinate identity, individuals in both subgroups can share a common social identity.[67]For example, if there is conflict between White, Black, and Latino students in a high school, one might try to emphasize the ‘high school’ group/identity that students share to reduce conflict between the groups. Models utilizing superordinate identities include thecommon ingroup identitymodel, the ingroup projection model, the mutual intergroup differentiation model, and the ingroup identity model.[67]Similarly, "recategorization" is a broader term used by Gaertner et al. to describe the strategies aforementioned.[62] There are techniques that utilize interdependence, between two or more groups, with the aim of reducing prejudice. That is, members across groups have to rely on one another to accomplish some goal or task. In theRobbers Cave Experiment, Sherif used this strategy to reduce conflict between groups.[62]Elliot Aronson’sJigsaw Classroomalso uses this strategy of interdependence.[68]In 1971, thick racial tensions were abounding in Austin, Texas. Aronson was brought in to examine the nature of this tension within schools, and to devise a strategy for reducing it (so to improve the process of school integration, mandated underBrown v. Board of Educationin 1954). 
Despite strong evidence for the effectiveness of thejigsaw classroom,the strategy was not widely used (arguably because of strong attitudes existing outside of the schools, which still resisted the notion that racial and ethnic minority groups are equal to Whites and, similarly, should be integrated into schools).
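The intermember-relations measures described earlier in this article, network density (how many members are linked to one another) and degree centrality (the number of ties a member has), can be computed directly from a group's tie structure. The following minimal sketch uses a hypothetical five-member group and plain Python; the names and ties are invented for illustration.

# Ties in a hypothetical five-member group, stored as an undirected adjacency list.
ties = {
    "Ana": {"Ben", "Cal", "Dee"},
    "Ben": {"Ana", "Cal"},
    "Cal": {"Ana", "Ben", "Dee", "Eve"},
    "Dee": {"Ana", "Cal"},
    "Eve": {"Cal"},
}

n = len(ties)
edges = sum(len(neighbours) for neighbours in ties.values()) // 2  # each tie counted twice

# Density: the proportion of possible ties that actually exist.
density = 2 * edges / (n * (n - 1))

# Degree centrality: each member's tie count, normalised by the maximum possible (n - 1).
centrality = {member: len(neighbours) / (n - 1) for member, neighbours in ties.items()}

print(f"density = {density:.2f}")            # 0.60 for this example
print(max(centrality, key=centrality.get))   # "Cal", the best-connected member

A member such as "Cal", with the highest degree centrality, corresponds to the 'go-between' role mentioned earlier, whose many ties can ease communication across the group.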
https://en.wikipedia.org/wiki/Group_behaviour
Ahive mind,group mind,group ego,mind coalescence, orgestalt intelligenceinscience fictionis aplot devicein which multiple minds, or consciousnesses, are linked into a singlecollective consciousnessorintelligence.[1][2] This term may be used interchangeably withhive mind.[3][4]"Hive mind" tends to describe a group mind in which the linked individuals have noidentityorfree willand arepossessedormind-controlledas extensions of the hive mind. It is frequently associated with the concept of an entity that spreads among individuals and suppresses or subsumes theirconsciousnessin the process of integrating them into its own collective consciousness. The concept of thegrouporhive mindis an intelligent version of real-lifesuperorganismssuch as abeehiveor anant colony.[citation needed] The first alien hive society was depicted inH. G. Wells'sThe First Men in the Moon(1901) while the use of human hive minds in literature goes back at least as far asDavid H. Keller'sThe Human Termites(published inWonder Storiesin 1929) andOlaf Stapledon's science-fiction novelLast and First Men(1930),[5][6]which is the first known use of the term "group mind" in science fiction.[7][2]The phrase "hive mind" in science fiction has been traced toEdmond Hamilton's novelThe Face of the Deep(published inCaptain Futurein 1942) referring to the hive mind of bees as a simile,[8][9]thenJames H. Schmitz'sSecond Night of Summer(1950).[10][3]A group mind might be formed by any fictional plot device that facilitates brain to brain communication, such astelepathy. Some hive minds feature members that are controlled by a centralised "hive brain" or "hive queen," but others feature a decentralised approach in which members interact equally or roughly equally to come to decisions.[11]The packs of Tines inVernor Vinge'sA Fire Upon the DeepandThe Children of the Skyare an example of such decentralized group minds.[12] Hive minds are typically viewed in a negative light, especially in earlier works, but some newer works portray them as neutral or positive.[5][13] As conceived inspeculative fiction, hive minds often imply (almost) complete loss (or lack) ofindividuality,identity, andpersonhood. However, while the individual members of a group mind may not have such things, the group mind as whole will have them, possibly even to greater degree than individual people (just like a human has more personhood than a single neuron cell). The individuals forming the hive may specialize in different functions, similarly tosocial insects.[citation needed]
https://en.wikipedia.org/wiki/Group_mind_(science_fiction)
The idea of aknowledge ecosystemis an approach toknowledge managementwhich claims to foster the dynamic evolution ofknowledgeinteractions between entities to improvedecision-makingand innovation through improved evolutionary networks ofcollaboration.[1][2] In contrast to purely directivemanagementefforts that attempt either to manage or direct outcomes, knowledgedigital ecosystemsespouse that knowledgestrategiesshould focus more on enablingself-organizationin response to changing environments.[3]The suitability between knowledge and the problems confronted defines the degree of "fitness" of a knowledge ecosystem. Articles discussing such ecological approaches typically incorporate elements ofcomplex adaptive systemstheory. TheCanadian Governmentis among the organizations known to have considered implementing a knowledge ecosystem.[4] To understand knowledge ecology as a productive operation, it is helpful to focus on the knowledge ecosystem that lies at its core. Like natural ecosystems, these knowledge ecosystems have inputs, throughputs and outputs operating in an open exchange relationship with their environments. Multiple layers and levels of systems may be integrated to form a complete ecosystem. These systems consist of interlinked knowledge resources, databases, human experts, and artificial knowledge agents that collectively provide online knowledge for the anywhere, anytime performance of organizational tasks. The availability of knowledge on an anywhere-anytime basis blurs the line between learning and work performance. Both can occur simultaneously and sometimes interchangeably.[5] Knowledge ecosystems operate on two types of technology cores: one involving content or substantive industry knowledge, and the other involving computer hardware, software, and telecommunications, which serve as the "procedural technology" for performing operations. These technologies provide knowledge management capabilities that are far beyond individual human capabilities. In a corporate training context, a substantive technology would be knowledge of various business functions, tasks, R&D process products, markets, finances, and relationships.[6]Research, coding, documentation, publication and sharing of electronic resources create this background knowledge. Computer-to-computer and human-to-human communications enable knowledge ecosystems to be interactive and responsive within a larger community and its subsystems.[7]
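The architecture sketched above, interlinked knowledge resources, human experts, and artificial agents whose "fitness" depends on how well the available knowledge matches the problems confronted, can be caricatured in a few lines of code. This is a toy illustration only; the entity kinds, topics, and overlap-based fitness measure are assumptions made for the example, not part of any particular system described here.

from dataclasses import dataclass, field

@dataclass
class KnowledgeEntity:
    """One node in the ecosystem: a database, a human expert, or a software agent."""
    name: str
    kind: str                       # "database", "expert", or "agent"
    topics: set = field(default_factory=set)

def fitness(entities, problems):
    """Toy 'fitness': the share of problem topics covered by at least one entity."""
    covered = set().union(*(e.topics for e in entities)) if entities else set()
    return len(covered & problems) / len(problems) if problems else 1.0

ecosystem = [
    KnowledgeEntity("sales-db", "database", {"markets", "finances"}),
    KnowledgeEntity("Dr. Lee", "expert", {"R&D", "products"}),
    KnowledgeEntity("faq-bot", "agent", {"tasks", "products"}),
]

problems = {"markets", "R&D", "logistics"}
print(f"fitness = {fitness(ecosystem, problems):.2f}")  # 0.67: "logistics" is not covered

In this reading, improving fitness means evolving the network (adding entities, links, or topics) rather than centrally directing outcomes, which is the self-organization emphasis described above.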
https://en.wikipedia.org/wiki/Knowledge_ecosystem
Open-source intelligence(OSINT) refers to the systematic collection, evaluation, and analysis of publicly available information from open sources to produce actionable intelligence. These sources include, but are not limited to, traditional media (newspapers, radio, television), government publications, academic research, commercial databases, public websites, social media platforms, geospatial data, and technical infrastructure information. OSINT is employed across a wide range of sectors including national security, law enforcement, corporate intelligence, journalism, cybersecurity, humanitarian aid, and academic research. It supports decision-making by providing timely, relevant, and verified insights derived from legally accessible and non-classified materials. Unlike other forms of intelligence such as human intelligence (HUMINT), signals intelligence (SIGINT), or imagery intelligence (IMINT), OSINT does not rely on covert or classified means of collection. Instead, it leverages information that is freely accessible to the public, often through digital channels, though physical documents and broadcasts also remain valid sources. OSINT sources are commonly divided into six broad categories of information flow.[1] OSINT is distinguished from research in that it applies theprocess of intelligenceto create tailored knowledge supportive of a specific decision by a specific individual or group.[2] Open-source intelligence can be collected in a variety of different ways.[3] OSINT, broadly defined, involves gathering and analyzing publicly accessible information to produce actionable insights.[4] TheU.S. Department of Homeland Securitydefines OSINT as intelligence derived from publicly available information, collected and disseminated promptly to address specific intelligence needs.[5] NATOdescribes OSINT as intelligence obtained from publicly available information and other unclassified data with limited public distribution or access.[6] TheEuropean Uniondefines OSINT as the collection and analysis of information from open sources to generate actionable intelligence, supporting areas like national security, law enforcement, and business intelligence.[7] TheUnited Nationshas also recognized OSINT’s potential, noting its value in monitoring member states’ compliance with international regulations across various sectors, including public health and human rights.[8] In theprivate sector, companies likeIBMdefine OSINT as the process of gathering and analyzing publicly available information to assess threats, inform decisions, or answer specific questions. Similarly, cybersecurity firms such asCrowdStrikedescribe OSINT as the act of collecting and analyzing publicly available data for intelligence purposes.[9] OSINT practices have been documented as early as the mid-19th century in the United States and early 20th century in the United Kingdom.[10] OSINT in theUnited Statestraces its origins to the 1941 creation of theForeign Broadcast Monitoring Service (FBMS), an agency responsible for the monitoring of foreign broadcasts.
An example of their work was the correlation of changes in the price of oranges in Paris with successful bombings of railway bridges duringWorld War II.[11] TheAspin-Brown Commissionstated in 1996 that US access to open sources was "severely deficient" and that this should be a "top priority" for both funding andDCIattention.[12] In July 2004, following theSeptember 11 attacks, the9/11 Commissionrecommended the creation of an open-source intelligence agency.[13]In March 2005, theIraq Intelligence Commissionrecommended[14]the creation of an open-source directorate at the CIA. Following these recommendations, in November 2005 theDirector of National Intelligenceannounced the creation of the DNIOpen Source Center. The Center was established to collect information available from "the Internet, databases, press, radio, television, video, geospatial data, photos and commercial imagery."[15]In addition to collecting openly available information, it would train analysts to make better use of this information. The center absorbed theCIA's previously existingForeign Broadcast Information Service(FBIS), originally established in 1941, with FBIS head Douglas Naquin named as director of the center.[16]Then, following the events of9/11theIntelligence Reform and Terrorism Prevention Actmerged FBIS and other research elements into theOffice of the Director of National Intelligencecreating theOpen Source Enterprise. Furthermore, the private sector has invested in tools which aid in OSINT collection and analysis. Specifically,In-Q-Tel, aCentral Intelligence Agencysupported venture capital firm in Arlington, VA assisted companies develop web-monitoring and predictive analysis tools. In December 2005, the Director of National Intelligence appointedEliot A. Jardinesas the Assistant Deputy Director of National Intelligence for Open Source to serve as the Intelligence Community's senior intelligence officer for open source and to provide strategy, guidance and oversight for theNational Open Source Enterprise.[17]Mr. Jardines has established the National Open Source Enterprise[18]and authoredintelligence community directive 301. In 2008, Mr. Jardines returned to the private sector and was succeeded byDan Butlerwho is ADDNI/OS[19]and previously Mr. Jardines' Senior Advisor for Policy.[20] Open-source intelligence (OSINT) relies on a wide range of tools and platforms to collect, analyze, and validate publicly available information. These tools vary from general-purpose web browsers to specialized software and frameworks designed specifically for open-source investigations. The web browser serves as a foundational tool in OSINT workflows, granting access to vast amounts of publicly available data across websites, forums, blogs, and databases. It also enables the use of both open-source and proprietary software tools—either purpose-built for OSINT or adaptable for intelligence-gathering purposes. A number of dedicated tools and platforms have been developed to streamline the process of gathering and analyzing open-source information. These include: The OSINT Framework , an open-source project maintained on GitHub, provides a categorized directory of over 30 major types of OSINT tools, covering areas such as social media investigation, geolocation, domain analysis, and more.[3] Given the evolving nature of digital platforms and online tools, continuous learning is essential for effective OSINT practice. 
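As a concrete illustration of the basic, browser-level collection described above, the following minimal Python sketch fetches a public web page and extracts its title and outbound links using only the standard library. The target URL is a placeholder, not a recommended source; any real collection should respect a site's terms of use, robots.txt, and applicable law.

# Illustrative sketch of one elementary OSINT collection step: retrieve a
# publicly available page and list its title and outbound links.
# The URL below is a placeholder chosen for demonstration only.
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkAndTitleParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self.title, self._in_title = [], "", False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
parser = LinkAndTitleParser()
parser.feed(html)
print("title:", parser.title.strip())
print("outbound links:", parser.links)

In practice such a script would be one small component in a larger workflow of source discovery, verification, and analysis.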
Numerous educational organizations, investigative groups, and training institutions offer resources to support skill development in this field. Notable contributors include: Books such asOpen Source Intelligence Techniquesby Michael Bazzell serve as practical guides to navigating the digital landscape, offering curated lists of tools and techniques across multiple domains. However, due to the rapid pace of change in the online environment, the author emphasizes the importance of ongoing study, training, and adaptation to maintain relevance and effectiveness in OSINT operations.[1] As OSINT practitioners often conduct sensitive or public investigations, maintaining personal safety and operational security is critical. Analysts may employ various tools to protect their identity and digital footprint. Ryan Fedasiuk, an analyst at the Center for Security and Emerging Technology, recommends several OPSEC best practices and tools, including: These tools help reduce exposure to potential threats when conducting online investigations, especially when researching adversarial or high-risk subjects.[2] In institutional settings, OSINT is often integrated into broader command and control systems. For example, CPCE (Command Post Communications Environment) by Systematic incorporates open-source feeds such as those provided by Jane’s Information Services , enabling real-time intelligence integration into military and defense operations. One of the primary challenges in open-source intelligence (OSINT) is the sheer volume of publicly available information, often referred to as the "information explosion." The exponential growth of digital content across news platforms, social media, forums, blogs, and official publications presents significant difficulties for analysts attempting to identify, verify, and contextualize relevant data. The rapid pace at which new information is generated often outstrips the capacity of analysts to process and evaluate it effectively. This can lead to difficulties in distinguishing reliable sources from misinformation or disinformation, and in prioritizing intelligence that is both timely and actionable. To mitigate these challenges, some organizations have explored the use of automated tools, machine learning algorithms, and crowdsourcing techniques. While large-scale automation remains a developing field, limited efforts involving amateur or citizen analysts have occasionally contributed to the filtering and categorization of open-source data—though such methods are generally considered supplementary rather than definitive. While OSINT involves only legally accessible, publicly available information, the distinction between lawful research and illegal activity becomes critical when individuals or entities misuse open-source practices. In most jurisdictions, the unauthorized collection and transmission of sensitive information to a foreign government or intelligence agency—even if obtained from public sources—can constitute espionage , particularly if it involves strategic, military, or national security-related data. Espionage of this nature, distinct from treason (which typically involves betrayal of one’s own state), has historically been employed by states as a tool of diplomacy, warfare, and influence. It is important to note that while OSINT itself is a legitimate and transparent discipline, its misuse—particularly when combined with covert intent or malicious purpose—can cross into legally and ethically prohibited territory. 
Therefore, responsible OSINT practitioners emphasize adherence to legal standards, ethical guidelines, and operational security best practices. As open-source intelligence (OSINT) has grown in prominence across government, military, corporate, and journalistic sectors, a number of professional associations and certification programs have emerged to support practitioners, standardize methodologies, and promote ethical conduct. The OSINT Foundation is a U.S.-based professional association dedicated to advancing the practice of open-source intelligence within the U.S. Intelligence Community and beyond. Open exclusively to U.S. citizens, the organization aims to elevate the visibility and professionalism of OSINT as a formal intelligence discipline. It serves as a platform for knowledge sharing, networking, and advocacy among current and aspiring OSINT practitioners. OSMOSIS , an offshoot of the Hetherington Group—a private investigation and corporate intelligence firm—offers training programs and conferences that lead to the Open-Source Certified (OSC) designation. The OSC program was developed to help formalize and standardize professional OSINT practices. According to the program’s guidelines, candidates must fulfill specific prerequisites and pass a 100-question examination to demonstrate proficiency in open-source research and analysis. The certification emphasizes legal compliance, ethical behavior, and technical competence in gathering and interpreting publicly available information. IntelTechniques , a provider of investigative and OSINT training, offers the Open Source Intelligence Professional (OSIP) certification. The program is designed to test participants' ability to produce actionable intelligence using real-world scenarios and standardized evaluation criteria. In addition to structured courses, IntelTechniques fosters a moderated online community where professionals can exchange insights on best practices, tools, and methodologies. While the OSIP certification is optional, participation in the training itself helps individuals develop and refine their OSINT skills for use in law enforcement, journalism, cybersecurity, and corporate investigations. Beyond formal certifications, several organizations offer specialized OSINT training and platforms for professional development: These informal and semi-formal learning environments play a vital role in expanding access to OSINT education and fostering global collaboration among practitioners.
https://en.wikipedia.org/wiki/Open-source_intelligence
Social commerce[1]is a subset ofelectronic commercethat involvessocial mediaand online media that supports social interaction, and user contributions to assist online buying and selling of products and services.[2] More succinctly, social commerce is the use ofsocial network(s) in the context of e-commerce transactions, from browsing to checkout, without ever leaving a social media platform.[3] The term social commerce was introduced by Yahoo! in November 2005[4]to describe a set of online collaborative shopping tools such as shared pick lists, user ratings and otheruser-generated content-sharing of online product information and advice. The concept of social commerce was developed byDavid Beiselto denote user-generatedadvertorialcontent on e-commerce sites,[5]and bySteve Rubel[6]to include collaborative e-commerce tools that enable shoppers "to get advice from trusted individuals, find goods and services and then purchase them". The social networks that spread this advice have been found[7]to increase the customer's trust in one retailer over another. Social commerce aims to assist companies in achieving the following purposes. Firstly, social commerce helps companies engage customers with their brands according to the customers' social behaviors. Secondly, it provides an incentive for customers to return to their website. Thirdly, it provides customers with a platform to talk about their brand on their website. Fourthly, it provides all the information customers need to research, compare, and ultimately choose the company over its competitors, thus purchasing from it and not from others.[8] Today, the range of social commerce has expanded to include social media tools and content used in the context of e-commerce, especially in thefashion industry. Examples of social commerce include customer ratings and reviews, user recommendations and referrals,social shoppingtools (sharing the act of shopping online), forums and communities,social media optimization, social applications and social advertising.[9]Technologies such asaugmented realityhave also been integrated with social commerce, allowing shoppers to visualize apparel items on themselves and solicit feedback through social media tools.[10] Some academics[11]have sought to distinguish "social commerce" from "social shopping", with the former being referred to as collaborative networks of online vendors; the latter, the collaborative activity of online shoppers. The attraction and effectiveness of social commerce can be understood in terms ofRobert Cialdini's principles of influence, as set out inInfluence: Science and Practice. Social commerce has become a broad term encapsulating many different technologies. It can be categorized as offsite and onsite social commerce.[citation needed] Onsite social commercerefers to retailers including social sharing and other social functionality on their website. Some notable examples includeZazzle, which enables users to share their purchases;Macy's, which allows users to create a poll to find the right product; andFab.com, which shows a live feed of what other shoppers are buying. Onsite user reviews are also considered a part of social commerce. This approach has been successful in improvingcustomer engagement, conversion and word-of-mouth branding according to several industry sources.[16] Offsite social commerceincludes activities that happen outside of the retailers' website.
These may include Facebook storefronts, posting products on Facebook, Twitter, Pinterest and other social networks, advertisements, etc. However, many large brands seem to be[when?]abandoning that approach.[17]A recent[when?]study by W3B suggests that just two percent of Facebook's 1.5 billion users have ever made a purchase through the social network.[18] Social commerce can be measured by any of the principal ways to measure social media.[19] This category is based on individuals' shopping, selling, and recommending behaviors.[20] Here are some notable business examples of social commerce: Facebook commerce, f-commerce, and f-comm refer to the buying and selling of goods or services through Facebook, either through Facebook directly or through the Facebook Open Graph.[22]By March 2010, 1.5 million businesses had pages on Facebook[23]which were built withFacebook Markup Language(FBML). A year later, in March 2011, FacebookdeprecatedFBML and adoptediframes.[24]This allowed developers to gather more information about their Facebook visitors.[25] The "2011 Social Commerce Study" estimated that 42% of online consumers had "followed" a retailer proactively through Facebook, Twitter or the retailer's blog, and that a full one-third of shoppers said they would be likely to make a purchase directly from Facebook (35%) or Twitter (32%).[26] Micro-influencers are designers, photographers, writers, athletes, bohemian world-wanderers, professors, or anyprofessionalwho can speak authentically about a brand. Thesechannelshave fewer followers than typical celebrity accounts, usually fewer than 10,000 followers (according to Georgia Hatton of Social Media Today[27]), but the quality of theaudiencestends to be better, with greater potential for a like-minded, tight-knit community of shoppers eager to take recommendations from one another.[28]This topic has also been discussed by many other organizations such as Adweek,[29]Medium,[30]Forbes,[31]Brand24,[32]and many others.
https://en.wikipedia.org/wiki/Social_commerce
Social epistemologyrefers to a broad set of approaches that can be taken inepistemology(the study ofknowledge) that construes human knowledge as a collective achievement. Another way of characterizing social epistemology is as the evaluation of the social dimensions of knowledge or information.[1] As a field of inquiry inanalytic philosophy, social epistemology deals with questions about knowledge in social contexts, meaning those in which knowledge attributions cannot be explained by examining individuals in isolation from one another. The most common topics discussed in contemporary social epistemology aretestimony(e.g. "When does a belief that x is true which resulted from being told 'x is true' constitute knowledge?"), peer disagreement (e.g. "When and how should I revise my beliefs in light of other people holding beliefs that contradict mine?"), and group epistemology (e.g. "What does it mean to attribute knowledge to groups rather than individuals, and when are such knowledge attributions appropriate?").[1]Social epistemology also examines the social justification of belief.[1] One of the enduring difficulties with defining "social epistemology" that arises is the attempt to determine what the word "knowledge" means in this context. There is also a challenge in arriving at a definition of "social" which satisfies academics from different disciplines.[1]Social epistemologists may exist working in many of the disciplines of the humanities andsocial sciences, most commonly inphilosophyandsociology. In addition to marking a distinct movement in traditional and analyticepistemology, social epistemology is associated with the interdisciplinary field ofscience and technology studies(STS). The consideration of social dimensions of knowledge in relation to philosophy started in 380 B.C.E with Plato’s dialogue:Charmides.[2]This dialogue included Socrates' argument about whether anyone is capable of examining if another man's claim that he knows something, is true or not.[1]In it he questions the degree of certainty an unprofessional in a field can have towards a person’s claim to be a specialist in that same field. Charmides also explored the tendency of the utopian vision of social relations to degenerate into dystopian fantasy.[3]As the exploration of a dependence on authoritative figures constitutes a part of the study of social epistemology, it confirms the existence of the ideology in minds long before it was given its label. In 1936,Karl MannheimturnedKarl Marx‘s theory of ideology (which interpreted the “social” aspect in epistemology to be of a political or sociological nature) into an analysis of how the human society develops and functions in this respect. Particularly, this Marxist analysis prompted Mannheim to write Ideology and Utopia, which investigated the classical sociology of knowledge and the construct of ideology.[4] The term “social epistemology” was first coined by the library scientistsMargaret Egan.[5]andJesse Shera[6]in aLibrary Quarterlypaper at theUniversity of Chicago Graduate Library Schoolin the 1950s.[7]The term was used byRobert K. Mertonin a 1972 article in theAmerican Journal of Sociologyand then bySteven Shapinin 1979. However, it was not until the 1980s that the current sense of “social epistemology” began to emerge. 
In the 1980s, there was a powerful growth of interest amongst philosophers in topics such as the epistemic value of testimony, the nature and function of expertise, the proper distribution of cognitive labor and resources among individuals and communities, and the status of group reasoning and knowledge. In 1987, the philosophical journalSynthesepublished a special issue on social epistemology which included two authors who have since taken the branch of epistemology in two divergent directions:Alvin GoldmanandSteve Fuller.[8]Fuller founded a journal calledSocial Epistemology: A journal of knowledge, culture, and policyin 1987 and published his first book,Social Epistemology, in 1988. Goldman'sKnowledge in a Social Worldcame out in 1999. Goldman advocates for a type of epistemology which is sometimes called "veritistic epistemology" because of its strong emphasis on truth.[9]This type of epistemology is sometimes seen to side with "essentialism" as opposed to "multiculturalism".[9]But Goldman has argued that this association between veritistic epistemology and essentialism is not necessary.[9]He describes social epistemology as knowledge derived from one's interactions with another person, group or society. Goldman pursues the first of two strategies for the socialization of epistemology, which evaluates the social factors that affect the formation of knowledge based on true belief. In contrast, Fuller prefers the second strategy, which treats knowledge influenced by social factors as collectively accepted belief. The difference between the two can be illustrated with an example: the first strategy means analyzing how your degree of wealth (a social factor) influences what information you determine to be valid, whilst the second strategy applies when one evaluates wealth's influence upon the knowledge you acquire from the beliefs of the society in which you find yourself.[10] Fuller's position supports the conceptualization that social epistemology is a critique of context, particularly in his approach to "knowledge society" and the "university" as integral contexts of modern learning.[11]It is said that this articulated a reformulation of theDuhem–Quine thesis, which covers theunderdeterminationof theory by data.[11]It explains that the problem of context will assume this form: "knowledge is determined by its context".[11]In 2012, on the occasion of the 25th anniversary ofSocial Epistemology, Fuller reflected upon the history and the prospects of the field, including the need for social epistemology to re-connect with the larger issues of knowledge production first identified byCharles Sanders Peirceas "cognitive economy" and nowadays often pursued bylibrary and information science. As for "analytic social epistemology", to which Goldman has been a significant contributor, Fuller concludes that it has "failed to make significant progress owing, in part, to a minimal understanding of actual knowledge practices, a minimised role for philosophers in ongoing inquiry, and a focus on maintaining the status quo of epistemology as a field."[12] The basic view of knowledge that motivated the emergence of social epistemology as it is perceived today can be traced to the work ofThomas KuhnandMichel Foucault, which gained acknowledgment at the end of the 1960s. Both brought historical concerns directly to bear on problems long associated with thephilosophy of science.
Perhaps the most notable issue here was the nature oftruth, which both Kuhn and Foucault described as a relative and contingent notion. On this background, ongoing work in thesociology of scientific knowledge(SSK) and thehistory and philosophy of science(HPS) was able to assert its epistemological consequences, leading most notably to the establishment of thestrong programmeat theUniversity of Edinburgh. In terms of the two strands of social epistemology, Fuller is more sensitive and receptive to this historical trajectory (if not always in agreement) than Goldman, whose “veritistic” social epistemology can be reasonably read as a systematic rejection of the more extreme claims associated with Kuhn and Foucault. In the standard sense of the term today, social epistemology is a field withinanalytic philosophy. It focuses on the social aspects of how knowledge is created and disseminated. What precisely these social aspects are, and whether they have beneficial or detrimental effects upon the possibilities to create, acquire and spread knowledge is a subject of continuous debate. The most common topics discussed in contemporary social epistemology aretestimony(e.g. "When does a belief that 'x is true' which resulted from being told that 'x is true' constitute knowledge?"), peer disagreement (e.g. "When and how should I revise my beliefs in light of other people holding beliefs that contradict mine?"), and group epistemology (e.g. "What does it mean to attribute knowledge to groups rather than individuals, and when are such knowledge attributions appropriate?").[1] Within the field, "the social" is approached in two complementary and not mutually exclusive ways: "the social" character of knowledge can either be approached through inquiries ininter-individualepistemic relations or through inquiries focusing on epistemiccommunities. The inter-individual approach typically focuses on issues such as testimony, epistemic trust as a form of trust placed by one individual in another, epistemic dependence, epistemic authority, etc. The community approach typically focuses on issues such as community standards of justification, community procedures of critique, diversity, epistemic justice, and collective knowledge.[1] Social epistemology as a field within analytic philosophy has close ties to, and often overlaps withphilosophy of science. While parts of the field engage in abstract, normative considerations of knowledge creation and dissemination, other parts of the field are "naturalized epistemology" in the sense that they draw on empirically gained insights---which could meannatural scienceresearch from, e.g.,cognitive psychology, be thatqualitativeorquantitativesocial scienceresearch. (For the notion of "naturalized epistemology" seeWillard Van Orman Quine.) And while parts of the field are concerned with analytic considerations of rather general character, case-based and domain-specific inquiries in, e.g., knowledge creation in collaborative scientific practice, knowledge exchange on online platforms or knowledge gained in learning institutions play an increasing role. Important academic journals for social epistemology as a field within analytic philosophy are, e.g.,Episteme,Social Epistemology, andSynthese. However, major works within this field are also published in journals that predominantly address philosophers of science and psychology or in interdisciplinary journals which focus on particular domains of inquiry (such as, e.g.,Ethics and Information Technology). 
In both stages, both varieties of social epistemology remain largely "academic" or "theoretical" projects. Yet both emphasize the social significance of knowledge and therefore the cultural value of social epistemology itself. A range of journals publishing social epistemology welcome papers that include a policy dimension. More practical applications of social epistemology can be found in the areas oflibrary science,academic publishing, guidelines for scientific authorship and collaboration,knowledge policyand debates over the role of the Internet in knowledge transmission and creation. Social epistemology is still considered a relatively new addition to philosophy, with its problems and theories still fresh and in rapid movement.[13]Of increasing importance is social epistemology developments within transdisciplinarity as manifested by media ecology.
https://en.wikipedia.org/wiki/Social_epistemology
Stigmergy(/ˈstɪɡmərdʒi/STIG-mər-jee) is a mechanism of indirectcoordination, through the environment, between agents or actions.[1]The principle is that the trace left in theenvironmentby an individual action stimulates the performance of a succeeding action by the same or different agent. Agents that respond to traces in the environment receive positive fitness benefits, reinforcing the likelihood of these behaviors becoming fixed within a population over time.[2] Stigmergy is a form ofself-organization. It produces complex, seemingly intelligent structures, without need for any planning, control, or even direct communication between the agents. As such it supports efficient collaboration between extremely simple agents, who may lack memory or individual awareness of each other.[1][3] The term "stigmergy" was introduced by French biologistPierre-Paul Grasséin 1959 to refer totermitebehavior. He defined it as: "Stimulation of workers by the performance they have achieved." It is derived from the Greek wordsστίγμαstigma"mark, sign" andἔργονergon"work, action", and captures the notion that an agent’s actions leave signs in the environment, signs that it and other agents sense and that determine and incite their subsequent actions.[4][5][6] Later on, a distinction was made between the stigmergic phenomenon, which is specific to the guidance of additional work, and the more general, non-work specific incitation, for which the termsematectoniccommunication was coined[7]byE. O. Wilson, from the Greek wordsσῆμαsema"sign, token", andτέκτωνtecton"craftsman, builder": "There is a need for a more general, somewhat less clumsy expression to denote the evocation of any form of behavior or physiological change by the evidences of work performed by other animals, including the special case of the guidance of additional work." Stigmergy is now one of the key concepts in the field ofswarm intelligence.[8] Stigmergy was first observed insocial insects. For example,antsexchange information by laying downpheromones(the trace) on their way back to the nest when they have found food. In that way, they collectively develop acomplex networkof trails, connecting the nest in an efficient way to various food sources. When ants come out of the nest searching for food, they are stimulated by the pheromone to follow the trail towards the food source. The network of trails functions as a shared external memory for the ant colony.[9] In computer science, this general method has been applied in a variety of techniques calledant colony optimization, which search for solutions to complex problems by depositing "virtual pheromones" along paths that appear promising.[10]In the field ofartificial neural networks, stigmergy can be used as a computational memory. Federico Galatolo showed that a stigmergic memory can achieve the same performances of more complex and well established neural networks architectures likeLSTM.[11][12] Othereusocialcreatures, such astermites, use pheromones to build their complex nests by following a simpledecentralizedrule set. Each insect scoops up a 'mudball' or similar material from its environment, infuses the ball with pheromones, and deposits it on the ground, initially in a random spot. However, termites are attracted to their nestmates' pheromones and are therefore more likely to drop their own mudballs on top of their neighbors'. The larger the heap of mud becomes, the more attractive it is, and therefore the more mud will be added to it (positive feedback). 
Over time this leads to the construction of pillars, arches, tunnels and chambers.[13] Stigmergy has been observed inbacteria, various species of which differentiate into distinct cell types and which participate in group behaviors that are guided by sophisticated temporal and spatial control systems.[14]Spectacular examples of multicellular behavior can be found among themyxobacteria. Myxobacteria travel inswarmscontaining manycellskept together by intercellular molecularsignals. Most myxobacteria are predatory: individuals benefit from aggregation as it allows accumulation of extracellularenzymeswhich are used to digest prey microorganisms. When nutrients are scarce, myxobacterial cells aggregate intofruiting bodies, within which the swarming cells transform themselves into dormant myxospores with thick cell walls. The fruiting process is thought to benefit myxobacteria by ensuring thatcell growthis resumed with a group (swarm) of myxobacteria, rather than isolated cells. Similar life cycles have developed among the cellularslime molds. The best known of the myxobacteria,Myxococcus xanthusandStigmatella aurantiaca, are studied in various laboratories asprokaryoticmodels of development.[15] Stigmergy studied ineusocialcreatures and physical systems, has been proposed as a model of analyzing someroboticssystems,[16]multi-agentsystems,[17]communication incomputer networks, andonline communities.[18] On theInternetthere are many collective projects where users interact only by modifying local parts of their shared virtual environment.Wikipediais an example of this.[19][20]The massive structure of information available in awiki,[21]or anopen source softwareproject such as theFreeBSD kernel[21]could be compared to atermitenest; one initial user leaves a seed of an idea (a mudball) which attracts other users who then build upon and modify this initial concept, eventually constructing an elaborate structure of connected thoughts.[22][23] In addition the concept of stigmergy has also been used to describe how cooperative work such as building design may be integrated. Designing a large contemporary building involves a large and diverse network of actors (e.g. architects, building engineers, static engineers, building services engineers). Their distributed activities may be partly integrated through practices of stigmergy.[24][25][26] The rise ofopen source softwarein the 21st century hasdisruptedthe business models of someproprietary softwareproviders, andopen contentprojects like Wikipedia have threatened the business models of companies likeBritannica. Researchers have studied collaborative open source projects, arguing they provide insights into the emergence of large-scale peer production and the growth ofgift economy.[27] Heather Marsh, associated with theOccupy Movement,Wikileaks, andAnonymous, has proposed a new social system where competition as a driving force would be replaced with a more collaborative society.[28]This proposed society would not userepresentative democracybut new forms of idea and action based governance and collaborative methods including stigmergy.[29][30][31]"With stigmergy, an initial idea is freely given, and the project is driven by the idea, not by a personality or group of personalities. No individual needs permission (competitive) or consensus (cooperative) to propose an idea or initiate a project."[29] Some at the Hong KongUmbrella Movementin 2014 were quoted recommending stigmergy as a way forward.[32][33]
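As an illustration of the ant colony optimization techniques mentioned above, the following minimal Python sketch deposits "virtual pheromones" on a toy travelling-salesman instance. The distance matrix, colony size, and parameter values are invented for demonstration and do not reproduce any particular published variant of the algorithm.

# Minimal ant colony optimization sketch on a 4-city toy problem.
import random

DIST = [            # symmetric distances between four hypothetical cities
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
N = len(DIST)
ALPHA, BETA = 1.0, 2.0      # pheromone vs. inverse-distance weighting
RHO = 0.5                   # evaporation rate
Q = 10.0                    # pheromone deposited per unit of tour quality
pheromone = [[1.0] * N for _ in range(N)]

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour():
    """One 'ant' builds a tour, biased by pheromone strength and distance."""
    start = random.randrange(N)
    tour, unvisited = [start], set(range(N)) - {start}
    while unvisited:
        cur = tour[-1]
        candidates = list(unvisited)
        weights = [
            (pheromone[cur][j] ** ALPHA) * ((1.0 / DIST[cur][j]) ** BETA)
            for j in candidates
        ]
        nxt = random.choices(candidates, weights=weights, k=1)[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

best = None
for _ in range(50):                                 # colony iterations
    tours = [build_tour() for _ in range(10)]       # 10 ants per iteration
    # Evaporation: existing trails fade so poor paths are gradually forgotten.
    for i in range(N):
        for j in range(N):
            pheromone[i][j] *= (1.0 - RHO)
    # Deposit: shorter tours leave stronger virtual pheromone trails.
    for tour in tours:
        length = tour_length(tour)
        for i in range(N):
            a, b = tour[i], tour[(i + 1) % N]
            pheromone[a][b] += Q / length
            pheromone[b][a] += Q / length
        if best is None or length < tour_length(best):
            best = tour
print("best tour:", best, "length:", tour_length(best))

The positive feedback loop mirrors the termite example: promising traces attract further reinforcement, while evaporation prevents early choices from dominating forever.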
https://en.wikipedia.org/wiki/Stigmergy
Raymond Bernard Cattell(20 March 1905 – 2 February 1998) was a British-Americanpsychologist, known for his psychometric research into intrapersonal psychological structure.[1][2]His work also explored the basic dimensions ofpersonalityandtemperament, the range of cognitive abilities, the dynamic dimensions ofmotivationandemotion, the clinical dimensions of abnormal personality, patterns of group syntality and social behavior,[3]applications of personality research topsychotherapyandlearning theory,[4]predictors ofcreativityand achievement,[5]and many multivariate research methods[6]including the refinement of factor analytic methods for exploring and measuring these domains.[7][8]Cattell authored, co-authored, or edited almost 60 scholarly books, more than 500 research articles, and over 30 standardized psychometric tests, questionnaires, and rating scales.[9][10]According to a widely cited ranking, Cattell was the 16th most eminent,[11]7th most cited in the scientific journal literature,[12]and among the most productive psychologists of the 20th century.[13] Cattell was an early proponent of usingfactor analytic methodsinstead of what he called "subjective verbal theorizing" to explore empirically the basic dimensions of personality, motivation, and cognitive abilities. One of the results of Cattell's application of factor analysis was his discovery of 16 separate primary trait factors within the normal personality sphere (based on the trait lexicon).[14]He called these factors "source traits".[15]This theory of personality factors and the self-report instrument used to measure them are known respectively as the16 personality factor modeland the16PF Questionnaire(16PF).[16] Cattell also undertook a series of empirical studies into the basic dimensions of other psychological domains:intelligence,[17]motivation,[18]career assessmentand vocational interests.[19]Cattell theorized the existence offluid and crystallized intelligenceto explain human cognitive ability,[20]investigated changes in Gf and Gc over the lifespan,[21]and constructed theCulture Fair Intelligence Testto minimize the bias of written language and cultural background in intelligence testing.[22] Cattell's research was mainly in personality, abilities, motivations, and innovative multivariate research methods and statistical analysis (especially his many refinements to exploratory factor analytic methodology).[8][23]In his personality research, he is best remembered for his factor-analytically derived16-factor model of normal personality structure,[10]arguing for this model overEysenck's simpler higher-order 3-factor model, and constructing measures of these primary factors in the form of the 16PF Questionnaire (and its downward extensions: HSPQ, and CPQ, respectively).[15]He was the first to propose a hierarchical, multi-level model of personality with the many basic primary factors at the first level and the fewer, broader, "second-order" factors at a higher stratum of personality organization.[24]These "global trait" constructs are the precursors of the currently popularBig Five(FFM) model of personality.[25][26][27][28]Cattell's research led to further advances, such as distinguishing between state and trait measures (e.g., state-trait anxiety),[29]ranging on a continuum from immediate transitory emotional states, through longer-acting mood states, dynamic motivational traits, and also relatively enduring personality traits.[30]Cattell also conducted empirical studies into developmental changes in personality trait 
constructs across the lifespan.[31] In the cognitive abilities domain, Cattell researched a wide range of abilities, but is best known for the distinction betweenfluid and crystallized intelligence.[20]He distinguished between the abstract, adaptive, biologically-influenced cognitive abilities that he called "fluid intelligence" and the applied, experience-based and learning-enhanced ability that he called "crystallized intelligence." Thus, for example, a mechanic who has worked on airplane engines for 30 years might have a huge amount of "crystallized" knowledge about the workings of these engines, while a new young engineer with more "fluid intelligence" might focus more on the theory of engine functioning, these two types of abilities might complement each other and work together toward achieving a goal. As a foundation for this distinction, Cattell developed the investment-model of ability, arguing that crystallized ability emerged from the investment of fluid ability in a particular topic of knowledge. He contributed tocognitive epidemiologywith his theory that crystallized knowledge, while more applied, could be maintained or even increase after fluid ability begins to decline with age, a concept used in theNational Adult Reading Test(NART). Cattell constructed a number of ability tests, including the Comprehensive Ability Battery (CAB) that provides measures of 20 primary abilities,[32]and theCulture Fair Intelligence Test(CFIT) which was designed to provide a completely non-verbal measure of intelligence like that now seen in theRaven's. The Culture Fair Intelligence Scales were intended to minimize the influence of cultural or educational background on the results of intelligence tests.[33] In regard to statistical methodology, in 1960 Cattell founded theSociety of Multivariate Experimental Psychology(SMEP), and its journalMultivariate Behavioral Research, in order to bring together, encourage, and support scientists interested in multi-variate research.[34]He was an early and frequent user offactor analysis(a statistical procedure for finding underlying factors in data). Cattell also developed new factor analytic techniques, for example, by inventing thescree test, which uses the curve of latent roots to judge the optimal number of factors to extract.[35]He also developed a new factor analysis rotation procedure—the "Procrustes" or non-orthogonal rotation, designed to let the data itself determine the best location of factors, rather than requiring orthogonal factors. 
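A minimal sketch of the scree criterion mentioned above, assuming synthetic illustrative data: the latent roots (eigenvalues) of a correlation matrix are listed in decreasing order so the "elbow" of the curve can be inspected. The sample size, number of variables, and shared-variance structure below are hypothetical.

# Illustrative scree-test sketch: inspect the latent roots of a correlation
# matrix to judge how many factors to retain. Data are random and contrived
# so that roughly two factors dominate.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 8))          # 200 hypothetical respondents, 8 variables
common = rng.normal(size=(200, 2))        # two shared sources of variance
data[:, :4] += common[:, [0]]
data[:, 4:] += common[:, [1]]

corr = np.corrcoef(data, rowvar=False)            # 8 x 8 correlation matrix
latent_roots = np.linalg.eigvalsh(corr)[::-1]     # eigenvalues, largest first

for k, root in enumerate(latent_roots, start=1):
    print(f"factor {k}: latent root = {root:.2f}")
# In a scree plot these values are graphed against k; factors before the
# point where the curve flattens out (the "scree") are retained.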
Additional contributions include the Coefficient of Profile Similarity (taking account of shape, scatter, and level of two score profiles); P-technique factor analysis based on repeated measurements of a single individual (sampling of variables, rather than sampling of persons); dR-technique factor analysis for elucidating change dimensions (including transitory emotional states, and longer-lasting mood states); the Taxonome program for ascertaining the number and contents of clusters in a data set; the Rotoplot program for attaining maximum simple structure factor pattern solutions.[8]As well, he put forward the Dynamic Calculus for assessing interests and motivation,[18][36]the Basic Data Relations Box (assessing dimensions of experimental designs),[37]the group syntality construct ("personality" of a group),[38]the triadic theory of cognitive abilities,[39]the Ability Dimension Analysis Chart (ADAC),[40]and Multiple Abstract Variance Analysis (MAVA), with "specification equations" to embody genetic and environmental variables and their interactions.[41] AsLee J. CronbachatStanford Universitystated: The thirty-year evolution of the data box and related methodology fed on bold conjecture, self-criticism, unbridled imagination, rational comparison of models in the abstract, and responsiveness to the nasty surprises of data. The story epitomizes scientific effort at its best. Raymond Cattell was born on 20 March 1905 inHill Top, West Bromwich, a small town in England nearBirminghamwhere his father's family was involved in inventing new parts for engines, automobiles and other machines. Thus, his growing up years were a time when great technological and scientific ideas and advances were taking place and this greatly influenced his perspective on how a few people could actually make a difference in the world. He wrote: "1905 was a felicitous year in which to be born. The airplane was just a year old. The Curies and Rutherford in that year penetrated the heart of the atom and the mystery of its radiations,Alfred Binetlaunched the first intelligence test, andEinstein, the theory of relativity.[1][42] When Cattell was about five years old, his family moved to Torquay,Devon, in the south-west of England, where he grew up with strong interests in science and spent a lot of time sailing around the coastline. He was the first of his family (and the only one in his generation) to attend university: in 1921, he was awarded a scholarship to study chemistry atKing's College, London, where he obtained a BSc (Hons) degree with 1st-class honors at age 19 years.[1][43]While studying physics and chemistry at university he learned from influential people in many other fields, who visited or lived in London. He writes: [I] browsed far outside science in my reading and attended public lectures—Bertrand Russell,H. G. Wells,Huxley, andShawbeing my favorite speakers (the last, in a meeting at King's College, converted me to vegetarianism)—for almost two years![44] As he observed first-hand the terrible destruction and suffering afterWorld War I, Cattell was increasingly attracted to the idea of applying the tools of science to the serious human problems that he saw around him. He stated that in the cultural upheaval after WWI, he felt that his laboratory table had begun to seem too small and the world's problems so vast.[44]Thus, he decided to change his field of study and pursue aPhDin psychology atKing's College, London, which he received in 1929. 
The title of his PhD dissertation was "The Subjective Character of Cognition and Pre-Sensational Development of Perception". His PhD advisor at King's College, London, wasFrancis Aveling, D.D., D.Sc., PhD, D.Litt., who was also President of theBritish Psychological Societyfrom 1926 until 1929.[1][45][46][47][48]In 1939, Cattell was honored for his outstanding contributions to psychological research with conferral of the prestigious higher doctorate –D.Sc.from the University of London.[34] While working on his PhD, Cattell had accepted a position teaching and counseling in the Department of Education atExeter University.[1]He ultimately found this disappointing because there was limited opportunity to conduct research.[1]Cattell did his graduate work withCharles Spearman, the English psychologist and statistician who is famous for his pioneering work on assessing intelligence, including the development of the idea of a general factor of intelligence termedg.[49]During his three years at Exeter, Cattell courted and married Monica Rogers, whom he had known since his boyhood inDevonand they had a son together. She left him about four years later.[50]Soon afterward he moved toLeicesterwhere he organized one of England's first child guidance clinics. It was also in this time period that he finished his first book "Under Sail Through Red Devon," which described his many adventures sailing around the coastline and estuaries of South Devon and Dartmoor.[44] In 1937, Cattell left England and moved to the United States when he was invited byEdward Thorndiketo come toColumbia University. When theG. Stanley Hallprofessorship in psychology became available atClark Universityin 1938, Cattell was recommended by Thorndike and was appointed to the position. However, he conducted little research there and was "continually depressed."[50]Cattell was invited byGordon Allportto join theHarvard Universityfaculty in 1941. While at Harvard he began some of the research in personality that would become the foundation for much of his later scientific work.[1] DuringWorld War II, Cattell served as a civilian consultant to the U.S. government researching and developing tests for selecting officers in the armed forces. Cattell returned to teaching atHarvardand married Alberta Karen Schuettler, a PhD student inmathematicsatRadcliffe College. Over the years, she worked with Cattell on many aspects of his research, writing, and test development. They had three daughters and a son.[44]They divorced in 1980.[50] Herbert Woodrow, professor of psychology at theUniversity of Illinois at Urbana-Champaign, was searching for someone with a background in multivariate methods to establish a research laboratory. Cattell was invited to assume this position in 1945. With this newly created research professorship in psychology, he was able to obtain sufficient grant support for two PhD associates, four graduate research assistants, and clerical assistance.[44] One reason that Cattell moved to theUniversity of Illinoiswas because the first electronic computer built and owned entirely by a US educational institution – "Illinois Automatic Computer" – was being developed there, which made it possible for him to complete large-scale factor analyses. Cattell founded the Laboratory of Personality Assessment and Group Behavior.[44]In 1949, he and his wife founded the Institute for Personality and Ability Testing (IPAT). Karen Cattell served as director of IPAT until 1993. 
Cattell remained in the Illinois research professorship until he reached the university's mandatory retirement age in 1973. A few years after he retired from the University of Illinois he built a home inBoulder, Colorado, where he wrote and published the results of a variety of research projects that had been left unfinished in Illinois.[1] In 1977, Cattell moved to Hawaii, largely because of his love of the ocean and sailing. He continued his career as a part-time professor and adviser at theUniversity of Hawaii. He also served as adjunct faculty of the Hawaii School of Professional Psychology. After settling in Hawaii he married Heather Birkett, aclinical psychologist, who later carried out extensive research using the 16PF and other tests.[51][52]During the last two decades of his life in Hawaii, Cattell continued to publish a variety of scientific articles and books on motivation, the scientific use of factor analysis, two volumes of personality and learning theory, and the inheritance of personality; he also co-edited a book on functional psychological testing and prepared a complete revision of his highly renownedHandbook of Multivariate Experimental Psychology.[37] Cattell and Heather Birkett Cattell lived on a lagoon in the southeast corner ofOahuwhere he kept a small sailing boat. Around 1990, he had to give up his sailing career because of navigational challenges resulting from old age. He died at home inHonoluluon 2 February 1998, at the age of 92. He is buried in the Valley of the Temples on a hillside overlooking the sea.[53]His will provided for his remaining funds to build a school for underprivileged children in Cambodia.[54]He was an agnostic.[1] When Cattell began his career in psychology in the 1920s, he felt that the domain of personality was dominated by speculative ideas that were largely intuitive, with little or no empirical research basis.[10]Cattell acceptedE.L. Thorndike's empiricist viewpoint that "If something actually did exist, it existed in some amount and hence could be measured."[55] Cattell found that constructs used by early psychological theorists tended to be somewhat subjective and poorly defined. For example, after examining over 400 published papers on the topic of "anxiety" in 1965, Cattell stated: "The studies showed so many fundamentally different meanings used for anxiety and different ways of measuring it, that the studies could not even be integrated."[56]Early personality theorists tended to provide little objective evidence or research bases for their theories. Cattell wanted psychology to become more like other sciences, whereby a theory could be tested in an objective way that could be understood and replicated by others. In Cattell's words: Emeritus Professor Arthur B. Sweney, an expert in psychometric test construction,[58]summed up Cattell's methodology: Also, according to Sheehy (2004, p. 62), In 1994, Cattell was one of 52 signatories of "Mainstream Science on Intelligence,"[60]an editorial written byLinda Gottfredsonand published in theWall Street Journal. In the letter the signers, some of whom were intelligence researchers, defended the publication of the bookThe Bell Curve. There was sharp pushback on the letter, with a number of signers (not Cattell) having received funding from white supremacist organizations. His work is sometimes categorized as part of cognitive psychology, owing to his drive to measure every psychological domain, especially personality.
Rather than pursue a "univariate" research approach to psychology, studying the effect that a single variable (such as "dominance") might have on another variable (such as "decision-making"), Cattell pioneered the use of multivariate experimental psychology (the analysis of several variables simultaneously).[6][37][59]He believed that behavioral dimensions were too complex and interactive to fully understand variables in isolation. The classical univariate approach required bringing the individual into an artificial laboratory situation and measuring the effect of one particular variable on another – also known as the "bivariate" approach, while the multivariate approach allowed psychologists to study the whole person and their unique combination of traits within a natural environmental context. Multivariate experimental research designs and multivariate statistical analyses allowed for the study of "real-life" situations (e.g., depression, divorce, loss) that could not be manipulated in an artificial laboratory environment.[34] Cattell applied multivariate research methods across several intrapersonal psychological domains: the trait constructs (both normal and abnormal) of personality, motivational or dynamic traits, emotional and mood states, as well as the diverse array of cognitive abilities.[14]In each of these domains, he considered there must be a finite number of basic, unitary dimensions that could be identified empirically. He drew a comparison between these fundamental, underlying (source) traits and the basic dimensions of the physical world that were discovered and presented, for example, in the periodic table of chemical elements.[15] In 1960, Cattell organized and convened an international symposium to increase communication and cooperation among researchers who were usingmultivariate statisticsto study human behavior. This resulted in the foundation of theSociety of Multivariate Experimental Psychology(SMEP) and its flagship journal,Multivariate Behavioral Research. He brought many researchers from Europe, Asia, Africa, Australia, and South America to work in his lab at the University of Illinois.[34]Many of his books involving multivariate experimental research were written in collaboration with notable colleagues.[61] Cattell noted that in the hard sciences such as chemistry, physics, astronomy, as well as in medical science, unsubstantiated theories were historically widespread until new instruments were developed to improve scientific observation and measurement. In the 1920s, Cattell worked withCharles Spearmanwho was developing the new statistical technique offactor analysisin his effort to understand the basic dimensions and structure of human abilities. Factor analysis became a powerful tool to help uncover the basic dimensions underlying a confusing array of surface variables within a particular domain.[8] Factor analysis was built upon the earlier development of thecorrelation coefficient, which provides a numerical estimate of the degree to which variables are "co-related". For example, if "frequency of exercise" and "blood pressure level" were measured on a large group of people, then intercorrelating these two variables would provide a quantitative estimate of the degree to which "exercise" and "blood pressure" are directly related to each other. 
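As a minimal, self-contained illustration of the correlation coefficient described above, the following sketch computes Pearson's r for the exercise and blood-pressure example; the numeric values are invented purely for demonstration.

# Pearson's correlation coefficient on toy exercise / blood-pressure data.
from math import sqrt

exercise_hours = [0, 1, 2, 3, 4, 5, 6]                 # hypothetical weekly exercise
blood_pressure = [150, 146, 141, 139, 134, 131, 128]   # hypothetical systolic readings

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

print(f"Pearson r = {pearson_r(exercise_hours, blood_pressure):.2f}")
# A value near -1 indicates that more exercise co-occurs with lower blood
# pressure in this toy data; factor analysis operates on a whole matrix of
# such coefficients rather than a single pair.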
Factor analysis performs complex calculations on the correlation coefficients among the variables within a particular domain (such as cognitive ability or personality trait constructs) to determine the basic, unitary factors underlying the particular domain.[62] While working at the University of London with Spearman exploring the number and nature of human abilities, Cattell postulated that factor analysis could be applied to other areas beyond the domain of abilities. In particular, Cattell was interested in exploring the basic taxonomic dimensions and structure of human personality.[14]He believed that if exploratory factor analysis were applied to a wide range of measures of interpersonal functioning, the basic dimensions within the domain of social behavior could be identified. Thus, factor analysis could be used to discover the fundamental dimensions underlying the large number of surface behaviors, thereby facilitating more effective research. As noted above, Cattell made many important innovative contributions to factor analytic methodology, including the Scree Test to estimate the optimal number of factors to extract,[35]the "Procrustes" oblique rotation strategy, the Coefficient of Profile Similarity, P-technique factor analysis, dR-technique factor analysis, the Taxonome program, as well the Rotoplot program for attaining maximum simple structure solutions.[8]In addition, many eminent researchers received their grounding in factor analytic methodology under the guidance of Cattell, including Richard Gorsuch, an authority on exploratory factor analytic methods.[63] In order to apply factor analysis to personality, Cattell believed it was necessary to sample the widest possible range of variables. He specified three kinds of data for comprehensive sampling, to capture the full range of personality dimensions: In order for a personality dimension to be called "fundamental and unitary," Cattell believed that it needed to be found in factor analyses of data from all three of these measurement domains. Thus, Cattell constructed measures of a wide range of personality traits in each medium (L-data; Q-data; T-data). He then conducted a programmatic series of factor analyses on the data derived from each of the three measurement media in order to elucidate the dimensionality of human personality structure.[10] With the help of many colleagues, Cattell's factor-analytic studies[8]continued over several decades, eventually finding at least16 primary trait factorsunderlying human personality (comprising 15 personality dimensions and one cognitive ability dimension: Factor B in the 16PF). He decided to name these traits with letters (A, B, C, D, E...) in order to avoid misnaming these newly discovered dimensions, or inviting confusion with existing vocabulary and concepts. 
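The following sketch shows, in highly simplified form, the extraction step that factor analysis performs on a correlation matrix: an eigendecomposition yields loadings of observed variables on a small number of underlying factors. This is a bare, principal-component-style extraction on synthetic data, not a reproduction of Cattell's procedures, which involved much larger variable sets and oblique rotation.

# Toy factor-loading extraction from a correlation matrix.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 2))                   # two hypothetical source traits
true_loadings = rng.uniform(0.4, 0.9, size=(6, 2))   # how 6 observed items load on them
items = latent @ true_loadings.T + 0.5 * rng.normal(size=(300, 6))

corr = np.corrcoef(items, rowvar=False)              # 6 x 6 correlation matrix
values, vectors = np.linalg.eigh(corr)
order = np.argsort(values)[::-1]                     # largest latent roots first
k = 2                                                # retain two factors
loadings = vectors[:, order[:k]] * np.sqrt(values[order[:k]])

print(np.round(loadings, 2))   # estimated item-by-factor loading matrix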
Factor-analytic studies conducted by many researchers in diverse cultures around the world have provided substantial support for the validity of these 16 trait dimensions.[64] In order to measure these trait constructs across different age ranges, Cattell constructed (Q-data) instruments that included theSixteen Personality Factor Questionnaire(16PF) for adults, the High School Personality Questionnaire (HSPQ) – now named the Adolescent Personality Questionnaire (APQ), and the Children's Personality Questionnaire (CPQ).[65]Cattell also constructed the (T-data) Objective Analytic Battery (OAB) that provided measures of the 10 largest personality trait factors extracted factor analytically,[66][67]as well as objective (T-data) measures of dynamic trait constructs such as the Motivation Analysis Test (MAT), the School Motivation Analysis Test (SMAT), and the Children's Motivation Analysis Test (CMAT).[18][68]In order to measure trait constructs within the abnormal personality sphere, Cattell constructed the Clinical Analysis Questionnaire (CAQ).[69][70]Part 1 of the CAQ measures the 16PF factors, while Part 2 measures an additional 12 abnormal (psychopathological) personality trait dimensions. The CAQ was later re-badged as the PsychEval Personality Questionnaire (PEPQ).[71][72]Also within the broadly conceptualized personality domain, Cattell constructed measures of mood states and transitory emotional states, including the Eight State Questionnaire (8SQ).[73][74]In addition, Cattell was at the forefront in constructing the Central Trait-State Kit.[75][76] From the very beginning of his academic career, Cattell reasoned that, as in other scientific domains like intelligence, there might be an additional, higher level of organization within personality which would provide a structure for the many primary traits. When he factor analyzed the intercorrelations of the 16 primary trait measures themselves, he found no fewer than five "second-order" or "global factors", now commonly known as theBig Five.[25][27][28]These second-stratum or "global traits" are conceptualized as broad, overarching domains of behavior, which provide meaning and structure for the primary traits. For example, the "global trait" Extraversion has emerged from factor-analytic results comprising the five primary trait factors that are interpersonal in focus.[77] Thus, "global" Extraversion is fundamentally defined by the primary traits that are grouped together factor analytically, and, moving in the opposite direction, the second-order Extraversion factor gives conceptual meaning and structure to these primary traits, identifying their focus and function in human personality. These two levels of personality structure can provide an integrated understanding of the whole person, with the "global traits" giving an overview of the individual's functioning in a broad-brush way, and the more-specific primary trait scores providing an in-depth, detailed picture of the individual's unique trait combinations (Cattell's "Depth Psychometry" p.
71).[4] Research into the 16PF personality factors has shown these constructs to be useful in understanding and predicting a wide range of real life behaviors.[78][79]Thus, the 16 primary trait measures plus the five major second-stratum factors have been used in educational settings to study and predict achievement motivation, learning or cognitive style, creativity, and compatible career choices; in work or employment settings to predict leadership style, interpersonal skills, creativity, conscientiousness, stress-management, and accident-proneness; in medical settings to predict heart attack proneness, pain management variables, likely compliance with medical instructions, or recovery pattern from burns or organ transplants; in clinical settings to predict self-esteem, interpersonal needs, frustration tolerance, and openness to change; and, in research settings to predict a wide range of behavioral proclivities such as aggression, conformity, and authoritarianism.[80] Cattell's programmatic multivariate research which extended from the 1940s through the 70's[81][82][83]resulted in several books that have been widely recognized as identifying fundamental taxonomic dimensions of human personality and motivation and their organizing principles: The books listed above document a programmatic series of empirical research studies based on quantitative personality data derived from objective tests (T-data), from self-report questionnaires (Q-data), and from observer ratings (L-data). They present a theory of personality development over the human life span, including effects on the individual's behavior from family, social, cultural, biological, and genetic influences, as well as influences from the domains of motivation and ability.[84] AsHans Eysenckat theInstitute of Psychiatry,Londonremarked: "Cattell has been one of the most prolific writers in psychology since Wilhelm Wundt....According to the Citation Index, he is one of the ten most cited psychologists, and this is true with regard to not only citations in social science journals but also those in science journals generally. Of the two hundred and fifty most cited scientists, only three psychologists made the grade, namely, Sigmund Freud in the first place, then the reviewer [H.J. Eysenck], and then Cattell. Thus there is no question that Cattell has made a tremendous impression on psychology and science in general."[85] He was a controversial figure due in part to his friendships with, and intellectual respect for, white supremacists and neo-Nazis.[50] William H. Tucker[86][50]andBarry Mehler[87][88]have criticized Cattell based on his writings about evolution and political systems. They argue that Cattell adhered to a mixture ofeugenicsand a new religion of his devising which he eventually named Beyondism and proposed as "a new morality from science". Tucker notes that Cattell thanked the prominentneo-Naziandwhite supremacistideologuesRoger Pearson,Wilmot Robertson, andRevilo P. Oliverin the preface to hisBeyondism,and that a Beyondist newsletter with which Cattell was involved favorably reviewed Robertson's bookThe Ethnostate. Cattell claimed that a diversity of cultural groups was necessary to allow that evolution. He speculated about natural selection based on both the separation of groups and also the restriction of "external" assistance to "failing" groups from "successful" ones. 
This included advocating for "educational and voluntary birth control measures"—i.e., by separating groups and limiting excessive growth of failing groups.[89]John Gillis argued in his biography of Cattell that, although some of Cattell's views were controversial, Tucker and Mehler exaggerated and misrepresented his views by taking quotes out of context and referring to outdated writings. Gillis maintained that Cattell was not friends with white supremacists and described Hitler's ideas as "lunacy."[1] In 1997, Cattell was chosen by theAmerican Psychological Association(APA) for its "Gold Medal Award for Lifetime Achievement in the Science of Psychology." Before the medal was presented, Mehler launched a publicity campaign against Cattell through his nonprofit foundationISAR,[90]accusing Cattell of being sympathetic to racist and fascist ideas.[91]Mehler claimed that "it is unconscionable to honor this man whose work helps to dignify the most destructive political ideas of the twentieth century". A blue-ribbon committee was convened by the APA to investigate the legitimacy of the charges. Before the committee reached a decision, Cattell issued an open letter to the committee saying "I believe in equal opportunity for all individuals, and I abhor racism and discrimination based on race. Any other belief would be antithetical to my life's work" and saying that "it is unfortunate that the APA announcement ... has brought misguided critics' statements a great deal of publicity."[92]Cattell refused the award, withdrawing his name from consideration, and the committee was disbanded. Cattell died months later at the age of 92. In 1984, Cattell said that: "The only reasonable thing is to be noncommittal on the race question – that's not the central issue, and it would be a great mistake to be sidetracked into all the emotional upsets that go on in discussions of racial differences. We should be quite careful to dissociate eugenics from it – eugenics' real concern should be with individual differences."[13]Richard L. Gorsuch(1997) wrote (in a letter to theAmerican Psychological Foundation, para. 4) that: "The charge of racism is 180 degrees off track. [Cattell] was the first one to challenge the racial bias in tests and to attempt to reduce that problem."[13] Raymond Cattell's papers and books are the 7th most highly referenced in peer-reviewed psychology journals over the past century.[12]Some of his most cited publications are:[93]
https://en.wikipedia.org/wiki/Raymond_Cattell#Innovations_and_accomplishments
Athink tank, orpublic policy institute, is aresearch institutethat performsresearchandadvocacyconcerning topics such associal policy,political strategy,economics,military,technology, andculture. Most think tanks arenon-governmental organizations, but some are semi-autonomous agencies within a government, and some are associated with particular political parties, businesses, or the military.[1]Think tanks are often funded by individual donations, with many also accepting government grants.[2] Think tanks publish articles and studies, and sometimes draftlegislationon particular matters of policy or society. This information is then used by governments, businesses, media organizations,social movementsor other interest groups.[3][4]Think tanks range from those associated with highly academic or scholarly activities to those that are overtly ideological and pushing for particular policies, with a wide range among them in terms of the quality of their research. Later generations of think tanks have tended to be more ideologically oriented.[3] Modern think tanks began as a phenomenon in theUnited Kingdomin the 19th and early 20th centuries, with most of the rest being established in other English-speaking countries.[3][5]Prior to 1945, they tended to focus on the economic issues associated with industrialization and urbanization. During theCold War, many more American and otherWesternthink tanks were established, which often guided government Cold War policy.[3][6][4]Since 1991, more think tanks have been established in non-Western parts of the world. More than half of all think tanks that exist today were established after 1980.[5]As of 2023, there are more than 11,000 think tanks around the world.[7] According to historianJacob Soll, while the term "think tank" is modern, with its origin traced to the humanist academies and scholarly networks of the 16th and 17th centuries, evidence shows that, in Europe, the origins of think tanks go back to the 800s when emperors and kings began arguing with the Catholic Church about taxes. A tradition of hiring teams of independent lawyers to advise monarchs about their financial and political prerogatives against the church spans from Charlemagne all the way to the 17th century, when the kings of France were still arguing about whether they had the right to appoint bishops and receive a cut of their income. Soll cites as an early example theAcadémie des frères Dupuy, created inParisaround 1620 by the brothersPierreand Jacques Dupuy and also known after 1635 as thecabinet des frères Dupuy.[8]TheClub de l'Entresol, active in Paris between 1723 and 1731, was another prominent example of an early independent think tank focusing on public policy and current affairs, especially economics and foreign affairs.[9] Several major current think tanks were founded in the 19th century. TheRoyal United Services Institutewas founded in 1831 inLondon, and theFabian Societyin 1884. The oldestUnited States–based think tank, theCarnegie Endowment for International Peace, was founded inWashington, D.C., in 1910 by philanthropistAndrew Carnegie. Carnegie charged trustees to use the fund to "hasten the abolition of international war, the foulest blot upon our civilization."[10]TheBrookings Institutionwas founded shortly thereafter in 1916 byRobert S.
Brookingsand was conceived as a bipartisanresearch center modeled on academic institutions and focused on addressing the questions of the federal government.[11] After 1945, the number of policy institutes increased, with many small new ones forming to express various issues and policy agendas. Until the 1940s, most think tanks were known only by the name of the institution. During the Second World War, think tanks were often referred to as "brain boxes".[12] Before the 1950s, the phrase "think tank" did not refer to organizations. From its first appearances in the 1890s up to the 1950s, the phrase was most commonly used inAmerican Englishto colloquially refer to thebraincaseor especially in a pejorative context to thehuman brainitself when commenting on an individual's failings (in the sense that something was wrong with that person's "think tank").[13]: 25Around 1958, the first organization to be regularly described in published writings as "the Think Tank" (note thetitle caseand the use of thedefinite article) was theCenter for Advanced Study in the Behavioral Sciences.[13]: 26However, the Center does not count itself as and is not perceived to be a think tank in the contemporary sense.[13]: 26During the 1960s, the phrase "think tank" was attached more broadly to meetings of experts,electronic computers,[13]: 27and independent military planning organizations.[13]: 26The prototype and most prominent example of the third category was theRAND Corporation, which was founded in 1946 as an offshoot ofDouglas Aircraftand became an independent corporation in 1948.[13]: 70[14]In the 1970s, the phrase became more specifically defined in terms of RAND and others.[13]: 28During the 1980s and 1990s, the phrase evolved again to arrive at its broader contemporary meaning of an independent public policy research institute.[13]: 28 For most of the 20th century, such institutes were found primarily in the United States, along with much smaller numbers in Canada, the United Kingdom, and Western Europe. Although think tanks had also existed inJapanfor some time, they generally lacked independence, having close associations with government ministries or corporations. There has been a veritable proliferation of "think tanks" around the world that began during the 1980s as a result of globalization, the end of theCold War, and the emergence of transnational problems. Two-thirds of all the think tanks that exist today were established after 1970 and more than half were established since 1980.[5] The effect ofglobalisationon the proliferation of think tanks is most evident in regions such as Africa, Eastern Europe, Central Asia, and parts of Southeast Asia, where there was a concerted effort by other countries to assist in the creation of independent public policy research organizations. A survey performed by the Foreign Policy Research Institute'sThink Tanks and Civil Societies Programunderscores the significance of this effort and documents the fact that most of the think tanks in these regions have been established since 1992. 
As of 2014[update], there were more than 11,000 of these institutions worldwide.[15][16]Many of the more established think tanks, created during theCold War, are focused on international affairs, security studies, and foreign policy.[5] Think tanks vary by ideological perspectives, sources of funding, topical emphasis and prospective consumers.[17]Funding may also represent who or what the institution wants to influence; in the United States, for example, "Some donors want to influence votes in Congress or shape public opinion, others want to position themselves or the experts they fund for future government jobs, while others want to push specific areas of research or education."[17] McGann distinguishes think tanks based on independence, source of funding and affiliation, grouping think tanks into autonomous and independent, quasi-independent, government affiliated, quasi-governmental, university affiliated, political-party affiliated or corporate.[18] A new trend, resulting from globalization, is collaboration between policy institutes in different countries. For instance, theCarnegie Endowment for International Peaceoperates offices inWashington, D.C.,Beijing,Beirut,Brusselsand formerly inMoscow, where it was closed in April 2022.[17] TheThink Tanks and Civil Societies Program(TTCSP) at theUniversity of Pennsylvania, led byJames McGann, annually rates policy institutes worldwide in a number of categories and presents its findings in theGlobal Go-To Think Tanksrating index.[19]However, this method of studying and assessing policy institutes has been criticized by researchers such as Enrique Mendizabal and Goran Buldioski, Director of the Think Tank Fund, assisted by theOpen Society Institute.[20][21] Think tanks may attempt to broadly inform the public by holding conferences to discuss issues, which they may broadcast; encouraging scholars to give public lectures; testifying before committees of governmental bodies; publishing and widely distributing books, magazines, newsletters or journals; creatingmailing liststo distribute new publications; and engaging in social media.[22]: 90 Think tanks may privately influence policy by having their members accept bureaucratic positions, having members serve on political advisory boards, inviting policy-makers to events, allowing individuals to work at the think tank, employing former policy-makers, or preparing studies for policy makers.[22]: 95 The role of think tanks has been conceptualized through the lens of social theory. Plehwe argues that think tanks function asknowledge actorswithin a network of relationships with other knowledge actors. Such relationships include citing academics in publications or employing them on advisory boards, as well as relationships with media, political groups and corporate funders. They argue that these links allow for the construction of adiscourse coalitionwith a common aim, citing the example of deregulation of trucking, airlines, and telecommunications in the 1970s.[23]: 369Plehwe argues that this deregulation represented a discourse coalition between theFord Motor Company,FedEx,neo-liberaleconomists, theBrookings Institutionand theAmerican Enterprise Institute.[23]: 372 Elite theoryconsiders how an "elite" influences the actions of think tanks and potentially bypasses the political process, analysing the social background and values of those who work in think tanks.
Pautz criticizes this viewpoint, arguing that there is in practice a variety of viewpoints within think tanks and that it dismisses the influence that ideas can have.[24]: 424 In some cases, corporate interests,[25]military interests[1]and political groups have found it useful to create policy institutes, advocacy organizations, and think tanks. For example,The Advancement of Sound Science Coalitionwas formed in the mid-1990s to dispute research finding an association betweensecond-hand smokeandcancer.[26]Military contractors may spend a portion of their tender on funding pro-war think tanks.[1]According to an internal memorandum fromPhilip Morris Companiesreferring to theUnited States Environmental Protection Agency(EPA), "The credibility of the EPA is defeatable, but not on the basis of ETS [environmental tobacco smoke] alone, ... It must be part of a larger mosaic that concentrates all the EPA's enemies against it at one time."[27] According to the progressive media watchdogFairness & Accuracy in Reporting, both left-wing and right-wing policy institutes are often quoted and rarely identified as such. The result is that think tank "experts" are sometimes depicted as neutral sources without any ideological predispositions when, in fact, they represent a particular perspective.[28][29]In the United States, think tank publications on education are subjected to expert review by theNational Education Policy Center's "Think Twice" think tank review project.[30] A 2014New York Timesreport asserted that foreign governments buy influence at many United States think tanks. According to the article: "More than a dozen prominent Washington research groups have received tens of millions of dollars from foreign governments in recent years while pushing United States government officials to adopt policies that often reflect the donors' priorities."[31] Ghana's first president,Kwame Nkrumah, set up various state-supported think tanks in the 1960s. By the 1990s, a variety of policy research centers had sprung up in Africa, set up by academics who sought to influence public policy in Ghana. One such think tank wasThe Institute of Economic Affairs, Ghana, which was founded in 1989 when the country was ruled by theProvisional National Defence Council. The IEA undertakes and publishes research on a range of economic and governance issues confrontingGhanaandSub-Saharan Africa. It has also been involved in bringing political parties together to engage in dialogue. In particular, it has organised presidential debates every election year since the 1996 Ghanaian presidential election. Notable think tanks in Ghana include: Afghanistan has a number of think tanks that are in the form of governmental, non-governmental, and corporate organizations. Bangladesh has a number of think tanks that are in the form of governmental, non-governmental, and corporate organizations. In China a number of think tanks are sponsored by governmental agencies such asDevelopment Research Center of the State Council, but still retain sufficient non-official status to be able to propose and debate ideas more freely. In January 2012, the first non-official think tank in mainland China, South Non-Governmental Think-Tank, was established in the Guangdong province.[37]In 2009 theChina Center for International Economic Exchangeswas founded.
In Hong Kong, early think tanks established in the late 1980s and early 1990s focused on political development, including the first direct Legislative Council members election in 1991 and the political framework of "One Country, Two Systems", manifested in theSino-British Joint Declaration. After the transfer of sovereignty to China in 1997, more think tanks were established by various groups of intellectuals and professionals. They have various missions and objectives including promoting civic education; undertaking research on economic, social and political policies; and promoting "public understanding of and participation in the political, economic, and social development of the Hong KongSpecial Administrative Region". Think tanks in Hong Kong include: India has the world's second-largestnumber of think tanks.[38]Most are based in New Delhi, and a few are government-sponsored.[citation needed]A few think tanks, such as theCentre for Science and Environment,Centre for Policy ResearchandWorld Resources Institute, promote environmentally responsible and climate-resilient ideas.[39][40][41]There are other prominent think tanks likeObserver Research Foundation, Tillotoma Foundation andCentre for Civil Society.[citation needed] In Mumbai,Strategic Foresight Groupis a global think tank that works on issues such aswater diplomacy,peace and conflictandforesight (futures studies). Think tanks with a development focus include those like theNational Centre for Cold-chain Development('NCCD'), which serves to bring about inclusive policy change by supporting the Planning Commission and related government bodies with industry-specific inputs – in this case, set up at the behest of the government to direct cold chain development. Some think tanks have a fixed set of focus areas and they work towards finding policy solutions to social problems in the respective areas. Initiatives such asNational e-Governance Plan(to automate administrative processes)[42]andNational Knowledge Network(NKN) (for data and resource sharing amongst education and research institutions), if implemented properly, should help improve the quality of work done by think tanks.[43] Some notable think tanks in India include: Over 50 think tanks have emerged in Iraq, particularly in the Kurdistan Region. Iraq's leading think tank is the Middle East Research Institute (MERI),[44]based in Erbil. MERI is an independent non-governmental policy research organization, established in 2014, and publishes in English, Kurdish, and Arabic. It was listed in the global ranking by the United States's Lauder Institute of theUniversity of Pennsylvaniaas 46th in the Middle East.[45] There are many think tank teams in Israel, including:[46] InSouth Korea, think tanks are prolific, influential, and a go-to resource for government. Many policy research organisations in Korea focus on the economy, and most research is done in public think tanks. There is a strong emphasis on the knowledge-based economy and, according to one respondent, think tank research is generally considered high quality.[47] Japanhas over 100 think tanks, most of which cover not only policy research but also the economy, technology and related fields. Some are government related, but most of the think tanks are sponsored by the private sector.[48] The Institute of World Economics and Politics (IWEP) at the Foundation of the First President of the Republic of Kazakhstan was created in 2003.
IWEP's activities are aimed at researching problems of the world economy, international relations, geopolitics, security, integration and Eurasia, as well as studying the First President of theRepublic of Kazakhstanand his contribution to the establishment and strengthening of Kazakhstan as an independent state, the development of international cooperation and the promotion of peace and stability.[49][50] The Kazakhstan Institute for Strategic Studies under the President of the RK (KazISS) was established by the Decree of the President of RK on 16 June 1993. Since its foundation, the main mission of the Kazakhstan Institute for Strategic Studies under the President of the Republic of Kazakhstan, as a national think tank, has been to provide analytical and research support for the President of Kazakhstan.[51] Most Malaysian think tanks are related either to the government or a political party. Historically they focused on defense, politics and policy. However, in recent years, think tanks that focus on international trade, economics, and social sciences have also been founded. Notable think tanks in Malaysia include: Pakistan's think tanks mainly revolve around social policy, internal politics, foreign security issues, and regional geo-politics. Most of these are centered on the capital,Islamabad. One such think tank is the Sustainable Development Policy Institute (SDPI), which focuses on policy advocacy and research, particularly in the area of environment and social development. Another policy research institute based in Islamabad is theInstitute of Social and Policy Sciences(I-SAPS), which works in the fields of education, health, disaster risk reduction,governance, conflict and stabilization. Since 2007–2008, I-SAPS has been analyzing the public expenditure of federal and provincial governments.[52] Think tanks in the Philippines could be generally categorized in terms of their linkages with the national government. Several were set up by the Philippine government for the specific purpose of providing research input into the policy-making process.[53] Sri Lanka has a number of think tanks that are in the form of governmental, non-governmental and corporate organizations. There are several think tanks in Singapore that advise the government on various policies, as well as private ones that serve corporations within the region. Many of them are hosted within the local public educational institutions. Among them are theSingapore Institute of International Affairs(SIIA),Institute of Southeast Asian Studies(ISEAS), and theS. Rajaratnam School of International Studies.[54] In 2017 Taiwan had 58 think tanks.[55]As in most countries there is a mix of government- and privately-funded think tanks.[56] Taiwanese think tanks in alphabetical order: The UAE has been a center for politically oriented think tanks which concentrate on both regional and global policy. Notable think tanks have emerged in the global debate on terrorism, education and economic policies in the MENA region. Think tanks include: Key projects: preparation of the National Human Development Report for Uzbekistan, a sociological "portrait" of the Uzbek businessman, preparation of an analytical report on export procedure optimization in Uzbekistan, and various industry and marketing research studies in Uzbekistan, Tajikistan, and Turkmenistan. Brusselshosts most of the European Institutions, hence a large number of international think tanks are based there.
Notable think tanks areBruegel, theCentre for European Policy Studies(CEPS),Centre for the New Europe(CNE), theEuropean Centre of International Political Economy(ECIPE), theEuropean Policy Centre(EPC), the Friends of Europe, theGlobal Governance Institute(GGI),Liberales, andSport and Citizenship, among others. Bulgaria has a number of think tanks providing expertise and shaping policies, includingInstitute of Modern Politics. Finland has several small think tanks that provide expertise in very specific fields. Notable think tanks include: In addition to specific independent think tanks, the largest political parties have their own think tank organizations. This is mainly due to support granted by the state for such activity. Examples of such think tanks are theGreen Think Tank Visio[fi][66]andSuomen Perusta.[67]The corporate world has focused its efforts on its central representative organization, theConfederation of Finnish Industries, which acts as a think tank in addition to negotiating salaries with workers' unions. Furthermore, there is the Finnish Business and Policy Forum (Elinkeinoelämän valtuuskunta, EVA). Agricultural and regional interests, associated with The Central Union of Agricultural Producers and Forest Owners (Maa- ja metsätaloustuottajain Keskusliitto, MTK) and theCentre Party, are researched by Pellervo Economic Research (Pellervon taloustutkimus, PTT). TheCentral Organisation of Finnish Trade Unions(Suomen Ammattiliittojen Keskusjärjestö, SAK) and theSocial Democratic Partyare associated with the Labour Institute for Economic Research (Palkansaajien tutkimuslaitos, PT). Each of these organizations often releases forecasts concerning the national economy. TheFrench Institute of International Relations(IFRI) was founded in 1979 and is the third-oldest think tank in western Europe, after Chatham House (UK, 1920) and the Stockholm International Peace Research Institute (Sweden, 1960). The primary goals of IFRI are to develop applied research in the field of public policy related to international issues, and to foster interactive and constructive dialogue between researchers, professionals, and opinion leaders. France also hosts theEuropean Union Institute for Security Studies(EUISS), aParis-basedagency of the European Unionand think tank researching security issues of relevance for theEU. There are also a number of pro-business think tanks, notably the Paris-based Fondation Concorde.[68]The foundation focuses on increasing the competitiveness of French SMEs and aims to revive entrepreneurship in France. On the left, the main think tanks in France are theFondation Jean-Jaurès, which is organizationally linked to theFrench Socialist Party, andTerra Nova. Terra Nova is an independent left-leaning think tank, although it is nevertheless considered to be close to the Socialists. It works on producing reports and analyses of current public policy issues from a progressive point of view, and contributing to the intellectual renewal of social democracy. GenerationLibre[fr], the only French think tank mentioned in the "think tanks to watch" list of the 2014 Global Go To Think Tank Index Report,[69]is a think tank created by Gaspard Koenig in 2013, independent of all political parties, which aims at promoting freedoms in France, in terms of fundamental rights, economics and societal issues.
GenerationLibre is described as being able to connect to the right on pro-business freedom and regulation issues but also to the left on issues such as basic income, gay marriage and the legalization of marijuana. In Germany all of the major parties are loosely associated with research foundations that play some role in shaping policy, but generally in the more disinterested role of providing research to support policymakers rather than explicitly proposing policy. These include theKonrad-Adenauer-Stiftung(Christian Democratic Union-aligned), theFriedrich-Ebert-Stiftung(Social Democratic Party-aligned), theHanns-Seidel-Stiftung(Christian Social Union-aligned), theHeinrich-Böll-Stiftung(aligned with the Greens),Friedrich Naumann Foundation(Free Democratic Party-aligned) and theRosa Luxemburg Foundation(aligned withDie Linke). TheGerman Institute for International and Security Affairsis a foreign policy think tank.Atlantic Communityis an independent,non-partisanandnon-profitorganization set up as a joint project of Atlantische Initiative e.V. and Atlantic Initiative United States. TheInstitute for Media and Communication Policydeals with media-related issues.Transparency Internationalis a think tank on the role of corporate and political corruption in international development. In Greece there are many think tanks, also called research organisations or institutes. While think tanks are not widespread in Latvia, as opposed to single-issue advocacy organizations, there are several notable institutions in the Latvian think tank landscape: Several think tanks are established and operate under the auspices of universities, such as: Vilnius Institute for Policy Analysis (VIPA) is an independent non-governmental, non-profit, non-partisan policy think tank in Lithuania whose mission is to stand for the principles of open society, liberal democracy, rule of law and human rights. VIPA works through advocacy for a strong and safe European Union; analyzes and advocates anti-authoritarian, transparent, and open governance ideas in Central and Eastern Europe; acts as an opinion leader offering the public an alternative to populism, radicalism, and authoritarian trends; reinforces active citizens' participation in decision making; analyzes fake news and disinformation and offers media literacy initiatives; puts forward solutions to improve the accountability, transparency, and openness of Lithuania'spublic sector; and builds a network of experts, civil activists and NGOs oriented toward open society values.[74] All major political parties in the Netherlands have state-sponsored research foundations that play a role in shaping policy. The Dutch government also has its own think tank: theScientific Council for Government Policy. The Netherlands furthermore hosts theNetherlands Institute of International Relations Clingendael, or Clingendael Institute, an independent think tank and diplomatic academy which studies various aspects ofinternational relations. There is a large pool of think tanks in Poland on a wide variety of subjects. The oldest state-sponsored think tank isThe Western Institutein Poznań (Polish:Instytut Zachodni). The second oldest is thePolish Institute of International Affairs(PISM), established in 1947. Another notable state-sponsored think tank is theCentre for Eastern Studies(OSW), which specializes in the countries neighboring Poland and in the Baltic Sea region, the Balkans, Turkey, the Caucasus and Central Asia.
Among the private think tanks notable organizations include theInstitute for Structural Research(IBS) on economic policy,The Casimir Pulaski Foundationon foreign policy, theInstitute of Public Affairs(ISP) on social policy, and theSobieski Institute. Founded in 1970, theSEDESis one of the oldest Portuguese civic associations and think tanks.Contraditório think tankwas founded in 2008. Contraditório is a non-profit, independent and non-partisan think tank. TheRomanian Academic Society(SAR), founded in 1996, is a Romanian think tank for policy research. The Foundation for the Advancement of Economics (FREN) was founded in 2005 by theBelgrade University's Faculty of Economics. Think tanks originating in Slovakia: International think tanks with presence in Slovakia: TheElcano Royal Institutewas created in 2001 following the example of the Royal Institute of International Affairs (Chatham House) in the UK, although it is closely linked to (and receives funding from) the government in power.[76] More independent but clearly to the left of the political spectrum are the Centro de Investigaciones de Relaciones Internacionales y Desarrollo (CIDOB) founded in 1973; and the Fundación para las Relaciones Internacionales y el Diálogo Exterior (FRIDE) established in 1999 by Diego Hidalgo and main driving force behind projects such as the Club de Madrid, a group of democratic former heads of state and government, the Foreign Policy Spanish Edition andDARA.[citation needed] Former Prime MinisterJosé Maria Aznarpresides over the Fundación para el Analisis y los Estudios Sociales (FAES), a policy institute that is associated with the conservative Popular Party (PP). Also linked to the PP is the Grupo de Estudios Estratégicos (GEES), which is known for its defense- and security-related research and analysis. For its part, theFundación Alternativasis independent but close to left-wing ideas. The SocialistPartido Socialista Obrero Español(PSOE) created Fundación Ideas in 2009 and dissolved it in January 2014. Also in 2009, the centristUnion, Progress and Democracy(UPyD) created Fundación Progreso y Democracia (FPyD). Timbrois afree marketthink tank and book publisher based inStockholm. Think tanks based within Switzerland include: There are more than 100 registered think tanks inUkraine[citation needed], including: In Britain, think tanks play a similar role to the United States, attempting to shape policy, and indeed there is some cooperation between British and American think tanks. For example, the London-based think tankChatham Houseand theCouncil on Foreign Relationswere both conceived at theParis Peace Conference, 1919and have remained sister organisations. TheBow Group, founded in 1951, is the oldest centre-right think tank and many of its members have gone on to serve as Members of Parliament or Members of the European Parliament. Past chairmen have includedConservative PartyleaderMichael Howard,Margaret Thatcher's longest-serving Cabinet MinisterGeoffrey Howe,Chancellor of the ExchequerNorman Lamontand formerBritish TelecomchairmanChristopher Bland. Since 2000, a number of influential centre-right think tanks have emerged includingPolicy Exchange,Centre for Social Justiceand most recentlyOnward.[81] Most Australian think tanks are based at universities – for example, theMelbourne Institute– or are government-funded – for example, theProductivity Commissionor theCSIRO. 
Private sources fund about 20 to 30 "independent" Australian think tanks.[82]The best-known of these think tanks play a much more limited role in Australian public and business policy-making than do their equivalents in the United States. However, in the past decade[which?]the number of think tanks has increased substantially.[83]Prominent Australian conservative think tanks includethe Centre for Independent Studies, theSydney Instituteand theInstitute of Public Affairs. Prominent leftist Australian think tanks includethe McKell Institute, Per Capita, the Australia Institute, theLowy Instituteand the Centre for Policy Development. In recent years[when?]regionally-based independent and non-partisan think tanks have emerged.[citation needed] Some think tanks, such as theIllawarra's i-eat-drink-think, engage in discussion, research and advocacy within a broader civics framework. Commercial think tanks like the Gartner Group, Access Economics, the Helmsman Institute, and others provide additional insight which complements not-for-profit organisations such asCEDA, theAustralian Strategic Policy Institute, and theAustralian Institute of Company Directorsto provide more targeted policy in defence, program governance, corporate governance and similar.[citation needed] Think tanks in Australia include: Think tanks based in New Zealand include: Canada has many notable think tanks (listed in alphabetical order). Each has specific areas of interest with some overlaps. As the classification is most often used today, the oldest American think tank is theCarnegie Endowment for International Peace, founded in 1910.[90]The Institute for Government Research, which later merged with two organizations to form theBrookings Institution, was formed in 1916. Other early twentieth century organizations now classified as think tanks include theHoover Institution(1919),The Twentieth Century Fund(1919, and now known as the Century Foundation), theNational Bureau of Economic Research(1920), theCouncil on Foreign Relations(1921), and theSocial Science Research Council(1923). The Great Depression and its aftermath spawned several economic policy organizations, such as the National Planning Association (1934), theTax Foundation(1937),[91]and theCommittee for Economic Development(1943).[90] In collaboration with the Douglas Aircraft Company, the Air Force set up theRAND Corporationin 1946 to develop weapons technology and strategic defense analysis. TheHudson Instituteis a conservative American think tank founded in 1961 by futurist, military strategist, and systems theorist Herman Kahn and his colleagues at the RAND Corporation. Recent members include Mike Pompeo, the former secretary of state under Donald Trump who joined in 2021.[92] More recently, progressive and liberal think tanks have been established, most notably theCenter for American Progressand the Center for Research on Educational Access and Leadership (CREAL). The organization has close ties to former United States PresidentBarack Obamaand other prominent Democrats.[93] Think tanks have been important allies for United States presidents since theReagan administration, writing and suggesting policies to implement, and providing staff for the administration. For recent conservative presidents, think tanks such asThe Heritage Foundation, theHoover Institution, and theAmerican Enterprise Institute(AEI) were closely associated with theReagan administration. TheGeorge H. W. Bush administrationworked closely with AEI, and theGeorge W. 
Bush administrationworked closely with AEI and the Hoover Institution. TheTrump administrationworks closely with the Heritage Foundation. For recent liberal presidents, theProgressive Policy Instituteand its parent theDemocratic Leadership Councilwere closely associated with theClinton administration, and theCenter for American Progresswas closely associated with theObamaandBiden administrations.[94] Think tanks help shape both foreign and domestic policy. They receive funding from private donors and members of private organizations. By 2013, the largest 21 think tanks in the US spent more thanUS$1billion per year.[95]Think tanks may feel more free to propose and debate controversial ideas than people within government. The progressive media watchdog Fairness and Accuracy in Reporting (FAIR) has identified the top 25 think tanks by media citations, noting that from 2006 to 2007 the number of citations declined 17%.[96]The FAIR report reveals the ideological breakdown of the citations: 37% conservative, 47% centrist, and 16% liberal. Their data show that the most-cited think tank was theBrookings Institution, followed by theCouncil on Foreign Relations, theAmerican Enterprise Institute,The Heritage Foundation, and theCenter for Strategic and International Studies. In 2016, in response to scrutiny about think tanks appearing to have a "conflict of interest" or lack transparency, Martin S. Indyk, executive vice president of the Brookings Institution – the "most prestigious think tank in the world"[97]– admitted that it had "decided to prohibit corporations or corporate-backed foundations from making anonymous contributions." In August 2016,The New York Timespublished a series on think tanks that blur the line. One of the cases the journalists cited was Brookings, where scholars paid by a seemingly independent think tank "push donors' agendas amplifying a culture of corporate influence in Washington." For example, in exchange for hundreds of thousands of dollars the Brookings Institution providedLennar– one of the United States' largest home builders – with a significant advantage in pursuing a US$8billion revitalization project in Hunters Point, San Francisco. In 2014, Kofi Bonner, Lennar's then-regional vice president in charge of the San Francisco revitalization, was named a Brookings senior fellow – a position as "trusted adviser" that carries some distinction. Bruce Katz, a Brookings vice president, also offered to help Lennar "engage with national media to develop stories that highlight Lennar's innovative approach."[97] Government think tanks are also important in the United States, particularly in the security and defense field. These include the Center for Technology and National Security Policy at theNational Defense University, the Center for Naval Warfare Studies at theNaval War College, and theStrategic Studies Instituteat theU.S. Army War College. The government funds, wholly or in part, activities at approximately 30Federally Funded Research and Development Centers(FFRDCs). FFRDCs are unique independent nonprofit entities sponsored and funded by the United States government to meet specific long-term technical needs that cannot be met by any other single organization. FFRDCs typically assist government agencies with scientific research and analysis, systems development, and systems acquisition. They bring together the expertise and outlook of government, industry, and academia to solve complex technical problems.
These FFRDCs include theRAND Corporation, theMITRE Corporation, theInstitute for Defense Analyses, theAerospace Corporation, theMIT Lincoln Laboratory, and other organizations supporting various departments within the United States Government. Similar to the above quasi-governmental organizations areFederal Advisory Committees. These groups, sometimes referred to as commissions, are a form of think tank dedicated to advising the US President or the executive branch of government. They typically focus on a specific issue and, as such, might be considered similar to special interest groups. However, unlike special interest groups, these committees have come under some oversight regulation and are required to make formal records available to the public. As of 2002, about 1,000 of these advisory committees were described in the FACA searchable database.[98] Research done by Enrique Mendizabal[99]shows that South American think tanks play various roles depending on their origins, historical development and relations to other policy actors. In this study, Orazio Bellettini fromGrupo FAROsuggests that they:[100] How a policy institute addresses these largely depends on how they work, their ideology vs. evidence credentials, and the context in which they operate, including funding opportunities, the degree and type of competition they have, and their staff. This functional method addresses the inherent challenge of defining a think tank. As Simon James said in 1998, "Discussion of think tanks...has a tendency to get bogged down in the vexed question of defining what we mean by 'think tank'—an exercise that often degenerates into futile semantics."[101]It is better (as in the Network Functions Approach) to describe what the organisation should do. Then the shape of the organisation should follow to allow this to happen. The following framework (based on Stephen Yeo's description of think tanks' mode of work) is described in Enrique Mendizabal's blog "onthinktanks": First, policy institutes may work in or base their funding on one or more of:[102] Second, policy institutes may base their work or arguments on: According to theNational Institute for Research Advancement, a Japanese policy institute, think tanks are "one of the main policy actors in democratic societies ..., assuring a pluralistic, open and accountable process of policy analysis, research, decision-making and evaluation".[103]A study in early 2009 found a total of 5,465 think tanks worldwide. Of that number, 1,777 were based in the United States and approximately 350 in Washington, DC, alone.[104] As of 2009,Argentinais home to 122 think tanks, many specializing inpublic policyandeconomicsissues. Argentina ranks fifth in the number of these institutions worldwide.[105] Working on public policies, Brazil hosts, for example,Instituto Liberdade, a university-based center at Tecnopuc inside thePontifícia Universidade Católica do Rio Grande do Sul, located in the South Region of the country, in the city ofPorto Alegre. Instituto Liberdade is among the Top 40 think tanks in Latin America and the Caribbean, according to the 2009 Global Go To Think Tanks Index,[106]a report from the University of Pennsylvania's Think Tanks and Civil Societies Program (TTCSP). Fundação Getulio Vargas(Getulio Vargas Foundation (FGV)) is a Brazilian higher education institution. Its original goal was to train people for the country's public- and private-sector management.
Today it hosts faculties (Law, Business, Economics, Social Sciences and Mathematics), libraries, and also research centers in Rio, São Paulo and Brasilia. It is considered byForeign Policymagazine to be a top-five "policymaker think tank" worldwide. TheIgarapé Instituteis a Brazilian think tank focusing on public, climate, and digital security.[107] According to a 2020 report, there are 32 think tanks or similar institutions in Armenia.[18] The government closed theNoravank Foundation, a government-affiliated think tank, in 2018 after almost two decades of operation. However, other think tanks continue to operate, including theCaucasus Institute, theCaucasus Research Resource Center-Armenia(CRRC-Armenia), which publishes the "Caucasus Barometer" annual public opinion survey of theSouth Caucasus, the "Enlight" Public Research Center, and the AMBERD research center at theArmenian State University of Economics.[108] According to research done by the University of Pennsylvania, there are a total of 12 think tanks in Azerbaijan. The Center for Economic and Social Development, or CESD (in Azeri: Azerbaijan İqtisadi və Sosial İnkişaf Mərkəzi, İSİM), is anAzerithink tank and non-profitNGObased inBaku, Azerbaijan. The center was established in 2005. CESD focuses on policy advocacy and reform, and is involved with policy research and capacity building. TheEconomic Research Center(ERC) is a policy-research oriented non-profit think tank established in 1999 with a mission to facilitate sustainable economic development and good governance in the new public management system of Azerbaijan. It seeks to do this by building favorable interactions between the public, private and civil society sectors and working with different networks at both the local (EITI NGO Coalition, National Budget Group, Public Coalition Against Poverty, etc.) and international levels (PWYP, IBP, ENTO, ALDA, PASOS, WTO NGO Network, etc.).[109] The Center for Strategic Studies under the President of Azerbaijan is a governmental, non-profit think tank founded in 2007. It focuses on domestic and foreign policy. According to theForeign Policy Research Institute, Russia has 112 think tanks, while Russian think tanks claimed four of the top ten spots in 2011's "Top Thirty Think Tanks in Central and Eastern Europe".[110] Notable Russian think tanks include: Turkish think tanksare relatively new, having emerged in the 1960s.[111]There are at least 20 think tanks in the country. Many of them are sister organizations of political parties, universities or companies; some are independent and others are supported by the government. Most Turkish think tanks provide research and ideas, yet they play less important roles in policy making than American think tanks. Turksam, Tasam and theJournal of Turkish Weeklyare the leading information sources. The oldest and most influential think tank in Turkey is ESAM (the Center for Economic and Social Research;Turkish:Ekonomik ve Sosyal Araştırmalar Merkezi), which was established in 1969 and has headquarters in Ankara. There are also branch offices of ESAM in Istanbul, Bursa, Konya and elsewhere. ESAM has strong international relationships, especially with Muslim countries and societies. Ideologically, it pursues policies, produces ideas and manages projects in parallel toMilli Görüş, and also influences political parties and international strategies.
The founder and leader of Milli Görüş,Necmettin Erbakan, was closely involved in the activities and brainstorming events of ESAM. In the Republic of Turkey, two presidents, four prime ministers, various ministers, many members of the parliament, and numerous mayors and bureaucrats have been members of ESAM. Currently the General Chairman of ESAM isRecai Kutan(former minister for two different ministries, former main opposition party leader, and founder and General Chairman of theSaadet Party).[citation needed] TheTurkish Economic and Social Studies Foundation(TESEV) is another leading think tank. Established in 1994, TESEV is an independent non-governmental think tank, analyzing social, political and economic policy issues facing Turkey. TESEV has raised issues about Islam and democracy, combating corruption, state reform, and transparency and accountability. TESEV serves as a bridge between academic research and policy-making. Its core program areas are democratization, good governance, and foreign policy.[112] Other notable Turkish think tanks are theInternational Strategic Research Organisation(USAK), theFoundation for Political, Economic and Social Research(SETA), and theWise Men Center for Strategic Studies(BİLGESAM). A poll by the British firm Cast From Clay found that only 20 percent of Americans trusted think tanks in 2018.[113]
https://en.wikipedia.org/wiki/Think_tank
“Anti-rival good” is aneologismsuggested bySteven Weber. According to his definition, it is the opposite of arival good. The more people share an anti-rival good, the more utility each person receives. Examples includesoftwareand other information goods created through the process ofcommons-based peer production. An anti-rival good meets the test of apublic goodbecause it is non-excludable (freely available to all) and non-rival (consumption by one person does not reduce the amount available for others). However, it has the additional quality of being created by private individuals for common benefit without being motivated by purealtruism, because the individual contributor also receives benefits from the contributions of others. Lawrence Lessigdescribedfree and open-source softwareas anti-rivalrous: "It's not just that code is non-rival; it's that code in particular, and (at least some) knowledge in general, is, as Weber calls it, 'anti-rival'. I am not only not harmed when you share an anti-rival good: I benefit."[1] The production of anti-rival goods typically benefits fromnetwork effects. Leung (2006)[2]quotes from Weber (2004), "Under conditions of anti-rivalness, as the size of the Internet-connected group increases, and there is a heterogeneous distribution of motivations with people who have a high level of interest and some resources to invest, then the large group is more likely,all things being equal, to provide the good than is a small group."[3] Although this term is a neologism, this category of goods may be neither new nor specific to theInternetera. According to Lessig, English also meets the criteria, as anynatural languageis an anti-rival good.[4]The term also invokesreciprocityand the concept of agift economy. Nikander et al. insist that somedata setsare anti-rivalrous. This claim rests on three observations:[5] Of course, this assumes that the data shared does not involve uses that would likely harm humans.[7]
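As a toy numerical illustration of the distinction drawn above (a hedged sketch of the article's idea, not a model taken from Weber or Lessig), per-user utility can be written as a function of the number of people sharing a good: it shrinks for a rival good that must be divided, stays flat for an ordinary non-rival public good, and grows for an anti-rival good, where each additional sharer also contributes something back:

# Toy model, illustrative only: per-user utility u(n) for n users sharing a good.
def rival(n, total_value=100.0):
    # A fixed stock divided among users: each person's share shrinks as n grows.
    return total_value / n

def non_rival(n, value=1.0):
    # A public good: one person's use does not diminish anyone else's.
    return value

def anti_rival(n, value=1.0, contribution=0.1):
    # Each extra sharer contributes something (bug fixes, documentation,
    # network effects), so per-user utility rises with n.
    return value + contribution * (n - 1)

for n in (1, 10, 100):
    print(n, round(rival(n), 2), non_rival(n), round(anti_rival(n), 2))

Running the loop shows per-user utility falling for the rival good and rising for the anti-rival one as the number of sharers grows, which is the property the term is meant to capture.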
https://en.wikipedia.org/wiki/Anti-rival_good
TheCarr–Benkler wagerbetweenYochai BenklerandNicholas Carrconcerned the question of whether the most influential sites on the Internet would bepeer-producedorprice-incentivizedsystems. Thewagerwas proposed by Benkler in July 2006 in a comment to a blog post where Carr criticized Benkler's views about volunteer peer-production. Benkler believed that by 2011 the major sites would have content provided by volunteers in what Benkler callscommons-based peer production, as inWikipedia,reddit,FlickrandYouTube. Carr argued that the trend would favor content provided by paid workers, as in most traditional news outlets.[1][2][3][4] In May 2012 Carr resurrected the discussion, arguing that he had clearly won the wager, pointing out that the most popular blogs and online videos at that time were corporate productions.[5]Benkler replied with a rebuttal shortly after,[6]arguing that the only way Carr could be seen to have won is if social software was considered as commercial content.Gigaomwriter Matthew Ingram stated that "Benkler has clearly won. While there are large corporate entities with profit-oriented motives involved in the web, a group that includes Facebook and Twitter, the bulk of the value that is produced in those networks and services comes from the free behavior of crowds of users."[7] The early exchange of arguments from the two sides shows the gulf between two opposing perspectives: Carr looks at the market-oriented outcome of a then-nascent digital economy, while Benkler looks at the peer-based process on which the market capitalizes.[8]There are many layers at which this tension can be observed. First, there is a subtle difference between peer production andcommons-based peer production(CBPP). On one hand, for-profit initiatives, such as Facebook or Google, utilize peer production practices to maximize shareholder value. On the other hand, commons-oriented initiatives, such as Wikipedia,L’Atelier Paysan,Farm Hackor FOSS projects, utilize such practices to maximize sharing and commons creation.[9] Second, even though the majority of the most influential websites seem to be run by commercial companies, a considerable part of their technological infrastructure, as well as nearly all software used by Fortune 500 companies and governments, is based on CBPP: from Apache, the most popular web server, toLinux, on which the top-500 supercomputers run, to WordPress, the most popular content management system, to OpenSSL, the most widely used encryption library for securing transactions.[10] Finally, CBPP draws from a diverse set of motivations. Contributors participate to gain knowledge, to produce something useful for themselves, to build their social capital, to communicate and have a sense of belonging, but also to get financial rewards.[11][12]So, price-incentivized production does exist in CBPP, but it is relegated to a peripheral role.[13]Moreover, public infrastructure and institutions make the digital economy possible to begin with, by regulating the conditions under which service providers can offer services, information is transmitted and users get access to it. It is only after all the above are in place that competition and price-incentives can actually function. Hence, the dominance of one modality over the other is not an outcome of "natural selection" but rather a result of political definition. The state steers competition and profit motives, implicitly rationalizing the resulting economic outcomes in the way they are measured in business and national accounts.
Likewise, the state could use similar levers to enable and support the direct creation of public-purpose value by civil society and commons-based enterprises.[9][14]
https://en.wikipedia.org/wiki/Carr%E2%80%93Benkler_wager
Co-creation, in the context of a business, refers to a product or service design process in which input from consumers plays a central role from beginning to end. Less specifically, the term is also used for any way in which a business allows consumers to submit ideas, designs or content. In this way, the firm gains a continuing supply of design ideas and at the same time strengthens the business relationship between the firm and its customers. Another meaning is the creation of value by ordinary people, whether for a company or not. The first person to use the "co-" in "co-creation" as a marketing prefix was Koichi Shimizu, a professor at Josai University, who introduced "co-marketing" at the Japan Society of Commerce's national conference in 1979; later marketing terms beginning with "co-" trace back to this usage.[1][2] Aric Rindfleisch and Matt O'Hern define customer co-creation in digital marketing as "a collaborative NPD (new product development) activity in which customers actively contribute and/or select the content of a new product offering" and state that, like all NPD processes, it consists of two steps, namely contribution (of content) and selection (of the best contributions).[3] Rindfleisch and O'Hern categorize different types of co-creation in digital marketing based on how strict the requirements on submissions are (fixed vs. open) and whether the selection is done by the customers themselves or by the firm (firm-led vs. customer-led). They distinguish four types of co-creation, which roughly correspond to the four possible combinations of the contribution and selection styles, as described below. According to O'Hern and Rindfleisch (2019), the best example of open contribution and customer-led selection is open-source software. They note that while open-source software is not typically commercial, some firms use it as part of their strategy; the examples they give are of Sun Microsystems with NetBeans and of IBM paying people to improve Linux. According to Aric Rindfleisch, collaboration is the form in which companies have the least control, while submission is the form that provides companies with the most control.[4] O'Hern and Rindfleisch describe what they call "tinkering" as having less open contribution than "collaborating". Customers are allowed to tinker with the product, but only in certain ways, and to make their creations available to others, but only under certain conditions. They give the examples of mods for video games and public APIs. Whether or not the user creations are incorporated into the official product is decided by the firm, so selection is firm-led. "Co-designing", as described by O'Hern and Rindfleisch, is a type of co-creation process in which a number of customers, the "co-designers", submit product designs to the firm, with a larger group of customers selecting which designs the firm will produce. With co-designing, there are often relatively strict submission requirements, so it is categorized as having fixed contribution. They give the example of Threadless, a website selling user-designed T-shirts, which typically accepted two percent of customer submissions for product designs. Another example is Lego, which introduced the "Lego Ideas" platform to invite users to contribute new designs. Once a design garners 10,000 supporters, Lego forwards the idea to an expert review phase; if it is selected by the experts, Lego produces it and sells it in its stores. As this shows, there are also hybrids of the different co-creation types.
In this case, the Lego Ideas platform combines elements of co-designing and submitting, because both the customers and the company choose the final product. O'Hern and Rindfleisch's concept of "submitting" is closest to traditional NPD in that the selection of ideas is entirely done by the firm and there are often strict criteria that contributions must follow. "Submitting"-type co-creation is different from traditional market research in that the firm asks people to come up with their own detailed solutions or designs, rather than just answering pre-determined questions. According to O'Hern and Rindfleisch, typical examples of this type of co-creation are a firm organizing a competition or using a crowdsourcing platform like InnoCentive. Selected ideas are often rewarded with money. In 1972, Koichi Shimizu devised the "4Cs of the marketing mix" (Commodity, Cost, Communication, Channel). "Commodity" here derives from the Latin commodus, read as "comfortable together", and is interpreted as co-created goods and services.[5] In 1979, Koichi Shimizu announced the concept of "co-marketing" at an academic conference. Its framework is the "7Cs Compass Model", which is shown by the 7Cs and the compass needle (NWSE): [Corporation (C-O-S)]: Competitor, Organization, Stakeholder; [4Cs]: Commodity, Cost, Communication, Channel; [Consumer]: Needs, Wants, Security, Education; [Circumstances]: National and international, Weather, Social and cultural, Economic. Commodity in this model again refers to the Latin commodus ("both convenient and comfortable") and therefore stands for co-created goods and services.[6] In their review of the literature on "customer participation in production", Neeli Bendapudi and Robert P. Leone found that the first academic work dates back to 1979.[7] In 1990, John Czepiel suggested that customer participation may lead to greater customer satisfaction.[8] Also in 1990, Scott Kelley, James Donnelly and Steven J. Skinner suggested other ways to look at customer participation: quality, employee performance, and emotional responses.[9] An article by R. Normann and R. Ramirez written in 1993 suggests that successful companies do not focus on themselves or even on the industry but on the value-creating system.[10] Michel, Vargo and Lusch acknowledged that something similar to their concept of co-creation can be found in Normann's work; in particular, they consider his idea of "density of offerings" to be valuable.[11] In 1995, Michael Schrage argued that not all customers are alike in their capacity to bring some kind of knowledge to the firm.[12] In 1995, Firat, Dholakia, and Venkatesh introduced the concept of "customerization" as a form of buyer-centric mass customization and stated that it would enable consumers to act as co-producers.[13] However, Bendapudi and Leone (2003) conclude that "the assumption of greater customization under co-production may hold only when the customer has the expertise".[7] The term "co-creation" was initially framed as a strategy by Kambil and coauthors in two articles in 1996 and 1999.[14] In "Reinventing Value Propositions" (1996), Kambil, Ginsberg and Bloch present co-creation as a strategy for transforming value propositions by working with customers or complementary resources.[15] In "Co-creation: A new source of value" (1999), Kambil, Friesen and Sundaram present co-creation as an important source of value enabled by the Internet and analyze what risks companies must consider in utilizing this strategy.[16] In 2000, C. K.
Prahalad and Venkat Ramaswamy popularized[17] the concept in their article "Co-Opting Customer Competence". In their book The Future of Competition (2004), they defined co-creation as the "joint creation of value by the company and the customer; allowing the customer to co-construct the service experience to suit their context".[18] Also in 2004, Vargo and Lusch introduced their "service-dominant logic" of marketing. One of its "foundational premises" was that "the customer is always a coproducer". Prahalad commented that the authors did not go far enough.[19] In 2006, Kalaignanam and Varadarajan analyzed the implications of information technology for co-creation.[citation needed] They state that developments in IT will support co-creation. They introduce a conceptual model of customer participation as a function of the characteristics of the product, the market, the customer and the firm. They suggest demand-side issues may have a negative effect on satisfaction. In the mid-2000s, co-creation and similar concepts such as crowdsourcing and open innovation were popularized greatly, for instance by the book Wikinomics. In 2013 Jansen and Pieters argued that co-creation is often used as a buzzword and can mean many different things. The term is used incorrectly to refer to forms of market research, such as focus groups or social media analysis. They say simply working together with or collecting input from customers is also not enough to be called co-creation; it should only be called co-creation if "the end user plays an active role and it is a continuous process".[20] They introduced the term "complete co-creation" for this, with the following definition: "a transparent process of value creation in ongoing, productive collaboration with, and supported by all relevant parties, with end-users playing a central role" (Jansen and Pieters, 2017, p. 15).[21] Complete co-creation is regarded[21] as a practical answer to the predominantly academic and holistic understanding of co-creation. The concept of value co-creation has also been applied to the educational field,[22][23] where Dolinger, Lodge & Coates define it as the "process of students' feedback, opinions, and other resources such as their intellectual capabilities and personalities, integrated alongside institutional resources".[24] Co-creation can be seen as a new way of thinking about the economic concept of "value."[25] Prahalad & Ramaswamy describe it as a "consumer-centric" view in opposition to the traditional "company-centric" view.[26] In the traditional view, the consumer is not part of the value creation process, while in the consumer-centric view the consumer plays a key role in it. In the traditional view, the company decides on the methods and structure of the process, while in the consumer-centric view the consumer can influence those. In the traditional view, the goal is to extract value from consumers in the form of money, while in the consumer-centric view the goal is to create value together for both consumer and company. In the traditional view, there is one point of exchange controlled by the company, while in the consumer-centric view there are multiple points of exchange where company and consumers come together.[27] Prahalad and Ramaswamy[28] suggested that certain fundamental requirements should be in place before co-creation can be applied. Co-created value arises in the form of personalized experiences for the customer and ongoing revenue, learning, customer loyalty and word of mouth for the firm.
Co-creation also enables customers to come up with their own ideas, which might help the firm. Ramaswamy and his co-author Francis Gouillart wrote: "Through their interactions with thousands of managers globally who had begun experimenting with co-creation, they discovered that enterprises were building platforms that engaged not only the firm and its customers but also the entire network of suppliers, partners, and employees, in a continuous development of new experiences with individuals."[30] There are two key steps involved in co-creation: 1. Contribution – customers must submit contributions; 2. Selection – the firm must select a few valuable contributions from a larger set. As outlined by Fisher and Rindfleisch (2023), these two steps face the following challenges. Contribution challenges: 1. attracting and motivating external contributors; 2. incremental ideas; 3. the "Rule of One." Selection challenges: 1. protecting contributors' egos; 2. harvesting co-created value; 3. maintaining the peace.[31] If the submitted ideas highlight negative sides of the firm's products or services, there is a risk of damage to the brand image. A design contest or other co-creation event may backfire and lead to negative word of mouth if the expectations of the participants are not met.[32] The risk of the selection process is that most submissions are not very useful, being impractical or difficult to implement. Firms have to deal with the submitted ideas in a very subtle way, as throughout the process they do not want to reject customer submissions and risk alienating the contributors, which may eventually lead to customer disengagement. Unless customers are incentivized in an attractive way, they may be reluctant to participate and benefit the company. Due to these risks, co-creation is an attractive but uncertain innovation strategy. In fact, several prominent co-creation examples have failed, including Local Motors, Dell IdeaStorm and My Starbucks Idea.[31] In order to help firms assess and manage these risks, Fisher & Rindfleisch (2023) developed the Co-Creation Readiness Scorecard. This scorecard asks 12 questions (six focused on contribution readiness and six on selection readiness).[31]
https://en.wikipedia.org/wiki/Co-creation
Cognitive Surplus: How Technology Makes Consumers into Collaborators is a 2010 non-fiction book by Clay Shirky, originally published with the subtitle "Creativity and Generosity in a Connected Age". The book is an indirect sequel to Shirky's Here Comes Everybody, which covered the impact of social media. Cognitive Surplus focuses on describing the free time that individuals have to engage with collaborative activities within new media. Shirky's text seeks to show that global transformation can come from individuals committing their time to actively engaging with technology. The overall response has been mixed, with some critics praising Shirky's insights but also decrying some of the shortcomings of his theory. Clay Shirky has long been interested in, and published works concerning, the Internet and its impact on society. He currently works at New York University, where he "has been making the case that the Internet is an inherently participatory and social medium".[1] Shirky wrote this book two years after its predecessor, Here Comes Everybody, which relates to the Internet and the organization of people, was published. In it, Shirky argues that "As the Internet radically reduces the costs of collective action for everyone, it will transform the relationship between ordinary individuals and the large, hierarchical institutions that were a dominant force in 20th-century societies".[2] This transformation of relationships between individuals is a concept Shirky builds on in Cognitive Surplus. A central concern Shirky had in mind when writing it was illuminating the difference between communal and civic values, and how the Internet is a vehicle for both. In particular, he was interested in showing "effusions of people pooling their spare time and talent" and showing how we can create a culture "that celebrates the creation of civic value".[3] Shirky has stated that he is interested in exploring "the changes in the way people collaborate"[4] that are spurred on by technology and new media, and these changes are a large part of what Cognitive Surplus is devoted to examining. Topics that Shirky frequently writes about include network economics, media and community, Internet globalization, and open-source software.[5] He has also been featured in many magazines and journals, including The New York Times, The Wall Street Journal and Harvard Business Review.[6] Shirky argues that since the 1940s people have been learning how to use free time more constructively for creative acts rather than consumptive ones, particularly with the advent of online tools that allow new forms of collaboration. While Shirky acknowledges that the activities we use our cognitive surplus for may be frivolous (such as creating LOLcats),[7] the trend as a whole is leading to valuable and influential new forms of human expression. The forms of human collaboration that he argues the Internet provides fall into four categories of varying degrees of value: personal, communal, public, and civic.[8] Shirky argues that while all of these are legitimate uses of "cognitive surplus", the civic value (the power to actually change society) that social media provides is what should be celebrated about the Internet.
The negative criticisms largely address the issue of negative uses of cognitive surplus.[citation needed]For example, Shirky discusseslolcatsin the book, but this is a pretty innocuous example of negative or trite uses of cognitive surplus, especially considering the reality ofcyber crimes, and other far more drastically negative uses. The main criticism of Shirky is that he is not realistic about the many possible ways we might waste this cognitive surplus, or worse, the many terrible ways it can and is being used for destructive and criminal activities, for example theglobal Jihadist movement.[citation needed]On the positive side, Shirky is praised for explaining the potential opportunities we can harness. He shows us effectively that we can not only make better use of our time, but also, that technology enables us to do so in a way that maximizes our ability to share and communicate.[citation needed] One criticRussell Davieswrites, "There are revealing thoughts in every chapter and they're particularly important for people trying to do business on the internet, because they shed light on some fundamental motivations and forces that we often miss or misconstrue".[9]Sorin Adam Matei ofPurdue University,West Lafayettewrites, "Despite shortcomings,Cognitive Surplusremains overall a very well-written and generally well-informed contribution to our discussion about the social effects ofsocial media. The academic research that shapes some of its assumptions and conclusions is well translated in everyday language,"[10]Davies describes Shirky as "the best and most helpful writer about the internet and society there is."[9]He praised the book for elucidating the power of new technology for business. Davies says Shirky elucidated the personal/public media distinction "That explains a lot to me. It's obvious when you read it but failing to grasp the fused state of public/personal media is responsible for a lot of the things we get wrong online. We often take it to be a commercial, public media space (and we always seem to be looking for another small group of professionals out there to deal with)—but it's not just that. Things that are perfectly appropriate in public media just don't work in personal media. 
You wouldn't steam open people's letters and insert magazine ads, but that's sometimes how we seem to behave."[9] Upon its release, Cognitive Surplus was praised by Tom Chatfield of The Guardian and James Harkin of The Financial Times, who were both complimentary of Shirky's depiction of the Internet and its effect on society.[11][12] His approach has been criticized by Farhad Manjoo in The New York Times for being too academic and for cheerleading positive examples of the online use of cognitive surplus.[13] Similarly, Lehmann describes it as "the latest, monotonous installment in the sturdy tradition of exuberant web yay-saying."[14] Lehmann's review points to the contradictions in Shirky's argument about quality being democratized, writing that hailing "a cascade of unrefereed digital content as a breakthrough in creativity and critical thought is roughly akin to greeting news of a massive national egg recall by laying off the country's food inspectors."[14] Moreover, he objects to Shirky's selective use of anecdotes to support his point.[14] Meanwhile, he finds it disorienting and obscene to suggest the web is heralding a new economy and abolishing class in the midst of financial distress and joblessness.[14] He also questions Shirky's assumption that free time was squandered prior to the web and suggests instead that people did useful things with their time. Furthermore, he questions the intrinsic value of time spent online, as a lot of time spent online may be used for things like gambling and porn; there is nothing innately compassionate or generous about the web, and for any good thing people do online, someone could also be doing something bad with the internet.[14] Lehmann also suggests that a cognitive surplus raises a question about what the baseline value of time spent was to begin with, "one", he claims, "that might be better phrased as either 'Surplus for what?' or 'Whose surplus, white man?'"[14] In the same vein, Lehmann accuses Shirky of being myopic: Shirky says the worst thing on the web is LOLcats, when actually there are far worse things, such as fake Obama birth certificates.[14] Shirky says you cannot communicate with society on the basis of a web search, to which Lehmann responds: "The idea of society as a terminally unresponsive, nonconversant entity would certainly be news to the generations of labor and gender-equality advocates who persistently engaged the social order with demands for the ballot and the eight-hour workday. It would likewise ring strangely in the ears of the leaders of the civil rights movement, who used a concerted strategy of nonviolent protest as a means of addressing an abundance-obsessed white American public who couldn't find the time to regard racial inequality as a pressing social concern. The explicit content of such protests, meanwhile, indicted that same white American public on the basis of the civic and political standards—or rather double standards—of equality and opportunity that fueled the nation's chauvinist self-regard."[14] Shirky bases a lot of his conclusions about generosity on the Ultimatum Game experiment, to which Lehmann objects: "The utility of the Ultimatum Game for a new market enabled theory of human nature thins out considerably when one realizes that the players are bartering with unearned money."
and if you "Consult virtually any news story following up on alottery winner's post-windfall life—to say nothing of the well-chronicledimplosion of the past decade's market in mortgage backed securities—and you'll get a quick education in how playing games with other people's money can have a deranging effect on human behavior."[14] Lehmann also criticizes Shirky's expectation of the web to change economics and governmental systems. For example, he criticizes Shirky's idealising of amateurism: As forcrowdsourcingbeing a "labor of love" (Shirky primly reminds us that the term "amateur" "derives from the Latin amare—'to love'"), the governing metaphor here wouldn't seem to be digital sharecropping so much as the digital plantation. For all too transparent reasons of guilt sublimation, patrician apologists for antebellum slavery also insisted that their uncompensated workers loved their work, and likewise embraced their overseers as virtual family members. This is not, I should caution, to brand Shirky as a latter-day apologist for slavery but rather to note that it's an exceptionally arrogant tic of privilege to tell one's economic inferiors, online or off, what they do and do not love, and what the extra-material wellsprings of their motivation are supposed to be. To use an old-fashioned Enlightenment construct, it's at minimum an intrusion into a digital contributor's private life—even in the barrier-breaking world of Web 2.0 oversharing and friending. The just and proper rejoinder to any propagandist urging the virtues of uncompensated labor from an empyrean somewhere far above mere "society" is, "You try it, pal." The idea of crowdsourcing as a more egalitarian economic tool also draws criticism saying crowdsourcing is just cost-cutting, much akin tooutsourcing.[14]The possibility for the web to fundamentally change the government is also questioned as Cognitive Surplus is already aging badly, with theWikiLeaksfuror showing just how little web-based traffic in raw information, no matter how revelatory or embarrassing, has upended the lumbering agendas of the old nation-state on the global chessboard of realpolitik—a place where everything has a price, often measured in human lives. More than that, though, Shirky's book inadvertently reminds us of the lesson we should have absorbed more fully with the 2000 collapse of the high-tech market: theutopianenthusiasms of our country's cyber-elite exemplify not merely what the historianE.P. Thompsoncalled "the enormous condescension of posterity" but also a dangerous species of economic and civic illiteracy.[14] The WesternCold Warattitude has spawned a delusion about the power of information spreading to topple authoritarian regimes. This will not be the case in Eastern countries.[14]Paul Barrett takes a similar though softer stance, claiming all of Shirky's examples are relatively tame and mildly progressive. Moreover, Shirky presents everything as civic change when some things such ascarpoolingservices are really stretching the term.[15] According to Matei, "A broader conclusion of the book is that converting 'cognitive surplus' intosocial capitaland collective action is the product of technologies fueled by the passion of affirming the individual need for autonomy and competence." 
At times, his enthusiasm for social media and for the Internet produces overdrawn statements.[14] Author Jonah Lehrer criticized what he saw as Shirky's premise that forms of consumption, cultural consumption in particular, are inherently less worthy than producing and sharing.[16]
https://en.wikipedia.org/wiki/Cognitive_Surplus
Software development is the process of designing and implementing a software solution to satisfy a user. The process is more encompassing than programming, writing code, in that it includes conceiving the goal, evaluating feasibility, analyzing requirements, design, testing and release. The process is part of software engineering, which also includes organizational management, project management, configuration management and other aspects.[1] Software development involves many skills and job specializations, including programming, testing, documentation, graphic design, user support, marketing, and fundraising. Software development involves many tools, including compilers, integrated development environments (IDEs), version control, computer-aided software engineering tools, and word processors. The details of the process used for a development effort vary. The process may follow a formal, documented standard, or it can be customized and emergent for the development effort. The process may be sequential, in which each major phase (i.e. design, implement and test) is completed before the next begins, but an iterative approach – where small aspects are separately designed, implemented and tested – can reduce risk and cost and increase quality. Each of the available methodologies is best suited to specific kinds of projects, based on various technical, organizational, project, and team considerations.[3] Another focus in many programming methodologies is the idea of trying to catch issues such as security vulnerabilities and bugs as early as possible (shift-left testing) to reduce the cost of tracking and fixing them.[13] In 2009, it was estimated that 32 percent of software projects were delivered on time and on budget, and with the full functionality. An additional 44 percent were delivered, but missing at least one of these features. The remaining 24 percent were cancelled prior to release.[14] Software development life cycle refers to the systematic process of developing applications.[15] The sources of ideas for software products are plentiful. These ideas can come from market research, including the demographics of potential new customers, existing customers, sales prospects who rejected the product, other internal software development staff, or a creative third party. Ideas for software products are usually first evaluated by marketing personnel for economic feasibility, fit with existing channels of distribution, possible effects on existing product lines, required features, and fit with the company's marketing objectives. In the marketing evaluation phase, the cost and time assumptions are evaluated.[16] The feasibility analysis estimates the project's return on investment, its development cost and its timeframe. Based on this analysis, the company can make a business decision to invest in further development.[17] After deciding to develop the software, the company is focused on delivering the product at or below the estimated cost and time, and with a high standard of quality (i.e., lack of bugs) and the desired functionality.
Nevertheless, most software projects run late, and sometimes compromises are made in features or quality to meet a deadline.[18] Software analysis begins with a requirements analysis to capture the business needs of the software.[19] Challenges for the identification of needs are that current or potential users may have different and incompatible needs, may not understand their own needs, and may change their needs during the process of software development.[20] Ultimately, the result of analysis is a detailed specification for the product that developers can work from. Software analysts often decompose the project into smaller objects, components that can be reused for increased cost-effectiveness, efficiency, and reliability.[19] Decomposing the project may enable a multi-threaded implementation that runs significantly faster on multiprocessor computers.[21] During the analysis and design phases of software development, structured analysis is often used to break down the customer's requirements into pieces that can be implemented by software programmers.[22] The underlying logic of the program may be represented in data-flow diagrams, data dictionaries, pseudocode, state transition diagrams, and/or entity relationship diagrams.[23] If the project incorporates a piece of legacy software that has not been modeled, this software may be modeled to help ensure it is correctly incorporated with the newer software.[24] Design involves choices about the implementation of the software, such as which programming languages and database software to use, or how the hardware and network communications will be organized. Design may be iterative, with users consulted about their needs in a process of trial and error. Design often involves people who are expert in aspects such as database design, screen architecture, and the performance of servers and other hardware.[19] Designers often attempt to find patterns in the software's functionality to spin off distinct modules that can be reused with object-oriented programming. An example of this is the model–view–controller, an interface between a graphical user interface and the backend.[25] The central feature of software development is creating and understanding the software that implements the desired functionality.[26] There are various strategies for writing the code. Cohesive software has various components that are independent from each other.[19] Coupling is the interrelation of different software components, which is viewed as undesirable because it increases the difficulty of maintenance.[27] Often, software programmers do not follow industry best practices, resulting in code that is inefficient, difficult to understand, or lacking documentation on its functionality.[28] These standards are especially likely to break down in the presence of deadlines.[29] As a result, testing, debugging, and revising the code become much more difficult. Code refactoring, for example adding more comments to the code, is a solution to improve the understandability of code.[30] Testing is the process of ensuring that the code executes correctly and without errors. Debugging is performed by each software developer on their own code to confirm that the code does what it is intended to.
In particular, it is crucial that the software runs on all inputs without failing, even if some of the results are incorrect.[31] Code reviews by other developers are often used to scrutinize new code added to the project, and according to some estimates dramatically reduce the number of bugs persisting after testing is complete.[32] Once the code has been submitted, quality assurance—a separate department of non-programmers for most large companies—tests the accuracy of the entire software product. Acceptance tests derived from the original software requirements are a popular tool for this.[31] Quality testing also often includes stress and load checking (whether the software is robust to heavy levels of input or usage), integration testing (to ensure that the software is adequately integrated with other software), and compatibility testing (measuring the software's performance across different operating systems or browsers).[31] When tests are written before the code, this is called test-driven development.[33] Production is the phase in which software is deployed to the end user.[34] During production, the developer may create technical support resources for users[35][34] or a process for fixing bugs and errors that were not caught earlier. There might also be a return to earlier development phases if user needs changed or were misunderstood.[34] Software development is performed by software developers, usually working on a team. Efficient communication between team members is essential to success. This is more easily achieved if the team is small, used to working together, and located near each other.[36] Communication also helps identify problems at an earlier stage of development and avoid duplicated effort. Many development projects avoid the risk of losing essential knowledge held by only one employee by ensuring that multiple workers are familiar with each component.[37] Software development involves professionals from various fields, not just software programmers but also product managers who set the strategy and roadmap for the product,[38] and individuals specialized in testing, documentation writing, graphic design, user support, marketing, and fundraising.
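To illustrate the test-driven development approach mentioned above, the following is a minimal sketch in Python; the function name, its behaviour and the test cases are invented for the example and are not drawn from the cited sources. The tests are written first and fail until the function beneath them is implemented.

import unittest

def parse_price(text):
    """Convert a user-supplied price string such as '$1,299.50' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

class TestParsePrice(unittest.TestCase):
    # These tests are written before parse_price exists; each one encodes a requirement.
    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_currency_symbol_and_thousands_separator(self):
        self.assertEqual(parse_price("$1,299.50"), 1299.50)

    def test_invalid_input_raises_instead_of_failing_silently(self):
        # An acceptance-style check: bad input should raise a clear error.
        with self.assertRaises(ValueError):
            parse_price("not a price")

if __name__ == "__main__":
    unittest.main()

Run with "python -m unittest", the suite initially fails, and the implementation is then written or revised until all tests pass, matching the write-tests-first workflow described above.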
Although workers for proprietary software are paid, most contributors to open-source software are volunteers.[39] Alternatively, they may be paid by companies whose business model does not involve selling the software but something else—such as services and modifications to open-source software.[40] Computer-aided software engineering (CASE) is a set of tools for the partial automation of software development.[41] CASE enables designers to sketch out the logic of a program, whether one to be written or an already existing one, to help integrate it with new code or to reverse engineer it (for example, to change the programming language).[42] Documentation comes in two forms that are usually kept separate—that intended for software developers, and that made available to the end user to help them use the software.[43][44] Most developer documentation is in the form of code comments for each file, class, and method that cover the application programming interface (API)—how the piece of software can be accessed by another—and often implementation details.[45] This documentation is helpful for new developers to understand the project when they begin working on it.[46] In agile development, the documentation is often written at the same time as the code.[47] User documentation is more frequently written by technical writers.[48] Accurate estimation is crucial at the feasibility stage and in delivering the product on time and within budget. The process of generating estimations is often delegated by the project manager.[49] Because the effort estimation is directly related to the size of the complete application, it is strongly influenced by the addition of features in the requirements—the more requirements, the higher the development cost. Aspects not related to functionality, such as the experience of the software developers and code reusability, are also essential to consider in estimation.[50] As of 2019[update], most of the tools for estimating the amount of time and resources for software development were designed for conventional applications and are not applicable to web applications or mobile applications.[51] An integrated development environment (IDE) supports software development with enhanced features compared to a simple text editor.[52] IDEs often include automated compiling, syntax highlighting of errors,[53] debugging assistance,[54] integration with version control, and semi-automation of tests.[52] Version control is a popular way of managing changes made to the software. Whenever a new version is checked in, the software saves a backup of all modified files. If multiple programmers are working on the software simultaneously, it manages the merging of their code changes. The software highlights cases where there is a conflict between two sets of changes and allows programmers to fix the conflict.[55] A view model is a framework that provides the viewpoints on the system and its environment, to be used in the software development process. It is a graphical representation of the underlying semantics of a view. The purpose of viewpoints and views is to enable human engineers to comprehend very complex systems and to organize the elements of the problem around domains of expertise.
In the engineering of physically intensive systems, viewpoints often correspond to capabilities and responsibilities within the engineering organization.[56] Fitness functions are automated and objective tests that ensure new developments do not deviate from the established constraints, checks and compliance controls.[57] Intellectual property can be an issue when developers integrate open-source code or libraries into a proprietary product, because many open-source licenses used for software (those with copyleft terms) require that modifications be released under the same license. As an alternative, developers may choose a proprietary alternative or write their own software module.[58]
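As a rough illustration of the kind of fitness function described above, a team might keep an automated check in its test suite that fails the build whenever an architectural constraint is violated. The sketch below is a hypothetical Python example; the package layout (an "app/ui" directory that must not import anything from an "app.database" package) is an assumption made for illustration, not a rule taken from the cited source.

import ast
from pathlib import Path

FORBIDDEN_PREFIX = "app.database"   # hypothetical layer the UI may not depend on
UI_PACKAGE = Path("app/ui")         # hypothetical location of the UI layer

def test_ui_does_not_import_database_layer():
    # Walk every UI module, parse it, and record any import of the forbidden package.
    violations = []
    for source_file in UI_PACKAGE.rglob("*.py"):
        tree = ast.parse(source_file.read_text(), filename=str(source_file))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imported = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                imported = [node.module or ""]
            else:
                continue
            if any(name.startswith(FORBIDDEN_PREFIX) for name in imported):
                violations.append(str(source_file))
    # The build fails as soon as any UI module couples itself to the database layer.
    assert not violations, f"UI modules import the database layer directly: {violations}"

Run as part of the regular test suite (for example, under pytest), such a check gives the architectural constraint the same automated, objective enforcement as any other test.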
https://en.wikipedia.org/wiki/Collaborative_software_development_model
Common ownershiprefers to holding the assets of an organization,enterprise, or community indivisibly rather than in the names of the individual members or groups of members as common property. Forms of common ownership exist in everyeconomic system. Common ownership of themeans of productionis a central goal ofsocialistpolitical movements as it is seen as a necessarydemocraticmechanism for the creation and continued function of acommunist society. Advocates make a distinction betweencollective ownershipand common property (thecommons) as the former refers to property owned jointly by agreement of a set of colleagues, such as producercooperatives, whereas the latter refers to assets that are completely open for access, such as a public park freely available to everyone.[1][2] TheEarly Church of Jerusalemshared all their money and possessions (Acts of the Apostles 2 and 4).[3][4]Inspired by theearly Christians, many Christians have since tried to follow their example ofcommunity of goodsand common ownership.[5]Common ownership is practiced by some Christian groups, such as theHutterites(for about 500 years), theBruderhof Communities(for some 100 years), and others.[6][7]In those cases, property is generally owned by a charity set up for the purpose of maintaining the members of the religious groups.[8][9]Christian communiststypically regard biblical texts inActs 2andActs 4as evidence that the first Christians lived in acommunist society.[10][11][12]Additionally, the phrase "To each according to his needs" has a biblical basis in Acts 4:35, which says "to the emissaries to distribute to each according to his need".[13][14] Common ownership is practiced by large numbers of voluntary associations and non-profit organizations, as well as implicitly by all public bodies. While cooperatives generally align with collectivist and socialist economics,retailers' cooperativesin particular exhibit elements of common ownership, and their retailer members may be individually owned. Some individuals and organizations intentionally produce or supportfree content, includingopen sourcesoftware,public domainworks, andfair usemedia.[15][16]Mutual aidis a form of common ownership that is practiced on small scales within capitalist economies, particularly among marginalized communities,[17][18][19][20]and during emergencies such as theCOVID-19 pandemic.[21][22][23][24] Manysocialist movements, includingMarxist,anarchist,reformist, andcommunalistmovements, advocate the common ownership of the means of production by all of society as an eventual goal to be achieved through the development of theproductive forces, although many socialists classify socialism aspublic ownershiporcooperative ownershipof the means of production, reserving common ownership for whatKarl MarxandFriedrich Engelstermed "upper-stage communism",[25]or what other socialist theoreticians, such asVladimir Lenin,[26]Emma Goldman,[27]andPeter Kropotkin,[28]simply termed "communism". 
From Marxist and anarchist analyses, a society based on a superabundance of goods and common ownership of the means of production would be devoid of classes based on ownership of productive property.[29][27] Common ownership in a hypotheticalcommunist societyis often distinguished fromprimitive communism, in that communist common ownership is the outcome of social and technological developments leading topost-scarcityand thus the elimination of material scarcity in society.[30]From 1918 until 1995, the "common ownership of the means of production, distribution and exchange" was cited inClause IVof its constitution as a goal of the BritishLabour Partyand was quoted on the back of its membership cards. The clause read: To secure for the workers by hand or by brain the full fruits of their industry and the most equitable distribution thereof that may be possible upon the basis of the common ownership of the means of production, distribution and exchange, and the best obtainable system of popular administration and control of each industry or service.[31] Inantitrusteconomics, common ownership describes a situation in which largeinvestorsownsharesin several firms that compete within the sameindustry. As a result of this overlapping ownership, these firms may have reduced incentives to compete against each other because they internalize the profit-reducing effect that their competitive actions have on each other. The theory was first developed byJulio Rotembergin 1984.[32]Several empirical contributions document the growing importance of common ownership and provide evidence to support the theory.[33]Because of concern about these anticompetitive effects, common ownership has "stimulated a major rethinking of antitrust enforcement".[34]Several government departments and intergovernmental organizations, such as theUnited States Department of Justice,[35]theFederal Trade Commission,[36]theEuropean Commission,[37]and theOECD,[38]have acknowledged concerns about the effects of common ownership on lessening productmarket competition. Neoclassical economic theoryanalyzes common ownership usingcontract theory. According to theincomplete contractingapproach pioneered byOliver Hartand his co-authors, ownership matters because the owner of an asset has residual control rights.[39][40]This means that the owner can decide what to do with the asset in every contingency not covered by a contract. In particular, an owner has stronger incentives to make relationship-specific investments than a non-owner, so ownership can ameliorate thehold-up problem. As a result, ownership is a scarce resource (i.e. there are limits to how much they can invest) that should not be wasted. 
In particular, a central result of the property rights approach says that joint ownership is suboptimal.[41] Starting from joint ownership (where each party has veto power over the use of the asset) and moving to a situation in which there is a single owner improves the investment incentives of the new owner while leaving the investment incentives of the other parties unchanged; however, in the basic incomplete contracting framework, the suboptimality of joint ownership holds only if the investments are in human capital, while joint ownership can be optimal if the investments are in physical capital.[42] Several authors have shown that joint ownership can actually be optimal even if investments are in human capital.[43] In particular, joint ownership can be optimal if the parties are asymmetrically informed,[44] if there is a long-term relationship between the parties,[45] or if the parties have know-how that they may disclose.[46]
https://en.wikipedia.org/wiki/Common_ownership
The open-source software movement is a social movement that supports the use of open-source licenses for some or all software, as part of the broader notion of open collaboration.[1] The open-source movement was started to spread the concept of open-source software. Programmers who support the open-source-movement philosophy contribute to the open-source community by voluntarily writing and exchanging programming code for software development.[2] The term open source requires that no one discriminate against a group by withholding the edited code from it or hinder others from editing their already-edited work. This approach to software development allows anyone to obtain and modify open-source code. These modifications are distributed back to the developers within the open-source community of people who are working with the software. In this way, the identities of all individuals participating in code modification are disclosed and the transformation of the code is documented over time.[3] This method makes it difficult to establish ownership of a particular bit of code but is in keeping with the open-source-movement philosophy. These goals promote the production of high-quality programs as well as working cooperatively with other similarly minded people to improve open-source technology.[2] The label open source was created and adopted by a group of people in the free software movement at a strategy session[4] held at Palo Alto, California, in reaction to Netscape's January 1998 announcement of a source-code release for Navigator. One of the reasons behind using the term was that "the advantage of using the term open source is that the business world usually tries to keep free technologies from being installed."[5] Those people who adopted the term used the opportunity before the release of Navigator's source code to free themselves of the ideological and confrontational connotations of the term "free software". Later in February 1998, Bruce Perens and Eric S. Raymond founded an organization called the Open Source Initiative (OSI) "as an educational, advocacy, and stewardship organization at a cusp moment in the history of that culture."[6] In the early days of computing, a difference between hardware and software did not exist. The user and programmer of a computer were one and the same. When the first commercial electronic computer was introduced by IBM in 1952, the machine was hard to maintain and expensive. Putting the price of the machine aside, it was the software that caused the problems of owning one of these computers. Then, in 1952, the owners of these computers got together and created a set of tools; this collaboration of people was a group called PACT (the Project for the Advancement of Coding Techniques). After passing this hurdle, in 1956, the Eisenhower administration decided to put restrictions on the types of sales AT&T could make. This did not stop the inventors from developing new ideas of how to bring the computer to the mass population. The next step was making the computer more affordable, which slowly developed through different companies. Then they had to develop software that would host multiple users. MIT's Computation Center developed one of the first such systems, CTSS (the Compatible Time-Sharing System).
This laid the foundation for many more systems, and for what we now call the open-source software movement.[7] The open-source movement branched from the free software movement, which began in 1983 with the launch of the GNU project by Richard Stallman.[8] Stallman is regarded within the open-source community as sharing a key role in the conceptualization of freely shared source code for software development.[3] The term "free software" in the free software movement is meant to imply freedom of software exchange and modification; it does not refer to any monetary freedom.[3] Both the free-software movement and the open-source movement share this view of the free exchange of programming code, and this is often why both of the movements are sometimes referenced in literature as part of the FOSS ("Free and Open Source Software") or FLOSS ("Free/Libre Open-Source Software") communities. However, the movements differ fundamentally in their view of open software. The main, factionalizing difference between the groups is the relationship between open-source and proprietary software. Makers of proprietary software, such as Microsoft, may make efforts to support open-source software to remain competitive.[9] Members of the open-source community are willing to coexist with the makers of proprietary software[3] and feel that the issue of whether software is open source is a matter of practicality.[10] In contrast, members of the free-software community maintain the vision that all software is a part of freedom of speech[3] and that proprietary software is unethical and unjust.[3] The free-software movement openly champions this belief through talks that denounce proprietary software. As a whole, the community refuses to support proprietary software. Further, there are external motivations for these developers. One motivation is that, when a programmer fixes a bug or makes a program, it benefits others in an open-source environment. Another motivation is that a programmer can work on multiple projects that they find interesting and enjoyable. Programming in the open-source world can also lead to commercial job offers or entrance into the venture capital community. These are just a few reasons why open-source programmers continue to create and advance software.[11] While cognizant of the fact that both the free-software movement and the open-source movement share similarities in practical recommendations regarding open source, the free-software movement fervently continues to distinguish itself from the open-source movement entirely.[12] The free-software movement maintains that it has fundamentally different attitudes towards the relationship between open-source and proprietary software. The free-software community does not view the open-source community as its target grievance, however; its target grievance is proprietary software itself.[3] The open-source movement has faced a number of legal challenges. Companies that manage open-source products have some difficulty securing their trademarks. For example, the scope of the "implied license" conjecture remains unclear and can compromise an enterprise's ability to patent productions made with open-source software. Another example is the case of companies offering add-ons for purchase; licensees who make additions to the open-source code that are similar to those for purchase may have immunity from patent suits. In the court case "Jacobsen v.
Katzer", the plaintiff sued the defendant for failing to put the required attribution notices in his modified version of the software, thereby violating license. The defendant claimed Artistic License in not adhering to the conditions of the software's use, but the wording of the attribution notice decided that this was not the case. "Jacobsen v Katzer" established open-source software's equality to proprietary software in the eyes of the law. In a court case accusing Microsoft of being a monopoly, Linux and open-source software was introduced in court to prove that Microsoft had valid competitors and was grouped in withApple.[citation needed] There are resources available for those involved open-source projects in need of legal advice. TheSoftware Freedom Law Centerfeatures a primer on open-source legal issues. International Free and Open Source Software Law Review offers peer-reviewed information for lawyers on free-software issues. TheOpen Source Initiative(OSI) was instrumental in the formalization of the open-source movement. The OSI was founded by Eric Raymond and Bruce Perens in February 1998 with the purpose of providing general education and advocacy of the open-source label through the creation of the Open Source Definition that was based on the Debian Free Software Guidelines. The OSI has become one of the main supporters and advocators of the open-source movement.[6] In February 1998, the open-source movement was adopted, formalized, and spearheaded by the Open Source Initiative (OSI), an organization formed to market software "as something more amenable to commercial business use"[3]The OSI applied to register "Open Source" with the US Patent and Trademark Office, but was denied due to the term being generic and/or descriptive. Consequently, the OSI does not own the trademark "Open Source" in a national or international sense, although it does assert common-law trademark rights in the term.[2]The main tool they adopted for this wasThe Open Source Definition.[13] The open-source label was conceived at a strategy session that was held on February 3, 1998 in Palo Alto, California and on April 8 of the same year, the attendees of Tim O’Reilly's Free Software Summit voted to promote the use of the termopen source.[6] Overall, the software developments that have come out of the open-source movement have not been unique to the computer-science field, but they have been successful in developing alternatives to propriety software. Members of the open-source community improve upon code and write programs that can rival much of the propriety software that is already available.[3] The rhetorical discourse used in open-source movements is now being broadened to include a larger group of non-expert users as well as advocacy organizations. Several organized groups such as the Creative Commons and global development agencies have also adopted the open-source concepts according to their own aims and for their own purposes.[14] The factors affecting the open-source movement's legal formalization are primarily based on recent political discussion over copyright, appropriation, and intellectual property.[15] Historically, researchers have characterizedopen-sourcecontributors as a centralized, onion-shaped group.[16]The center of the onion consists of the core contributors who drive the project forward through large amounts ofcodeand software design choices. The second-most layer are contributors who respond topull requestsandbugreports. 
The third layer out consists of contributors who mainly submit bug reports. The farthest-out layer consists of those who watch the repository and users of the software that is generated. This model has been used in research to understand the lifecycle of open-source software, to understand contributors to open-source software projects, to examine how tools such as GitHub can help contributors at the various levels of involvement in the project, and to further understand how the distributed nature of open-source software may affect the productivity of developers.[17][18][19] Some researchers have disagreed with this model. Crowston et al.'s work has found that some teams are much less centralized and follow a more distributed workflow pattern.[17] The authors report that there is a weak correlation between project size and centralization, with smaller projects being more centralized and larger projects showing less centralization. However, the authors only looked at bug reporting and fixing, so it remains unclear whether this pattern is only associated with bug finding and fixing or whether centralization becomes more distributed with size for every aspect of the open-source paradigm. An understanding of a team's centralized versus distributed nature is important, as it may inform tool design and aid new developers in understanding a team's dynamic. One concern with open-source development is the high turnover rate of developers, even among core contributors (those at the center of the "onion").[20] In order to continue an open-source project, new developers must continually join but must also have the necessary skill set to contribute quality code to the project. Through a study of GitHub contributions to open-source projects, Middleton et al. found that the largest predictor of contributors becoming full-fledged members of an open-source team (moving to the "core" of the "onion") was whether they submitted and commented on pull requests. The authors then suggest that GitHub, as a tool, can aid in this process by supporting "checkbox" features on a team's open-source project that urge contributors to take part in these activities.[19] The open-source community has long recognized the importance of engaging younger generations to ensure the sustainability and innovation of open-source projects. However, concerns have been raised about the aging demographic of open-source contributors and the challenges of attracting younger developers. In 2010, James Bottomley, a prominent Linux kernel maintainer, observed the "graying" of the Linux kernel community, a trend that continues today. David Nalley, president of the Apache Software Foundation (ASF), emphasized that maintaining legacy code is often unappealing to younger developers, who prefer to work on new and innovative projects.[21] While contributing to open-source projects can provide valuable experience in development, documentation, internationalization, and other areas, barriers to entry often make it difficult for newcomers, particularly younger individuals, to get involved. These challenges include technical, psychological, and motivational factors.[22] To address these challenges, initiatives like the Linux Kernel Mentorship Program aim to recruit and train new developers. The LFX Mentorship Program also seeks to sponsor and mentor the next generation of open-source developers and leaders across various projects.[23] With the growth of and attention on the open-source movement, the reasons and motivations of programmers for creating code for free have been under investigation.
In a paper from the 15th Annual Congress of the European Economic Association on the open-source movement, the incentives of programmers on an individual level, as well as on a company or network level, were analyzed. What is essentially the intellectual gift giving of talented programmers challenges the "self-interested-economic-agent paradigm",[24] and has made both the public and economists search for an understanding of what the benefits are for programmers.

The vast majority of programmers in open-source communities are male. In a 2006 study for the European Union on free and open-source software communities, researchers found that only 1.5% of all contributors are female.[28] Although women are generally underrepresented in computing, the percentage of women in tech professions is actually much higher, close to 25%.[29] This discrepancy suggests that female programmers are overall less likely than male programmers to participate in open-source projects. Some research and interviews with members of open-source projects have described a male-dominated culture within open-source communities that can be unwelcoming or hostile towards females.[30] There are initiatives such as Outreachy that aim to support more women and people of other underrepresented gender identities in participating in open-source software. However, within the discussion forums of open-source projects the topic of gender diversity can be highly controversial and even inflammatory.[30] A central vision in open-source software is that because the software is built and maintained on the merit of individual code contributions, open-source communities should act as a meritocracy.[31] In a meritocracy, the importance of an individual in the community depends on the quality of their individual contributions and not on demographic factors such as age, race, religion, or gender. Thus proposing changes to the community based on gender, for example to make the community more inviting towards females, goes against the ideal of a meritocracy by targeting certain programmers by gender and not based on their skill alone.[30]

There is evidence that gender does impact a programmer's perceived merit in the community. A 2016 study identified the gender of over one million programmers on GitHub by linking the programmers' GitHub accounts to their other social media accounts.[32] Between male and female programmers, the researchers found that female programmers were actually more likely to have their pull requests accepted into the project than male programmers, but only when the female programmer had a gender-neutral profile. When females had profiles with a name or image that identified them as female, they were less likely than male programmers to have their pull requests accepted. Another study, in 2015, found that among open-source projects on GitHub, gender diversity was a significant positive predictor of a team's productivity, meaning that open-source teams with a more even mix of different genders tended to be more highly productive.[31]

Many projects have adopted the Contributor Covenant code of conduct in an attempt to address concerns of harassment of minority developers. Anyone found breaking the code of conduct can be disciplined and ultimately removed from the project. In order to avoid giving offense to minorities, many software projects have started to mandate the use of inclusive language and terminology.[33]

Libraries are using open-source software to develop information as well as library services.
The purpose of open source is to provide software that is cheaper, more reliable, and of better quality; the feature that makes this software so sought after is that it is free. Libraries in particular benefit from the movement because of the resources it provides, and they promote the same ideals of learning and understanding new information through the resources of other people. Open source fosters a sense of community: it is an invitation for anyone to contribute information about various topics. Open-source tools even allow libraries to create web-based catalogs. According to one IT source, there are various library programs that benefit from this.[34]

Government agencies and infrastructure software— Government agencies are utilizing open-source infrastructure software, such as the Linux operating system and the Apache web server, to manage information.[35] In 2005, a new government lobby was launched under the name National Center for Open Source Policy and Research (NCOSPR), "a non-profit organization promoting the use of open source software solutions within government IT enterprises."[36]

Open-source movement in the military— The open-source movement has the potential to help the military. Open-source software allows anyone to make changes that will improve it, an invitation for people to put their minds together to grow a piece of software in a cost-efficient manner. The military is interested because this software could increase speed and flexibility. Although there are security drawbacks to the idea, since anyone has access to change the software, the advantages can outweigh the disadvantages, and the fact that open-source programs can be modified quickly is crucial. A support group was formed to test these ideas: the Military Open Source Software Working Group was organized in 2009 and held over 120 military members. Its purpose was to bring together software developers and contractors from the military to discover new ideas for reuse and collaboration. Overall, open-source software in the military is an intriguing idea with potential drawbacks, but these are not enough to offset the advantages.[37]

Open source in education— Colleges and organizations use software, predominantly online, to educate their students. Open-source technology is being adopted by many institutions because it can save them from paying companies to provide administrative software systems. One of the first major colleges to adopt an open-source system was Colorado State University in 2009, with many others following after that. Colorado State University's system was produced by the Kuali Foundation, which has become a major player in open-source administrative systems. The Kuali Foundation defines itself as a group of organizations that aims to "build and sustain open-source software for higher education, by higher education."[38] There are many other examples of open-source instruments being used in education besides the Kuali Foundation.[citation needed]

"For educators, The Open Source Movement allowed access to software that could be used in teaching students how to apply the theories they were learning".[39] With open networks and software, teachers are able to share lessons, lectures, and other course materials within a community.
OpenTechComm is a program that is dedicated to "open to access, open to use, and open to edit — textbook or pedagogical resource that teachers of technical and professional communication courses at every level can rely on to craft free offerings to their students."[40] As stated earlier, access to programs like this would be much more cost-efficient for educational departments.

Open source in healthcare— The nonprofit eHealthNigeria, created in June 2009, uses the open-source software OpenMRS to document health care in Nigeria. The use of this software began in Kaduna, Nigeria, to serve public health needs. OpenMRS provides features such as alerting health care workers when patients show warning signs for conditions, and it records births and deaths daily, among other functions. The success of this software is due to its ease of use for those first being introduced to the technology, compared with the more complex proprietary healthcare software available in first-world countries. This software is community-developed and can be used freely by anyone, as is characteristic of open-source applications. So far, OpenMRS is being used in Rwanda, Mozambique, Haiti, India, China, and the Philippines.[41] The impact of open source in healthcare can also be observed at Apelon Inc, the "leading provider of terminology and data interoperability solutions". Recently, its Distributed Terminology System (Open DTS) began supporting the open-source MySQL database system. This essentially allows open-source software to be used in healthcare, lessening the dependence on expensive proprietary healthcare software. Thanks to open-source software, the healthcare industry has a free open-source solution available for implementing healthcare standards. Not only does open source benefit healthcare economically, but the reduced dependence on proprietary software also allows for easier integration of various systems, regardless of the developer.[42]

IBM did not originally embrace open-source software; through the 1990s the company held to intellectual property and other proprietary approaches to software.[43] It was only when IBM was challenged by the evolving competitive market, specifically by Microsoft, that it decided to invest more of its resources in open-source software. Since then, its focus has shifted toward customer service and more robust software support.[43] IBM has been a leading proponent of the Open Source Initiative, and began supporting Linux in 1998.[44] As another example, IBM decided to make the Eclipse IDE (integrated development environment) open source, prompting other companies to release their own IDEs in response to Eclipse's popularity and market reach.[45]

Before the summer of 2008, Microsoft had generally been known as an enemy of the open-source community.[citation needed] The company's anti-open-source sentiment was reinforced by former CEO Steve Ballmer, who referred to Linux, a widely used piece of open-source software, as a "cancer that attaches itself ... to everything it touches."[46] Microsoft also threatened to charge royalties over claims that Linux violated 235 of its patents. In 2004, Microsoft lost a European Union court case,[47] lost the appeal in 2007,[48] and lost a further appeal in 2012,[49] being found to have abused its dominant position.
Specifically, it had withheld interoperability information from the open-source Samba (software) project, which runs on many platforms and aims at "removing barriers to interoperability".[This quote needs a citation] In 2008, however, Sam Ramji, then head of open-source-software strategy at Microsoft, began working closely with Bill Gates to develop a pro-open-source attitude within the software industry as well as within Microsoft itself. Before leaving the company in 2009, Ramji built Microsoft's familiarity and involvement with open source, which is evident in Microsoft's contributions of open-source code to Microsoft Azure, among other projects. Such contributions would previously have been unimaginable from Microsoft.[50] Microsoft's change in attitude toward open source, and its efforts to build a stronger open-source community, are evidence of the growing adoption and adaptation of open source.[51]
https://en.wikipedia.org/wiki/Motivations_of_open_source_programmers
Aplanned economyis a type ofeconomic systemwhereinvestment,productionand the allocation ofcapital goodstakes place according to economy-wide economic plans and production plans. A planned economy may usecentralized,decentralized,participatoryorSoviet-typeforms ofeconomic planning.[1][2]The level ofcentralizationordecentralizationin decision-making and participation depends on the specific type of planning mechanism employed.[3] Socialist statesbased on the Soviet model have used central planning, although a minority such as the formerSocialist Federal Republic of Yugoslaviahave adopted some degree ofmarket socialism.Market abolitionistsocialism replacesfactor marketswith direct calculation as the means to coordinate the activities of the varioussocially ownedeconomic enterprises that make up the economy.[4][5][6]More recent approaches to socialist planning and allocation have come from some economists and computer scientists proposing planning mechanisms based on advances in computer science and information technology.[7] Planned economies contrast withunplanned economies, specificallymarket economies, where autonomous firms operating inmarketsmake decisions about production, distribution, pricing and investment. Market economies that useindicative planningare variously referred to asplanned market economies,mixed economiesandmixed market economies. Acommand economyfollows anadministrative-command systemand uses Soviet-type economic planning which was characteristic of the formerSoviet UnionandEastern Blocbefore most of these countries converted to market economies. This highlights the central role of hierarchical administration and public ownership of production in guiding the allocation of resources in these economic systems.[8][9][10] In theHellenisticand post-Hellenistic world, "compulsory state planning was the most characteristic trade condition for theEgyptiancountryside, forHellenistic India, and to a lesser degree the more barbaric regions of theSeleucid, thePergamenian, the southernArabian, and theParthianempires".[11]Scholars have argued that theIncaneconomy was a flexible type of command economy, centered around the movement and utilization of labor instead of goods.[12]One view ofmercantilismsees it as involving planned economies.[13] The Soviet-style planned economy in Soviet Russia evolved in the wake of a continuing existingWorld War Iwar-economyas well as other policies, known aswar communism(1918–1921), shaped to the requirements of theRussian Civil Warof 1917–1923. These policies began their formal consolidation under an official organ of government in 1921, when the Soviet government foundedGosplan. However, the period of theNew Economic Policy(c.1921toc.1928) intervened before the planned system of regularfive-year plansstarted in 1928.Leon Trotskywas one of the earliest proponents of economic planning during theNEPperiod.[14][15][16]Trotsky argued thatspecialization, the concentration ofproductionand the use of planning could "raise in the near future thecoefficientofindustrial growthnot only two, but even three times higher than thepre-war rateof 6% and, perhaps, even higher".[17]According to historianSheila Fitzpatrick, the scholarly consensus was thatStalinappropriated the position of theLeft Oppositionon such matters asindustrialisationandcollectivisation.[18] AfterWorld War II(1939–1945) France and Great Britain practiceddirigisme– government direction of the economy through non-coercive means. 
The Swedish government planned public-housing models in a similar fashion asurban planningin a project calledMillion Programme, implemented from 1965 to 1974. Some decentralized participation in economic planning occurred across Revolutionary Spain, most notably in Catalonia, during theSpanish Revolution of 1936.[19][20] In the May 1949 issue of theMonthly Reviewtitled "Why Socialism?",Albert Einsteinwrote:[21] I am convinced there is only one way to eliminate these grave evils, namely through the establishment of a socialist economy, accompanied by an educational system which would be oriented toward social goals. In such an economy, the means of production are owned by society itself and are utilized in a planned fashion. A planned economy, which adjusts production to the needs of the community, would distribute the work to be done among all those able to work and would guarantee a livelihood to every man, woman, and child. The education of the individual, in addition to promoting his own innate abilities, would attempt to develop in him a sense of responsibility for his fellow-men in place of the glorification of power and success in our present society. Whilesocialismis not equivalent to economic planning or to the concept of a planned economy, an influential conception of socialism involves the replacement of capital markets with some form of economic planning in order to achieveex-antecoordination of the economy. The goal of such an economic system would be to achieve conscious control over the economy by the population, specifically so that the use of thesurplus productis controlled by the producers.[22]The specific forms of planning proposed for socialism and their feasibility are subjects of thesocialist calculation debate. In 1959Anatoly Kitovproposed a distributed computing system (Project "Red Book",Russian:Красная книга) with a focus on the management of the Soviet economy. Opposition from theDefence Ministrykilled Kitov's plan.[23] In 1971 the socialistAllende administrationof Chile launchedProject Cybersynto install a telex machine in every corporation and organization in the economy for the communication of economic data between firms and the government. The data was also fed into a computer-simulated economy for forecasting. A control room was built for real-time observation and management of the overall economy. The prototype-stage of the project showed promise when it was used to redirect supplies around a trucker's strike,[24]but after CIA-backedAugusto Pinochetled acoup in 1973that established amilitary dictatorshipunder his rule the program was abolished and Pinochet moved Chile towards a moreliberalizedmarket economy. In their bookTowards a New Socialism(1993), the computer scientistPaul Cockshottfrom theUniversity of Glasgowand the economist Allin Cottrell fromWake Forest Universityclaim to demonstrate how a democratically planned economy built on modern computer technology is possible and drives the thesis that it would be both economically more stable than the free-market economies and also morally desirable.[7] The use of computers to coordinate production in an optimal fashion has been variously proposed forsocialist economies. 
The Polish economistOskar Lange(1904–1965) argued that the computer is more efficient than the market process at solving the multitude of simultaneous equations required for allocating economic inputs efficiently (either in terms of physical quantities or monetary prices).[25] In the Soviet Union,Anatoly Kitovhad proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers in 1959.[26]Kitov's proposal was rejected, as later was the 1962OGASeconomy management network project.[27]Sovietcybernetician,Viktor Glushkovargued that his OGAS information network would have delivered a fivefoldsavings returnfor theSoviet economyover the first fifteen-year investment.[28] Salvador Allende's socialist government pioneered the 1970 Chilean distributeddecision support systemProject Cybersynin an attempt to move towards a decentralized planned economy with theexperimental viable system modelof computed organisational structure of autonomous operative units through analgedonic feedbacksetting and bottom-up participative decision-making in the form ofparticipative democracyby the Cyberfolk component.[29] Supporters of a planned economy argue that the government can harnessland,labor, andcapitalto serve the economic objectives of the state. Consumer demand can be restrained in favor of greater capital investment for economic development in a desired pattern. In international comparisons, supporters of a planned economy have said that state-socialist nations have compared favorably with capitalist nations in health indicators such as infant mortality and life expectancy. However, according toMichael Ellman, the reality of this, at least regarding infant mortality, varies depending on whether official Soviet orWHOdefinitions are used.[30] The state can begin building massive heavy industries at once in an underdeveloped economy without waiting years for capital to accumulate through the expansion of light industry and without reliance on external financing. This is what happened in the Soviet Union during the 1930s when the government forced the share ofgross national incomededicated to private consumption down from 80% to 50%. As a result of this development, the Soviet Union experienced massive growth in heavy industry, with a concurrent massive contraction of its agricultural sector due to the labor shortage.[31] Studies of command economies of theEastern Blocin the 1950s and 1960s by both American and Eastern European economists found that contrary to the expectations of both groups they showed greater fluctuations inoutputthan market economies during the same period.[32] Critics of planned economies argue that planners cannot detect consumer preferences, shortages and surpluses with sufficient accuracy and therefore cannot efficiently co-ordinate production (in amarket economy, afree price systemis intended to serve this purpose). 
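As a minimal, purely illustrative sketch of the kind of "simultaneous equations" Lange had in mind, the Python snippet below solves a two-sector Leontief-style input–output balance, a standard textbook example of plan calculation rather than Lange's own formulation; the technology matrix and the final-demand figures are invented. The critics discussed next would object that the real difficulty lies in obtaining the preference and local information that such a calculation presupposes.

    # Toy plan balance: gross output x must cover both intermediate use A·x and
    # final demand d, i.e. x = A x + d, so x = (I - A)^(-1) d.
    # The technology matrix A and demand vector d are invented for illustration.
    import numpy as np

    A = np.array([[0.2, 0.3],     # units of good 0 used per unit of goods 0 and 1
                  [0.1, 0.4]])    # units of good 1 used per unit of goods 0 and 1
    d = np.array([100.0, 50.0])   # planned final demand for goods 0 and 1

    x = np.linalg.solve(np.eye(2) - A, d)   # gross outputs required by the plan
    print(x)                                # approximately [166.67, 111.11]

A full-scale plan would involve thousands of goods and constraints, which is precisely why proponents from Lange to Cockshott and Cottrell have pointed to computing power, and why critics question whether the required data can be gathered at all.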
This coordination difficulty was notably written about by economists Ludwig von Mises and Friedrich Hayek, who referred to subtly distinct aspects of the problem as the economic calculation problem and the local knowledge problem, respectively.[33][34] These distinct aspects were also present in the economic thought of Michael Polanyi.[35] Whereas the former traced the theoretical underpinnings of a market economy to subjective value theory while attacking the labor theory of value, the latter argued that because individuals have constantly changing hierarchies of needs, and are the only ones who know their own particular circumstances, the only way to satisfy them is to allow those with the most knowledge of their needs to use their resources in a competing marketplace, meeting the needs of the most consumers most efficiently. This phenomenon is recognized as spontaneous order. Additionally, on this view, misallocation of resources naturally ensues when capital is redirected away from individuals with direct knowledge and channelled instead into markets where a coercive monopoly influences behavior while ignoring market signals. According to Tibor Machan, "[w]ithout a market in which allocations can be made in obedience to the law of supply and demand, it is difficult or impossible to funnel resources with respect to actual human preferences and goals".[36]

Historian Robert Vincent Daniels regarded the Stalinist period as an abrupt break with Lenin's government in terms of economic planning, in which a deliberative, scientific system of planning that featured former Menshevik economists at Gosplan was replaced with a hasty version of planning with unrealistic targets, bureaucratic waste, bottlenecks and shortages. Daniels also attributed the stagnant levels of efficiency and quality to Stalin's formulation of national plans in terms of physical quantities of output.[37]

Economist Robin Hahnel, who supports participatory economics, a form of socialist decentralized planned economy, notes that even if central planning overcame its inherent inhibitions of incentives and innovation, it would nevertheless be unable to maximize economic democracy and self-management, which he believes are concepts that are more intellectually coherent, consistent and just than mainstream notions of economic freedom.[38] Furthermore, Hahnel states: Combined with a more democratic political system, and redone to closer approximate a best case version, centrally planned economies no doubt would have performed better. But they could never have delivered economic self-management, they would always have been slow to innovate as apathy and frustration took their inevitable toll, and they would always have been susceptible to growing inequities and inefficiencies as the effects of differential economic power grew. Under central planning neither planners, managers, nor workers had incentives to promote the social economic interest. Nor did impeding markets for final goods to the planning system enfranchise consumers in meaningful ways. But central planning would have been incompatible with economic democracy even if it had overcome its information and incentive liabilities.
And the truth is that it survived as long as it did only because it was propped up by unprecedented totalitarian political power.[38] Planned economies contrast with command economies in that a planned economy is "an economic system in which the government controls and regulates production, distribution, prices, etc."[39]whereas a command economy necessarily has substantial public ownership of industry while also having this type of regulation.[40]In command economies, important allocation decisions are made by government authorities and are imposed by law.[41] This is contested by someMarxists.[5][42]Decentralized planninghas been proposed as a basis forsocialismand has been variously advocated byanarchists,council communists,libertarian Marxistsand otherdemocraticandlibertariansocialists who advocate a non-market form of socialism, in total rejection of the type of planning adopted in theeconomy of the Soviet Union.[43] Most of a command economy is organized in a top-down administrative model by a central authority, where decisions regarding investment and production output requirements are decided upon at the top in thechain of command, with little input from lower levels. Advocates of economic planning have sometimes been staunch critics of these command economies.Leon Trotskybelieved that those at the top of the chain of command, regardless of their intellectual capacity, operated without the input and participation of the millions of people who participate in the economy and who understand/respond to local conditions and changes in the economy. Therefore, they would be unable to effectively coordinate all economic activity.[44] Historians have associated planned economies withMarxist–Leninist statesand theSoviet economic model. Since the 1980s, it has been contested that the Soviet economic model did not actually constitute a planned economy in that a comprehensive and binding plan did not guide production and investment.[45]The further distinction of anadministrative-command systememerged as a new designation in some academic circles for the economic system that existed in the formerSoviet UnionandEastern Bloc, highlighting the role of centralized hierarchical decision-making in the absence of popular control over the economy.[46]The possibility of a digital planned economy was explored in Chile between 1971 and 1973 with the development ofProject Cybersynand byAleksandr Aleksandrovich Kharkevich, head of the Department of Technical Physics in Kiev in 1962.[47][48] While both economic planning and a planned economy can be either authoritarian ordemocraticandparticipatory,democratic socialistcritics argue that command economies under modern-day communism is highly undemocratic and totalitarian in practice.[49][50]Indicative planningis a form of economic planning in market economies that directs the economy through incentive-based methods. Economic planning can be practiced in a decentralized manner through different government authorities. In some predominantly market-oriented and Western mixed economies, the state utilizes economic planning in strategic industries such as the aerospace industry. Mixed economies usually employmacroeconomicplanning while micro-economic affairs are left to the market and price system. 
A decentralized-planned economy, occasionally called a horizontally planned economy due to its horizontalism, is a type of planned economy in which the investment and allocation of consumer and capital goods is carried out according to an economy-wide plan that is built and operatively coordinated through a distributed network of disparate economic agents, or even of the production units themselves. Decentralized planning is usually held in contrast to centralized planning, in particular the Soviet-type economic planning of the Soviet Union's command economy, where economic information is aggregated and used to formulate a plan for production, investment and resource allocation by a single central authority. Decentralized planning can take shape both in the context of a mixed economy and in a post-capitalist economic system. This form of economic planning implies some process of democratic and participatory decision-making within the economy and within firms themselves, in the form of industrial democracy. Computer-based forms of democratic economic planning and coordination between economic enterprises have also been proposed by various computer scientists and radical economists.[25][7][24] Proponents present decentralized and participatory economic planning as an alternative to market socialism for a post-capitalist society.[52]

Decentralized planning has been a feature of anarchist and socialist economics. Variations of decentralized planning such as economic democracy, industrial democracy and participatory economics have been promoted by various political groups, most notably anarchists, democratic socialists, guild socialists, libertarian Marxists, libertarian socialists, revolutionary syndicalists and Trotskyists.[44] During the Spanish Revolution, some areas where anarchist and libertarian socialist influence through the CNT and UGT was extensive, particularly rural regions, were run on the basis of decentralized planning resembling the principles laid out by anarcho-syndicalist Diego Abad de Santillan in the book After the Revolution.[53] Trotsky had urged economic decentralisation between the state, oblast regions and factories during the NEP period to counter structural inefficiency and the problem of bureaucracy.[54]

Economist Pat Devine has created a model of decentralized economic planning called "negotiated coordination", which is based upon social ownership of the means of production by those affected by the use of the assets involved, with the allocation of consumer and capital goods made through a participatory form of decision-making by those at the most localized level of production.[55] Moreover, organizations that utilize modularity in their production processes may distribute problem solving and decision making.[56]

The planning structure of a decentralized planned economy is generally based on a consumers' council and a producers' council (or jointly, a distributive cooperative), which is sometimes called a consumers' cooperative. Producers and consumers, or their representatives, negotiate the quality and quantity of what is to be produced. This structure is central to guild socialism, participatory economics and the economic theories related to anarchism.

Some decentralized participation in economic planning has been implemented in various regions and states in India, most notably in Kerala.
Local level planning agencies assess the needs of people who are able to give their direct input through the Gram Sabhas (village-based institutions) and the planners subsequently seek to plan accordingly.[57] Some decentralized participation in economic planning has been implemented across Revolutionary Spain, most notably in Catalonia, during theSpanish Revolution of 1936.[19][20] TheUnited Nationshas developed local projects that promote participatory planning on a community level, requiring opportunities for all people to be politically involved and share in thecommunity developmentprocess.[58] The 1888 novelLooking BackwardbyEdward Bellamydepicts a fictional planned economy in a United States around the year 2000 which has become a socialist utopia. Other literary portrayals of planned economies includeYevgeny Zamyatin'sWe(1924). Case studies (Soviet-type economies)
https://en.wikipedia.org/wiki/Decentralized_planning_(economics)
Distributed manufacturing, also known as distributed production, cloud producing, distributed digital manufacturing, and local manufacturing, is a form of decentralized manufacturing practiced by enterprises using a network of geographically dispersed manufacturing facilities that are coordinated using information technology. It can also refer to local manufacture via the historic cottage industry model, or to manufacturing that takes place in the homes of consumers.

In enterprise environments, the primary attribute of distributed manufacturing is the ability to create value at geographically dispersed locations. For example, shipping costs can be minimized when products are built geographically close to their intended markets.[1] Also, products manufactured in a number of small facilities distributed over a wide area can be customized with details adapted to individual or regional tastes. Manufacturing components in different physical locations and then managing the supply chain to bring them together for final assembly of a product is also considered a form of distributed manufacturing.[2][3] Digital networks combined with additive manufacturing allow companies decentralized and geographically independent distributed production (cloud manufacturing).[4]

Within the maker movement and DIY culture, small-scale production by consumers, often using peer-to-peer resources, is also referred to as distributed manufacturing. Consumers download digital designs from an open design repository website like Youmagine or Thingiverse and produce a product at low cost through a distributed network of 3D printing services such as 3D Hubs and Geomiq. In the most distributed form of distributed manufacturing, the consumer becomes a prosumer and manufactures products at home[5] with an open-source 3-D printer such as the RepRap.[6][7] In 2013 a desktop 3-D printer could be economically justified as a personal product fabricator, and the number of free and open hardware designs was growing exponentially.[8] Today there are millions of open hardware product designs at hundreds of repositories,[9] and there is some evidence that consumers are 3-D printing to save money. For example, 2017 case studies probed the quality of (1) six common complex toys, (2) Lego blocks, and (3) the customizability of open-source board games, and found that all filaments analyzed saved the prosumer over 75% of the cost of commercially available true alternative toys, and over 90% with recyclebot filament.[10] Overall, these results indicate that a single 3D printing repository, MyMiniFactory, is saving consumers well over $60 million per year in offset purchases of toys alone.[10] These 3-D printers can now be used to make sophisticated high-value products like scientific instruments.[11][12] Similarly, a study in 2022 found that 81% of open-source designs provided economic savings, and that the total savings for the 3D printing community from downloading only the top 100 products at YouMagine is more than $35 million.[13] In general, the savings relative to conventional products are largest when prosumers use recycled materials in 'distributed recycling and additive manufacturing' (DRAM).[14]

Some authors[15][16][17] call attention to the conjunction of commons-based peer production with distributed manufacturing techniques.
The self-reinforced fantasy of a system of eternal growth can be overcome with the development of economies of scope, and here, the civil society can play an important role contributing to the raising of the whole productive structure to a higher plateau of more sustainable and customised productivity.[15]Further, it is true that many issues, problems and threats rise due to the largedemocratizationof the means of production, and especially regarding the physical ones.[15]For instance, the recyclability of advanced nanomaterials is still questioned; weapons manufacturing could become easier; not to mention the implications on counterfeiting[18]and on "intellectual property".[19]It might be maintained that in contrast to the industrial paradigm whose competitive dynamics were abouteconomies of scale, commons-based peer production and distributed manufacturing could develop economies of scope. While the advantages of scale rest on cheap global transportation, the economies of scope share infrastructure costs (intangible and tangible productive resources), taking advantage of the capabilities of the fabrication tools.[15]And followingNeil Gershenfeld[20]in that “some of the least developed parts of the world need some of the most advanced technologies”, commons-based peer production and distributed manufacturing may offer the necessary tools for thinking globally but act locally in response to certain problems and needs. As well as supporting individual personal manufacturing[21]social and economic benefits are expected to result from the development of local production economies. In particular, the humanitarian and development sector are becoming increasingly interested in how distributed manufacturing can overcome the supply chain challenges of last mile distribution.[22]Further, distributed manufacturing has been proposed as a key element in theCosmopolitan localismor cosmolocalism framework to reconfigure production by prioritizing socio-ecological well-being over corporate profits, over-production and excess consumption.[23] By localizing manufacturing, distributed manufacturing may enable a balance between two opposite extreme qualities in technology development:Low technologyandHigh tech.[24]This balance is understood as an inclusive middle, a "mid-tech", that may go beyond the two polarities, incorporating them into a higher synthesis. Thus, in such an approach, low-tech and high-tech stop being mutually exclusive. They instead become a dialectic totality. Mid-tech may be abbreviated to “both…and…” instead of “neither…nor…”. Mid-tech combines the efficiency and versatility of digital/automated technology with low-tech's potential for autonomy and resilience.[24] Distributed manufacturing (DM) is a production model that decentralizes manufacturing processes, enabling products to be designed, produced, and distributed closer to end-users. This shift from centralized production to localized networks offers advantages such as increased flexibility, cost efficiency, and local empowerment. However, it also introduces significant challenges in contracting due to the decentralized nature of roles and varying stakeholder responsibilities. Research into contracting and order processing models tailored for distributed manufacturing has highlighted the need for flexible, role-based frameworks and advanced digital tools.[25]These tools and frameworks are essential for addressing issues related to quality assurance, payment structures, legal compliance, and coordination among multiple actors. 
By addressing these challenges, contracting models for distributed manufacturing can unlock its potential for more localized, efficient, and sustainable production systems. Asystem prototypehas been developed to simplify contracting for distributed manufacturing. This tool allows buyers to manage orders across multiple manufacturers using a single interface, automating workflows to ensure clarity and accountability for everyone involved. This research was led by theInternet of Production, as part of themAkEproject (African European Maker Innovation Ecosystem), funded by the European Horizon 2020 research and innovation programme.
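The following Python sketch is a hypothetical illustration, not a description of the prototype mentioned above: the facility data, the scoring rule, and all names are invented. It shows one simple way a single-interface tool might split a buyer's order across geographically dispersed facilities, preferring low-cost sites that are close to the buyer and still have spare capacity.

    # Hypothetical sketch: greedily split one order across a distributed
    # manufacturing network.  All data, names and the scoring rule are invented.
    from dataclasses import dataclass

    @dataclass
    class Facility:
        name: str
        distance_km: float   # distance from the facility to the buyer
        capacity: int        # units the facility can still produce this period
        unit_cost: float     # production cost per unit

    def route_order(quantity, facilities):
        """Assign the order to facilities ranked by unit cost plus a nominal shipping charge."""
        plan = {}
        for f in sorted(facilities, key=lambda f: f.unit_cost + 0.01 * f.distance_km):
            if quantity == 0:
                break
            allotted = min(quantity, f.capacity)
            if allotted > 0:
                plan[f.name] = allotted
                quantity -= allotted
        if quantity > 0:
            raise ValueError("network capacity exceeded by %d units" % quantity)
        return plan

    network = [Facility("Facility A", 12, 300, 4.0),
               Facility("Facility B", 450, 500, 3.5),
               Facility("Facility C", 90, 200, 4.2)]
    print(route_order(600, network))  # {'Facility A': 300, 'Facility C': 200, 'Facility B': 100}

A real contracting system would layer the quality-assurance, payment, and legal-compliance checks discussed above on top of such a routing step; the sketch only illustrates the order-splitting idea.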
https://en.wikipedia.org/wiki/Distributed_manufacturing
A fab lab (fabrication laboratory) is a small-scale workshop offering (personal) digital fabrication.[1][2]

A fab lab is typically equipped with an array of flexible computer-controlled tools that cover several different length scales and various materials, with the aim of making "almost anything".[3] This includes prototyping and technology-enabled products generally perceived as limited to mass production. While fab labs have yet to compete with mass production and its associated economies of scale in fabricating widely distributed products, they have already shown the potential to empower individuals to create smart devices for themselves. These devices can be tailored to local or personal needs in ways that are not practical or economical using mass production.

The fab lab movement is closely aligned with the DIY movement, open-source hardware, maker culture, and the free and open-source movement, and shares philosophy as well as technology with them.

The fab lab program was initiated to broadly explore how the content of information relates to its physical representation and how an under-served community can be powered by technology at the grassroots level.[4] The program began as a collaboration between the Grassroots Invention Group and the Center for Bits and Atoms at the MIT Media Lab in the Massachusetts Institute of Technology, with a grant from the National Science Foundation (NSF, Washington, D.C.) in 2001.[5]

Vigyan Ashram in India was the first fab lab to be set up outside MIT. It was established in 2002 and received capital equipment from NSF-USA and IIT Kanpur. While the Grassroots Invention Group is no longer in the MIT Media Lab, the Center for Bits and Atoms consortium is still actively involved in continuing research in areas related to description and fabrication, but it does not operate or maintain any of the labs worldwide (with the exception of the mobile fab lab). The fab lab concept also grew out of a popular class at MIT (MAS.863) named "How To Make (Almost) Anything". The class is still offered in the fall semesters.[6]

Flexible manufacturing equipment within a fab lab can include computer-controlled tools such as laser cutters, CNC milling machines, 3D printers, and vinyl cutters.

One of the larger projects undertaken by fab labs is the creation of free community FabFi wireless networks (in Afghanistan, Kenya and the US). The first city-scale FabFi network, set up in Afghanistan, has remained in place and active for three years under community supervision and with no special maintenance. The network in Kenya, based at the University of Nairobi (UoN) and building on that experience, started to experiment with controlling service quality and providing added services for a fee to make the network cost-neutral.

Fab Academy leverages the Fab Lab network to teach hands-on digital fabrication skills.[7] Students convene at Fab Lab "Supernodes" for the 19-week course to earn a diploma and build a portfolio. In some cases, the diploma is accredited or offers academic credit.[8] The curriculum is based on MIT's rapid prototyping course MAS 863: How to Make (Almost) Anything.[9] The course is estimated to cost US$5,000, but the cost varies with location and available scholarship opportunities. All course materials are publicly archived online.

Fab City has been set up to explore innovative ways of creating the city of the future.[10] It focuses on transforming and shaping the way materials are sourced and used.
This transformation should lead to a shift in the urban model from 'PITO' to 'DIDO', that is, from 'product-in, trash-out' to 'data-in, data-out'.[11] This could eventually transform cities into self-sufficient entities by 2054, in line with the pledge that Barcelona has made.[12] The Fab City initiative links to the fab lab movement because they make use of the same human capital: Fab Cities draw on the innovative spirit of the users of the fab labs.[13]

The Green Fab Lab Network, which started in Catalonia's Green Fablab,[14] promotes environmental awareness through entrepreneurship.[15] For example, it promotes distributed recycling, in which locals recycle their plastic waste, turning locally sourced shredded plastic into items of value with fused particle fabrication/fused granular fabrication (FPF/FGF) 3D printing, which is not only a good economic option but also a good environmental one.[16][17]

A listing of all official Fab Labs is maintained by the community through the website fablabs.io.[18] As of November 2019, there were 1,830 Fab Labs in the world in total. Currently there are Fab Labs on every continent except Antarctica.
https://en.wikipedia.org/wiki/Fablab
Agift economyorgift cultureis a system of exchange wherevaluablesare not sold, but rather given without an explicit agreement for immediate or future rewards.[1]Social norms and customs govern giving a gift in a gift culture; although there is some expectation of reciprocity, gifts are not given in an explicit exchange of goods or services formoney, or some other good or service.[2]This contrasts with amarket economyorbartering, wheregoods and servicesare primarily explicitly exchanged for value received. The nature of gift economies is the subject of a foundational debate inanthropology. Anthropological research into gift economies began withBronisław Malinowski's description of theKula ring[3]in theTrobriand IslandsduringWorld War I.[4]The Kula trade appeared to be gift-like since Trobrianders would travel great distances over dangerous seas to give what were considered valuable objects without any guarantee of a return. Malinowski's debate with the French anthropologistMarcel Maussquickly established the complexity of "gift exchange" and introduced a series of technical terms such asreciprocity,inalienable possessions, and presentation to distinguish between the different forms of exchange.[5][6] According to anthropologistsMaurice Blochand Jonathan Parry, it is the unsettled relationship between market and non-market exchange that attracts the most attention. Some authors argue that gift economies build community,[7]while markets harm community relationships.[8] Gift exchange is distinguished from other forms of exchange by a number of principles, such as the form of property rights governing the articles exchanged; whether gifting forms a distinct "sphere of exchange" that can be characterized as an "economic system"; and the character of the social relationship that the gift exchange establishes. Gift ideology in highly commercialized societies differs from the "prestations" typical of non-market societies. Gift economies also differ from related phenomena, such ascommon propertyregimes and the exchange of non-commodified labour. According to anthropologist Jonathan Parry, discussion on the nature of gifts, and of a separate sphere of gift exchange that would constitute an economic system, has been plagued by theethnocentricuse of a modern, western, market society-based conception of the gift applied as if it were a universal across culture and time. However, he argues that anthropologists, through analysis of a variety of cultural and historical forms of exchange, have established that no universal practice exists.[9]Similarly, the idea of apure giftis "most likely to arise in highly differentiated societies with an advanced division of labour and a significant commercial sector" and need to be distinguished from non-market "prestations".[10]According to Weiner, to speak of a gift economy in a non-market society is to ignore the distinctive features of their exchange relationships, as the early classic debate betweenBronislaw MalinowskiandMarcel Maussdemonstrated.[5][6]Gift exchange is frequently "embedded" in political, kin, or religious institutions, and therefore does not constitute aneconomicsystem per se.[11] Gift-giving is a form of transfer of property rights over particular objects. The nature of those property rights varies from society to society, from culture to culture. They are not universal. 
The nature of gift-giving is thus altered by the type of property regime in place.[12] Property is not a thing, but a relationship amongst people about things.[13]It is a social relationship that governs the conduct of people with respect to the use and disposition of things. Anthropologists analyze these relationships in terms of a variety of actors' (individual or corporate)bundle of rightsover objects.[12]An example is the current debates aroundintellectual property rights.[14][15][16][17]Take a purchased book over which the author retains a copyright. Although the book is a commodity, bought and sold, it has not been completely alienated from its creator, who maintains a hold over it; the owner of the book is limited in what he can do with the book by the rights of the creator.[18][19]Weiner has argued that the ability to give while retaining a right to the gift/commodity is a critical feature of the gifting cultures described by Malinowski and Mauss, and explains, for example, why some gifts such as Kula valuables return to their original owners after an incredible journey around the Trobriand islands. The gifts given in Kula exchange still remain, in some respects, the property of the giver.[6] In the example used above, copyright is one of those bundled rights that regulate the use and disposition of a book. Gift-giving in many societies is complicated because private property owned by an individual may be quite limited in scope (see§ The commonsbelow).[12]Productive resources, such as land, may be held by members of a corporate group (such as a lineage), but only some members of that group may haveuse rights. When many people hold rights over the same objects, gifting has very different implications than the gifting of private property; only some of the rights in that object may be transferred, leaving that object still tied to its corporate owners. As such, these types of objects areinalienable possessions, simultaneously kept while given.[6] Malinowski's study of theKula ring[20]became the subject of debate with the French anthropologist, Marcel Mauss, author of "The Gift" ("Essai sur le don", 1925).[5]Parry argued that Malinowski emphasized the exchange of goods betweenindividuals, and their selfish motives for gifting: they expected a return of equal or greater value. Malinowski argued thatreciprocityis an implicit part of gifting, that there is no gift free of expectation.[21] In contrast, Mauss emphasized that the gifts were not between individuals, but between representatives of larger collectives. These gifts were atotal prestation,a service provided out of obligation, like community service.[22]They were not alienable commodities to be bought and sold, but, likecrown jewels, embodied the reputation, history and identity of a "corporate kin group", such as a line of kings. Given the stakes, Mauss asked "why anyone would give them away?" His answer was an enigmatic concept,the spirit of the gift.Parry believes that much of the confusion (and resulting debate) was due to a bad translation. Mauss appeared to be arguing that a return gift is given to maintain the relationship between givers; a failure to return a gift ends the relationship and the promise of any future gifts. 
Both Malinowski and Mauss agreed that in non-market societies, where there was no clear institutionalized economic exchange system, gift/prestation exchange served economic, kinship, religious and political functions that could not be clearly distinguished from each other, and which mutually influenced the nature of the practice.[21] The concept of total prestations was further developed by Annette Weiner, who revisited Malinowski's fieldsite in the Trobriand Islands. Her critique was twofold. First, Trobriand Island society is matrilineal, and women hold much economic and political power, but their exchanges were ignored by Malinowski. Secondly, she developed Mauss' argument about reciprocity and the "spirit of the gift" in terms of "inalienable possessions: the paradox of keeping while giving".[6]Weiner contrasted moveable goods, which can be exchanged, with immoveable goods that serve to draw the gifts back (in the Trobriand case, male Kula gifts with women's landed property). The goods given on the islands are so linked to particular groups that even when given away, they are not truly alienated. Such goods depend on the existence of particular kinds of kinship groups in society. French anthropologist Maurice Godelier[23]continued this analysis inThe Enigma of the Gift(1999). Albert Schrauwers argued that the kinds of societies used as examples by Weiner and Godelier (including theKula ringin the Trobriands, thePotlatchof theindigenous peoples of the Pacific Northwest Coast, and theTorajaofSouth Sulawesi,Indonesia) are all characterized by ranked aristocratic kin groups that fitClaude Lévi-Strauss' model ofHouse Societies(wherehouserefers to both noble lineage and their landed estate). Total prestations are given to preserve landed estates identified with particular kin groups and maintain their place in a ranked society.[24] Chris Gregoryargued thatreciprocityis a dyadic exchange relationship that we characterize, imprecisely, as gift-giving. Gregory argued that one gives gifts to friends and potential enemies in order to establish a relationship, by placing them in debt. He also claimed that in order for such a relationship to persist, there must be a time lag between the gift and counter-gift; one or the other partner must always be in debt. Marshall Sahlins gave birthday gifts as an example. They are separated in time so that one partner feels the obligation to make a return gift. To forget the return gift may be enough to end the relationship. Gregory stated that without a relationship of debt, there is no reciprocity, and that this is what distinguishes a gift economy from atruegift, given with no expectation of return (something Sahlinsgeneralised reciprocity;see below).[25] Marshall Sahlins, an American cultural anthropologist, identified three main types of reciprocity in his bookStone Age Economics(1972). Gift orgeneralized reciprocityis the exchange of goods and services without keeping track of their exact value, but often with the expectation that their value will balance out over time.Balanced or Symmetrical reciprocityoccurs when someone gives to someone else, expecting a fair and tangible return at a specified amount, time, and place. Market ornegative reciprocityis the exchange of goods and services where each party intends to profit from the exchange, often at the expense of the other. 
Gift economies, or generalized reciprocity, occurred within closely knit kin groups, and the more distant the exchange partner, the more balanced or negative the exchange became.[26] Jonathan Parry argued that ideologies of the "pure gift" are most likely to arise only in highly differentiated societies with an advanced division of labour and a significant commercial sector" and need to be distinguished from the non-market "prestations" discussed above.[10]Parry also underscored, using the example of charitable giving of alms in India (Dāna), that the "pure gift" of alms given with no expectation of return could be "poisonous". That is, the gift of alms embodying the sins of the giver, when given to ritually pure priests, saddled these priests with impurities of which they could not cleanse themselves. "Pure gifts", given without a return, can place recipients in debt, and hence in dependent status: the poison of the gift.[27]David Graeberpoints out that no reciprocity is expected between unequals: if you make a gift of a dollar to a beggar, he will not give it back the next time you meet. More than likely, he will ask for more, to the detriment of his status.[28]Many who are forced by circumstances to accept charity feel stigmatized. In theMoka exchangesystem of Papua New Guinea, where gift givers become political "big men", those who are in their debt and unable to repay with "interest" are referred to as "rubbish men". The French writerGeorges Bataille, inLa part Maudite, uses Mauss's argument in order to construct a theory of economy: the structure of gift is the presupposition for all possible economy. Bataille is particularly interested in the potlatch as described by Mauss, and claims that its agonistic character obliges the receiver to confirm their own subjection. Thus gifting embodies the Hegelian dipole of master and slave within the act. The relationship of new market exchange systems to indigenous non-market exchange remained a perplexing question for anthropologists.Paul Bohannanargued that the Tiv of Nigeria had threespheres of exchange, and that only certain kinds of goods could be exchanged in each sphere; each sphere had its own form of special-purpose money. However, the market and universal money allowed goods to be traded between spheres and thus damaged established social relationships.[29]Jonathan Parry andMaurice Blochargued in "Money and the Morality of Exchange" (1989), that the "transactional order" through which long-term social reproduction of the family occurs has to be preserved as separate from short-term market relations.[30]It is the long-term social reproduction of the family that is sacralized by religious rituals such baptisms, weddings and funerals, and characterized by gifting. In such situations where gift-giving and market exchange were intersecting for the first time, some anthropologists contrasted them as polar opposites. This opposition was classically expressed by Chris Gregory in his book "Gifts and Commodities" (1982). Gregory argued that: Commodity exchange is an exchange ofalienableobjects between people who are in a state of reciprocalindependencethat establishes aquantitativerelationship between theobjectsexchanged ... 
Gift exchange is an exchange ofinalienableobjects between people who are in a state of reciprocaldependencethat establishes aqualitativerelationship between thetransactors(emphasis added).[31] Gregory contrasts gift and commodity exchange according to five criteria:[32] But other anthropologists refused to see these different "exchange spheres" as such polar opposites.Marilyn Strathern, writing on a similar area in Papua New Guinea, dismissed the utility of the contrasting setup in "The Gender of the Gift" (1988).[33] Rather than emphasize how particular kinds of objects are either gifts or commodities to be traded inrestrictedspheres of exchange,Arjun Appaduraiand others began to look at how objects flowed between these spheres of exchange (i.e. how objects can be converted into gifts and then back into commodities). They refocussed attention away from the character of the human relationships formed through exchange, and placed it on "the social life of things" instead. They examined the strategies by which an object could be "singularized" (made unique, special, one-of-a-kind) and so withdrawn from the market. A marriage ceremony that transforms a purchased ring into an irreplaceable family heirloom is one example; the heirloom, in turn, makes a perfect gift. Singularization is the reverse of the seemingly irresistible process of commodification. They thus show how all economies are a constant flow of material objects that enter and leave specific exchange spheres. A similar approach is taken by Nicholas Thomas, who examines the same range of cultures and the anthropologists who write on them, and redirects attention to the "entangled objects" and their roles as both gifts and commodities.[34] Many societies have strong prohibitions against turning gifts into trade orcapitalgoods. Anthropologist Wendy James writes that among theUduk peopleof northeastAfricathere is a strong custom that any gift that crosses subclan boundaries must be consumed rather than invested.[35]: 4For example, an animal given as a gift must be eaten, not bred. However, as in the example of the Trobriand armbands and necklaces, this "perishing" may not consist of consumption as such, but of the gift moving on. In other societies, it is a matter of giving some other gift, either directly in return or to another party. To keep the gift and not give another in exchange is reprehensible. "In folk tales,"Lewis Hyderemarks, "the person who tries to hold onto a gift usually dies."[35]: 5 Daniel Everett, a linguist who studied the smallPirahã tribeof hunter-gatherers in Brazil,[36]reported that, while they are aware offood preservationusing drying, salting, and so forth, they reserve their use for items bartered outside the tribe. Within the group, when someone has a successful hunt they immediately share the abundance by inviting others to enjoy a feast. Asked about this practice, one hunter laughed and replied, "I store meat in the belly of my brother."[37][38] Carol Stack'sAll Our Kindescribes both the positive and negative sides of a network of obligation and gratitude effectively constituting a gift economy. Her narrative ofThe Flats, a poorChicagoneighborhood, tells in passing the story of two sisters who each came into a small inheritance. One sister hoarded the inheritance and prospered materially for some time, but was alienated from the community. Her marriage broke up, and she integrated herself back into the community largely by giving gifts. 
The other sister fulfilled the community's expectations, but within six weeks had nothing material to show for the inheritance but a coat and a pair of shoes.[35]: 75–76 Marcel Mauss was careful to distinguish "gift economies" (reciprocity) in market societies from the "total prestations" given in non-market societies. A prestation is a service provided out of obligation, like "community service".[22]These "prestations" bring together domains across political, religious, legal, moral and economic definitions, such that the exchange can be seen to beembeddedin non-economic social institutions. These prestations are often competitive, as in thepotlatch,Kula exchange, andMoka exchange.[39] TheMokais a highly ritualized system of exchange in theMount Hagenarea ofPapua New Guinea, that has become emblematic of the anthropological concepts of a "gift economy" and of a "big man" political system. Moka are reciprocal gifts that raise the social status of the giver if the gift is larger than one that the giver received.Mokarefers specifically to the increment in the size of the gift.[40]The gifts are of a limited range of goods, primarily pigs and scarce pearl shells from the coast. To return the same value as one has received in a moka is simply to repay a debt, strict reciprocity. Moka is the extra. To some, this represents interest on an investment. However, one is not bound to provide moka, only to repay the debt. One adds moka to the gift to increase one's prestige, and to place the receiver in debt. It is this constant renewal of the debt relationship which keeps the relationship alive; a debt fully paid off ends further interaction. Giving more than one receives establishes a reputation as a Big man, whereas the simple repayment of debt, or failure to fully repay, pushes one's reputation towards the other end of the scale, "rubbish man".[41]Gift exchange thus has a political effect; granting prestige or status to one, and a sense of debt in the other. A political system can be built out of these kinds of status relationships. Sahlins characterizes the difference between status and rank by highlighting that Big man is not a role; it is a status that is shared by many. The Big man is "not a princeofmen", but a "prince among men". The "big man" system is based on the ability to persuade, rather than command.[42] TheTorajaare anethnic groupindigenousto a mountainous region ofSouth Sulawesi, Indonesia.[43]Torajans are renowned for their elaborate funeral rites, burial sites carved into rocky cliffs, and massive peaked-roof traditional houses known astongkonanwhich are owned by noble families. Membership in a tongkonan is inherited by all descendants of its founders. Thus any individual may be a member of numerous tongkonan, as long as they contribute to its ritual events. Membership in a tongkonan carries benefits, such as the right to rent some of its rice fields.[44] Toraja funeral rites are important social events, usually attended by hundreds of people and lasting several days. The funerals are like "big men" competitions where all the descendants of a tongkonan compete through gifts of sacrificial cattle. Participants have invested cattle with others over the years, and draw on those extended networks to make the largest gift. The winner of the competition becomes the new owner of the tongkonan and its rice lands. 
They display all the cattle horns from their winning sacrifice on a pole in front of the tongkonan.[44]

The Toraja funeral differs from the "big man" system in that the winner of the "gift" exchange gains control of the tongkonan's property. It creates a clear social hierarchy between the noble owners of the tongkonan and its land, and the commoners who are forced to rent their fields from them. Since the owners of the tongkonan gain rent, they are better able to compete in the funeral gift exchanges, and their social rank is more stable than in the "big man" system.[44]

Anthropologist David Graeber argued that the great world religious traditions of charity and gift giving emerged almost simultaneously during the Axial Age (800 to 200 BCE), when coinage was invented and market economies were established on a continental basis. Graeber argues that these charity traditions emerged as a reaction against the nexus formed by coinage, slavery, military violence and the market (a "military-coinage" complex). The new world religions, including Hinduism, Judaism, Buddhism, Confucianism, Christianity, and Islam, all sought to preserve "human economies" where money served to cement social relationships rather than purchase things (including people).[45]

Charity and alms-giving are religiously sanctioned voluntary gifts given without expectation of return. However, case studies show that such gifting is not necessarily altruistic.[46]

Theravada Buddhism in Thailand emphasizes the importance of giving alms (merit making) without any intention of return (a pure gift), which is best accomplished, according to doctrine, through gifts to monks and temples. The emphasis is on the selfless gifting which "earns merit" (and a future better life) for the giver rather than on the relief of the poor or the recipient on whom the gift is bestowed. However, Bowie's research shows that this ideal form of gifting is limited to the rich who have the resources to endow temples and sponsor the ordination of monks.[47]Monks come from these same families, so this gifting doctrine has a class element. Poorer farmers place much less emphasis on merit making through gifts to monks and temples. They equally validate gifting to beggars. Poverty and famine are widespread among these poorer groups, and by validating gift-giving to beggars, they are in fact demanding that the rich see to their needs in hard times. Bowie sees this as an example of a moral economy (see below) in which the poor use gossip and reputation to resist elite exploitation and pressure them to ease their "this world" suffering.[48]

Dāna is a form of religious charity given in Hindu India. The gift is said to embody the sins of the giver (the "poison of the gift"), whom it frees of evil by transmitting it to the recipient. The merit of the gift depends on finding a worthy recipient such as a Brahmin priest. Priests are supposed to be able to digest the sin through ritual action and transmit the gift with increment to someone of greater worth. It is imperative that this be a true gift, with no reciprocity, or the evil will return. The gift is not intended to create any relationship between donor and recipient, and there should never be a return gift. Dāna thus transgresses the so-called universal "norm of reciprocity".[10]

The Children of Peace (1812–1889) were a utopian Quaker sect. Today, they are primarily remembered for the Sharon Temple, a national historic site and an architectural symbol of their vision of a society based on the values of peace, equality and social justice.
They built this ornate temple to raise money for the poor, and established the province of Ontario's first shelter for the homeless. They took a lead role in organizing the province's first co-operative, the Farmers' Storehouse, and opened the province's first credit union. The group soon found that the charity they tried to distribute from their Temple fund endangered the poor. Accepting charity was a sign of indebtedness, and the debtor could be jailed without trial at the time; this was the "poison of the gift". They thus transformed their charity fund into a credit union that loaned small sums like today's micro-credit institutions. This is an example of singularization, as money was transformed into charity in the Temple ceremony, then shifted to an alternative exchange sphere as a loan. Interest on the loan was then singularized, and transformed back into charity.[49]

Non-commodified spheres of exchange exist in relation to the market economy. They are created through the processes of singularization as specific objects are de-commodified for a variety of reasons and enter an alternative exchange sphere. It may be in opposition to the market and to its perceived greed. It may also be used by corporations as a means of creating a sense of indebtedness and loyalty in customers. Modern marketing techniques often aim at infusing commodity exchange with features of gift exchange, thus blurring the presumably sharp distinction between gifts and commodities.[50]

Market economies tend to "reduce everything – including human beings, their labor, and their reproductive capacity – to the status of commodities".[51]The rapid transfer of organ transplant technology to the third world has created a trade in organs, with sick bodies travelling to the Global South for transplants, and healthy organs from the Global South being transported to the richer Global North, "creating a kind of 'Kula ring' of bodies and body parts."[52]However, all commodities can also be singularized, or de-commodified, and transformed into gifts. In North America, it is illegal to sell organs, and citizens are enjoined to give the "gift of life" and donate their organs in an organ gift economy.[53]However, this gift economy is a "medical realm rife with potent forms of mystified commodification".[54]This multimillion-dollar medical industry requires clients to pay steep fees for the gifted organ, which creates clear class divisions between those who donate (often in the global south) and will never benefit from gifted organs, and those who can pay the fees and thereby receive a gifted organ.[53]

Unlike body organs, blood and semen have been successfully and legally commodified in the United States. Blood and semen can thus be commodified, but once consumed they are "the gift of life". Although both can be either donated or sold, both are perceived as the "gift of life", are stored in "banks", and can be collected only under strict government-regulated procedures; recipients very clearly prefer altruistically donated semen and blood. The blood and semen samples with the highest market value are those that have been altruistically donated. The recipients view semen as storing the potential characteristics of their unborn child in its DNA, and value altruism over greed.[55]Similarly, gifted blood is the archetype of a pure gift relationship because the donor is only motivated by a desire to help others.[56][57]

Engineers, scientists and software developers have created free software projects such as the Linux kernel and the GNU operating system.
They are prototypical examples for the gift economy's prominence in the technology sector, and its active role in instating the use ofpermissive free softwareandcopyleftlicenses, which allow free reuse of software and knowledge. Other examples includefile-sharing,open access,unlicensedsoftware and so on. Many retail organizations have "gift" programs meant to encourage customer loyalty to their establishments. Bird-David and Darr refer to these as hybrid "mass-gifts" which are neither gift nor commodity. They are called mass-gifts because they are given away in large numbers "free with purchase" in a mass-consumption environment. They give as an example two bars of soap in which one is given free with purchase: which is the commodity and which the gift? The mass-gift both affirms the distinct difference between gift and commodity while confusing it at the same time. As with gifting, mass-gifts are used to create a social relationship. Some customers embrace the relationship and gift whereas others reject the gift relationship and interpret the "gift" as a 50% off sale.[58] "Give-away shops", "freeshops" or "free stores" are stores where all goods are free. They are similar tocharity shops, with mostly second-hand items – only everything is available at no cost. Whether it is abook, a piece offurniture, a garment or ahouseholditem, it is all freely given away, although some operate a one-in, one-out–type policy (swap shops). The free store is a form of constructivedirect actionthat provides a shopping alternative to amonetaryframework, allowing people to exchange goods and services outside a money-based economy. The anarchist1960s counterculturalgroupThe Diggers[59]openedfree storeswhich gave away their stock, provided free food, distributed free drugs, gave away money, organized free music concerts, and performed works of political art.[60]The Diggers took their name from the originalEnglish Diggersled byGerrard Winstanley[61]and sought to create a mini-society free of money andcapitalism.[62] Burning Manis a week-long annual art and community event held in the Black Rock Desert in northernNevada, in the United States. The event is described as an experiment in community, radical self-expression, and radical self-reliance. The event forbids commerce (except for ice, coffee, and tickets to the event itself)[63]and encourages gifting.[64]Gifting is one of the event's 10 principles,[65]as participants to Burning Man (both the desert festival and the year-round global community) are encouraged to rely on a gift economy. The practice of gifting at Burning Man is also documented by the 2002 documentary filmGifting It: A Burning Embrace of Gift Economy,[66]as well as by Making Contact's radio show "How We Survive: The Currency of Giving [encore]".[64] According to the Associated Press, "Gift-giving has long been a part of marijuana culture" and has accompanied legalization in U.S. states in the 2010s.[67]Voters in theDistrict of Columbialegalized the growing ofcannabisfor personal recreational use by approvingInitiative 71in November 2014, but the 2015 "Cromnibus" Federal appropriations bills prevented the District from creating a system to allow for its commercial sale. 
Possession, growth, and use of the drug by adults is legal in the District, as is giving it away, but sale and barter of it is not, in effect attempting to create a gift economy.[68]However it ended up creating a commercial market linked to selling other objects.[69]Preceding the January, 2018 legalization of cannabis possession in Vermont without a corresponding legal framework for sales, it was expected that a similar market would emerge there.[70]For a time, people in Portland, Oregon, could only legally obtain cannabis as a gift, which was celebrated in theBurnside Burnrally.[71]For a time, a similar situation ensued after possession was legalized in California, Maine and Massachusetts.[67][72][73] Many anarchists, particularlyanarcho-primitivistsandanarcho-communists, believe that variations on a gift economy may be the key to breaking thecycle of poverty. Therefore, they often desire to refashion all of society into a gift economy. Anarcho-communists advocate a gift economy as an ideal, with neither money, nor markets, nor planning. This view traces back at least toPeter Kropotkin, who saw in the hunter-gatherer tribes he had visited the paradigm of "mutual aid".[74]In place of a market,anarcho-communists, such as those who lived in some Spanish villages in the 1930s, support a gift economy without currency, where goods and services are produced by workers and distributed in community stores where everyone (including the workers who produced them) is essentially entitled to consume whatever they want or need as payment for their production of goods and services.[75] As an intellectual abstraction, mutual aid was developed and advanced bymutualismor laborinsurancesystems and thustrade unions, and has been also used incooperativesand othercivil societymovements. Typically, mutual-aid groups are free to join and participate in, and all activities arevoluntary. Often they are structured asnon-hierarchical,non-bureaucraticnon-profit organizations, with members controlling all resources and no external financial or professional support. They are member-led and member-organized. They are egalitarian in nature, and designed to supportparticipatory democracy,equalityof member status and power, and sharedleadershipandcooperative decision-making. Members' external societal status is considered irrelevant inside the group: status in the group is conferred by participation.[76] English historianE.P. Thompsonwrote about themoral economyof the poor in the context of widespread English food riots in the English countryside in the late 18th century. Thompson claimed that these riots were generally peaceable acts that demonstrated a common political culture rooted in feudal rights to "set the price" of essential goods in the market. These peasants believed that a traditional "fair price" was more important to the community than a "free" market price and they punished large farmers who sold their surpluses at higher prices outside the village while some village members still needed produce. Thus a moral economy is an attempt to preserve an alternative exchange sphere from market penetration.[77][78]The notion of peasants with a non-capitalist cultural mentality using the market for their own ends has been linked to subsistence agriculture and the need for subsistence insurance in hard times. However, James C. Scott points out that those who provide this subsistence insurance to the poor in bad years are wealthy patrons who exact a political cost for their aid; this aid is given to recruit followers. 
The concept of moral economy has been used to explain why peasants in a number of colonial contexts, such as the Vietnam War, have rebelled.[79] Some may confuse common property regimes with gift exchange systems. The commons is the cultural and natural resources accessible to all members of a society, including natural materials such as air, water, and a habitable earth. These resources are held in common, not owned privately.[80]The resources held in common can include everything fromnatural resourcesandcommon landtosoftware.[81]The commons containspublic propertyandprivate property, over which people have certain traditional rights. When commonly held property is transformed into private property this process is called "enclosure" or "privatization". A person who has a right in, or over, common land jointly with another or others is called a commoner.[82] There are a number of important aspects that can be used to describe true commons. The first is that the commons cannot becommodified– if they are, they cease to be commons. The second aspect is that unlike private property, the commons are inclusive rather than exclusive – their nature is to share ownership as widely, rather than as narrowly, as possible. The third aspect is that the assets in commons are meant to be preserved regardless of theirreturn of capital. Just as we receive them as a shared right, so we have a duty to pass them on to future generations in at least the same condition as we received them. If we can add to their value, so much the better, but at a minimum we must not degrade them, and we certainly have no right to destroy them.[83] Free content, or free information, is any kind of functional work,artwork, or other creativecontentthat meets the definition of afree cultural work.[84]A free cultural work is one which has no significantlegalrestriction on people's freedom: Although different definitions are used, free content is legally similar if not identical toopen content. An analogy is the use of the rival termsfree software and open sourcewhich describe ideological differences rather than legal ones.[87]Free content encompasses all works in thepublic domainand also thosecopyrightedworks whoselicenseshonor and uphold the freedoms mentioned above. Because copyright law in most countries by default grants copyright holdersmonopolistic controlover their creations, copyright content must be explicitly declared free, usually by the referencing or inclusion of licensing statements from within the work. Although a work which is in the public domain because its copyright has expired is considered free, it can become non-free again if the copyright law changes.[88] Information is particularly suited to gift economies, as information is anonrival goodand can be gifted at practically no cost (zeromarginal cost).[89][90]In fact, there is often an advantage to using the same software or data formats as others, so even from a selfish perspective, it can be advantageous to give away one's information. Markus Giesler, in hisethnographyConsumer Gift System, described music downloading as a system of social solidarity based on gift transactions.[91]AsInternetaccess spread, file sharing became extremely popular among users who could contribute and receive files on line. This form of gift economy was a model for online services such asNapster, which focused on music sharing and was later sued forcopyright infringement. Nonetheless, online file sharing persists in various forms such asBitTorrentanddirect download link. 
A number of communications and intellectual property experts such as Henry Jenkins and Lawrence Lessig have described file-sharing as a form of gift exchange which provides many benefits to artists and consumers alike. They have argued that file sharing fosters community among distributors and allows for a more equitable distribution of media.

In his essay "Homesteading the Noosphere", noted computer programmer Eric S. Raymond said that free and open-source software developers have created "a 'gift culture' in which participants compete for prestige by giving time, energy, and creativity away".[92]Prestige gained as a result of contributions to source code fosters a social network for the developer; the open source community will recognize the developer's accomplishments and intelligence. Consequently, the developer may find more opportunities to work with other developers. However, prestige is not the only motivator for the giving of lines of code. An anthropological study of the Fedora community, as part of a master's study at the University of North Texas in 2010–11, found that common reasons given by contributors were "learning for the joy of learning and collaborating with interesting and smart people". Motivation for personal gain, such as career benefits, was more rarely reported. Many of those surveyed said things like, "Mainly I contribute just to make it work for me", and "programmers develop software to 'scratch an itch'".[93]The International Institute of Infonomics at the University of Maastricht in the Netherlands reported in 2002 that, in addition to the above, large corporations (IBM was specifically mentioned) also spend large annual sums employing developers specifically to contribute to open source projects. The firms' and the employees' motivations in such cases are less clear.[94]

Members of the Linux community often speak of their community as a gift economy.[95]The IT research firm IDC valued the Linux kernel at US$18 billion in 2007 and projected its value at US$40 billion in 2010.[96]The Debian distribution of the GNU/Linux operating system offers over 37,000 free open-source software packages via its AMD64 repositories alone.[97]

Collaborative works are works created by an open community. For example, Wikipedia – a free online encyclopedia – features millions of articles developed collaboratively, and almost none of its many authors and editors receive any direct material reward.[98][99]

The concept of a gift economy has played a large role in works of fiction about alternative societies, especially in works of science fiction.
https://en.wikipedia.org/wiki/Gift_economy
Here Comes Everybody: The Power of Organizing Without Organizationsis a book byClay Shirkypublished by Penguin Press in 2008 on the effect of theInterneton moderngroup dynamicsand organization. The author considers examples such asWikipedia,MySpace, and othersocial mediain his analysis. According to Shirky, the book is about "what happens when people are given the tools to do things together, without needing traditional organizational structures".[1]The title of the work alludes toHCE, a recurring and central figure inJames Joyce'sFinnegans Wakeand considers the impacts of self-organizing movements on culture, politics, and business.[2] In the book, Shirky recounts how social tools, such as blogging software likeWordPressandTwitter, file sharing platforms likeFlickr, and online collaboration platforms like Wikipedia, support group conversation and group action in a way that could previously only be achieved throughinstitutions. Shirky argues that with the advent of online social tools, groups can form without previous restrictions of time and cost, in the same way theprinting pressincreased individual expression, and thetelephoneincreased communications between individuals. Shirky observes that: "[Every] institution lives in a kind of contradiction: it exists to take advantage of group effort, but some of its resources are drained away by directing that effort. Call this theinstitutional dilemma--because an institution expends resources to manage resources, there is a gap between what those institutions are capable of in theory and in practice, and the larger the institution, the greater those costs."[3] Online social tools, Shirky argues, allow groups to form around activities 'whose costs are higher than the potential value,'[4]for institutions. Shirky further argues that the successful creation of online groups relies on successful fusion of a, 'plausible promise, an effective tool, and an acceptable bargain for the user.'[5]However, Shirky warns that this system should not be interpreted as a recipe for the successful use of social tools as the interaction between the components is too complex. Shirky also discusses the possibility ofmass amateurizationthat the internet allows.[6]With blogging and photo-sharing websites, anyone can publish an article or photo that they have created. This creates a mass amateurization of journalism and photography, requiring a new definition of what credentials make someone a journalist, photographer, or news reporter. This mass amateurization threatens to change the way news is spread throughout different media outlets. However, after publication, in an interview withJournalism.co.uk, Clay Shirky revised some of his own work by saying that "democratic legitimation is itself enough to regard aggregate public opinion as being clearly binding on the government." Shirky uses the example of the prioritization of a campaign to legalize medical marijuana onChange.gov, stating that while it was a 'net positive,' for democracy, it was not an absolute positive. He concedes that public pressure via the Internet could be another implementation method for special interest groups.[7] In Chapter Two, "Sharing Anchors Community", the author uses theories from the 1937 paperThe Nature of the FirmbyNobel Prize–winning economistRonald Coasewhich introduces the concept oftransaction coststo explain the nature and limits of firms. 
From these theories, Shirky derives two terms that represent the constraints under which these traditional institutions operate: Coasean Ceiling and Coasean Floor. The author argues that social tools drastically reduce transaction costs and organizing overhead, allowing loosely structured groups with limited managerial oversight to operate under the Coasean Floor. As an example, he citesFlickr, which allows groups to organically form around themes of images without the transaction costs of managerial oversight. In Chapter Eleven, "Promise, Tool, Bargain", Shirky states that each success story of using social tools to form groups contained within the book is an example of the complex fusion of 'a plausiblepromise, an effectivetool, and an acceptablebargainwith the users.' The Booksellerdeclared the book one of the two "most reviewed" books over the [2008] Easter weekend, noting that theTelegraph'sreviewer Dibbell found it "as crisply argued and as enlightening a book about the Internet as has been written" and that theGuardianreviewer Stuart Jeffries called it "terrifically clever" and "harrowing".[12] In a 2009 review, NYTimes.com contributor Liesl Schillinger called the book "eloquent and accessible" and encouraged readers to buy the book, which had recently been released in paperback.[13] In theTimes Higher Education,Tara Brabazon, professor of Media Studies atUniversity of Brighton, criticizesHere Comes Everybodyfor excluding "older citizens, the poor, and the illiterate". Brabazon also argues that the "assumption that 'we' can learn about technology from technology - without attention to user-generated contexts rather than content - is the gaping, stunning silence of Shirky's argument".[14]
https://en.wikipedia.org/wiki/Here_Comes_Everybody_(book)
The term "knowledge commons" refers to information, data, and content that is collectively owned and managed by a community of users,[1]particularly over the Internet. What distinguishes a knowledge commons from a commons of shared physical resources is that digital resources are non-subtractible;[2]that is, multiple users can access the same digital resources with no effect on their quantity or quality.[3]

The term 'commons' is derived from the medieval economic system the commons.[4]The knowledge commons is a model for a number of domains, including Open Educational Resources such as the MIT OpenCourseWare, free digital media such as Wikipedia,[5]Creative Commons–licensed art, open-source research,[6]and open scientific collections such as the Public Library of Science or the Science Commons, free software and Open Design.[7][8]According to research by Charlotte Hess and Elinor Ostrom,[3]the conceptual background of the knowledge commons encompasses two intellectual histories: first, a European tradition of battling the enclosure of the "intangible commons of the mind",[9]threatened by expanding intellectual property rights and privatization of knowledge.[10]Second, a tradition rooted in the United States, which sees the knowledge commons as a shared space allowing for free speech and democratic practices,[11]and which is in the tradition of the town commons movement and commons-based production of scholarly work, open science, open libraries, and collective action.[3]

The production of works in the knowledge commons is often driven by collective intelligence, or the wisdom of crowds, and is related to knowledge communism[12]as defined by Robert K. Merton, according to whom scientists give up intellectual property rights in exchange for recognition and esteem.[13]

Ferenc Gyuris argues that it is important to distinguish "information" from "knowledge" in defining the term "knowledge commons".[14]He argues that "knowledge as a shared resource" requires that both information must become accessible and potential recipients must become able and willing to internalize it as 'knowledge'. "Therefore, knowledge cannot become a shared resource without a complex set of institutions and practices that give the opportunity to potential recipients to gain the necessary abilities and willingness".[15]

Copyleft licenses are institutions which support a knowledge commons of executable software.[16]Copyleft licenses grant licensees all necessary rights, such as the right to study, use, change and redistribute—under the condition that all future works building on the licensed work are again kept in the commons.[17]Popular applications of the 'copyleft' principle are the GNU Software Licenses (GPL, LGPL and GFDL by the Free Software Foundation) and the share-alike licenses under Creative Commons.[18]
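As a concrete illustration of how the copyleft condition described above is typically declared in practice, the sketch below shows a minimal, hypothetical source-file header combining an SPDX license identifier with the standard GNU GPL application notice. The file name, project, and function are invented for the example and are not drawn from the sources cited above.

```python
# SPDX-License-Identifier: GPL-3.0-or-later
#
# hypothetical_module.py -- part of an imaginary project, used here only
# to illustrate how a copyleft declaration is attached to source code.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Because the GPL is a copyleft ("share-alike") license, anyone who
# distributes a modified version of this file must do so under the same
# terms, which is what keeps derived works in the commons.

def greet(name: str) -> str:
    """Return a greeting; the function body is only a placeholder."""
    return f"Hello, {name}!"
```

Under such a header, a redistributor of a modified copy must license their changes under the same GPL terms, so the work and its derivatives remain available to all future users.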
https://en.wikipedia.org/wiki/Knowledge_commons
Mass collaborationis a form ofcollective actionthat occurs when large numbers of people work independently on a single project, often modular in its nature.[1]Such projects typically take place on the internet usingsocial softwareandcomputer-supported collaborationtoolssuch aswikitechnologies, which provide a potentially infinite hypertextual substrate within which the collaboration may be situated. Open source software such asLinuxwas developed via mass collaboration. Modularity enables a mass of experiments to proceed in parallel, with different teams working on the same modules, each proposing different solutions. Modularity allows different "blocks" to be easily assembled, facilitating decentralised innovation that all fits together.[2] Mass collaboration differs from masscooperationin that the creative acts taking place require the joint development ofshared understandings. Conversely, group members involved in cooperation needn't engage in a joint negotiation of understanding; they may simply execute instructions willingly. Another important distinction is the borders around which a mass cooperation can be defined. Due to the extremely general characteristics and lack of need for fine grain negotiation and consensus when cooperating, the entire internet, a city, and even the global economy may be regarded as examples of mass cooperation. Thus mass collaboration is more refined and complex in its process and production on the level of collective engagement. Although an online discussion is certainly collaborative, mass collaboration differs from a large forum, email list, bulletin board, chat session or group discussion in that the discussion's structure of separate, individual posts generated throughturn-takingcommunication means the textual content does not take the form of a single, coherent body. Of course the conceptual domain of the overall discussion exists as a single unified body, however the textual contributors can be linked back to the understandings and interpretations of a single author. Though the author's understandings and interpretations are most certainly a negotiation of the understandings of all who read and contribute to the discussion, the fact that there was only one author of a given entry reduces the entry's collaborative complexity to the discursive/interpretive as opposed to constructive/‘negotiative’ levels[3][4][5] From the perspective of individual sites of work within a mass collaboration, the activity may appear to be identical to that ofcoauthoring. In fact, it is, with the exception being the implicit and explicit relationships formed by the interdependence that many sites within a mass collaboration share through hypertext and coauthorship with differing sets of collaborators. This interdependence of collaborative sites coauthored by a large number of people is what gives a mass collaboration one of its most distinguishing features - a coherent collaboration emerging from the interrelated collection of its parts. Many of the web applications associated with Bulletin boards, or forums can include a wide variety of tools that allows individuals to keep track of sites and content that they find on the internet. Users are able to bookmark from their browser by editing the title, adding a description and most importantly classifying using tags. 
Other non-collective tools, such as commenting, rating and quick evaluation, are also used in mass collaborative environments.[6]

In the books Wikinomics: How Mass Collaboration Changes Everything and MacroWikinomics: Rebooting Business and the World, Don Tapscott and Anthony Williams list five ideas on which the new art and science of wikinomics is based.

The concept of mass collaboration has led to a number of efforts to harness and commercialize shared tasks. Collectively known as crowdsourcing, these ventures typically rely on an online system of accounts for coordinating buyers and sellers of labor. Amazon's Mechanical Turk system follows this model, enabling employers to distribute minute tasks to thousands of registered workers. In the advertising industry, Giant Hydra employs mass collaboration to enable creatives to collaborate on advertising ideas online and create what they call an 'idea matrix', a highly complex node of concepts, executions and ideas all connected to each other. In the financial industry, companies such as the Open Models Valuation Company (OMVCO) also employ mass collaboration to improve the accuracy of financial forecasts.

In traditional collaborative scenarios, discussion plays a key role in the negotiation of jointly developed, shared understandings (the essence of collaboration), acting as a point of mediation between the individual collaborators and the outcome which may or may not eventuate from the discussions. Mass collaboration reverses this relationship, with the work being done providing the point of mediation between collaborators, and with associated discussions being an optional component. It is of course debatable whether discussion is optional, as most (if not all) mass collaborations have discussions associated with the content being developed. However it is possible to contribute (to Wikipedia for instance) without discussing the content you are contributing to. (Smaller scale collaborations might be conducted without discussions, especially in a non-verbal mode of work – imagine two painters contributing to the same canvas – but the situation becomes increasingly problematic as more members are included.)

Although the only widely successful examples of mass collaboration thus far evaluated exist in the textual medium, there is no immediate reason why this form of collective action couldn't work in other creative media. It could be argued that some projects within the open source software movement provide examples of mass collaboration outside of the traditional written language (see below); however, the code collaboratively created still exists as a language utilizing a textual medium. Music is also a possible medium for mass collaboration, for instance on live performance recordings where audience members' voices have become part of the standard version of a song. Most "anonymous" folk songs and "traditional" tunes are also arguably sites of long-term mass collaboration.

The sudden and unexpected importance of Wikipedia, a free online encyclopedia created by tens of thousands of volunteers and coordinated in a deeply decentralized fashion, represents a radical new modality of content creation by massively distributed collaboration. The unique principles and values that have enabled the Wikipedia community to succeed suggest intriguing prospects for applying these methods to a broad spectrum of intellectual endeavors.
https://en.wikipedia.org/wiki/Mass_collaboration
Non-formal learning includes various structured learning situations which do not have the level of curriculum, institutionalization, accreditation or certification associated with 'formal learning', but have more structure than that associated with 'informal learning', which typically takes place naturally and spontaneously as part of other activities. These form the three styles of learning recognised and supported by the OECD.[1]

Examples of non-formal learning include swimming sessions for toddlers, community-based sports programs, programs developed by organisations such as the Boy Scouts and the Girl Guides, community or non-credit adult education courses, sports or fitness programs, professional conference-style seminars, and continuing professional development.[2]The learner's objectives may be to increase skills and knowledge, as well as to experience the emotional rewards associated with increased love for a subject or increased passion for learning.[3]

The debate over the relative value of formal and informal learning has existed for a number of years. Traditionally formal learning takes place in a school or university and has a greater value placed upon it than informal learning, such as learning within the workplace. The concept of formal learning as the socio-culturally accepted norm for learning was first challenged by Scribner and Cole[4]in 1973, who claimed most things in life are better learnt through informal processes, citing language learning as an example. Moreover, anthropologists noted that complex learning still takes place within indigenous communities that have no formal educational institutions.[5]

It is the acquisition of this knowledge or learning which occurs in everyday life that has not been fully valued or understood. This led to the declaration by the OECD educational ministers of the "life-long learning for all"[6]strategy in 1996. This includes 23 countries from five continents, who have sought to clarify and validate all forms of learning including formal, non-formal and informal. This has been in conjunction with the European Union, which has also developed policies for life-long learning which focus strongly on the need to identify, assess and certify non-formal and informal learning, particularly in the workplace.[7][citation needed]

Countries involved in recognition of non-formal learning (OECD 2010)

Although all definitions can be contested (see below), this article shall refer to the European Centre for the Development of Vocational Training (Cedefop) 2001 communication on 'lifelong learning: formal, non-formal and informal learning' as the guideline for the differing definitions.

Formal learning: learning typically provided by an education or training institution, structured (in terms of learning objectives, learning time or learning support) and leading to certification. Formal learning is intentional from the learner's perspective. (Cedefop 2001)[8]

Informal learning: learning resulting from daily life activities related to work, family or leisure. It is not structured (in terms of learning objectives, learning time or learning support) and typically does not lead to certification. Informal learning may be intentional but in most cases it is non-intentional (or "incidental"/random). (Cedefop 2001)[8]

UNESCO focuses on the flexibility of non-formal education and how it allows for more personalised learning. This type of education is open to people of any personality, age and origin, irrespective of their personal interests.[9]

Non-formal learning: see definition above.
If there is no clear distinction between formal and informal learning, where is the room for non-formal learning? It is a contested issue with numerous definitions given. The following are some of the competing theories.

"It is difficult to make a clear distinction between formal and informal learning as there is often a crossover between the two." (McGivney, 1999, p. 1)[10]

Similarly, Hodkinson et al. (2003) conclude, after a significant literature analysis on the topics of formal, informal, and non-formal learning, that "the terms informal and non-formal appeared interchangeable, each being primarily defined in opposition to the dominant formal education system, and the largely individualist and acquisitional conceptualisations of learning developed in relation to such educational contexts." (Hodkinson et al., 2003, p. 314)[11]Moreover, they state that "It is important not to see informal and formal attributes as somehow separate, waiting to be integrated. This is the dominant view in the literature, and it is mistaken. Thus, the challenge is not to, somehow, combine informal and formal learning, for informal and formal attributes are present and inter-related, whether we will it so or not. The challenge is to recognise and identify them, and understand the implications. For this reason, the concept of non-formal learning, at least when seen as a middle state between formal and informal, is redundant." (p. 314)

Eraut's[12]classification of learning into formal and non-formal: this removes informal learning from the equation and states that all learning outside of formal learning is non-formal. Eraut equates informal with connotations of dress, language or behaviour that have no relation to learning. Eraut defines formal learning as taking place within a learning framework; within a classroom or learning institution, with a designated teacher or trainer; the award of a qualification or credit; the external specification of outcomes. Any learning that occurs outside of these parameters is non-formal. (Infed 2002)[13]

The EC (2001) Communication on Lifelong Learning: formal, non-formal and informal learning: the EU places non-formal learning in between formal and informal learning (see above). This has learning both in a formal setting with a learning framework and as an organised event, but without a qualification. "Non-formal learning: learning that is not provided by an education or training institution and typically does not lead to certification. It is, however, structured (in terms of learning objectives, learning time or learning support). Non-formal learning is intentional from the learner's perspective." (Cedefop 2001)[8]

Livingstone's[14]adult formal and informal education, non-formal and informal learning: this focuses on the idea of adult non-formal education. This new mode, 'informal education', is when teachers or mentors guide learners without reference to structured learning outcomes. This informal education learning is gaining knowledge without an imposed framework, such as learning new job skills. (Infed, 2002)[13]

Billett[15](2001): there is no such thing as informal learning. Billett's definition states there is no such thing as non-formal and informal learning. He states that all human activity is learning, that everything people do involves a process of learning, and that "all learning takes place within social organisations or communities that have formalised structures."
Moreover, he states that most learning in life takes place outside of formal education. (Infed 2002)[13]

The Council of Europe puts the distinction in terms of willingness and the systems in which learning takes place: non-formal learning takes place outside learning institutions, while informal learning is a part of the formal systems.[16]

Recently, many international organizations and UNESCO Member States have emphasized the importance of learning that takes place outside of formal learning settings. This emphasis has led UNESCO, through its Institute for Lifelong Learning (UIL), to adopt international guidelines for the Recognition, Validation and Accreditation of the Outcomes of Non-formal and Informal Learning in 2012.[17]The emphasis has also led to an increasing number of policies and programmes in many Member States, and a gradual shift from pilots to large-scale systems such as those in Portugal, France, Australia, Mauritius and South Africa.[18]

Cedefop has created European guidelines to provide validation to a broad range of learning experiences, thereby aiding transparency and comparability across national borders. The broad framework for achieving this certification across both non-formal and informal learning is outlined in the Cedefop European guidelines for validating non-formal and informal learning: Routes from learning to certification.[19]

There are different approaches to validation between OECD and EU countries, with countries adopting different measures. The EU, as noted above, through Cedefop released the European guidelines for validating non-formal and informal learning in 2009 to standardise validation throughout the EU. Within the OECD countries, the picture is more mixed.

Countries with the existence of recognition for non-formal and informal learning (Feutrie, 2007)[20]

Non-formal education (NFE) is popular on a worldwide scale in both 'western' and 'developing countries'. Non-formal education can form a matrix with formal and informal education, as non-formal education can mean any form of systematic learning conducted outside the formal setting. Many courses related to non-formal education have been introduced in several universities in western and developing countries. The UNESCO Institute for Education conducted a seminar on non-formal education in Morocco. The Association for the Development of Education in Africa (ADEA) launched many programmes in non-formal education in at least 15 countries of Sub-Saharan Africa. In 2001 the World Bank conducted an international seminar on basic education in non-formal programmes. In addition to this, the World Bank was advised to extend its services to adult and non-formal education.
A report on professional education, Making Learning Visible: the identification, assessment and recognition of non-formal learning in Europe, defines non-formal learning as semi-structured, consisting of planned and explicit approaches to learning introduced into work organisations and elsewhere, not recognised within the formal education and training system.[21]

Research by Dr Marnee Shay, a senior lecturer in the University of Queensland School of Education, indicates that there are nearly 10 times more Indigenous students in flexible schools than would be expected from numbers in the general population.[22]

Several classifications of non-formal education have been proposed.[23][24]Willems and Andersson[25]classify non-formal education according to two dimensions: (1) "NFE in relation to formal and informal learning (Substitute-Complement)" and (2) "Main learning content of NFE (Competencies-Values)". Based on these two dimensions, they describe four types of non-formal education. The goal of their framework is to better understand the various public governance challenges and structures that very different types of non-formal education have. Similarly, Shrestha[26]and colleagues focus on the role of NFE in comparison to formal education. Hoppers[27]proposes a three-fold classification, also in comparison to formal education: "A. Supplementary provisions", "B. Compensatory provisions", and "C. Alternative provisions". Rogers[28]points to the changing role of NFE over the last five decades and makes a distinction between a first and a second generation of NFE.

Community work, which is particularly widespread in Scotland, fosters people's commitment to their neighbours and encourages participation in and development of local democratic forms of organisation. Youth work focuses on making young people more active in society. Social work helps young people in homes to develop ways to deal with complex situations, such as fostering fruitful relationships between parents and children and bringing different groups of carers together. In France and Italy, animation in a particular form is a kind of non-formal education. It uses theatre and acting as means of self-expression with different community groups for children and people with special needs. This type of non-formal education helps in ensuring active participation and teaches people to manage the community in which they live. In youth and community organisations, young people have the opportunity to discover, analyse and understand values and their implications and build a set of values to guide their lives. They run work camps and meetings, recruit volunteers, administer bank accounts, give counselling and so on, to work toward social change.[29]

Education plays an important role in development. Out-of-school programmes are important to provide adaptable learning opportunities and new skills and knowledge to the large percentage of people who are beyond the reach of formal education. Non-formal education began to gain popularity in the late 1960s and early 1970s. Today, non-formal education is seen as a concept of recurrent and lifelong learning. Non-formal education is popular among adults, especially women, as it increases women's participation in both private and public activities, i.e. in household decision making and as active citizens in community affairs and national development.
These literacy programmes have a dramatic impact on women's self-esteem because they unleash their potential in economic, social, cultural and political spheres. According to UNESCO (2010), non-formal education helps to ensure equal access to education, eradicate illiteracy among women and improve women's access to professional training, science, technology and continuing education. It also encourages the development of non-discriminatory education and training. The effectiveness of such literacy and non-formal education programmes is bolstered by family, community and parental involvement.[citation needed]This is why the United Nations Sustainable Development Goal 4 advocates for a diversification of learning opportunities and the use of a wide range of education and training modalities in recognition of the importance of non-formal education.

Non-formal education is beneficial in a number of ways. Activities that encourage young people to choose their own programmes and projects are important because they offer youth the flexibility and freedom to explore their emerging interests. When young people can choose the activities in which they participate, they have opportunities to develop skills such as decision making. A distinction can be made between "participant functionality" and "societal functionality" of non-formal education.[30]Participant functionality refers to the intended advantages for the individual participants in non-formal education, while societal functionality refers to the benefits non-formal education has for society in general.

Non-formal learning involves experiential learning activities that foster the development of skills and knowledge. This helps to build confidence and abilities among the youth of today. It also helps in the development of personal relationships, not only among the youth but also among adults, and in developing interpersonal skills as young people learn to interact with peers outside the class and with adults in the community.[31]

Formal education systems are inadequate to effectively meet the needs of the individual and of society. The need to offer more and better education at all levels to a growing number of people, particularly in developing countries, and the scant success of current formal education systems in meeting such demands, have shown the need to develop alternative approaches to learning. Formal schools have a rigid structure, concentrating on rules and regulations rather than on the real needs of the students, offering curricula that lean away from the individual and from society, and far more concerned with delivering programmes than with reaching useful objectives. This has called for non-formal education which, starting from the basic needs of the students, is concerned with the establishment of strategies that are compatible with reality.[32]

The recognition of non-formal learning through credentials, diplomas, certificates, and awards is sorely lacking,[according to whom?]which can negatively affect employment opportunities that require specific certification or degrees.[33]

Non-formal learning, due to its 'unofficial' and ad hoc nature, may also lack a specific curriculum with a clear structure and direction, which also implies a lack of accountability due to an over-reliance on self-assessment.
Moreover, more often than not, the organizations or individuals providing non-formal learning tend to rely on teachers who are not professionally trained, meaning they may possess fewer of the qualities of professionally trained teachers, which can negatively affect the students.[34]
https://en.wikipedia.org/wiki/Nonformal_learning
Open collaborationrefers to any "system of innovation or production that relies on goal-oriented yet loosely coordinated participants who cooperate voluntarily to create a product (or service) ofeconomic value, which is made freely available to contributors and noncontributors alike."[1]It is prominently observed inopen source software, and has been initially described inRichard Stallman'sGNU Manifesto,[2]as well asEric S. Raymond's 1997 essay,The Cathedral and the Bazaar. Beyond open source software, open collaboration is also applied to the development of other types of mind or creative works, such as information provision inInternet forums, or the production of encyclopedic content inWikipedia.[3] The organizing principle behind open collaboration is that ofpeer production.[4]Peer production communities are structured in an entirely decentralized manner, but differ from markets in that they function without price-based coordination, and often on the basis of volunteering only. Such communities are geared toward the production of openly accessible public or "common" goods, but differ from the State as well as charity groups in that they operate without a formal hierarchical structure, and rest solely on the construction of a rough, evolving consensus among participants.[5][6] Riehle et al. define open collaboration as collaboration based on three principles ofegalitarianism,meritocracy, andself-organization.[7]Levine and Piretula define open collaboration as "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike."[8][9]This definition captures multiple instances, all joined by similar principles. For example, all of the elements — goods of economic value, open access to contribute and consume, interaction and exchange, purposeful yet loosely coordinated work — are present in an open source software project, in Wikipedia, or in a user forum or community. They can also be present in a commercial website that is based onuser-generated content. In all of these instances of open collaboration, anyone can contribute and anyone can freely partake in the fruits of sharing, which are produced by interacting participants who are loosely coordinated.[10]: 17 An annual conference dedicated to the research and practice of open collaboration is the International Symposium on Open Collaboration (OpenSym, formerly WikiSym).[11]As per its website, the group defines open collaboration as "collaboration that is egalitarian (everyone can join, no principled or artificial barriers to participation exist), meritocratic (decisions and status are merit-based rather than imposed) and self-organizing (processes adapt to people rather than people adapt to pre-defined processes)."[12] Since 2011, a peer-reviewed academic journal,The Journal of Peer Production(JoPP), is dedicated to documenting and researching peer production processes. This academic community understands peer production "as a mode of commons-based and oriented production in which participation is voluntary and predicated on the self-selection of tasks. Notable examples are the collaborative development of Free Software projects and of the Wikipedia online encyclopedia."[13]
https://en.wikipedia.org/wiki/Open_collaboration
One of the most visible approaches topeer learningcomes out ofcognitive psychology, and is applied within a "mainstream"educationalframework: "Peer learning is an educational practice in which students interact with other students to attain educational goals."[1]Other authors including David Boud describe peer learning as a way of moving beyond independent to interdependent or mutual learning among peers.[2]In this context, it can be compared to the practices that go by the namecooperative learning. However, other contemporary views on peer learning relax the constraints, and position "peer-to-peer learning" as a mode of "learning for everyone, by everyone, about almost anything."[3]Whether it takes place in aformalorinformallearning context, in small groups oronline, peer learning manifests aspects ofself-organizationthat are mostly absent frompedagogicalmodels of teaching and learning. In his 1916 book,Democracy and Education,John Deweywrote, “Education is not an affair of 'telling' and being told, but an active and constructive process.” In a later essay, entitled "Experience and Education",[4]Dewey went into greater detail about the science of child development and developed the basicConstructivisttheory that knowledge is created through experience, rather than passed down from teacher to student through rote memorization. Soviet psychologistLev Vygotsky, who developed the concept of theZone of Proximal Development, was another proponent of constructivist learning: his book,Thought and Language, provides evidence that students learn better through collaborative, meaningful problem-solving activities than through solo exercises. The three distinguishing features of constructivist theory are claims that:[5] These are clearly meaningful propositions in a social context with sustained relationships, where people work on projects or tasks that are collaborative or otherwise shared. Educational Psychology Professor Alison King explains in "Promoting Thinking Through Peer Learning"[6]that peer learning exercises as simple as having students explain concepts to one another are proof of social constructivism theory at work; the act of teaching another individual demands that students “clarify, elaborate on, and otherwise reconceptualize material.” Joss Winn, Senior Lecturer in Educational Research at University of Lincoln, proposes that schools radically redefine the teacher-student relationship to fit this constructivist theory of knowledge in his December 2011 paper, "Student as Producer".[7]Carl Rogers' "Personal Thoughts on Learning"[8]focus on the individual’s experience of effective learning, and eventually conclude that nearly the entire traditional educational structure is at odds with this experience. Self-discovered learning in a group that designates a facilitator is the “new approach” Rogers recommends for education. In general, peer learning may adaptconstructivistordiscovery learningmethods for the peer-to-peer context: however, peer learning typically manifests constructivist ideas in a more informal way, whenlearningandcollaborationare simply applied to solve some real shared problem. Paulo FreireinPedagogy of the Oppressedadvocated a more equitable relationship between teachers and students, one in which information is questioned and situated in political context, and all participants in the classroom work together to create knowledge. 
Paulo Blikstein, Assistant Professor of Education at Stanford University wrote inTravels in Troy with Freire: Technology as an Agent of Emancipation[9]that through exploratory building activities, “Not only did students become more autonomous and responsible, they learned to teach one another.” Yochai Benklerexplains how the now-ubiquitous computer helps us produce and process knowledge together with others in his book,The Wealth of Networks.George Siemensargues inConnectivism: A Learning Theory for the Digital Age, that technology has changed the way we learn, explaining how it tends to complicate or expose the limitations of the learning theories of the past. In practice, the ideas of connectivism developed in and alongside the then-new social formation, "massive open online courses" or MOOCs. Connectivismproposes that the knowledge we can access by virtue of our connections with others is just as valuable as the information carried inside our minds. The learning process, therefore, is not entirely under an individual’s control—learning can happen outside ourselves, as if we are a member of a large organization where many people are continuously updating a shared database. Rita Kop and Adrian Hill, in their critique of connectivism,[10]state that: In global health, peer learning has emerged as a significant approach for spreading evidence-based practices at scale.[11]Research from The Geneva Learning Foundation has demonstrated that structured peer learning networks can achieve higher efficacy scores (3.2 out of 4) compared to traditional cascade training (1.4) or expert coaching (2.2) when measured across variables including scalability, information fidelity, and cost effectiveness. For example, in Côte d'Ivoire, a peer learning initiative reached health workers across 85% of the country's districts within two weeks, leading to locally-led innovations in community engagement. The approach has shown particular promise in complex health interventions where traditional randomized controlled trials may be impractical, with one study showing peer learning participants were seven times more likely to successfully implement COVID-19 recovery plans compared to a control group.[12] In a joint paper, Roy Williams, Regina Karousou, and Jenny Mackness argue that educational institutions should consider "emergent learning," in which learning arises from a self-organized group interaction, as a valuable component of education in the Digital Age. Web 2.0 puts distributed individuals into a group setting where emergent learning can occur. However, deciding how to manage emergence is important; “fail-safe” management drives activity towards pre-determined outcomes, while “safe/fail experiments” steer away from negative outcomes while leaving space open for mistakes and innovation.[13]Williamset al.also distinguish between the term “environment” as controlled, and “ecology” as free/open. Cathy DavidsonandDavid Theo Goldbergwrite inThe Future of Learning Institutions in a Digital Ageabout the potential of “participatory learning,” and a new paradigm of education that is focused on mediated interactions between peers. They argue that if institutions of higher learning could begin to value this type of learning, instead of simply trying to implement “Instructional Technology” in classrooms, they could transform old models of university education. 
Davidson and Goldberg introduce “Ten Principles for the Future of Learning,” which include self-learning, horizontal structures, and open source education.Peter Sloterdijk's recent book "You Must Change Your Life" proposes similar ideas in the context of a "General Disciplinics" that would "counteract the atrophy of the educational system" by focusing on forms of learning that takes place through direct participation in the disciplines.[14](p. 156) Yochai BenklerandHelen Nissenbaumdiscuss implications for the realm of moral philosophy in their 2006 essay, "Commons-Based Peer Production and Virtue".[15]They argue that the “socio-technical systems” of today’s Internet make it easier for people to role-model and adopt positive, virtuous behaviors on a large scale. Joseph Corneli and Charles Jeffrey Danoff proposed the label “paragogy” to describe a collection of “best practices of effective peer learning”.[16]They published a short book[17]along with several papers in which they discuss five "paragogical principles" that form the core of their proposedlearning theory. These were generated by rethinkingMalcolm Knowlesprinciples ofandragogyfor a learning context that is co-created by the learners. The learning theories and approaches described above are currently being tested in peer-learning communities around the world, often adaptingeducational technologyto supportinformal learning, though results in formal learning contexts exist too. For example,Eric Mazurand colleagues report on "Ten years of experience and results" with a teaching technique they call "Peer Instruction": This approach made early use of a variant of the technique that is now known as the "flipped classroom": Peer 2 Peer University, or P2PU, which was founded in 2009 by Philipp Schmidt and others, is an example from the informal learning side. Speaking about the beginnings of P2PU, Schmidt echoes Siemens’ connectivism ideas and explains that, “The expertise is in the group. That’s the message, that everyone can bring something to the conversation.”[3]In numerous public talks, Schmidt argues that current educational models are "broken" (particularly on the basis of the high cost of university-level training). He suggests that social assessment mechanisms similar to those applied in open-source software development can be applied to education.[19]In practice, this approach uses peer-based assessment including recommendations andbadgesto provide an alternative form of accreditation.[20] Jeff Young’s article in the Chronicle of Higher Education, "When Professors Print Their Own Diplomas",[21]sparked a conversation about the necessity of formal degrees in an age when class lectures can be uploaded for free. The MIT Open Teaching initiative, for example, has since 2001 put all of its course materials online. ButDavid A. Wiley, then Psychology Professor at Utah State, went further, signing certificates for whoever takes his class. A similar practice has become even more visible in learning projects likeUdacity,Coursera, andEdX. Although these projects attempt to "scale education" by distributing learning materials produced by experts (not classic examples of peer learning), they do frequently feature peer-to-peer discussions in forums or offline.[22] In the forward to a book on thePower of peer learningby Jean-H. Guilmette, Maureen O'Neil, then president of Canada's International Development Research Centre, states that Guilmette suggests that peer learning is useful in the development context because Guilmette cites Anne K. 
Bernard, who in a report based on extensive interviews, concludes: ScardamaliaandBereiterexplain in "Computer Support for Knowledge-Building Communities"[25]that computers in the classroom have the opportunity to restructure the learning environment, but too often they are simply used to provide a digital version of a normal lesson or exam. They propose that classrooms be exchanged for “knowledge-building communities” where students can use computers to connect to and create knowledge in the outside world. However, as illustrated in citations above, this way of thinking about learning is often at odds with traditional educational praxis. In "The Role of the Learning Platform in Student-Centered E-Learning", Kurliha, Miettinen, Nokelainen, and Tirri found a "difference in learning outcomes based on the tools used."[26]However, the variables at work are not well understood, and are the subject of ongoing research.[27]Within a formal education setting, a 1994 study found that students were more responsive to feedback from a teacher than they were topeer feedback. However, another later study showed that training in assessment techniques had a positive impact on individual student performance. A classic study[28]on motivation inpeer tutoringshowed that "reward is no motivator." Although other more recent work has shown that non-monetary rewards or acknowledgement can make a difference inperformance(for certain populations of peer producers),[29]the exact motivations for going out of the way to teach or tutor someone else are not clearly understood. As mentionedabove, learning is often just part of solving a problem, so "peer learning" and "peer teaching" would tend to happen informally when people solve problems in groups. Research on peer learning may involveparticipant observation, and may itself bepeer produced. Some of this research falls under the broader umbrella ofScholarship of Teaching and Learning.Computer-supported collaborative learningis one obvious context in which to study peer learning, since in such settings "learning is observably and accountably embedded in collaborative activity."[30]Research has shown that peer collaboration in nursing simulations not only fosters a deeper understanding of clinical concepts but also improves students' ability to navigate complex decision-making scenarios, aligning with the principles of constructivist learning where knowledge is co-created through experiential peer interactions.[31]However, peer learning can play a role in settings where traditional conceptions of both "teaching" and "learning" do not apply, for instance, inacademic peer review, inorganizational learning, in development work, and in public health programmes. Research in these areas may fall within the area oforganization science,science, technology and society(STS) or other fields. This article incorporatestextavailable under theCC0license.
https://en.wikipedia.org/wiki/Peer_learning
Peer reviewis the evaluation of work by one or more people with similar competencies as the producers of the work (peers).[1]It functions as a form of self-regulation by qualified members of a profession within the relevantfield. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. Inacademia,scholarly peer reviewis often used to determine anacademic paper's suitability for publication.[2]Peer review can be categorized by the type of activity and by the field or profession in which the activity occurs, e.g.,medical peer review. It can also be used as a teaching tool to help students improve writing assignments.[3] Henry Oldenburg(1619–1677) was a German-born British philosopher who is seen as the 'father' of modern scientific peer review.[4][5][6]It developed over the following centuries with, for example, the journalNaturemaking it standard practice in 1973. The term "peer review" was first used in the early 1970s.[7]A monument to peer review has been at theHigher School of Economicsin Moscow since 2017.[8] Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. In academia, peer review is used to inform decisions related to faculty advancement and tenure.[9] A prototype professional peer review process was recommended in theEthics of the Physicianwritten byIshāq ibn ʻAlī al-Ruhāwī(854–931). He stated that a visiting physician had to make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the notes of the physician were examined by a local medical council of other physicians, who would decide whether the treatment had met the required standards of medical care.[10] Professional peer review is common in the field of health care, where it is usually calledclinical peer review.[11]Further, since peer review activity is commonly segmented by clinical discipline, there is also physician peer review, nursing peer review, dentistry peer review, etc.[12]Many other professional fields have some level of peer review process: accounting,[13]law,[14][15]engineering (e.g.,software peer review,technical peer review), aviation, and even forest fire management.[16] Peer review is used in education to achieve certain learning objectives, particularly as a tool to reach higher order processes in the affective and cognitive domains as defined byBloom's taxonomy. This may take a variety of forms, including closely mimicking the scholarly peer review processes used in science and medicine.[17][18] Scholarly peer reviewor academic peer review (also known as refereeing) is the process of having a draft version of a researcher'smethodsandfindingsreviewed (usually anonymously) byexperts(or "peers") in the same field. Peer review is widely used for helping the academic publisher (that is, theeditor-in-chief, theeditorial boardor theprogram committee) decide whether the work should be accepted, considered acceptable with revisions, or rejected for official publication in anacademic journal, amonographor in theproceedingsof anacademic conference. If the identities of authors are not revealed to each other, the procedure is called dual-anonymous peer review. 
Medical peer review may be distinguished into four categories:[21] Additionally, "medical peer review" has been used by the American Medical Association to refer not only to the process of improving quality and safety in health care organizations, but also to the process of rating clinical behavior or compliance with professional society membership standards.[26][27] The medical community regards it as the best available method of ensuring that published research is reliable and that any medical treatments it advocates are safe and effective.[28] Thus, the terminology has poor standardization and specificity, particularly as a database search term.[28] In engineering, technical peer review is a type of engineering review. Technical peer reviews are a well-defined review process for finding and fixing defects, conducted by a team of peers with assigned roles. Technical peer reviews are carried out by peers representing the areas of the life cycle affected by the material being reviewed (usually limited to six or fewer people). Technical peer reviews are held within development phases, between milestone reviews, on completed products or completed portions of products.[29] The European Union has been using peer review in the "Open Method of Co-ordination" of policies in the field of active labour market policy since 1999.[30] In 2004, a program of peer reviews started in social inclusion.[31] Each program sponsors about eight peer review meetings each year, in which a "host country" lays a given policy or initiative open to examination by half a dozen other countries and the relevant European-level NGOs. These usually meet over two days and include visits to local sites where the policy can be seen in operation. The meeting is preceded by the compilation of an expert report on which participating "peer countries" submit comments. The results are published on the web.[citation needed] The United Nations Economic Commission for Europe, through UNECE Environmental Performance Reviews, uses peer review, referred to as "peer learning", to evaluate progress made by its member countries in improving their environmental policies.[citation needed] The State of California is the only U.S. state to mandate scientific peer review. In 1997, the Governor of California signed into law Senate Bill 1320 (Sher), Chapter 295, statutes of 1997, which mandates that, before any CalEPA Board, Department, or Office adopts a final version of a rule-making, the scientific findings, conclusions, and assumptions on which the proposed rule is based must be submitted for independent external scientific peer review. This requirement is incorporated into the California Health and Safety Code, Section 57004.[32] Peer review, or student peer assessment, is the method by which editors and writers work together to help the author establish, flesh out, and develop their own writing.[33] Peer review is widely used in secondary and post-secondary education as part of the writing process.
This collaborative learning tool involves groups of students reviewing each other's work and providing feedback and suggestions for revision.[34]Rather than a means of critiquing each other's work, peer review is often framed as a way to build connection between students and help develop writers' identity.[35]While widely used inEnglishandcompositionclassrooms, peer review has gained popularity in other disciplines that require writing as part of the curriculum including thesocialandnatural sciences.[36][37] Peer review in classrooms helps students become more invested in their work, and the classroom environment at large.[38]Understanding how their work is read by a diverse readership before it is graded by the teacher may also help students clarify ideas and understand how to persuasively reach different audience members via their writing. It also gives students professional experience that they might draw on later when asked to review the work of a colleague prior to publication.[39][40]The process can also bolster the confidence of students on both sides of the process. It has been found that students are more positive than negative when reviewing their classmates' writing.[41]Peer review can help students not get discouraged but rather feel determined to improve their writing.[41] Critics of peer review in classrooms say that it can be ineffective due to students' lack of practice giving constructive criticism, or lack of expertise in the writing craft at large.[42]Peer review can be problematic for developmental writers, particularly if students view their writing as inferior to others in the class as they may be unwilling to offer suggestions or ask other writers for help.[43]Peer review can impact a student's opinion of themselves as well as others as sometimes students feel a personal connection to the work they have produced, which can also make them feel reluctant to receive or offer criticism.[35]Teachers using peer review as an assignment can lead to rushed-through feedback by peers, using incorrect praise or criticism, thus not allowing the writer or the editor to get much out of the activity.[13]As a response to these concerns, instructors may provide examples, model peer review with the class, or focus on specific areas of feedback during the peer review process.[44]Instructors may also experiment with in-class peer review vs. peer review as homework, or peer review using technologies afforded by learning management systems online. Students that are older can give better feedback to their peers, getting more out of peer review, but it is still a method used in classrooms to help students young and old learn how to revise.[3]With evolving and changing technology, peer review will develop as well. New tools could help alter the process of peer review.[45] Peer seminar is a method that involves a speaker that presents ideas to an audience that also acts as a "contest".[46]To further elaborate, there are multiple speakers that are called out one at a time and given an amount of time to present the topic that they have researched. Each speaker may or may not talk about the same topic but each speaker has something to gain or lose which can foster a competitive atmosphere.[46]This approach allows speakers to present in a more personal tone while trying to appeal to the audience while explaining their topic. 
Peer seminars are somewhat similar to conference talks; however, speakers have more time to present their points, and audience members can interrupt with questions and feedback on the topic or on how well the speaker presented it.[46] Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. Peer review in writing is a pivotal component among various peer review mechanisms, often spearheaded by educators and involving student participation, particularly in academic settings. It constitutes a fundamental process in academic and professional writing, serving as a systematic means to ensure the quality, effectiveness, and credibility of scholarly work. However, despite its widespread use, it is one of the most scattered, inconsistent, and ambiguous practices associated with writing instruction.[47] Many scholars question its effectiveness and specific methodologies. Critics of peer review in classrooms express concerns about its ineffectiveness due to students' lack of practice in giving constructive criticism or their limited expertise in the writing craft overall. Academic peer review has faced considerable criticism, with many studies highlighting inherent issues in the peer review process. A particular concern in peer review is "role duality": people simultaneously occupy the roles of evaluator and evaluated.[48] Research shows that taking on both roles in parallel biases people as evaluators, as they engage in strategic actions to increase the chance of being evaluated positively themselves.[48] The editorial peer review process has been found to be strongly biased against "negative studies", i.e. studies that find no effect. This then biases the information base of medicine. Journals become biased against negative studies when values come into play. "Who wants to read something that doesn't work?" asks Richard Smith in the Journal of the Royal Society of Medicine. "That's boring." A related bias toward authority is particularly evident in university classrooms, where the most common source of writing feedback during student years comes from teachers, whose comments are highly valued. Students may be influenced to produce work in line with the professor's viewpoints because of the teacher's position of authority, and the perceived effectiveness of such feedback largely stems from that authority. Benjamin Keating, in his article "A Good Development Thing: A Longitudinal Analysis of Peer Review and Authority in Undergraduate Writing," conducted a longitudinal study comparing two groups of students (one majoring in writing and one not) to explore students' perceptions of authority. This research, involving extensive analysis of student texts, concludes that students majoring in non-writing fields tend to undervalue mandatory peer review in class, while those majoring in writing value classmates' comments more. This suggests that peer review feedback has an expertise threshold: effective peer review requires a certain level of expertise, and for non-professional writers, peer review feedback may be overlooked, thereby reducing its effectiveness.[49] Elizabeth Ellis Miller, Cameron Mozafari, Justin Lohr and Jessica Enoch state, "While peer review is an integral part of writing classrooms, students often struggle to effectively engage in it."
The authors illustrate some reasons for the inefficiency of peer review based on research conducted during peer review sessions in university classrooms: This research demonstrates that besides issues related to expertise, numerous objective factors contribute to students' poor performance in peer review sessions, resulting in feedback from peer reviewers that may not effectively assist authors. Additionally, this study highlights the influence of emotions in peer review sessions, suggesting that both peer reviewers and authors cannot completely eliminate emotions when providing and receiving feedback. This can lead to peer reviewers and authors approaching the feedback with either positive or negative attitudes towards the text, resulting in selective or biased feedback and review, further impacting their ability to objectively evaluate the article. It implies that subjective emotions may also affect the effectiveness of peer review feedback.[50] Pamela Bedore and Brian O'Sullivan also hold a skeptical view of peer review in most writing contexts. The authors conclude, based on comparing different forms of peer review after systematic training at two universities, that "the crux is that peer review is not just about improving writing but about helping authors achieve their writing vision." Feedback from the majority of non-professional writers during peer review sessions often tends to be superficial, such as simple grammar corrections and questions. This precisely reflects the implication in the conclusion that the focus is only on improving writing skills. Meaningful peer review involves understanding the author's writing intent, posing valuable questions and perspectives, and guiding the author to achieve their writing goals.[51] Various alternatives to peer review have been suggested (such as, in the context of science funding,funding-by-lottery).[52] Magda Tigchelaar compares peer review with self-assessment through an experiment that divided students into three groups: self-assessment, peer review, and no review. Across four writing projects, she observed changes in each group, with surprising results showing significant improvement only in the self-assessment group. The author's analysis suggests that self-assessment allows individuals to clearly understand the revision goals at each stage, as the author is the most familiar with their writing. Thus, self-checking naturally follows a systematic and planned approach to revision. In contrast, the effectiveness of peer review is often limited due to the lack of structured feedback, characterized by scattered, meaningless summaries and evaluations that fail to meet the author's expectations for revising their work.[53] Stephanie Conner and Jennifer Gray highlight the value of most students' feedback during peer review. They argue that many peer review sessions fail to meet students' expectations, as students, even as reviewers themselves, feel uncertain about providing constructive feedback due to their lack of confidence in their writing. The authors offer numerous improvement strategies. For instance, the peer review process can be segmented into groups, where students present the papers to be reviewed while other group members take notes and analyze them. Then, the review scope can be expanded to the entire class. This widens the review sources and further enhances the level of professionalism.[54] With evolving technology, peer review is also expected to evolve. New tools have the potential to transform the peer review process. 
Mimi Li discusses the effectiveness of, and student feedback on, online peer review software used in a freshman writing class. Unlike traditional peer review methods commonly used in classrooms, the software offers comprehensive guidance and many tools for editing articles. For instance, it lists numerous questions peer reviewers can ask and allows various comments to be attached to selected text. Based on observations over a semester, students showed varying degrees of improvement in their writing skills and grades after using the software, and they praised the online peer review technology highly.[55]
https://en.wikipedia.org/wiki/Peer_review
Production for useis a phrase referring to the principle of economic organization and production taken as a defining criterion for asocialist economy. It is held in contrast toproduction for profit. This criterion is used to distinguish communism fromcapitalism, and is one of the fundamental defining characteristics of communism.[1] This principle is broad and can refer to an array of different configurations that vary based on the underlying theory of economics employed. In its classic definition, production for use implied an economic system whereby thelaw of valueandlaw of accumulationno longer directed economic activity, whereby a direct measure of utility and value is used in place of the abstractions of theprice system,money, andcapital.[2]Alternative conceptions of socialism that do not use the profit system such as theLange model, use instead a price system and monetary calculation.[3] The main socialist critique of the capitalist profit is that the accumulation of capital ("making money") becomes increasingly detached from the process of producingeconomic value, leading towaste,inefficiency, and social problems. Essentially, it[clarification needed]is a distortion of proper accounting, based on the assertion of thelaw of valueinstead of the "real" costs of production,objectively determinedoutside of social relations. Production for use refers to an arrangement whereby the production of goods and services is carried outex ante(directly) for theirutility(also called "use-value"). The implication is that the value of economic output would be based on use-value or a direct measure of utility as opposed toexchange-value; because economic activity would be undertaken to directly satisfy economic demands and human needs, the productive apparatus would directly serve individual and social needs. This is contrasted with production for exchange of the produced good or service in order to profit, where production is subjected to the perpetualaccumulation of capital, a condition where production is only undertaken if it generates profit, implying anex postor indirect means of satisfying economic demand. The profits system is oriented toward generating a profit to be reinvested into the economy (and the constant continuation of this process), the result being that society is structured around the need for a perpetual accumulation of capital.[4]In contrast, production for use means that the accumulation of capital is not a compulsory driving force in the economy, and by extension, the core process which society and culture revolves around. Production for profit, in contrast, is the dominant mode of production in the modernworld system, equivocates "profitability" and "productivity" and presumes that the former always equates to the latter.[5] Some thinkers, including the Austrian philosopher and political economistOtto Neurath, have used the phrase "socialization" to refer to the same concept of "production for use". In Neurath's phraseology, "total socialization" involvescalculation in kindin place of financial calculation and a system ofplanningin place of market-based allocation of economic goods.[6]Alternative conceptions exist in the form of market socialism. Proponents of socialism argue that production for profit (i.e.,capitalism) does not always satisfy the economic needs of people, especially the working-class, because capital only invests in production when it is profitable. 
This fails to satisfy the needs of people who lack basic necessities but have insufficient purchasing power to acquire them in a manner that would be profitable for businesses. This results in a number of inefficiencies: unsold items are rarely given away to people who need but can't afford them, unemployed workers are not put to work producing such goods and services, and resources are expended on occupations that serve no other purpose than to support the accumulation of profit instead of being used to provide useful goods and services.[13] For example, the United States housing bubble resulted in an overproduction of housing units that could not be sold at a profit, despite there being sufficient need for housing units. Production for use in some form was the historically dominant modality until the initial primitive accumulation of capital[citation needed]. Economic planning is not synonymous with production for use. Planning is essential in modern globalised production, both within enterprises and within states. Planning to maximize profitability (i.e., within industries and private corporations) or to improve the efficiency of capital accumulation in the capitalist macro-economy (i.e., monetary policy, fiscal policy and industrial policy) does not change the fundamental criterion and need to generate a financial profit to be reinvested into the economy. A more recent critique of production for profit is that it fails to address issues such as externalities, which the board and management of a for-profit enterprise often have a fiduciary responsibility to ignore when addressing them would harm or conflict with shareholders' profit motives[citation needed]. Some socialists argue that a number of irrational outcomes follow from capitalism and the need to accumulate capital when capitalist economies reach a point in development at which investment accumulates faster than profitable investment opportunities grow. Several lines of thought, such as Buddhist economics, appropriate technology, and the Jevons paradox, argue that the accumulation of capital driven by profit maximization detaches society from the process of producing social and economic value, leading to waste, inefficiency and underlying social problems.[14][15][16] Planned obsolescence is a strategy used by businesses to generate demand for the continual consumption required for capitalism to sustain itself. Its negative effect, which is chiefly environmental, stems from the ever-increasing extraction of natural materials needed to produce goods and services for this artificially sustained demand, combined with the careless disposal of end-of-life products.[17] The creation of industries, projects and services comes about for no other purpose than generating profit, economic growth or maintaining employment. The drive to create such industries arises from the need to absorb the savings in the economy, and thus to maintain the accumulation of capital. This can take the form of corporatization and commercialization of public services, i.e., transforming them into profit-generating industries to absorb investment, or the creation and expansion of sectors of the economy that do not produce any economic value by themselves because they deal only with exchange-related activities, sectors such as financial services.
This can contribute to the formation of economic bubbles, crises and recessions.[18] For socialists, the solution to these problems entails a reorientation of the economic system from production for profit and the need to accumulate capital to a system where production is adjusted to meet individual and social demands directly. As an objective criterion for socialism, production for use can be used to evaluate the socialistic content of the composition of former and existing economic systems. For example, an economic system that is dominated by nationalized firms organized around the production of profit, whether this profit is retained by the firm or paid to the government as a dividend payment, would be astate capitalisteconomy. In such a system, the organizational structure of the firm remains similar to a private-sector firm; non-financial costs are externalized because profitability is the criterion for production, so that the majority of the economy remains essentially capitalist despite the formal title of public ownership. This has led many socialists to categorize the currentChinese economic systemasparty-state capitalism.[19][20] Theeconomy of the Soviet Unionwas based upon capital accumulation for reinvestment and production for profit; the difference between it and Western capitalism was that the USSR achieved this throughnationalizedindustry and state-directed investment, with the eventual goal of building a socialist society based upon production for use andself-management.Vladimir Lenindescribed the USSR economy as "state-monopoly capitalism"[21]and did not consider it to be socialism. During the1965 Liberman Reforms, the USSR re-introduced profitability as a criterion for industrial enterprises. Other views argue the USSR evolved into a non-capitalist and non-socialist system characterized by control and subordination of society by party and government officials who coordinated the economy; this can be calledbureaucratic collectivism. Michel Bauwensidentifies the emergence of the open software movement andpeer-to-peer productionas an emergent alternativemode of productionto the capitalist economy that is based on collaborative self-management, common ownership of resources, and the (direct) production of use-values through the free cooperation of producers who have access to distributed capital.[22] Commons-based peer productiongenerally involves developers who produce goods and services with no aim to profit directly, but freely contribute to a project relying upon an open common pool of resources and software code. In both cases, production is carried out directly for use - software is produced solely for theiruse-value. Multiple forms of valuation have been proposed to govern production in a socialist economy, to serve as a unit of account and to quantify the usefulness of an object in socialism. 
These include valuations based on labor-time, the expenditure of energy in production, or disaggregated units of physical quantities.[23] The classic formulation of socialism involved replacing the criteria of value from money (exchange-value) to physical utility (use-value), to be quantified in terms of physical quantities (Calculation in kindand Input-Output analysis) or some natural unit of accounting, such asenergy accounting.[24] Input-output modelanalysis is based upon directly determining the physical quantities of goods and services to be produced and allocating economic inputs accordingly; thus production targets are pre-planned.[25]Soviet economic planning was overwhelmingly focused onmaterial balances- balancing the supply of economic inputs with planned output targets. Oskar Langeformulated a mechanism for the direct allocation of capital goods in a socialist economy that was based on themarginal costof production. Under a capitalist economy, managers of firms are ordered and legally required to base production around profitability, and in theory, competitive pressure creates a downward pressure on profits and forces private businesses to be responsive to demands of consumers, indirectly approximating production for use. In theLange Model, the firms would be publicly owned and the managers would be tasked with setting the price of output to its marginal cost, thereby achievingpareto efficiencythrough direct allocation. Cybernetics, the use of computers to coordinate production in an optimal fashion, has been suggested for socialist economies. Oskar Lange, rejecting his earlier proposals formarket socialism, argued that the computer is more efficient than the market process at solving the multitude of simultaneous equations required for allocating economic inputs efficiently (either in terms of physical quantities or monetary prices).[26] Salvador Allende's socialist-led government developedProject Cybersyn, a form ofdecentralized economic planningthough the experimental computer-ledviable system modelof computed organisational structure of autonomous operative units though analgedonic feedback settingand bottom-upparticipative decision-makingby theCyberfolkcomponent. The project was disbanded after the1973 Chilean coup d'état.[27] Based on the perspective that thelaw of valuewould continue to operate in a socialist economy, it is argued that a market purged of "parasitical and wasteful elements" in the form of private ownership of the means of production and the distortions that arise from the concentration of power and wealth in a class of capitalists would enable the market to operate efficiently without distortions. Simply replacing the antagonistic interests between capitalists and workers in enterprises would alter the orientation of the economy from private profit to meeting the demands of the community as firms would seek to maximize the benefits to the member-workers, who would as a whole comprise society. 
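To make the idea of planning in physical quantities concrete, the following is a minimal illustrative sketch of the standard Leontief input-output calculation referred to above, the kind of system of simultaneous equations that Lange expected computers to solve. The two-sector coefficient matrix and demand vector are invented for the example and are not taken from any of the cited sources.

# Minimal sketch of a Leontief input-output calculation (illustrative numbers only).
import numpy as np

# A[i][j] = units of good i needed as input per unit of good j produced (assumed values).
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
# Final (consumption) demand for each good, in physical units (assumed values).
d = np.array([100.0, 50.0])

# Gross output x must cover both intermediate use and final demand: x = A x + d.
x = np.linalg.solve(np.eye(2) - A, d)
print("gross output targets:", x)
print("intermediate inputs absorbed:", A @ x)
print("left for final use:", x - A @ x)  # equals d

In a material-balance framing, the same calculation verifies that the planned gross outputs cover both the inputs absorbed by production itself and the final demand that planners intend to satisfy.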
Cooperative economistJaroslav Vaneksuggests that worker self-management and collective ownership of enterprises operating in a free-market would allow for a genuinefree-marketeconomy free of the market-distorting, monopolistic tendencies and antagonistic interests that emerge from private ownership over production.[28] In theHoward Hawks-directed 1940 filmHis Girl Friday, written byCharles Ledererbased on the 1928 Broadway playThe Front PagebyBen HechtandCharles MacArthur, reporter Hildy Johnson (Rosalind Russell) interviews accused killer Earl Williams (John Qualen) in jail to write his story for her newspaper. Williams is despondent and confused, and easily accepts it when Johnson leads him into an account of the events preceding the killing, which revolves around the desperate out-of-work man's hearing the expression "production for use" and transferring the concept in his mind to the gun he had: it was made for use, and he used it. This is the story about Williams that Johnson writes up, to the admiration of the other reporters covering the case. This version of Earl Williams' motivations differs significantly from that presented in the original stage play and thefirst film adaptation of it from 1931. In those scripts, the killer was a committedanarchistwho had definite political reasons for the shooting, and did not need to be influenced by a stronger personality into a false narrative.[29]
https://en.wikipedia.org/wiki/Production_for_use
Aprosumeris an individual who bothconsumesandproduces. The term is aportmanteauof the wordsproducerandconsumer. Research has identified six types of prosumers: DIY prosumers, self-service prosumers, customizing prosumers, collaborative prosumers, monetised prosumers, and economic prosumers.[1] The termsprosumerandprosumptionwere coined in 1980 byAlvin Toffler, an Americanfuturist, and were widely used by many technology writers of the time. Technological breakthroughs and a rise in user participation blurs the line between production and consumption activities, with the consumer becoming a prosumer. Prosumers have been defined as "individuals who consume and produce value, either for self-consumption or consumption by others, and can receive implicit or explicit incentives from organizations involved in the exchange."[1] The term has since come to refer to a person usingcommons-based peer production. In the digital and online world,prosumeris used to describe 21st-century online buyers because not only are they consumers of products, but they are able to produce their own products such as, customised handbags, jewellery with initials, jumpers with team logos, etc. In the field of renewable energy, prosumers are households or organisations which at times produce surplus fuel or energy and feed it into a national (or local) distribution network; whilst at other times (when their fuel or energy requirements outstrip their own production of it) they consume that same fuel or energy from that grid. This is widely done by households by means of PV panels on their roofs generating electricity. Such households may additionally make use of battery storage to increase their share of self-consumed PV electricity, referred to as prosumage in the literature.[2][3]It is also done by businesses which producebiogasand feed it into a gas network while using gas from the same network at other times or in other places. The European Union's Nobel Grid project, which is part of their Horizon 2020 research and innovation programme, uses the term in this way, for example. Thesharing economyis another context where individuals can act as prosumers. For example, in the sharing economy, individuals can be providers (e.g.,Airbnbhosts,Uberdrivers) and consumers (e.g., Airbnb guests, and Uber passengers). Prosumers are one avenue to grow the sharing economy.[4] Scholars have connected prosumer culture to the concept ofMcDonaldization, as advanced by sociologistGeorge Ritzer. Referring to the business model ofMcDonald's, which has emphasized efficiency for management while getting customers to invest more effort and time themselves (such as by cleaning up after themselves in restaurants), McDonaldization gets prosumers to perform more work without paying them for their labor.[5] The blurring of the roles of consumers and producers has its origins in the cooperativeself-helpmovements that sprang up during various economic crises, e.g. theGreat Depressionof the 1930s.Marshall McLuhanand Barrington Nevitt suggested in their 1972 bookTake Today, (p. 4) that with electric technology, the consumer would become a producer. In the 1980 book,The Third Wave,futurologistAlvin Tofflercoined the term "prosumer" when he predicted that the role of producers andconsumerswould begin to blur and merge (even though he described it in his bookFuture Shockfrom 1970). Toffler envisioned a highly saturatedmarketplaceasmass productionofstandardizedproducts began to satisfy basic consumer demands. 
To continue growing profit, businesses would initiate a process of mass customization, that is, the mass production of highly customized products. However, to reach a high degree of customization, consumers would have to take part in the production process, especially in specifying design requirements. In a sense, this is merely an extension or broadening of the kind of relationship that many affluent clients have had with professionals like architects for many decades. However, in many cases architectural clients are not the only or even primary end-consumers.[6] Toffler has extended these and many other ideas well into the 21st century. Along with more recently published works such as Revolutionary Wealth (2006), one can recognize and assess both the concept and fact of the prosumer as it is seen and felt on a worldwide scale. That these concepts are having a global impact and reach can be measured in part by Toffler's popularity in China. Discussing some of these issues with Newt Gingrich on C-SPAN's After Words program in June 2006, Toffler mentioned that The Third Wave is the second-ranked bestseller of all time in China, just behind a work by Mao Zedong.[7] Don Tapscott reintroduced the concept in his 1995 book The Digital Economy, and in his 2006 book Wikinomics: How Mass Collaboration Changes Everything with Anthony D. Williams. George Ritzer and Nathan Jurgenson, in a widely cited article, claimed that prosumption had become a salient characteristic of Web 2.0. Prosumers create value for companies without receiving wages. Toffler's notion of prosumption was described and expanded in economic terms by Philip Kotler, who saw prosumers as a new challenge for marketers.[8] Kotler anticipated that people would want to play a larger role in designing certain goods and services they consume, and that modern computers would permit them to do so. He also described several forces that would lead to more prosumption-like activities and to more sustainable lifestyles, a topic further developed by Tomasz Szymusiak in 2013 and 2015 in two marketing books.[9][10] Technological breakthroughs have accelerated the development of prosumption. With the help of additive manufacturing techniques, for example, co-creation takes place at different stages of production: design, manufacturing and distribution. It also takes place between individual customers, leading to co-design communities. Similarly, mass customisation is often associated with the large-scale production of tailored goods or services. This participation has flourished with the growing popularity of Web 2.0 technologies, such as Instagram, Facebook, Twitter and Flickr. In July 2020, an academic description reported on the nature and rise of the "robot prosumer", derived from modern-day technology and related participatory culture, which, in turn, had been substantially anticipated by science fiction writers.[11][12][13] Prosumer capitalism has been criticized as promoting "new forms of exploitation through unpaid work gamified as fun".[14]: 57 Identifiable trends and movements outside of the mainstream economy which have adopted prosumer terminology and techniques include:
https://en.wikipedia.org/wiki/Prosumer
Open business[1] is an approach to enterprise that draws on ideas from openness movements like free software, open source, open content and open tools and standards. The approach places value on transparency, stakeholder inclusion, and accountability. Open business structures make contributors and non-contributors visible so that business benefits are distributed accordingly. They seek to increase personal engagement and positive outcomes by rewarding contributors in an open way. Central to the concept are: In its broadest sense, business refers to the state of being busy; the concept includes all the activities of earning money. Businesses that sell consumer products can operate in open ways, for example by disclosing prices for components and publishing operating information.[2] There is an interest in the benefit of most stakeholders, whether shareholders, workers, families, etc. The risk of bankruptcy of such open-movement businesses is reduced because the fruits of their work remain in the commons and therefore remain a permanent base for recovering the open business, even in the most critical situations. A service-oriented business can also operate in open ways. A business that documents all transactions (donations and the use of donated money) in real time on a public website is very open. Another example might be Canonical Ltd. Open businesses can be more attractive to donors, especially if the names of the donors on social networks (as real names, or Twitter, Facebook or other branded online IDs) are made public too. In this case even the donors participate in the charity as a business and beyond, increasing their positive community karma (earning "whuffies") and building their reputation. The risk of bankruptcy of such transaction-oriented businesses is reduced due to the fact that The degree of freedom to participate may vary:
https://en.wikipedia.org/wiki/Open_business
Open manufacturing, also known asopen production,maker manufacturingormaterial peer productionand with the slogan "Design Global, Manufacture Local" is a new model ofsocioeconomicproduction in which physical objects are produced in an open, collaborative and distributed manner[1][2]and based onopen designandopen-source principles. Open manufacturing combines the following elements of a production process: new open production tools and methods (such as3D printers), new value-based movements (such as themaker movement), new institutions and networks for manufacturing and production (such asFabLabs), and open source methods, software and protocols.[3][4] Open manufacturing may also include digital modeling and fabrication andcomputer numeric control(CNC) of the machines used for production throughopen source softwareandopen source hardware. The philosophy of open manufacturing is close to theopen-source movement, but aims at the development of physical products rather than software.[5]The term is linked to the notion of democratizing technology[6]as embodied in themaker culture, theDIY ethic, theopen source appropriate technologymovement, the Fablab-network and other rooms for grassroot innovation such ashackerspaces. The openness of "open manufacturing" may relate to the nature of the product (open design), to the nature of the production machines and methods (e.g. open source3D-printers, open sourceCNC), to the process of production and innovation (commons-based peer production/ collaborative /distributed manufacturing), or to new forms of value creation (network-based bottom-up or hybrid versus business-centric top down).[7]Jeremy Rifkinargues, that open production through 3D-printing "will eventually and inevitably reduce marginal costs to near zero, eliminate profit, and make property exchange in markets unnecessary for many (though not all) products".[8] The following points are seen as key implications of open manufacturing:[6] In the context ofsocioeconomic development, open manufacturing has been described as a path towards a more sustainable industrialization on a global scale, that promotes "social sustainability" and provides the opportunity to shift to a "collaboration-oriented industrialization driven by stakeholders from countries with different development status connected in a global value creation at eye level".[9] For developing countries, open production could notably lead to products more adapted to local problems and local markets and reduce dependencies on foreign goods, as vital products could be manufactured locally.[10]In such a context, open manufacturing is strongly linked to the broader concept ofOpen Source Appropriate Technologymovement. 
According to scholar Michel Bauwens, open manufacturing is "the expansion of peer production to the world of physical production".[1] Redlich and Bruns define "Open Production" as "a new form of coordination for production systems that implies a superior broker system coordinating the information and material flows between the stakeholders of production", which will encompass the entire value creation process for physical goods: development, manufacturing, sales, support, etc.[11] Vasilis Kostakis et al. argue that open manufacturing can organize production by prioritising socio-ecological well-being over corporate profits, over-production and excess consumption.[12] A policy paper commissioned by the European Commission uses the term "maker manufacturing" and positions it between social innovation, open source ICT and manufacturing.[3] A number of factors are seen to hamper the broad-based application of the "open manufacturing" model and/or the realization of its positive implications for a more sustainable global production pattern. The first factor is the sustainability of commons-based peer production models: "Empowerment happens only, if the participants are willing to share their knowledge with their colleagues. The participation of the actors cannot be guaranteed, thus there are many cases known, where participation could only be insufficiently realized".[9] Other problems include missing or inadequate systems of quality control, the persistent paradigm of high-volume manufacturing and its cost-efficiency, the lack of widely adopted platforms for sharing hardware designs, as well as challenges linked to the joint-ownership paradigm behind the open licences of open manufacturing and the fact that hardware is much more difficult to share and to standardize than software.[6] In developing countries, a number of additional factors need to be considered. Scholar Waldman-Brown names the following: the lack of manufacturing expertise and the informality of current SMMs[clarification needed] in emerging markets, which hamper quality control for final products and raw materials, as well as universities and vocational training programs that are not able to react rapidly enough to provide the necessary knowledge and qualifications.[6]
https://en.wikipedia.org/wiki/Open_manufacturing
Theopen music modelis an economic and technological framework for therecording industrybased on research conducted at theMassachusetts Institute of Technology. It predicts that the playback of prerecorded music will be regarded as aservicerather than asindividually sold products, and that the only system for thedigital distributionof music that will be viable against piracy is asubscription-based system supportingfile sharingand free ofdigital rights management. The research also indicated thatUS$9 per month for unlimited use would be themarket clearingprice at that time, but recommended $5 per month as the long-term optimal price.[1] Since its creation in 2002, a number of its principles have been adopted throughout the recording industry,[2]and it has been cited as the basis for the business model of manymusic subscription services.[3][4] The model asserts that there are fivenecessaryrequirements for a viable commercial music digital distribution network: The model was proposed byShuman Ghosemajumderin his 2002 research paperAdvanced Peer-Based Technology Business Models[1]at theMIT Sloan School of Management. It was the first of several studies that found significant demand for online, open music sharing systems.[5]The following year, it was publicly referred to as the Open Music Model.[6] The model suggests changing the way consumers interact with the digital property market: rather than being seen as a good to be purchased from online vendors, music would be treated as a service being provided by the industry, with firms based on the model serving as intermediaries between the music industry and its consumers. The model proposed giving consumers unlimited access to music for the price of$5 per month[1]($9 in 2024), based on research showing that this could be a long-term optimal price, expected to bring in a total revenue of overUS$3 billion per year.[1] The research demonstrated the demand for third-party file sharing programs. Insofar as the interest for a particular piece of digital property is high, and the risk of acquiring the good via illegitimate means is low, people will naturally flock towards third-party services such asNapsterandMorpheus(more recently,BittorrentandThe Pirate Bay).[1] The research showed that consumers would use file sharing services not primarily due to cost but because of convenience, indicating that services which provided access to the most music would be the most successful.[1] The model predicted the failure ofonline music distributionsystems based ondigital rights management.[6][7] Criticisms of the model included that it would not eliminate the issue of piracy.[8]Others countered that it was in fact the most viable solution to piracy,[9]since piracy was "inevitable".[10]Supporters argued that it offered a superior alternative to the currentlaw-enforcement based methodsused by the recording industry.[11]One startup in Germany, Playment, announced plans to adapt the entire model to a commercial setting as the basis for its business model.[12] Several aspects of the model have been adopted by the recording industry and its partners over time: Why would the big four music companies agree to let Apple and others distribute their music without using DRM systems to protect it? The simplest answer is because DRMs haven't worked, and may never work, to halt music piracy.
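The headline revenue figure can be reproduced with simple arithmetic. The sketch below is illustrative only: the subscriber count is an assumption chosen to show how a $5-per-month subscription reaches the "over US$3 billion per year" scale cited above; it is not a figure taken from the original research.

```python
# Illustrative arithmetic behind the open music model's revenue claim.
# The subscriber count is a hypothetical assumption for illustration,
# not a number from the MIT research cited in this article.

monthly_fee_usd = 5                # proposed long-term optimal price
assumed_subscribers = 50_000_000   # hypothetical subscriber base

annual_revenue = monthly_fee_usd * 12 * assumed_subscribers
print(f"Annual revenue: ${annual_revenue / 1e9:.1f} billion")  # prints $3.0 billion
```

Under these assumed numbers, 50 million subscribers at $5 per month yield roughly $3 billion per year, which is the order of magnitude the model projected.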
https://en.wikipedia.org/wiki/Open_music_model
Peer-to-peer(P2P) computing or networking is adistributed applicationarchitecture that partitions tasks or workloads between peers. Peers are equally privileged,equipotentparticipants in the network, forming a peer-to-peer network ofnodes.[1]In addition, apersonal area network(PAN) is also in nature a type ofdecentralizedpeer-to-peer network typically between two devices.[2] Peers make a portion of their resources, such as processing power, disk storage, ornetwork bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts.[3]Peers are both suppliers and consumers of resources, in contrast to the traditionalclient–server modelin which the consumption and supply of resources are divided.[4] While P2P systems had previously been used in manyapplication domains,[5]the architecture was popularized by theInternetfile sharing systemNapster, originally released in 1999.[6]P2P is used in many protocols such asBitTorrentfile sharing over the Internet[7]and inpersonal networkslikeMiracastdisplaying andBluetoothradio.[8]The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts,peer-to-peer as a memerefers to theegalitariansocial networkingthat has emerged throughout society, enabled byInternettechnologies in general. While P2P systems had previously been used in many application domains,[5]the concept was popularized byfile sharingsystems such as the music-sharing applicationNapster. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems".[9]The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the firstRequest for Comments, RFC 1.[10] Tim Berners-Lee's vision for theWorld Wide Webwas close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was more open than the present day, where two machines connected to the Internet could send packets to each other without firewalls and other security measures.[11][9][page needed]This contrasts with thebroadcasting-like structure of the web as it has developed over the years.[12][13][14]As a precursor to the Internet,ARPANETwas a successful peer-to-peer network where "every participating node could request and serve content". However, ARPANET was not self-organized, and it could not "provide any means for context or content-based routing beyond 'simple' address-based routing."[14] Therefore,Usenet, a distributed messaging system that is often described as an early peer-to-peer architecture, was established. It was developed in 1979 as a system that enforces adecentralized modelof control.[15]The basic model is aclient–servermodel from the user or client perspective that offers a self-organizing approach to newsgroup servers. However,news serverscommunicate with one another as peers to propagate Usenet news articles over the entire group of network servers. 
The same consideration applies toSMTPemail in the sense that the core email-relaying network ofmail transfer agentshas a peer-to-peer character, while the periphery ofEmail clientsand their direct connections is strictly a client-server relationship.[16] In May 1999, with millions more people on the Internet,Shawn Fanningintroduced the music and file-sharing application calledNapster.[14]Napster was the beginning of peer-to-peer networks, as we know them today, where "participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions".[14] A peer-to-peer network is designed around the notion of equalpeernodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network.[17]This model of network arrangement differs from theclient–servermodel where communication is usually to and from a central server. A typical example of a file transfer that uses the client-server model is theFile Transfer Protocol(FTP) service in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests. Peer-to-peer networks generally implement some form of virtualoverlay networkon top of the physical network topology, where the nodes in the overlay form asubsetof the nodes in the physical network.[18]Data is still exchanged directly over the underlyingTCP/IPnetwork, but at theapplication layerpeers can communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent from the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks asunstructuredorstructured(or as a hybrid between the two).[19][20][21] Unstructured peer-to-peer networksdo not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other.[22](Gnutella,Gossip, andKazaaare examples of unstructured P2P protocols).[23] Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay.[24]Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of "churn"—that is, when large numbers of peers are frequently joining and leaving the network.[25][26] However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses moreCPU/memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers and any peer searching for it is likely to find the same thing. 
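The flooding search just described can be illustrated with a minimal sketch. The peers, the topology, the shared files and the hop limit (TTL) below are all invented for illustration and do not correspond to any particular protocol such as Gnutella; real systems add message identifiers, duplicate suppression and many other details.

```python
# Minimal sketch of TTL-limited query flooding in an unstructured P2P overlay.
# All peers, files and the hop limit are illustrative assumptions.

from collections import deque

# adjacency list: which peers each peer is directly connected to
overlay = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D", "E"],
    "D": ["B", "C"],
    "E": ["C"],
}

# which peers hold which files
shared_files = {"D": {"song.mp3"}, "E": {"rare.iso"}}

def flood_search(start, filename, ttl=3):
    """Return the peers holding `filename` that are reachable within `ttl` hops."""
    hits, seen = set(), {start}
    queue = deque([(start, ttl)])
    while queue:
        peer, hops_left = queue.popleft()
        if filename in shared_files.get(peer, set()):
            hits.add(peer)
        if hops_left == 0:
            continue  # the query is not forwarded any further
        for neighbour in overlay[peer]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, hops_left - 1))
    return hits

print(flood_search("A", "song.mp3"))          # a well-replicated file is found
print(flood_search("A", "rare.iso", ttl=1))   # a rare file beyond the hop limit is missed
```

In the second call the rare file lies beyond the hop limit and is never found, which leads directly to the limitation discussed next.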
But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful.[27] Instructured peer-to-peer networksthe overlay is organized into a specific topology, and the protocol ensures that any node can efficiently[28]search the network for a file/resource, even if the resource is extremely rare.[23] The most common type of structured P2P networks implement adistributed hash table(DHT),[4][29]in which a variant ofconsistent hashingis used to assign ownership of each file to a particular peer.[30][31]This enables peers to search for resources on the network using ahash table: that is, (key,value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key.[32][33] However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors[34]that satisfy specific criteria. This makes them less robust in networks with a high rate ofchurn(i.e. with large numbers of nodes frequently joining and leaving the network).[26][35]More recent evaluation of P2P resource discovery solutions under real workloads have pointed out several issues in DHT-based solutions such as high cost of advertising/discovering resources and static and dynamic load imbalance.[36] Notable distributed networks that use DHTs includeTixati, an alternative toBitTorrent'sdistributed tracker, theKad network, theStorm botnet, and theYaCy. Some prominent research projects include theChord project,Kademlia,PAST storage utility,P-Grid, a self-organized and emerging overlay network, andCoopNet content distribution system.[37]DHT-based networks have also been widely utilized for accomplishing efficient resource discovery[38][39]forgrid computingsystems, as it aids in resource management and scheduling of applications. Hybrid models are a combination of peer-to-peer andclient–servermodels.[40]A common hybrid model is to have a central server that helps peers find each other.Spotifywas an example of a hybrid model [until 2014].[41]There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by the pure peer-to-peer unstructured networks. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks because certain functions, such as searching, do require a centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks.[42] CoopNet (Cooperative Networking)was a proposed system for off-loading serving to peers who have recentlydownloadedcontent, proposed by computer scientists Venkata N. Padmanabhan and Kunwadee Sripanidkulchai, working atMicrosoft ResearchandCarnegie Mellon University.[43][44]When aserverexperiences an increase in load it redirects incoming peers to other peers who have agreed tomirrorthe content, thus off-loading balance from the server. All of the information is retained at the server. This system makes use of the fact that the bottleneck is most likely in the outgoing bandwidth than theCPU, hence its server-centric design. It assigns peers to other peers who are 'close inIP' to its neighbors [same prefix range] in an attempt to use locality. 
If multiple peers are found with the samefileit designates that the node choose the fastest of its neighbors.Streaming mediais transmitted by having clientscachethe previous stream, and then transmit it piece-wise to new nodes. Peer-to-peer systems pose unique challenges from acomputer securityperspective. Like any other form ofsoftware, P2P applications can containvulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable toremote exploits.[45] Since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", ordenial of serviceattacks. Examples of common routing attacks include "incorrect lookup routing" whereby malicious nodes deliberately forward requests incorrectly or return false results, "incorrect routing updates" where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information, and "incorrect routing network partition" where when new nodes are joining they bootstrap via a malicious node, which places the new node in a partition of the network that is populated by other malicious nodes.[45] The prevalence ofmalwarevaries between different peer-to-peer protocols.[46]Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on thegnutellanetwork contained some form of malware, whereas only 3% of the content onOpenFTcontained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% in gnutella, and 65% in OpenFT). Another study analyzing traffic on theKazaanetwork found that 15% of the 500,000 file sample taken were infected by one or more of the 365 differentcomputer virusesthat were tested for.[47] Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on theFastTracknetwork, theRIAAmanaged to introduce faked chunks into downloads and downloaded files (mostlyMP3files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing.[48]Consequently, the P2P networks of today have seen an enormous increase of their security and file verification mechanisms. Modernhashing,chunk verificationand different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.[49] The decentralized nature of P2P networks increases robustness because it removes thesingle point of failurethat can be inherent in a client–server based system.[50]As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, but not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down. There are both advantages and disadvantages in P2P networks related to the topic of databackup, recovery, and availability. 
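Before turning to availability, a brief illustration of the chunk verification mentioned above. The following is a minimal sketch, assuming a simple scheme in which a downloader checks every received chunk against a list of reference digests obtained from the publisher; the chunk size, hash function and data are arbitrary assumptions, and real networks differ in how the reference digests are distributed.

```python
# Minimal sketch of per-chunk integrity checking, loosely in the spirit of the
# chunk verification used by modern file-sharing networks. Chunk size, hash
# choice and data are illustrative assumptions.

import hashlib

CHUNK_SIZE = 4  # tiny on purpose; real systems use far larger pieces

def chunk_digests(data: bytes) -> list[str]:
    """Split `data` into fixed-size chunks and return their SHA-256 digests."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

# The publisher distributes the reference digests alongside the file.
original = b"peer-to-peer!"
reference = chunk_digests(original)

# A downloader receives chunks from untrusted peers and verifies each one.
received = b"peer-to-FAKE!"
for index, digest in enumerate(chunk_digests(received)):
    ok = digest == reference[index]
    print(f"chunk {index}: {'ok' if ok else 'corrupt, re-request from another peer'}")
```

A corrupted or faked chunk fails the digest comparison and is simply requested again from a different peer, which is why poisoning attacks of the kind described above have become far less effective.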
In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces. For example,YouTubehas been pressured by theRIAA,MPAA, and entertainment industry to filter out copyrighted content. Although server-client networks are able to monitor and manage content availability, they can have more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable in sharing unpopular files because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point.[51] In a P2P network, the community of users is entirely responsible for deciding which content is available. Unpopular files eventually disappear and become unavailable as fewer people share them. Popular files, however, are highly and easily distributed. Popular files on a P2P network are more stable and available than files on central networks. In a centralized network, a simple loss of connection between the server and clients can cause a failure, but in P2P networks, the connections between every node must be lost to cause a data-sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its backup system. Because of the lack of central authority in P2P networks, forces such as the recording industry,RIAA,MPAA, and the government are unable to delete or stop the sharing of content on P2P systems.[52] In P2P networks, clients both provide and use resources. This means that unlike client–server systems, the content-serving capacity of peer-to-peer networks can actuallyincreaseas more users begin to access the content (especially with protocols such asBitTorrentthat require users to share, refer a performance measurement study[53]). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor.[54][55] Peer-to-peer file sharingnetworks such asGnutella,G2, and theeDonkey networkhave been useful in popularizing peer-to-peer technologies. These advancements have paved the way forPeer-to-peer content delivery networksand services, including distributed caching systems like Correli Caches to enhance performance.[56]Furthermore, peer-to-peer networks have made possible the software publication and distribution, enabling efficient sharing ofLinux distributionand various games throughfile sharingnetworks. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts withcopyrightlaw.[57]Two major cases areGrokstervs RIAAandMGM Studios, Inc. v. 
Grokster, Ltd..[58]In the last case, the Court unanimously held that defendant peer-to-peer file sharing companies Grokster and Streamcast could be sued for inducing copyright infringement. TheP2PTVandPDTPprotocols are used in various peer-to-peer applications. Someproprietarymultimedia applications leverage a peer-to-peer network in conjunction with streaming servers to stream audio and video to their clients.Peercastingis employed for multicasting streams. Additionally, a project calledLionShare, undertaken byPennsylvania State University, MIT, andSimon Fraser University, aims to facilitate file sharing among educational institutions globally. Another notable program,Osiris, enables users to create anonymous and autonomous web portals that are distributed via a peer-to-peer network. Datis a distributed version-controlled publishing platform.I2P, is anoverlay networkused to browse the Internetanonymously. Unlike the related I2P, theTor networkis not itself peer-to-peer[dubious–discuss]; however, it can enable peer-to-peer applications to be built on top of it viaonion services. TheInterPlanetary File System(IPFS) is aprotocoland network designed to create acontent-addressable, peer-to-peer method of storing and sharinghypermediadistribution protocol, with nodes in the IPFS network forming adistributed file system.Jamiis a peer-to-peer chat andSIPapp.JXTAis a peer-to-peer protocol designed for theJava platform.Netsukukuis aWireless community networkdesigned to be independent from the Internet.Open Gardenis a connection-sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth. Resilio Syncis a directory-syncing app. Research includes projects such as theChord project, thePAST storage utility, theP-Grid, and theCoopNet content distribution system.Secure Scuttlebuttis a peer-to-peergossip protocolcapable of supporting many different types of applications, primarilysocial networking.Syncthingis also a directory-syncing app.Tradepall andM-commerceapplications are designed to power real-time marketplaces. TheU.S. Department of Defenseis conducting research on P2P networks as part of its modern network warfare strategy.[59]In May 2003,Anthony Tether, then director ofDARPA, testified that the United States military uses P2P networks.WebTorrentis a P2Pstreamingtorrent clientinJavaScriptfor use inweb browsers, as well as in theWebTorrent Desktopstandalone version that bridges WebTorrent andBitTorrentserverless networks.Microsoft, inWindows 10, uses a proprietary peer-to-peer technology called "Delivery Optimization" to deploy operating system updates using end-users' PCs either on the local network or other PCs. According to Microsoft's Channel 9, this led to a 30%-50% reduction in Internet bandwidth usage.[60]Artisoft'sLANtasticwas built as a peer-to-peer operating system where machines can function as both servers and workstations simultaneously.Hotline CommunicationsHotline Client was built with decentralized servers and tracker software dedicated to any type of files and continues to operate today.Cryptocurrenciesare peer-to-peer-baseddigital currenciesthat useblockchains Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources. 
But in current practice, P2P networks often contain large numbers of users who utilize resources shared by other nodes, but who do not share anything themselves (often referred to as the "freeloader problem"). Freeloading can have a profound impact on the network and in some cases can cause the community to collapse.[61]In these types of networks "users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance".[62]Studying the social attributes of P2P networks is challenging due to large populations of turnover, asymmetry of interest and zero-cost identity.[62]A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources.[63][45] Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today's P2P systems should be seen both as a goal and a means for self-organized virtual communities to be built and fostered.[64]Ongoing research efforts for designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction. Some peer-to-peer networks (e.g.Freenet) place a heavy emphasis onprivacyandanonymity—that is, ensuring that the contents of communications are hidden from eavesdroppers, and that the identities/locations of the participants are concealed.Public key cryptographycan be used to provideencryption,data validation, authorization, and authentication for data/messages.Onion routingand othermix networkprotocols (e.g. Tarzan) can be used to provide anonymity.[65] Perpetrators oflive streaming sexual abuseand othercybercrimeshave used peer-to-peer platforms to carry out activities with anonymity.[66] Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer over the involvement with sharing copyrighted material. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, primarily over issues surroundingcopyrightlaw.[57]Two major cases areGrokstervs RIAAandMGM Studios, Inc. v. Grokster, Ltd.[58]In both of the cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material. To establish criminal liability for the copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willingly for the purpose of personal financial gain or commercial advantage.[67]Fair useexceptions allow limited use of copyrighted material to be downloaded without acquiring permission from the rights holders. These documents are usually news reporting or under the lines of research and scholarly work. Controversies have developed over the concern of illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or what users are connected to the network at a given time. 
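Returning briefly to the incentive mechanisms discussed earlier in this section, the sketch below shows a toy tit-for-tat style rule in which a peer grants upload slots to the neighbours that have recently uploaded the most to it, roughly in the spirit of BitTorrent's choking algorithm. The rates, peer names and the three-slot rule are simplifying assumptions, not a description of any real client.

```python
# Toy tit-for-tat style unchoking rule, loosely inspired by BitTorrent's
# choking algorithm: upload slots go to the neighbours that have recently
# given us the most data. All numbers and names are illustrative assumptions.

def choose_unchoked(download_rates: dict[str, float], slots: int = 3) -> set[str]:
    """Return the peers that receive upload slots: the `slots` best uploaders to us."""
    ranked = sorted(download_rates, key=download_rates.get, reverse=True)
    return set(ranked[:slots])

# Bytes/s recently received from each neighbour; the free rider contributes nothing.
recent_rates = {"alice": 120_000, "bob": 80_000, "carol": 45_000, "freerider": 0}

print(choose_unchoked(recent_rates))
# The three contributing peers get slots; the free rider is choked and receives
# nothing, which is the disincentive against consuming without contributing.
```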
Trustworthiness of sources is a potential security threat that can be seen with peer-to-peer systems.[68] A study ordered by theEuropean Unionfound that illegal downloadingmaylead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites. Pains were taken to remove effects of false and misremembered responses.[69][70][71] Peer-to-peer applications present one of the core issues in thenetwork neutralitycontroversy. Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high-bandwidthusage.[72]Compared to Web browsing, e-mail or many other uses of the internet, where data is only transferred in short intervals and relative small quantities, P2P file-sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007,Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such asBitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic. Critics point out that P2P networking has legitimate legal uses, and that this is another way that large providers are trying to control use and content on the Internet, and direct people towards aclient–server-based application architecture. The client–server model provides financial barriers-to-entry to small publishers and individuals, and can be less efficient for sharing large files. As a reaction to thisbandwidth throttling, several P2P applications started implementing protocol obfuscation, such as theBitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involves removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random.[73]The ISP's solution to the high bandwidth isP2P caching, where an ISP stores the part of files most accessed by P2P clients in order to save access to the Internet. Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work."[74]If the research cannot be reproduced, then the opportunity for further research is hindered. "Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments."[74] Popular simulators that were widely used in the past are NS2, OMNeT++, SimPy, NetLogo, PlanetLab, ProtoPeer, QTM, PeerSim, ONE, P2PStrmSim, PlanetSim, GNUSim, and Bharambe.[75] Besides all the above stated facts, there has also been work done on ns-2 open source network simulators. 
One research issue related to free rider detection and punishment has been explored using the ns-2 simulator.[76]
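As a concrete illustration of the distributed hash tables described in the section on structured networks above, here is a minimal sketch of consistent hashing on a ring: every peer and every key is hashed onto the same identifier space, and a key is owned by the first peer clockwise from it. The peer names and the 16-bit identifier space are invented for illustration; real DHTs such as Chord or Kademlia add routing tables, replication and churn handling.

```python
# Minimal consistent-hashing sketch showing how a DHT assigns (key, value)
# ownership to peers. Identifier size and peer names are illustrative
# assumptions; real DHTs are far more elaborate.

import hashlib
from bisect import bisect_right

ID_BITS = 16  # tiny identifier space, for readability

def ring_position(name: str) -> int:
    """Hash a peer name or key onto the ring [0, 2**ID_BITS)."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << ID_BITS)

peers = ["peer-a", "peer-b", "peer-c", "peer-d"]
ring = sorted((ring_position(p), p) for p in peers)

def owner(key: str) -> str:
    """The first peer clockwise from the key's position owns the key."""
    positions = [pos for pos, _ in ring]
    index = bisect_right(positions, ring_position(key)) % len(ring)
    return ring[index][1]

for key in ["song.mp3", "rare.iso", "readme.txt"]:
    print(f"{key!r} is stored at {owner(key)}")

# Adding or removing one peer only moves the keys adjacent to it on the ring,
# which is why consistent hashing suits overlays with joining and leaving nodes.
```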
https://en.wikipedia.org/wiki/Social_peer-to-peer_processes
Broadcasting is the distribution of audio or audiovisual content to dispersed audiences via an electronic mass communications medium, typically one using the electromagnetic spectrum (radio waves), in a one-to-many model.[1] Broadcasting began with AM radio, which came into popular use around 1920 with the spread of vacuum tube radio transmitters and receivers. Before this, most implementations of electronic communication (early radio, telephone, and telegraph) were one-to-one, with the message intended for a single recipient. The term broadcasting evolved from its use as the agricultural method of sowing seeds in a field by casting them broadly about.[2] It was later adopted for describing the widespread distribution of information by printed materials[3] or by telegraph.[4] Examples applying it to "one-to-many" radio transmissions of an individual station to multiple listeners appeared as early as 1898.[5] Over-the-air broadcasting is usually associated with radio and television, though more recently, both radio and television transmissions have begun to be distributed by cable (cable television). The receiving parties may include the general public or a relatively small subset; the point is that anyone with the appropriate receiving technology and equipment (e.g., a radio or television set) can receive the signal. The field of broadcasting includes both government-managed services such as public radio, community radio and public television, and private commercial radio and commercial television. The U.S. Code of Federal Regulations, title 47, part 97, defines broadcasting as "transmissions intended for reception by the general public, either direct or relayed".[6] Private or two-way telecommunications transmissions do not qualify under this definition. For example, amateur ("ham") and citizens band (CB) radio operators are not allowed to broadcast. As defined, transmitting and broadcasting are not the same. Transmission of radio and television programs from a radio or television station to home receivers by radio waves is referred to as over the air (OTA) or terrestrial broadcasting and in most countries requires a broadcasting license. Transmissions using a wire or cable, like cable television (which also retransmits OTA stations with their consent), are also considered broadcasts but do not necessarily require a license (though in some countries, a license is required). In the 2000s, transmissions of television and radio programs via streaming digital technology have increasingly been referred to as broadcasting as well.[7] In 1894, the Italian inventor Guglielmo Marconi began developing wireless communication using the then newly discovered phenomenon of radio waves, showing by 1901 that they could be transmitted across the Atlantic Ocean.[8] This was the start of wireless telegraphy by radio. Audio radio broadcasting began experimentally in the first decade of the 20th century. On 17 December 1902, a transmission from the Marconi station in Glace Bay, Nova Scotia, Canada, became the world's first radio message to cross the Atlantic from North America. In 1904, a commercial service was established to transmit nightly news summaries to subscribing ships, which incorporated them into their onboard newspapers.[9] World War I accelerated the development of radio for military communications.
After the war, commercial radioAM broadcastingbegan in the 1920s and became an important mass medium for entertainment and news.World War IIagain accelerated the development of radio for the wartime purposes of aircraft and land communication, radio navigation, and radar.[10]Development of stereoFM broadcastingof radio began in the 1930s in the United States and the 1970s in the United Kingdom, displacing AM as the dominant commercial standard.[11] On 25 March 1925,John Logie Bairddemonstrated the transmission of moving pictures at the London department storeSelfridges. Baird's device relied upon theNipkow diskand thus became known as themechanical television. It formed the basis of experimental broadcasts done by theBritish Broadcasting Corporationbeginning on 30 September 1929.[12]However, for most of the 20th century, televisions depended on thecathode-ray tubeinvented byKarl Braun. The first version of such a television to show promise was produced byPhilo Farnsworthand demonstrated to his family on 7 September 1927.[13]AfterWorld War II, interrupted experiments resumed and television became an important home entertainment broadcast medium, usingVHFandUHFspectrum.Satellite broadcastingwas initiated in the 1960s and moved into general industry usage in the 1970s, with DBS (Direct Broadcast Satellites) emerging in the 1980s. Originally, all broadcasting was composed ofanalog signalsusinganalog transmissiontechniques but in the 2000s, broadcastersswitchedtodigital signalsusingdigital transmission. An analog signal is anycontinuous signalrepresenting some other quantity, i.e.,analogousto another quantity. For example, in an analogaudio signal, the instantaneous signalvoltagevaries continuously with thepressure of the sound waves.[citation needed]In contrast, adigital signalrepresents the original time-varying quantity as asampledsequence ofquantizedvalues which imposes somebandwidthanddynamic rangeconstraints on the representation. In general usage, broadcasting most frequently refers to the transmission of information and entertainment programming from various sources to the general public:[citation needed] The world's technological capacity to receive information through one-way broadcast networks more than quadrupled during the two decades from 1986 to 2007, from 432exabytesof (optimally compressed) information, to 1.9zettabytes.[14]This is the information equivalent of 55 newspapers per person per day in 1986, and 175 newspapers per person per day by 2007.[15] In a broadcast system, the central high-poweredbroadcast towertransmits a high-frequencyelectromagnetic waveto numerous receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The receiver is thentunedso as to pick up the high-frequency wave and ademodulatoris used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analog (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values).[16][17] Historically, there have been several methods used for broadcastingelectronic mediaaudio and video to the general public: There are several means of providing financial support for continuous broadcasting: Broadcasters may rely on a combination of thesebusiness models. 
For example, in the United States,National Public Radio(NPR) and thePublic Broadcasting Service(PBS, television) supplement public membership subscriptions and grants with funding from theCorporation for Public Broadcasting(CPB), which is allocated bi-annually by Congress. US public broadcasting corporate and charitable grants are generally given in consideration ofunderwriting spotswhich differ from commercial advertisements in that they are governed by specificFCCrestrictions, which prohibit the advocacy of a product or a "call to action". The first regular television broadcasts started in 1937. Broadcasts can be classified asrecordedorlive. The former allows correcting errors, and removing superfluous or undesired material, rearranging it, applyingslow-motionand repetitions, and other techniques to enhance the program. However, some live events likesports televisioncan include some of the aspects including slow-motion clips of important goals/hits, etc., in between thelive televisiontelecast. American radio-network broadcasters habitually forbade prerecorded broadcasts in the 1930s and 1940s, requiring radio programs played for the Eastern and Centraltime zonesto be repeated three hours later for the Pacific time zone (See:Effects of time on North American broadcasting). This restriction was dropped for special occasions, as in the case of the GermandirigibleairshipHindenburgdisaster atLakehurst, New Jersey, in 1937. DuringWorld War II, prerecorded broadcasts from war correspondents were allowed on U.S. radio. In addition, American radio programs were recorded for playback byArmed Forces Radioradio stationsaround the world. A disadvantage of recording first is that the public may learn the outcome of an event before the recording is broadcast, which may be aspoiler. Prerecording may be used to preventannouncersfrom deviating from an officially approvedscriptduring alive radiobroadcast, as occurred withpropagandabroadcasts from Germany in the 1940s and withRadio Moscowin the 1980s. Many events are advertised as being live, although they are often recorded live (sometimes called "live-to-tape"). This is particularly true of performances of musical artists on radio when they visit for an in-studioconcertperformance. Similar situations have occurred intelevision production("The Cosby Showis recorded in front of alive televisionstudioaudience") andnews broadcasting. A broadcast may be distributed through several physical means. If coming directly from theradio studioat a single station ortelevision station, it is sent through thestudio/transmitter linkto thetransmitterand hence from thetelevision antennalocated on theradio masts and towersout to the world. Programming may also come through acommunications satellite, played either live or recorded for later transmission. Networks of stations maysimulcastthe same programming at the same time, originally viamicrowavelink, now usually by satellite. Distribution to stations or networks may also be through physical media, such asmagnetic tape,compact disc(CD),DVD, and sometimes other formats. Usually these are included in another broadcast, such as whenelectronic news gathering(ENG) returns a story to the station for inclusion on anews programme. The final leg of broadcast distribution is how the signal gets to the listener or viewer. It may come over the air as with aradio stationortelevision stationto anantennaandradio receiver, or may come throughcable television[18]orcable radio(orwireless cable) via the station or directly from a network. 
TheInternetmay also bring eitherinternet radioorstreaming mediatelevision to the recipient, especially withmulticastingallowing the signal andbandwidthto be shared. The termbroadcast networkis often used to distinguish networks that broadcast over-the-air television signals that can be received using atunerinside atelevision setwith atelevision antennafrom so-called networks that are broadcast only viacable television(cablecast) orsatellite televisionthat uses adish antenna. The termbroadcast televisioncan refer to thetelevision programsof such networks. The sequencing of content in a broadcast is called aschedule. As with all technological endeavors, a number of technical terms andslanghave developed. A list of these terms can be found atList of broadcasting terms.[19]Televisionandradioprograms are distributed through radio broadcasting orcable, often both simultaneously. By coding signals and having acable converter boxwithdecodingequipment inhomes, the latter also enablessubscription-based channels,pay-tvandpay-per-viewservices. In his essay,John Durham Peterswrote thatcommunicationis a tool used for dissemination. Peters stated, "Disseminationis a lens—sometimes a usefully distorting one—that helps us tackle basic issues such as interaction, presence, and space and time ... on the agenda of any futurecommunication theoryin general".[20]: 211Dissemination focuses on the message being relayed from one main source to one largeaudiencewithout the exchange ofdialoguein between. It is possible for the message to bechanged or corrupted by government officialsonce the main source releases it. There is no way to predetermine how the larger population or audience will absorb the message. They can choose to listen, analyze, or ignore it. Dissemination in communication is widely used in the world of broadcasting. Broadcasting focuses on getting a message out and it is up to the general public to do what they wish with it. Peters also states that broadcasting is used to address an open-ended destination.[20]: 212There are many forms of broadcasting, but they all aim to distribute a signal that will reach the targetaudience. Broadcasters typically arrange audiences into entire assemblies.[20]: 213In terms of media broadcasting, aradio showcan gather a large number of followers who tune in every day to specifically listen to that specificdisc jockey. The disc jockey follows the script for their radio show and just talks into themicrophone.[20]They do not expect immediate feedback from any listeners. The message is broadcast across airwaves throughout the community, but the listeners cannot always respond immediately, especially since many radio shows are recorded prior to the actual air time. Conversely, receivers can select opt-in or opt-out of getting broadcast messages using an Excel file, offering them control over the information they receive. Broadcast engineering is the field ofelectrical engineering, and now to some extentcomputer engineeringandinformation technology, which deals withradioandtelevisionbroadcasting.Audio engineeringandRF engineeringare also essential parts of broadcast engineering, being their ownsubsetsof electrical engineering.[21] Broadcast engineering involves both thestudioandtransmitteraspects (the entireairchain), as well asremote broadcasts. Everystationhas a broadcastengineer, though one may now serve an entire station group in a city. In smallmedia marketsthe engineer may work on acontractbasis for one or more stations as needed.[21][22][23]
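To make the analog-versus-digital distinction discussed earlier in this article concrete, the following is a minimal sketch of sampling and quantization, the two steps that turn a continuous signal into the discrete values a digital broadcast carries. The sample rate, bit depth and test tone are arbitrary illustrative choices, not parameters of any broadcast standard.

```python
# Minimal sketch of sampling and quantization: the two operations that turn a
# continuous (analog) signal into the discrete values carried by a digital
# broadcast. Sample rate, bit depth and the test tone are illustrative choices.

import math

SAMPLE_RATE = 8_000   # samples per second
BITS = 8              # bits per sample, giving 256 quantization levels
LEVELS = 1 << BITS

def sample(signal, duration_s, rate=SAMPLE_RATE):
    """Evaluate a continuous signal at evenly spaced instants."""
    count = int(duration_s * rate)
    return [signal(n / rate) for n in range(count)]

def quantize(x: float) -> int:
    """Map an amplitude in [-1.0, 1.0] to one of LEVELS integer codes."""
    x = max(-1.0, min(1.0, x))
    return round((x + 1.0) / 2.0 * (LEVELS - 1))

tone = lambda t: math.sin(2 * math.pi * 440 * t)   # a 440 Hz test tone

codes = [quantize(v) for v in sample(tone, duration_s=0.001)]
print(codes)  # the first millisecond of the tone as eight 8-bit sample values
```

The choice of sample rate and bit depth is exactly the bandwidth and dynamic range constraint mentioned above: fewer bits or samples mean a coarser approximation of the original continuous signal.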
https://en.wikipedia.org/wiki/Broadcasting
Narrowcastingis the dissemination of information to a specialised audience, rather than to the broader public-at-large; it is the opposite ofbroadcasting. It may refer to advertising or programming via radio,podcast, newspaper, television, or theInternet. The term "multicast" is sometimes used interchangeably, although strictly speaking this refers to the technology used, and narrowcasting to thebusiness model. Narrowcasting is sometimes aimed at paid subscribers, such as in the case ofcable television. The evolution ofnarrowcastingcame from broadcasting. In the early 20th century,Charles Herrolddesignated radio transmissions meant for a single receiver, distinguished frombroadcasting, meant for a general audience.[1]Merriam-Websterreports the first known use of the word in 1932.[2]Broadcasting was revived in the context ofsubscription radioprograms in the late 1940s,[3]after which the term narrowcasting entered the common lexicon due to computer scientist andpublic broadcastingadvocateJ. C. R. Licklider, who in a 1967 report envisioned[4] ...a multiplicity of television networks aimed at serving the needs of smaller, specialized audiences. 'Here,' stated Licklider, 'I should like to coin the term "narrowcasting," using it to emphasize the rejection or dissolution of the constraints imposed by commitment to a monolithic mass-appeal, broadcast approach.' The term "multicast" is sometimes used interchangeably, although strictly speaking this refers to the technology used, and narrowcasting to thebusiness model.[5]Narrowcasting is sometimes aimed at paid subscribers,[2]such as in the case ofcable television.[6] In the beginning of the 1990s, when American television was still mainly ruled by three major networks (ABC,CBSandNBC), it was believed that the greatest achievement was to promote and create content that would be directed towards a huge mass of people, avoiding completely those projects that might appeal to only a reduced audience. That was mainly due to the fact that specially in the earlier days of television, there was not much more competition.[7]Nevertheless, this changed once independent stations, more cable channels, and the success of videocassettes started increasing and rising, which gave the audiences the possibility of having more options. Thus, this previous mass-oriented point of view started to change towards one that was, obviously, narrower.[8] It was the arrival of cable television that allowed a much larger number of producers and programmers to aim at smaller audiences, such asMTV, which started off as the channel for those who loved music.[7] Narrowcasting has made its place in, for example, the way television networks schedule shows; while one night they might choose to stream shows directed at teenagers, a different night they might want to focus on another specific kind of audience, such as those interested in documentaries. In this way, they target what could be seen as a narrow audience, but collect their attention altogether as a mass audience on one night.[8] Related toniche marketingortarget marketing, narrowcasting involves aiming media messages at specific segments of the public defined by values, preferences, demographic attributes, or subscription.[9][10]Narrowcasting is based on thepostmodernidea thatmass audiencesdo not exist.[11] Marketingexperts are often interested in narrowcast media as a commercialadvertisingtool, since access to such content implies exposure to a specific and clearly defined prospective consumer audience. 
The theory being that, by identifying particulardemographicsviewing such programs, advertisers can better target their markets. Pre-recorded television programs are often broadcast to captive audiences intaxi cabs, buses,elevators, and queues. For instance, theCabvisionnetwork inLondon'sblack cabsshows limited pre-recordedtelevision programsinterspersed with targeted advertising to cab passengers.[12]Point of saleadvertising is a form of narrowcasting.[13] Interactive narrowcasting allows users to interact with products or services viatouch screensor other technology, thus allowing them to interact with them before purchasing them.[14] Narrowcasting has become increasingly used to target audiences with a particular political leaning, such aspolitical satireby the entertainment industry.[15]It has been often pointed out thatDonald Trumpused narrowcasting as well as broadcasting to good effect.[16][17][18] The termnarrowcastingcan also apply to the spread of information to an audience (private or public) which is by nature geographically limited—a group such as office employees, military troops, or conference attendees—and requires a localised dissemination of information from a shared source.[19]Hotels, hospitals, museums, and offices, use narrowcasting to display relevant information to their visitors and staff.[13] Both broadcasting and a narrowcasting models are found on the Internet. Sites based on the latter require the user to register an account andlog inbefore viewing content. Narrowcasting may also employ various types ofpush technologies, which send information directly to subscribers;electronic mailing lists, where emails are sent to subscribers, are an example of these.[5] Narrowcasting is also sometimes applied topodcasting, since the audience for a podcast is often specific and sharply defined.[20] This evolution towards narrowcasting was discussed in 1993 byHamid Naficy, who focused on this change specifically inLos Angeles, and how such a content directed towards a narrowed audience affected social culture. For example, with the rise of Middle Eastern television programs, more content that did not have the pressure to have a mass audience appeal to watch it was able to be produced and promoted. This made it easier for minorities to feel better represented in television.[21] Narrowcasting by definition focuses on specific groups of people, which may promote division or even conflict among groups. Political, social, or other ideologies promoted by narrowcasting have the potential to cause harm to society.[13] There is also the danger of people living in afilter bubble, where the audience is not exposed to different viewpoints, opinions or ideologies, leading to thinking that their opinion is the correct one. The resulting narrow-mindedness can lead to conflict.[13] The 2022 Australian sci-fi thrillerMonolith, in which the sole on-screen actor is a podcaster, provides commentary on narrowcasting. Ari Mattes ofNotre Dame Universitywrote in an article inThe Conversation: "Monolithis one of the first Australian films to critically navigate the ramifications of narrowcasting technology... the strange solitude of interpersonal communication in the global information economy underpins the whole thing".[22]
https://en.wikipedia.org/wiki/Narrowcasting
Open innovationis a term used to promote anInformation Agemindset toward innovation that runs counter to thesecrecyandsilo mentalityof traditional corporate research labs. The benefits and driving forces behind increased openness have been noted and discussed as far back as the 1960s, especially as it pertains to interfirm cooperation in R&D.[1]Use of the term 'open innovation' in reference to the increasing embrace of external cooperation in a complex world has been promoted in particular byHenry Chesbrough, adjunct professor and faculty director of the Center for Open Innovation of theHaas School of Businessat the University of California, and Maire Tecnimont Chair of Open Innovation atLuiss.[2][3] The term was originally referred to as "a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology".[3]More recently, it is defined as "a distributed innovation process based on purposively managed knowledge flows across organizational boundaries, using pecuniary and non-pecuniary mechanisms in line with the organization's business model".[4]This more recent definition acknowledges that open innovation is not solely firm-centric: it also includescreative consumers[5]and communities of user innovators.[6]The boundaries between a firm and its environment have become more permeable; innovations can easily transfer inward and outward between firms and other firms and between firms and creative consumers, resulting in impacts at the level of the consumer, the firm, an industry, and society.[7] Because innovations tend to be produced by outsiders andfoundersinstartups, rather than existing organizations, the central idea behind open innovation is that, in a world of widely distributed knowledge, companies cannot afford to rely entirely on their own research, but should instead buy or license processes or inventions (i.e. patents) from other companies. This is termed inbound open innovation.[8]In addition, internal inventions not being used in a firm's business should be taken outside the company (e.g. through licensing, joint ventures orspin-offs).[9]This is called outbound open innovation. 
The open innovation paradigm can be interpreted to go beyond just using external sources of innovation such as customers, rival companies, and academic institutions, and can be as much a change in the use, management, and employment of intellectual property as it is in the technical and research-driven generation of intellectual property.[10] In this sense, it is understood as the systematic encouragement and exploration of a wide range of internal and external sources for innovative opportunities, the integration of this exploration with firm capabilities and resources, and the exploitation of these opportunities through multiple channels.[11] In addition, because open innovation draws on a wide range of internal and external sources, it can be analyzed not only at the level of the company but also at the inter-organizational, intra-organizational and extra-organizational levels, as well as at the industrial, regional and societal levels.[12] Open innovation offers several benefits to companies operating on a program of global collaboration: Implementing a model of open innovation is naturally associated with a number of risks and challenges, including: In the UK, knowledge transfer partnerships (KTP) are a funding mechanism encouraging the partnership between a firm and a knowledge-based partner.[15] A KTP is a collaboration program between a knowledge-based partner (i.e. a research institution), a company partner and one or more associates (i.e. recently qualified persons such as graduates). KTP initiatives aim to deliver significant improvement in business partners' profitability as a direct result of the partnership through enhanced quality and operations, increased sales and access to new markets. At the end of their KTP project, the three actors involved have to prepare a final report that describes how the KTP initiative supported the achievement of the project's innovation goals.[15] Open innovation has allowed startup companies to produce innovation comparable to that of large companies.[16] Although startups tend to have limited resources and experience, they can overcome this disadvantage by leveraging external resources and knowledge.[17] To do so, startups can work in tandem with other institutions including large companies, incubators, VC firms, and higher education systems. Collaborating with these institutions provides startups with the proper resources and support to successfully bring new innovations to the market.[18] The collaboration between startups and large companies, in particular, has been used to exemplify the fruits of open innovation. In this collaboration, startups can assume one of two roles: that of inbound open innovation, where the startup utilizes innovation from the large company, or that of outbound open innovation, where the startup provides internal innovation for the large company. In the inbound open innovation model, startups can gain access to technology that will allow them to create successful products. In the outbound innovation model, startups can capitalize on their technology without making large investments to do so. The licensing of technology between startups and large companies is beneficial for both parties, but it is more significant for startups since they face larger obstacles in their pursuit of innovation.[17] This approach involves developing and introducing a partially completed product, for the purpose of providing a framework or tool-kit for contributors to access, customize, and exploit.
The goal is for the contributors to extend the platform product's functionality while increasing the overall value of the product for everyone involved. Readily available software frameworks such as asoftware development kit(SDK), or anapplication programming interface(API) are common examples of product platforms. This approach is common in markets with strongnetwork effectswhere demand for the product implementing the framework (such as a mobile phone, or an online application) increases with the number of developers that are attracted to use the platform tool-kit. The high scalability of platforming often results in an increased complexity of administration and quality assurance.[13] This model entails implementing a system that encourages competitiveness among contributors by rewarding successful submissions. Developer competitions such ashackathonevents and manycrowdsourcinginitiatives fall under this category of open innovation. This method provides organizations with inexpensive access to a large quantity of innovative ideas, while also providing a deeper insight into the needs of their customers and contributors.[13] While mostly oriented toward the end of the product development cycle, this technique involves extensive customer interaction through employees of the host organization. Companies are thus able to accurately incorporate customer input, while also allowing them to be more closely involved in the design process and product management cycle.[13] Similarly to product platforming, an organization incorporates their contributors into the development of the product. This differs from platforming in the sense that, in addition to the provision of the framework on which contributors develop, the hosting organization still controls and maintains the eventual products developed in collaboration with their contributors. This method gives organizations more control by ensuring that the correct product is developed as fast as possible, while reducing the overall cost of development.[13]Dr.Henry Chesbroughrecently supported this model for open innovation in the optics and photonics industry.[19] Similarly to idea competitions, an organization leverages a network of contributors in the design process by offering a reward in the form of anincentive. The difference relates to the fact that the network of contributors are used to develop solutions to identified problems within the development process, as opposed to new products.[13]Emphasis needs to be placed on assessing organisational capabilities to ensure value creation in open innovation.[20] InAustriatheLudwig Boltzmann Gesellschaftstarted a project named "Tell us!" about mental health issues and used the concept of open innovation tocrowdsourceresearch questions.[21][22]The institute also launched the first "Lab for Open Innovation in Science" to teach 20 selected scientists the concept of open innovation over the course of one year. Innovation intermediaries are persons or organizations that facilitate innovation by linking multiple independent players in order to encourage collaboration and open innovation, thus strengthening the innovation capacity of companies, industries, regions, or nations.[23]As such, they may be key players for the transformation from closed to open modes of innovation.[24] The paradigm ofclosed innovationholds that successful innovation requires control. 
In particular, a company should control the generation of its own ideas, as well as production, marketing, distribution, servicing, financing, and supporting. What drove this idea is that, in the early twentieth century, academic and government institutions were not involved in the commercial application of science. As a result, it was left up to other corporations to take thenew product developmentcycle into their own hands. There just was not the time to wait for the scientific community to become more involved in the practical application of science. There also was not enough time to wait for other companies to start producing some of the components that were required in their final product. These companies became relatively self-sufficient, with little communication directed outwards to other companies or universities. Throughout the years several factors emerged that paved the way for open innovation paradigms: These four factors have resulted in a new market of knowledge. Knowledge is no longer proprietary to the company. It resides in employees, suppliers, customers, competitors and universities. If companies do not use the knowledge they have inside, someone else will. Innovation can be generated either by means of closed innovation or by open innovation paradigms.[3][9]Some research argues that the potential of open innovation is exaggerated, while the merits of closed innovation are overlooked.[25]There is an ongoing debate on which paradigm will dominate in the future. Modern research on open innovation is divided into two groups, which have several names, but are similar in their essence (discovery and exploitation; outside-in and inside-out; inbound and outbound). The common factor for different names is the direction of innovation, whether from outside the company in, or from inside the company out:[26] In this type of open innovation, a company freely shares its resources with other partners, without an instant financial reward. The source of profit has an indirect nature and is manifested as a new type of business model. In this type of open innovation a company commercialises its inventions and technology through selling or licensing technology to a third party. In this type of open innovation, companies use freely available external knowledge as a source of internal innovation. Before starting any internal R&D project a company should monitor the external environment in search for existing solutions; thus, in this case, internal R&D becomes a tool to absorb external ideas for internal needs. In this type of open innovation a company buys innovation from its partners through licensing or other procedures involving monetary reward for external knowledge. Open sourceand open innovation might conflict on patent issues. This conflict is particularly apparent when considering technologies that may save lives, or otheropen-source-appropriate technologiesthat may assist inpovertyreduction orsustainable development.[27]However,open sourceand open innovation are not mutually exclusive, because participating companies can donate their patents to an independent organization, put them in a common pool, or grant unlimited license use to anybody. 
Hence some open-source initiatives can merge these two concepts: this is the case for instance for IBM with itsEclipseplatform, which the company presents as a case of open innovation, where competing companies are invited to cooperate inside an open-innovation network.[28] In 1997,Eric Raymond, writing about the open-source software movement, coined the termthe cathedral and the bazaar. The cathedral represented the conventional method of employing a group of experts to design and develop software (though it could apply to any large-scale creative or innovative work). The bazaar represented the open-source approach. This idea has been amplified by many people, notablyDon TapscottandAnthony D. Williamsin their bookWikinomics. Eric Raymond himself is also quoted as saying that 'one cannot code from the ground up in bazaar style. One can test, debug, and improve in bazaar style, but it would be very hard to originate a project in bazaar mode'. In the same vein, Raymond is also quoted as saying 'The individual wizard is where successful bazaar projects generally start'.[29] In 2014, Chesbrough and Bogers described open innovation as a distributed innovation process that is based on purposefully managed knowledge flows across enterprise boundaries.[30]Open innovation, however, is hardly aligned with ecosystem theory and is not a linear process. Fasnacht's adoption of the concept for financial services uses open innovation as a basis and includes alternative forms of mass collaboration; this makes it complex, iterative, non-linear, and barely controllable.[31]The increasing interactions between business partners, competitors, suppliers, customers, and communities create a constant growth of data and cognitive tools. Open innovation ecosystems bring together the symbiotic forces of all supportive firms from various sectors and businesses that collectively seek to create differentiated offerings. Accordingly, the value captured from a network of multiple actorsandthe linear value chain of individual firms combined creates the new delivery model that Fasnacht calls the "value constellation". The termOpen Innovation Ecosystemconsists of three parts that describe the foundations of the approach: open innovation, innovation systems, and business ecosystems.[1] WhileJames F. Mooreresearched business ecosystems in manufacturing around a specific business or branch, the open model of innovation with the ecosystem theory was recently studied in various industries. Traitler et al. researched it in 2010 and used it forR&D, stating that global innovation needs alliances based on compatible differences. Innovation partnerships based on sharing knowledge represent a paradigm shift toward accelerating co‐development of sustainable innovation.[32]West researched open innovation ecosystems in the software industry,[33]following studies in the food industry that show how a small firm thrived and became a business success based on building an ecosystem that shares knowledge, encourages individuals' growth, and embeds trust among participants such as suppliers, alumni chef and staff, and food writers.[34]Other adoptions include the telecom industry[35]and smart cities.[36] Ecosystems foster collaboration and accelerate the dissemination of knowledge through thenetwork effect: value creation increases with each actor in the ecosystem, which in turn nurtures the ecosystem as such. A digital platform is essential to make the innovation ecosystem work as it aligns various actors to achieve a mutually beneficial purpose. 
Parker explained this with the platform revolution and described how networked markets are transforming the economy.[37]There are three dimensions that increasingly converge, namely e-commerce, social media, and logistics and finance, termed by Daniel Fasnacht thegolden triangle of ecosystems.[38] Business ecosystems are increasingly used and drive digital growth,[3] and pioneering firms in China use their technological capabilities and link client data to historical transactions and social behaviour to offer tailored financial services alongside luxury goods or health services. Such an open collaborative environment changes the client experience and adds value to consumers. The drawback is that it also threatens incumbent banks from the U.S. and Europe due to their legacies and lack of agility and flexibility.[39] Bogers, M., Zobel, A-K., Afuah, A., Almirall, E., Brunswicker, S., Dahlander, L., Frederiksen, L.,Gawer, A., Gruber, M., Haefliger, S., Hagedoorn, J., Hilgers, D., Laursen, K., Magnusson, M.G., Majchrzak, A., McCarthy, I.P., Moeslein, K.M., Nambisan, S., Piller, F.T., Radziwon, A., Rossi-Lamastra, C., Sims, J. & Ter Wal, A.J. (2017). The open innovation research landscape: Established perspectives and emerging themes across different levels of analysis. Industry & Innovation, 24(1), 8-40.
https://en.wikipedia.org/wiki/Open_Innovation
Placemakingis a multi-faceted approach to theplanning, design and management of public spaces. Placemaking capitalizes on a local community's assets, inspiration, and potential, with the intention of creating public spaces that improveurban vitalityand promote people's health, happiness, and well-being. It is political due to the nature ofplace identity. Placemaking is both a process and a philosophy that makes use of urban design principles. It can be either official and government led, or community driven grassrootstactical urbanism, such as extending sidewalks with chalk, paint, and planters, or open streets events such asBogotá,Colombia'sCiclovía. Good placemaking makes use of underutilized space to enhance the urban experience at the pedestrian scale to build habits of locals. The concepts behind placemaking originated in the 1960s, when writers likeJane JacobsandWilliam H. Whyteoffered groundbreaking ideas about designing cities that catered to people, not just to cars and shopping centers. Their work focused on the importance of lively neighborhoods and inviting public spaces. Jacobs advocated citizen ownership of streets through the now-famous idea of "eyes on the street." Whyte emphasized essential elements for creating social life in public spaces.[1] The term came into use in the 1970s bylandscape architects,architectsandurban plannersto describe the process of creating squares, plazas, parks, streets and waterfronts to attract people because they are pleasurable or interesting. Landscape often plays an important role in the design process. The term encourages disciplines involved in designing the built environment to work together in pursuit of qualities that they each alone are unable to achieve. Bernard Hunt, of HTA Architects noted that: "We have theories, specialisms, regulations, exhortations, demonstration projects. We have planners. We have highway engineers. We have mixed use, mixed tenure, architecture, community architecture, urban design, neighbourhood strategy. But what seems to have happened is that we have simply lost the art of placemaking; or, put another way, we have lost the simple art of placemaking. We are good at putting up buildings but we are bad at making places." Jan Gehl has said "First life, then spaces, then buildings – the other way around never works"; and "In a Society becoming steadily more privatized with private homes, cars, computers, offices and shopping centers, the public component of our lives is disappearing. It is more and more important to make the cities inviting, so we can meet our fellow citizens face to face and experience directly through our senses. Public life in good quality public spaces is an important part of a democratic life and a full life."[2] The writings of poetWendell Berryhave contributed to an imaginative grasp of place and placemaking, particularly with reference to local ecology and local economy. He writes that, "If what we see and experience, if our country, does not become real in imagination, then it never can become real to us, and we are forever divided from it... Imagination is a particularizing and a local force, native to the ground underfoot." In recent years, placemaking has been widely applied in the field of Sports Management and the sports industry. Often times, the idea of placemaking centers around urban real estate development, centralized around a stadium or sports district. 
According toProject for Public Spaces,[3]successful placemaking is based on eleven basic principles: Both the opportunities available to individuals and the choices made based on those opportunities impact individual, family, and community health. The World Health Organization's definition of health[5]provides an appropriate, broad-reaching understanding of health as a "resource for everyday life, not the object of living" and an important frame for discussing the interconnections betweenPlaceandHealth. A 2016 reportThe Case for Healthy Places, fromProject for Public Spacesand the Assembly Project, funded by theKnight Foundationand focusing on research related to Shaping Space for Civic Life, looked at the evidence base showing how health and wellbeing were impacted by where a person lives and the opportunities available to them.[6] There is an increasing focus on using placemaking as a way to "connect blueways and greenways": to address the physical disconnect between theurban streamsandgreenwaysthrough placemaking.[7] While the arts and creative expression play a substantial part in establishing a sense of place, economic growth and production must also play an equally large role in creating a successful place. These two factors are not mutually exclusive, as arts and cultural economic activity made up $729.6 billion (or 4.2%) of the United States GDP in 2014, and employed 4.7 million workers in 2012.[8]This means that the arts can be deployed as a powerful tool in the creation or rehabilitation of urban spaces. Jamie Bennett, executive director of ArtPlace America, has identified the following four tools used by communities while implementing creative placemaking.[9] Great places must do more than meet the basic requirements if they want to foster greater community attachment. A strong sense of attachment can result in residents who are more committed to the growth and success of their community. TheKnight Foundationconducted a study measuring community attachment, and found that there was very little variation in the primary drivers of attachment rates when compared across different cities in the United States.[11] Drivers of attachment include: Streets are the stage for the activity of everyday life within a city and they have the most potential to be designed to harness a high-quality sense of place. Effective placemaking in thestreetscapelends special attention to the street's livability by representing a sense of security,sense of place, visible employment, variety of transportation options, meaningful interactions between residents, "eyes on the street" as well as "social capital".[12][13]All of these interactions take place at themesoscale. Mesoscale is described as the city level of observation betweenmacroscale(the bird's-eye view) andmicroscale(textures and individual elements of the streetscape: streetlamp type, building textures, etc.); in other words, mesoscale is the area observable from a human's eyes, for example between buildings, including storefronts, sidewalks, street trees, and people. Placemaking for a street takes place at both mesoscale and microscale. 
To be effective placemakers, it is important that planners, architects, and engineers consider designing in the mesoscale when designing for places that are intended to be livable byWhyte's standards.[13] Tools and practices of placemaking that benefit from utilizing the mesoscale context include:[13] As society changes to accommodate new technologies,urban plannersand citizens alike are attempting to utilize those technologies to enact physical change. One thing that has had a massive impact on western society is the advent of digital technologies, likesocial media. Urban decision makers are increasingly attempting to plan cities based on feedback fromcommunity engagementso as to ensure the development of a durable, livable place. With the invention of niche social technologies, communities have shifted their engagement away fromlocal-government-led forums and platforms, to social media groups on websites such asFacebookandNextdoorto voice concerns, critiques and desires.[14]In a sense, these new platforms have become aThird Place, in reference toRay Oldenburg's term.[14][15] Social media tools such as these show promise for the future of placemaking in that they are being used to reclaim, reinvigorate and activate spaces. These online neighborhood and event-centric groups andforumsprovide a convenient non-physical space forpublic discourseand discussion through digital networked interactions to implement change on ahyper-locallevel; this theory is sometimes referred to as Urban Acupuncture. This type of shift towards a morecrowd-sourcedplanning method can lead to the creation of more relevant and useful and inclusive places with greater sense of place.[14][12] Other new technologies have also been used in placemaking, such as the WiFi-based project created by D.C. Denison and Michael Oh at Boston's South Station and other locations around Boston. The project was backed byThe Boston Globe. ThePulse of Boston[16]used local WiFi signals to create online hyperlocal communities in five different locations around the city.
https://en.wikipedia.org/wiki/Placemaking
TheCity Repair Projectis a501(c)(3)non-profit organization based inPortland, Oregon. Its focus is education and activism for community building. The organizational motto is "The City Repair Project is group of citizen activists creating public gathering places and helping others to creatively transform the places where they live."[2] City Repair is an organization primarily run by volunteers. A board of directors oversees the project's long-term vision, and a council maintains its daily operations. Both the board of directors and council meet monthly. City Repair's work focuses on localization andplacemaking. The City Repair Project maintains an office in Portland. The City Repair Project was founded in 1996 by a small group of neighbors interested insustainabilityand neighborhood activism.[3]The first City Repair action was an intersection repair at Share-It Square at SE 9th Ave and SE Sherrett Street. An intersection repair is an intersection where two streets cross that is painted by members of that neighborhood. The street is closed down during the painting. Other intersection repairs include Sunnyside Piazza.[4][5] City Repair hosts two events annually: Portland'sEarth Daycelebration and the Village Building Convergence.[6] Past projects include the T-Horse, a small pick-up truck converted into a mobile tea house. The T-Horse was driven to neighborhood sites and events around Portland and served freechaiand pie.[citation needed] The organization has inspired groups around the United States to start their own City Repair Projects. Unaffiliated City Repairs exist in California, Washington, Minnesota, and other places. TheVillage Building Convergence(VBC) is an annual 10-day event held every May inPortland, Oregon, United States. The event is coordinated by the City Repair Project and consists of a series of workshops incorporatingnatural buildingandpermaculturedesign at multiple sites around the city. Many of the workshops center on "intersection repairs" which aim to transform street intersections into public gathering spaces. In 1996, neighbors in theSellwood neighborhoodof Portland at the intersection of 8th and Sherrett created a tea stand, children's playhouse and community library on the corner and renamed it "Share-It Square".[7]Community organizers founded the City Repair Project that same year, seeking to share their vision with the community. In January 2000, thePortland City Councilpassed ordinance #172207, an "Intersection Repair" ordinance, allowing neighborhoods to develop public gathering places in certain street intersections.[8] The first Village Building Convergence took place in May 2002, then called the Natural Building Convergence. During its history, the VBC has coordinated the creation of over 72 natural building and permaculture sites in Portland, including information kiosks, painted intersections,cobbenches, and astrawbale houseatDignity Village. The sites are primarily located in the southeast quadrant of Portland. Natural builders from around the world have coordinated the activities at many of the construction sites at the Village Building Convergence. Most of the labor taking place at the sites is done by volunteers. The VBC hosts a series of workshops, many of which are free to the public. Topics of the workshops are usually related tosustainabilityandnatural building. 
Past workshops have includedaikidolessons, outdoor mushroom cultivation,bioswalecreation, andnonviolent communication.[9] The VBC also hosts speakers and entertainment during the evenings of its convergences. Presentations for the 2007 convergence were made atDisjectabyStarhawk,Michael Lerner, andPaul Stamets.[10]Prior years' presentations have been given byMalik Rahim,Toby Hemenway, and Judy Bluehorse.
https://en.wikipedia.org/wiki/City_repair_project
Open-source software development (OSSD)is the process by whichopen-source software, or similar software whosesource codeis publicly available, is developed by anopen-source software project. These are software products available with their source code under anopen-source licenseto study, change, and improve their design. Examples of some popular open-source software products areMozilla Firefox,Google Chromium,Android,LibreOfficeand theVLC media player. In 1997,Eric S. RaymondwroteThe Cathedral and the Bazaar.[1]In this book, Raymond makes the distinction between two kinds ofsoftware development. The first is the conventionalclosed-sourcedevelopment. This kind of development method is, according to Raymond, like the building of a cathedral: central planning, tight organization and one process from start to finish. The second is the progressive open-source development, which is more like "a great babbling bazaar of differing agendas and approaches out of which a coherent and stable system could seemingly emerge only by a succession of miracles." The latter analogy points to the discussion involved in an open-source development process. Differences between the two styles of development, according to Bar and Fogel, are in general the handling (and creation) ofbug reportsandfeature requests, and the constraints under which the programmers are working.[2]In closed-source software development, the programmers often spend a lot of time dealing with and creating bug reports, as well as handling feature requests. This time is spent on creating and prioritizing further development plans. This leads to part of the development team spending a lot of time on these issues, and not on the actual development. Also, in closed-source projects, the development teams must often work under management-related constraints (such as deadlines, budgets, etc.) that interfere with technical issues of the software. In open-source software development, these issues are solved by integrating the users of the software in the development process, or even letting these users build the system themselves.[citation needed] Open-source software development can be divided into several phases. The phases specified here are derived fromSharma et al.[3]A diagram displaying the process-data structure of open-source software development shows the phases of open-source software development along with the corresponding data elements; it is made using themeta-modelingandmeta-process modelingtechniques. There are several ways in which work on an open-source project can start: Eric Raymond observed in his essayThe Cathedral and the Bazaarthat announcing the intent for a project is usually inferior to releasing a working project to the public. It's a common mistake to start a project when contributing to an existing similar project would be more effective (NIH syndrome)[citation needed]. To start a successful project it is very important to investigate what's already there. The process starts with a choice between adopting an existing project or starting a new one. If a new project is started, the process goes to the Initiation phase. If an existing project is adopted, the process goes directly to the Execution phase.[original research?] Several types of open-source projects exist. First, there is the garden variety of software programs and libraries, which consist of standalone pieces of code. Some might even be dependent on other open-source projects. 
These projects serve a specified purpose and fill a definite need. Examples of this type of project include theLinux kernel, the Firefox web browser and the LibreOffice office suite of tools. Distributions are another type of open-source project. Distributions are collections of software that are published from the same source with a common purpose. The most prominent example of a "distribution" is an operating system. There are manyLinux distributions(such asDebian,Fedora Core,Mandriva,Slackware,Ubuntuetc.) which ship the Linux kernel along with many user-land components. There are other distributions, likeActivePerl, thePerl programming languagefor various operating systems, andCygwindistributions of open-source programs forMicrosoft Windows. Other open-source projects, like theBSDderivatives, maintain the source code of an entire operating system, the kernel and all of its core components, in onerevision controlsystem, developing the entire system together as a single team. These operating system development projects closely integrate their tools, more so than in the other distribution-based systems. Finally, there is the book or standalone document project. These items usually do not ship as part of an open-source software package.Linux Documentation Projecthosts many such projects that document various aspects of the Linux operating system. There are many other examples of this type of open-source project. It is hard to run an open-source project following a more traditional software development method like thewaterfall model, because these traditional methods do not allow going back to a previous phase. In open-source software development, requirements are rarely gathered before the start of the project; instead they are based on early releases of the software product, as Robbins describes.[4]Besides requirements, volunteer staff are often attracted to help develop the software product based on the early releases of the software. Thisnetworking effectis essential according to Abrahamsson et al.: "if the introduced prototype gathers enough attention, it will gradually start to attract more and more developers". However, Abrahamsson et al. also point out that the community is very harsh, much like the business world of closed-source software: "if you find the customers you survive, but without customers you die".[5] Fuggetta[6]argues that "rapid prototyping, incremental and evolutionary development, spiral lifecycle, rapid application development, and, recently, extreme programming and the agile software process can be equally applied to proprietary and open source software". He also pinpointsExtreme Programmingas an extremely useful method for open source software development. More generally, allAgile programmingmethods are applicable to open-source software development, because of their iterative and incremental character. Other Agile methods are equally useful for both open and closed source software development:Internet-Speed Development, for example, is suitable for open-source software development because of the distributed development principle it adopts. Internet-Speed Development uses geographically distributed teams to ‘work around the clock’. This method, mostly adopted by large closed-source firms (because they are the only ones that can afford development centers in different time zones), works equally well in open-source projects because software developed by a large group of volunteers will naturally tend to have developers spread across all time zones. 
Developers and users of an open-source project are not all necessarily working on the project in proximity. They require some electronic means of communication.Emailis one of the most common forms of communication among open-source developers and users. Often,electronic mailing listsare used to make sure e-mail messages are delivered to all interested parties at once. This ensures that at least one of the members can reply to a given message. In order to communicate in real time, many projects use aninstant messagingmethod such asIRC. Web forums have recently become a common way for users to get help with problems they encounter when using an open-source product.Wikishave become common as a communication medium for developers and users.[7] In OSS development the participants, who are mostly volunteers, are distributed amongst different geographic regions, so there is a need for tools to help participants collaborate in the development of source code. During the early 2000s,Concurrent Versions System(CVS) was a prominent example of a source code collaboration tool being used in OSS projects. CVS helps manage the files and code of a project when several people are working on the project at the same time. CVS allows several people to work on the same file at the same time. This is done by moving the file into the users’ directories and then merging the files when the users are done. CVS also enables one to easily retrieve a previous version of a file. During the mid-2000s,The Subversion revision control system(SVN) was created to replace CVS. It is quickly gaining ground as an OSS project version control system.[7] Many open-source projects are now usingdistributed revision control systems, which scale better than centralized repositories such as SVN and CVS. Popular examples aregit, used by theLinux kernel,[8]andMercurial, used by thePythonprogramming language.[citation needed] Most large-scale projects require a bug tracking system to keep track of the status of various issues in the development of the project. Since OSS projects undergo frequent integration, tools that helpautomate testingduringsystem integrationare used. An example of such a tool is Tinderbox. Tinderbox enables participants in an OSS project to detect errors during system integration. Tinderbox runs a continuous build process and informs users about the parts of source code that have issues and on which platform(s) these issues arise.[7]A minimal sketch of such a build-and-test loop is given at the end of this section. Adebuggeris a computer program that is used to debug (and sometimes test or optimize) other programs.GNU Debugger(GDB) is an example of a debugger used in open-source software development. This debugger offers remote debugging, which makes it especially applicable to open-source software development.[citation needed] A memory leak tool ormemory debuggeris a programming tool for findingmemory leaksandbuffer overflows. A memory leak is a particular kind of unnecessary memory consumption by a computer program, where the program fails to release memory that is no longer needed. Examples of memory leak detection tools used by Mozilla are theXPCOMMemory Leak tools. Validation tools are used to check if pieces of code conform to the specified syntax. An example of a validation tool isSplint.[citation needed] Apackage management systemis a collection of tools to automate the process of installing, upgrading, configuring, and removing software packages from a computer. 
TheRed Hat Package Manager(RPM) for the .rpm file format andAdvanced Packaging Tool(APT) for the .deb file format are package management systems used by a number of Linux distributions.[citation needed]
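The Tinderbox-style continuous build-and-test loop described above can be illustrated with a short sketch. The following Python fragment is only an outline under assumed conventions: the repository URL, the make-based build and test commands, and the platform label are hypothetical placeholders, not part of the actual Tinderbox tool.

# Illustrative sketch of a continuous build-and-test loop (not the real Tinderbox).
import shutil
import subprocess
import tempfile
import time

def run(cmd, cwd=None):
    """Run a command and return (success flag, combined output)."""
    result = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def build_and_test(repo_url, platform_label):
    """Fetch the latest revision, build it, run the tests, and report the outcome."""
    workdir = tempfile.mkdtemp(prefix="tinderbox-")
    try:
        ok, out = run(["git", "clone", "--depth", "1", repo_url, workdir])
        if not ok:
            return "checkout failed on " + platform_label + ":\n" + out
        ok, out = run(["make"], cwd=workdir)
        if not ok:
            return "build failed on " + platform_label + ":\n" + out
        ok, out = run(["make", "test"], cwd=workdir)
        if not ok:
            return "tests failed on " + platform_label + ":\n" + out
        return "build and tests passed on " + platform_label
    finally:
        shutil.rmtree(workdir, ignore_errors=True)

if __name__ == "__main__":
    while True:
        # A real system would mail or post this report to the project's developers.
        print(build_and_test("https://example.org/project.git", "linux-x86_64"))
        time.sleep(3600)  # one build-and-test cycle per hour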
https://en.wikipedia.org/wiki/Open-source_software_development
Collaborative intelligenceis distinguished from collective intelligence in three key ways: First, in collective intelligence there is a central controller who poses the question, collects responses from a crowd of anonymous responders, and uses an algorithm to process those responses to achieve a (typically) "better than average" consensus result, whereas collaborative intelligence focuses on gathering, and valuing, diverse input. Second, in collective intelligence the responders are anonymous, whereas in collaborative intelligence, as in social networks, participants are not anonymous. Third, in collective intelligence, as in the standard model of problem-solving, there is a beginning, when the central controller broadcasts the question, and an end, when the central controller announces the "consensus" result. In collaborative intelligence there is no central controller because the process is modeled on evolution. Distributed, autonomous agents contribute and share control, as in evolution and as manifested in the generation ofWikipediaarticles. Collaborative intelligence characterizesmulti-agent,distributed systemswhere each agent, human or machine, is autonomously contributing to aproblem solvingnetwork. Collaborative autonomy of organisms in their ecosystems makes evolution possible. Natural ecosystems, where each organism's unique signature is derived from its genetics, circumstances, behavior and position in its ecosystem, offer principles for design of next generationsocial networksto support collaborative intelligence,crowdsourcingindividual expertise, preferences, and unique contributions in a problem solving process.[1] Four related terms are complementary: Collaborative intelligence is a term used in several disciplines. In business it describes heterogeneous networks of people interacting to produce intelligent outcomes. It can also denote non-autonomousmulti-agent problem-solving systems. The term was used in 1999 to describe the behavior of an intelligent business "ecosystem"[2]where Collaborative Intelligence, or CQ, is "the ability to build, contribute to and manage power found in networks of people."[3]When the computer science community adopted the termcollective intelligenceand gave that term a specific technical denotation, a complementary term was needed to distinguish between anonymous homogeneity in collective prediction systems and non-anonymous heterogeneity in collaborative problem-solving systems. Anonymous collective intelligence was then complemented by collaborative intelligence, which acknowledged identity, viewingsocial networksas the foundation for next generation problem-solving ecosystems, modeled onevolutionary adaptationin nature's ecosystems. Although many sources warn that AI may cause the extinction of the human species,[4]humans may cause our own extinction viaclimate change,ecosystem disruption, decline of our ocean lifeline, increasingmass murdersandpolice brutality, and anarms racethat could triggerWorld War III, driving humanity extinct before AI gets a chance. The surge ofopen sourceapplications in generative AI demonstrates the power of collaborative intelligence (AI-human C-IQ) among distributed, autonomous agents, sharing achievements in collaborative partnerships and networks. 
The successes of small open source experiments in generative AI provide a model for a paradigm shift from centralized, hierarchical control to decentralized bottom-up, evolutionary development.[5]The key role of AI in collaborative intelligence was predicted in 2012 when Zann Gill wrote that collaborative intelligence (C-IQ) requires “multi-agent, distributed systems where each agent, human or machine, is autonomously contributing to a problem-solving network.”[6]Gill’s ACM paper has been cited in applications ranging from an NIH (U. S. National Institute of Health) Center for Biotechnology study of human robot collaboration,[7]to an assessment of cloud computing tradeoffs.[8]A key application domain for collaborative intelligence is risk management, where preemption is an anticipatory action taken to secure first-options in maximising future gain and/or minimising loss.[9]Prediction of gain/ loss scenarios can increasingly harness AI analytics and predictive systems designed to maximize collaborative intelligence. Other collaborative intelligence applications include the study of social media and policing, harnessing computational approaches to enhance collaborative action between residents and law enforcement.[10]In their Harvard Business Review essay, Collaborative Intelligence: Humans and AI Are Joining Forces – Humans and machines can enhance each other’s strengths, authors H. James Wilson and Paul R. Daugherty report on research involving 1,500 firms in a range of industries, showing that the biggest performance improvements occur when humans and smart machines work together, enhancing each other’s strengths.[11] Collaborative intelligence traces its roots to the Pandemonium Architecture proposed by artificial intelligence pioneerOliver Selfridgeas a paradigm forlearning.[12]His concept was a precursor for the blackboard system where an opportunistic solution space, or blackboard, draws from a range of partitioned knowledge sources, as multiple players assemble a jigsaw puzzle, each contributing a piece.Rodney Brooksnotes that the blackboard model specifies how knowledge is posted to a blackboard for generalsharing, but not how knowledge is retrieved, typically hiding from the consumer of knowledge who originally produced which knowledge,[13]so it would not qualify as a collaborative intelligence system. In the late 1980s,Eshel Ben-Jacobbegan to study bacterialself-organization, believing that bacteria hold the key to understanding larger biological systems. He developed new pattern-forming bacteria species,Paenibacillus vortexandPaenibacillus dendritiformis, and became a pioneer in the study of social behaviors of bacteria.P. dendritiformismanifests a collective faculty, which could be viewed as a precursor of collaborative intelligence, the ability to switch between different morphotypes to adapt with the environment.[14][15]Ants were first characterized by entomologistW. M. Wheeleras cells of a single "superorganism" where seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism.[16]Later research characterized some insect colonies as instances ofcollective intelligence. The concept ofant colony optimization algorithms, introduced byMarco Dorigo, became a dominant theory ofevolutionary computation. The mechanisms ofevolutionthrough which species adapt toward increased functional effectiveness in their ecosystems are the foundation for principles of collaborative intelligence. 
Artificial Swarm Intelligence(ASI) is a real-time technology that enables networked human groups to efficiently combine their knowledge, wisdom, insights, and intuitions into an emergent intelligence. Sometimes referred to as a "hive mind," the first real-time human swarms were deployed byUnanimous A.I.using a cloud-based server called "UNU" in 2014. It enables online groups to answer questions, reach decisions, and make predictions by thinking together as a unified intelligence. This process has been shown to produce significantly improved decisions, predictions, estimations, and forecasts, as demonstrated when predicting major events such as the Kentucky Derby, the Oscars, the Stanley Cup, Presidential Elections, and the World Series.[17][18] A type of collaborative AI was the focus of aDARPAArtificial Intelligence Exploration (AIE)[19]Program from 2021 to 2023. Named Shared Experience Lifelong Learning,[20]the program aimed to develop a population of agents capable of sharing a growing number of machine-learned tasks without forgetting. The vision behind this initiative was later elaborated in a Perspective inNature Machine Intelligence,[21]which proposed a synergy between lifelong learning and the sharing of machine-learned knowledge in populations of agents. The envisioned network of AI agents promises to bring about emergent properties such as faster and more efficient learning, a higher degree of open-ended learning, and a potentially more democratic society of AI agents, in contrast to monolithic, large-scale AI systems. These research developments were seen as implementing ideas inspired by science-fiction concepts such as theBorgfromStar Trek, while featuring more appealing characteristics such as individuality and autonomy.[22] Crowdsourcingevolved from anonymous collective intelligence and is evolving toward credited, open source, collaborative intelligence applications that harness social networks. Evolutionary biologist Ernst Mayr noted that competition among individuals would not contribute to species evolution if individuals were typologically identical. Individual differences are a prerequisite for evolution.[23]This evolutionary principle corresponds to the principle of collaborative autonomy in collaborative intelligence, which is a prerequisite for next generation platforms for crowd-sourcing. Following are examples of crowdsourced experiments with attributes of collaborative intelligence: Ascrowdsourcingevolves from basic pattern recognition tasks toward collaborative intelligence, tapping the unique expertise of individual contributors insocial networks, constraints guideevolutiontoward increased functional effectiveness, co-evolving with systems to tag, credit, time-stamp, and sort content.[24]Collaborative intelligence requires capacity for effective search, discovery, integration, visualization, and frameworks to support collaborative problem-solving.[25] The collaborative intelligence technology category was established in 2022 by MURAL, a software provider ofinteractive whiteboardcollaboration spaces for group ideation and problem-solving.[26]MURAL formalized the collaborative intelligence category through the acquisition of LUMA Institute,[27]an organization that trains people to be collaborative problem solvers through teachinghuman-centered design.[28]The collaborative intelligence technology category is described by MURAL as combining "collaboration design with collaboration spaces and emerging Collaboration Insights™️ ... 
to enable and amplify the potential of the team."[29] The termcollective intelligenceoriginally encompassed both collective and collaborative intelligence, and many systems manifest attributes of both.Pierre Lévycoined the term "collective intelligence" in his book of that title, first published in French in 1994.[30]Lévy defined "collective intelligence" to encompass both collective and collaborative intelligence: "a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and in the effective mobilization of skills".[31]Following publication of Lévy's book, computer scientists adopted the term collective intelligence to denote an application within the more general area to which this term now applies in computer science. Specifically, it denotes an application that processes input from a large number of discrete responders to specific, generally quantitative, questions (e.g., what will the price ofDRAMbe next year?).Algorithmshomogenize this input, maintaining the traditional anonymity of survey responders to generate better-than-average predictions. Recent dependency network studies suggest links between collective and collaborative intelligence. Partial correlation-based Dependency Networks, a new class of correlation-based networks, have been shown to uncover hidden relationships between the nodes of the network. Dror Y. Kenett and his Ph.D. supervisorEshel Ben-Jacobuncovered hidden information about the underlying structure of theU.S. stock marketthat was not present in the standardcorrelation networks, and published their findings in 2011.[32] Collaborative intelligence addresses problems where individual expertise, potentially conflicting priorities of stakeholders, and different interpretations of diverse experts are critical for problem-solving. Potential future applications include: Wikipedia, one of the most popular websites on the Internet, is an exemplar of an innovation network manifesting distributed collaborative intelligence that illustrates principles for experimental business laboratories and start-up accelerators.[33] A new generation of tools to support collaborative intelligence is poised to evolve from crowdsourcing platforms,recommender systems, andevolutionary computation.[25]Existing tools to facilitate group problem-solving include collaborative groupware,synchronous conferencingtechnologies such asinstant messaging,online chat, and shared white boards, which are complemented by asynchronous messaging likeelectronic mail, threaded, moderated discussionforums, web logs, and groupWikis. Managing the Intelligent Enterprise relies on these tools, as well as methods for group member interaction; promotion of creative thinking; group membership feedback; quality control and peer review; and a documented group memory or knowledge base. As groups work together, they develop a shared memory, which is accessible through the collaborative artifacts created by the group, including meeting minutes, transcripts from threaded discussions, and drawings. The shared memory (group memory) is also accessible through the memories of group members; current interest focuses on how technology can support and augment the effectiveness of shared past memory and capacity for future problem-solving. Metaknowledge characterizes how knowledge content interacts with its knowledge context in cross-disciplinary, multi-institutional, or global distributed collaboration.[34]
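The contrast drawn in this section between collective intelligence (anonymous responses homogenized by an algorithm) and collaborative intelligence (credited contributions from identified participants) can be sketched in a few lines. The following Python fragment is a toy illustration only; the participant names and estimates are invented for the example.

# Toy contrast between collective and collaborative aggregation (illustrative only).
from statistics import median

# Collective intelligence: anonymous numeric estimates, homogenized by an algorithm.
anonymous_estimates = [3.10, 2.95, 3.40, 2.80, 3.05]   # e.g. price forecasts (invented data)
consensus = median(anonymous_estimates)                 # a better-than-average point estimate
print("collective consensus:", consensus)

# Collaborative intelligence: contributions stay attributed to identified participants,
# so diverse, non-numeric input can be combined without erasing who said what.
contributions = {
    "alice": "the design should expose a public API",
    "bob":   "reuse the existing open-source parser",
    "chen":  "users need an offline mode",
}
for author, idea in contributions.items():              # credit is preserved, not averaged away
    print(author, "->", idea)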
https://en.wikipedia.org/wiki/Collaborative_intelligence
Acollaborative innovation network(CoIN) is a collaborative innovation practice that uses internet platforms to promote communication and innovation within self-organizing virtual teams. CoINs work across hierarchies and boundaries, where members can exchange ideas and information directly and openly. This collaborative and transparent environment fosters innovation. Peter Gloor describes the phenomenon as "swarm creativity". He says, "CoINs are the best engines to drive innovation."[1] CoINs existed well before the advent of modern communication technology. However, theInternetand instant communication have improved productivity and enabled them to reach a global scale. Today, they rely on the Internet,e-mail, and other communications vehicles for information sharing.[1] According to Gloor, CoINs have five main characteristics:[1] There are also five essential elements of collaborative innovation networks (which Gloor calls "genetic code"):[1] CoINs have developed many disruptive innovations such as theInternet,Linux,the WebandWikipedia. Students with little or no budget created these inventions in universities or labs. They were not focused on the money but on the sense of accomplishment.[1] Faced with creations like the Internet, large companies such asIBMandIntelhave learned to use the principles of open innovation to enhance their research learning curve. They increased or established collaborations with universities, agencies, and small companies to accelerate their processes and launch new services faster.[1] Asheim and Isaksen (2002)[2]conclude that innovative networks contribute to the optimal allocation of resources and promote knowledge transfer performance. However, four factors of collaborative innovation networks affect the performance of CoINs differently:[3] Collaborative innovation still needs to be empowered. A more collaborative approach involving stakeholders such as governments, corporations, entrepreneurs, and scholars is critical to tackling today's main challenges.[according to whom?]
https://en.wikipedia.org/wiki/Collaborative_Innovation_Networks
Thecollaborative human interpreter(CHI) is a proposed software interface forhuman-based computation(first proposed as a programming language on the blog Google Blogoscoped, but implementable via anAPIin virtually any programming language) specially designed for collecting and making use ofhuman intelligencein acomputer program. One typical usage is implementing impossible-to-automate functions. For example, it is currently difficult for a computer to differentiate between images ofmen,womenand non-humans. However, this is easy for people. A programmer using CHI could write a code fragment along these lines (an illustrative sketch of such a fragment is given below). Code for the functioncheckGender(Photo p)can currently only approximate a result, but the task can easily be solved by a person. When the functioncheckGender()is called, the system will send a request to someone, and the person who received the request will process the task and input the result. If the person (the task processor) inputs the valueMALE, that value is returned to the variable result in your program. This querying process can be highly automated. On November 6, 2005,Amazon.comlaunched CHI as its business platform in theAmazon Mechanical Turk.[1]It was the first business application using CHI. CHI was originally mentioned in Philipp Lenssen'sblog.[2]
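The code fragment referred to above can be sketched as follows. This Python version is only an illustration of the idea: the ask_human() helper, the function signature, and the MALE/FEMALE/NON_HUMAN values are assumed placeholders rather than the actual CHI interface or the Amazon Mechanical Turk API.

# Illustrative sketch of the CHI idea (hypothetical names, not a real API).
MALE, FEMALE, NON_HUMAN = "MALE", "FEMALE", "NON_HUMAN"

def ask_human(question, attachment):
    """Placeholder for the CHI runtime: it would send the task to a human
    task processor and block until that person submits an answer."""
    raise NotImplementedError("this call is routed to a person, not computed")

def checkGender(photo_path):
    """An impossible-to-automate function: a person, not the program, decides."""
    return ask_human("Is the person in this photo male, female, or non-human?", photo_path)

# Usage as in the article's example: the program simply receives the value
# (e.g. MALE) that the human task processor entered.
# result = checkGender("photo_0001.jpg")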
https://en.wikipedia.org/wiki/Collaborative_human_interpreter
Ahuman-based computation gameorgame with a purpose(GWAP[1]) is ahuman-based computationtechnique of outsourcing steps within a computational process to humans in an entertaining way (gamification).[2][3] Luis von Ahnfirst proposed the idea of "human algorithm games", or games with a purpose (GWAPs), in order to harness human time and energy for addressing problems that computers cannot yet tackle on their own. He believes that human intellect is an important resource and contribution to the enhancement of computer processing and human computer interaction. He argues that games constitute a general mechanism for using brainpower to solve open computational problems. In this technique, human brains are compared to processors in a distributed system, each performing a small task of a massive computation. However, humans require an incentive to become part of a collective computation. Online games are used as a means to encourage participation in the process.[3] The tasks presented in these games are usually trivial for humans, but difficult for computers. These tasks include labeling images, transcribing ancient texts, common sense or human experience based activities, and more. Human-based computation games motivate people through entertainment rather than an interest in solving computation problems. This makes GWAPs more appealing to a larger audience. GWAPs can be used to help build the semantic web, annotate and classify collected data, crowdsource general knowledge, and improve other general computer processes.[3]GWAPs have a vast range of applications in a variety of areas such as security, computer vision, Internet accessibility, adult content filtering, and Internet search.[2]In applications such as these, games with a purpose have lowered the cost of annotating data and increased the level of human participation. The first human-based computation game, or game with a purpose, was created in 2004 byLuis von Ahn. The idea was that ESP would use human power to help label images. The game is a two-player agreement game that relies on players to come up with labels for images and attempt to guess what labels a partner is coming up with. ESP used microtasks, simple tasks that can be solved quickly without the need of any credentials.[4] Games with a purpose categorized as output-agreement games are microtask games in which players are matched into pairs and the randomly assigned partners attempt to match output with each other given a shared visible input.ESPis an example of an output-agreement game. Given an image, the ESP Game can be used to determine what objects are in the image, but cannot be used to determine the location of the object in the image. Location information is necessary for training and testing computer vision algorithms, so the data collected by the ESP Game is not sufficient. Thus, to deal with this problem, a new type of microtask game known as inversion-problem games was introduced by the creator ofESP, von Ahn, in 2006. Peekaboom extended upon ESP and had players associate labels with a specific region of an image. In inversion-problem games, two players are randomly paired together. One is assigned as the describer and the other is the guesser. The describer is given an input, which the guesser must reproduce given hints from the describer. 
In Peekaboom, for example, the describer slowly reveals small sections of an image until the guesser correctly guesses the label provided to the describer.[5] In input-agreement games, two randomly paired players are each given an input that is hidden from the other player. Player inputs will either match or be different. The goal of these games is for players to tag their input such that the other player can determine whether or not the two inputs match. In 2008, Edith L. M. Law created the input-agreement game called TagATune. In this game, players label sound clips. In TagATune, players describe sound clips and guess if their partner's sound clip is the same as their own given their partner's tags.[6] Macrotask games, unlike microtask games, contain complex problems that are usually left to experts to solve. In 2008, a macrotask game calledFolditwas created by Seth Cooper. The idea was that players would attempt to fold a three-dimensional representation of a protein. This task was a hard problem for computers to automate completely. Locating the biologically relevant native conformation of a protein is a difficult computational challenge given the very large size of the search space. Through gamification and the implementation of user-friendly versions of algorithms, players are able to perform this complex task without much knowledge of biology.[7][8][9] TheApetopiagame helps determine perceived color differences. Players' choices are used to model better color metrics.[10]TheApetopiagame, which was launched by theUniversity of Berlin, is designed to help scientists understand perceived color differences. This game is intended to provide data on how the shades of color are perceived by people in order to model the best color parameters. Artigo[11]is a Web platform currently offering six artwork annotation games as well as an artwork search engine in English, French, and German. Three of Artigo's games, theARTigogame,ARTigo Taboo, andTagATag, are variations[12]ofLuis von Ahn'sESP game(laterGoogle Image Labeler). Three other games of the Artigo platform,Karido,[13]Artigo-Quiz, andCombino, have been conceived so as to complement the data collected by the three aforementioned ESP game variations.[14][15]Artigo's search engine relies on an original tensor latent semantic analysis.[15][16] As of September 2013, Artigo had over 30,000 (pictures of) artworks mostly of Europe and of the "long 19th century", from the Promotheus Image Archive,[17]theRijksmuseum, Amsterdam, the Netherlands, theStaatliche Kunsthalle Karlsruhe, Karlsruhe, Germany, and theUniversity Museum of Contemporary Art, campus of the University of Massachusetts Amherst, USA. From 2008 through 2013, Artigo collected over 7 million tags (mostly in German), 180,000 players (about a tenth of whom are registered), and on average 150 players per day.[18] Artigo is a joint research endeavor of art historians and computer scientists aiming both at developing an artwork search engine and at data analysis in art history. The first example was theESP game, an effort inhuman computationoriginally conceived byLuis von AhnofCarnegie Mellon University, which labels images. To make it an entertaining effort for humans, two players attempt to assign the same labels to an image. The game records the results of matches as image labels and the players enjoy the encounter because of the competitive and timed nature of it. 
To ensure that people do their best to accurately label the images, the game requires two people (chosen at random and unknown to each other), who have only the image in common, to choose the same word as an image label. This discourages vandalism, because vandalism would be self-defeating as a strategy. The ESP game is a human-based computation game developed to address the problem of creating difficult metadata. The idea behind the game is to use the computational power of humans to perform a task that computers cannot (originally, image recognition) by packaging the task as a game. Google bought a licence to create its own version of the game (Google Image Labeler) in 2006 in order to return better search results for its online images.[19] The license of the data acquired by Ahn's ESP Game, or the Google version, is not clear.[clarification needed] Google's version was shut down on 16 September 2011 as part of the closure of Google Labs. Peekaboom is a web-based game that helps computers locate objects in images by using human gameplay to collect valuable metadata. Humans understand and can analyze everyday images with minimal effort (what objects are in the image, their location, as well as background and foreground information), while computers have trouble with these basic visual tasks.[20] Peekaboom has two main components: "Peek" and "Boom". Two random players from the Web participate by taking different roles in the game: when one player is Peek, the other is Boom. Peek starts out with a blank screen, while Boom starts with an image and a word related to it. The goal of the game is for Boom to reveal parts of the image to Peek, while Peek guesses words associated with the revealed parts of the image. As Peek's guesses get closer to the target word, Boom can indicate whether they are hot or cold. When Peek guesses correctly, the players get points and then switch roles.[5] EteRNA is a game in which players attempt to design RNA sequences that fold into a given configuration. The widely varied solutions from players, often non-biologists, are evaluated to improve computer models predicting RNA folding. Some designs are synthesized in order to evaluate their actual folding dynamics and to compare them directly with the computer models. Eyewire is a game for finding the connectome of the retina.[21] Crowdsourcing has been gamified in games like Foldit, a game designed by the University of Washington, in which players compete to manipulate proteins into more efficient structures. A 2010 paper in the science journal Nature credited Foldit's 57,000 players with providing useful results that matched or outperformed algorithmically computed solutions.[22] Foldit, while also a GWAP, has a different method of tapping the collective human brain. The game challenges players to use their intuition about three-dimensional space to help with protein-folding algorithms. Unlike the ESP game, which focuses on the results that humans are able to provide, Foldit tries to understand how humans approach complicated three-dimensional objects. By 'watching' how humans play the game, researchers hope to be able to improve their own computer programs. Instead of simply performing tasks that computers cannot do, this GWAP asks humans to help make current machine algorithms better. Guess the Correlation is a game with a purpose challenging players to guess the true Pearson correlation coefficient in scatter plots.
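For reference, the quantity that Guess the Correlation players estimate is the sample Pearson correlation coefficient of the plotted points. The short Python sketch below (illustrative only, not the game's actual code) computes it for a toy scatter and scores a guess by its absolute error.

import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# A small, positively correlated scatter and a player's guess.
xs = [1, 2, 3, 4, 5, 6]
ys = [1.2, 1.9, 3.4, 3.8, 5.1, 5.9]
true_r = pearson_r(xs, ys)
guess = 0.80
print(f"true r = {true_r:.2f}, guess error = {abs(true_r - guess):.2f}")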
The collected data is used to study which features of scatter plots skew human perception of the true correlation. The game was developed by Omar Wagih at the European Bioinformatics Institute.[23][24] JeuxDeMots[fr][25] is a game that aims to build a large semantic network. People are asked to associate terms according to instructions provided for a given word. As of March 2021, the French version of the resulting network contains more than 350 million relations between 5 million lexical items. The project was developed by academics at the Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier / Montpellier 2 University. Nanocrafter is a game about assembling pieces of DNA into structures with functional properties, such as logic circuits, to solve problems.[26] Like Foldit, it is developed at the University of Washington.[27] OnToGalaxy is a game in which players help to acquire common-sense knowledge about words. Implemented as a space shooter, OnToGalaxy is quite different in its design from other human computation games.[28] The game was developed by Markus Krause at the University of Bremen. Phrase Detectives is an "annotation game" geared towards lovers of literature, grammar and language. It lets users indicate relationships between words and phrases to create a resource that is rich in linguistic information. Players are awarded points for their contributions and are featured on a leaderboard.[29] It was developed by academics Jon Chamberlain, Massimo Poesio and Udo Kruschwitz at the University of Essex. Phylo[30] allows gamers to contribute to the greater good by helping to decode genetic data linked to genetic diseases. By playing the game and aligning the colored squares, players help the scientific community get a step closer to solving the age-old problem of multiple sequence alignment, which is too large for computers to solve exactly on their own. The goal is to understand how and where the function of an organism is encoded in its DNA. The game explains that "a sequence alignment is a way of arranging the sequences of DNA, RNA or protein to identify regions of similarity". Play to Cure: Genes in Space is a mobile game that uses the collective force of players to analyse real genetic data to help with cancer research.[31] Quantum Moves is a dexterity and spatial problem-solving game in which players move slippery particles across quantum space. Players' solutions on various levels are used to program and fine-tune a real quantum computer at Aarhus University.[32] The game was first developed as a graphical interface for quantum simulation and education in 2012; in 2013 it was released to the public in a user-friendly form, and it was continually updated throughout 2014. Reverse The Odds is a mobile-based game that helps researchers learn about analyzing cancers. By incorporating data analysis into Reverse The Odds, researchers can get thousands of players to help them learn more about different cancers, including head and neck, lung, and bladder cancer.[33] Robot Trainer is a game with a purpose that aims at gathering commonsense knowledge. The player takes the role of a teacher. The goal of the game is to train a robot that will travel in deep space and carry a significant amount of human knowledge so that it can teach other humans in the future, far away from Earth. The game has three levels.
At each level, the player gets a specific task, like building knowledge rules to answer questions, resolving conflicts and validating other players’ knowledge rules. Players are rewarded for submitting knowledge rules that help the robot answer a question and match the contribution of their fellow teachers.[34] Sea Hero Questis an iOS and Android based game that helps advancing the research in the field of dementia.[35] In the browser-based gameSmorball,[36]players are asked to type the words they see as quickly and accurately as possible to help their team to victory in the fictional sport of Smorball. The game presents players with phrases from scanned pages in the Biodiversity Heritage Library. After verification, the words players type are sent to the libraries that store the corresponding pages, allowing those pages to be searched and data mined and ultimately making historic literature more usable for institutions, scholars, educators, and the public. The game was developed byTiltfactor Lab. Train Robotsis an annotation game similar to Phrase Detectives. Players are shown pairs of before/after images of a robot arm and blocks on a board, and asked to enter commands to instruct the robot to move from the first configuration to the second. The game collects natural language data for training linguistic and robotic processing systems.[37] The Verbosity game elicits commonsense knowledge from players. One player is the "Narrator" and is given a word, like "computer". The narrator is allowed to send a hint to the "Guesser". The narrator can select one out of several templates, such as "It contains a ", and can type in one word into the blank (except that it cannot contain the word as a substring, such as "supercomputer"). The guesser then types in a guess, and the narrator can say if it is "hotter" or "colder" than the previous guess.[38] The Wikidata Game represents a gamification approach to let users help resolve questions regarding persons, images etc. and thus automatically edit the corresponding data items in Wikidata, the structured knowledge repository supporting Wikipedia and Wikimedia Commons, the other Wikimedia projects, and more.[39][40] ZombiLingo is a French game where players are asked to find the right head (a word or expression) to gain brains and become a more and more degraded zombie. While playing, they in fact annotate syntactic relations in French corpora.[41][42]It was designed and developed by researchers fromLORIAandUniversité Paris-Sorbonne.[43] While there are many games with a purpose that deal with visual data, there are few that attempt to label audio data. Annotating audio data can be used to search and index music and audio databases as well as generate training data formachine learning. However, currently manually labeling data is costly. Thus, one way to lessen the cost is to create a game with a purpose with the intention of labeling audio data.[44]TagATune is an audio based online game that has human players tag and label descriptions of sounds and music. TagATune is played by randomly paired partners. The partners are given three minutes to come up with agreed descriptions for as many sounds as possible. In each round, a sound is randomly selected from the database and presented to the partners. The description then becomes a tag that can be used for search when it is agreed upon by enough people. 
After the first round, the comparison round presents a tune and asks players to compare it to one of two other tunes of the same type.[6] MajorMiner is an online game in which players listen to 10 seconds of a randomly selected sound and then describe the sound with tags. If one of the tags a player chooses matches that of another player, each player gains one point; if it is the first time that tag has been used for that specific sound, the player gains two points.[45] The goal is to use player input to research automatic music labelling and recommendation based on the audio itself.[citation needed] In games of the wikiracing type, players are given two Wikipedia articles (a start and a target) and are tasked with finding a path from the start article to the target article exclusively by clicking hyperlinks encountered along the way. The path data collected via such a game sheds light on the ways in which people reason about encyclopedic knowledge and how they interact with complex networks.[46]
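The navigation task in such wikiracing games can be modelled as path-finding in the article hyperlink graph. The following Python sketch, which uses a made-up toy link graph rather than real Wikipedia data, finds a shortest click path with breadth-first search; recorded human paths can then be compared against such a baseline.

from collections import deque

def shortest_click_path(links, start, target):
    """Breadth-first search for a shortest hyperlink path from start to target.
    `links` maps each article title to the titles it links to."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbour in links.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no hyperlink path exists

# Toy link graph for illustration only.
links = {
    "Coffee": ["Espresso", "Ethiopia"],
    "Espresso": ["Italy"],
    "Ethiopia": ["Africa"],
    "Italy": ["Europe"],
}
print(shortest_click_path(links, "Coffee", "Europe"))
# -> ['Coffee', 'Espresso', 'Italy', 'Europe']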
https://en.wikipedia.org/wiki/Game_with_a_purpose
The term "computer", in use from the early 17th century (the first known written reference dates from 1613),[1]meant "one who computes": a person performing mathematicalcalculations, beforeelectronic calculatorsbecame available.Alan Turingdescribed the "human computer" as someone who is "supposed to be following fixed rules; he has no authority to deviate from them in any detail."[2]Teams of people, often women from the late nineteenth century onwards, were used to undertake long and often tedious calculations; the work was divided so that this could be done in parallel. The same calculations were frequently performed independently by separate teams to check the correctness of the results. Since the end of the 20th century, the term "human computer" has also been applied to individuals with prodigious powers ofmental arithmetic, also known asmental calculators. AstronomersinRenaissancetimes used that term about as often as they called themselves "mathematicians" for their principal work of calculating thepositions of planets. They often hired a "computer" to assist them. For some people, such asJohannes Kepler, assisting a scientist in computation was a temporary position until they moved on to greater advancements. Before he died in 1617,John Napiersuggested ways by which "the learned, who perchance may have plenty of pupils and computers" might construct an improvedlogarithm table.[3]: p.46 Computing became more organized when the FrenchmanAlexis Claude Clairaut(1713–1765) divided the computation to determine the time of the return ofHalley's Cometwith two colleagues,Joseph LalandeandNicole-Reine Lepaute.[4]Human computers continued plotting the future movements of astronomical objects to create celestial tables foralmanacsin the late 1760s.[5] The computers working on theNautical Almanacfor the British Admiralty includedWilliam Wales,Israel LyonsandRichard Dunthorne.[6]The project was overseen byNevil Maskelyne.[7]Maskelyne would borrow tables from other sources as often as he could in order to reduce the number of calculations his team of computers had to make.[8]Women were generally excluded, with some exceptions such asMary Edwardswho worked from the 1780s to 1815 as one of thirty-five computers for the BritishNautical Almanacused for navigation at sea. 
The United States also worked on their own version of a nautical almanac in the 1840s, withMaria Mitchellbeing one of the best-known computers on the staff.[9] Other innovations in human computing included the work done by a group of boys who worked in the Octagon Room of theRoyal Greenwich Observatoryfor Astronomer RoyalGeorge Airy.[10]Airy's computers, hired after 1835, could be as young as fifteen, and they were working on a backlog of astronomical data.[11]The way that Airy organized the Octagon Room with a manager, pre-printed computing forms, and standardized methods of calculating and checking results (similar to the way theNautical Almanaccomputers operated) would remain a standard for computing operations for the next 80 years.[12] Women were increasingly involved in computing after 1865.[13]Private companies hired them for computing and to manage office staff.[13] In the 1870s, the United StatesSignal Corpscreated a new way of organizing human computing to track weather patterns.[14]This built on previous work from theUS Navyand theSmithsonian meteorological project.[15]The Signal Corps used a small computing staff that processed data that had to be collected quickly and finished in "intensive two-hour shifts".[16]Each individual human computer was responsible for only part of the data.[14] In the late nineteenth centuryEdward Charles Pickeringorganized the "Harvard Computers".[17]The first woman to approach them,Anna Winlock, askedHarvard Observatoryfor a computing job in 1875.[18]By 1880, all of the computers working at the Harvard Observatory were women.[18]The standard computer pay started at twenty-five cents an hour.[19]There would be such a huge demand to work there, that some women offered to work for the Harvard Computers for free.[20]Many of the women astronomers from this era were computers with possibly the best-known beingFlorence Cushman,Henrietta Swan Leavitt, andAnnie Jump Cannon, who worked with Pickering from 1888, 1893, and 1896 respectively. Cannon could classify stars at a rate of three per minute.[21]Mina Fleming, one of the Harvard Computers, publishedThe Draper Catalogue of Stellar Spectrain 1890.[22]The catalogue organized stars byspectral lines.[22]The catalogue continued to be expanded by the Harvard Computers and added new stars in successive volumes.[23]Elizabeth Williamswas involved in calculations in the search for a new planet,Pluto, at theLowell Observatory. In 1893,Francis Galtoncreated the Committee for Conducting Statistical Inquiries into the Measurable Characteristics of Plants and Animals which reported to theRoyal Society.[24]The committee used advanced techniques for scientific research and supported the work of several scientists.[24]W.F. 
Raphael Weldon, the first scientist supported by the committee worked with his wife, Florence Tebb Weldon, who was his computer.[24]Weldon used logarithms and mathematical tables created byAugust Leopold Crelleand had no calculating machine.[25]Karl Pearson, who had a lab at theUniversity of London, felt that the work Weldon did was "hampered by the committee".[26]However, Pearson did create a mathematical formula that the committee was able to use for data correlation.[27]Pearson brought his correlation formula to his own Biometrics Laboratory.[27]Pearson had volunteer and salaried computers who were both men and women.[28]Alice Leewas one of his salaried computers who worked withhistogramsand thechi-squaredstatistics.[29]Pearson also worked withBeatriceandFrances Cave-Brown-Cave.[29]Pearson's lab, by 1906, had mastered the art ofmathematical tablemaking.[29] Human computers were used to compile 18th and 19th century Western Europeanmathematical tables, for example those fortrigonometryandlogarithms. Although these tables were most often known by the names of the principalmathematicianinvolved in the project, such tables were often in fact the work of an army of unknown and unsung computers. Ever more accurate tables to a high degree of precision were needed for navigation and engineering. Approaches differed, but one was to break up the project into a form ofpiece workcompleted at home. The computers, often educated middle-class women whom society deemed it unseemly to engage in the professions or go out to work, would receive and send back packets of calculations by post.[30]The Royal Astronomical Society eventually gave space to a new committee, the Mathematical Tables Committee, which was the only professional organization for human computers in 1925.[31] Human computers were used to predict the effects of building theAfsluitdijkbetween 1927 and 1932 in theZuiderzeein theNetherlands. The computer simulation was set up byHendrik Lorentz.[32] A visionary application to meteorology can be found in the scientific work ofLewis Fry Richardsonwho, in 1922, estimated that 64,000 humans could forecast the weather for the whole globe by solving the attending differentialprimitive equationsnumerically.[33]Around 1910 he had already used human computers to calculate the stresses inside a masonry dam.[34] It was not untilWorld War Ithat computing became a profession. "The First World War required large numbers of human computers. Computers on both sides of the war produced map grids, surveying aids, navigation tables and artillery tables. With the men at war, most of these new computers were women and many were college educated."[35]This would happen again duringWorld War II, as more men joined the fight, college educated women were left to fill their positions. One of the first female computers, Elizabeth Webb Wilson, was hired by the Army in 1918 and was a graduate ofGeorge Washington University. Wilson "patiently sought a war job that would make use of her mathematical skill. In later years, she would claim that the war spared her from the 'Washington social whirl', the rounds of society events that should have procured for her a husband"[35]and instead she was able to have a career. After the war, Wilson continued with a career in mathematics and became anactuaryand turned her focus tolife tables. 
Human computers played integral roles in the World War II war effort in the United States, and because of the depletion of the male labor force due to thedraft, many computers during World War II were women, frequently with degrees in mathematics. In the 1940s, women were hired to examine nuclear and particle tracks left on photographic emulsions.[36]In theManhattan Project, human computers working with a variety of mechanical aids assisted numerical studies of the complex formulas related tonuclear fission.[37] Human computers were involved in calculating ballistics tables during World War I.[38]Between the two world wars, computers were used in the Department of Agriculture in the United States and also atIowa State College.[39]The human computers in these places also used calculating machines and early electrical computers to aid in their work.[40]In the 1930s, The Columbia University Statistical Bureau was created byBenjamin Wood.[41]Organized computing was also established atIndiana University, theCowles Commissionand theNational Research Council.[42] Following World War II, theNational Advisory Committee for Aeronautics(NACA) used human computers in flight research to transcribe raw data from celluloid film andoscillographpaper and then, usingslide rulesand electriccalculators, reduced the data to standard engineering units.Margot Lee Shetterly's biographical book,Hidden Figures(made into amovie of the same namein 2016), depicts African-American women who served as human computers atNASAin support of theFriendship 7, the first American crewed mission into Earth orbit.[43]NACA had begun hiring black women as computers from 1940.[44]One such computer wasDorothy Vaughanwho began her work in 1943 with theLangley Research Centeras a special hire to aid the war effort,[45]and who came to supervise theWest Area Computers, a group of African-American women who worked as computers at Langley. Human computing was, at the time, considered menial work. On November 8, 2019, theCongressional Gold Medalwas awarded "In recognition of all the women who served as computers, mathematicians, and engineers at the National Advisory Committee for Aeronautics and the National Aeronautics and Space Administration (NASA) between the 1930s and the 1970s."[46] As electrical computers became more available, human computers, especially women, were drafted as some of the firstcomputer programmers.[47]Because the six people responsible for setting up problems on theENIAC(the first general-purpose electronic digital computer built at theUniversity of Pennsylvaniaduring World War II) were drafted from a corps of human computers, the world's first professional computer programmers were women, namely:Kay McNulty,Betty Snyder,Marlyn Wescoff,Ruth Lichterman,Betty Jean Jennings, andFran Bilas.[48] The term "human computer" has been recently used by a group of researchers who refer to their work as "human computation".[49]In this usage, "human computer" refers to activities of humans in the context ofhuman-based computation(HBC). This use of "human computer" is debatable for the following reason: HBC is a computational technique where a machine outsources certain parts of a task to humans to perform, which are not necessarily algorithmic. In fact, in the context of HBC most of the time humans are not provided with a sequence of exact steps to be executed to yield the desired result; HBC is agnostic about how humans solve the problem. This is why "outsourcing" is the term used in the definition above. 
The use of humans in the historical role of "human computers" forHBCis very rare.
https://en.wikipedia.org/wiki/Human_computer
Human–computer information retrieval (HCIR) is the study and engineering of information retrieval techniques that bring human intelligence into the search process. It combines the fields of human-computer interaction (HCI) and information retrieval (IR) and creates systems that improve search by taking into account the human context, or through a multi-step search process that provides the opportunity for human feedback. The term human–computer information retrieval was coined by Gary Marchionini in a series of lectures delivered between 2004 and 2006.[1] Marchionini's main thesis is that "HCIR aims to empower people to explore large-scale information bases but demands that people also take responsibility for this control by expending cognitive and physical energy." In 1996 and 1998, a pair of workshops at the University of Glasgow on information retrieval and human–computer interaction sought to address the overlap between these two fields. Marchionini notes the impact of the World Wide Web and the sudden increase in information literacy – changes that were only embryonic in the late 1990s. A few workshops have focused on the intersection of IR and HCI. The Workshop on Exploratory Search, initiated by the University of Maryland Human-Computer Interaction Lab in 2005, alternates between the Association for Computing Machinery Special Interest Group on Information Retrieval (SIGIR) and Special Interest Group on Computer-Human Interaction (CHI) conferences. Also in 2005, the European Science Foundation held an Exploratory Workshop on Information Retrieval in Context, and the first Workshop on Human Computer Information Retrieval was held in 2007 at the Massachusetts Institute of Technology. HCIR includes various aspects of IR and HCI. These include exploratory search, in which users generally combine querying and browsing strategies to foster learning and investigation; information retrieval in context (i.e., taking into account aspects of the user or environment that are typically not reflected in a query); and interactive information retrieval, which Peter Ingwersen defines as "the interactive communication processes that occur during the retrieval of information by involving all the major participants in information retrieval (IR), i.e. the user, the intermediary, and the IR system."[2] A key concern of HCIR is that IR systems intended for human users be implemented and evaluated in a way that reflects the needs of those users.[3] Most modern IR systems employ a ranked retrieval model, in which the documents are scored based on the probability of the document's relevance to the query.[4] In this model, the system presents only the top-ranked documents to the user. Such systems are typically evaluated by their mean average precision over a set of benchmark queries from organizations like the Text Retrieval Conference (TREC). Because of its emphasis on using human intelligence in the information retrieval process, HCIR requires a different evaluation model – one that combines evaluation of the IR and HCI components of the system. A key area of research in HCIR therefore involves the evaluation of these systems. Early work on interactive information retrieval, such as Juergen Koenemann and Nicholas J.
Belkin's 1996 study of different levels of interaction for automatic query reformulation, leverages the standard IR measures of precision and recall but applies them to the results of multiple iterations of user interaction, rather than to a single query response.[5] Other HCIR research, such as Pia Borlund's IIR evaluation model, applies a methodology more reminiscent of HCI, focusing on the characteristics of users, the details of experimental design, and so on.[6] HCIR researchers have put forth a number of goals for systems that give the user more control in determining relevant results.[1][7] In short, information retrieval systems are expected to operate in the way that good libraries do. Systems should help users to bridge the gap between data or information (in the very narrow, granular sense of these terms) and knowledge (processed data or information that provides the context necessary to inform the next iteration of an information-seeking process). That is, good libraries provide both the information a patron needs and a partner in the learning process — the information professional — to navigate that information, make sense of it, preserve it, and turn it into knowledge (which in turn creates new, more informed information needs). The techniques associated with HCIR emphasize representations of information that use human intelligence to lead the user to relevant results. These techniques also strive to allow users to explore and digest the dataset without penalty, i.e., without expending unnecessary costs of time, mouse clicks, or context shift. Many search engines have features that incorporate HCIR techniques. Spelling suggestions and automatic query reformulation provide mechanisms for suggesting potential search paths that can lead the user to relevant results. These suggestions are presented to the user, putting control of selection and interpretation in the user's hands. Faceted search enables users to navigate information hierarchically, going from a category to its sub-categories, but choosing the order in which the categories are presented. This contrasts with traditional taxonomies in which the hierarchy of categories is fixed and unchanging. Faceted navigation, like taxonomic navigation, guides users by showing them available categories (or facets), but does not require them to browse through a hierarchy that may not precisely suit their needs or way of thinking.[8] Lookahead provides a general approach to penalty-free exploration. For example, various web applications employ AJAX to automatically complete query terms and suggest popular searches. Another common example of lookahead is the way in which search engines annotate results with summary information about those results, including both static information (e.g., metadata about the objects) and "snippets" of document text that are most pertinent to the words in the search query. Relevance feedback allows users to guide an IR system by indicating whether particular results are more or less relevant.[9] Summarization and analytics help users digest the results that come back from the query. Summarization here is intended to encompass any means of aggregating or compressing the query results into a more human-consumable form. Faceted search, described above, is one such form of summarization. Another is clustering, which analyzes a set of documents by grouping similar or co-occurring documents or terms. Clustering allows the results to be partitioned into groups of related documents.
For example, a search for "java" might return clusters forJava (programming language),Java (island), orJava (coffee). Visual representation of datais also considered a key aspect of HCIR. The representation of summarization or analytics may be displayed as tables, charts, or summaries of aggregated data. Other kinds ofinformation visualizationthat allow users access to summary views of search results includetag cloudsandtreemapping.
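As a concrete illustration of the faceted navigation and summarization techniques described in this section, the following Python sketch counts facet values over a toy result set and narrows the results when a user selects a facet value. The documents, field names, and matching rule are assumptions made for illustration, not the API of any particular search engine.

from collections import Counter

documents = [
    {"title": "Brewing espresso", "topic": "coffee", "language": "en"},
    {"title": "Java (island) travel notes", "topic": "travel", "language": "en"},
    {"title": "Einführung in Java", "topic": "programming", "language": "de"},
    {"title": "Java performance tips", "topic": "programming", "language": "en"},
]

def facet_counts(docs, field):
    """Summarize a result set by counting the values of one facet field."""
    return Counter(doc[field] for doc in docs)

def narrow(docs, field, value):
    """Apply a user's facet selection by filtering the current result set."""
    return [doc for doc in docs if doc[field] == value]

# A naive keyword match stands in for the ranked retrieval step.
results = [doc for doc in documents if "java" in doc["title"].lower()]
print(facet_counts(results, "topic"))   # Counter({'programming': 2, 'travel': 1})
narrowed = narrow(results, "topic", "programming")
print([doc["title"] for doc in narrowed])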
https://en.wikipedia.org/wiki/Human_Computer_Information_Retrieval
Social computingis an area ofcomputer sciencethat is concerned with the intersection ofsocial behaviorand computational systems. It is based on creating or recreating social conventions and social contexts through the use of software and technology. Thus,blogs,email,instant messaging,social network services,wikis,social bookmarkingand other instances of what is often calledsocial softwareillustrate ideas from social computing. Social computing begins with the observation that humans—and human behavior—are profoundly social. From birth, humans orient to one another, and as they grow, they develop abilities for interacting with each other. This ranges from expression and gesture to spoken and written language. As a consequence, people are remarkably sensitive to the behavior of those around them and make countless decisions that are shaped by their social context. Whether it is wrapping up a talk when the audience starts fidgeting, choosing the crowded restaurant over the nearly deserted one, or crossing the street against the light because everyone else is doing so, social information provides a basis for inferences, planning, and coordinating activity. The premise of 'Social Computing' is that it is possible to design digital systems that support useful functionality by making socially produced information available to their users. This information may be provided directly, as when systems show the number of users who have rated a review as helpful or not. Or the information may be provided after being filtered and aggregated, as is done when systems recommend a product based on what else people with similar purchase history have purchased. Alternatively, the information may be provided indirectly, as is the case with Google's page rank algorithms which orders search results based on the number of pages that (recursively) point to them. In all of these cases, information that is produced by a group of people is used to provide or enhance the functioning of a system. Social computing is concerned with systems of this sort and the mechanisms and principles that underlie them. Social computing can be defined as follows: "Social Computing" refers to systems that support the gathering, representation, processing, use, and dissemination of information that is distributed across social collectivities such as teams, communities, organizations, and markets. Moreover, the information is not "anonymous" but is significantly precise because it is linked to people, who are in turn linked to other people.[1] More recent definitions, however, have foregone the restrictions regarding anonymity of information, acknowledging the continued spread and increasing pervasiveness of social computing. As an example, Hemmatazad, N. (2014) defined social computing as "the use of computational devices to facilitate or augment the social interactions of their users, or to evaluate those interactions in an effort to obtain new information."[2] PLATOmay be the earliest example of social computing in a live production environment with initially hundreds and soon thousands of users, on the PLATO computer system based in theUniversity of Illinoisat Urbana Champaign in 1973, when social software applications for multi-userchat rooms, group messageforums, andinstant messagingappeared all within that year. In 1974,emailwas made available as well as the world's first online newspaper called NewsReport, which supported content submitted by the user community as well as written by editors and reporters. 
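The "filtered and aggregated" case described earlier in this article, recommending a product from the purchases of people with similar purchase histories, is the basic idea behind collaborative filtering. The following Python sketch is a deliberately simplified illustration built on assumed toy data and an assumed overlap-based scoring rule; it is not a description of any production recommender system.

from collections import Counter

# Toy purchase histories, keyed by user.
purchases = {
    "alice": {"camera", "tripod", "sd card"},
    "bob": {"camera", "tripod", "lens"},
    "carol": {"novel", "bookmark"},
}

def recommend(user, purchases, top_n=3):
    """Score unseen items by how often they were bought by users whose
    histories overlap with `user`, weighted by the size of the overlap."""
    own = purchases[user]
    scores = Counter()
    for other, items in purchases.items():
        if other == user:
            continue
        overlap = len(own & items)       # similarity = number of shared purchases
        if overlap == 0:
            continue
        for item in items - own:         # only items the user does not own yet
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("alice", purchases))  # -> ['lens']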
Social computing has to do with supporting "computations" that are carried out by groups of people, an idea that has been popularized in James Surowiecki's book, The Wisdom of Crowds. Examples of social computing in this sense include collaborative filtering, online auctions, reputation systems, computational social choice, tagging, and verification games. The social information processing page focuses on this sense of social computing. The idea of engaging users through websites with which they can interact was first brought forth by Web 2.0 and was an advance over Web 1.0, in which, according to Cormode, G. and Krishnamurthy, B. (2008), "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content."[2] Web 2.0 provided functionalities that allowed for low-cost web-hosting services and introduced browser-based features that used a basic information structure and extended it to as many devices as possible using HTTP.[3] Of particular interest in the realm of social computing is social software for enterprise. Sometimes referred to as "Enterprise 2.0",[4] a term derived from Web 2.0, this generally refers to the use of social computing in corporate intranets and in other medium- and large-scale business environments. It consists of a class of tools that allow for networking and social change within businesses; these tools layer business applications on top of Web 2.0 and have brought forth several applications and collaborative software products with specific uses. In finance, electronic negotiation, which first came up in 1969 and was adapted over time to suit the networking needs of financial markets, represents an important and desirable coordination mechanism for electronic markets. Negotiation between agents (software agents as well as humans) allows cooperative and competitive sharing of information to determine a proper price. Recent research and practice have also shown that electronic negotiation is beneficial for the coordination of complex interactions among organizations. Electronic negotiation has recently emerged as a very dynamic, interdisciplinary research area covering aspects of disciplines such as economics, information systems, computer science, communication theory, sociology and psychology. Social computing has become more widely known because of its relationship to a number of recent trends. These include the growing popularity of social software and Web 3.0, increased academic interest in social network analysis, the rise of open source as a viable method of production, and a growing conviction that all of this can have a profound impact on daily life. These trends were also discussed in a February 13, 2006 paper by the market research company Forrester Research. Socially intelligent computing is a new term that refers to recent efforts to understand the ways in which systems of people and computers will prove useful as intermediaries between people and the tools they use. These systems result in new behaviors that occur as a result of the complex interaction between humans and computers and can be explained by several different areas of science.
The foundations of social computing are deeply vested in an understanding of social psychology and cyberpsychology. Social psychology covers topics such as decision making, persuasion, group behavior, personal attraction, and factors that promote health and well-being.[5] Cognitive science also plays a large role in understanding social computing and human behavior on networked platforms driven by personal needs and means. Sociology is a factor as well, since overall environments shape how individuals choose to interact.[6] Multiple areas of social computing have expanded the threshold of knowledge in this discipline, each with a focus and goal that provides a deeper understanding of the social behavior of users who interact through some variation of social computing. Social software can be any computational system that supports social interactions among groups of people. The following are examples of such systems. Social media has become one of the most widely used outlets for interacting through computers and mobile phones. Though there are many different social media platforms, they all serve the same primary purpose of creating social interaction through computers, mobile devices, and similar technologies. Social media has evolved beyond interaction through text to include pictures, videos, GIFs, and many other forms of multimedia, giving users an enhanced way to interact with other users while expressing and sharing content more widely. Within the last couple of decades, social media has grown enormously and produced many well-known applications within the social computing arena. These sites also serve as digital marketing platforms, a use that is growing rapidly. Through social networking, people are able to use platforms to build or enhance social networks and relations among people, typically people who share similar backgrounds or interests or who participate in the same activities. For more details see social networking service. A wiki gives computing users, both novice and expert, a chance to collaborate toward a common goal and provide content to the public. Through the collaboration and efforts of many, a wiki page has no limit on the number of improvements and edits that can be made. A blog, in social computing terms, is more a way for people to follow a particular user, group, or company and comment on its progress toward the particular ideal covered in the blog. This allows users to interact with one another using the content provided by the page admin as the main subject. Five widely used blogging platforms[3] include Tumblr, WordPress, Squarespace, Blogger, and Posterous. These sites enable users, whether a person, company, or organization, to express ideas, thoughts, and opinions on a single subject or on a variety of subjects. There are also weblogging services, sites that host blogs, such as Myspace and Xanga. Blogs and weblogging are similar in that both act as a form of social computing in which social relations are formed, for example by gaining followers, trending through hashtags, or commenting on a post to offer an opinion on a blog.
According to a study conducted by Rachael Kwai Fun IP and Christian Wagner,[5] some features of weblogs that attract users, and that support blogs and weblogs as an important aspect of social computing for forming and strengthening relationships, are: content management tools, community-building tools, time structuring, search by category, commentary, and the ability to secure closed blogs. Blogs are also widely used in social computing to understand human behavior within online communities through an approach called social network analysis. Social network analysis (SNA) is "a discipline of social science that seeks to explain social phenomena through a structural interpretation of human interaction both as a theory and a methodology".[6] Certain links that occur in blogs (weblogs in this case) have different functions and convey different types of information, such as permalinks, blogrolls, comments, and trackbacks. Online gaming is the social behavior of playing an online game while interacting with other users. Online gaming can be done on a multitude of different platforms; common ones include personal computers, Xbox, PlayStation, and many other gaming consoles, both stationary and mobile. Many of these applications include messaging between users. Online dating has created a community of websites such as OkCupid, eHarmony, and Match.com. These platforms provide users with a way to interact with others who share the goal of creating new relationships. The interaction between users on such sites differs from platform to platform, but the goal is simple: create relationships through online social interaction. People can meet more possible companions through online dating websites than they could at work or in their neighborhood. Groups of people interact with these social computing systems in a variety of ways, all of which may be described as socially intelligent computing. Crowdsourcing consists of a group of participants who work collaboratively, either for pay or as volunteers, to produce a good or service. Crowdsourcing platforms like Amazon Mechanical Turk allow individuals to perform simple tasks that can be accumulated into a larger project. Dark social media refers to social media tools used for collaboration between individuals, where content is supposed to be available only to the participants. Unlike mobile phone calls or messaging, in which information is sent from one user and transmitted through a medium that stores none of the actual content, with the data retained only on each user's device, more and more communication methods involve a centralized server where all content is received, stored, and then transmitted. Examples of these newer mechanisms include Google Docs, Facebook Messages and Snapchat. The information that passes through these channels has largely been unaccounted for by users themselves and by data analytics, yet, in addition to their respective users, the private companies (Facebook, Twitter, Snapchat) that provide these services have complete control over such data. The volume of images, links, referrals and other information passing through these digital channels likewise goes largely unaccounted for in marketing terms. Collective intelligence is considered an area of social computing because of its group collaboration aspect.
As a growing area of computer science, collective intelligence provides users with a way to gain knowledge through collective efforts in a socially interactive environment. Recent research has begun to look at interactions between humans and their computers in groups. This line of research focuses on the interaction as the primary unit of analysis by drawing on fields such as psychology, social psychology, and sociology.[7][8] Since 2007, research in social computing has become more popular among researchers and professionals in multiple fields dealing with technology, business and politics. A study performed by affiliates of Washington State University used latent semantic analysis on academic papers containing the term "social computing" and found that topics in social computing converge into three major themes: knowledge discovery, knowledge sharing and content management.[9] Social computing continues to shift the direction of research in the information sciences as a whole, extending social aspects into technological and corporate domains. Companies and industries such as Google, Cisco and Fox have invested in such endeavors. Possible questions to be answered through social computing research include how to form stable communities, how these communities evolve, how knowledge is created and processed, how people are motivated to participate, and so on.[10] Currently, research in the areas of social computing is being done by several well-known labs, including those of Microsoft and the Massachusetts Institute of Technology. The Microsoft team works under the mission statement "To research and develop software that contributes to compelling and effective social interactions."[11] It focuses mainly on user-centered design processes and combines rapid prototyping with rigorous science to bring forth complete projects and research that can affect the social computing field. Current projects being worked on by the Microsoft team include Hotmap,[12] SNARF,[13] Slam,[14] and Wallop. MIT, by contrast, has the goal of creating software that shapes our cities,[15] describing its approach in more depth as follows: "More specifically, (1) we create micro-institutions in physical space, (2) we design social processes that allow others to replicate and evolve those micro-institutions, and (3) we write software that enables those social processes. We use this process to create more robust, decentralized, human-scale systems in our cities. We are particularly focused on reinventing our current systems for learning, agriculture, and transportation."[15] The current research projects at the MIT social computing lab include The Dog Programming Language,[16] Wildflower Montessori, and You Are Here.[17] A broad overview of what to expect from the newly started Wildflower Montessori is as follows: "Wildflower Montessori School is a pilot Lab School and the first in a new network of learning centers. Its aim is to be an experiment in a new learning environment, blurring the boundaries between coffee shops and schools, between home-schooling and institutional schooling, between tactile, multisensory methods and abstract thinking. Wildflower will serve as a research platform to test new ideas in advancing the Montessori Method in the context of modern fluencies, as well as to test how to direct the organic growth of a social system that fosters the growth and connection of such schools."[15]
https://en.wikipedia.org/wiki/Social_computing
Insociology, asocial organizationis a pattern ofrelationshipsbetween and amongindividualsandgroups.[1][2]Characteristics of social organization can include qualities such as sexual composition, spatiotemporal cohesion,leadership,structure, division of labor, communication systems, and so on.[3][4] Because of these characteristics of social organization, people can monitor their everyday work and involvement in other activities that are controlled forms of human interaction. These interactions include: affiliation, collective resources, substitutability of individuals and recorded control. These interactions come together to constitute common features in basic social units such as family, enterprises, clubs, states, etc. These are social organizations.[5] Common examples of modern social organizations aregovernment agencies,[6][7]NGOs, andcorporations.[8][9] Social organizations happen in everyday life. Many people belong to various social structures—institutional and informal. These include clubs, professional organizations, and religious institutions.[10]To have a sense of identity with the social organization, being closer to one another helps build a sense of community.[11]While organizations link many like-minded people, it can also cause a separation with others not in their organization due to the differences in thought. Social organizations are structured to where there is a hierarchical system.[12]A hierarchical structure in social groups influences the way a group is structured and how likely it is that the group remains together. Four other interactions can also determine if the group stays together. A group must have a strong affiliation within itself. To be affiliated with an organization means having a connection and acceptance in that group. Affiliation means an obligation to come back to that organization. To be affiliated with an organization, it must know and recognize that you are a member. The organization gains power through the collective resources of these affiliations. Often affiliates have something invested in these resources that motivate them to continue to make the organization better. On the other hand, the organization must keep in mind thesubstitutabilityof these individuals. While the organization needs the affiliates and the resources to survive, it also must be able to replace leaving individuals to keep the organization going. Because of all these characteristics, it can often be difficult to be organized within the organization. This is where recorded control comes in, as writing things down makes them more clear and organized.[5] Social organizations within society are constantly changing.[13]Smaller scale social organizations in society include groups forming from common interests and conversations. Social organizations are created constantly and with time change.[citation needed] Smaller scaled social organizations include many everyday groups that people would not even think have these characteristics. These small social organizations can include things such as bands, clubs, or even sports teams. Within all of these small scaled groups, they contain the same characteristics as a large scale organization would. While these small social organizations do not have nearly as many people as large scale ones, they still interact and function in similar ways. Looking at a common small organization, a school sports team, it is easy to see how it can be a social organization. 
The members of the team all have the same goal, which is to win, and they all work together to accomplish that common goal. The structure of the team is also clear to see. While everyone has the same goal in mind,[citation needed] they have different roles, or positions, that play a part in getting there. To achieve their goal they must be united. In large-scale organizations, there is always some degree of bureaucracy. Bureaucracy involves a set of rules, specializations, and a hierarchical system, which allows these larger organizations to try to maximize efficiency. Large-scale organizations also need to ensure that managerial control is appropriate. Typically, the impersonal-authority approach is used, in which the position of power is detached and impersonal toward the other members of the organization. This is done to make sure that things run smoothly and that the social organization remains as effective as it can be.[14] A large social organization that most people are somewhat familiar with is a hospital. Within the hospital are smaller social organizations—for example, the nursing staff and the surgery team. These smaller organizations work more closely together to accomplish more for their area, which in turn makes the hospital more successful and long-lasting. As a whole, the hospital contains all the characteristics of a social organization. In a hospital, there are various relationships between all of the members of the staff and also with the patients; this is a main reason that a hospital is a social organization. There are also division of labor, structure, cohesiveness, and communication systems. To operate with the utmost effectiveness, a hospital needs to contain all of the characteristics of a social organization, because that is what makes it strong; without one of them, it would be difficult for the organization to run.[citation needed] Despite the assumption that many organizations run better with bureaucracy and a hierarchical management system, there are other factors that can prove this wrong. These factors concern whether the organization is parallel or interdependent. Being parallel means that each department or section does not depend on the others in order to do its job; being interdependent means that each does depend on the others to get the job done. If an organization is parallel, the hierarchical structure would not be necessary and would not be as effective as it would be in an interdependent organization. Because of all the different sub-structures in parallel organizations (the different departments), it would be hard for hierarchical management to be in charge of the different jobs. An interdependent organization, on the other hand, would be easier to manage in that way because of the cohesiveness throughout each department of the organization.[14] Societies can be organized through individualistic or collectivist means, which can have implications for economic growth, for legal and political institutions and their effectiveness, and for social relations. This is based on the premise that the organization of a society is a reflection of its cultural, historical, social, political and economic processes, which therefore govern interaction. Collectivist social organization sometimes refers to developing countries that bypass formal institutions and instead rely on informal institutions to uphold contractual obligations. This organization relies on a horizontal social structure, stressing relationships within communities rather than a social hierarchy between them.
This kind of system has been largely attributed to cultures with strong religious, ethnic, or familial group ties.[citation needed] In contrast, individualistic social organization implies interaction between individuals of different social groups. Enforcement stems from formal institutions such as courts of law. The economy and society are completely integrated, enabling transactions across groups and individuals, who may similarly switch from group to group, and allowing individuals to be less dependent on any one group.[original research?] This kind of social organization is traditionally associated with Western societies.[15][dubious–discuss] One type of collectivism is racial collectivism, or race collectivism.[16] Racial collectivism is a form of social organization based on racial or ethnic lines, as opposed to other factors such as political or class affiliation. Examples of societies that have attempted, historically had, or still have a racial collectivist structure, at least in part, include Nazism and Nazi Germany, racial segregation in the United States (especially prior to the civil rights movement of the 1950s and 1960s), Apartheid in South Africa, White Zimbabweans, the caste system of India, and many other nations and regions of the world.[16][17] Social organizations may also be seen in digital spaces: online communities show patterns of how people would react in social networking situations.[18] The technology allows people to use these constructed social organizations as a way to engage with one another without having to be physically in the same place. Looking at social organization online is a different way of thinking about it, and it is somewhat challenging to connect the characteristics. While the characteristics of social organization are not exactly the same for online organizations, they can be connected and discussed in a different context that makes the cohesiveness between the two apparent. Online, there are various forms of communication and ways that people connect; this allows them to talk about and share common interests (which is what makes them a social organization) and to be part of the organization without having to be physically present with the other members. Although these online social organizations do not meet in person, they still function as social organizations because of the relationships within the group and the goal of keeping the communities going.
https://en.wikipedia.org/wiki/Social_organization
Collective intelligence(CI) is shared orgroupintelligence(GI) thatemergesfrom thecollaboration, collective efforts, and competition of many individuals and appears inconsensus decision making. The term appears insociobiology,political scienceand in context of masspeer reviewandcrowdsourcingapplications. It may involveconsensus,social capitalandformalismssuch asvoting systems,social mediaand other means of quantifying mass activity.[1]CollectiveIQis a measure of collective intelligence, although it is often used interchangeably with the term collective intelligence. Collective intelligence has also been attributed tobacteriaand animals.[2] It can be understood as anemergent propertyfrom thesynergiesamong: Or it can be more narrowly understood as an emergent property between people and ways of processing information.[4]This notion of collective intelligence is referred to as "symbiotic intelligence" by Norman Lee Johnson.[5]The concept is used insociology,business,computer scienceand mass communications: it also appears inscience fiction.Pierre Lévydefines collective intelligence as, "It is a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills. I'll add the following indispensable characteristic to this definition: The basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized orhypostatizedcommunities."[6]According to researchers Pierre Lévy andDerrick de Kerckhove, it refers to capacity of networkedICTs(Information communication technologies) to enhance the collective pool of social knowledge by simultaneously expanding the extent of human interactions.[7][8]A broader definition was provided byGeoff Mulganin a series of lectures and reports from 2006 onwards[9]and in the book Big Mind[10]which proposed a framework for analysing any thinking system, including both human and machine intelligence, in terms of functional elements (observation, prediction, creativity, judgement etc.), learning loops and forms of organisation. The aim was to provide a way to diagnose, and improve, the collective intelligence of a city, business, NGO or parliament. Collective intelligence strongly contributes to the shift of knowledge and power from the individual to the collective. According toEric S. Raymondin 1998 and JC Herz in 2005,[11][12]open-source intelligencewill eventually generate superior outcomes to knowledge generated by proprietary software developed within corporations.[13]Media theoristHenry Jenkinssees collective intelligence as an 'alternative source of media power', related to convergence culture. He draws attention to education and the way people are learning to participate in knowledge cultures outside formal learning settings. 
Henry Jenkins criticizes schools which promote 'autonomous problem solvers and self-contained learners' while remaining hostile to learning through the means of collective intelligence.[14]Both Pierre Lévy and Henry Jenkins support the claim that collective intelligence is important fordemocratization, as it is interlinked with knowledge-based culture and sustained by collective idea sharing, and thus contributes to a better understanding of diverse society.[15][16] Similar to thegfactor (g)for general individual intelligence, a new scientific understanding of collective intelligence aims to extract a general collective intelligence factorc factorfor groups indicating a group's ability to perform a wide range of tasks.[17]Definition, operationalization and statistical methods are derived fromg. Similarly asgis highly interrelated with the concept ofIQ,[18][19]this measurement of collective intelligence can be interpreted as intelligence quotient for groups (Group-IQ) even though the score is not a quotient per se. Causes forcand predictive validity are investigated as well. Writers who have influenced the idea of collective intelligence includeFrancis Galton,Douglas Hofstadter(1979), Peter Russell (1983),Tom Atlee(1993),Pierre Lévy(1994),Howard Bloom(1995),Francis Heylighen(1995),Douglas Engelbart, Louis Rosenberg,Cliff Joslyn,Ron Dembo,Gottfried Mayer-Kress(2003), andGeoff Mulgan. The concept (although not so named) originated in 1785 with theMarquis de Condorcet, whose"jury theorem"states that if each member of a voting group is more likely than not to make a correct decision, the probability that the highest vote of the group is the correct decision increases with the number of members of the group.[20]Many theorists have interpretedAristotle's statement in thePoliticsthat "a feast to which many contribute is better than a dinner provided out of a single purse" to mean that just as many may bring different dishes to the table, so in a deliberation many may contribute different pieces of information to generate a better decision.[21][22]Recent scholarship,[23]however, suggests that this was probably not what Aristotle meant but is a modern interpretation based on what we now know about team intelligence.[24] A precursor of the concept is found in entomologistWilliam Morton Wheeler's observation in 1910 that seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism.[25]Wheeler saw this collaborative process at work inantsthat acted like the cells of a single beast he called asuperorganism. In 1912Émile Durkheimidentified society as the sole source of human logical thought. He argued in "The Elementary Forms of Religious Life" that society constitutes a higher intelligence because it transcends the individual over space and time.[26]Other antecedents areVladimir VernadskyandPierre Teilhard de Chardin's concept of "noosphere" andH. G. Wells's concept of "world brain".[27]Peter Russell,Elisabet Sahtouris, andBarbara Marx Hubbard(originator of the term "conscious evolution")[28]are inspired by the visions of a noosphere – a transcendent, rapidly evolving collective intelligence – an informational cortex of the planet. The notion has more recently been examined by the philosopher Pierre Lévy. 
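Condorcet's jury theorem mentioned above can be illustrated numerically. The sketch below is a minimal, self-contained calculation (the competence value of 0.55 and the group sizes are arbitrary assumptions chosen for illustration): it computes the probability that a strict majority of independent voters, each correct with probability p > 0.5, reaches the correct decision.

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that a strict majority of n independent voters,
    each correct with probability p, picks the correct option.
    Assumes n is odd so that ties cannot occur."""
    k_needed = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_needed, n + 1))

# Illustrative values: individual competence just above chance.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(0.55, n), 4))
# The probability of a correct majority rises toward 1 as the group grows,
# which is the core claim of Condorcet's jury theorem.
```

With each voter only slightly better than chance, the majority verdict becomes almost certainly correct as the group grows, which is exactly the behaviour the theorem describes.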
In a 1962 research report,Douglas Engelbartlinked collective intelligence to organizational effectiveness, and predicted that pro-actively 'augmenting human intellect' would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone".[29]In 1994, he coined the term 'collective IQ' as a measure of collective intelligence, to focus attention on the opportunity to significantly raise collective IQ in business and society.[30] The idea of collective intelligence also forms the framework for contemporary democratic theories often referred to asepistemic democracy. Epistemic democratic theories refer to the capacity of the populace, either through deliberation or aggregation of knowledge, to track the truth and relies on mechanisms to synthesize and apply collective intelligence.[31] Collective intelligence was introduced into the machine learning community in the late 20th century,[32]and matured into a broader consideration of how to design "collectives" of self-interested adaptive agents to meet a system-wide goal.[33][34]This was related to single-agent work on "reward shaping"[35]and has been taken forward by numerous researchers in the game theory and engineering communities.[36] Howard Bloomhas discussed mass behavior –collective behaviorfrom the level of quarks to the level of bacterial, plant, animal, and human societies. He stresses the biological adaptations that have turned most of this earth's living beings into components of what he calls "a learning machine". In 1986 Bloom combined the concepts ofapoptosis,parallel distributed processing,group selection, and the superorganism to produce a theory of how collective intelligence works.[37]Later he showed how the collective intelligences of competing bacterial colonies and human societies can be explained in terms of computer-generated "complex adaptive systems" and the "genetic algorithms", concepts pioneered byJohn Holland.[38] Bloom traced the evolution of collective intelligence to our bacterial ancestors 1 billion years ago and demonstrated how a multi-species intelligence has worked since the beginning of life.[38]Ant societiesexhibit more intelligence, in terms of technology, than any other animal except for humans and co-operate in keeping livestock, for exampleaphidsfor "milking".[38]Leaf cutters care for fungi and carry leaves to feed the fungi.[38] David Skrbina[39]cites the concept of a 'group mind' as being derived from Plato's concept ofpanpsychism(that mind or consciousness is omnipresent and exists in all matter). He develops the concept of a 'group mind' as articulated byThomas HobbesinLeviathanandFechner's arguments for acollective consciousnessof mankind. He citesDurkheimas the most notable advocate of a "collective consciousness"[40]andTeilhard de Chardinas a thinker who has developed the philosophical implications of the group mind.[41] Tom Atlee focuses primarily on humans and on work to upgrade what Howard Bloom calls "the group IQ". Atlee feels that collective intelligence can be encouraged "to overcome 'groupthink' and individualcognitive biasin order to allow a collective to cooperate on one process – while achieving enhanced intellectual performance." 
George Pór defined the collective intelligence phenomenon as "the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as differentiation and integration, competition and collaboration."[42]Atlee and Pór state that "collective intelligence also involves achieving a single focus of attention and standard of metrics which provide an appropriate threshold of action".[43]Their approach is rooted inscientific community metaphor.[43] The term group intelligence is sometimes used interchangeably with the term collective intelligence. Anita Woolley presents Collective intelligence as a measure of group intelligence and group creativity.[17]The idea is that a measure of collective intelligence covers a broad range of features of the group, mainly group composition and group interaction.[44]The features of composition that lead to increased levels of collective intelligence in groups include criteria such as higher numbers of women in the group as well as increased diversity of the group.[44] Atlee and Pór suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory andartificial intelligencehave something to offer.[43]Individuals who respect collective intelligence are confident of their own abilities and recognize that the whole is indeed greater than the sum of any individual parts.[45]Maximizing collective intelligence relies on the ability of an organization to accept and develop "The Golden Suggestion", which is any potentially useful input from any member.[46]Groupthink often hampers collective intelligence by limiting input to a select few individuals or filtering potential Golden Suggestions without fully developing them to implementation.[43] Robert David Steele VivasinThe New Craft of Intelligenceportrayed all citizens as "intelligence minutemen", drawing only on legal and ethical sources of information, able to create a "public intelligence" that keeps public officials and corporate managers honest, turning the concept of "national intelligence" (previously concerned about spies and secrecy) on its head.[47] According toDon TapscottandAnthony D. Williams, collective intelligence ismass collaboration. In order for this concept to happen, four principles need to exist:[48] A new scientific understanding of collective intelligence defines it as a group's general ability to perform a wide range of tasks.[17]Definition, operationalization and statistical methods are similar to thepsychometric approach of general individual intelligence. Hereby, an individual's performance on a given set of cognitive tasks is used to measure general cognitive ability indicated by the general intelligencefactorgproposed by English psychologistCharles Spearmanand extracted viafactor analysis.[49]In the same vein asgserves to display between-individual performance differences on cognitive tasks, collective intelligence research aims to find a parallel intelligence factor for groups'cfactor'[17](also called 'collective intelligence factor' (CI)[50]) displaying between-group differences on task performance. The collective intelligence score then is used to predict how this same group will perform on any other similar task in the future. 
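As a rough, purely illustrative sketch of the kind of analysis described here (synthetic data and assumed factor loadings, not the procedure or data of any cited study), the following code generates group scores on several tasks that share one latent factor and reports how much of the variance the first factor of the correlation matrix accounts for, analogous to how a general factor such as g or c is usually summarized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 groups, 5 tasks. Scores share a common latent factor
# plus task-specific noise (purely illustrative, not real study data).
n_groups, n_tasks = 200, 5
latent = rng.normal(size=(n_groups, 1))               # hypothetical "c" of each group
loadings = np.array([[0.7, 0.6, 0.65, 0.5, 0.55]])    # assumed loadings on the latent factor
scores = latent @ loadings + rng.normal(scale=0.7, size=(n_groups, n_tasks))

# Correlation matrix of task scores and its eigenvalues.
corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Share of total variance attributable to the first (general) factor,
# analogous to the roughly 40-50% figures reported for g and c.
first_factor_share = eigenvalues[0] / eigenvalues.sum()
print(f"first factor explains {first_factor_share:.0%} of the variance")
```

The eigenvalue share of the first factor is the simplest summary of how strongly a single general factor dominates performance across tasks; proper factor analysis refines this, but the intuition is the same.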
Yet tasks, hereby, refer to mental or intellectual tasks performed by small groups[17]even though the concept is hoped to be transferable to other performances and any groups or crowds reaching from families to companies and even whole cities.[51]Since individuals'gfactor scores are highly correlated with full-scaleIQscores, which are in turn regarded as good estimates ofg,[18][19]this measurement of collective intelligence can also be seen as an intelligence indicator or quotient respectively for a group (Group-IQ) parallel to an individual's intelligence quotient (IQ) even though the score is not a quotient per se. Mathematically,candgare both variables summarizing positive correlations among different tasks supposing that performance on one task is comparable with performance on other similar tasks.[52]cthus is a source of variance among groups and can only be considered as a group's standing on thecfactor compared to other groups in a given relevant population.[19][53]The concept is in contrast to competing hypotheses including other correlational structures to explain group intelligence,[17]such as a composition out of several equally important but independent factors as found inindividual personality research.[54] Besides, this scientific idea also aims to explore the causes affecting collective intelligence, such as group size, collaboration tools or group members' interpersonal skills.[55]TheMIT Center for Collective Intelligence, for instance, announced the detection ofThe Genome of Collective Intelligence[55]as one of its main goals aiming to develop a "taxonomy of organizational building blocks, or genes, that can be combined and recombined to harness the intelligence of crowds".[55] Individual intelligence is shown to be genetically and environmentally influenced.[56][57]Analogously, collective intelligence research aims to explore reasons why certain groups perform more intelligently than other groups given thatcis just moderately correlated with the intelligence of individual group members.[17]According to Woolley et al.'s results, neither team cohesion nor motivation or satisfaction is correlated withc. However, they claim that three factors were found as significant correlates: the variance in the number of speaking turns, group members' average social sensitivity and the proportion of females. All three had similar predictive power forc, but only social sensitivity was statistically significant (b=0.33, P=0.05).[17] The number speaking turns indicates that "groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking".[50]Hence, providing multiple team members the chance to speak up made a group more intelligent.[17] Group members' social sensitivity was measured via the Reading the Mind in the Eyes Test[58](RME) and correlated .26 withc.[17]Hereby, participants are asked to detect thinking or feeling expressed in other peoples' eyes presented on pictures and assessed in a multiple choice format. 
The test aims to measure people's theory of mind (ToM), also called 'mentalizing'[59][60][61][62] or 'mind reading',[63] which refers to the ability to attribute mental states, such as beliefs, desires or intents, to other people, and the extent to which people understand that others have beliefs, desires, intentions or perspectives different from their own.[58] RME is a ToM test for adults[58] that shows sufficient test-retest reliability[64] and consistently differentiates control groups from individuals with functional autism or Asperger syndrome.[58] It is one of the most widely accepted and well-validated tests of ToM in adults.[65] ToM can be regarded as an associated subset of skills and abilities within the broader concept of emotional intelligence.[50][66] The proportion of females as a predictor of c was largely mediated by social sensitivity (Sobel z = 1.93, P = 0.03),[17] which is in line with previous research showing that women score higher on social sensitivity tests.[58] While a mediation, statistically speaking, clarifies the mechanism underlying the relationship between a dependent and an independent variable,[67] Woolley agreed in an interview with the Harvard Business Review that these findings do say that groups of women are smarter than groups of men.[51] However, she qualifies this by stating that what actually matters is the high social sensitivity of group members.[51] It is theorized that the collective intelligence factor c is an emergent property resulting from bottom-up as well as top-down processes.[44] Here, bottom-up processes cover aggregated group-member characteristics, while top-down processes cover group structures and norms that influence a group's way of collaborating and coordinating.[44] Top-down processes cover group interaction, such as structures, processes, and norms.[44] An example of such a top-down process is conversational turn-taking.[17] Research further suggests that collectively intelligent groups communicate more in general as well as more equally; the same applies to participation and has been shown for face-to-face groups as well as for online groups communicating only via writing.[50][68] Bottom-up processes include group composition,[44] namely the characteristics of group members which are aggregated to the team level.[44] Examples of such bottom-up processes are the average social sensitivity or the average and maximum intelligence scores of group members.[17] Furthermore, collective intelligence was found to be related to a group's cognitive diversity,[69] including thinking styles and perspectives.[70] Groups that are moderately diverse in cognitive style have higher collective intelligence than those that are very similar in cognitive style or very different. Consequently, groups whose members are too similar to each other lack the variety of perspectives and skills needed to perform well, while groups whose members are too different seem to have difficulties communicating and coordinating effectively.[69] For most of human history, collective intelligence was confined to small tribal groups in which opinions were aggregated through real-time parallel interactions among members.[71] In modern times, mass communication, mass media, and networking technologies have enabled collective intelligence to span massive groups distributed across continents and time zones. To accommodate this shift in scale, collective intelligence in large-scale groups has been dominated by serialized polling processes such as aggregating up-votes, likes, and ratings over time.
While modern systems benefit from larger group size, the serialized process has been found to introduce substantial noise that distorts the collective output of the group. In one significant study of serialized collective intelligence, it was found that the first vote contributed to a serialized voting system can distort the final result by 34%.[72] To address the problems of serialized aggregation of input among large-scale groups, recent advances in collective intelligence have worked to replace serialized votes, polls, and markets with parallel systems such as "human swarms" modeled after synchronous swarms in nature.[73][74] Based on the natural process of swarm intelligence, these artificial swarms of networked humans enable participants to work together in parallel to answer questions and make predictions as an emergent collective intelligence.[75][76] In one high-profile example, a human swarm was challenged by CBS Interactive to predict the Kentucky Derby; the swarm correctly predicted the first four horses, in order, defying 542–1 odds and turning a $20 bet into $10,800.[77] The value of parallel collective intelligence was demonstrated in medical applications by researchers at Stanford University School of Medicine and Unanimous AI in a set of published studies wherein groups of human doctors were connected by real-time swarming algorithms and tasked with diagnosing chest x-rays for the presence of pneumonia.[78][79] When working together as "human swarms", the groups of experienced radiologists demonstrated a 33% reduction in diagnostic errors as compared to traditional methods.[80][81] Woolley, Chabris, Pentland, Hashmi, & Malone (2010),[17] the originators of this scientific understanding of collective intelligence, found a single statistical factor for collective intelligence in their research across 192 groups of people randomly recruited from the public. In Woolley et al.'s two initial studies, groups worked together on different tasks from the McGrath Task Circumplex,[82] a well-established taxonomy of group tasks. Tasks were chosen from all four quadrants of the circumplex and included visual puzzles, brainstorming, making collective moral judgments, and negotiating over limited resources. The results on these tasks were used to conduct a factor analysis. Both studies showed support for a general collective intelligence factor c underlying differences in group performance, with an initial eigenvalue accounting for 43% (44% in study 2) of the variance, whereas the next factor accounted for only 18% (20%). That fits the range normally found in research on the general individual intelligence factor g, which typically accounts for 40% to 50% of between-individual performance differences on cognitive tests.[52] Afterwards, a more complex task was solved by each group to determine whether c factor scores predict performance on tasks beyond the original test battery. Criterion tasks were playing checkers (draughts) against a standardized computer in the first study and a complex architectural design task in the second. In a regression analysis using both the individual intelligence of group members and c to predict performance on the criterion tasks, c had a significant effect, but average and maximum individual intelligence did not. While average (r=0.15, P=0.04) and maximum intelligence (r=0.19, P=0.008) of individual group members were moderately correlated with c, c was still a much better predictor of the criterion tasks; a schematic sketch of this kind of comparison is given below.
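The regression comparison referenced above can be sketched schematically as follows; the synthetic data, variable names, and the use of ordinary least squares R² are assumptions made for illustration and do not reproduce the published analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150  # synthetic groups

# Hypothetical measurements (illustrative only).
c = rng.normal(size=n)                      # collective intelligence factor score
avg_iq = 0.15 * c + rng.normal(size=n)      # weakly related to c, echoing the reported correlations
max_iq = 0.19 * c + rng.normal(size=n)
criterion = 0.6 * c + 0.1 * avg_iq + rng.normal(scale=0.8, size=n)  # criterion-task performance

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("R^2 using c only:         ", round(r_squared(c[:, None], criterion), 3))
print("R^2 using avg and max IQ: ", round(r_squared(np.column_stack([avg_iq, max_iq]), criterion), 3))
print("R^2 using all predictors: ", round(r_squared(np.column_stack([c, avg_iq, max_iq]), criterion), 3))
```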
According to Woolley et al., this supports the existence of a collective intelligence factor c, because it demonstrates an effect over and beyond group members' individual intelligence and thus shows that c is more than just the aggregation of individual IQs or the influence of the group member with the highest IQ.[17] Engel et al.[50] (2014) replicated Woolley et al.'s findings using an accelerated battery of tasks, with the first factor in the factor analysis explaining 49% of the between-group variance in performance and the following factors explaining less than half of this amount. Moreover, they found a similar result for groups working together online and communicating only via text, and confirmed the role of female proportion and social sensitivity in causing collective intelligence in both cases. Like Woolley et al.,[17] they measured social sensitivity with the RME, which is actually meant to measure people's ability to detect mental states in other people's eyes. The online collaborating participants, however, neither knew nor saw each other at all. The authors conclude that scores on the RME must be related to a broader set of social reasoning abilities than merely drawing inferences from other people's eye expressions.[83] A collective intelligence factor c in the sense of Woolley et al.[17] was further found in groups of MBA students working together over the course of a semester,[84] in online gaming groups,[68] as well as in groups from different cultures[85] and groups in different contexts in terms of short-term versus long-term groups.[85] None of these investigations considered team members' individual intelligence scores as control variables.[68][84][85] Note as well that the field of collective intelligence research is quite young and published empirical evidence is still relatively scarce, although various proposals and working papers are in progress or already completed but (supposedly) still in a scholarly peer-reviewed publication process.[86][87][88][89] Besides predicting a group's performance on more complex criterion tasks, as shown in the original experiments,[17] the collective intelligence factor c was also found to predict group performance on diverse tasks in MBA classes lasting over several months.[84] Highly collectively intelligent groups earned significantly higher scores on their group assignments, although their members did not do any better on other individually performed assignments. Moreover, highly collectively intelligent teams improved their performance over time, suggesting that more collectively intelligent teams learn better.[84] This is another potential parallel to individual intelligence, where more intelligent people are found to acquire new material more quickly.[19][90] Individual intelligence can be used to predict many life outcomes, from school attainment[91] and career success[92] to health outcomes[93] and even mortality.[93] Whether collective intelligence is able to predict other outcomes besides group performance on mental tasks has still to be investigated. Gladwell[94] (2008) showed that the relationship between individual IQ and success holds only up to a certain point and that additional IQ points beyond an estimated IQ of 120 do not translate into real-life advantages. Whether a similar threshold exists for Group-IQ, or whether the advantages are linear and unbounded, has still to be explored.
Similarly, demand for further research on possible connections of individual and collective intelligence exists within plenty of other potentially transferable logics of individual intelligence, such as, for instance, the development over time[95]or the question of improving intelligence.[96][97]Whereas it is controversial whether human intelligence can be enhanced via training,[96][97]a group's collective intelligence potentially offers simpler opportunities for improvement by exchanging team members or implementing structures and technologies.[51]Moreover, social sensitivity was found to be, at least temporarily, improvable by readingliterary fiction[98]as well as watching drama movies.[99]In how far such training ultimately improves collective intelligence through social sensitivity remains an open question.[100] There are further more advanced concepts and factor models attempting to explain individual cognitive ability including the categorization of intelligence influid and crystallized intelligence[101][102]or thehierarchical model of intelligence differences.[103][104]Further supplementing explanations and conceptualizations for the factor structure of theGenomesof collective intelligence besides a general'cfactor', though, are missing yet.[105] Other scholars explain team performance by aggregating team members' general intelligence to the team level[106][107]instead of building an own overall collective intelligence measure. Devine and Philips[108](2001) showed in a meta-analysis that mean cognitive ability predicts team performance in laboratory settings (0.37) as well as field settings (0.14) – note that this is only a small effect. Suggesting a strong dependence on the relevant tasks, other scholars showed that tasks requiring a high degree of communication and cooperation are found to be most influenced by the team member with the lowest cognitive ability.[109]Tasks in which selecting the best team member is the most successful strategy, are shown to be most influenced by the member with the highest cognitive ability.[66] Since Woolley et al.'s[17]results do not show any influence of group satisfaction,group cohesiveness, or motivation, they, at least implicitly, challenge these concepts regarding the importance for group performance in general and thus contrast meta-analytically proven evidence concerning the positive effects ofgroup cohesion,[110][111][112]motivation[113][114]and satisfaction[115]on group performance. Some scholars have noted that the evidence for collective intelligence in the body of work by Wolley et al.[17]is weak and may contain errors or misunderstandings of the data.[116]For example, Woolley et al.[17]stated in their findings that the maximum individual score on the Wonderlic Personnel Test (WPT;[117]an individual intelligence test used in their research) was 39, but also that the maximum averaged team score on the same test was also a 39. This indicates that their sample seemingly had a team composed entirely of people who, individually, got exactly the same score on the WPT, and also all happened to all have achieved the highest scores on the WPT found in Woolley et al.[17]This was noted by scholars as particularly unlikely to occur.[116]Other anomalies found in the data indicate that results may be driven in part by low-effort responding.[17][116]For instance, Woolley et al.'s[17]data indicates that at least one team scored a 0 on a task in which they were given 10 minutes to come up with as many uses for a brick as possible. 
Similarly, Woolley et al.'s[17] data show that at least one team had an average score of 8 out of 50 on the WPT. Scholars have noted that the probability of this occurring with study participants who are putting forth effort is nearly zero.[116] This may explain why Woolley et al.[17] found that the groups' individual intelligence scores were not predictive of performance. In addition, low effort on tasks in human-subjects research may inflate evidence for a supposed collective intelligence factor based on similarity of performance across tasks, because a team's low effort on one research task may generalize to low effort across many tasks.[116][118][119] Notably, such a phenomenon arises merely because of the low-stakes setting of laboratory research for participants, not because it reflects how teams operate in organizations.[116][120] It is also noteworthy that the researchers involved in the confirming findings overlap substantially with each other and with the authors of the original study led by Anita Woolley.[17][44][50][69][83] On 3 May 2022, the authors of "Quantifying collective intelligence in human groups",[121] who include Riedl and Woolley from the original 2010 paper on collective intelligence,[17] issued a correction to the article after mathematically impossible findings reported in the article were noted publicly by researcher Marcus Credé. Among the corrections is an admission that the average variance extracted (AVE), that is to say the evidence for collective intelligence, was only 19.6% in their confirmatory factor analysis. An AVE of at least 50% is generally required to demonstrate convergent validity of a single factor, with values above 70% generally indicating good evidence for the factor.[122] Therefore, the evidence for collective intelligence referred to as "robust" in Riedl et al.[121] is in fact quite weak or nonexistent, as their primary evidence does not meet, or come near, even the lowest thresholds of acceptable evidence for a latent factor.[122] Curiously, despite this and several other factual inaccuracies found throughout the article, the paper has not been retracted, and these inaccuracies were apparently not originally detected by the author team, peer reviewers, or editors of the journal.[121] In 2001, Tadeusz (Tad) Szuba from the AGH University in Poland proposed a formal model for the phenomenon of collective intelligence. It is assumed to be an unconscious, random, parallel, and distributed computational process, run in mathematical logic by the social structure.[123] In this model, beings and information are modeled as abstract information molecules carrying expressions of mathematical logic.[123] They are displaced quasi-randomly as a result of their interactions with their environments and their intended movements.[123] Their interaction in an abstract computational space creates a multi-threaded inference process which we perceive as collective intelligence.[123] Thus, a non-Turing model of computation is used. This theory allows a simple formal definition of collective intelligence as a property of social structure, and it appears to work well for a wide spectrum of beings, from bacterial colonies up to human social structures. Collective intelligence, considered as a specific computational process, provides a straightforward explanation of several social phenomena.
For this model of collective intelligence, the formal definition of IQS (IQ Social) was proposed and was defined as "the probability function over the time and domain of N-element inferences which are reflecting inference activity of the social structure".[123]While IQS seems to be computationally hard, modeling of social structure in terms of a computational process as described above gives a chance for approximation.[123]Prospective applications are optimization of companies through the maximization of their IQS, and the analysis of drug resistance against collective intelligence of bacterial colonies.[123] One measure sometimes applied, especially by more artificial intelligence focused theorists, is a "collective intelligence quotient"[124](or "cooperation quotient") – which can be normalized from the "individual"intelligence quotient(IQ)[124]– thus making it possible to determine the marginal intelligence added by each new individual participating in thecollective action, thus usingmetricsto avoid the hazards ofgroup thinkandstupidity.[125] There have been many recent applications of collective intelligence, including in fields such as crowd-sourcing, citizen science and prediction markets. The Nesta Centre for Collective Intelligence Design[126]was launched in 2018 and has produced many surveys of applications as well as funding experiments. In 2020 the UNDP Accelerator Labs[127]began using collective intelligence methods in their work to accelerate innovation for theSustainable Development Goals. Here, the goal is to get an estimate (in a single value) of something. For example, estimating the weight of an object, or the release date of a product or probability of success of a project etc. as seen in prediction markets like Intrade, HSX or InklingMarkets and also in several implementations of crowdsourced estimation of a numeric outcome such as theDelphi method. Essentially, we try to get the average value of the estimates provided by the members in the crowd. In this situation, opinions are gathered from the crowd regarding an idea, issue or product. For example, trying to get a rating (on some scale) of a product sold online (such as Amazon's star rating system). Here, the emphasis is to collect and simply aggregate the ratings provided by customers/users. In these problems, someone solicits ideas for projects, designs or solutions from the crowd. For example, ideas on solving adata scienceproblem (as inKaggle) or getting a good design for a T-shirt (as inThreadless) or in getting answers to simple problems that only humans can do well (as in Amazon's Mechanical Turk). The objective is to gather the ideas and devise some selection criteria to choose the best ideas. 
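A minimal sketch of the first two aggregation patterns just described (numeric estimation and rating aggregation) is given below; the values and the choice of mean, median, and trimmed mean are illustrative assumptions rather than any particular platform's method.

```python
import statistics

# Hypothetical crowd estimates of a single quantity (e.g. the weight of an object in kg).
estimates = [72, 81, 76, 68, 95, 74, 70, 300, 77, 73]   # one wild outlier included

def trimmed_mean(values, trim_fraction=0.1):
    """Mean after discarding the most extreme values on each side."""
    values = sorted(values)
    k = int(len(values) * trim_fraction)
    return statistics.mean(values[k:len(values) - k]) if len(values) > 2 * k else statistics.mean(values)

print(statistics.mean(estimates), statistics.median(estimates), trimmed_mean(estimates))
# The median and trimmed mean are far less sensitive to the outlier than the raw mean,
# which is one reason simple averaging is often replaced by more robust aggregates.

# Rating aggregation: simple average of 1-5 star ratings for a product.
ratings = [5, 4, 4, 3, 5, 2, 4]
print(round(statistics.mean(ratings), 2))
```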
James Surowieckidivides the advantages of disorganized decision-making into three main categories, which are cognition, cooperation and coordination.[128] Because of the Internet's ability to rapidly convey large amounts of information throughout the world, the use of collective intelligence to predict stock prices and stock price direction has become increasingly viable.[129]Websites aggregate stock market information that is as current as possible so professional or amateur stock analysts can publish their viewpoints, enabling amateur investors to submit their financial opinions and create an aggregate opinion.[129]The opinion of all investor can be weighed equally so that a pivotal premise of the effective application of collective intelligence can be applied: the masses, including a broad spectrum of stock market expertise, can be utilized to more accurately predict the behavior of financial markets.[130][131] Collective intelligence underpins theefficient-market hypothesisofEugene Fama[132]– although the term collective intelligence is not used explicitly in his paper. Fama cites research conducted byMichael Jensen[133]in which 89 out of 115 selected funds underperformed relative to the index during the period from 1955 to 1964. But after removing the loading charge (up-front fee) only 72 underperformed while after removing brokerage costs only 58 underperformed. On the basis of such evidenceindex fundsbecame popular investment vehicles using the collective intelligence of the market, rather than the judgement of professional fund managers, as an investment strategy.[133] Political parties mobilize large numbers of people to form policy, select candidates and finance and run election campaigns.[134]Knowledge focusing through variousvotingmethods allows perspectives to converge through the assumption that uninformed voting is to some degree random and can be filtered from the decision process leaving only a residue of informed consensus.[134]Critics point out that often bad ideas, misunderstandings, and misconceptions are widely held, and that structuring of the decision process must favor experts who are presumably less prone to random or misinformed voting in a given context.[135] Companies such as Affinnova (acquired by Nielsen),Google,InnoCentive,Marketocracy, andThreadless[136]have successfully employed the concept of collective intelligence in bringing about the next generation of technological changes through their research and development (R&D), customer service, and knowledge management.[136][137]An example of such application is Google's Project Aristotle in 2012, where the effect of collective intelligence on team makeup was examined in hundreds of the company's R&D teams.[138] In 2012, theGlobal Futures Collective Intelligence System(GFIS) was created byThe Millennium Project,[139]which epitomizes collective intelligence as the synergistic intersection among data/information/knowledge, software/hardware, and expertise/insights that has a recursive learning process for better decision-making than the individual players alone.[139] New mediaare often associated with the promotion and enhancement of collective intelligence. The ability of new media to easily store and retrieve information, predominantly through databases and the Internet, allows for it to be shared without difficulty. Thus, through interaction with new media, knowledge easily passes between sources[13]resulting in a form of collective intelligence. 
The use of interactive new media, particularly the internet, promotes online interaction and this distribution of knowledge between users. Francis Heylighen,Valentin Turchin, and Gottfried Mayer-Kress are among those who view collective intelligence through the lens of computer science andcybernetics. In their view, the Internet enables collective intelligence at the widest, planetary scale, thus facilitating the emergence of aglobal brain. The developer of the World Wide Web,Tim Berners-Lee, aimed to promote sharing and publishing of information globally. Later his employer opened up the technology for free use. In the early '90s, the Internet's potential was still untapped, until the mid-1990s when 'critical mass', as termed by the head of the Advanced Research Project Agency (ARPA), Dr.J.C.R. Licklider, demanded more accessibility and utility.[140]The driving force of this Internet-based collective intelligence is the digitization of information and communication.Henry Jenkins, a key theorist of new media and media convergence draws on the theory that collective intelligence can be attributed to media convergence and participatory culture.[13]He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and teamwork through collective intelligence communities contribute to the development of such skills.[141]Collective intelligence is not merely a quantitative contribution of information from all cultures, it is also qualitative.[141] Lévyandde Kerckhoveconsider CI from a mass communications perspective, focusing on the ability of networked information and communication technologies to enhance the community knowledge pool. They suggest that these communications tools enable humans to interact and to share and collaborate with both ease and speed.[13]With the development of theInternetand its widespread use, the opportunity to contribute to knowledge-building communities, such asWikipedia, is greater than ever before. These computer networks give participating users the opportunity to store and to retrieve knowledge through the collective access to these databases and allow them to "harness the hive"[13]Researchers at theMIT Center for Collective Intelligenceresearch and explore collective intelligence of groups of people and computers.[142] In this context collective intelligence is often confused withshared knowledge. The former is the sum total of information held individually by members of a community while the latter is information that is believed to be true and known by all members of the community.[143]Collective intelligence as represented byWeb 2.0has less user engagement thancollaborative intelligence. An art project using Web 2.0 platforms is "Shared Galaxy", an experiment developed by an anonymous artist to create a collective identity that shows up as one person on several platforms like MySpace, Facebook, YouTube and Second Life. The password is written in the profiles and the accounts named "Shared Galaxy" are open to be used by anyone. In this way many take part in being one.[144]Another art project using collective intelligence to produce artistic work is Curatron, where a large group of artists together decides on a smaller group that they think would make a good collaborative group. 
The process is based on an algorithm that computes the collective preferences.[145] In creating what he calls 'CI-Art', Nova Scotia based artist Mathew Aldred follows Pierre Lévy's definition of collective intelligence.[146] Aldred's CI-Art event in March 2016 involved over four hundred people from the community of Oxford, Nova Scotia, and internationally.[147][148] Later work developed by Aldred used the UNU swarm intelligence system to create digital drawings and paintings.[149] The Oxford Riverside Gallery (Nova Scotia) held a public CI-Art event in May 2016, which connected with online participants internationally.[150] In social bookmarking (also called collaborative tagging),[151] users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from this crowdsourcing process. The resulting information structure can be seen as reflecting the collective knowledge (or collective intelligence) of a community of users and is commonly called a "folksonomy"; the process can be captured by models of collaborative tagging.[151] Recent research using data from the social bookmarking website Delicious has shown that collaborative tagging systems exhibit a form of complex systems (or self-organizing) dynamics.[152][153][154] Although there is no centrally controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stable power-law distributions.[152] Once such stable distributions form, the correlations between different tags can be used to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabulary.[155] Such vocabularies can be seen as a form of collective intelligence, emerging from the decentralised actions of a community of users. The Wall-it Project is also an example of social bookmarking.[156] Research performed by Tapscott and Williams has provided a few examples of the benefits of collective intelligence to business:[48] Cultural theorist and online community developer John Banks considered the contribution of online fan communities to the creation of the Trainz product. He argued that its commercial success was fundamentally dependent upon "the formation and growth of an active and vibrant online fan community that would both actively promote the product and create content- extensions and additions to the game software".[157] The increase in user-created content and interactivity gives rise to issues of control over the game itself and ownership of the player-created content. This raises fundamental legal issues, highlighted by Lessig[158] and Bray and Konsynski,[159] such as intellectual property and property ownership rights. Gosney extends this issue of collective intelligence in videogames one step further in his discussion of alternate reality gaming. He describes this genre as an "across-media game that deliberately blurs the line between the in-game and out-of-game experiences"[160] as events that happen outside the game reality "reach out" into the players' lives in order to bring them together. Solving the game requires "the collective and collaborative efforts of multiple players"; thus the issue of collective and collaborative team play is essential to ARG.
Gosney argues that the Alternate Reality genre of gaming demands an unprecedented level of collaboration and "collective intelligence" in order to solve the mystery of the game.[160] Co-operation helps to solve many of the most important and interesting multidisciplinary problems. In his book, James Surowiecki noted that most scientists think the benefits of co-operation far outweigh its potential costs. Co-operation also works because, at its best, it guarantees a number of different viewpoints. Thanks to the possibilities of technology, global co-operation is nowadays much easier and more productive than before, and when co-operation moves from the university level to the global level it brings significant benefits. Why, for example, do scientists co-operate? Scientific fields have become increasingly specialized and fragmented, and it is impossible for one person to be aware of all developments. This is especially true in experimental research, where highly advanced equipment requires special skills. Through co-operation, scientists can draw on information from different fields and use it effectively, instead of having to gather all the information by reading on their own.[128] Militaries, trade unions, and corporations satisfy some definitions of CI – the most rigorous definition would require a capacity to respond to very arbitrary conditions without orders or guidance from "law" or "customers" to constrain actions. Online advertising companies are using collective intelligence to bypass traditional marketing and creative agencies.[161] The UNU open platform for "human swarming" (or "social swarming") establishes real-time closed-loop systems around groups of networked users molded after biological swarms, enabling human participants to behave as a unified collective intelligence.[162][163] When connected to UNU, groups of distributed users collectively answer questions and make predictions in real-time.[164] Early testing shows that human swarms can out-predict individuals.[162] In 2016, a UNU swarm was challenged by a reporter to predict the winners of the Kentucky Derby, and successfully picked the first four horses, in order, beating 540 to 1 odds.[165][166] Specialized information sites such as Digital Photography Review[167] or Camera Labs[168] are examples of collective intelligence. Anyone who has access to the internet can contribute to distributing their knowledge over the world through such specialized information sites. In a learner-generated context, a group of users marshal resources to create an ecology that meets their needs, often (but not only) in relation to the co-configuration, co-creation and co-design of a particular learning space that allows learners to create their own context.[169][170][171] Learner-generated contexts represent an ad hoc community that facilitates coordination of collective action in a network of trust. An example of a learner-generated context is found on the Internet when collaborative users pool knowledge in a "shared intelligence space". As the Internet has developed, so has the concept of CI as a shared public forum. The global accessibility and availability of the Internet has allowed more people than ever to contribute to and access ideas.[13] Games such as The Sims series and Second Life are designed to be non-linear and to depend on collective intelligence for expansion. This way of sharing is gradually evolving and influencing the mindset of current and future generations,[140] for whom collective intelligence has become a norm.
In Terry Flew's discussion of 'interactivity' in the online games environment, the ongoing interactive dialogue between users and game developers,[172]he refers to Pierre Lévy's concept of Collective Intelligence[citation needed]and argues this is active in videogames as clans or guilds inMMORPGconstantly work to achieve goals.Henry Jenkinsproposes that the participatory cultures emerging between games producers, media companies, and the end-users mark a fundamental shift in the nature of media production and consumption. Jenkins argues that this new participatory culture arises at the intersection of three broad new media trends.[173]Firstly, the development of new media tools/technologies enabling the creation of content. Secondly, the rise of subcultures promoting such creations, and lastly, the growth of value adding media conglomerates, which foster image, idea and narrative flow. Improvisational actors also experience a type of collective intelligence which they term "group mind", as theatrical improvisation relies on mutual cooperation and agreement,[174]leading to the unity of "group mind".[174][175] Growth of the Internet and mobile telecom has also produced "swarming" or "rendezvous" events that enable meetings or even dates on demand.[32]The full impact has yet to be felt but theanti-globalization movement, for example, relies heavily on e-mail, cell phones, pagers, SMS and other means of organizing.[176]TheIndymediaorganization does this in a more journalistic way.[177]Such resources could combine into a form of collective intelligence accountable only to the current participants yet with some strong moral or linguistic guidance from generations of contributors – or even take on a more obviously democratic form to advance shared goal.[177] A further application of collective intelligence is found in the "Community Engineering for Innovations".[178]In such an integrated framework proposed by Ebner et al., idea competitions and virtual communities are combined to better realize the potential of the collective intelligence of the participants, particularly in open-source R&D.[179]In management theory the use of collective intelligence and crowd sourcing leads to innovations and very robust answers to quantitative issues.[180]Therefore, collective intelligence and crowd sourcing is not necessarily leading to the best solution to economic problems, but to a stable, good solution. Collective actions or tasks require different amounts of coordination depending on the complexity of the task. Tasks vary from being highly independent simple tasks that require very little coordination to complex interdependent tasks that are built by many individuals and require a lot of coordination. In the article written by Kittur, Lee and Kraut the writers introduce a problem in cooperation: "When tasks require high coordination because the work is highly interdependent, having more contributors can increase process losses, reducing the effectiveness of the group below what individual members could optimally accomplish". Having a team too large the overall effectiveness may suffer even when the extra contributors increase the resources. In the end the overall costs from coordination might overwhelm other costs.[181] Group collective intelligence is a property that emerges through coordination from both bottom-up and top-down processes. In a bottom-up process the different characteristics of each member are involved in contributing and enhancing coordination. 
Top-down processes are more strict and fixed with norms, group structures and routines that in their own way enhance the group's collective work.[44] Tom Atlee reflects that, although humans have an innate ability to gather and analyze data, they are affected by culture, education and social institutions.[182][self-published source?]A single person tends to make decisions motivated by self-preservation. Therefore, without collective intelligence, humans may drive themselves into extinction based on their selfish needs.[46] Phillip Brown and Hugh Lauder quotes Bowles andGintis(1976) that in order to truly define collective intelligence, it is crucial to separate 'intelligence' from IQism.[183]They go on to argue that intelligence is an achievement and can only be developed if allowed to.[183]For example, earlier on, groups from the lower levels of society are severely restricted from aggregating and pooling their intelligence. This is because the elites fear that the collective intelligence would convince the people to rebel. If there is no such capacity and relations, there would be no infrastructure on which collective intelligence is built.[184]This reflects how powerful collective intelligence can be if left to develop.[183] Skeptics, especially those critical of artificial intelligence and more inclined to believe that risk ofbodily harmand bodily action are the basis of all unity between people, are more likely to emphasize the capacity of a group to take action and withstand harm as one fluidmass mobilization, shrugging off harms the way a body shrugs off the loss of a few cells.[185][186]This train of thought is most obvious in theanti-globalization movementand characterized by the works ofJohn Zerzan,Carol Moore, andStarhawk, who typically shun academics.[185][186]These theorists are more likely to refer to ecological andcollective wisdomand to the role ofconsensus processin making ontological distinctions than to any form of "intelligence" as such, which they often argue does not exist, or is mere "cleverness".[185][186] Harsh critics of artificial intelligence on ethical grounds are likely to promote collective wisdom-building methods, such as thenew tribalistsand theGaians.[187][self-published source]Whether these can be said to be collective intelligence systems is an open question. Some, e.g.Bill Joy, simply wish to avoid any form of autonomous artificial intelligence and seem willing to work on rigorous collective intelligence in order to remove any possible niche for AI.[188] In contrast to these views, companies such asAmazon Mechanical TurkandCrowdFlowerare using collective intelligence andcrowdsourcingorconsensus-based assessmentto collect the enormous amounts of data formachine learningalgorithms.
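As a small illustration of the consensus-based assessment mentioned in the last sentence, the sketch below aggregates redundant crowd labels into one training label per item by majority vote; it is a generic scheme with hypothetical item names, not the API or workflow of any particular platform.

```python
from collections import Counter

# Hypothetical redundant labels collected from several crowd workers per item.
crowd_labels = {
    "img_001": ["cat", "cat", "dog", "cat"],
    "img_002": ["dog", "dog", "dog"],
    "img_003": ["cat", "dog"],          # a tie: flagged for extra review
}

def aggregate(labels):
    """Majority vote with a simple agreement score; ties return None."""
    counts = Counter(labels)
    (top, top_n), *rest = counts.most_common()
    if rest and rest[0][1] == top_n:
        return None, top_n / len(labels)      # unresolved tie
    return top, top_n / len(labels)

for item, labels in crowd_labels.items():
    label, agreement = aggregate(labels)
    print(item, label, f"agreement={agreement:.2f}")
```

In practice such majority votes are often weighted by each worker's estimated reliability, but the basic idea of turning many noisy judgments into one consensus label is the same.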
https://en.wikipedia.org/wiki/Symbiotic_intelligence
This is a list of digital library projects.
https://en.wikipedia.org/wiki/List_of_digital_library_projects
Wikisourceis an online wiki-baseddigital libraryoffree-contenttextual sourcesoperated by theWikimedia Foundation. Wikisource is the name of the project as a whole; it is also the name for each instance of that project, one for each language. The project's aim is to host all forms of free text, in many languages, and translations. Originally conceived as an archive to store useful or important historical texts, it has expanded to become a general-content library. The project officially began on November 24, 2003, under the nameProject Sourceberg, a play onProject Gutenberg. The name Wikisource was adopted later that year and it received its owndomain name. The project holds works that are either in thepublic domainorfreely licensed: professionally published works or historical source documents, notvanity products. Verification was initially made offline, or by trusting the reliability of other digital libraries. Now works are supported by online scans via the ProofreadPage extension, which ensures the reliability and accuracy of the project's texts. Some individual Wikisources, each representing a specific language, now only allow works backed up with scans. While the bulk of its collection are texts, Wikisource as a whole hosts other media, from comics to film toaudiobooks. Some Wikisources allow user-generated annotations, subject to the specific policies of the Wikisource in question. The project has come under criticism for lack of reliability but it is also cited by organisations such as theNational Archives and Records Administration.[3] As of May 2025, there are Wikisource subdomains active for 79 languages[1]comprising a total of 6,443,127 articles and 2,674 recently active editors.[4] The original concept for Wikisource was as storage for useful or important historical texts. These texts were intended to supportWikipediaarticles, by providing primary evidence and original source texts, and as an archive in its own right. The collection was initially focused on important historical and cultural material, distinguishing it from other digital archives like Project Gutenberg.[2] The project was originally called Project Sourceberg during its planning stages (a play on words for Project Gutenberg).[2] In 2001, there was a dispute on Wikipedia regarding the addition of primary-source materials, leading toedit warsover their inclusion or deletion. Project Sourceberg was suggested as a solution to this. In describing the proposed project, user The Cunctator said, "It would be to Project Gutenberg what Wikipedia is toNupedia",[5]soon clarifying the statement with "we don't want to try to duplicate Project Gutenberg's efforts; rather, we want to complement them. Perhaps Project Sourceberg can mainly work as an interface for easily linking from Wikipedia to a Project Gutenberg file, and as an interface for people to easily submit new work to PG."[6]Initial comments were skeptical, withLarry Sangerquestioning the need for the project, writing "The hard question, I guess, is why we are reinventing the wheel, when Project Gutenberg already exists? We'd want to complement Project Gutenberg—how, exactly?",[7]andJimmy Walesadding "like Larry, I'm interested that we think it over to see what we can add to Project Gutenberg. It seems unlikely that primary sources should in general be editable by anyone — I mean, Shakespeare is Shakespeare, unlike our commentary on his work, which is whatever we want it to be."[8] The project began its activity at ps.wikipedia.org. 
The contributors understood the "PS" subdomain to mean either "primary sources" or Project Sourceberg.[5]However, this resulted in Project Sourceberg occupying the subdomain of thePashto Wikipedia(theISO language codeof thePashto languageis "ps"). Project Sourceberg officially launched on November 24, 2003, when it received its own temporary URL, at sources.wikipedia.org, and all texts and discussions hosted on ps.wikipedia.org were moved to the temporary address. A vote on the project's name changed it to Wikisource on December 6, 2003. Despite the change in name, the project did not move to its permanent URL (http://wikisource.org/) until July 23, 2004.[9] Since Wikisource was initially called "Project Sourceberg", its first logo was a picture of aniceberg.[2]Two votes conducted to choose a successor were inconclusive, and the original logo remained until 2006. Finally, for both legal and technical reasons—because the picture's license was inappropriate for a Wikimedia Foundation logo and because a photo cannot scale properly—a stylized vector iceberg inspired by the original picture was mandated to serve as the project's logo. The first prominent use of Wikisource's slogan—The Free Library—was at the project'smultilingual portal, when it was redesigned based upon the Wikipedia portal on August 27, 2005, (historical version).[10]As in the Wikipedia portal the Wikisource slogan appears around the logo in the project's ten largest languages. Clicking on the portal's central images (the iceberg logo in the center and the "Wikisource" heading at the top of the page) links to alist of translationsforWikisourceandThe Free Libraryin 60 languages. AMediaWikiextension called ProofreadPage was developed for Wikisource by developer ThomasV to improve the vetting of transcriptions by the project. This displays pages of scanned works side by side with the text relating to that page, allowing the text to beproofreadand its accuracy later verified independently by any other editor.[11][12][13]Once a book, or other text, has been scanned, the raw images can be modified withimage processingsoftware to correct for page rotations and other problems. The retouched images can then be converted into aPDForDjVufile and uploaded to either Wikisource orWikimedia Commons.[11] This system assists editors in ensuring the accuracy of texts on Wikisource. The original page scans of completed works remain available to any user so that errors may be corrected later and readers may check texts against the originals. ProofreadPage also allows greater participation, since access to a physical copy of the original work is not necessary to be able to contribute to the project once images have been uploaded.[citation needed] Within two weeks of the project's official start at sources.wikipedia.org, over 1,000 pages had been created, with approximately 200 of these being designated as actual articles. On January 4, 2004, Wikisource welcomed its 100th registered user. In early July, 2004 the number of articles exceeded 2,400, and more than 500 users had registered. On April 30, 2005, there were 2667 registered users (including 18 administrators) and almost 19,000 articles. The project passed its 96,000th edit that same day.[citation needed] On November 27, 2005, theEnglish Wikisourcepassed 20,000 text-units in its third month of existence, already holding more texts than did the entire project in April (before the move to language subdomains). On May 10, 2006, thefirst Wikisource Portalwas created. 
On February 14, 2008, the English Wikisource passed 100,000 text-units withChapter LXXIVofSix Months at the White House, a memoir by painterFrancis Bicknell Carpenter.[14] In November 2011, the 250,000 text-unit milestone was passed. Wikisource collects and stores indigital formatpreviously published texts, including novels, non-fiction works, letters, speeches, constitutional and historical documents, laws and a range of other documents. All texts collected are either free of copyright or released under theCreative Commons Attribution/Share-Alike License.[2]Texts in all languages are welcomed, as are translations. In addition to texts, Wikisource hosts material such ascomics,films, recordings andspoken-wordworks.[2]All texts held by Wikisource must have been previously published; the project does not host "vanity press" books or documents produced by its contributors.[2][15][16][17][18] A scanned source is preferred on many Wikisources and required on some. Most Wikisources will, however, accept works transcribed from offline sources or acquired fromother digital libraries.[2]The requirement for prior publication can also be waived in a small number of cases if the work is a source document of notable historical importance. The legal requirement for works to be licensed or free of copyright remains constant. The only original pieces accepted by Wikisource are annotations and translations.[19]Wikisource, and its sister projectWikibooks, has the capacity forannotated editionsof texts. On Wikisource, the annotations are supplementary to the original text, which remains the primary objective of the project. By contrast, on Wikibooks the annotations are primary, with the original text as only a reference or supplement, if present at all.[18]Annotated editions are more popular on the German Wikisource.[18]The project also accommodates translations of texts provided by its users. A significant translation on the English Wikisource is theWiki Bibleproject, intended to create a new, "laissez-faire translation" ofThe Bible.[20] A separateHebrew versionof Wikisource (he.wikisource.org) was created in August 2004. The need for a language-specificHebrewwebsite derived from the difficulty of typing and editing Hebrew texts in aleft-to-rightenvironment (Hebrew is written right-to-left). In the ensuing months, contributors in other languages includingGermanrequested their own wikis, but a December vote on the creation of separate language domains was inconclusive. Finally, asecond votethat ended May 12, 2005, supported the adoption of separate language subdomains at Wikisource by a large margin, allowing each language to host its texts on its own wiki. An initial wave of 14 languages was set up on August 23, 2005.[21]The new languages did not include English, but the code en: was temporarily set to redirect to the main website (wikisource.org). At this point the Wikisource community, through a mass project of manually sorting thousands of pages and categories by language, prepared for a second wave of page imports to local wikis. On September 11, 2005, the wikisource.org wiki was reconfigured to enable theEnglish version, along with 8 other languages that were created early that morning and late the night before.[22]Three more languages were created on March 29, 2006,[23]and then another large wave of 14 language domains was created on June 2, 2006.[24] Languages without subdomains are locally incubated. As of September 2020[update], 182 languages arehosted locally.
As of May 2025, there are Wikisource subdomains for 81 languages of which 79 are active and 2 are closed.[1]The active sites have 6,443,127 articles and the closed sites have 13 articles.[4]There are 5,053,593 registered users of which 2,674 are recently active.[4]For the top ten Wikisource language projects by mainspace article count, and for a complete list with totals, see Wikimedia Statistics.[4][25] During the move to language subdomains, the community requested that the mainwikisource.orgwebsite remain a functioning wiki, in order to serve three purposes: to coordinate the project across all of its languages, to incubate prospective language editions that do not yet have their own subdomains, and to host the project's multilingual portal. The idea of a project-specific coordination wiki, first realized at Wikisource, also took hold in another Wikimedia project, namely atWikiversity'sBeta Wiki. Like wikisource.org, it serves Wikiversity coordination in all languages, and as a language incubator, but unlike Wikisource, itsMain Pagedoes not serve as its multilingual portal.[27] Wikipedia co-founderLarry Sangercriticised Wikisource and sister projectWiktionaryin 2011, after he left the project, saying that their collaborative nature and technology mean that there is no oversight by experts, and alleging that their content is therefore not reliable.[28] Bart D. Ehrman, a New Testament scholar and professor of religious studies at theUniversity of North Carolina at Chapel Hill, has criticised the English Wikisource's project to create a user-generated translation of the Bible, saying "Democratization isn't necessarily good for scholarship."[20]Richard Elliott Friedman, an Old Testament scholar and professor of Jewish studies at theUniversity of Georgia, identified errors in the translation of theBook of Genesisas of 2008.[20] In 2010, Wikimedia France signed an agreement with theBibliothèque nationale de France(National Library of France) to add scans from its ownGallicadigital library to French Wikisource. Fourteen hundred public domain French texts were added to the Wikisource library as a result, via upload to theWikimedia Commons. The quality of the transcriptions, previously automatically generated byoptical character recognition(OCR), was expected to be improved by Wikisource's human proofreaders.[29][30][31] In 2011, the English Wikisource received many high-quality scans of documents from the USNational Archives and Records Administration(NARA) as part of its efforts "to increase the accessibility and visibility of its holdings." Processing and upload to Commons of these documents, along with many images from the NARA collection, was facilitated by a NARAWikimedian in residence, Dominic McDevitt-Parks. Many of these documents have been transcribed and proofread by the Wikisource community and are featured as links in the National Archives' own online catalog.[32]
https://en.wikipedia.org/wiki/Wikisource
Insocial dynamics,critical massis a sufficient number of adopters of a new idea, technology or innovation in a social system so that the rate of adoption becomes self-sustaining and creates further growth. The point at which critical mass is achieved is sometimes referred to as a threshold within thethreshold modelofstatistical modeling. The term "critical mass" is borrowed from nuclear physics, where it refers to the amount of a substance needed to sustain a chain reaction. Within social sciences, critical mass has its roots in sociology and is often used to explain the conditions under which reciprocal behavior is started within collective groups, and how reciprocal behavior becomes self-sustaining. Recent technology research in platform ecosystems shows that apart from the quantitative notion of a “sufficient number”, critical mass is also influenced by qualitative properties such as reputation, interests, commitments, capabilities, goals, consensuses, and decisions, all of which are crucial in determining whether reciprocal behavior can be started to achieve sustainability of a commitment such as an idea, new technology, or innovation.[1][2] Other important social factors include the size of, the inter-dependencies within, and the level of communication in a society or one of its subcultures. Another is social stigma, or the possibility of public advocacy due to such a factor. Critical mass is a concept used in a variety of contexts, includingphysics,group dynamics,politics,public opinion, andtechnology. The concept of critical mass was originally created by game theoristThomas Schellingand sociologistMark Granovetterto explain the actions and behaviors of a wide range of people and phenomena. The concept was first established (although not explicitly named) in Schelling's essay about racial segregation in neighborhoods, "Dynamic models of segregation", published in 1971 in the Journal of Mathematical Sociology,[3]and later refined in his book,Micromotives and Macrobehavior, published in 1978.[4]Schelling did use the term "critical density" with regard to pollution in his "On the Ecology of Micromotives".[5]Granovetter, in his essay "Threshold models of collective behavior", published in theAmerican Journal of Sociologyin 1978,[6]worked to solidify the theory.[7]Everett Rogers later cites them both in his workDiffusion of Innovations, in which critical mass plays an important role. The concept of critical mass had existed before it entered a sociology context. It had been an established concept inmedicine, specificallyepidemiology, since the 1920s, as it helped to explain the spread of illnesses. It had also been a present, if not solidified, idea in the study of consumer habits and economics, especially inGeneral Equilibrium Theory. In his papers, Schelling quotes the well-known"The Market for Lemons: Quality Uncertainty and the Market Mechanism"paper written in 1970 by George Akerlof.[8]Similarly, Granovetter cited theNash Equilibriumgame in his papers. Finally,Herbert A. Simon's essay, "Bandwagon and underdog effects and the possibility of election predictions", published in 1954 inPublic Opinion Quarterly,[9]has been cited as a predecessor to the concept we now know as critical mass. Critical mass and the theories behind it help us to understand aspects of humans as they act and interact in a larger social setting.
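The threshold mechanism described above can be made concrete with a short simulation. The following Python sketch is illustrative only: the population size and threshold values are hypothetical rather than taken from Schelling or Granovetter, but the dynamic mirrors Granovetter's classic example, in which each person joins once the share of people already participating reaches that person's personal threshold, and removing a single low-threshold individual can prevent a full cascade.

```python
# Minimal sketch of a Granovetter-style threshold model (illustrative numbers only).
# Each agent adopts once the share of current adopters reaches that agent's
# personal threshold; a tiny change in the threshold distribution decides
# whether adoption fizzles out or cascades past critical mass.

def run_cascade(thresholds):
    """Return the final number of adopters for a list of thresholds in [0, 1]."""
    n = len(thresholds)
    adopters = sum(1 for t in thresholds if t == 0)   # unconditional adopters seed the process
    while True:
        share = adopters / n
        new_total = sum(1 for t in thresholds if t <= share)
        if new_total == adopters:                     # no one else is persuaded: equilibrium
            return adopters
        adopters = new_total

# Hypothetical population of 100 agents with evenly spread thresholds.
uniform = [i / 100 for i in range(100)]               # thresholds 0.00, 0.01, ..., 0.99
print(run_cascade(uniform))                           # -> 100: each adoption triggers the next

# Same population, but the single agent with threshold 0.01 is replaced by one with threshold 1.00.
gapped = [0.0] + [i / 100 for i in range(2, 101)]
print(run_cascade(gapped))                            # -> 1: the chain breaks and no cascade forms
```

In this toy setting, the point beyond which the chain of mutually triggering adoptions can no longer be broken plays the role of the critical mass.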
Certain theories, such asMancur Olson'sLogic of Collective Action[10]orGarrett Hardin'sTragedy of the Commons,[11]work to help us understand why humans do or adopt certain things which are beneficial to them, or, more importantly, why they do not. Much of this reasoning has to do with individual interests trumping that which is best for the collective whole, which may not be obvious at the time. Oliver,Marwell, andTeixeiratackle this subject in relation to critical mass theory in a 1985 article published in theAmerican Journal of Sociology.[12]In their essay, they define action in service of a public good as "collective action". "Collective Action" is beneficial to all, regardless of individual contribution. By their definition, then, "critical mass" is the small segment of a societal system that does the work or action required to achieve the common good. The "Production Function" is the correlation between resources, or what individuals give in an effort to achieve public good, and the achievement of that good. Such a function can be decelerating, where there is less utility per unit of resource, and in such a case, contributions can taper off. On the other hand, the function can be accelerating, where the more resources that are used, the bigger the payback. "Heterogeneity" is also important to the achievement of a common good. Variations (heterogeneity) in the value individuals put on a common good or the effort and resources people give are beneficial, because if certain people stand to gain more, they are willing to give or pay more. Critical mass theory in gender politics and collective political action is defined as the critical number of personnel needed to affect policy and make a change not as the token but as an influential body.[13]This number has been placed at 30% before women are able to make a substantial difference in politics.[14][15]However, other research suggests lower numbers of women working together in legislature can also affect political change.[16][17]Kathleen Bratton goes so far as to say that women, in legislatures where they make up less than 15% of the membership, may actually be encouraged to develop legislative agendas that are distinct from those of their male colleagues.[18]Others argue that we should look more closely at parliamentary and electoral systems instead of critical mass.[19][20] While critical mass can be applied to many different aspects of sociodynamics, it becomes increasingly applicable to innovations in interactive media such as the telephone, fax, or email. With other non-interactive innovations, the dependence on other users was generally sequential, meaning that the early adopters influenced the later adopters to use the innovation. However, with interactive media, the interdependence was reciprocal, meaning both users influenced each other. This is due to the fact that interactive media have a highnetwork effect,[21]wherein the value and utility of a good or service increases the more users it has. Thus, the increase of adopters and the quickness to reach critical mass can be faster and more intense with interactive media, as can the rate at which previous users discontinue their use. The more people that use it, the more beneficial it will be, thus creating a type of snowball effect, and conversely, if users begin to stop using the innovation, the innovation loses utility, thus pushing more users to discontinue their use.[22] InM.
Lynne Markus' essay inCommunication Researchentitled "Toward a 'Critical Mass' Theory of Interactive Media",[22]several propositions are made that attempt to predict under what circumstances interactive media is most likely to achieve critical mass and reachuniversal access—a "common good", using Oliver et al.'s terminology. One proposition states that such media's existence is all or nothing, wherein if universal access is not achieved, then, eventually, use will discontinue. Another proposition suggests that a media's ease of use and inexpensiveness, as well as its utilization of an "active notification capability" will help it achieve universal access. The third proposition states that the heterogeneity, as discussed by Oliver et al., is beneficial, especially if users are dispersed over a larger area, thus necessitating interactivity via media. Fourth, it is very helpful to have highly sought-after individuals to act as early adopters, as their use acts as incentive for later users. Finally, Markus posits that interventions, both monetarily and otherwise, by governments, businesses, or groups of individuals will help a media reach its critical mass and achieve universal access. An example put forth by Rogers inDiffusion of Innovationswas that of thefax machine, which had been around for almost 150 years before it became popular and widely used. It had existed in various forms and for various uses, but with more advancements in the technology of faxes, including the use of existing phone lines to transmit information, coupled with falling prices in both machines and cost per fax, the fax machine reached a critical mass in 1987, when "Americans began to assume that 'everybody else' had a fax machine".[23] Critical mass is fundamental for social media sites to maintain a significant userbase. Reaching a sustainable population is dependent on the collective rather than individual use of the technology. The adoption of the platform creates the effects ofpositive externalitieswhereby each additional user provides additional perceived benefits to previous and potential adopters.[24] Facebookprovides a good illustration of critical mass. In its initial stages, Facebook had limited value to users due to the lack of network effects and critical mass.[25]The principle behind the strategy is that at each time Facebook enlarged the size of the community, the saturation never drops below the critical mass, reaching the desired diffusion effect discussed in Rogers'Diffusion of innovations.[26]Facebook promoted the innovation to groups that were likely to adopt en masse. Between 2003–2004 Facebook was exclusive to universities such as Harvard, Yale and 34 other schools. Perceived critical mass grew amongst the student population, and by the end of 2004 more than a million students had signed up, continuing to[clarification needed]when Facebook opened the platform to high-school and university students worldwide in 2005, before eventually launching to the public in 2006.[27]By obtaining critical mass in each relative population before advancing to the next audience, Facebook developed enough saturation to become self-sustaining. Being self-sustained helps to grow and maintain network size, whilst also enhancing the perceived critical mass of those yet to adopt.
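The all-or-nothing proposition and the snowball dynamics described above can also be sketched in simple computational terms. The sketch below is a toy model, not drawn from Markus or Rogers: the value of the medium is taken to be the fraction of the population using it, each person keeps using it only while that value covers a personal cost, and the specific numbers (a population of 1000, costs spread over 0.2 to 0.8, the two seed sizes) are hypothetical. It shows a user base seeded below critical mass unravelling toward zero while one seeded above it snowballs toward universal access.

```python
# Toy model of network effects and critical mass (all numbers hypothetical).
# The value of an interactive medium is taken to be the fraction of the
# population currently using it; each person keeps using it only while that
# value meets their personal cost, so the user base either unravels or
# snowballs depending on whether the initial seed exceeds critical mass.

N = 1000
costs = [0.2 + 0.6 * i / N for i in range(N)]           # personal costs spread over [0.2, 0.8]

def settle(seed_users):
    """Iterate adoption and discontinuance from an initial seed until it stabilises."""
    users = seed_users
    while True:
        value = users / N                                # network effect: value grows with use
        new_users = sum(1 for c in costs if c <= value)  # who finds the medium worth using now
        if new_users == users:
            return users
        users = new_users

print(settle(400))   # below the critical mass (around 500 here): use unravels to 0
print(settle(600))   # above the critical mass: adoption snowballs to all 1000
```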
https://en.wikipedia.org/wiki/Critical_mass_(sociodynamics)
Crowd manipulationis the intentional or unwitting use of techniques based on the principles ofcrowd psychologyto engage, control, or influence the desires of acrowdin order to direct its behavior toward a specific action.[1] History suggests that the socioeconomic and political context and location influence dramatically the potential for crowd manipulation. Such time periods in America included: Internationally, time periods conducive to crowd manipulation included the Interwar Period (i.e. following the collapse of the Austria-Hungarian, Russian, Ottoman, and German empires) and Post-World War II (i.e. decolonization and collapse of the British, German, French, and Japanese empires).[4]The prelude to the collapse of theSoviet Unionprovided ample opportunity for messages of encouragement. TheSolidarity Movementbegan in the 1970s thanks in part to leaders likeLech WalesaandU.S. Information Agencyprogramming.[5]In 1987, U.S. PresidentRonald Reagancapitalized on the sentiments of the West Berliners as well as the freedom-starved East Berliners to demand thatGeneral Secretary of the Communist Party of the Soviet UnionMikhail Gorbachev"tear down" theBerlin Wall.[6]During the 2008 presidential elections, candidateBarack Obamacapitalized on the sentiments of many American voters frustrated predominantly by the recent economic downturn and the continuing wars inIraqandAfghanistan. His simple messages of "Hope", "Change", and "Yes We Can" were adopted quickly and chanted by his supporters during his political rallies.[7] Historical context and events may also encourage unruly behavior. Such examples include the: In order to capitalize fully upon historical context, it is essential to conduct a thorough audience analysis to understand the desires, fears, concerns, and biases of the target crowd. This may be done through scientific studies, focus groups, and polls.[3] It is also imperative to differentiate between a crowd and a mob to gauge the extent to which crowd manipulation should be used. AUnited Nationstraining guide on crowd control states that "a crowd is a lawful gathering of people, who are organized disciplined and have an objective. A mob is a crowd who have gone out of control because of various and powerful influences, such as racial tension or revenge."[9] The crowd manipulator and the propagandist may work together to achieve greater results than they would individually. According to Edward Bernays, the propagandist must prepare his target group to think about and anticipate a message before it is delivered. Messages themselves must be tested in advance since a message that is ineffective is worse than no message at all.[10]Social scientistJacques Ellulcalled this sort of activity "pre-propaganda", and it is essential if the main message is to be effective. Ellul wrote inPropaganda: The Formation of Men's Attitudes: Direct propaganda, aimed at modifying opinions and attitudes, must be preceded by propaganda that is sociological in character, slow, general, seeking to create a climate, an atmosphere of favorable preliminary attitudes. No direct propaganda can be effective without pre-propaganda, which, without direct or noticeable aggression, is limited to creating ambiguities, reducing prejudices, and spreading images, apparently without purpose.
… InJacques Ellul's book,Propaganda: The Formation of Men's Attitudes, it states that sociological propaganda can be compared to plowing, direct propaganda to sowing; you cannot do the one without doing the other first.[11]Sociological propaganda is a phenomenon where a society seeks to integrate the maximum number of individuals into itself by unifying its members' behavior according to a pattern, spreading its style of life abroad, and thus imposing itself on other groups. Essentially sociological propaganda aims to increase conformity with the environment that is of a collective nature by developing compliance with or defense of the established order through long term penetration and progressive adaptation by using all social currents. The propaganda element is the way of life with which the individual is permeated and then the individual begins to express it in film, writing, or art without realizing it. This involuntary behavior creates an expansion of society through advertising, the movies, education, and magazines. "The entire group, consciously or not, expresses itself in this fashion; and to indicate, secondly that its influence aims much more at an entire style of life."[12]This type of propaganda is not deliberate but springs up spontaneously or unwittingly within a culture or nation. This propaganda reinforces the individual's way of life and represents this way of life as best. Sociological propaganda creates an indisputable criterion for the individual to make judgments of good and evil according to the order of the individual's way of life. Sociological propaganda does not result in action, however, it can prepare the ground fordirect propaganda. From then on, the individual in the clutches of such sociological propaganda believes that those who live this way are on the side of the angels, and those who don't are bad.[13] Bernays expedited this process by identifying and contracting those who most influence public opinion (key experts, celebrities, existing supporters, interlacing groups, etc.). After the mind of the crowd is plowed and the seeds of propaganda are sown, a crowd manipulator may prepare to harvest his crop.[10] Psychological warfare(PSYWAR), or the basic aspects of modern psychological operations (PsyOp), has been known by many other names or terms, including Military Information Support Operations (MISO), Psy Ops,political warfare, "Hearts and Minds", andpropaganda.[14][15]The term is used "to denote any action which is practiced mainly by psychological methods with the aim of evoking a planned psychological reaction in other people".[16] Various techniques are used, and are aimed at influencing a target audience'svaluesystem,beliefsystem,emotions,motives,reasoning, orbehavior. It is used to induceconfessionsor reinforce attitudes and behaviors favorable to the originator's objectives, and are sometimes combined withblack operationsorfalse flagtactics. It is also used to destroy the morale of enemies through tactics that aim to depress troops' psychological states.[17][18] Target audiences can begovernments,organizations,groups, andindividuals, and is not just limited to soldiers. Civilians of foreign territories can also be targeted by technology and media so as to cause an effect on the government of their country.[19] Prestige is a form of "domination exercised on our mind by an individual, a work, or an idea." The manipulator with great prestige paralyses the critical faculty of his crowd and commands respect and awe. 
Authority flows from prestige, which can be generated by "acquired prestige" (e.g. job title, uniform, judge's robe) and "personal prestige" (i.e. inner strength). Personal prestige is like that of the "tamer of a wild beast" who could easily devour him. Success is the most important factor affecting personal prestige. Le Bon wrote, "From the minute prestige is called into question, it ceases to be prestige." Thus, it would behoove the manipulator to prevent this discussion and to maintain a distance from the crowd lest his faults undermine his prestige.[22] At 22,Winston Churchilldocumented his conclusions about speaking to crowds. He titled it "The Scaffolding of Rhetoric" and it outlined what he believed to be the essentials of any effective speech. Among these essentials are: Adolf Hitlerbelieved he could apply the lessons of propaganda he learned in his early World War I experiences and apply those lessons to benefit Germany thereafter. His comments were as follows: TheNazi Partyin Germany used propaganda to develop acult of personalityaround Hitler. Historians such asIan Kershawemphasise the psychological impact of Hitler's skill as an orator.[26]Neil Kressel reports, "Overwhelmingly ... Germans speak with mystification of Hitler's 'hypnotic' appeal".[27]Roger Gill states: "His moving speeches captured the minds and hearts of a vast number of the German people: he virtually hypnotized his audiences".[28]Hitler was especially effective when he could absorb the feedback from a live audience, and listeners would also be caught up in the mounting enthusiasm.[29]He looked for signs of fanatic devotion, stating that his ideas would then remain "like words received under an hypnotic influence."[30][31] Ever since the advent of mass production, businesses and corporations have used crowd manipulation to sell their products.Advertisingserves as propaganda to prepare a future crowd to absorb and accept a particular message.Edward Bernaysbelieved that particular advertisements are more effective if they create an environment which encourages the purchase of certain products. Instead of marketing the features of a piano, sell prospective customers the idea of a music room.[32]
https://en.wikipedia.org/wiki/Crowd_manipulation
Ahappeningis a performance, event, orsituationart, usually asperformance art. The term was first used byAllan Kaprowin 1959 to describe a range of art-related events.[1] Allan Kaprowfirst coined the term "happening" in the spring of 1959 at an art picnic atGeorge Segal's farm to describe the art pieces being performed.[2]The first appearance in print about one was in Kaprow's famous "Legacy ofJackson Pollock" essay that was published in 1958 but primarily written in 1956. "Happening" also appeared in print in one issue of theRutgers Universityundergraduate literary magazine,Anthologist.[3]The form was imitated and the term was adopted by artists across theU.S.,Germany, andJapan. Happenings are difficult to describe, in part because each one is unique. One definition comes fromWardrip-FruinandMontfortinThe New Media Reader, "The term 'happening' has been used to describe many performances and events, organized by Allan Kaprow and others during the 1950s and 1960s, including a number of theatrical productions that were traditionally scripted and invited only limited audience interaction."[4]: 83Another definition is "a purposefully composed form of theatre in which diverse alogical elements, including nonmatrixed performing, are organized in a compartmented structure".[5]However, Canadian theatre critic and playwrightGary Botting, who himself had "constructed" several happenings, wrote in 1972: "Happenings abandoned the matrix of story and plot for the equally complex matrix of incident and event."[6] Kaprow was a student ofJohn Cage, who had experimented with "musical happenings" atBlack Mountain Collegeas early as 1952.[7]Kaprow combined the theatrical and visual arts with discordant music. "His happenings incorporated the use of huge constructions or sculptures similar to those suggested byArtaud," wrote Botting, who also compared them to the "impermanent art" of Dada. "A happening explores negative space in the same way Cage explored silence. It is a form of symbolism: actions concerned with 'now' or fantasies derived from life, or organized structures of events appealing to archetypal symbolic associations."[8] Happenings can be a form of participatory new media art, emphasizing an interaction between the performer and the audience. In hisWater,Robert Whitmanhad the performers drench each other with colored water. "One girl squirmed between wet inner tubes, ultimately struggling through a large silver vulva."[9]Claes Oldenburg, best known for his innovative sculptures, used a vacant house, his own store, and the parking lot of the American Institute of Aeronautics and Astronautics in Los Angeles forInjun,World's Fair IIandAUT OBO DYS.[10]The idea was to break down the fourth wall between performer and spectator; with the involvement of the spectator as performer, objective criticism is transformed into subjective support. For some happenings, everyone present is included in the making of the art and even the form of the art depends on audience engagement, for they are a key factor in where the performers' spontaneity leads.[11] Later happenings had no set rules, only vague guidelines that the performers follow based on surrounding props. Unlike other forms of art, happenings that allow chance to enter are ever-changing. When chance determines the path the performance will follow, there is no room for failure. 
As Kaprow wrote in his essay, "'Happenings' in the New York Scene", "Visitors to a Happening are now and then not sure what has taken place, when it has ended, even when things have gone 'wrong". For when something goes 'wrong', something far more 'right,' more revelatory, has many times emerged".[4]: 86 Kaprow's piece18 Happenings in 6 Parts(1959) is commonly cited as the first happening, although that distinction is sometimes given to a 1952 performance ofTheater Piece No. 1atBlack Mountain CollegebyJohn Cage, one of Kaprow's teachers in the mid-1950s.[7]Cage stood reading from a ladder,Charles Olsonread from another ladder,Robert Rauschenbergshowed some of his paintings and played wax cylinders ofÉdith Piafon an Edison horn recorder,David Tudorperformed on aprepared pianoandMerce Cunninghamdanced.[12]All these things took place at the same time, among the audience rather than on a stage. Cage credited a collaborative close reading ofAntonin Artaud'sThe Theatre and Its DoublewithM.C. RichardsandDavid Tudoras the impetus for the event.[13] Happenings flourished inNew York Cityin the late 1950s and early 1960s. Key contributors to the form includedCarolee Schneemann,Red Grooms,Robert Whitman,Jim DineCar Crash,[14]Claes Oldenburg,Robert Delford Brown,Lucas Samaras, andRobert Rauschenberg. Some of their work is documented in Michael Kirby's bookHappenings(1966).[15]Kaprow claimed that "some of us will become famous, and we will have proven once again that the only success occurred when there was a lack of it".[4]: 87In 1963Wolf Vostellmade the happeningTV-Buryingat theYam Festivalin coproduction with theSmolin Galleryand in 1964 the happeningYouinGreat Neck, New Yorkwhich is onLong Island.[16][17] During the summer of 1959,Red Groomsalong with others (Yvonne Andersen, Bill Barrell, Sylvia Small and Dominic Falcone) staged the non-narrative "play"Walking Man, which began with construction sounds, such as sawing. Grooms recalls, "The curtains were opened by me, playing a fireman wearing a simple costume of white pants and T-shirt with a poncholike cloak and a Smokey Stoverish fireman's helmet. Bill, the 'star' in a tall hat and black overcoat, walked back and forth across the stage with great wooden gestures. Yvonne sat on the floor by a suspended fire engine. She was a blind woman with tin-foil covered glasses and cup. Sylvia played a radio and pulled on hanging junk. For the finale, I hid behind a false door and shouted pop code words. Then the cast did a wild run around and it ended".[18]Dubbing his 148 Delancey Street studio The Delancey Street Museum, Grooms staged three more happenings there,A Garden,The Burning BuildingandThe Magic Trainride(originally titledFireman's Dream). No wonder Kaprow called Grooms "a Charlie Chaplin forever dreaming about fire".[18]On the opening night ofThe Burning Building,Bob Thompsonsolicited an audience member for a light, since none of the cast had one, and this gesture of spontaneous theater recurred in eight subsequent performances.[18]The Japanese artistYayoi Kusamastagednudehappenings during the late '60s in New York City.[19][20] Happenings emphasize the organic connection between art and its environment. Kaprow supports that "happenings invite us to cast aside for a moment these proper manners and partake wholly in the real nature of the art and life. 
It is a rough and sudden act, where one often feels "dirty", and dirt, we might begin to realize, is also organic and fertile, and everything including the visitors can grow a little into such circumstances." Happenings have no plot or philosophy, but rather are materialized in an improvisatory fashion. There is no direction; thus the outcome is unpredictable. "It is generated in action by a headful of ideas...and it frequently has words but they may or may not make literal sense. If they do, their meaning is not representational of what the whole element conveys. Hence they carry a brief, detached quality. If they do not make sense, then they are acknowledgement of the sound of the word rather than the meaning conveyed by it."[22] Due to the convention's nature, there is no such term as "failure" that can be applied. "For when something goes "wrong", something far more "right", more revelatory may emerge. This sort of sudden near-miracle presently is made more likely by chance procedures." In conclusion, a happening is fresh while it lasts and cannot be reproduced.[4]: 86 Regarding happenings,Red Groomshas remarked, "I had the sense that I knew it was something. I knew it was something because I didn't know what it was. I think that's when you're at your best point. When you're really doing something, you're doing it all out, but you don't know what it is."[18] The lack of plot as well as the expected audience participation can be likened to Augusto Boal'sTheater of the Oppressed, which also claims that "spectator is a bad word". Boal expected audience members to participate in the theater of the oppressed by becoming the actors. His goal was to allow the downtrodden to act out the forces oppressing them in order to mobilize the people into political action. Both Kaprow and Boal are reinventing theater to try to make plays more interactive and to abolish the traditional narrative form to make theater something more free-form and organic.[23] The combine performance mixes the four-dimensional elements of performance with the three-dimensional elements of happening; much as is the case with performance "living" sculptures.[24] Allan Kaprow and other artists of the 1950s and 1960s who performed these happenings helped put "new media technology developments into context".[4]: 83The happenings allowed other artists to create performances that would attract attention to the issue they wanted to portray. In 1959 the French artistYves Kleinfirst performedZone de Sensibilité Picturale Immatérielle. The work involved the sale of documentation of ownership of empty space (the Immaterial Zone), taking the form of a cheque, in exchange forgold; if the buyer wished, the piece could then be completed in an elaborate ritual in which the buyer would burn the cheque, and Klein would throw half of the gold into theSeine.[25]The ritual would be performed in the presence of an art critic or distinguished dealer, an art museum director and at least two witnesses.[25] In 1960,Jean-Jacques Lebelsupervised and participated in the first European happeningL'enterrement de la ChoseinVenice. For his performance there – calledHappening Funeral Ceremony of the Anti-Process– Lebel invited the audience to attend a ceremony in formal dress. In a decorated room within a grand residence, a draped 'cadaver' rested on a plinth which was then ritually stabbed by an 'executioner' while a 'service' was read consisting of extracts from the French décadent writerJoris-Karl Huysmansand leMarquis de Sade.
Then pall-bearers carried the coffin out into a gondola and the 'body'–which was a mechanical sculpture byJean Tinguely–was ceremonially slid into the canal.[26] Poet and painterAdrian Henriclaimed to have organized the first happenings in England inLiverpoolin 1962,[27]taking place during the Merseyside Arts Festival.[28]The most important event in London was the Albert Hall "International Poetry Incarnation" on June 11, 1965, where an audience of 7,000 people witnessed and participated in performances by some of the leadingavant-gardeyoung British and American poets of the day (seeBritish Poetry RevivalandPoetry of the United States). One of the participants,Jeff Nuttall, went on to organize a number of further happenings, often working with his friendBob Cobbing,sound poetandperformance poet. InTokyoin 1964,Yoko Onocreated a happening by performing herCut Pieceat theSogetsu Art Center. She walked onto the stage draped in fabric, presented the audience with a pair of scissors, and instructed the audience to cut the fabric away gradually until the performer decided they should stop.[29]This piece was presented again in 1966 at theDestruction in Art Symposiumin London, this time allowing the cutting away of her street cloths. InBelgium, the first happenings were organized around 1965–1968 inAntwerp,BrusselsandOstendby artistsHugo HeyrmanandPanamarenko. In theNetherlands, the first documented happening took place in 1961, with the Dutch artist and performerWim T. Schippersemptying a bottle of soda water in the North Sea near Petten. Later on, he organized random walks in the Amsterdam city centre.Provoorganized happenings around the a statueHet Lieverdjeon the Spui, a square in the centre ofAmsterdam, from 1966 till 1968.Policeoften raided these events. In the 1960sJoseph Beuys,Wolf Vostell,Nam June Paik,Charlotte Moorman,Dick Higgins, andHA Schultstaged happenings in Germany. In Canada,Gary Bottingcreated or "constructed" happenings between 1969 (in St. John's, Newfoundland) and 1972 (in Edmonton, Alberta), includingThe Aeolian Stringerin which a "captive" audience was entangled in string emanating from a vacuum cleaner as it made its rounds (similar to Kaprow's "A Spring Happening", where he used a power lawnmower and huge electric fan to similar effect);Zen Rock Festivalin which the central icon was a huge rock with which the audience interacted in unpredictable ways;Black on Blackheld in the Edmonton Art Gallery; and "Pipe Dream," set in a men's washroom with an all-female "cast".[11]InAustralia, theYellow House Artist CollectiveinSydneyhoused 24-hour happenings throughout the early 1970s. Behind theIron CurtaininPoland, artist and theater directorTadeusz Kantorstaged the first happenings beginning in 1965. In the second half of 1970s painter and performerKrzysztof Jungran the Repassage gallery, which promoted performance art in Poland.[30]Also in the second half of the 1980s, a student-based happening movementOrange Alternativefounded by MajorWaldemar Fydrychbecame known for its much attended happenings (over 10 thousand participants at one time) aimed against the military regime led byGeneral Jaruzelskiand the fear blocking the Polish society ever sincemartial lawhad been imposed in December 1981. Since 1993 the artistJens Galschiøthas had political happenings all over theworld. 
In November 1993 he held the happeningmy inner beastwhere twenty sculptures were erected within 55 hours without the knowledge of the authorities all overEurope.Pillar of Shameis a series of Galschiøt's sculptures. The first was erected inHong Kongon 4 June 1997, ahead of the handover from British to Chinese rule on 1 July 1997, as a protest against China's crackdown on theTiananmen Square protests of 1989. On 1 May 1999, a Pillar of Shame was set up on the Zócalo[31]inMexico Cityand it stood for two days in front of the Parliament to protest the oppression of the region's indigenous people. The non-profit, artist-run organization iKatun[32]and the artist group The Institute of Infinitely Small Things have reflected the influence of happenings while incorporating the medium of the internet. Their aim is one which "fosters public engagement in the politics of information".[full citation needed]Their project entitledThe International Database of Corporate Commandspresents a scrutinizing look at the super-saturating advertising slogans and "commands" of companies. "The Institute for Infinitely Small Things" uses the commands to conduct research performances, performances in which participants attempt to enact, as literally as possible, what the command tells them to do and where it tells them to do it.[33] Starting around 2010, a world-wide group calledThe Order of the Third Birdbegan creatingflashmobstyleart appreciationhappenings.[34][35] In 2018 thePrague-basedperformanceandpoeticscollectiveOBJECT:PARADISEwas established by writers Tyko Say and Jeff Milton.[36]The collective has since aimed to makepoetryreadings more similar to language happenings which involve a variety of interdisciplinary acts and performances occurring at the same time.[37][38] Kaprow explains that happenings are not a new style, but a moral act, a human stand of great urgency, whose professional status as art is less critical than their certainty as an ultimate existential commitment. He argues that once artists have been recognized and paid, they also surrender to the confinement, or rather to the tastes, of the patrons (even if that may not be the intention on both ends). "The whole situation is corrosive, neither patrons nor artists comprehend their role...and out of this hidden discomfort comes a stillborn art, tight or merely repetitive and at worst, chic." Though we may easily blame those offering the temptation, Kaprow reminds us that it is not the publicist's moral obligation to protect the artist's freedom, and artists themselves hold the ultimate power to reject fame if they do not want its responsibilities.[4]: 86 Art and music festivals play a large role in positive and successful happenings. Some of the festivals includeBurning Manand theOregon Country FairnearVeneta, Oregon.[39][40]
https://en.wikipedia.org/wiki/Happening
Improv Everywhere(often abbreviatedIE) is a comedicperformance artgroup based inNew York City, formed in 2001 by Charlie Todd. Its slogan is "We Cause Scenes". The group carries outpranks, which they call "missions", in public places. The stated goal of these missions is to cause scenes of "chaos and joy." Some of the group's missions use hundreds or even thousands of performers and are similar toflash mobs, while other missions utilize only a handful of performers. Improv Everywhere has stated that they do not identify their work with the term flash mob, in part because the group was created two years prior to the flash mob trend, and the group has an apolitical nature.[1] While Improv Everywhere was created years beforeYouTube, the group has grown in notoriety since joining the site in April 2006. To date, Improv Everywhere's videos have been viewed over 470 million times on YouTube.[2]They have over 1.9 million YouTube subscribers.[2]In 2007, the group shot a television pilot forNBC.[3]In May 2009,HarperCollinsreleased a book about Improv Everywhere,Causing a Scene[4]The book, written by founder Charlie Todd and senior agent Alex Scordelis, is a behind-the-scenes look at some of the group's stunts. In 2013, a feature-length documentary about Improv Everywhere premiered at theSouth By SouthwestFestival in Austin, Texas. The film, titledWe Cause Scenes, was released digitally oniTunes,Netflixand other platforms in 2014.[5] In 2019, Improv Everywhere produced theDisney+live-action seriesPixar In Real Life, which premiered on 12 November 2019, with eleven episodes released monthly.[6][7] After graduating from theUniversity of North Carolina at Chapel Hill,[8]Todd started the group in August 2001 after playing a prank in a Manhattan bar with some friends that involved him pretending to be musicianBen Folds.[9]Later that year Todd started taking classes at theUpright Citizens Brigade Theatrein New York City where he first met most of the "Senior Agents" of Improv Everywhere. The owners of the theatre, TheUpright Citizens Brigade(UCB), had a television series from 1998 to 2000 onComedy Central. While primarily asketch comedyshow, the UCB often filmed their characters in public places with hidden cameras and showed the footage under the end credits. Both the UCB's show and their teachings on improv have been influential to Improv Everywhere.[1]Todd currently performs on a house team at the UCBT in New York, where he also taught for many years.[10] All the missions share a certainmodus operandi: Members ("agents") play their roles entirely straight, not breaking character or betraying that they are acting. IE claims the missions are benevolent, aiming to give the observers a laugh and a positive experience.[11] Improv Everywhere's most popular YouTube video is "Frozen Grand Central", which has received over 35 million views.[12]The two-minute video depicts 207 IE Agents freezing in place simultaneously for five minutes in New York'sGrand Central Terminal. 
The video was listed as number 49 inUrlesque's 100 Most Iconic Internet Videos.[13]Martin Bashirdeclared onNightlinethat the video was "one of the funniest moments ever captured on tape."[14]According to Charlie Todd, the prank has been recreated by fans in 100 cities around the world.[15] On 21 May 2005, IE staged a fakeU2street concerton a rooftop in New York hours before the real U2 were scheduled to perform atMadison Square Garden.[16]Just like at the filming of the band'sWhere the Streets Have No Namevideo in 1987, the police eventually shut the performance down, but not before IE was able to exhaust their four-song repertoire and get most of the way through an encore repeat of "Vertigo". The crowd, even those who had realized that this was a prank, shouted "one more song!", and then "let them play!" when the police officers arrived. This mission was number 23 on theVH1countdown of the "40 Greatest Pranks."[17] This 2005 performance piece, the biggest mission to that date in both logistics and personnel, is online at YouTube with about 2 million views;[18]it shows 70 agents performing precisely choreographed moves while situated in each of the 70 windows of a large six-floor Manhattan retail store. The performers were all clothed in black and were instructed to follow directions (written down on palm-sized sheets) unique to their spot (window). The performance lasted about 4 minutes and included solo dances by three performers and the spelling of 'Look Up More' with 4-foot-tall letters held up by 10 of the performers.
https://en.wikipedia.org/wiki/Improv_Everywhere
Azapis a form of politicaldirect actionthat came into use in the 1970s in the United States. Popularized by the earlygay liberationgroupGay Activists Alliance, a zap was a raucous public demonstration designed to embarrass a public figure or celebrity while calling the attention of both gays and straights to issues of gay rights. Although Americanhomophileorganizations had engaged inpublic demonstrationsas early as 1959, these demonstrations tended to be peacefulpicket lines. Following the 1969Stonewall riots, considered the flashpoint of the modern gay liberation movement, younger, more radical gay activists were less interested in the staid tactics of the previous generation. Zaps targeted politicians and other public figures, and many addressed the portrayal of gay people in the popular media. LGBT and AIDS activist groups continued to use zap-like tactics into the 1990s and beyond. Beginning in 1959,[1]and continuing for the next ten years, gay people occasionally demonstrated against discriminatory attitudes toward and treatment of homosexuals. Although these sometimes took the form ofsit-ins,[2]and on at least two occasions riots,[1][3]for the most part these were picket lines. Many of these pickets were organized by Eastern affiliates of such groups as theMattachine Societychapters out of New York City and Washington, D.C., Philadelphia'sJanus Societyand the New York chapter ofDaughters of Bilitis. These groups acted under the collective nameEast Coast Homophile Organizations(ECHO).[4]Organized pickets tended to be in large urban population centers because these centers were where the largest concentrations of homophile activists were located.[5]Picketers at ECHO-organized events were required to follow strict dress codes. Men had to wear ties, preferably with a jacket. Women were required to wear skirts. The dress code was imposed by Mattachine Society Washington founderFrank Kameny, with the goal of portraying homosexuals as "presentable and 'employable'".[6] On June 28, 1969, the patrons of theStonewall Inn, agay barlocated in New York City'sGreenwich Village, resisted a police raid. Gay people returned to the Stonewall and the surrounding neighborhood for the next several nights for additional confrontations.[7]Although there had been two smaller riots — in Los Angeles in 1959 andSan Francisco in 1966— it is the Stonewall riots that have come to be seen as the flashpoint of a new gay liberation movement.[8][9] In the weeks and months following Stonewall, a dramatic increase in gay political organizing took place. Among the many groups that formed was theGay Activists Alliance, which focused more exclusively on organizing around gay issues and less on the general leftist political perspective taken by such other new groups as theGay Liberation Frontand Red Butterfly.[10]GAA member Marty Robinson is credited with developing the zap following a March 7, 1970, police raid on a gay bar called the Snake Pit.[11]Police arrested 167 patrons.
One, an Argentine national namedDiego Viñales, so feared the possibility of deportation that he leapt from a second-story window of the police station, impaling himself on the spikes of an iron fence.[12]Gay journalist and activistArthur Evanslater recalled how the raid and Viñales' critical injuries inspired the technique: The Snake Pit incident truly outraged us, and we put out a leaflet saying that, in effect, regardless of how you looked at it, Diego Viñales was pushed out the window and we were determined to stop it....There was no division for us between the political and personal. We were never given the option to make that division. We lived it. So we decided that people on the other side of the power structure were going to have the same thing happen to them. The wall that they had built protecting themselves from the personal consequences of their political decisions was going to be torn down and politics was going to become personal for them.[13] Zaps typically included sudden onset against vulnerable targets, noisiness, verbal assaults and media attention. Tactics included sit-ins, disruptive actions and street confrontations.[14] GAA founding memberArthur Bellexplained the philosophy of the zap, which he described as "political theater for educating the gay masses": Gays who have as yet no sense of gay pride see a zap on television or read about it in the press. First they are vaguely disturbed at the demonstrators for "rocking the boat"; eventually, when they see how the straight establishment responds, they feel anger. This anger gradually focuses on the heterosexual oppressors, and the gays develop a sense of class-consciousness. And the no-longer-closeted gays realize that assimilation into the heterosexual mainstream is no answer: gays must unite among themselves, organize their common resources for collective action, and resist.[15] Thus, obtaining media coverage of the zap became more important than the subject of the zap itself.[16]It was precisely this anti-assimilationist attitude that led some mainstream gay people and groups to oppose zapping as a strategy. TheNational Gay Task Force'smedia director, Ronald Gold, despite having been involved in early GAA zaps, came to urge GAA not to engage in the tactic. As zaps and other activism began opening doors for nascent gay organizations like NGTF and the Gay Media Task Force, these groups became more invested in negotiating with the people within the mainstream power structures rather than in maintaining a tactic they saw as being of the outsider.[17] One area of special interest to GAA was how LGBT people were portrayed on television and on film. There were very few gay characters on television in the1960sandearly 1970s, and many of them were negative. Several in particular, including episodes ofMarcus Welby, M.D.in 1973 and 1974 and a 1974 episode ofPolice Woman, were deemed especially egregious, with their presentation ofhomosexuality as a mental illness, gays as child molesters and lesbians as psychotic killers echoing similar portrayals that continued a trend dating back to before 1961. In response to the 1973Welbyepisode, "The Other Martin Loring", a GAA representative tried to negotiate with ABC,[18]but when negotiations failed GAA zapped the network on February 16, 1973, picketing its New York City headquarters and sending 30-40 members to occupy the office of ABC presidentLeonard Goldenson.
Executives offered to meet with two GAA representatives but GAA insisted that all protesters be present. The network refused. All but six of the zappers then left; the final six were arrested but charges were later dropped.[19] When NBC aired "Flowers of Evil", an episode ofPolice Womanabout a trio of lesbians murdering nursing home residents for their money, it was met with a zap byLesbian Feminist Liberation. LFL, which had split from GAA over questions of lack of male attention to women's issues, zapped NBC's New York office on November 19, occupying the office of vice presidentHerminio Traviesasovernight. NBC agreed not to rerun the episode.[20]LFL had earlier zapped an episode ofThe Dick Cavett Showon which anti-feminist authorGeorge Gilderwas the guest.[21] Zaps could sometimes involve physical altercations and vandalism. GAA co-founder Morty Manford got into scuffles with security and administration during his successful effort to found the student club Gay People at Columbia University in 1971, as well as at a famous protest against homophobia at the eliteInner Circleevent in 1972 (which led Morty's motherJeanne Manfordto foundPFLAG).[22][23]GAA was later associated with a series of combative "super-zaps" against homophobic politicians and anti-gay business owners in the summer of 1977. On one occasion activists threw eggs and firecrackers at the home of Adam Walinsky, a state official who had denounced new gay rights legislation for New York, and cut the phone lines of his house. AlthoughTimemagazine derided them as "Gay goons", and Walinsky won an injunction against protests near his home, the actions succeeded in keeping the conservative backlash of the late-1970s out of New York state.[24][25][26] ActivistMark Segalwas a very active zapper, usually acting alone, sometimes with a compatriot operating under the name "Gay Raiders". His guerilla zaps frequently drew national news coverage, sometimes from the target of the zaps themselves. Some of his more successful zaps include: chaining himself to a railing at a taping ofThe Tonight Show Starring Johnny Carsonin early March 1973;[27]handcuffing himself and a friend to a camera at a 7 May 1973 taping ofThe Mike Douglas Showafter producers cancelled a planned discussion of gay issues;[28][29]disrupting a live broadcast ofThe Today Showon 26 October 1973[30](resulting in an off-camera interview withBarbara Walters, who explained the reason for the zap);[31]and interruptingWalter Cronkiteduring a live newscast of theCBS Evening Newson 11 December 1973 by rushing the set with a sign readingGays Protest CBS Prejudice(after a brief interruption, Cronkite reported the zap).[32] Politicians and other public figures were also the targets of zaps. New York MayorJohn Lindsaywas an early and frequent GAA target, with GAA insisting that Lindsay take a public stance on gay rights issues. Lindsay, elected as a liberal Republican, preferred quiet coalition building and also feared that publicly endorsing gay rights would damage his chances at the Presidency; he refused to speak publicly in favor of gay rights and refused to meet with GAA to discuss passing a citywide anti-discrimination ordinance.[16]The group's first zap, on April 13, 1970,[33]involved infiltrating opening night of the 1970Metropolitan Operaseason, shouting gay slogans when the mayor and his wife made their entrance.[34]Lindsay was zapped again on April 19 as he taped an episode of his weekly television program,With Mayor Lindsay. 
Approximately 40 GAA members obtained tickets to the taping. Some GAA members rushed the stage calling for the mayor to endorse gay rights; others called out comments from the audience, booed, stomped their feet and otherwise disrupted taping. One notable exchange came when the mayor noted it was illegal to blow car horns in New York, drawing the response "It's illegal to blow a lot of things!"[35]When Lindsay announced his candidacy for the Presidency in the 1972 election, GAA saw the opportunity to bring gay issues to national attention and demanded of each potential candidate a pledge to support anti-discrimination. Lindsay was among those who responded favorably.[clarification needed][36] Zapping migrated to the West Coast as early as 1970, when a coalition of several Los Angeles groups targetedBarney's Beanery. Barney's had long displayed a wooden sign at its bar reading "FAGOTS [sic] – STAY OUT". Although there were few reports of actual anti-gay discrimination at Barney's, activists found the sign's presence galling and refused to patronize the place, even when gay gatherings were held there. On February 7, over 100 people converged on Barney's. They engaged in picketing and leafletting outside and occupied tables for long periods inside with small orders.[note 1]The owner of Barney's not only refused to take down the sign, he put up more signs made of cardboard, harassed the gay customers inside, refused service to them, ordered them out of the restaurant and eventually assaulted a customer and called the sheriff. After several hours and consultation with the sheriff's department, the original wooden sign was taken down and stored out of sight and the new cardboard signs were removed and distributed among the demonstrators.[37][note 2] Encouraged by GAA co-founderArthur Bell, in his capacity as a columnist forThe Village Voice, activists employed zaps againstWilliam Friedkinand the cast and crew of the 1980 filmCruising. In 1979,Cruisingopponents blew whistles, shined lights into camera lenses and otherwise disrupted filming to protest how the gay community and the leather sub-culture in particular were being portrayed.[38] Emerging activist groups in other countries adopted the zap as a tactic. The British GLFzapped the Festival of Light, a morality campaign, in 1971. GLF memberPeter Tatchellhas continued to engage in zaps in the intervening decades, both singly and in association with such organizations as the British GLF andOutRage!. In Australia, Sydney Gay Liberation perpetrated a series of zaps beginning in 1973, including engaging in public displays of affection, leafletting and sitting in at a pub rumored to be refusing service to gay customers. Gay Activists Alliance in Adelaide zapped a variety of targets, including a gynecologist perceived to be anti-lesbian, a religious conference atParkin-Wesley Collegeand politicians and public figures such asSteele Hall,Ernie Sigley, John Court andMary Whitehouse.[39] In response to theAIDSepidemic, the direct action groupAIDS Coalition To Unleash Power(ACT UP) formed in 1987. ACT UP adopted a zap-like form of direct action reminiscent of the earlier GAA-style zaps. 
Some of these included: a March 24, 1987 "die-in" onWall Street, in which 250 people demonstrated against what they saw as price gouging for anti-HIV drugs; the October 1988 attempted shut-down of theFood and Drug Administrationheadquarters inRockville, Maryland, to protest perceived foot-dragging in approving new AIDS treatments; and perhaps most notoriously,Stop the Church, a December 12, 1989, demonstration in and aroundSt. Patrick's Cathedralin opposition to the Catholic Church's opposition to condom use to prevent the spread of HIV.[40] Queer Nationformed in 1990 and adopted the militant tactics of ACT UP and applied them more generally to LGBT issues. Queer Nation members were known for entering social spaces like straight bars and clubs and engaging in straight-identified behaviour like playingspin the bottleto make the point that most public spaces were straight spaces. QN would stage "kiss-ins" in public places like shopping malls or sidewalks, both as a shock tactic directed at heterosexuals and to point out that gay people should be able to engage in the same public behaviours as straight people. Echoing the disruption a decade earlier during the filming ofCruising, Queer Nation and other direct action groups disrupted filming ofBasic Instinctover what they believed were negative portrayals of lesbian and bisexual women.[41]
https://en.wikipedia.org/wiki/Zap_(action)
"Flash Crowd" is a1973English-languagenovellabyscience fiction authorLarry Niven,[1]one of a series about the social consequence of inventing an instant, practically freedisplacement booth.[2] One consequence not foreseen by the builders of the system was that with the almost immediate reporting of newsworthy events, tens of thousands of people worldwide – along with criminals – wouldteleportto the scene of anything interesting, thus creating disorder and confusion. The plot centers around a television journalist who, after being fired for his inadvertent role in inciting a post-robbery riot in Los Angeles, seeks to independently investigate the teleportation system for the flaws in its design allowing for such spontaneous riots to occur. His investigation takes him to destinations and people around the world within the matter of less than 12 hours before he gets his chance to plead his case on television, and he encounters the wide-ranging effects of displacements upon aspects of human behavior such as settlement, crime, natural resources, agriculture, waste management and tourism. In various other books, for exampleRingworld, Niven suggests that easy transportation might be disruptive to traditional behavior and open the way for new forms of parties, spontaneous congregations, or shopping trips around the world. The central character inRingworld, celebrating his birthday, teleports across time-zones to "lengthen" his birthday multiple times (particularly notable since the first edition had the error of the character heading the wrong direction, increasing that edition's value). Niven's essay "Exercise in Speculation: The Theory and Practice of Teleportation" was published in the collectionAll the Myriad Ways[8]In it he discusses the ideas that underlie his teleportation stories. On theWorld Wide Web, a similar phenomenon can occur, when a web site catches the attention of a large number of people, and gets an unexpected and overloading surge of traffic. This usage was first coined by John Pettitt of Beyond.com in 1996.[citation needed]Multiple other terms for the phenomenon exist, often coming from the name of a particular prominent, high-traffic site whose normal base of viewers can constitute a flash crowd when directed to a less famous website. Notorious examples include the "Slashdot effect",[9]the "Instalanche" (when a smaller site gets links by the popular blogInstapundit), or a website being "Farked" orDrudged(where the target site is crashed due to the large number of hits in a short time).
https://en.wikipedia.org/wiki/Flash_Crowd
Laurence van Cott Niven(/ˈnɪvən/; born April 30, 1938) is an Americanscience fiction writer.[2]His 1970 novelRingworldwon theHugo,Locus,Ditmar, andNebulaawards. WithJerry Pournellehe wroteThe Mote in God's Eye(1974) andLucifer's Hammer(1977). TheScience Fiction and Fantasy Writers of Americagave him the 2015Damon Knight Memorial Grand Master Award.[3] His work is primarilyhard science fiction, usingbig scienceconcepts and theoretical physics. It also often includes elements ofdetective fictionandadventure stories. Hisfantasyincludes the seriesThe Magic Goes Away, works of rational fantasy dealing with magic as anon-renewable resource. Niven was born in Los Angeles.[2]He is a great-grandson ofEdward L. Doheny, an oil tycoon who drilled the first successful well in theLos Angeles City Oil Fieldin 1892, and also was subsequently implicated in theTeapot Dome scandal.[4] Niven briefly attended theCalifornia Institute of Technology[5]and graduated with aBachelor of Artsinmathematics(with a minor inpsychology) fromWashburn UniversityinTopeka, Kansasin 1962. He also completed a year of graduate work in mathematics at theUniversity of California, Los Angeles. On September 6, 1969, he married Marilyn Wisowaty, a science fiction andRegency literaturefan. Niven is the author of numerous science fiction short stories and novels, beginning with his 1964 story "The Coldest Place". In this story, the coldest place concerned is the dark side ofMercury, which at the time the story was written was thought to betidally lockedwith theSun(it was found to rotate in a 2:3 resonance after Niven received payment for the story, but before it was published).[6] Algis Budryssaid in 1968 that Niven becoming a top writer despite theNew Wavewas evidence that "trends are for second-raters".[7]In addition to the Nebula Award in 1970[8]and the Hugo and Locus awards in 1971[9]forRingworld, Niven won theHugo Award for Best Short Storyfor "Neutron Star" in 1967.[5]He won the same award in 1972, for "Inconstant Moon", and in 1975 for "The Hole Man". In 1976, he won theHugo Award for Best Novelettefor "The Borderland of Sol". Niven frequently collaborated withJerry Pournelle; they wrote nine novels together, includingThe Mote in God's Eye,Lucifer's HammerandFootfall. Niven has written scripts for two science fiction television series: the originalLand of the Lostseries andStar Trek: The Animated Series, for which he adapted his early story "The Soft Weapon." ForThe Outer Limits, his story "Inconstant Moon" was adapted into anepisode of the same namebyBrad Wright. Niven has also written for theDC ComicscharacterGreen Lantern, including in his storieshard science fictionconcepts such as universalentropyand theredshifteffect. Several of his stories predicted the black market in transplant organs ("organlegging"). Many of Niven's stories—sometimes called the Tales of Known Space[10]—take place in hisKnown Spaceuniverse, in which humanity shares the several habitablestar systemsnearest to theSunwith over a dozenalienspecies, including the aggressive felineKzintiand the very intelligent but cowardlyPierson's Puppeteers, which are frequently central characters. TheRingworldseries is part of the Tales of Known Space, and Niven has shared the setting with other writers since a 1988 anthology,The Man-KzinWars(Baen Books, jointly edited withJerry PournelleandDean Ing).[10]There have been several volumes of short stories and novellas. 
Niven has also written a logical fantasy seriesThe Magic Goes Away, which utilizes an exhaustible resource calledmanato power a rule-based "technological" magic.The Draco Tavernseries of short stories take place in a more light-hearted science fiction universe, and are told from the point of view of the proprietor of an omni-species bar. The whimsicalSvetzseries consists of a collection of short stories,The Flight of the Horse, and a novel,Rainbow Mars, which involve a nominal time machine sent back to retrieve long-extinct animals, but which travels, in fact, into alternative realities and brings back mythical creatures such as arocand aunicorn. Much of his writing since the 1970s has been in collaboration, particularly with Jerry Pournelle andSteven Barnes, but alsoBrenda CooperandEdward M. Lerner. One of Niven's best known humorous works is "Man of Steel, Woman of Kleenex", in which he uses real-world physics to underline the difficulties ofSupermanand a human woman (Lois LaneorLana Lang) mating.[11] In theMagic: The Gatheringtrading card game, the card Nevinyrral's Disk uses his name, spelled backwards.[12]This tribute was paid because the game's system where mana from lands is used to power spells was inspired by his bookThe Magic Goes Away. The card Nevinyrral, Urborg Tyrant was added in Commander Legends, adding Niven's namesake character fully to the game.[13] According to authorMichael Moorcock, in 1967, Niven, despite being a staunchconservative, voicedoppositionto theVietnam War.[14]However, in 1968 Niven signed an advertisement inGalaxy Science Fictionin support for continued US involvement in the Vietnam War.[15][16] Niven was an adviser toRonald Reaganon the creation of theStrategic Defense Initiativeantimissile policy, as part of theCitizens' Advisory Council on National Space Policy—as covered in theBBCdocumentaryPandora's BoxbyAdam Curtis.[17] In 2007, Niven, in conjunction with a think tank of science fiction writers known as SIGMA, founded and led byArlan Andrews, began advising the U.S.Department of Homeland Securityas to future trends affecting terror policy and other topics.[18]Among those topics was reducing costs for hospitals to which Niven offered the solution to spread rumors in Latino communities that organs were being harvested illegally in hospitals.[19] Larry Niven is also known inscience fiction fandomfor "Niven's Law": "There is no cause so right that one cannot find a fool following it." Over the course of his career Niven has added to this first law a list ofNiven's Lawswhich he describes as "how the Universe works" as far as he can tell.
https://en.wikipedia.org/wiki/Larry_Niven
TheBartle taxonomy of player typesis a classification ofvideo gameplayers (gamers) based on a 1996 paper byRichard Bartle[1]according to their preferred actions within the game. The classification originally described players ofmultiplayer online games(includingMUDsandMMORPGs), though now it also refers to players ofsingle-player video games.[2] The taxonomy is based on a character theory. This character theory consists of four characters: Achievers, Explorers, Socializers, and Killers (often mapped onto the four suits of the standard playing card deck; Diamonds, Spades, Hearts, and Clubs, in that order). These are imagined according to a quadrant model where the X axis represents preference for interacting with other players vs. exploring the world and the Y axis represents preference for interaction vs. unilateral action.[3] A test known asBartle Test of Gamer Psychologybased on Bartle's taxonomy was created in 1999–2000 by Erwin Andreasen and Brandon Downey, containing a series of questions and an accompanying scoring formula.[4][5][6][7]Although the test has been met with some criticism[8]for thedichotomousnature of its question-asking method, as of October 2011, it had been taken over 800,000 times.[9][10]As of February 2018, the Bartle Test of Gamer Psychology hosted by GamerDNA is no longer available. Alternative online implementations of the test exist, however.[11] The result of the Bartle Test is the "Bartle Quotient", which is calculated based on the answers to a series of 30 random questions in the test, and totals 200% across all categories, with no single category exceeding 100%.[12] Also known as "Diamonds" (♦) , these are players who prefer to gain "points", levels, equipment and other concrete measurements of succeeding in a game.[13]They will go to great lengths to achieve rewards that are merely cosmetic.[14] Every game that can be "beaten" in some way caters to the Achiever play style by giving them something to accomplish. Games that offer a 100% completion rating appeal to Achievers.[15] One of the appeals of online gaming to the Achiever is that they have the opportunity to show off their skill and hold elite status to others.[16]They value (or despise) the competition from other Achievers, and look to the Socializers to give them praise.[17]Microsoft'sXbox Liveutilizes theGamerscoreto reward Achievers, who can get points by completing difficult "Achievements" in the various games they purchase. They can, in turn, compare themselves to other gamers from around the world. Explorers, dubbed "Spades" (♠) for their tendency to dig around, are players who prefer discovering areas, and immerse themselves in the game world. They are often annoyed by time-restricted missions as that does not allow them to traverse at their own pace. They enjoy findingglitchesor a hiddeneaster egg.[18][14] Combat and gaining levels or points is secondary to the Explorer, so they traditionally flock to games such asMyst.[19]In these games, the player finds themselves in a strange place, and the objective is to find their way out by paying close attention to detail and solving puzzles. The Explorer will often enrich themselves in any backstory or lore they can find about the people and places in-game.[18]Whereas an Achiever may quickly forget a gaming adventure; the Explorer will recall fond memories about their experience.[20] However, Explorers will often become bored with any particular MMORPG when they have experienced its content. 
They will tire quicker than other gamer types, and feel the game has become a chore to play.[21] There are a multitude of gamers who choose to play games for the social aspect, rather than the actual game itself. These players are known as Socializers or "Hearts". (♥) They gain the most enjoyment from a game by interacting with other players, and sometimes, computer-controlled characters with personality.[22]The game is merely a tool they use to meet others in-game or outside of it.[23]Some socializers enjoy helping others for the sake of altruism, while explorers help for the sake of discovering previously unattained areas, and achievers or killers want to help for the sake of an extrinsic reward such as points.[citation needed] Since their objective is not so much to win or explore as it is to be social, there are few games that the Socializer enjoy based on their merits. Instead, they play some of the more popular games so that they can use the multi-player features.[24]However, there are some games designed with their play style in mind, which socializers may in particular enjoy. Games of the earliestvideo game generationsseldom have longer dialogue trees, but 2000s games that offer significant player-NPCrelationship interaction and development include the titlesFable,Mass Effect, andKnights of the Old Republic. With the advent of the World Wide Web, gamers' association has partially moved online. Socializers are especially keen at sharing their gaming experiences on forums and social media. For instance, theprocedurally generatedgameDwarf Fortress, has a tight-knit community due to the game's unforgiving nature, unique scenarios and perplexing mechanics.[25]Video game streamerswho interact with their audience are often socializers. One former popular form of gaming video is theLet's Playformat, which has largely been replaced bylive streamingon platforms such asTwitchandYouTube.[26] The online environment is very appealing to the Socializer, as it provides near limitless potential for new relationships.[citation needed]They take full advantage of the ability to joinguildsor kinships in many online games.[27] Killers, well-suited as "Clubs" (♣), are, more than other player types, motivated bypowergamingand eclipsing others.[28]They want to achieve first rank on thehigh scoreboard or beat anotherspeedrunner'stime record.[29][30] Causing mayhem among computer-controlled people and things may be fun to the Killer, but nothing amounts to the joy of pitting one's skills against an actual player-controlled opponent.[31]For most, the joy of being a Killer results from a friendly competitive spirit.[32] For others, it's more about power and the ability to hurt others or the thrill of the hunt. One such example is "ganking" or "owning", a process where the Killer takes their strong character to a place where inexperienced or weaker characters reside, and proceeds to kill them repeatedly.[33] In addition to helping players define their game-playing preferences, the Bartle taxonomy has also been used by game designers to help define the requirements of games that are intended to appeal to a particular audience.[34] In 2006, after running for ten years on a web server maintained by Erwin Andreasen, the database met with intractable scalability problems. 
After several months, the test was rewritten and moved to GamerDNA servers, preserving all the original test data.[35] Richard Bartle also created a version of his player types model that included a third axis of implicit/explicit, leading to eight player types: each of the four original categories (Achievers, Explorers, Socializers and Killers) is split into an implicit and an explicit variant.[7][36] According to Bartle: "The 4-part version is easy to draw because it's 2D, but the 8-part one is 3D; it's therefore much harder to draw in such a way as it doesn't collapse in a mass of lines."[37] (Bartle's personal blog.) There is one known online test based on this model.[38] Bartle's divisions provide a foundation for investigating gamer psychology; however, subsequent studies have noted certain limitations. For example, Nick Yee has argued that a "component" framework provides more explanatory power than a "category" framework.[39] Bartle's motivation factors were analysed for correlation by factorial analysis based on a sample of 7,000 MMO players. One of the results was that Bartle's Explorer type did not appear and, more importantly, its subfactors "exploring the world" and "analysing the game mechanics" did not correlate.[40] Jon Radoff has proposed a new four-quadrant model of player motivations (immersion, cooperation, achievement, and competition) that aims to combine simplicity with the major motivational elements that apply to all games (multiplayer or otherwise).[41][42]
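The scoring described earlier in this article (30 questions, category percentages summing to 200%, no category above 100%) is consistent with, for example, the following sketch. This is not Andreasen and Downey's actual formula; it assumes each question is a forced choice between two of the four types and that the test is balanced so every type appears in exactly 15 questions, which is what makes the totals work out.

```python
# Illustrative scoring sketch for a Bartle-style quotient.  This is NOT the
# actual Andreasen/Downey formula; it only shows one scheme consistent with
# the description above: 30 forced-choice questions, results totalling 200%
# across the four categories, with no category exceeding 100%.
from collections import Counter

TYPES = ("Achiever", "Explorer", "Socializer", "Killer")

# Assumption: each question pits two types against each other, and the test
# is balanced so that every type appears in exactly 15 of the 30 questions.
APPEARANCES_PER_TYPE = 15

def bartle_quotient(answers: list[str]) -> dict[str, float]:
    """Map a list of 30 chosen types to a percentage per category."""
    counts = Counter(answers)
    return {t: 100.0 * counts.get(t, 0) / APPEARANCES_PER_TYPE for t in TYPES}

# Example: a player who mostly picks exploration and socializing.
example_answers = (["Explorer"] * 12 + ["Socializer"] * 9 +
                   ["Achiever"] * 6 + ["Killer"] * 3)
quotient = bartle_quotient(example_answers)
print(quotient)                # e.g. Explorer 80%, Socializer 60%, ...
print(sum(quotient.values()))  # always 200.0 for a full set of 30 answers
```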
https://en.wikipedia.org/wiki/Bartle_taxonomy_of_player_types
Adark pattern(also known as a "deceptivedesign pattern") is auser interfacethat has been carefully crafted to trick users into doing things, such as buying overpriced insurance with their purchase or signing up for recurring bills.[1][2][3]User experiencedesigner Harry Brignull coined theneologismon 28 July 2010 with the registration of darkpatterns.org, a "pattern library with the specific goal of naming and shaming deceptive user interfaces".[4][5][6]In 2023, he released the bookDeceptive Patterns.[7] In 2021, theElectronic Frontier FoundationandConsumer Reportscreated a tip line to collect information about dark patterns from the public.[8] "Privacy Zuckering" – named afterFacebookco-founder andMeta PlatformsCEOMark Zuckerberg– is a practice that tricks users into sharing more information than they intended to.[9][10][citation needed]Users may give up this information unknowingly or through practices that obscure or delay the option to opt out of sharing their private information. California has approved regulations that limit this practice by businesses in theCalifornia Consumer Privacy Act.[11] In mid-2024, Meta Platforms announced plans to utilize user data from Facebook and Instagram to train its AI technologies, including generative AI systems. This initiative included processing data from public and non-public posts, interactions, and even abandoned accounts. Users were given until June 26, 2024, to opt out of the data processing. However, critics noted that the process was fraught with obstacles, including misleading email notifications, redirects to login pages, and hidden opt-out forms that were difficult to locate. Even when users found the forms, they were required to provide a reason for opting out, despite Meta's policy stating that any reason would be accepted, raising questions about the necessity of this extra step.[12][13] The European Center for Digital Rights (Noyb) responded to Meta’s controversial practices by filing complaints in 11 EU countries. Noyb alleged that Meta's use of "dark patterns" undermined user consent, violating the General Data Protection Regulation (GDPR). These complaints emphasized that Meta's obstructive opt-out process included hidden forms, redirect mechanisms, and unnecessary requirements like providing reasons for opting out—tactics exemplifying "dark patterns," deliberately designed to dissuade users from opting out. Additionally, Meta admitted it could not guarantee that opted-out data would be fully excluded from its training datasets, raising further concerns about user privacy and data protection compliance.[14][15] Amid mounting regulatory and public pressure, theIrish Data Protection Commission (DPC)intervened, leading Meta to pause its plans to process EU/EEA user data for AI training. This decision, while significant, did not result in a legally binding amendment to Meta’s privacy policy, leaving questions about its long-term commitment to respecting EU data rights. Outside the EU, however, Meta proceeded with its privacy policy update as scheduled on June 26, 2024, prompting critics to warn about the broader implications of such practices globally.[16][17] The incident underscored the pervasive issue of dark patterns in privacy settings and the challenges of holding large technology companies accountable for their data practices. 
Advocacy groups called for stronger regulatory frameworks to prevent deceptive tactics and ensure that users can exercise meaningful control over their personal information.[18] Bait-and-switchpatternsadvertisea free (or at a greatlyreduced price) product or service that is wholly unavailable or stocked in small quantities. After announcing the product's unavailability, the page presents similar products of higher prices or lesser quality.[19][20] ProPublica has long reported on how Intuit, the maker of TurboTax, and other companies have used the bait and switch pattern to stop Americans from being able to file their taxes for free.[21]On March 29, 2022, theFederal Trade Commissionannounced that they would take legal action against Intuit, the parent company of TurboTax in response to deceptive advertising of its free tax filing products.[22][23]The commission reported that the majority of tax filers cannot use any of TurboTax's free products which were advertised, claiming that it has misled customers to believing that tax filers can use TurboTax to file their taxes. In addition, tax filers who earn farm income or are gig workers cannot be eligible for those products. Intuit announced that they would take counter action, announcing that the FTC's arguments are "not credible" and claimed that their free tax filing service is available to all tax filers.[24] On May 4, 2022, Intuit agreed to pay a $141 million settlement over the misleading advertisements.[25]In May 2023, the company began sending over 4 million customers their settlement checks, which ranged from $30 to $85 USD.[26]In January 2024, the FTC ordered Intuit to fix its misleading ads for "free" tax preparation software - for which most filers wouldn't even qualify.[27] As of March 2024, Intuit has stopped providing its free TurboTax service.[28] Drip pricingis a pattern where a headline price is advertised at the beginning of a purchase process, followed by the incremental disclosure of additional fees, taxes or charges. The objective of drip pricing is to gain a consumer's interest in a misleadingly low headline price without the true final price being disclosed until the consumer has invested time and effort in the purchase process and made a decision to purchase. Confirmshaming uses shame to drive users to act, such as when websites word an option to decline an email newsletter in a way that shames visitors into accepting.[20][29] Common in software installers, misdirection presents the user with a button in the fashion of a typical continuation button. A dark pattern would show a prominent "I accept these terms" button asking the user to accept the terms of a program unrelated to the one they are trying to install.[30]Since the user typically will accept the terms by force of habit, the unrelated program can subsequently be installed. The installer's authors do this because the authors of the unrelated program pay for each installation that they procure. The alternative route in the installer, allowing the user to skip installing the unrelated program, is much less prominently displayed,[31]or seems counter-intuitive (such as declining the terms of service). Some websites that ask for information that is not required also use misdirection. For example, one would fill out a username and password on one page, and after clicking the "next" button, the page asks the user for their email address with another "next" button as the only option.[32]This hides the option to press "next" without entering the information. 
In some cases, the page shows the method to skip the step as a small, greyed-out link instead of a button, so it does not stand out to the user.[33]Other examples include sites offering a way to invite friends by entering their email address, to upload a profile picture, or to identify interests. Confusing wording may be also used to trick users into formally accepting an option which they believe has the opposite meaning. For example a personal data processing consent button using a double-negative such as "don't notsell my personal information".[34] Aroach motelor atrammel netdesign provides an easy or straightforward path to get in but a difficult path to get out.[35]Examples include businesses that require subscribers to print and mail their opt-out or cancellation request.[19][20] For example, during the2020 United States presidential election,Donald Trump'sWinRedcampaign employed a similar dark pattern, pushing users towards committing to a recurring monthly donation.[36] Another common version of this pattern is any service which enables one to sign-up and start the service online, but which requires a phone call (often with long wait times) to terminate the service. Examples include services like cable TV and internet services, and credit monitoring.[citation needed] In 2021, in the United States, theFederal Trade Commission(FTC) has announced they will ramp up enforcement against dark patterns like roach motel that trick consumers into signing up for subscriptions or making it difficult to cancel. The FTC has stated key requirements related to information transparency and clarity, express informed consent, and simple and easy cancellation.[37] In 2016 and 2017, research documented social media anti-privacy practices using dark patterns.[38][39]In 2018, theNorwegian Consumer Council(Forbrukerrådet) published "Deceived by Design," a report on deceptive user interface designs ofFacebook,Google, andMicrosoft.[40]A 2019 study investigated practices on 11,000 shopping web sites. It identified 1,818 dark patterns in total and grouped them into 15 categories.[41] Research from April 2022 found that dark patterns are still commonly used in the marketplace, highlighting a need for further scrutiny of such practices by the public, researchers, and regulators.[42] Under the European UnionGeneral Data Protection Regulation(GDPR), all companies must obtain unambiguous, freely-given consent from customers before they collect and use ("process") their personally identifiable information. A 2020 study found that "big tech" companies often used deceptive user interfaces in order to discourage their users from opting out.[43]In 2022, a report by the European Commission found that "97% of the most popular websites and apps used by EU consumers deployed at least one dark pattern."[44] Research on advertising network documentation shows that information presented to mobile app developers on these platforms is focused on complying with legal regulations, and puts the responsibility for such decisions on the developer. 
Also, sample code and settings often ship with dark-pattern-laced defaults that nudge developers' decisions towards privacy-unfriendly options, such as sharing sensitive data to increase revenue.[45] Bait-and-switch is a form of fraud that violates US law.[46] On 9 April 2019, US senators Deb Fischer and Mark Warner introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act, which would make it illegal for companies with more than 100 million monthly active users to use dark patterns when seeking consent to use their personal information.[47] In March 2021, California adopted amendments to the California Consumer Privacy Act, which prohibits the use of deceptive user interfaces that have "the substantial effect of subverting or impairing a consumer's choice to opt-out."[34] In October 2021, the Federal Trade Commission issued an enforcement policy statement, announcing a crackdown on businesses using dark patterns that "trick or trap consumers into subscription services." As a result of rising numbers of complaints, the agency is responding by enforcing these consumer protection laws.[37] In 2022, New York Attorney General Letitia James fined Fareportal $2.6 million for using deceptive marketing tactics to sell airline tickets and hotel rooms,[48] and the Federal Court of Australia fined Expedia Group's Trivago A$44.7 million for misleading consumers into paying higher prices for hotel room bookings.[49] In March 2023, the United States Federal Trade Commission fined Fortnite developer Epic Games $245 million for use of "dark patterns to trick users into making purchases." The $245 million will be used to refund affected customers and is the largest refund amount ever issued by the FTC in a gaming case.[50] In the European Union, the GDPR requires that a user's informed consent to processing of their personal information be unambiguous, freely given, and specific to each usage of personal information. This is intended to prevent attempts to have users unknowingly accept all data processing by default (which violates the regulation).[51][52][53][54][55] According to the European Data Protection Board, the "principle of fair processing laid down in Article 5 (1) (a) GDPR serves as a starting point to assess whether a design pattern actually constitutes a 'dark pattern'."[56] At the end of 2023 the final version of the Data Act[57] was adopted. It is one of three pieces of EU legislation that deal expressly with dark patterns;[58] another is the Digital Services Act,[59] and the third is the directive on financial services contracts concluded at a distance.[60] Germany's public consumer protection organisation claims that Big Tech uses dark patterns to violate the Digital Services Act.[61] In April 2019, the UK Information Commissioner's Office (ICO) issued a proposed "age-appropriate design code" for the operations of social networking services when used by minors, which prohibits using "nudges" to draw users into options that have low privacy settings. This code would be enforceable under the Data Protection Act 2018.[62] It took effect on 2 September 2020.[63][64]
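As a purely illustrative example of the drip-pricing pattern described earlier in this article, the sketch below (all amounts invented) shows how a low headline price can grow through incrementally disclosed fees, so the figure the consumer committed to at the start bears little resemblance to the final total.

```python
# Illustrative sketch of drip pricing: a headline price is shown first, and
# fees are disclosed one checkout step at a time.  All amounts are invented.

HEADLINE_PRICE = 49.99  # the price shown in the advertisement

# Fees revealed only after the consumer has invested time in the purchase.
DRIP_FEES = [
    ("booking fee", 12.50),
    ("service charge", 8.99),
    ("payment processing", 4.25),
    ("taxes", 9.45),
]

def final_price(headline: float, fees: list[tuple[str, float]]) -> float:
    total = headline
    for name, amount in fees:
        total += amount
        print(f"after adding {name}: {total:.2f}")
    return total

total = final_price(HEADLINE_PRICE, DRIP_FEES)
markup = (total / HEADLINE_PRICE - 1) * 100
print(f"headline {HEADLINE_PRICE:.2f} -> final {total:.2f} (+{markup:.0f}%)")
```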
https://en.wikipedia.org/wiki/Dark_pattern
Egoboo/ˈiːɡoʊbuː/is acolloquial expressionfor thepleasurereceived frompublic recognitionof voluntary work. The term was in use inscience fiction fandomas early as 1947, when it was used (spelled "ego boo") in a letter fromRick Snearypublished in theletter columnofThrilling Wonder Stories.[1]It was originally simply used to describe the "ego boost" someone feels on seeing their name in print. As a reliable way for someone to get their name in print was to do something worth mentioning, it became caught up with the idea of voluntary community participation. As a result of this, in later years, the term grew to mean something akin to an ephemeralcurrency, e.g., "I got a lot of egoboo for editing that newsletter." The term later spread into theopen source programmingmovement, where the concept of non-monetary reward from community response is a keymotivatorfor many of the participants.[2] As a result of its prevalence in this context, it is often attributed toEric S. Raymond. However, it has been in use in science fiction fandom since 1947 or earlier, being referenced in the 1959 collection of fandom-related jargonFancyclopedia II.[3]It did not, however, occur in the 1944 predecessor to that work,Fancyclopedia I,[4]suggesting the term came into common use sometime in the intervening years. The first print citation available electronically is in a 1950 issue ofLee Hoffman'sQuandry, where it is spelled "ego-boo";[5]later usage dropped the hyphen and blended the two words, a common feature of fannish jargon. The earliest online citation recorded is a reference to it being used in 1982, describingInConJunction, ascience fiction conventioninIndiana;[6]the high proportion of science fiction fans onUsenet, and theInternetgenerally, in early years helped spread it into the wider computing community.
https://en.wikipedia.org/wiki/Egoboo
Thegamification of learningis aneducational approachthat seeks to motivate students by usingvideo game designand game elements in learning environments.[1][2]The goal is to maximize enjoyment and engagement by capturing the interest of learners and inspiring them to continue learning.[3]Gamification, broadly defined, is the process of defining the elements which comprise games, make those games fun, and motivate players to continue playing, then using those same elements in a non-game context to influence behavior.[4]In other words, gamification is the introduction of game elements into a traditionally non-game situation. There are two forms of gamification: structural, which means no changes to subject matter, and the altered content method that adds subject matter.[5]Games applied in learning can be consideredserious games, or games where the learning experience is centered around serious stories. A serious story needs to be both "impressive in quality" and "part of a thoughtful process" to achieve learning goals.[6] In educational contexts, examples of desired student behavior as a result of gamification include attending class, focusing on meaningful learning tasks, and taking initiative.[7][8] Gamification of learning does not involve students in designing and creating their own games or in playing commercially producedvideo games, making it distinguishable fromgame-based learning, or using educational games to learn a concept. Within game-based learning initiatives, students might useGamestar MechanicorGameMakerto create their own video game or explore and create 3D worlds inMinecraft. In these examples, the learning agenda is encompassed within the game itself. Some authors contrast gamification of learning with game-based learning. They claim that gamification occurs only when learning happens in a non-game context, such as a school classroom. Under this classification, when a series of game elements is arranged into a "game layer," or a system which operates in coordination with learning in regular classrooms, then gamification of learning occurs.[9]Other examples of gamified content include games that are created to induce learning.[10] Gamification, in addition to employing game elements in non-game contexts, can actively foster critical thinking and student engagement.[11]This approach encourages students to explore their own learning processes through reflection and active participation, enabling them to adapt to new academic contexts more effectively.[11]By framing assignments as challenges or quests, gamified strategies help students develop metacognitive skills that enable them to strategize and take ownership of their learning journey.[11] Some elements of games that may be used to motivate learners and facilitate learning include: A more complete taxonomy of game elements used in educational contexts divide 21 game elements into five dimensions[12][13] When a classroom incorporates the use of some of these elements, that environment can be considered "gamified". 
There is no distinction as to how many elements need to be included to officially constitute gamification, but a guiding principle is that gamification takes into consideration the complex system of reasons a person chooses to act, and not just one single factor.[9]Progress mechanics, which need not make use of advanced technology, are often thought of as constituting a gamified system[1]However, used in isolation, these points and opportunities to earn achievements are not necessarily effective motivators for learning.[1]Engaging video games which can keep players playing for hours on end do not maintain players' interest by simply offering the ability to earn points and beat levels. Rather, the story that carries players along, the chances for players to connect and collaborate with others, the immediate feedback, the increasing challenges, and the powerful choices given to players about how to proceed throughout the game, are immensely significant factors in sustained engagement. Business initiatives designed to use gamification to retain and recruit customers, but do not incorporate a creative and balanced approach to combining game elements, may be destined to fail.[14]Similarly, in learning contexts, the unique needs of each set of learners, along with the specific learning objectives relevant to that context must inform the combination of game elements to shape a compelling gamification system that has the potential to motivate learners.[3] A system of game elements which operates in the classroom is explicit, and consciously experienced by the students in the classroom. There is no hidden agenda by which teachers attempt tocoerceor trick students into doing something. Students still make autonomous choices to participate in learning activities. The progress mechanics used in the gamified system can be thought of as lighting the way for learners as they progress,[15]and the other game mechanics and elements of game design are set up as an immersive system to support and maximize students' learning.[16] Gamification initiatives in learning contexts acknowledge that large numbers of school-aged children play video games, which shapes theiridentityas people and as learners.[17][18][page needed][19][page needed]While the world of gaming used to be skewed heavily toward male players, recent statistics show that slightly more than half of videogame players are male: in the United States, 59% male, 41% female, and 52% male, 48% female in Canada.[20][21]Within games and other digital media, students experience opportunities for autonomy, competence and relatedness,[22]and theseaffordancesare what they have come to expect from such environments. Providing these same opportunities in the classroom environment is a way to acknowledge students' reality, and to acknowledge that this reality affects who they are as learners.[23][page needed][24][25][26]Incorporating elements from games into classroom scenarios is a way to provide students with opportunities to act autonomously, to display competence, and to learn in relationship to others.[22]Game elements are a familiar language that children speak, and an additional channel through which teachers can communicate with their students. 
Game designer Jane McGonigal characterizes video game players as urgent optimists who are part of a social fabric, engaged in blissful productivity, and on the lookout for epic meaning.[27] If teachers can successfully organize their classrooms and curriculum activities to incorporate the elements of games which facilitate such confidence, purpose and integrated sense of mission, students may become so engrossed in learning and collaborating that they do not want to stop. The dynamic combination of intrinsic and extrinsic motivators is a powerful force[22] which, if educational contexts can adapt it from video games, may increase student motivation and student learning. Successful gamification initiatives in the classroom can offer a number of potential benefits. Referring to how video games provide increasingly difficult challenges to players, game designer Amy Jo Kim has suggested that every educational scenario could be set up to operate this way.[15][31] This game mechanic, which involves tracking players' learning in the game and responding by raising the difficulty level of tasks at just the right moment, keeps players from becoming unnecessarily frustrated with tasks that are too difficult, as well as from becoming bored with tasks that are too easy. This pacing fosters continued engagement and interest, which can mean that learners stay focused on educational tasks and may enter a state of flow, becoming deeply absorbed in learning.[32] In gamified e-learning platforms, massive amounts of data are generated as a result of user interaction and action within the system. These actions and interactions can be properly sampled, recorded, and analyzed. Meaningful insights on performance behaviors and learning objectives can be useful to teachers, learners, and application developers in improving learning. These insights can take the form of quick feedback to learners on the learning objectives while the learner still operates within the rules of play. Data generated from games can also be used to uncover patterns and rules to improve the gamified e-learning experience.[33] In a large systematic review of the literature on the application of gamification in higher education, the benefits identified included positive effects on student engagement, attitude, performance, and enjoyment, although these effects are mediated by context and design.[34] Common ways to integrate gamification into education include creating battles, using digital games such as Kahoot or Quizlet, or playing old-school games such as bingo or scavenger hunts.[35] With regard to language, academic requirements may be given game-like names instead of the typical associated terms. For example, making a course presentation might be referred to as "embarking on a quest", writing an exam might be "defeating monsters", and creating a prototype might be classed as "completing a mission". In terms of grading, the grading scheme for a course might be adapted to make use of Experience points (XP) as opposed to letter grades. Each student can begin at level one with zero points; as they progress through the course, completing missions and demonstrating learning, they earn XP. A chart can be developed to illustrate how much XP is required to earn each letter grade. For example, earning 1500 XP might translate to a C, while 2000 would earn a B, and 2500, an A.
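The XP-based grading scheme just described amounts to a simple threshold lookup. The sketch below uses the example thresholds from the text (1500 XP for a C, 2000 for a B, 2500 for an A); the handling of scores below 1500 XP and the student names are assumptions made for illustration.

```python
# Sketch of the XP-to-letter-grade mapping described above.  The A/B/C
# thresholds (2500/2000/1500 XP) come from the text; the treatment of
# scores below 1500 XP is an assumption made for this example.

GRADE_THRESHOLDS = [   # (minimum XP, letter grade), highest first
    (2500, "A"),
    (2000, "B"),
    (1500, "C"),
]

def letter_grade(xp: int) -> str:
    for minimum, grade in GRADE_THRESHOLDS:
        if xp >= minimum:
            return grade
    return "below C"   # assumption: the text does not specify lower grades

# Every student starts at level one with zero XP and accumulates points by
# completing "quests" (assignments) over the course.
students = {"avatar_rogue": 2650, "avatar_mage": 2040, "avatar_bard": 1490}
for name, xp in students.items():
    print(f"{name}: {xp} XP -> {letter_grade(xp)}")
```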
Some teachers use XP, as well as health points (HP) and knowledge points (KP) to motivate students in the classroom, but do not connect these points with the letter grades students get on a report card. Instead these points are connected with earning virtual rewards such as badges or trophies.[citation needed] In First-Year Composition (FYC) courses, gamification has been successfully implemented through tasks like "Quests" and "Random Encounters."[11]Quests are designed as extended assignments that encourage students to engage deeply with specific topics, often involving research, collaborative writing, or creative problem-solving.[11]These tasks enable students to develop essential research and collaborative skills, which are critical for academic success and professional growth.[11]By working on complex, multi-step challenges, students learn to approach problems systematically and think critically about their solutions.[11]Random Encounters are shorter, impromptu tasks that require students to apply critical thinking and adaptability in unpredictable scenarios, such as responding to a challenging writing prompt or analyzing an unfamiliar text.[11]Such activities help students build resilience and navigate uncertain or complex situations, equipping them to handle dynamic challenges in academic, professional, and everyday contexts.[11]Gamified tasks also encourage students to actively engage with course material, fostering a sense of exploration and agency in their learning journey.[11]These examples highlight the varied applications of gamified tasks, which also depend on the roles played by teachers and the structure of the learning environment.[11] The structure of a course or unit may be adapted in various ways to incorporate elements of gamification; these adaptations can affect the role of the student, the role of the teacher, and role of the learning environment. The role of a student in a gamified environment might be to adopt anavatarand a game name with which they navigate through their learning tasks. Students may be organized into teams orguilds, and be invited to embark on learning quests with their fellow guild members. They may be encouraged to help other guild members, as well as those in other guilds, if they have mastered a learning task ahead of others. Students tend to express themselves as one of the following game-player types; player (motivated by extrinsic rewards), socialiser (motivated by relatedness), free spirit (motivated by autonomy), achiever (motivated by mastery) and philanthropist (motivated by purpose).[36]The role of the teacher is to design a gamified application, embedding game dynamics and mechanics that appeal to the target group (i.e. students) and provide the type of rewards that are attractive to the motivation of the majority.[37]Therefore, it is important teachers know their students so they are able to best design a gamified program that not only interests the students but also one in which matches the specific learning goals that hit on elements of knowledge from the curriculum.[1]The teacher also needs to responsibly track student achievements with a web-based platform, such asOpen Badges, the WordPress plug-in GameOn or anonline spreadsheet. The teacher may also publish aleaderboardonline which illustrates the students who have earned the most XP, or reached the highest level of play. 
The teacher may define the parameters of the classroom "game", giving the ultimate learning goal a name, defining the learning tasks which make up the unit or the course, and specifying the rewards for completing those tasks. The other important role of the teacher is to provide encouragement and guidance for students as they navigate the gamified environment. The role of a gamified learning environment may be structured to provide an overarching narrative which functions as a context for all the learning activities. For example, a narrative might involve an impendingzombieattack which can be fended off or a murder mystery which can be solved, ultimately, through the process of learning. Learning is the focus of each gamified system. Sometimes the narrative is related to the content being learned, for example, in the case of a disease outbreak which can be stopped through learning biology. In some cases the narrative is unrelated, as in a case of music students who learn to play pieces as the means to collectively climb up to the top of a mountain, experiencing various challenges and setbacks along the way. Other ways in which gaming elements are part of the role of the learning environment include theme music played at opportune times, a continuous feedback loop which, if not instantaneous, is as quick as possible, a variety of individual and collaborative challenges, and the provision of choice as to which learning activities are undertaken, how they will be undertaken, or in which order they will be undertaken.[original research?] Without adding extra gaming elements to the classroom, schooling already contains some elements which are analogous to games.[30][unreliable source?]Since the 1700s, school has presented opportunities for students to earn marks for handing in assignments and completing exams,[38][page needed][39][40]which are a form of reward points. Since the early 1900s, with the advent ofpsychoanalytic theory, reward management programs were developed and can still be seen in schools. For example, many teachers set up reward programs in their classrooms which allow students to earn free time, school supplies or treats for finishing homework or following classroom rules.[4] Teaching machines with gamification features were developed by cyberneticistGordon Paskfrom 1956 onwards, after he was granted a patent for an "Apparatus for assisting an operator in performing a skill".[41]Based on this patent, Pask and Robin McKinnon-Wood built SAKI – the Self-Adaptive Keyboard Instructor – for teaching students how to use the Hollerith key punch, a data entry device using punched cards. The punched card was common until the 1970s and there was huge demand for skilled operators. SAKI treats the student as a "black box", building a probabilistic model of their performance as it goes.[42]The machine stores the response times for different exercises, repeating exercises for which the operator has the slowest average response time, and increasing the difficulty of exercises where the operator has performed successfully. SAKI could train an expert key-punch operator in four to six weeks, a reduction of between 30 and 50 percent over other methods. 
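The adaptive behaviour attributed to SAKI above can be sketched roughly as follows. This is an illustration of the two rules the text describes (repeat the exercises with the slowest average response times, and raise the difficulty of exercises the operator handles quickly), not a reconstruction of Pask's machine; the timing threshold and the drill names are invented.

```python
# Rough sketch of the adaptive behaviour the text attributes to SAKI:
# track average response time per exercise, prefer the slowest exercises
# for repetition, and raise the difficulty of exercises answered quickly.
from dataclasses import dataclass, field

@dataclass
class Exercise:
    name: str
    difficulty: int = 1
    times: list[float] = field(default_factory=list)  # response times, seconds

    @property
    def average_time(self) -> float:
        return sum(self.times) / len(self.times) if self.times else float("inf")

FAST_ENOUGH = 2.0  # assumed threshold (seconds) for "performed successfully"

def next_exercise(exercises: list[Exercise]) -> Exercise:
    # Repeat whichever exercise currently has the slowest average response.
    return max(exercises, key=lambda e: e.average_time)

def record_result(exercise: Exercise, response_time: float) -> None:
    exercise.times.append(response_time)
    if exercise.average_time < FAST_ENOUGH:
        exercise.difficulty += 1       # operator is quick: make it harder

drills = [Exercise("row 1 keys"), Exercise("row 2 keys"), Exercise("numerals")]
for response in (3.1, 1.4, 2.6, 1.2, 1.1):   # simulated operator responses
    current = next_exercise(drills)
    record_result(current, response)
    print(current.name, f"avg {current.average_time:.1f}s",
          f"difficulty {current.difficulty}")
```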
"Ideally, for an operator to perform a skill efficiently, the data presented to him should always be of sufficient complexity to maintain his interest and maintain a competitive situation, but not so complex as to discourage the operator".[41]SAKI led to the development of teaching software such as the Mavis Beacon typing tutor,[43]fondly remembered by students of touch-typing everywhere. While some have criticized the term "gamification" then, as simply a new name for a practice that has been used in education for many years,[44]gamification does not refer to a one-dimensional system where a reward is offered for performing a certain behaviour. The gamification of learning is an approach which recently has evolved, in coordination with technological developments, to include much larger scales for gameplay, new tools, and new ways to connect people.[45]The term gamification, coined in 2002, is not a one-dimensional reward system. Rather, it takes into consideration the variety of complex factors which make a person decide to do something; it is a multifaceted approach which takes into consideration psychology, design, strategy, and technology.[9]One reason for the popularization of the term "gamification" is that current advancements in technology, in particular, mobile technology have allowed for the explosion of a variety of gamification initiatives in many contexts. Some of these contexts include theStarbucksandShoppers Drug Martloyalty programs, location-based check-in applications such asFoursquare, and mobile and web applications and tools that reward and broadcast healthy eating, drinking, and exercise habits, such asFitocracy,BACtrackandFitbit. These examples involve the use of game elements such as points, badges and leaderboards to motivate behavioural changes and track those changes in online platforms. The gamification of learning is related to these popular initiatives, but specifically focuses on the use of game elements to facilitate student engagement and motivation to learn. It is difficult to pinpoint when gamification, in the strict sense of the term, came to be used in educational contexts, although examples shared online by classroom teachers begin appearing in 2010.[citation needed] The research of Domínguez and colleagues about gamifying learning experiences suggests that common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but their findings also suggest that these students performed poorly on written assignments and participated less on class activities, although their initial motivation was higher. The researchers concluded that gamification in e-learning platforms seems to have the potential to increase student motivation, but that it is not trivial to achieve that effect, as a big effort is required in the design and implementation of the experience for it to be fully motivating for participants. On the one hand, qualitative analysis of the study suggests that gamification can have a great emotional and social impact on students, as reward systems and competitive social mechanisms seem to be motivating for them. But quantitative analysis suggests that the cognitive impact of gamification on students is not very significant. Students who followed traditional exercises performed similarly in overall score than those who followed gamified exercises. 
Disadvantages of gamified learning were reported by 57 students who did not want to participate in the gamified experience. The most frequent reason argued by students was 'time availability'. The second most important reason were technical problems. Other reasons were that there were too many students and that they had to visit so manyweb pagesand applications at the university that they did not want to use a new one.[46] Another field where serious games are used to improve learning is health care. Petit dit Dariel, Raby, Ravaut and Rothan-Tondeur investigated the developing of serious games potential in nursing education. They suggest that few nursing students have long-term exposure to home-care and community situations. New pedagogical tools are needed to adequately and consistently prepare nurses for the skills they will need to care for patients outside acute care settings. Advances in information and communications technologies offer an opportunity to explore innovative pedagogical solutions that could help students develop these skills in a safe environment. Laboratory simulations with high fidelity mannequins, for example, have become an integral element in many health care curricula.[47]A recent systematic review found evidence suggesting that the use of simulation mannequins significantly improved three outcomes integral to clinical reasoning: knowledge acquisition, critical thinking and the ability to identify deteriorating patients.[48] In the study of Mouaheb, Fahli, Moussetad and Eljamali an American version of a serious game was investigated: Virtual University. Results showed that learning using this serious game has educational values that are based on learning concepts advocated by constructivist psycho-cognitive theories. It guarantees intrinsic motivation, generates cognitive conflicts and provides situated learning. The use of Virtual University allowed the researchers to identify the following key points: from its playfulness combined with video game technologies, the tool was able to motivate learners intrinsically; the simulation game also recreates learning situations extremely close to that of reality, especially considering the complexity, dynamism and all of the interrelations and interactions that exist within the university system. This is a major educational advantage by encouraging 1) an intense interaction that generates real cognitive or socio-cognitive conflicts, providing a solid construction of knowledge; 2) an autonomy in the learning process following a strong metacognitive activity; 3) an eventual transfer of acquired skills.[49] In another study involving an American-based school, gamification was integrated into all its subjects. Both students and teachers indicated they derived maximum satisfaction from a gamified form of learning. However, results from standardized tests showed a slightly improved performance, and in some cases, below-average performance in comparison to other schools.[50]Enough evidence-based research needs to be carried out to objectively measure the effectiveness of gamification of learning across varying factors.[51] Multiple legal restrictions may apply to the gamification of learning because of the difference in laws in different countries and states. However, there are common laws prevalent in most jurisdictions. Administrators and instructors must ensure the privacy rights of learners are protected. 
The use of personally identifiable information (PII) of learners and other user-generated data should be clearly stated in a privacy policy made available to all learners. Gamified e-learning systems can make use of existing game elements such as avatars and badges. Educators should be aware of the copyright protections governing the use of such elements and ensure they are not in violation; permission should be obtained from the creators of existing game items under copyright protection. In some cases, educators can create their own game elements for use in such gamified e-learning systems.[52]

LeapFrog, a corporation which manufactures e-learning toys, smart toys and games for children, was the subject of a hacking scandal involving its product LeapPad Ultimate, a rugged gaming and e-learning tablet featuring educational games for young users. The tablet had security flaws that allowed third parties to message users, scrape personal information from users and access the WiFi networks of users, most of whom were minors. This led to concerns about pedophiles using the tablets as a way to groom potential victims.[53][54][55][56]

Gamification of learning has been criticized for its use of extrinsic motivators, which some teachers believe must be avoided because they have the potential to decrease intrinsic motivation for learning (see overjustification). This idea is based on research which first emerged in the early 1970s[57][58] and has more recently been popularized by Daniel Pink.[59] Some teachers may criticize gamification for taking a less than serious approach to education. This may be a result of the historical distinction between work and play, which perpetuates the notion that the classroom cannot be a place for games or for fun.[60][61] In some views, gameplay may be seen as easy, irrelevant to learning, and applicable only to very young children.[62] Teachers who criticize the gamification of learning might feel that it is not worth their time to implement gaming initiatives, either because they themselves are stretched thin by the number of responsibilities they already have,[30] or because they fear that the curriculum might not be covered if time is dedicated to anything other than engagement with that curriculum. Gamification of learning has also been criticized as ineffective for certain learners and in certain situations.[citation needed][1] Video game theorist Ian Bogost has criticized gamification for its tendency to take a simplistic, manipulative approach which does not reflect the real quality of complex, motivational games. Educational scenarios which purport to be gamification but only make use of progress mechanics such as points, badges and leaderboards are particularly susceptible to such criticism.[63]

Gamification in education has also raised concerns over inequity in the classroom. A lack of access to technology, students who do not like gaming, and students in large schools where teachers do not know each student individually may limit any educational benefit from gamification, and gamification may not be appropriate for every school subject. For example, sensitive or controversial subject matter such as racial history or human rights may not be an appropriate space for gamification.[64]

There are also growing concerns about ethical constraints surrounding the implementation of gamification using ICT tools and e-learning systems.
Gaming elements such as points and badges can encourage collaboration and social competition, but they can also encourage aggression among learners. Moreover, the policies guiding the privacy and security of data produced in gamified e-learning systems need to be transparent to all stakeholders, including students and administrators.[65] Teachers and students need to be informed about, and agree to participate in, any gamified form of learning introduced into the curriculum, and any possible risks should be disclosed to all participants before they take part. Educators should also understand the target audience of learners in order to maintain fairness, and they need to ensure that the gaming elements and rules integrated into a gamification design do not impair learners' participation on account of their social, cultural or physical circumstances.[66]
https://en.wikipedia.org/wiki/Gamification_of_learning
GNS theory is an informal field of study developed by Ron Edwards which attempts to create a unified theory of how role-playing games work. Focused on player behavior, GNS theory holds that participants in role-playing games organize their interactions around three categories of engagement: Gamism, Narrativism and Simulation. The theory focuses on player interaction rather than statistics, encompassing game design beyond role-playing games. Analysis centers on how player behavior fits these parameters of engagement and how such preferences shape the content and direction of a game. GNS theory is used by game designers to dissect the elements which attract players to certain types of games.

GNS theory was inspired by the threefold model idea, from discussions on the rec.games.frp.advocacy group on Usenet in summer 1997.[1] The Threefold Model defined drama, simulation and game as three paradigms of role-playing. The name "Threefold Model" was coined in a 1997 post by Mary Kuhner outlining the theory.[2] Kuhner posited the main ideas for the theory on Usenet, and John H. Kim later organized the discussion and helped it grow.[1] In his article "System Does Matter",[3] first posted to the website Gaming Outpost in July 1999,[1] Ron Edwards wrote that all RPG players have one of three mutually exclusive perspectives. According to Edwards, enjoyable RPGs focus on one perspective, and a common error in RPG design is to try to include all three types. His article could be seen as a warning against generic role-playing game systems from large developers.[4] Edwards connected GNS theory to game design, which helped to popularize the theory.[1] On December 2, 2005, Edwards closed the forums about GNS theory on the Forge, saying that they had outlived their usefulness.[5]

A gamist makes decisions to satisfy predefined goals in the face of adversity: to win. Edwards wrote, "I might as well get this over with now: the phrase 'Role-playing games are not about winning' is the most widespread example of synecdoche in the hobby. Potential Gamist responses, and I think appropriately, include: 'Eat me,' (upon winning) 'I win,' and 'C'mon, let's play without these morons.'"[6] These decisions are most common in games pitting characters against successively tougher challenges and opponents, and may not consider why the characters are facing them in the first place. Gamist RPG design emphasizes parity: all player characters should be equally strong and capable of dealing with adversity. Combat and diversified options for short-term problem solving (for example, lists of specific spells or combat techniques) are frequently emphasized. Randomization provides a gamble, allowing players to risk more for higher stakes rather than modelling probability. Examples include Magic: The Gathering, chess and most computer games.

Narrativism relies on outlining (or developing) character motives, placing characters into situations where those motives conflict and making their decisions the driving force. For example, a samurai sworn to honor and obey his lord might be tested when directed to fight his rebellious son; a compassionate doctor might have his charity tested by an enemy soldier under his care; or a student might have to decide whether to help her best friend cheat on an exam. This has two major effects: characters usually change and develop over time, and attempts to impose a fixed storyline are impossible or counterproductive.
Moments of drama (the characters' inner conflict) make player responses difficult to predict, and the consequences of such choices cannot be minimized. Revisiting character motives or underlying emotional themes often leads to escalation: asking variations of the same "question" at higher intensity levels.

Simulationism is a playing style recreating, or inspired by, a genre or source. Its major concerns are internal consistency, analysis of cause and effect, and informed speculation. Characterized by physical interaction and details of setting, simulationism shares with narrativism a concern for character backgrounds, personality traits and motives in order to model cause and effect in the intellectual and physical realms. Simulationist players consider their characters independent entities and behave accordingly; they may be reluctant to have their character act on the basis of out-of-character information. Similar to the distinction between actor and character in a film or play, character generation and the modeling of skill growth and proficiency can be complex and detailed. Many simulationist RPGs encourage illusionism (manipulation of in-game probability and environmental data to point to predefined conclusions) to create a story. Call of Cthulhu recreates the horror and humanity's cosmic insignificance in the Cthulhu Mythos, using illusionism to craft grisly fates for the players' characters and maintain consistency with the source material. Simulationism maintains a self-contained universe operating independently of player will; events unfold according to internal rules. Combat may be broken down into discrete, semi-randomised steps modeling attack skill, weapon weight, defense checks, armor, body parts and damage potential. Some simulationist RPGs explore different aspects of their source material and may have no concern for realism; Toon, for example, emulates cartoon hijinks. Role-playing game systems such as GURPS and Fudge use a somewhat realistic core system which can be modified with sourcebooks or special rules.

GNS theory incorporates Jonathan Tweet's three forms of task resolution, which determine the outcome of an event. According to Edwards, an RPG should use the task-resolution system (or combination of systems) most appropriate for that game's GNS perspective. The task-resolution forms are drama (the participants decide the outcome), fortune (chance, such as a dice roll, decides the outcome) and karma (a fixed value, such as a comparison of statistics, decides the outcome). Edwards has said that he changed the name of the Threefold Model's "drama" type to "narrativism" in GNS theory to avoid confusion with the "drama" task-resolution system.[7]

GNS theory identifies five elements of role-playing (character, system, setting, situation and color) and details four stances the player may take in making decisions for their character (actor, author, director and pawn).

Brian Gleichman, a self-identified Gamist[8] whose works Edwards cited in his examination of Gamism,[6] wrote an extensive critique of the GNS theory and the Big Model.
He states that although any RPG intuitively contains elements of gaming, storytelling, and self-consistent simulated worlds, the GNS theory "mistakes components of an activity for the goals of the activity", emphasizes player typing over other concerns, and assumes "without reason" that there are only three possible goals in all of role-playing.[9] Combined with the principles outlined in "System Does Matter",[3] this produces a new definition of RPG, in which its traditional components (challenge, story, consistency) are mutually exclusive,[10] and any game system that mixes them is labeled "incoherent" and thus inferior to "coherent" ones.[11] To disprove this, Gleichman cites a survey conducted by Wizards of the Coast in 1999,[12] which identified four player types and eight "core values" (instead of the three predicted by the GNS theory) and found that these are neither exclusive nor strongly correlated with particular game systems.[13] Gleichman concludes that the GNS theory is "logically flawed", "fails completely in its effort to define or model RPGs as most people think of them", and "will produce something that is basically another type of game completely".[8]

Gleichman also states that just as the Threefold Model (developed by self-identified Simulationists who "didn't really understand any other style of player besides their own"[14]) "uplifted" Simulation, Edwards' GNS theory "trumpets" its definition of Narrativism. According to him, Edwards' view of Simulationism as "'a form of retreat, denial, and defense against the responsibilities of either Gamism or Narrativism'" and his characterization of Gamism as "being more akin to board games" than to RPGs[11] reveal an elitist attitude surrounding the narrow GNS definition of narrative role-playing, which attributes enjoyment of any incompatible play-style to "'[literal] brain damage'".[15] Lastly, Gleichman states that most games rooted in the GNS theory, e.g. My Life with Master and Dogs in the Vineyard, "actually failed to support Narrativism as a whole, instead focusing on a single Narrativist theme", and have had no commercial success.[16]

Fantasy author and Legend of the Five Rings contributor Marie Brennan reviews the GNS theory in the eponymous chapter of her 2017 non-fiction book Dice Tales. While she finds many of its "elaborations and add-ons that accreted over the years... less than useful", she suggests that the "core concepts of GNS can be helpful in elucidating some aspects of [RPGs], ranging from game design to the disputes that arise between players". A self-identified Narrativist, Brennan finds Edwards' definition of that creative agenda ("exploration of theme") too narrow, adding "character development, suspense, exciting plot twists, and everything else that makes up a good story" to the list of Narrativist priorities. She concludes that rather than being a practical guide, GNS is more useful for explaining the general ideas of role-playing and especially "for understanding how gamers behave".[17]

The role-playing game historian Shannon Appelcline (author of Designers & Dragons) drew parallels between three of his contemporary commercial categories of RPG products and the three basic categories of GNS. He posited that "OSR games are largely gamist and indie games are largely narrativist", while "the mainstream games... tend toward simulationist on average", and cautiously concluded that this "makes you think that Edwards was on to something".[18]

Vincent Baker, a noted participant of the Forge, contributor to GNS theory, and developer of many role-playing games, has said that "the model is obsolete" and has discussed how trying to fit play into the boxes provided by the model may contribute to misunderstanding it.[19]
https://en.wikipedia.org/wiki/GNS_theory