Nubia Technology is a Chinese smartphone manufacturer headquartered in Shenzhen, Guangdong. Originally established as a wholly owned subsidiary of ZTE in 2012, it became an independent company in 2015 and received a significant investment from Suning Holdings Group and Suning Commerce Group in 2016. ZTE reduced its stake in Nubia to 49.9% in 2017, meaning that Nubia was officially no longer a subsidiary of ZTE but rather an associate company. In February 2016, Nubia became a sponsor of Jiangsu Suning F.C. for a reported CN¥150 million. In May 2016, the company hired footballer Cristiano Ronaldo to promote its mobile phones. In 2017, China Daily reported that Nubia would build a factory in Nanchang, Jiangxi Province. In April 2018, Nubia Technology launched a gaming sub-brand named REDMAGIC (红魔). REDMAGIC announced its 5G-compatible device, the REDMAGIC 5G, on March 12, 2020, in Shanghai. REDMAGIC is known for being the first smartphone brand to put cooling fans inside its phones. The company also unveiled a partnership with the Chinese esports team Royal Never Give Up to further expand its brand among esports enthusiasts. On 13 April 2020, the company unveiled a brand new logo as well as its new brand vision. In March 2022, Nubia unveiled the first gaming phone featuring under-display camera technology, the REDMAGIC 7 Pro. In 2023, Nubia released the Red Magic 8S Pro, touted as the most powerful gaming phone to date. In 2024, Nubia released the Red Magic 9S Pro+ with the highest AnTuTu smartphone score to date. In July 2024, REDMAGIC announced its entry into the computer category with its first gaming laptop, the Titan 16 Pro, available in China and global markets, while in September it launched its first gaming tablet, the Nova gaming pad.
== Products ==
=== Smartphone ===
==== RedMagic sub-brand ====
==== Nubia ====
2024 — Nubia Neo 2
2025 — Nubia Z70S (also Pro, Ultra and Ultra Photographer Edition)
== References ==
== External links ==
Official website
https://en.wikipedia.org/wiki/Nubia_Technology
A Bachelor of Technology (B.Tech., B.T., or BTech; Latin Baccalaureus Technologiae) is a bachelor's academic degree awarded for an undergraduate program in engineering. == Australia == In Australia, the Bachelor of Technology (BTech) degree is offered by RMIT University, Edith Cowan University, Curtin University and certain private institutions. == Canada == In Canada, the degree is offered by the British Columbia Institute of Technology, Thompson Rivers University, the Northern Alberta Institute of Technology, McMaster University, Seneca College, Algonquin College, and the Marine Institute of Memorial University of Newfoundland. == India == The Bachelor of Technology (B.Tech.) degree in India is an undergraduate academic degree conferred after the completion of a four-year full-time engineering program at an All India Council for Technical Education (AICTE)-recognised institute. The B.Tech. degree is generally awarded by the Indian Institutes of Technology (IITs), National Institutes of Technology (NITs), Indian Institutes of Information Technology (IIITs), Government Funded Technical Institutes (GFTIs) or other Centrally Funded Technical Institutes (CFTIs), and private deemed universities, in engineering disciplines such as civil engineering, chemical engineering, mechanical engineering, electrical engineering, computer science and engineering, electronics and communication engineering, cyber security and many more. This degree is generally equivalent to the Bachelor of Engineering offered by affiliated engineering colleges of state collegiate universities in India, or to a Bachelor of Science in Engineering or Bachelor of Engineering in the United States and Europe. Eligibility for a B.Tech. program in India typically requires candidates to have completed their higher secondary education (10+2) with mandatory subjects such as mathematics, physics and chemistry, or other technical subjects.
Institutions often set a minimum aggregate percentage requirement, usually 75%. The Joint Entrance Examination (JEE) Main is a prominent entrance exam for B.Tech. admissions, comprising questions in mathematics, physics and chemistry. JEE Advanced is the subsequent exam for those seeking entry to the Indian Institutes of Technology (IITs). Admission to National Institutes of Technology (NITs), Indian Institutes of Information Technology (IIITs), MIT Vishwaprayag University, Solapur, other Government Funded Technical Institutes (GFTIs) and deemed universities such as Amrita Vishwa Vidyapeetham and the International Institute of Information Technology, Hyderabad is based on JEE Main scores. == Pakistan == The National Technology Council (NTC) of Pakistan is responsible for accrediting 4-year technology degree programs in universities. This ensures quality education that meets international standards. NTC also maintains a register of technologists with different categories based on qualifications and experience. Graduates with 4-year degrees can register as "Graduate Engineering Technologists" until December 31, 2022. After that, only graduates from NTC-accredited programs will be eligible. Those with 5 years of experience can register as "Professional Engineering Technologists". == Singapore == In Singapore, the degree is offered by the National University of Singapore under NUS SCALE programmes. == United States == In New York State, the degree is offered by the New York City College of Technology, part of the City University of New York. Multiple academic departments of the college offer courses resulting in the degree upon graduation. == References ==
https://en.wikipedia.org/wiki/Bachelor_of_Technology
Electronics is a scientific and engineering discipline that studies and applies the principles of physics to design, create, and operate devices that manipulate electrons and other electrically charged particles. It is a subfield of physics and electrical engineering which uses active devices such as transistors, diodes, and integrated circuits to control and amplify the flow of electric current and to convert it from one form to another, such as from alternating current (AC) to direct current (DC) or from analog signals to digital signals. Electronic devices have significantly influenced the development of many aspects of modern society, such as telecommunications, entertainment, education, health care, industry, and security. The main driving force behind the advancement of electronics is the semiconductor industry, which continually produces ever-more sophisticated electronic devices and circuits in response to global demand. The semiconductor industry is one of the global economy's largest and most profitable sectors, with annual revenues exceeding $481 billion in 2018. The electronics industry also encompasses other sectors that rely on electronic devices and systems, such as e-commerce, which generated over $29 trillion in online sales in 2017. == History and development == Karl Ferdinand Braun's development of the crystal detector, the first semiconductor device, in 1874 and the identification of the electron in 1897 by Sir Joseph John Thomson, along with the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age. Practical applications started with the invention of the diode by John Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages, such as radio signals from a radio antenna, practicable.
Vacuum tubes (thermionic valves) were the first active electronic components which controlled current flow by influencing the flow of individual electrons, and enabled the construction of equipment that used current amplification and rectification to give us radio, television, radar, long-distance telephony and much more. The early growth of electronics was rapid, and by the 1920s, commercial radio broadcasting and telecommunications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry. The next big technological step took several decades to appear, when the first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. However, vacuum tubes continued to play a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s. Since then, solid-state devices have all but completely taken over. Vacuum tubes are still used in some specialist applications such as high power RF amplifiers, cathode-ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices. In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on transistors were almost exclusively used for computer logic circuits and peripheral devices. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications. The MOSFET was invented at Bell Labs between 1955 and 1960. 
It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Its advantages include high scalability, affordability, low power consumption, and high density. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment. As the complexity of circuits grew, problems arose. One problem was the size of the circuit. A complex circuit like a computer was dependent on speed. If the components were large, the wires interconnecting them had to be long. The electric signals took time to go through the circuit, thus slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block (monolith) of semiconductor material. The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration (SSI) in the early 1960s, then medium-scale integration (MSI) in the late 1960s, followed by very-large-scale integration (VLSI). In 2008, billion-transistor processors became commercially available. == Subfields == == Devices and components == An electronic component is any component in an electronic system, either active or passive. Components are connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function. Components may be packaged singly, or in more complex groups as integrated circuits. Passive electronic components include capacitors, inductors, and resistors, while active components, such as semiconductor devices like transistors and thyristors, control current flow at the electron level. == Types of circuits == Electronic circuit functions can be divided into two function groups: analog and digital.
A particular device may consist of circuitry that has either type or a mix of the two. Analog circuits are becoming less common, as many of their functions are being digitized. === Analog circuits === Analog circuits use a continuous range of voltage or current for signal processing, as opposed to the discrete levels used in digital circuits. In the early years, analog circuits were used throughout electronic devices such as radio receivers and transmitters. Analog electronic computers were valuable for solving problems with continuous variables until digital processing advanced. As semiconductor technology developed, many of the functions of analog circuits were taken over by digital circuits, and modern circuits that are entirely analog are less common; their functions are being replaced by a hybrid approach which, for instance, uses analog circuits at the front end of a device receiving an analog signal and then applies digital processing, using microprocessor techniques, thereafter. Sometimes it may be difficult to classify circuits that have elements of both linear and non-linear operation. An example is the voltage comparator, which receives a continuous range of voltage but outputs only one of two levels, as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch, having essentially two levels of output. Analog circuits are still widely used for signal amplification, such as in the entertainment industry, and for conditioning signals from analog sensors, such as in industrial measurement and control. === Digital circuits === Digital circuits are electric circuits based on discrete voltage levels. Digital circuits use Boolean algebra and are the basis of all digital computers and microprocessor devices. They range from simple logic gates to large integrated circuits employing millions of such gates.
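The voltage comparator's "analog in, two-level out" behaviour described above can be modelled in a few lines of Python. This is an illustrative sketch added here, not part of the source article; the reference and output voltages are arbitrary assumptions.

```python
def comparator(v_in, v_ref=2.5, v_high=5.0, v_low=0.0):
    """Idealized voltage comparator.

    Accepts a continuous input voltage but returns only one of two
    discrete output levels, which is why comparators are hard to
    classify as purely analog or purely digital.
    """
    return v_high if v_in > v_ref else v_low

# A smoothly varying input still produces only two discrete output levels.
inputs = [0.1, 1.8, 2.4, 2.6, 4.9]
outputs = [comparator(v) for v in inputs]
print(outputs)  # only 0.0 or 5.0 ever appears
```

A real comparator would also exhibit hysteresis and finite switching speed; this sketch captures only the ideal thresholding behaviour.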
Digital circuits use a binary system with two voltage levels labelled "0" and "1" to indicate logical status. Often logic "0" will be a lower voltage and referred to as "Low", while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate their design; the definition of the levels as "0" or "1" is arbitrary. Ternary logic (with three states) has been studied, and some prototype computers have been made, but it has not gained any significant practical acceptance. Computers and digital signal processors are universally constructed with digital circuits, using transistors such as MOSFETs in electronic logic gates to generate binary states. Common digital components include logic gates, adders, flip-flops, counters, registers, multiplexers, and Schmitt triggers. Highly integrated devices include memory chips, microprocessors, microcontrollers, application-specific integrated circuits (ASICs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs), field-programmable analog arrays (FPAAs), and systems on chip (SoCs). == Design == Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user. Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors.
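To make the Boolean-algebra basis of digital circuits mentioned earlier concrete, the sketch below (an illustration added here, not from the source article) composes the elementary gates into a one-bit full adder, the building block from which multi-bit adders in processors are constructed.

```python
# Elementary logic gates modelled as Boolean functions (1 = "High", 0 = "Low").
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """One-bit full adder composed purely from gates.

    Returns (sum, carry_out) for the three input bits, mirroring
    the gate-level structure of the hardware circuit.
    """
    s1 = XOR(a, b)
    sum_bit = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return sum_bit, carry_out

# 1 + 1 with carry-in 1 gives sum 1, carry 1 (binary 11 = decimal 3).
print(full_adder(1, 1, 1))  # (1, 1)
```

Chaining n such adders, feeding each carry_out into the next stage's carry_in, yields an n-bit ripple-carry adder.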
Historically, electronics labs have consisted of electronics devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice. === Computer-aided design === Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others. == Negative qualities == === Thermal management === Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy. === Noise === Electronic noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated, which can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise cannot be removed as they are due to limitations in physical properties. == Packaging methods == Many different methods of connecting components have been used over the years. 
For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern-day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) synthetic resin bonded paper (SRBP, also known as Paxoline/Paxolin (trademarks) and FR2), characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined for European markets. Electrical components are generally mounted in the following ways: through-hole (sometimes referred to as 'pin-through-hole'), surface mount, chassis mount, rack mount, and LGA/BGA/PGA socket. == Industry == The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over $481 billion as of 2018. The largest industry sector is e-commerce, which generated over $29 trillion in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018. In the 1960s, U.S. manufacturers were unable to compete with Japanese companies such as Sony and Hitachi, which could produce high-quality goods at lower prices. By the 1980s, however, U.S. manufacturers became the world leaders in semiconductor development and assembly. During the 1990s and subsequently, the industry shifted overwhelmingly to East Asia (a process begun with the initial movement of microchip mass-production there in the 1970s), as plentiful, cheap labor and increasing technological sophistication became widely available there.
Over three decades, the United States' global share of semiconductor manufacturing capacity fell, from 37% in 1990, to 12% in 2022. America's pre-eminent semiconductor manufacturer, Intel Corporation, fell far behind its subcontractor Taiwan Semiconductor Manufacturing Company (TSMC) in manufacturing technology. By that time, Taiwan had become the world's leading source of advanced semiconductors—followed by South Korea, the United States, Japan, Singapore, and China. Important semiconductor industry facilities (which often are subsidiaries of a leading producer based elsewhere) also exist in Europe (notably the Netherlands), Southeast Asia, South America, and Israel. == See also == == References == == Further reading == Horowitz, Paul; Hill, Winfield (1980). The Art of Electronics. Cambridge University Press. ISBN 978-0521370950. Mims, Forrest M. (2003). Getting Started in Electronics. Master Publishing, Incorporated. ISBN 978-0-945053-28-6. == External links == Navy 1998 Navy Electricity and Electronics Training Series (NEETS) Archived 2 November 2004 at the Wayback Machine DOE 1998 Electrical Science, Fundamentals Handbook, 4 vols. Vol. 1, Basic Electrical Theory, Basic DC Theory Vol. 2, DC Circuits, Batteries, Generators, Motors Vol. 3, Basic AC Theory, Basic AC Reactive Components, Basic AC Power, Basic AC Generators Vol. 4, AC Motors, Transformers, Test Instruments & Measuring Devices, Electrical Distribution Systems
https://en.wikipedia.org/wiki/Electronics
Technology is the application of conceptual knowledge to achieve practical goals, especially in a reproducible way. The word technology can also mean the products resulting from such efforts, including both tangible tools such as utensils or machines, and intangible ones such as software. Technology plays a critical role in science, engineering, and everyday life. Technological advancements have led to significant changes in society. The earliest known technology is the stone tool, used during prehistory, followed by the control of fire—which in turn contributed to the growth of the human brain and the development of language during the Ice Age, according to the cooking hypothesis. The invention of the wheel in the Bronze Age allowed greater travel and the creation of more complex machines. More recent technological inventions, including the printing press, telephone, and the Internet, have lowered barriers to communication and ushered in the knowledge economy. While technology contributes to economic development and improves human prosperity, it can also have negative impacts like pollution and resource depletion, and can cause social harms like technological unemployment resulting from automation. As a result, philosophical and political debates about the role and use of technology, the ethics of technology, and ways to mitigate its downsides are ongoing. == Etymology == Technology is a term dating back to the early 17th century that meant 'systematic treatment' (from Greek Τεχνολογία, from the Greek: τέχνη, romanized: tékhnē, lit. 'craft, art' and -λογία (-logíā), 'study, knowledge'). It is predated in use by the Ancient Greek word τέχνη (tékhnē), used to mean 'knowledge of how to make things', which encompassed activities like architecture. 
Starting in the 19th century, continental Europeans started using the terms Technik (German) or technique (French) to refer to a 'way of doing', which included all technical arts, such as dancing, navigation, or printing, whether or not they required tools or instruments. At the time, Technologie (German and French) referred either to the academic discipline studying the "methods of arts and crafts", or to the political discipline "intended to legislate on the functions of the arts and crafts." The distinction between Technik and Technologie is absent in English, and so both were translated as technology. The term was previously uncommon in English and mostly referred to the academic discipline, as in the Massachusetts Institute of Technology. In the 20th century, as a result of scientific progress and the Second Industrial Revolution, technology stopped being considered a distinct academic discipline and took on the meaning: the systemic use of knowledge to practical ends. == History == === Prehistoric === Tools were initially developed by hominids through observation and trial and error. Around 2 Mya (million years ago), they learned to make the first stone tools by hammering flakes off a pebble, forming a sharp hand axe. This practice was refined 75 kya (thousand years ago) into pressure flaking, enabling much finer work. The discovery of fire was described by Charles Darwin as "possibly the greatest ever made by man". Archaeological, dietary, and social evidence point to "continuous [human] fire-use" at least 1.5 Mya. Fire, fueled with wood and charcoal, allowed early humans to cook their food to increase its digestibility, improving its nutrient value and broadening the number of foods that could be eaten. The cooking hypothesis proposes that the ability to cook promoted an increase in hominid brain size, though some researchers find the evidence inconclusive. 
Archaeological evidence of hearths was dated to 790 kya; researchers believe this is likely to have intensified human socialization and may have contributed to the emergence of language. Other technological advances made during the Paleolithic era include clothing and shelter. No consensus exists on the approximate time of adoption of either technology, but archaeologists have found archaeological evidence of clothing 90-120 kya and shelter 450 kya. As the Paleolithic era progressed, dwellings became more sophisticated and more elaborate; as early as 380 kya, humans were constructing temporary wood huts. Clothing, adapted from the fur and hides of hunted animals, helped humanity expand into colder regions; humans began to migrate out of Africa around 200 kya, initially moving to Eurasia. === Neolithic === The Neolithic Revolution (or First Agricultural Revolution) brought about an acceleration of technological innovation, and a consequent increase in social complexity. The invention of the polished stone axe was a major advance that allowed large-scale forest clearance and farming. This use of polished stone axes increased greatly in the Neolithic but was originally used in the preceding Mesolithic in some areas such as Ireland. Agriculture fed larger populations, and the transition to sedentism allowed for the simultaneous raising of more children, as infants no longer needed to be carried around by nomads. Additionally, children could contribute labor to the raising of crops more readily than they could participate in hunter-gatherer activities. With this increase in population and availability of labor came an increase in labor specialization. 
What triggered the progression from early Neolithic villages to the first cities, such as Uruk, and the first civilizations, such as Sumer, is not specifically known; however, the emergence of increasingly hierarchical social structures and specialized labor, of trade and war among adjacent cultures, and the need for collective action to overcome environmental challenges such as irrigation, are all thought to have played a role. The invention of writing led to the spread of cultural knowledge and became the basis for history, libraries, schools, and scientific research. Continuing improvements led to the furnace and bellows and provided, for the first time, the ability to smelt and forge gold, copper, silver, and lead – native metals found in relatively pure form in nature. The advantages of copper tools over stone, bone and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of Neolithic times (about 10 kya). Native copper does not naturally occur in large amounts, but copper ores are quite common and some of them produce metal easily when burned in wood or charcoal fires. Eventually, the working of metals led to the discovery of alloys such as bronze and brass (about 4,000 BCE). The first use of iron alloys such as steel dates to around 1,800 BCE. === Ancient === After harnessing fire, humans discovered other forms of energy. The earliest known use of wind power is the sailing ship; the earliest record of a ship under sail is that of a Nile boat dating to around 7,000 BCE. From prehistoric times, Egyptians likely used the power of the annual flooding of the Nile to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and "catch" basins. The ancient Sumerians in Mesopotamia used a complex system of canals and levees to divert water from the Tigris and Euphrates rivers for irrigation. 
Archaeologists estimate that the wheel was invented independently and concurrently in Mesopotamia (in present-day Iraq), the Northern Caucasus (Maykop culture), and Central Europe. Time estimates range from 5,500 to 3,000 BCE with most experts putting it closer to 4,000 BCE. The oldest artifacts with drawings depicting wheeled carts date from about 3,500 BCE. More recently, the oldest-known wooden wheel in the world as of 2024 was found in the Ljubljana Marsh of Slovenia; Austrian experts have established that the wheel is between 5,100 and 5,350 years old. The invention of the wheel revolutionized trade and war. It did not take long to discover that wheeled wagons could be used to carry heavy loads. The ancient Sumerians used a potter's wheel and may have invented it. A stone pottery wheel found in the city-state of Ur dates to around 3,429 BCE, and even older fragments of wheel-thrown pottery have been found in the same area. Fast (rotary) potters' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy (through water wheels, windmills, and even treadmills) that revolutionized the application of nonhuman power sources. The first two-wheeled carts were derived from travois and were first used in Mesopotamia and Iran in around 3,000 BCE. The oldest known constructed roadways are the stone-paved streets of the city-state of Ur, dating to c. 4,000 BCE, and timber roads leading through the swamps of Glastonbury, England, dating to around the same period. The first long-distance road, which came into use around 3,500 BCE, spanned 2,400 km from the Persian Gulf to the Mediterranean Sea, but was not paved and was only partially maintained. In around 2,000 BCE, the Minoans on the Greek island of Crete built a 50 km road leading from the palace of Gortyn on the south side of the island, through the mountains, to the palace of Knossos on the north side of the island. 
Unlike the earlier road, the Minoan road was completely paved. Ancient Minoan private homes had running water. A bathtub virtually identical to modern ones was unearthed at the Palace of Knossos. Several Minoan private homes also had toilets, which could be flushed by pouring water down the drain. The ancient Romans had many public flush toilets, which emptied into an extensive sewage system. The primary sewer in Rome was the Cloaca Maxima; construction began on it in the sixth century BCE and it is still in use today. The ancient Romans also had a complex system of aqueducts, which were used to transport water across long distances. The first Roman aqueduct was built in 312 BCE. The eleventh and final ancient Roman aqueduct was built in 226 CE. Put together, the Roman aqueducts extended over 450 km, but less than 70 km of this was above ground and supported by arches. === Pre-modern === Innovations continued through the Middle Ages with the introduction of silk production (in Asia and later Europe), the horse collar, and horseshoes. Simple machines (such as the lever, the screw, and the pulley) were combined into more complicated tools, such as the wheelbarrow, windmills, and clocks. A system of universities, including Oxford and Cambridge, developed and spread scientific ideas and practices. The Renaissance era produced many innovations, including the introduction of the movable type printing press to Europe, which facilitated the communication of knowledge. Technology became increasingly influenced by science, beginning a cycle of mutual advancement. === Modern === Starting in the United Kingdom in the 18th century, the discovery of steam power set off the Industrial Revolution, which saw wide-ranging technological discoveries, particularly in the areas of agriculture, manufacturing, mining, metallurgy, and transport, and the widespread application of the factory system.
This was followed a century later by the Second Industrial Revolution which led to rapid scientific discovery, standardization, and mass production. New technologies were developed, including sewage systems, electricity, light bulbs, electric motors, railroads, automobiles, and airplanes. These technological advances led to significant developments in medicine, chemistry, physics, and engineering. They were accompanied by consequential social change, with the introduction of skyscrapers accompanied by rapid urbanization. Communication improved with the invention of the telegraph, the telephone, the radio, and television. The 20th century brought a host of innovations. In physics, the discovery of nuclear fission in the Atomic Age led to both nuclear weapons and nuclear power. Analog computers were invented and asserted dominance in processing complex data. While the invention of vacuum tubes allowed for digital computing with computers like the ENIAC, their sheer size precluded widespread use until innovations in quantum physics allowed for the invention of the transistor in 1947, which significantly compacted computers and led the digital transition. Information technology, particularly optical fiber and optical amplifiers, allowed for simple and fast long-distance communication, which ushered in the Information Age and the birth of the Internet. The Space Age began with the launch of Sputnik 1 in 1957, and later the launch of crewed missions to the moon in the 1960s. Organized efforts to search for extraterrestrial intelligence have used radio telescopes to detect signs of technology use, or technosignatures, given off by alien civilizations. In medicine, new technologies were developed for diagnosis (CT, PET, and MRI scanning), treatment (like the dialysis machine, defibrillator, pacemaker, and a wide array of new pharmaceutical drugs), and research (like interferon cloning and DNA microarrays). 
Complex manufacturing and construction techniques and organizations are needed to make and maintain more modern technologies, and entire industries have arisen to develop succeeding generations of increasingly complex tools. Modern technology increasingly relies on training and education – its designers, builders, maintainers, and users often require sophisticated general and specific training. Moreover, these technologies have become so complex that entire fields have developed to support them, including engineering, medicine, and computer science; and other fields have become more complex, such as construction, transportation, and architecture. == Impact == Technological change is the largest cause of long-term economic growth. Throughout human history, energy production was the main constraint on economic development, and new technologies allowed humans to significantly increase the amount of available energy. First came fire, which made edible a wider variety of foods and made it less physically demanding to digest them. Fire also enabled smelting and the use of tin, copper, and iron tools for hunting and tradesmanship. Then came the agricultural revolution: humans no longer needed to hunt or gather to survive, and began to settle in towns and cities, forming more complex societies with militaries and more organized forms of religion. Technologies have contributed to human welfare through increased prosperity, improved comfort and quality of life, and medical progress, but they can also disrupt existing social hierarchies, cause pollution, and harm individuals or groups. Recent years have brought about a rise in social media's cultural prominence, with potential repercussions on democracy and on economic and social life. Early on, the internet was seen as a "liberation technology" that would democratize knowledge, improve access to education, and promote democracy. 
Modern research has turned to investigate the internet's downsides, including disinformation, polarization, hate speech, and propaganda. Since the 1970s, technology's impact on the environment has been criticized, leading to a surge in investment in solar, wind, and other forms of clean energy. === Social === ==== Jobs ==== Since the invention of the wheel, technologies have helped increase humans' economic output. Past automation has both substituted for and complemented labor; machines replaced humans at some lower-paying jobs (for example in agriculture), but this was compensated by the creation of new, higher-paying jobs. Studies have found that computers did not create significant net technological unemployment. Because artificial intelligence is far more capable than computers and is still in its infancy, it is not known whether it will follow the same trend; the question has been debated at length among economists and policymakers. A 2017 survey found no clear consensus among economists on whether AI would increase long-term unemployment. According to the World Economic Forum's "The Future of Jobs Report 2020", AI is predicted to replace 85 million jobs worldwide and create 97 million new jobs by 2025. A study of the U.S. from 1990 to 2007 by MIT economist Daron Acemoglu showed that the addition of one robot for every 1,000 workers decreased the employment-to-population ratio by 0.2%, or about 3.3 workers, and lowered wages by 0.42%. Concerns about technology replacing human labor, however, are long-standing. Upon signing the National Commission on Technology, Automation, and Economic Progress bill in 1964, US president Lyndon Johnson said, "Technology is creating both new opportunities and new obligations for us, opportunity for greater productivity and progress; obligation to be sure that no workingman, no family must pay an unjust price for progress." 
==== Security ==== With the growing reliance on technology, security and privacy concerns have grown as well. Billions of people use online payment methods such as WeChat Pay, PayPal, Alipay, and many more to transfer money. Although security measures are in place, some criminals are able to bypass them. In March 2022, North Korean hackers stole over $600 million worth of cryptocurrency from the owner of the game Axie Infinity and used Blender.io, a mixer which helped them hide their cryptocurrency exchanges, to launder over $20.5 million of it. Because of this, the U.S. Treasury Department sanctioned Blender.io, marking the first time it had taken action against a mixer, in an effort to crack down on North Korean hackers. The privacy of cryptocurrency has been debated. Although many customers like the privacy of cryptocurrency, many also argue that it needs more transparency and stability. === Environmental === Technology can have both positive and negative effects on the environment. Environmental technology describes an array of technologies that seek to reverse, mitigate, or halt damage to the environment. This can include measures to halt pollution through environmental regulations, capture and storage of pollution, or using pollutant byproducts in other industries. Other examples of environmental technology include efforts to address and reverse deforestation. Emerging technologies in the fields of climate engineering may be able to halt or reverse global warming and its environmental impacts, although this remains highly controversial. As technology has advanced, so too has its negative environmental impact, with increased release of greenhouse gases, including methane, nitrous oxide, and carbon dioxide, into the atmosphere, causing the greenhouse effect. This continues to gradually heat the Earth, causing global warming and climate change. 
Measures of technological innovation correlate with a rise in greenhouse gas emissions. ==== Pollution ==== Pollution, the presence of contaminants in an environment that cause adverse effects, could have been present as early as the Inca Empire. The Inca used a lead sulfide flux in the smelting of ores, along with a wind-drafted clay kiln, which released lead into the atmosphere and the sediment of rivers. == Philosophy == Philosophy of technology is a branch of philosophy that studies the "practice of designing and creating artifacts" and the "nature of the things so created." It emerged as a discipline over the past two centuries and has grown "considerably" since the 1970s. The humanities philosophy of technology is concerned with the "meaning of technology for, and its impact on, society and culture". Initially, technology was seen as an extension of the human organism that replicated or amplified bodily and mental faculties. Marx framed it as a tool used by capitalists to oppress the proletariat, but believed that technology would be a fundamentally liberating force once it was "freed from societal deformations". Second-wave philosophers like Ortega later shifted their focus from economics and politics to "daily life and living in a techno-material culture", arguing that technology could oppress "even the members of the bourgeoisie who were its ostensible masters and possessors." Third-stage philosophers like Don Ihde and Albert Borgmann represent a turn toward de-generalization and empiricism, and considered how humans can learn to live with technology. Early scholarship on technology was split between two arguments: technological determinism and social construction. Technological determinism is the idea that technologies cause unavoidable social changes. It usually encompasses a related argument, technological autonomy, which asserts that technological progress follows a natural progression and cannot be prevented. 
Social constructivists argue that technologies follow no natural progression, and are shaped by cultural values, laws, politics, and economic incentives. Modern scholarship has shifted towards an analysis of sociotechnical systems, "assemblages of things, people, practices, and meanings", looking at the value judgments that shape technology. Cultural critic Neil Postman distinguished tool-using societies from technological societies and from what he called "technopolies", societies that are dominated by an ideology of technological and scientific progress to the detriment of other cultural practices, values, and world views. Herbert Marcuse and John Zerzan suggest that technological society will inevitably deprive us of our freedom and psychological health. == Ethics == The ethics of technology is an interdisciplinary subfield of ethics that analyzes technology's ethical implications and explores ways to mitigate potential negative impacts of new technologies. There is a broad range of ethical issues revolving around technology, from specific areas of focus affecting professionals working with technology to broader social, ethical, and legal issues concerning the role of technology in society and everyday life. Prominent debates have surrounded genetically modified organisms, the use of robotic soldiers, algorithmic bias, and the issue of aligning AI behavior with human values. Technology ethics encompasses several key fields: Bioethics looks at ethical issues surrounding biotechnologies and modern medicine, including cloning, human genetic engineering, and stem cell research. Computer ethics focuses on issues related to computing. Cyberethics explores internet-related issues like intellectual property rights, privacy, and censorship. Nanoethics examines issues surrounding the alteration of matter at the atomic and molecular level in various disciplines including computer science, engineering, and biology. 
Engineering ethics deals with the professional standards of engineers, including software engineers, and their moral responsibilities to the public. A wide branch of technology ethics is concerned with the ethics of artificial intelligence: it includes robot ethics, which deals with ethical issues involved in the design, construction, use, and treatment of robots, as well as machine ethics, which is concerned with ensuring the ethical behavior of artificially intelligent agents. Within the field of AI ethics, significant yet-unsolved research problems include AI alignment (ensuring that AI behaviors are aligned with their creators' intended goals and interests) and the reduction of algorithmic bias. Some researchers have warned against the hypothetical risk of an AI takeover, and have advocated for the use of AI capability control in addition to AI alignment methods. Other fields of ethics have had to contend with technology-related issues, including military ethics, media ethics, and educational ethics. == Futures studies == Futures studies is the study of social and technological progress. It aims to explore the range of plausible futures and incorporate human values in the development of new technologies. More generally, futures researchers are interested in improving "the freedom and welfare of humankind". It relies on a thorough quantitative and qualitative analysis of past and present technological trends, and attempts to rigorously extrapolate them into the future. Science fiction is often used as a source of ideas. Futures research methodologies include survey research, modeling, statistical analysis, and computer simulations. === Existential risk === Existential risk researchers analyze risks that could lead to human extinction or civilizational collapse, and look for ways to build resilience against them. 
Relevant research centers include the Cambridge Center for the Study of Existential Risk, and the Stanford Existential Risk Initiative. Future technologies may contribute to the risks of artificial general intelligence, biological warfare, nuclear warfare, nanotechnology, anthropogenic climate change, global warming, or stable global totalitarianism, though technologies may also help us mitigate asteroid impacts and gamma-ray bursts. In 2019 philosopher Nick Bostrom introduced the notion of a vulnerable world, "one in which there is some level of technological development at which civilization almost certainly gets devastated by default", citing the risks of a pandemic caused by bioterrorists, or an arms race triggered by the development of novel armaments and the loss of mutual assured destruction. He invites policymakers to question the assumptions that technological progress is always beneficial, that scientific openness is always preferable, or that they can afford to wait until a dangerous technology has been invented before they prepare mitigations. == Emerging technologies == Emerging technologies are novel technologies whose development or practical applications are still largely unrealized. They include nanotechnology, biotechnology, robotics, 3D printing, and blockchains. In 2005, futurist Ray Kurzweil claimed the next technological revolution would rest upon advances in genetics, nanotechnology, and robotics, with robotics being the most impactful of the three technologies. Genetic engineering will allow far greater control over human biological nature through a process called directed evolution. Some thinkers believe that this may shatter our sense of self, and have urged for renewed public debate exploring the issue more thoroughly; others fear that directed evolution could lead to eugenics or extreme social inequality. 
Nanotechnology will grant us the ability to manipulate matter "at the molecular and atomic scale", which could allow us to reshape ourselves and our environment in fundamental ways. Nanobots could be used within the human body to destroy cancer cells or form new body parts, blurring the line between biology and technology. Autonomous robots have undergone rapid progress, and are expected to replace humans at many dangerous tasks, including search and rescue, bomb disposal, firefighting, and war. Estimates on the advent of artificial general intelligence vary, but half of machine learning experts surveyed in 2018 believe that AI will "accomplish every task better and more cheaply" than humans by 2063, and automate all human jobs by 2140. This expected technological unemployment has led to calls for increased emphasis on computer science education and debates about universal basic income. Political science experts predict that this could lead to a rise in extremism, while others see it as an opportunity to usher in a post-scarcity economy. == Movements == === Appropriate technology === Some segments of the 1960s hippie counterculture grew to dislike urban living and developed a preference for locally autonomous, sustainable, and decentralized technology, termed appropriate technology. This later influenced hacker culture and technopaganism. === Technological utopianism === Technological utopianism refers to the belief that technological development is a moral good, which can and should bring about a utopia, that is, a society in which laws, governments, and social conditions serve the needs of all its citizens. Examples of techno-utopian goals include post-scarcity economics, life extension, mind uploading, cryonics, and the creation of artificial superintelligence. Major techno-utopian movements include transhumanism and singularitarianism. 
The transhumanism movement is founded upon the "continued evolution of human life beyond its current human form" through science and technology, informed by "life-promoting principles and values." The movement gained wider popularity in the early 21st century. Singularitarians believe that machine superintelligence will "accelerate technological progress" by orders of magnitude and "create even more intelligent entities ever faster", which may lead to a pace of societal and technological change that is "incomprehensible" to us. This event horizon is known as the technological singularity. Major figures of techno-utopianism include Ray Kurzweil and Nick Bostrom. Techno-utopianism has attracted both praise and criticism from progressive, religious, and conservative thinkers. === Anti-technology backlash === Technology's central role in our lives has drawn concerns and backlash. The backlash against technology is not a uniform movement and encompasses many heterogeneous ideologies. The earliest known revolt against technology was Luddism, a pushback against early automation in textile production. Automation had resulted in a need for fewer workers, a process known as technological unemployment. Between the 1970s and 1990s, American terrorist Ted Kaczynski carried out a series of bombings across America and published the Unabomber Manifesto denouncing technology's negative impacts on nature and human freedom. The essay resonated with a large part of the American public. It was partly inspired by Jacques Ellul's The Technological Society. Some subcultures, like the off-the-grid movement, advocate a withdrawal from technology and a return to nature. The ecovillage movement seeks to reestablish harmony between technology and nature. == Relation to science and engineering == Engineering is the process by which technology is developed. It often requires problem-solving under strict constraints. 
Technological development is "action-oriented", while scientific knowledge is fundamentally explanatory. Polish philosopher Henryk Skolimowski framed it like so: "science concerns itself with what is, technology with what is to be." The direction of causality between scientific discovery and technological innovation has been debated by scientists, philosophers, and policymakers. Because innovation is often undertaken at the edge of scientific knowledge, most technologies are not derived from scientific knowledge, but instead from engineering, tinkering, and chance. For example, in the 1940s and 1950s, when knowledge of turbulent combustion or fluid dynamics was still crude, jet engines were invented through "running the device to destruction, analyzing what broke [...] and repeating the process". Scientific explanations often follow technological developments rather than preceding them. Many discoveries also arose from pure chance, like the discovery of penicillin as a result of accidental lab contamination. Since the 1960s, the assumption that government funding of basic research would lead to the discovery of marketable technologies has lost credibility. Probabilist Nassim Taleb argues that national research programs that implement the notions of serendipity and convexity through frequent trial and error are more likely to lead to useful innovations than research that aims to reach specific outcomes. Despite this, modern technology is increasingly reliant on deep, domain-specific scientific knowledge. In 1975, there was an average of one citation of scientific literature in every three patents granted in the U.S.; by 1989, this had increased to an average of one citation per patent. The average was skewed upwards by patents related to the pharmaceutical industry, chemistry, and electronics. A 2021 analysis shows that patents based on scientific discoveries are on average 26% more valuable than equivalent non-science-based patents. 
== Other animal species == The use of basic technology is also a feature of non-human animal species. Tool use was once considered a defining characteristic of the genus Homo. This view was supplanted by the discovery of evidence of tool use among chimpanzees and other primates, dolphins, and crows. For example, researchers have observed wild chimpanzees using basic foraging tools such as pestles and levers, using leaves as sponges, and using tree bark or vines as probes to fish for termites. West African chimpanzees use stone hammers and anvils for cracking nuts, as do capuchin monkeys of Boa Vista, Brazil. Tool use is not the only form of animal technology use; for example, beaver dams, built with wooden sticks or large stones, are a technology with "dramatic" impacts on river habitats and ecosystems. == In popular culture == The relationship of humanity with technology has been explored in science-fiction literature, for example in Brave New World, A Clockwork Orange, Nineteen Eighty-Four, Isaac Asimov's essays, and in movies like Minority Report, Total Recall, Gattaca, and Inception. It has spawned the dystopian and futuristic cyberpunk genre, which juxtaposes futuristic technology with societal collapse, dystopia, or decay. Notable cyberpunk works include William Gibson's novel Neuromancer and the movies Blade Runner and The Matrix. == See also == == References == === Citations === === Sources === == Further reading == Gribbin, John, "Alone in the Milky Way: Why we are probably the only intelligent life in the galaxy", Scientific American, vol. 319, no. 3 (September 2018), pp. 94–99. "Is life likely to exist elsewhere in the [Milky Way] galaxy? Almost certainly yes, given the speed with which it appeared on Earth. Is another technological civilization likely to exist today? Almost certainly no, given the chain of circumstances that led to our existence. These considerations suggest that we are unique not just on our planet but in the whole Milky Way. 
And if our planet is so special, it becomes all the more important to preserve this unique world for ourselves, our descendants and the many creatures that call Earth home." (p. 99.)
https://en.wikipedia.org/wiki/Technology
Appropriate technology is a movement (and its manifestations) encompassing technological choice and application that is small-scale, affordable by its users, labor-intensive, energy-efficient, environmentally sustainable, and locally autonomous. It was originally articulated as intermediate technology by the economist Ernst Friedrich "Fritz" Schumacher in his work Small Is Beautiful. Both Schumacher and many modern-day proponents of appropriate technology also emphasize the technology as people-centered. Appropriate technology has been used to address issues in a wide range of fields. Well-known examples of appropriate technology applications include: bike- and hand-powered water pumps (and other self-powered equipment), the bicycle, the universal nut sheller, self-contained solar lamps and streetlights, and passive solar building designs. Today appropriate technology is often developed using open source principles, which have led to open-source appropriate technology (OSAT); as a result, many plans for such technology can be freely found on the Internet. OSAT has been proposed as a new model of enabling innovation for sustainable development. Appropriate technology is most commonly discussed in its relationship to economic development and as an alternative to the transfer of more capital-intensive technology from industrialized nations to developing countries. However, appropriate technology movements can be found in both developing and developed countries. In developed countries, the appropriate technology movement grew out of the energy crisis of the 1970s and focuses mainly on environmental and sustainability issues. Today the idea is multifaceted; in some contexts, appropriate technology can be described as the simplest level of technology that can achieve the intended purpose, whereas in others, it can refer to engineering that takes adequate consideration of social and environmental ramifications. 
The facets are connected through robustness and sustainable living. == History == === Predecessors === Indian ideological leader Mahatma Gandhi is often cited as the "father" of the appropriate technology movement. Though the concept had not been given a name, Gandhi advocated for small, local and predominantly village-based technology to help India's villages become self-reliant. He disagreed with the idea of technology that benefited a minority of people at the expense of the majority or that put people out of work to increase profit. In 1925 Gandhi founded the All-India Spinners Association and in 1935 he retired from politics to form the All-India Village Industries Association. Both organizations focused on village-based technology similar to the future appropriate technology movement. China also implemented policies similar to appropriate technology during the reign of Mao Zedong and the following Cultural Revolution. During the Cultural Revolution, development policies based on the idea of "walking on two legs" advocated the development of both large-scale factories and small-scale village industries. === E. F. Schumacher === Despite these early examples, Dr. Ernst Friedrich "Fritz" Schumacher is credited as the founder of the appropriate technology movement. A well-known economist, Schumacher worked for the British National Coal Board for more than 20 years, where he blamed the size of the industry's operations for its uncaring response to the harm black-lung disease inflicted on the miners. However it was his work with developing countries, such as India and Burma, which helped Schumacher form the underlying principles of appropriate technology. Schumacher first articulated the idea of "intermediate technology," now known as appropriate technology, in a 1962 report to the Indian Planning Commission in which he described India as long in labor and short in capital, calling for an "intermediate industrial technology" that harnessed India's labor surplus. 
Schumacher had been developing the idea of intermediate technology for several years prior to the Planning Commission report. In 1955, following a stint as an economic advisor to the government of Burma, he published the short paper "Economics in a Buddhist Country," his first known critique of the effects of Western economics on developing countries. In addition to Buddhism, Schumacher also credited his ideas to Gandhi. Initially, Schumacher's ideas were rejected by both the Indian government and leading development economists. Spurred to action over concern that the idea of intermediate technology would languish, Schumacher, George McRobie, Mansur Hoda and Julia Porter brought together a group of approximately 20 people to form the Intermediate Technology Development Group (ITDG) in May 1965. Later that year, a Schumacher article published in The Observer garnered significant attention and support for the group. In 1967, the group published Tools for Progress: A Guide to Small-scale Equipment for Rural Development, which sold 7,000 copies. ITDG also formed panels of experts and practitioners around specific technological needs (such as building construction, energy and water) to develop intermediate technologies to address those needs. At a conference hosted by the ITDG in 1968, the term "intermediate technology" was discarded in favor of the term "appropriate technology" used today. Intermediate technology had been criticized as suggesting the technology was inferior to advanced (or high) technology and as not including the social and political factors included in the concept put forth by its proponents. In 1973, Schumacher described the concept of appropriate technology to a mass audience in his influential work Small Is Beautiful: A Study of Economics As If People Mattered. === Growing trend === Between 1966 and 1975, the number of new appropriate technology organizations founded each year was three times greater than in the previous nine years. 
There was also an increase in organizations focusing on applying appropriate technology to the problems of industrialized nations, particularly issues related to energy and the environment. In 1977, the OECD's Appropriate Technology Directory identified 680 organizations involved in the development and promotion of appropriate technology. By 1980, this number had grown to more than 1,000. International agencies and government departments were also emerging as major innovators in appropriate technology, indicating its progression from a small movement fighting against the established norms to a legitimate technological choice supported by the establishment. For example, the Inter-American Development Bank created a Committee for the Application of Intermediate Technology in 1976, and the World Health Organization established the Appropriate Technology for Health Program in 1977. Appropriate technology was also increasingly applied in developed countries. For example, the energy crisis of the mid-1970s led to the creation of the National Center for Appropriate Technology (NCAT) in 1977, with an initial appropriation of $3 million from the U.S. Congress. The Center sponsored appropriate technology demonstrations to "help low-income communities find better ways to do things that will improve the quality of life, and that will be doable with the skills and resources at hand." However, by 1981 the NCAT's funding agency, the Community Services Administration, had been abolished. For several decades NCAT worked with the US departments of Energy and Agriculture on contract to develop appropriate technology programs. Since 2005, NCAT's informational web site has no longer been funded by the US government. === Decline === In more recent years, the appropriate technology movement has continued to decline in prominence. The German Appropriate Technology Exchange (GATE) and Holland's Technology Transfer for Development (TOOL) are examples of organizations no longer in operation. 
Recently, a study looked at the continued barriers to appropriate technology (AT) deployment despite the relatively low cost of transferring information in the internet age. The barriers identified were: AT seen as inferior or "poor person's" technology, the technical transferability and robustness of AT, insufficient funding, weak institutional support, and the challenges of distance and time in tackling rural poverty. A more free-market-centric view has also begun to dominate the field. For example, Paul Polak, founder of International Development Enterprises (an organization that designs and manufactures products that follow the ideals of appropriate technology), declared appropriate technology dead in a 2010 blog post. Polak argues that the "design for the other 90 percent" movement has replaced appropriate technology. Growing out of the appropriate technology movement, designing for the other 90 percent advocates the creation of low-cost solutions for the 5.8 billion of the world's 6.8 billion people "who have little or no access to most of the products and services many of us take for granted." Many of the ideas integral to appropriate technology can now be found in the increasingly popular "sustainable development" movement, which, among many tenets, advocates technological choice that meets human needs while preserving the environment for future generations. In 1983, the OECD published the results of an extensive survey of appropriate technology organizations titled The World of Appropriate Technology, in which it defined appropriate technology as characterized by "low investment cost per work-place, low capital investment per unit of output, organizational simplicity, high adaptability to a particular social or cultural environment, sparing use of natural resources, low cost of final product or high potential for employment." Today, the OECD web site redirects from the "Glossary of Statistical Terms" entry on "appropriate technology" to "environmentally sound technologies." 
The United Nations' "Index to Economic and Social Development" also redirects from the "appropriate technology" entry to "sustainable development." === Potential resurgence === Despite the decline, several appropriate technology organizations are still in existence, including the ITDG, which was renamed Practical Action in 2005. Skat (Schweizerische Kontaktstelle für Angepasste Technologie) adapted by becoming a private consultancy in 1998, though some Intermediate Technology activities are continued by the Skat Foundation through the Rural Water Supply Network (RWSN). Another actor still very active is the charity CEAS (Centre Ecologique Albert Schweitzer). A pioneer in food transformation and solar heaters, it offers vocational training in West Africa and Madagascar. There is also currently a notable resurgence, as evidenced by the number of groups adopting open source appropriate technology (OSAT) because of the enabling technology of the Internet. These OSAT groups include: Akvo Foundation, Appropedia, The Appropriate Technology Collaborative, Catalytic Communities, Centre for Alternative Technology, Center For Development Alternatives, Engineers Without Borders, Open Source Ecology, Practical Action, and Village Earth. Most recently, ASME, Engineers Without Borders (USA), and the IEEE have joined together to produce Engineering for Change, which facilitates the development of affordable, locally appropriate, and sustainable solutions to the most pressing humanitarian challenges. == Terminology == Appropriate technology frequently serves as an umbrella term for a variety of names for this type of technology. These terms are frequently used interchangeably; however, the use of one term over another can indicate the specific focus, bias or agenda of the technological choice in question. 
Though the original name for the concept now known as appropriate technology, "intermediate technology" is now often considered a subset of appropriate technology that focuses on technology that is more productive than "inefficient" traditional technologies, but less costly than the technology of industrialized societies. Several other types of technology also fall under the appropriate technology umbrella. A variety of competing definitions exist in academic literature and in organization and government policy papers for each of these terms. However, the general consensus is that appropriate technology encompasses the ideas represented by these terms. Furthermore, the use of one term over another in referring to an appropriate technology can indicate ideological bias or emphasis on particular economic or social variables. Some terms inherently emphasize the importance of increased employment and labor utilization (such as labor-intensive or capital-saving technology), while others may emphasize the importance of human development (such as self-help and people's technology). It is also possible to distinguish between hard and soft technologies. According to Dr. Maurice Albertson and Audrey Faulkner, appropriate hard technology is "engineering techniques, physical structures, and machinery that meet a need defined by a community, and utilize the material at hand or readily available. It can be built, operated and maintained by the local people with very limited outside assistance (e.g., technical, material, or financial). It is usually related to an economic goal." Albertson and Faulkner consider appropriate soft technology as technology that deals with "the social structures, human interactive processes, and motivation techniques. It is the structure and process for social participation and action by individuals and groups in analyzing situations, making choices and engaging in choice-implementing behaviors that bring about change." 
A closely related concept is social technology, defined as "products, techniques and/or re-applicable methodologies developed in the interaction with the community and that must represent effective solution in terms of social transformation". Further, Kostakis et al. propose a mid-tech approach to distinguish between low-tech and hi-tech polarities. Inspired by E.F. Schumacher, they argue that mid-tech could be understood as an inclusive middle that may go beyond the two polarities, combining the efficiency and versatility of digital/automated technology with low-tech's potential for autonomy and resilience. == Practitioners == Some of the well-known practitioners of the appropriate technology sector include: B.V. Doshi, Buckminster Fuller, William Moyer (1933–2002), Amory Lovins, Sanoussi Diakité, Albert Bates, Victor Papanek, Giorgio Ceragioli (1930–2008), Frithjof Bergmann, Arne Næss (1912–2009), Mansur Hoda, and Laurie Baker. == Development == Schumacher's initial concept of intermediate technology was created as a critique of the then-prevailing development strategies, which focused on maximizing aggregate economic growth through increases to overall measurements of a country's economy, such as gross domestic product (GDP). Developed countries became aware of the situation of developing countries during and in the years following World War II. Based on the continuing rise in income levels in Western countries since the Industrial Revolution, developed countries embarked on a campaign of massive transfers of capital and technology to developing countries in order to force a rapid industrialization intended to result in an economic "take-off" in the developing countries. However, by the late 1960s it was becoming clear this development method had not worked as expected, and a growing number of development experts and national policy makers were recognizing it as a potential cause of increasing poverty and income inequality in developing countries. 
In many countries, this influx of technology had increased the overall economic capacity of the country. However, it had created a dual or two-tiered economy with a pronounced division between the classes. The foreign technology imports were only benefiting a small minority of urban elites. This was also increasing urbanization, with the rural poor moving to cities in the hope of more financial opportunities. The increased strain on urban infrastructures and public services led to "increasing squalor, severe impacts on public health and distortions in the social structure." Appropriate technology was meant to address four problems: extreme poverty, starvation, unemployment and urban migration. Schumacher saw the main purpose of economic development programs as the eradication of extreme poverty, and he saw a clear connection between mass unemployment and extreme poverty. Schumacher sought to shift development efforts from a bias towards urban areas and increasing the output per laborer to focusing on rural areas (where a majority of the population still lived) and increasing employment. == In developed countries == The term appropriate technology is also used in developed nations to describe the use of technology and engineering that results in less negative impact on the environment and society, i.e., technology should be both environmentally sustainable and socially appropriate. E. F. Schumacher asserts that such technology, described in the book Small Is Beautiful, tends to promote values such as health, beauty and permanence, in that order. Often the type of appropriate technology used in developed countries is "appropriate and sustainable technology" (AST): appropriate technology that, besides being functional and relatively cheap (though often more expensive than true AT), is durable and employs renewable resources. AT does not include this (see Sustainable design). 
== Applications == == Determining a sustainable approach == Features such as low cost, low usage of fossil fuels and use of locally available resources can give some advantages in terms of sustainability. For that reason, these technologies are sometimes used and promoted by advocates of sustainability and alternative technology. Besides using natural, locally available resources (e.g., wood or adobe), waste materials imported from cities using conventional (and inefficient) waste management may be gathered and re-used to build a sustainable living environment. Use of these cities' waste material allows the gathering of a huge amount of building material at a low cost. Once obtained, the materials may be recycled over and over within the city or community itself, using the cradle-to-cradle design method. Locations where waste can be found include landfills, junkyards, water surfaces and anywhere around towns or near highways. Organic waste that can be reused to fertilise plants can be found in sewage. Also, town districts and other places (e.g., cemeteries) that are undergoing renovation or removal can be used for gathering materials such as stone, concrete, or potassium. == Related social movements == == See also == == References == == Further reading == Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 13, "The Design of Environmentally Sustainable and Appropriate Technologies", New Society Publishers, Gabriola Island, British Columbia, Canada, ISBN 0-86571-704-4, 464 pp. Basic Needs Approach, Appropriate Technology, and Institutionalism by Dr. Mohammad Omar Farooq. Unintended Consequences of Green Technologies. Edward Tenner, Why Things Bite Back, Vantage Books, 1997. Zehner, Ozzie. Green Illusions, University of Nebraska Press, 2012. 
== External links == Appropedia – The Sustainability Wiki – World Wide Wiki of Sustainable Technology (Appropriate technology portal) Akvopedia — the open water and sanitation knowledge resource Aprovecho – An environmental education center with a focus on living with appropriate technologies. The Appropriate Technology Collaborative – An appropriate technology design and dissemination nonprofit. The Whole Earth Catalog: Access to Tools and Ideas Archived 2009-01-06 at the Wayback Machine Guide des innovations pour lutter contre la pauvreté (innovation guide to tackle poverty) – available in French, German and Portuguese, this guide features 100 innovations designed to improve the living conditions of the poor.
https://en.wikipedia.org/wiki/Appropriate_technology
ON Technology Corporation was a software company in the United States. Formed in 1987 by Mitch Kapor after his departure from Lotus Software, the initial business plan of the company was to build an object-oriented PC desktop environment providing a variety of applications. In (roughly) the early 1990s, the company was acquired by Notework Corporation, a vendor of LAN email systems. Although the merged company was now managed by Notework Corporation, it retained the ON Technology name (which new management perceived as having more cachet and brand value). Following its acquisition by Notework Corporation, ON Technology proceeded to expand its product line through a series of small product/company acquisitions, including email software (DaVinci, a message handling system-based email product), antivirus technology, corporate Internet usage monitoring, IP firewall, and desktop systems management. In 1995, the company went public, which provided additional means to carry out most of the product acquisitions listed above. The company distinguished itself through its go-to-market model, incorporating a 30-day free trial alongside significant marketing and telesales to reach a large number of small to medium-sized customers. In 1998, the company restructured its operations and sold off its "free-trial" small/medium business products to Elron Software. It retained "enterprise-sized" products (the MeetingMaker calendaring product and the ON Command CCM desktop systems management product), which it sold using a more traditional enterprise software business model. ON divested its Meeting Maker product in a private transaction to a private investor, who later sold the technology to PeopleCube. The only remaining technology at this point was the systems management software, then branded as "CCM." ON was acquired by Symantec on October 27, 2003, to assist Symantec's move into the desktop systems management business. == References ==
https://en.wikipedia.org/wiki/ON_Technology
Technology strategy (information technology strategy or IT strategy) is the overall plan which consists of objectives, principles and tactics relating to use of technologies within a particular organization. Such strategies primarily focus on the technologies themselves and in some cases the people who directly manage those technologies. The strategy can be implied from the organization's behaviors towards technology decisions, and may be written down in a document. The strategy includes the formal vision that guides the acquisition, allocation, and management of IT resources so that it can help fulfill the organizational objectives. Other generations of technology-related strategies primarily focus on: the efficiency of the company's spending on technology; how people, for example the organization's customers and employees, exploit technologies in ways that create value for the organization; and the full integration of technology-related decisions with the company's strategies and operating plans, such that no separate technology strategy exists other than the de facto strategic principle that the organization does not need or have a discrete "technology strategy". A technology strategy has traditionally been expressed in a document that explains how technology should be utilized as part of an organization's overall corporate strategy and each business strategy. In the case of IT, the strategy is usually formulated by a group of representatives from both the business and from IT. Often the information technology strategy is led by an organization's Chief Technology Officer (CTO) or equivalent. Accountability varies for an organization's strategies for other classes of technology. Although many companies write an overall business plan each year, a technology strategy may cover developments somewhere between three and five years into the future. The United States identified the need to implement a technology strategy in order to restore the country's competitive edge. 
In 1983, Project Socrates, a US Defense Intelligence Agency program, was established to develop a national technology strategy policy. == Effective strategy == A successful technology strategy involves the documentation of planning assumptions and the development of success metrics. These establish a mission-driven strategy, which ensures that initiatives are aligned with the organization's goals and objectives. This aspect underscores that the primary objective of designing a technology strategy is to make sure that the business strategy can be realized through technology and that technology investments are aligned with the business. Some experts underscore that a successful technology strategy is one that is integrated within the organization's overall business strategy, not just to contribute to the company's mission and vision but also to draw support from it. There are frameworks (e.g., ASSIMPLER) available that provide insights into the current and future business strategy, assess business-IT alignment on various parameters, identify gaps, and define technology roadmaps and budgets. The important components of an information technology strategy are information technology and strategic planning working together. IT strategy alignment is the capability of IT functionality both to shape and to support business strategy: the degree to which the IT mission, objectives, and plans support and are supported by the business mission, objectives, and plans. For a strategy to be effective, it should also answer questions of how to create value, deliver value, and capture value. In order to create value, one needs to trace the technology's trajectory and forecast how the technology evolves, how market penetration changes, and how to organize effectively. Capturing value requires knowledge of how to gain a competitive advantage and sustain it, and how to compete when technology standards are important. 
The final step is delivering the value, where firms define how to execute the strategy, make strategic decisions and take decisive actions. The Strategic Alignment Process is a step-by-step process that helps managers stay focused on specific tasks in order to execute them and deliver value. == Meta-model of (IT) technology strategy == Aligned with the Statement of Applicability (SOA) approach, IT strategy is composed of an IT Capability Model (ITCM) and an IT Operating Model (IT-OM), as proposed by the Haloedscape IT Strategy Model. == Framework of (IT) technology strategy == The process of IT strategy is simplified with a framework constituted of IT Service Management (ITSM), Enterprise Architecture Development (TOGAF) and Governance (COBIT). IT strategy is modeled as a vertical IT service applied to, and supported by, each horizontal layer of the SOA architecture. For details, refer to the Haloedscape IT Strategy Framework. == Typical structure of a (IT) technology strategy == The following are typically sections of a technology strategy:
* Executive Summary – a summary of the IT strategy
* High-level organizational benefits
* Project objective and scope
* Approach and methodology of the engagement
* Relationship to overall business strategy
* Resource summary: staffing and budgets
* Summary of key projects
* Internal capabilities
* IT project portfolio management – an inventory of current projects being managed by the information technology department and their status. (Note: it is not common to report current project status inside a future-looking strategy document.) Show return on investment (ROI) and timeline for implementing each application.
* A catalog of existing applications supported, and the level of resources required to support them
* Architectural directions and methods for implementation of IT solutions
* Current IT department – includes a SWOT analysis:
** Strengths – current IT department strengths
** Weaknesses – current IT department weaknesses
* External forces – summary of changes driven from outside the organization:
** Rising expectations of users (for example, the growth of high-quality web user interfaces driven by Ajax technology, or the availability of open source learning management systems)
** List of new IT projects requested by the organization
* Opportunities – description of new cost-reduction or efficiency-increase opportunities (for example, a list of available professional service contractors for short-term projects), and of how Moore's law (faster processors, networks or storage at lower costs) will impact the organization's ROI for technology
* Threats – description of disruptive forces that could cause the organization to become less profitable or competitive; analysis of IT usage by the competition
* IT organization structure and governance – IT organization roles and responsibilities, IT role descriptions, and IT governance
* Milestones – list of monthly, quarterly or mid-year milestones and review dates to indicate whether the strategy is on track; list milestone names, deliverables and metrics
== Audience == A technology strategy document is usually designed to be read by non-technical stakeholders involved in business planning within an organization. It should be free of technical jargon and information technology acronyms. The IT strategy should also be presented to, and read by, internal IT staff members. Many organizations will circulate prior-year versions to the internal IT department for feedback. The feedback is used to create new annual IT strategy plans. One critical integration point is the interface with an organization's marketing plan. 
The marketing plan frequently requires the support of a web site to create an appropriate on-line presence. Large organizations frequently have complex web site requirements such as web content management. == Implementation == The implementation of technology strategy will likely follow the conventional procedure taken when implementing a business strategy or an organization's planned changes within the so-called change management framework. Fundamentally, it is directed by a manager who oversees the process, which could include gaining targeted organizational support. For instance, in the area of systematic exploration of emerging technologies, this approach helps determine the relevance of, and opportunities offered by, new technologies to the business through well-defined assessment mechanisms that can effectively justify adoption. == Relationship between strategy and enterprise technology architecture == A technology strategy document typically refers to but does not duplicate an overall enterprise architecture. The technology strategy may refer to: a high-level view of the logical architecture of information technology systems; a high-level view of the physical architecture of information technology systems; and a technology rationalization plan. == See also == Business strategy Enterprise planning systems Project portfolio management Second half of the chessboard Strategy == Notes == == References == Floyd, S.W. & Wolf, C. (2010) 'Technology Strategy' In: Narayanan, V.K. & O'Connor, G.C. (eds.) Encyclopedia of technology and innovation management. West Sussex: Wiley, pp. 125–128. ISBN 1-4051-6049-7. Lawson, J. (2006) "Delivering on Strategy: Those That Can...Do!! Those Who Simply Talk... Make Another Fine Mess", Spectra – Journal of the MCA, June 2006. Strassmann, Paul A. (1990), The Business Value of Computers: An Executive's Guide, The Information Economic Press. ISBN 0-9620413-2-7. The Human Capital Impact on e-Business: The Case of Encyclopædia Britannica. 
This case study is a widely quoted example of how technology can have a large impact on an organization's overall business strategy. Henderson, J. C.; Venkatraman, N. "Strategic alignment: Leveraging information technology for transforming organizations". IBM research. Retrieved 7 November 2013.
https://en.wikipedia.org/wiki/Technology_strategy
Software consists of computer programs that instruct the execution of a computer. Software also includes design documents and specifications. The history of software is closely tied to the development of digital computers in the mid-20th century. Early programs were written in the machine language specific to the hardware. The introduction of high-level programming languages in 1958 allowed for more human-readable instructions, making software development easier and more portable across different computer architectures. Software in a programming language is run through a compiler or interpreter to execute on the architecture's hardware. Over time, software has become complex, owing to developments in networking, operating systems, and databases. Software can generally be categorized into two main types: operating systems, which manage hardware resources and provide services for applications, and application software, which performs specific tasks for users. The rise of cloud computing has introduced a new software delivery model, software as a service (SaaS), in which applications are hosted by a provider and accessed over the Internet. The process of developing software involves several stages, including software design, programming, testing, release, and maintenance. Software quality assurance and security are critical aspects of software development, as bugs and security vulnerabilities can lead to system failures and security breaches. Additionally, legal issues such as software licenses and intellectual property rights play a significant role in the distribution of software products. == History == The first use of the word software to describe computer programs is credited to mathematician John Wilder Tukey in 1958. The first programmable computers, which appeared at the end of the 1940s, were programmed in machine language. Machine language is difficult to debug and not portable across different computers. 
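The readability gap between machine-level instructions and high-level code can be made concrete with a short sketch. Python's bytecode stands in here for machine code, and the `average` function is purely illustrative; the point is only how far the low-level instruction stream sits from what a human would write and debug.

```python
import dis

# A single readable high-level statement...
def average(xs):
    return sum(xs) / len(xs)

# ...is translated into a sequence of lower-level instructions.
# dis.dis prints the bytecode, which is far harder to read and
# debug than the source line above.
dis.dis(average)

print(average([2, 4, 6]))  # -> 4.0
```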
Initially, hardware resources were more expensive than human resources. As programs became more complex, programmer productivity became the bottleneck. The introduction of high-level programming languages in 1958 hid the details of the hardware and let programmers express the underlying algorithms in the code. Early languages include Fortran, Lisp, and COBOL. == Types == There are two main types of software: Operating systems are "the layer of software that manages a computer's resources for its users and their applications". There are three main purposes that an operating system fulfills:
* allocating resources between different applications, deciding when they will receive central processing unit (CPU) time or space in memory;
* providing an interface that abstracts the details of accessing hardware (such as physical memory) to make things easier for programmers;
* offering common services, such as an interface for accessing network and disk devices, which enables an application to be run on different hardware without needing to be rewritten.
Application software runs on top of the operating system and uses the computer's resources to perform a task. There are many different types of application software because the range of tasks that can be performed with modern computers is so large. Applications account for most software and require the environment provided by an operating system, and often other applications, in order to function. Software can also be categorized by how it is deployed. Traditional applications are purchased with a perpetual license for a specific version of the software, downloaded, and run on hardware belonging to the purchaser. The rise of the Internet and cloud computing enabled a new model, software as a service (SaaS), in which the provider hosts the software (usually built on top of rented infrastructure or platforms) and provides the use of the software to customers, often in exchange for a subscription fee. 
By 2023, SaaS products—which are usually delivered via a web application—had become the primary method by which companies deliver applications. == Software development and maintenance == Software companies aim to deliver a high-quality product on time and under budget. A challenge is that software development effort estimation is often inaccurate. Software development begins by conceiving the project, evaluating its feasibility, analyzing the business requirements, and making a software design. Most software projects speed up their development by reusing or incorporating existing software, either in the form of commercial off-the-shelf (COTS) or open-source software. Software quality assurance is typically a combination of manual code review by other engineers and automated software testing. Due to time constraints, testing cannot cover all aspects of the software's intended functionality, so developers often focus on the most critical functionality. Formal methods are used in some safety-critical systems to prove the correctness of code, while user acceptance testing helps to ensure that the product meets customer expectations. There are a variety of software development methodologies, which range from completing all steps in order to concurrent and iterative models. Software development is driven by requirements taken from prospective users, as opposed to maintenance, which is driven by events such as a change request. Frequently, software is released in an incomplete state when the development team runs out of time or funding. Despite testing and quality assurance, virtually all software contains bugs where the system does not work as intended. Post-release software maintenance is necessary to remediate these bugs when they are found and keep the software working as the environment changes over time. New features are often added after the release. 
Over time, the level of maintenance becomes increasingly restricted before being cut off entirely when the product is withdrawn from the market. As software ages, it becomes known as legacy software and can remain in use for decades, even if there is no one left who knows how to fix it. Over the lifetime of the product, software maintenance is estimated to comprise 75 percent or more of the total development cost. Completing a software project involves various forms of expertise, not just software programming but also testing, documentation writing, project management, graphic design, user experience, user support, marketing, and fundraising. == Quality and security == Software quality is defined as meeting the stated requirements as well as customer expectations. Quality is an overarching term that can refer to a code's correct and efficient behavior, its reusability and portability, or the ease of modification. It is usually more cost-effective to build quality into the product from the beginning rather than try to add it later in the development process. Higher quality code will reduce lifetime cost to both suppliers and customers as it is more reliable and easier to maintain. Software failures in safety-critical systems can be very serious, including loss of life. By some estimates, the cost of poor quality software can be as high as 20 to 40 percent of sales. Despite developers' goal of delivering a product that works entirely as intended, virtually all software contains bugs. The rise of the Internet also greatly increased the need for computer security as it enabled malicious actors to conduct cyberattacks remotely. If a bug creates a security risk, it is called a vulnerability. Software patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched remain open to exploitation. 
Vulnerabilities vary in their ability to be exploited by malicious actors, and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system. Although some vulnerabilities can only be used for denial of service attacks that compromise a system's availability, others allow the attacker to inject and run their own code (called malware), without the user being aware of it. To thwart cyberattacks, all software in the system must be designed to withstand and recover from external attack. Despite efforts to ensure security, a significant fraction of computers are infected with malware. == Encoding and execution == === Programming languages === Programming languages are the format in which software is written. Since the 1950s, thousands of different programming languages have been invented; some have been in use for decades, while others have fallen into disuse. Some definitions classify machine code—the exact instructions directly implemented by the hardware—and assembly language—a more human-readable alternative to machine code whose statements can be translated one-to-one into machine code—as programming languages. Programs written in the high-level programming languages used to create software share a few main characteristics: knowledge of machine code is not necessary to write them, they can be ported to other computer systems, and they are more concise and human-readable than machine code. They must be both human-readable and capable of being translated into unambiguous instructions for computer hardware. === Compilation, interpretation, and execution === The invention of high-level programming languages was simultaneous with the compilers needed to translate them automatically into machine code. Most programs do not contain all the resources needed to run them and rely on external libraries. Part of the compiler's function is to link these files in such a way that the program can be executed by the hardware. 
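The translate-then-execute pipeline described above can be sketched in Python, whose built-in `compile()` and `exec()` make the two stages visible (CPython itself compiles source text to bytecode and then interprets it). This is a minimal illustration of the general idea, not of any particular compiler.

```python
# Translation stage: the source text is parsed, checked, and
# translated into a code object (bytecode, in CPython's case).
source = "result = sum(n * n for n in range(10))"
code_obj = compile(source, "<example>", "exec")

# Execution stage: the translated program runs against a namespace,
# analogous to a loader handing a compiled program to the hardware.
namespace = {}
exec(code_obj, namespace)
print(namespace["result"])  # -> 285
```

The same code object can be executed repeatedly without re-translating the source, which is exactly the cost a pure interpreter pays on every run.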
Once compiled, the program can be saved as an object file and the loader (part of the operating system) can take this saved file and execute it as a process on the computer hardware. Some programming languages use an interpreter instead of a compiler. An interpreter converts the program into machine code at run time, which typically makes interpreted programs 10 to 100 times slower than compiled ones. == Legal issues == === Liability === Software is often released with the knowledge that it is incomplete or contains bugs. Purchasers knowingly buy it in this state, which has led to a legal regime where liability for software products is significantly curtailed compared to other products. === Licenses === Since the mid-1970s, software and its source code have been protected by copyright law that vests the owner with the exclusive right to copy the code. The underlying ideas or algorithms are not protected by copyright law, but are sometimes treated as a trade secret and concealed by such methods as non-disclosure agreements. A software copyright is often owned by the person or company that financed or made the software (depending on their contracts with employees or contractors who helped to write it). Some software is in the public domain and has no restrictions on who can use it, copy or share it, or modify it; a notable example is software written by the United States Government. Free and open-source software also allows free use, sharing, and modification, perhaps with a few specified conditions. The use of some software is governed by an agreement (software license) written by the copyright holder and imposed on the user. Proprietary software is usually sold under a restrictive license that limits its use and sharing. Some free software licenses require that modified versions must be released under the same license, which prevents the software from being sold or distributed under proprietary restrictions. 
=== Patents === Patents give an inventor an exclusive, time-limited license for a novel product or process. Ideas about what software could accomplish are not protected by law and concrete implementations are instead covered by copyright law. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid. Software patents have been historically controversial. Before the 1998 case State Street Bank & Trust Co. v. Signature Financial Group, Inc., software patents were generally not recognized in the United States. In that case, the Court of Appeals for the Federal Circuit decided that business processes could be patented. Patent applications are complex and costly, and lawsuits involving patents can drive up the cost of products. Unlike copyrights, patents generally only apply in the jurisdiction where they were issued. == Impact == Engineer Capers Jones writes that "computers and software are making profound changes to every aspect of human life: education, work, warfare, entertainment, medicine, law, and everything else". It has become ubiquitous in everyday life in developed countries. In many cases, software augments the functionality of existing technologies such as household appliances and elevators. Software also spawned entirely new technologies such as the Internet, video games, mobile phones, and GPS. New methods of communication, including email, forums, blogs, microblogging, wikis, and social media, were enabled by the Internet. Massive amounts of knowledge exceeding any paper-based library are now available with a quick web search. Most creative professionals have switched to software-based tools such as computer-aided design, 3D modeling, digital image editing, and computer animation. Almost every complex device is controlled by software. == References == === Sources ===
https://en.wikipedia.org/wiki/Software
Technology readiness levels (TRLs) are a method for estimating the maturity of technologies during the acquisition phase of a program. TRLs enable consistent and uniform discussions of technical maturity across different types of technology. TRL is determined during a technology readiness assessment (TRA) that examines program concepts, technology requirements, and demonstrated technology capabilities. TRLs are based on a scale from 1 to 9, with 9 being the most mature technology. TRL was developed at NASA during the 1970s. The US Department of Defense has used the scale for procurement since the early 2000s. By 2008 the scale was also in use at the European Space Agency (ESA). The European Commission advised EU-funded research and innovation projects to adopt the scale in 2010, and TRLs were consequently used in 2014 in the EU Horizon 2020 program. In 2013, the TRL scale was further canonized by the International Organization for Standardization (ISO) with the publication of the ISO 16290:2013 standard. A comprehensive approach and discussion of TRLs has been published by the European Association of Research and Technology Organisations (EARTO). Extensive criticism of the adoption of the TRL scale by the European Union was published in The Innovation Journal, stating that the "concreteness and sophistication of the TRL scale gradually diminished as its usage spread outside its original context (space programs)". == Definitions == == Assessment tools == A Technology Readiness Level Calculator was developed by the United States Air Force. This tool is a standard set of questions implemented in Microsoft Excel that produces a graphical display of the TRLs achieved. This tool is intended to provide a snapshot of technology maturity at a given point in time. The Defense Acquisition University (DAU) Decision Point (DP) Tool, originally named the Technology Program Management Model, was developed by the United States Army and later adopted by the DAU.
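The Air Force calculator described above is a proprietary Excel questionnaire, but its core idea, determining the highest TRL for which every assessment question is answered "yes", can be sketched as follows (the class, method, and data layout here are hypothetical illustrations, not the actual tool):

```java
import java.util.List;

// Hypothetical sketch of a TRL questionnaire: the achieved TRL is the
// highest consecutive level for which every question was answered "yes".
public class TrlCalculator {

    // answers.get(i) holds the yes/no answers for TRL level i + 1
    public static int achievedTrl(List<boolean[]> answers) {
        int trl = 0;
        for (boolean[] levelAnswers : answers) {
            for (boolean yes : levelAnswers) {
                if (!yes) {
                    return trl; // the first unmet level stops the ladder
                }
            }
            trl++;
        }
        return trl;
    }

    public static void main(String[] args) {
        List<boolean[]> answers = List.of(
            new boolean[]{true, true},   // TRL 1: all criteria met
            new boolean[]{true, true},   // TRL 2: all criteria met
            new boolean[]{true, false}   // TRL 3: one criterion unmet
        );
        System.out.println("Achieved TRL: " + achievedTrl(answers)); // prints 2
    }
}
```

The real calculator adds weighting, partial credit, and a graphical summary, but the "lowest unmet level caps the rating" logic above is the essence of a TRL-gated assessment.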
The DP/TPMM is a TRL-gated high-fidelity activity model that provides a flexible management tool to assist Technology Managers in planning, managing, and assessing their technologies for successful technology transition. The model provides a core set of activities, including systems engineering and program management tasks, that are tailored to the technology development and management goals. This approach is comprehensive, yet it consolidates the complex activities that are relevant to the development and transition of a specific technology program into one integrated model. == Uses == The primary purpose of using technology readiness levels is to help management in making decisions concerning the development and transitioning of technology. It is one of several tools that are needed to manage the progress of research and development activity within an organization. Among the advantages of TRLs:
- Provides a common understanding of technology status
- Risk management
- Used to make decisions concerning technology funding
- Used to make decisions concerning transition of technology
Some of the characteristics of TRLs that limit their utility:
- Readiness does not necessarily fit with appropriateness or technology maturity
- A mature product may possess a greater or lesser degree of readiness for use in a particular system context than one of lower maturity
- Numerous factors must be considered, including the relevance of the products' operational environment to the system at hand, as well as the product-system architectural mismatch
TRL models tend to disregard negative and obsolescence factors. There have been suggestions made for incorporating such factors into assessments. For complex technologies that incorporate various development stages, a more detailed scheme called the Technology Readiness Pathway Matrix has been developed, going from basic units to applications in society.
This tool aims to show that the readiness level of a technology is based not on a linear process but on a more complex pathway through its application in society. == History == Technology readiness levels were conceived at NASA in 1974 and formally defined in 1989. The original definition included seven levels, but in the 1990s NASA adopted the nine-level scale that subsequently gained widespread acceptance. Original NASA TRL Definitions (1989):
- Level 1 – Basic Principles Observed and Reported
- Level 2 – Potential Application Validated
- Level 3 – Proof-of-Concept Demonstrated, Analytically and/or Experimentally
- Level 4 – Component and/or Breadboard Laboratory Validated
- Level 5 – Component and/or Breadboard Validated in Simulated or Realspace Environment
- Level 6 – System Adequacy Validated in Simulated Environment
- Level 7 – System Adequacy Validated in Space
The TRL methodology was originated by Stan Sadin at NASA Headquarters in 1974. Ray Chase was then the JPL Propulsion Division representative on the Jupiter Orbiter design team. At the suggestion of Stan Sadin, Chase used this methodology to assess the technology readiness of the proposed JPL Jupiter Orbiter spacecraft design. Later Chase spent a year at NASA Headquarters helping Sadin institutionalize the TRL methodology. Chase joined ANSER in 1978, where he used the TRL methodology to evaluate the technology readiness of proposed Air Force development programs. He published several articles during the 1980s and 90s on reusable launch vehicles utilizing the TRL methodology. These documented an expanded version of the methodology that included design tools, test facilities, and manufacturing readiness on the Air Force Have Not program. The Have Not program manager, Greg Jenkins, and Ray Chase published the expanded version of the TRL methodology, which included design and manufacturing.
Leon McKinney and Chase used the expanded version to assess the technology readiness of the ANSER team's Highly Reusable Space Transportation (HRST) concept. ANSER also created an adapted version of the TRL methodology for proposed Homeland Security Agency programs. The United States Air Force adopted the use of technology readiness levels in the 1990s. In 1995, John C. Mankins, NASA, wrote a paper that discussed NASA's use of TRL, extended the scale, and proposed expanded descriptions for each TRL. In 1999, the United States General Accounting Office produced an influential report that examined the differences in technology transition between the DOD and private industry. It concluded that the DOD takes greater risks and attempts to transition emerging technologies at lesser degrees of maturity than does private industry. The GAO concluded that use of immature technology increased overall program risk. The GAO recommended that the DOD make wider use of technology readiness levels as a means of assessing technology maturity prior to transition. In 2001, the Deputy Under Secretary of Defense for Science and Technology issued a memorandum that endorsed use of TRLs in new major programs. Guidance for assessing technology maturity was incorporated into the Defense Acquisition Guidebook. Subsequently, the DOD developed detailed guidance for using TRLs in the 2003 DOD Technology Readiness Assessment Deskbook. Because of their relevance to Habitation, 'Habitation Readiness Levels (HRL)' were formed by a group of NASA engineers (Jan Connolly, Kathy Daues, Robert Howard, and Larry Toups). They have been created to address habitability requirements and design aspects in correlation with already established and widely used standards by different agencies, including NASA TRLs. More recently, Dr. Ali Abbas, Professor of chemical engineering and Associate Dean of Research at the University of Sydney and Dr. 
Mobin Nomvar, a chemical engineer and commercialisation specialist, have developed the Commercial Readiness Level (CRL), a nine-point scale synchronised with TRL as part of a critical innovation path to rapidly assess and refine innovation projects, with the aim of ensuring market adoption and avoiding failure. === In the European Union === The European Space Agency adopted the TRL scale in the mid-2000s. Its handbook closely follows the NASA definition of TRLs. In 2022, the ESA TRL Calculator was released to the public. The universal usage of TRL in EU policy was proposed in the final report of the first High Level Expert Group on Key Enabling Technologies, and it was implemented in the subsequent EU framework program, called H2020, running from 2013 to 2020. This means the scale applies not only to space and weapons programs, but to everything from nanotechnology to informatics and communication technology. == See also ==
- Capability Maturity Model Integration – Process level improvement training and appraisal program
- List of emerging technologies – New technologies actively in development
- Manufacturing readiness level – Method for estimating the maturity of manufacturing
- Open innovation – Term for external cooperation in innovation
- Technology assessment – Research area dealing with trends in science and technology and related social developments
- Technology life cycle – Development, ascent, maturity, and decline of new technologies
- Technology transfer – Process of disseminating technology
== References == === Online ===
- "Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes". U.S. Government Accountability Office. July 1999. NSIAD-99-162.
- "Joint Strike Fighter Acquisition: Mature Critical Technologies Needed to Reduce Risks". U.S. Government Accountability Office. October 2001. GAO-02-39.
== External links ==
- Technology Readiness Levels (TRL) NASA
- Technology Readiness Levels Introduction NASA archive via Wayback Machine
- DNV Recommended Practices (Look for DNV-RP-A203)
- UK MoD Acquisition Operating Framework guide to TRL (requires registration)
https://en.wikipedia.org/wiki/Technology_readiness_level
Shanghai Moonton Technology Co. Ltd. (Chinese: 上海沐瞳科技有限公司; pinyin: Shànghǎi mù tóng kējì yǒuxiàn gōngsī), commonly known as Moonton, is a Chinese multinational video game developer and publisher owned by the Nuverse subsidiary of ByteDance and based in Shanghai, China. It is best known for the mobile multiplayer online battle arena (MOBA) game Mobile Legends: Bang Bang, released in July 2016. == History == Moonton was established in April 2014 within the Minhang District of Shanghai, China. One of its co-founders was Justin Yuan, who became chief executive officer (CEO) of the company in late 2018, succeeding Xu Zhenhua. Moonton's first video game, the tower defense (TD) game Magic Rush: Heroes, was released on 6 April 2015. Following the completion of Magic Rush: Heroes, Moonton began development of a multiplayer online battle arena (MOBA) game. Mobile Legends was released as Mobile Legends: 5v5 MOBA in 2016 and became popular in Southeast Asia, notably in Indonesia, the Philippines and Malaysia, where it was the most-downloaded free mobile game app among iPhone users in 2017. The game is distributed by Elex Tech in the United States. The company claims to employ over 1,600 people throughout its global offices. == Merchandise == === Video games === === Television series === === Notes === Riot Games suspected that Mobile Legends: 5v5 MOBA infringed on the intellectual property of League of Legends, and demanded that Google remove the game from Google Play and the App Store. Moonton removed the game before Google could act and eventually relaunched it as Mobile Legends: Bang Bang on 9 November 2016. In July 2017, Riot Games filed a lawsuit against Moonton over copyright infringement, citing similarities between Magic Rush and Mobile Legends against League of Legends. The case was dismissed by the Central District Court of California in the United States on grounds of forum non conveniens.
Tencent, the parent of Riot Games, followed with a separate lawsuit in the Shanghai No.1 Intermediate People's Court against Xu Zhenhua, previously a senior Tencent employee, for violating non-competition agreements. Tencent won the lawsuit in July 2018 and was awarded a settlement of $2.9 million (CN¥19.4 million). On 22 March 2021, ByteDance, the developer of TikTok, BABE, Resso and Lark, acquired Moonton for US$4 billion through its video game subsidiary Nuverse, reportedly beating a rival bid from Tencent. Mobile Legends: Bang Bang is a minor revision of Mobile Legends: 5v5 MOBA but was considered a separate product in the forty-four-page lawsuit filed by Riot Games against Moonton. == References ==
https://en.wikipedia.org/wiki/Moonton
Java is a high-level, general-purpose, memory-safe, object-oriented programming language. It is intended to let programmers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need to recompile. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to C and C++, but has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities (such as reflection and runtime code modification) that are typically not available in traditional compiled languages. Java gained popularity shortly after its release, and has been a popular programming language since then. Java was the third most popular programming language in 2022 according to GitHub. Although still widely popular, there has been a gradual decline in use of Java in recent years with other languages using JVM gaining popularity. Java was designed by James Gosling at Sun Microsystems. It was released in May 1995 as a core component of Sun's Java platform. The original and reference implementation Java compilers, virtual machines, and class libraries were released by Sun under proprietary licenses. As of May 2007, in compliance with the specifications of the Java Community Process, Sun had relicensed most of its Java technologies under the GPL-2.0-only license. Oracle, which bought Sun in 2010, offers its own HotSpot Java Virtual Machine. However, the official reference implementation is the OpenJDK JVM, which is open-source software used by most developers and is the default JVM for almost all Linux distributions. Java 24 is the version current as of March 2025. Java 8, 11, 17, and 21 are long-term support versions still under maintenance. == History == James Gosling, Mike Sheridan, and Patrick Naughton initiated the Java language project in June 1991. 
Java was originally designed for interactive television, but it was too advanced for the digital cable television industry at the time. The language was initially called Oak after an oak tree that stood outside Gosling's office. Later the project went by the name Green and was finally renamed Java, from Java coffee, a type of coffee from Indonesia. Gosling designed Java with a C/C++-style syntax that system and application programmers would find familiar. Sun Microsystems released the first public implementation as Java 1.0 in 1996. It promised write once, run anywhere (WORA) functionality, providing no-cost run-times on popular platforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions. Major web browsers soon incorporated the ability to run Java applets within web pages, and Java quickly became popular. The Java 1.0 compiler was re-written in Java by Arthur van Hoff to comply strictly with the Java 1.0 language specification. With the advent of Java 2 (released initially as J2SE 1.2 in December 1998 – 1999), new versions had multiple configurations built for different types of platforms. J2EE included technologies and APIs for enterprise applications typically run in server environments, while J2ME featured APIs optimized for mobile applications. The desktop version was renamed J2SE. In 2006, for marketing purposes, Sun renamed new J2 versions as Java EE, Java ME, and Java SE, respectively. In 1997, Sun Microsystems approached the ISO/IEC JTC 1 standards body and later the Ecma International to formalize Java, but it soon withdrew from the process. Java remains a de facto standard, controlled through the Java Community Process. At one time, Sun made most of its Java implementations available without charge, despite their proprietary software status. Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System. 
On November 13, 2006, Sun released much of its Java virtual machine (JVM) as free and open-source software (FOSS), under the terms of the GPL-2.0-only license. On May 8, 2007, Sun finished the process, making all of its JVM's core code available under free software/open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright. Sun's vice-president Rich Green said that Sun's ideal role with regard to Java was as an evangelist. Following Oracle Corporation's acquisition of Sun Microsystems in 2009–10, Oracle has described itself as the steward of Java technology with a relentless commitment to fostering a community of participation and transparency. This did not prevent Oracle from filing a lawsuit against Google shortly after that for using Java inside the Android SDK (see the Android section). On April 2, 2010, James Gosling resigned from Oracle. In January 2016, Oracle announced that Java run-time environments based on JDK 9 will discontinue the browser plugin. Java software runs on most devices from laptops to data centers, game consoles to scientific supercomputers. Oracle (and others) highly recommend uninstalling outdated and unsupported versions of Java, due to unresolved security issues in older versions. === Principles === There were five primary goals in creating the Java language: It must be simple, object-oriented, and familiar. It must be robust and secure. It must be architecture-neutral and portable. It must execute with high performance. It must be interpreted, threaded, and dynamic. === Versions === As of November 2024, Java 8, 11, 17, and 21 are supported as long-term support (LTS) versions, with Java 25, releasing in September 2025, as the next scheduled LTS version. Oracle released the last zero-cost public update for the legacy version Java 8 LTS in January 2019 for commercial use, although it will otherwise still support Java 8 with public updates for personal use indefinitely. 
Other vendors such as Adoptium continue to offer free builds of OpenJDK's long-term support (LTS) versions. These builds may include additional security patches and bug fixes. Major release versions of Java, along with their release dates: == Editions == Sun has defined and supports four editions of Java targeting different application environments and segmented many of its APIs so that they belong to one of the platforms. The platforms are: Java Card for smart-cards. Java Platform, Micro Edition (Java ME) – targeting environments with limited resources. Java Platform, Standard Edition (Java SE) – targeting workstation environments. Java Platform, Enterprise Edition (Java EE) – targeting large distributed enterprise or Internet environments. The classes in the Java APIs are organized into separate groups called packages. Each package contains a set of related interfaces, classes, subpackages and exceptions. Sun also provided an edition called Personal Java that has been superseded by later, standards-based Java ME configuration-profile pairings. == Execution system == === Java JVM and bytecode === One design goal of Java is portability, which means that programs written for the Java platform must run similarly on any combination of hardware and operating system with adequate run time support. This is achieved by compiling the Java language code to an intermediate representation called Java bytecode, instead of directly to architecture-specific machine code. Java bytecode instructions are analogous to machine code, but they are intended to be executed by a virtual machine (VM) written specifically for the host hardware. End-users commonly use a Java Runtime Environment (JRE) installed on their device for standalone Java applications or a web browser for Java applets. Standard libraries provide a generic way to access host-specific features such as graphics, threading, and networking. The use of universal bytecode makes porting simple. 
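One concrete illustration of the architecture-neutral bytecode format described above: every compiled .class file begins with the same four-byte magic number, 0xCAFEBABE, no matter which platform produced it. A minimal sketch that reads its own class file from the classpath (the class name MagicNumber is arbitrary):

```java
import java.io.DataInputStream;
import java.io.InputStream;

public class MagicNumber {
    public static void main(String[] args) throws Exception {
        // Every compiled .class file begins with the magic number
        // 0xCAFEBABE, regardless of the platform that produced it.
        try (InputStream raw = MagicNumber.class
                 .getResourceAsStream("MagicNumber.class");
             DataInputStream in = new DataInputStream(raw)) {
            System.out.printf("0x%08X%n", in.readInt()); // prints 0xCAFEBABE
        }
    }
}
```

Because the class-file layout is fixed by the JVM specification rather than by any CPU's instruction set, the same file runs unmodified on any conforming virtual machine.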
However, the overhead of interpreting bytecode into machine instructions made interpreted programs almost always run more slowly than native executables. Just-in-time (JIT) compilers that compile bytecode to machine code during runtime were introduced from an early stage. Java's HotSpot compiler is actually two compilers in one, and together with GraalVM (included in e.g. Java 11, but removed as of Java 16) it allows tiered compilation. Java itself is platform-independent and is adapted to the particular platform it is to run on by a Java virtual machine (JVM), which translates the Java bytecode into the platform's machine language. ==== Performance ==== Programs written in Java have a reputation for being slower and requiring more memory than those written in C++. However, Java programs' execution speed improved significantly with the introduction of just-in-time compilation in 1997/1998 for Java 1.1, the addition of language features supporting better code analysis (such as inner classes, the StringBuilder class, optional assertions, etc.), and optimizations in the Java virtual machine, such as HotSpot becoming Sun's default JVM in 2000. With Java 1.5, performance was improved with the addition of the java.util.concurrent package, including lock-free implementations of the ConcurrentMaps and other multi-core collections, and it was improved further with Java 1.6. === Non-JVM === Some platforms offer direct hardware support for Java; there are microcontrollers that can run Java bytecode in hardware instead of a software Java virtual machine, and some ARM-based processors have hardware support for executing Java bytecode through their Jazelle option, though support has mostly been dropped in current implementations of ARM. === Automatic memory management === Java uses an automatic garbage collector to manage memory in the object lifecycle.
The programmer determines when objects are created, and the Java runtime is responsible for recovering the memory once objects are no longer in use. Once no references to an object remain, the unreachable memory becomes eligible to be freed automatically by the garbage collector. Something similar to a memory leak may still occur if a programmer's code holds a reference to an object that is no longer needed, typically when objects that are no longer needed are stored in containers that are still in use. If methods for a non-existent object are called, a null pointer exception is thrown. One of the ideas behind Java's automatic memory management model is that programmers can be spared the burden of having to perform manual memory management. In some languages, memory for the creation of objects is implicitly allocated on the stack or explicitly allocated and deallocated from the heap. In the latter case, the responsibility of managing memory resides with the programmer. If the program does not deallocate an object, a memory leak occurs. If the program attempts to access or deallocate memory that has already been deallocated, the result is undefined and difficult to predict, and the program is likely to become unstable or crash. This can be partially remedied by the use of smart pointers, but these add overhead and complexity. Garbage collection does not prevent logical memory leaks, i.e. those where the memory is still referenced but never used. Garbage collection may happen at any time. Ideally, it will occur when a program is idle. It is guaranteed to be triggered if there is insufficient free memory on the heap to allocate a new object; this can cause a program to stall momentarily. Explicit memory management is not possible in Java. Java does not support C/C++ style pointer arithmetic, where object addresses can be arithmetically manipulated (e.g. by adding or subtracting an offset). 
This allows the garbage collector to relocate referenced objects and ensures type safety and security. As in C++ and some other object-oriented languages, variables of Java's primitive data types are either stored directly in fields (for objects) or on the stack (for methods) rather than on the heap, as is commonly true for non-primitive data types (but see escape analysis). This was a conscious decision by Java's designers for performance reasons. Java contains multiple types of garbage collectors. Since Java 9, HotSpot uses the Garbage First Garbage Collector (G1GC) as the default. However, there are also several other garbage collectors that can be used to manage the heap, such as the Z Garbage Collector (ZGC) introduced in Java 11, and Shenandoah GC, introduced in Java 12 but unavailable in Oracle-produced OpenJDK builds. Shenandoah is instead available in third-party builds of OpenJDK, such as Eclipse Temurin. For most applications in Java, G1GC is sufficient. In prior versions of Java, such as Java 8, the Parallel Garbage Collector was used as the default garbage collector. Having solved the memory management problem does not relieve the programmer of the burden of handling properly other kinds of resources, like network or database connections, file handles, etc., especially in the presence of exceptions. == Syntax == The syntax of Java is largely influenced by C++ and C. Unlike C++, which combines the syntax for structured, generic, and object-oriented programming, Java was built almost exclusively as an object-oriented language. All code is written inside classes, and every data item is an object, with the exception of the primitive data types, (i.e. integers, floating-point numbers, boolean values, and characters), which are not objects for performance reasons. Java reuses some popular aspects of C++ (such as the printf method). 
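The printf formatting borrowed from C, mentioned just above, works through System.out.printf and the equivalent String.format; a short illustration:

```java
public class PrintfDemo {
    public static void main(String[] args) {
        // %s = string, %d = decimal integer, %n = platform line separator,
        // mirroring C's printf conventions
        System.out.printf("%s was released in %d.%n", "Java", 1995);

        // The same format syntax backs String.format:
        // width 5, precision 2, zero-padded
        String s = String.format("%05.2f", 3.14159);
        System.out.println(s); // prints 03.14
    }
}
```

Unlike C, the format call is type-checked at run time: passing an argument that does not match its conversion throws an IllegalFormatException rather than reading arbitrary memory.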
Unlike C++, Java does not support operator overloading or multiple inheritance for classes, though multiple inheritance is supported for interfaces. Java uses comments similar to those of C++. There are three different styles of comments: a single-line style marked with two slashes (//), a multiple-line style opened with /* and closed with */, and the Javadoc commenting style opened with /** and closed with */. The Javadoc style of commenting allows the user to run the Javadoc executable to create documentation for the program and can be read by some integrated development environments (IDEs) such as Eclipse to allow developers to access documentation within the IDE. === Hello world === A classic first example is a "Hello, World!" program that writes a message to the standard output. == Special classes == === Applet === Java applets were programs embedded in other applications, mainly in web pages displayed in web browsers. The Java applet API was deprecated with the release of Java 9 in 2017. === Servlet === Java servlet technology provides Web developers with a simple, consistent mechanism for extending the functionality of a Web server and for accessing existing business systems. Servlets are server-side Java EE components that generate responses to requests from clients. Most of the time, this means generating HTML pages in response to HTTP requests, although there are a number of other standard servlet classes available, for example for WebSocket communication. The Java servlet API has to some extent been superseded (but is still used under the hood) by two standard Java technologies for web services: the Java API for RESTful Web Services (JAX-RS 2.0), useful for AJAX, JSON and REST services, and the Java API for XML Web Services (JAX-WS), useful for SOAP Web Services.
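The "Hello, World!" program mentioned in the Syntax section, written here with all three comment styles described there (the message() helper is added only so the text can be inspected separately from the printing):

```java
/**
 * Javadoc comment: the javadoc tool turns these into API documentation.
 */
public class HelloWorld {

    /* Multi-line comment:
       everything up to the closing delimiter is ignored. */
    public static void main(String[] args) {
        // Single-line comment
        System.out.println(message());
    }

    static String message() {
        return "Hello, World!"; // the text written to standard output
    }
}
```

Compiled with javac HelloWorld.java and run with java HelloWorld, the program prints Hello, World! to the standard output.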
Typical implementations of these APIs on Application Servers or Servlet Containers use a standard servlet for handling all interactions with the HTTP requests and responses that delegate to the web service methods for the actual business logic. === JavaServer Pages === JavaServer Pages (JSP) are server-side Java EE components that generate responses, typically HTML pages, to HTTP requests from clients. JSPs embed Java code in an HTML page by using the special delimiters <% and %>. A JSP is compiled to a Java servlet, a Java application in its own right, the first time it is accessed. After that, the generated servlet creates the response. === Swing application === Swing is a graphical user interface library for the Java SE platform. It is possible to specify a different look and feel through the pluggable look and feel system of Swing. Clones of Windows, GTK+, and Motif are supplied by Sun. Apple also provides an Aqua look and feel for macOS. Where prior implementations of these looks and feels may have been considered lacking, Swing in Java SE 6 addresses this problem by using more native GUI widget drawing routines of the underlying platforms. === JavaFX application === JavaFX is a software platform for creating and delivering desktop applications, as well as rich web applications that can run across a wide variety of devices. JavaFX is intended to replace Swing as the standard graphical user interface (GUI) library for Java SE, but since JDK 11 JavaFX has not been in the core JDK and instead in a separate module. JavaFX has support for desktop computers and web browsers on Microsoft Windows, Linux, and macOS. JavaFX does not have support for native OS look and feels. === Generics === In 2004, generics were added to the Java language, as part of J2SE 5.0. Prior to the introduction of generics, each variable declaration had to be of a specific type. 
For container classes, for example, this is a problem because there is no easy way to create a container that accepts only specific types of objects. Either the container operates on all subtypes of a class or interface, usually Object, or a different container class has to be created for each contained class. Generics allow compile-time type checking without having to create many container classes, each containing almost identical code. In addition to enabling more efficient code, generics prevent certain runtime exceptions from occurring by turning them into compile-time errors. If Java prevented all runtime type errors (ClassCastExceptions) from occurring, it would be type safe. In 2016, the type system of Java was proven unsound in that it is possible to use generics to construct classes and methods that allow assignment of an instance of one class to a variable of another unrelated class. Such code is accepted by the compiler, but fails at run time with a class cast exception. == Criticism == Criticisms directed at Java include the implementation of generics, speed, the handling of unsigned numbers, the implementation of floating-point arithmetic, and a history of security vulnerabilities in the primary Java VM implementation HotSpot. Developers have criticized the complexity and verbosity of the Java Persistence API (JPA), a standard part of Java EE. This has led to increased adoption of higher-level abstractions like Spring Data JPA, which aims to simplify database operations and reduce boilerplate code. The growing popularity of such frameworks suggests limitations in the standard JPA implementation's ease-of-use for modern Java development. == Class libraries == The Java Class Library is the standard library, developed to support application development in Java. It is controlled by Oracle in cooperation with others through the Java Community Process program. Companies or individuals participating in this process can influence the design and development of the APIs.
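The container problem described in the Generics section can be sketched before and after generics:

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    public static void main(String[] args) {
        // Pre-generics style: the container holds Object, so retrieval
        // needs a cast and type mistakes surface only at run time.
        List rawList = new ArrayList();
        rawList.add("hello");
        String fromRaw = (String) rawList.get(0); // unchecked cast

        // Generic style: the element type is checked at compile time.
        List<String> typedList = new ArrayList<>();
        typedList.add("hello");
        // typedList.add(42);                  // rejected by the compiler
        String fromTyped = typedList.get(0);   // no cast needed

        System.out.println(fromRaw.equals(fromTyped)); // prints true
    }
}
```

Note that generics are implemented by type erasure: both lists have the same class at run time, which is why the raw form still compiles (with a warning) and why the type checking happens entirely at compile time.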
This process has been a subject of controversy during the 2010s. The class library contains features such as:
- The core libraries, which include:
  - Input/output (I/O or IO) and non-blocking I/O (NIO), or IO/NIO
  - Networking (new user agent (HTTP client) since Java 11)
  - Reflective programming (reflection)
  - Concurrent computing (concurrency)
  - Generics
  - Scripting, Compiler
  - Functional programming (Lambda, streaming)
  - Collection libraries that implement data structures such as lists, dictionaries, trees, sets, queues and double-ended queues, or stacks
  - XML Processing (Parsing, Transforming, Validating) libraries
  - Security
  - Internationalization and localization libraries
- The integration libraries, which allow the application writer to communicate with external systems. These libraries include:
  - The Java Database Connectivity (JDBC) API for database access
  - Java Naming and Directory Interface (JNDI) for lookup and discovery
  - Java remote method invocation (RMI) and Common Object Request Broker Architecture (CORBA) for distributed application development
  - Java Management Extensions (JMX) for managing and monitoring applications
- User interface libraries, which include:
  - The (heavyweight, or native) Abstract Window Toolkit (AWT), which provides GUI components, the means for laying out those components and the means for handling events from those components
  - The (lightweight) Swing libraries, which are built on AWT but provide (non-native) implementations of the AWT widgetry
  - APIs for audio capture, processing, and playback
  - JavaFX
- A platform dependent implementation of the Java virtual machine that is the means by which the bytecodes of the Java libraries and third-party applications are executed
- Plugins, which enable applets to be run in web browsers
- Java Web Start, which allows Java applications to be efficiently distributed to end users across the Internet
- Licensing and documentation
== Documentation == Javadoc is a comprehensive documentation system, created by Sun Microsystems.
It provides developers with an organized system for documenting their code. Javadoc comments have an extra asterisk at the beginning, i.e. the delimiters are /** and */, whereas the normal multi-line comments in Java are delimited by /* and */, and single-line comments start with //.

== Implementations ==
Oracle Corporation owns the official implementation of the Java SE platform, due to its acquisition of Sun Microsystems on January 27, 2010. This implementation is based on the original implementation of Java by Sun. The Oracle implementation is available for Windows, macOS, Linux, and Solaris. Because Java lacks any formal standardization recognized by Ecma International, ISO/IEC, ANSI, or other third-party standards organizations, the Oracle implementation is the de facto standard. The Oracle implementation is packaged into two different distributions: the Java Runtime Environment (JRE), which contains the parts of the Java SE platform required to run Java programs and is intended for end users, and the Java Development Kit (JDK), which is intended for software developers and includes development tools such as the Java compiler, Javadoc, Jar, and a debugger. Oracle has also released GraalVM, a high-performance Java dynamic compiler and interpreter. OpenJDK is another Java SE implementation that is licensed under the GNU GPL. The implementation started when Sun began releasing the Java source code under the GPL. As of Java SE 7, OpenJDK is the official Java reference implementation. The goal of Java is to make all implementations of Java compatible. Historically, Sun's trademark license for usage of the Java brand insisted that all implementations be compatible. This resulted in a legal dispute with Microsoft after Sun claimed that the Microsoft implementation did not support Java remote method invocation (RMI) or Java Native Interface (JNI) and had added platform-specific features of their own.
Sun sued in 1997, and, in 2001, won a settlement of US$20 million, as well as a court order enforcing the terms of the license from Sun. As a result, Microsoft no longer ships Java with Windows. Platform-independent Java is essential to Java EE, and an even more rigorous validation is required to certify an implementation. This environment enables portable server-side applications. == Use outside the Java platform == The Java programming language requires the presence of a software platform in order for compiled programs to be executed. Oracle supplies the Java platform for use with Java. The Android SDK is an alternative software platform, used primarily for developing Android applications with its own GUI system. === Android === The Java language is a key pillar in Android, an open source mobile operating system. Although Android, built on the Linux kernel, is written largely in C, the Android SDK uses the Java language as the basis for Android applications but does not use any of its standard GUI, SE, ME or other established Java standards. The bytecode language supported by the Android SDK is incompatible with Java bytecode and runs on its own virtual machine, optimized for low-memory devices such as smartphones and tablet computers. Depending on the Android version, the bytecode is either interpreted by the Dalvik virtual machine or compiled into native code by the Android Runtime. Android does not provide the full Java SE standard library, although the Android SDK does include an independent implementation of a large subset of it. It supports Java 6 and some Java 7 features, offering an implementation compatible with the standard library (Apache Harmony). ==== Controversy ==== The use of Java-related technology in Android led to a legal dispute between Oracle and Google. On May 7, 2012, a San Francisco jury found that if APIs could be copyrighted, then Google had infringed Oracle's copyrights by the use of Java in Android devices. 
District Judge William Alsup ruled on May 31, 2012, that APIs cannot be copyrighted, but this was reversed by the United States Court of Appeals for the Federal Circuit in May 2014. On May 26, 2016, the district court decided in favor of Google, ruling that the copyright infringement of the Java API in Android constituted fair use. In March 2018, this ruling was overturned by the Appeals Court, which remanded the case to the federal court in San Francisco to determine damages. Google filed a petition for writ of certiorari with the Supreme Court of the United States in January 2019 to challenge the two rulings that the Appeals Court had made in Oracle's favor. On April 5, 2021, the Court ruled 6–2 in Google's favor, holding that its use of the Java APIs should be considered fair use. However, the court declined to rule on the copyrightability of APIs, choosing instead to treat them as copyrightable "purely for argument's sake."

== See also ==
- C#
- C++
- Dalvik, used in old Android versions, replaced by non-JIT Android Runtime
- Java Heterogeneous Distributed Computing
- List of Java APIs
- List of Java frameworks
- List of JVM languages
- List of Java virtual machines
- Comparison of C# and Java
- Comparison of Java and C++
- Comparison of programming languages

== References ==

== Bibliography ==

== External links ==
Official website, OpenJDK, Oracle JDK builds, Adoptium
https://en.wikipedia.org/wiki/Java_(programming_language)
The HSTDV is an unmanned scramjet demonstration aircraft for hypersonic flight. It is being developed as a carrier vehicle for hypersonic and long-range cruise missiles, and will have multiple civilian applications including the launching of small satellites at low cost. The HSTDV program is being run by the Defence Research and Development Organisation (DRDO).

== Introduction ==
India is pushing ahead with the development of ground and flight test hardware as part of an ambitious plan for a hypersonic cruise missile. The Defence Research and Development Laboratory's Hypersonic Technology Demonstrator Vehicle (HSTDV) is intended to attain autonomous scramjet flight for 20 seconds, using a solid rocket launch booster. The research will also inform India's interest in reusable launch vehicles. The eventual target is to reach Mach 6 at an altitude of 32.5 km (20 miles). Initial flight testing is aimed at validating the aerodynamics of the air vehicle, as well as its thermal properties and scramjet engine performance. A mock-up of the HSTDV was shown at the Aero India exhibition in Bengaluru in February, and S. Panneerselvam, the DRDO's project director, said engineers aimed to begin flight testing a full-scale air-breathing model powered by a 1,300-lb.-thrust scramjet engine in the near future.

== Design and development ==
The design for airframe attachment with the engine was completed in 2004. In May 2008, Dr. Saraswat said: "The HSTDV project, through which we want to demonstrate the performance of a scram-jet engine at an altitude of 15 km to 20 km, is on. Under this project, we are developing a hypersonic vehicle that will be powered by a scram-jet engine. This is dual-use technology, which when developed, will have multiple civilian applications. It can be used for launching satellites at low cost. It will also be available for long-range cruise missiles of the future."
Israel has provided some assistance on the HSTDV program, including wind tunnel testing, as has Cranfield University of the U.K. An unnamed third country is helping as well. According to a report, Russia has provided critical help in the project. India's main defence-industrial partner is Russia, which has carried out considerable research into hypersonic propulsion. The 1-metric-ton, 5.6-meter-long (18 ft) air vehicle under construction features a flattened octagonal cross section with mid-body stub-wings and raked tail fins, and a 3.7-meter rectangular-section air intake. The scramjet engine is located under the mid-body, with the aftbody serving as part of the exhaust nozzle. Development work on the engine is also in progress. Two parallel fences in the forebody are meant to reduce spillage and increase thrust. Part-span flaps are provided at the trailing edge of the wings for roll control. A deflectable nozzle cowl at the combustor end can deflect up to 25° to ensure satisfactory performance during power-off and power-on phases. Surfaces of the airframe's bottom, wings and tail are made of titanium alloy, while aluminum alloy comprises the top surface. The inner surface of the double-wall engine is niobium alloy and the outer surface is nimonic alloy. Because materials for the scramjet engine were subject to technology denial, a new program was initiated and the materials were developed in-house. This led to self-sufficiency in the area, and the scramjet engine was successfully ground-tested for 20 seconds instead of the initial 3 seconds. In the 12 June 2019 test, the cruise vehicle was mounted on an Agni-I solid rocket motor to take it to the required altitude. After the required altitude and Mach number were reached, the cruise vehicle was ejected from the launch vehicle. In mid-air, the scramjet engine auto-ignited and propelled the cruise vehicle at Mach 6. DRDO spent $30 million during the design and development phase, while $4.5 million was spent on HSTDV prototype development.
== Testing ==
=== Wind tunnel testing ===
A 1:16 scale model of the vehicle was tested at a hypersonic wind tunnel operated by Israel Aerospace Industries. The isolated intake has been tested at a trisonic wind tunnel at India's National Aerospace Laboratory (NAL) in Bangalore. During lab testing, the scramjet engine was tested twice for 20 seconds. A total of five to six tests were required before the test flight. The test flight was expected to take place by the end of 2010. In November 2010, DRDO officials told the press that they were in the process of opening four state-of-the-art facilities inside as well as in the vicinity of Hyderabad at a cost of more than ₹10 billion (US$118 million) over the next five years. Reportedly, they would invest ₹3 to 4 billion (US$66 to 88 million) to set up a much-needed hypersonic wind tunnel at Hyderabad's Missile Complex. The advanced Hypersonic Wind Tunnel (HWT) test facility was finally commissioned at Dr APJ Abdul Kalam Missile Complex on 20 December 2020. The facility enables testing of various parameters of the Hypersonic Technology Demonstrator Vehicle (HSTDV), including engine performance. "It is pivotal to test the [HSTDV] in the range of up to Mach 12. This will be a unique installation in India," Saraswat told AW&ST on 22 November 2010. As of December 2011, the scientists had proved technologies for aerodynamics, aero-thermodynamics, engine and hot structures through design and ground testing. "Ahead of the launch, we will have to now focus on the mechanical and electrical integration, control and guidance system along with their packaging, checkout system, HILS (hardware in loop simulation) and launch readiness," sources said.

=== Flight testing ===
In 2016, it was announced that the vehicle would be tested by December 2016. In early 2019, the vehicle was cleared for tests and was expected to be tested in the same year.
On 12 June 2019, it was tested from Abdul Kalam Island by the Defence Research and Development Organisation. With the scramjet engine, it can cruise at Mach 6. It was test-fired from Launch Complex-4 of the Integrated Test Range (ITR) at Abdul Kalam Island in the Balasore district of Odisha at 11:27 IST. According to some unconfirmed reports, the test was a partial success since, allegedly, the Agni-I ballistic carrier vehicle that was to give the HSTDV its altitude boost did not complete the mission, supposedly due to "weight issues". These reports, however, remained unconfirmed. According to the official statement by the Ministry of Defence, "the missile was successfully launched" and the data collected will be analysed to "validate critical technologies". On 7 September 2020, DRDO successfully tested the scramjet-powered Hypersonic Technology Demonstrator Vehicle (HSTDV). The cruise vehicle was launched at 11:03 IST from Integrated Test Range Launch Complex IV at Abdul Kalam Island atop a solid booster. At an altitude of 30 km (98,000 ft), the payload fairing separated, followed by separation of the HSTDV cruise vehicle, air-intake opening, fuel injection and auto-ignition. After sustaining hypersonic combustion for 20 seconds, the cruise vehicle achieved a velocity of nearly 2 km/s (Mach 5.9). A ship was also deployed in the Bay of Bengal to study the missile trajectory. The missile's scramjet engine performed in a "text book manner". This test flight validated the aerodynamic configuration of the vehicle, ignition and sustained combustion of the scramjet engine at hypersonic flow, and separation mechanisms, and characterised thermo-structural materials. The HSTDV is set to serve as the building block for next-generation hypersonic cruise missiles.
=== Scramjet testing ===

== Gallery ==

== See also ==
- Boeing X-51
- BrahMos-II
- HGV-202F

== References ==

== External links ==
Hypersonic Flight and Ground Testing Activities in India
Media related to DRDO Hypersonic Technology Demonstrator Vehicle at Wikimedia Commons
https://en.wikipedia.org/wiki/Hypersonic_Technology_Demonstrator_Vehicle
Rackspace Technology, Inc. is an American cloud computing company based in San Antonio, Texas. It also has offices in Blacksburg, Virginia and Austin, Texas, as well as in Australia, Canada, the United Kingdom, India, Dubai, Switzerland, the Netherlands, Germany, Singapore, Mexico and Hong Kong. Its data centers are located in Amsterdam (Netherlands), Virginia (USA), Chicago (USA), Dallas (USA), London (UK), Frankfurt (Germany), Hong Kong (China), Kansas City (USA), New York City (USA), San Jose (USA), Shanghai (China), Queenstown (Singapore) and Sydney (Australia).

== History ==
=== 1990s ===
Rackspace was founded in 1996 by Richard Yoo, Dirk Elmendorf and Patrick Condon. Two years later, Graham Weston and Morris Miller provided seed capital and began managing the company. The company began after Yoo dropped out of Trinity University and launched Cymitar Technology Group out of a garage, through which he sold internet access to his former classmates. In 1998, the company was renamed Rackspace. That year, Weston became CEO.

=== 2000s ===
Lanham Napier joined the company in 2000 as its chief financial officer. In 2006, Yoo left Rackspace and Napier was named chief executive officer (CEO). Weston stepped down as CEO and was named chairman that year. In 2008, Rackspace moved its headquarters to the then-unoccupied Windsor Park Mall in Windcrest, Texas. Rackspace's chairman, Graham Weston, owned the Montgomery Ward building in the mall until 2006, when it was sold to a developer. In 2005, following Hurricane Katrina, Rackspace employees volunteered to refurbish the Montgomery Ward building into a shelter for 1,300 people. The revitalization of the mall led to development in the surrounding area, including the creation of Racker Road and the frontage road Fanatical Way, inspired by the company's trademark "Fanatical Support". "Fanatical support" was the company's motto to describe its customer service.
This consisted of forgoing voicemail, providing live customer support, and keeping London-based customer service representatives always accessible, which news reports at the time credited with giving Rackspace an "edge" in the web hosting industry. Later, Rackspace's Fanatical Support described the service of providing customer representatives to businesses implementing cloud hosting. In 2008, Rackspace opened for trading on the New York Stock Exchange under the ticker symbol "RAX" after its initial public offering (IPO), in which it raised $187.5 million. The initial public offering included 15,000,000 shares of its common stock at a price of $12.50 per share. The IPO did not do well in the public market and lost about 20% of its initial price almost immediately. At around 3:45 PM CST on December 18, 2009, Rackspace experienced an outage for customers using their Dallas–Fort Worth data center, including those of Rackspace Cloud.

=== 2010s ===
On September 8, 2010, Rackspace received national attention when it decided to discontinue providing web hosting service to one of its customers, Dove World Outreach Center. This was in reaction to Dove World pastor Terry Jones' plan to burn several copies of the Qur'an on the anniversary of the September 11 attacks. Rackspace claimed that this violated its company policy. The move came under criticism, notably from Terry Jones himself, who described it as an "indirect attack on our freedom of speech." Others questioned the appropriateness of Rackspace's action, stating that there is "absolutely no reason for web hosts to have an editorial policy, and this only gives Jones more attention and makes him look more persecuted." In August 2016, it was confirmed that the American private equity firm Apollo Global Management had reached an agreement to buy the company for $4.3 billion. The sale was completed in November 2016 and Rackspace officially ended trading on the New York Stock Exchange on November 3, 2016.
In May 2017, CEO Taylor Rhodes announced he was leaving the company, and was replaced by Joe Eazor. Eazor was replaced in 2019 by Kevin Jones.

=== 2020s ===
In June 2020, the company changed its name to Rackspace Technology. In August 2020, Rackspace Technology opened for trading on the Nasdaq under the ticker symbol "RXT" after its initial public offering (IPO), which consisted of 33,500,000 shares of its common stock at a price of $21.00 per share. In September 2022, the company named Amar Maletira as its new CEO. In December 2022, Rackspace suffered a major service outage which affected all of its hosted Exchange users (customers who bought email services from Rackspace that involved instances of Microsoft Exchange hosted on Rackspace's servers). After an initial investigation, Rackspace declared the incident a "security incident" and said it had powered down its servers to protect customer data, which some commentators speculated might be indicative of a ransomware incident, a theory that was lent further credence by Rackspace's decision to recommend that customers migrate to Microsoft 365 rather than wait to have their Exchange-based solutions restored. On Monday, December 5, 2022, the first full day of trading after the incident (which started on the previous Friday), Rackspace's shares were down as much as 16% ($0.75). A class action lawsuit, Stephenson, et al. v. Rackspace Technology, Inc., was filed on December 12, 2022, by Cole & Van Note on behalf of tens of thousands of businesses that lost access to their email and services due to the incident. The class action was dismissed by a judge in San Antonio in May 2023. In January 2024, Rackspace moved its San Antonio global headquarters from Windcrest (The Castle) to the RidgeWood Plaza II office building, located in north-central San Antonio.
== Acquisitions ==
On September 13, 2007, Rackspace announced it had acquired email hosting provider Webmail.us, based in Blacksburg, Virginia. On October 22, 2008, Rackspace announced it was purchasing cloud storage provider Jungle Disk and VPS provider Slicehost. On February 16, 2012, Rackspace acquired SharePoint911, a Microsoft SharePoint consulting company based in Cincinnati, Ohio. On May 25, 2017, Rackspace announced an agreement to acquire TriCore Solutions. On September 11, 2017, Rackspace announced plans to acquire Datapipe. On September 17, 2018, Rackspace announced it had acquired RelationEdge. On November 4, 2019, Rackspace announced plans to acquire Onica. Other acquisitions include Cloudkick, Anso Labs, Mailgun, ObjectRocket, Exceptional Cloud Services, and ZeroVM. On January 18, 2022, Rackspace announced it had acquired the Singapore-headquartered cloud-based data, analytics and AI company Just Analytics.

== Involvement with other companies ==
Rackspace launched ServerBeach in San Antonio in January 2003 as a lower-cost alternative for dedicated servers, designed for technology hobbyists who want flexibility and reliability. Richard Yoo was a catalyst in the startup of ServerBeach. Peer 1 Hosting (now known as Cogeco Peer 1), a bandwidth and colocation provider, purchased ServerBeach in October 2004 for $7.5 million. Peer 1 Hosting entered the UK managed hosting market in January 2009, and the ServerBeach brand now competes directly with the UK arm of Rackspace, run by Dominic Monkhouse, former managing director of Rackspace Limited. In October 2006, Mosso Inc. was launched, which experimented with white-labeling hosting services. Eventually, the division became the foundation for the Rackspace Cloud Computing offering. On October 1, 2007, Rackspace acquired Webmail.us, a private e-mail hosting firm located in Blacksburg, Virginia. Originally branded as Mailtrust, on May 20, 2009, it became part of the newly formed Cloud Office division of Rackspace.
On October 22, 2008, Rackspace acquired Slicehost, a provider of virtual servers and Jungle Disk, a provider of online backup software and services. Rackspace announced on March 8, 2017, plans for an expansion to its portfolio to include managed service for the Google Cloud Platform. The program began beta testing on July 18, 2017, with a planned full offering in late 2017. Rackspace partnered with Google in Customer Reliability Engineering, a group of Google Site Reliability Engineers, to ensure cloud applications "run with the same speed and reliability as some of Google's most widely-used products". == OpenStack == In 2010, Rackspace contributed the source code of its Cloud Files product to the OpenStack project under the Apache License to become the OpenStack Object Storage component. In April 2012, Rackspace announced it would implement OpenStack Compute as the underlying technology for their Cloud Servers product. This change introduced a new control panel as well as add-on cloud services offering databases, server monitoring, block storage, and virtual networking. In 2015, two Rackspace executives were elected to the board of the OpenStack Foundation. In a February 2016 interview, CTO John Engates stated that Rackspace uses OpenStack to power their public and private cloud. == Recognition == Fortune's "Top 100 Best Companies to Work For 2008" placed Rackspace as No. 32. In 2011 and 2013, the company was named as one of the top 100 places to work by Fortune. == References ==
https://en.wikipedia.org/wiki/Rackspace_Technology
MEMS (micro-electromechanical systems) is the technology of microscopic devices incorporating both electronic and moving parts. MEMS are made up of components between 1 and 100 micrometres in size (i.e., 0.001 to 0.1 mm), and MEMS devices generally range in size from 20 micrometres to a millimetre (i.e., 0.02 to 1.0 mm), although components arranged in arrays (e.g., digital micromirror devices) can be more than 1,000 mm². They usually consist of a central unit that processes data (an integrated circuit chip such as a microprocessor) and several components that interact with the surroundings (such as microsensors). Because of the large surface-area-to-volume ratio of MEMS, forces produced by ambient electromagnetism (e.g., electrostatic charges and magnetic moments) and fluid dynamics (e.g., surface tension and viscosity) are more important design considerations than with larger-scale mechanical devices. MEMS technology is distinguished from molecular nanotechnology or molecular electronics in that the latter two must also consider surface chemistry. The potential of very small machines was appreciated before the technology existed that could make them (see, for example, Richard Feynman's famous 1959 lecture There's Plenty of Room at the Bottom). MEMS became practical once they could be fabricated using modified semiconductor device fabrication technologies, normally used to make electronics. These include molding and plating, wet etching (KOH, TMAH) and dry etching (RIE and DRIE), electrical discharge machining (EDM), and other technologies capable of manufacturing small devices. They merge at the nanoscale into nanoelectromechanical systems (NEMS) and nanotechnology.

== History ==
An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Robert A. Wickstrom for Harvey C. Nathanson in 1965. Another early example is the resonistor, an electromechanical monolithic resonator patented by Raymond J.
Wilfinger between 1966 and 1971. During the 1970s to early 1980s, a number of MOSFET microsensors were developed for measuring physical, chemical, biological and environmental parameters. The term "MEMS" was introduced in 1986. S.C. Jacobsen (PI) and J.E. Wood (Co-PI) introduced the term "MEMS" by way of a proposal to DARPA (15 July 1986), titled "Micro Electro-Mechanical Systems (MEMS)", granted to the University of Utah. The term "MEMS" was presented by way of an invited talk by S.C. Jacobsen, titled "Micro Electro-Mechanical Systems (MEMS)", at the IEEE Micro Robots and Teleoperators Workshop, Hyannis, MA Nov. 9–11, 1987. The term "MEMS" was published by way of a submitted paper by J.E. Wood, S.C. Jacobsen, and K.W. Grace, titled "SCOFSS: A Small Cantilevered Optical Fiber Servo System", in the IEEE Proceedings Micro Robots and Teleoperators Workshop, Hyannis, MA Nov. 9–11, 1987. CMOS transistors have been manufactured on top of MEMS structures. == Types == There are two basic types of MEMS switch technology: capacitive and ohmic. A capacitive MEMS switch is developed using a moving plate or sensing element, which changes the capacitance. Ohmic switches are controlled by electrostatically controlled cantilevers. Ohmic MEMS switches can fail from metal fatigue of the MEMS actuator (cantilever) and contact wear, since cantilevers can deform over time. == Materials == The fabrication of MEMS evolved from the process technology in semiconductor device fabrication, i.e. the basic techniques are deposition of material layers, patterning by photolithography and etching to produce the required shapes. Silicon Silicon is the material used to create most integrated circuits used in consumer electronics in the modern industry. The economies of scale, ready availability of inexpensive high-quality materials, and ability to incorporate electronic functionality make silicon attractive for a wide variety of MEMS applications. 
Silicon also has significant advantages engendered through its material properties. In single crystal form, silicon is an almost perfect Hookean material, meaning that when it is flexed there is virtually no hysteresis and hence almost no energy dissipation. As well as making for highly repeatable motion, this also makes silicon very reliable as it suffers very little fatigue and can have service lifetimes in the range of billions to trillions of cycles without breaking. Semiconductor nanostructures based on silicon are gaining increasing importance in the field of microelectronics and MEMS in particular. Silicon nanowires, fabricated through the thermal oxidation of silicon, are of further interest in electrochemical conversion and storage, including nanowire batteries and photovoltaic systems. Polymers Even though the electronics industry provides an economy of scale for the silicon industry, crystalline silicon is still a complex and relatively expensive material to produce. Polymers on the other hand can be produced in huge volumes, with a great variety of material characteristics. MEMS devices can be made from polymers by processes such as injection molding, embossing or stereolithography and are especially well suited to microfluidic applications such as disposable blood testing cartridges. Metals Metals can also be used to create MEMS elements. While metals do not have some of the advantages displayed by silicon in terms of mechanical properties, when used within their limitations, metals can exhibit very high degrees of reliability. Metals can be deposited by electroplating, evaporation, and sputtering processes. Commonly used metals include gold, nickel, aluminium, copper, chromium, titanium, tungsten, platinum, and silver. Ceramics The nitrides of silicon, aluminium and titanium as well as silicon carbide and other ceramics are increasingly applied in MEMS fabrication due to advantageous combinations of material properties. 
AlN crystallizes in the wurtzite structure and thus shows pyroelectric and piezoelectric properties enabling sensors, for instance, with sensitivity to normal and shear forces. TiN, on the other hand, exhibits a high electrical conductivity and large elastic modulus, making it possible to implement electrostatic MEMS actuation schemes with ultrathin beams. Moreover, the high resistance of TiN against biocorrosion qualifies the material for applications in biogenic environments. The figure shows an electron-microscopic picture of a MEMS biosensor with a 50 nm thin bendable TiN beam above a TiN ground plate. Both can be driven as opposite electrodes of a capacitor, since the beam is fixed in electrically isolating side walls. When a fluid is suspended in the cavity its viscosity may be derived from bending the beam by electrical attraction to the ground plate and measuring the bending velocity. == Basic processes == === Deposition processes === One of the basic building blocks in MEMS processing is the ability to deposit thin films of material with a thickness anywhere from one micrometre to about 100 micrometres. The NEMS process is the same, although the measurement of film deposition ranges from a few nanometres to one micrometre. There are two types of deposition processes, as follows. ==== Physical deposition ==== Physical vapor deposition ("PVD") consists of a process in which a material is removed from a target, and deposited on a surface. Techniques to do this include the process of sputtering, in which an ion beam liberates atoms from a target, allowing them to move through the intervening space and deposit on the desired substrate, and evaporation, in which a material is evaporated from a target using either heat (thermal evaporation) or an electron beam (e-beam evaporation) in a vacuum system. 
==== Chemical deposition ====
Chemical deposition techniques include chemical vapor deposition (CVD), in which a stream of source gas reacts on the substrate to grow the material desired. This can be further divided into categories depending on the details of the technique, for example LPCVD (low-pressure chemical vapor deposition) and PECVD (plasma-enhanced chemical vapor deposition). Oxide films can also be grown by the technique of thermal oxidation, in which the (typically silicon) wafer is exposed to oxygen and/or steam, to grow a thin surface layer of silicon dioxide.

=== Patterning ===
Patterning is the transfer of a pattern into a material.

=== Lithography ===
Lithography in a MEMS context is typically the transfer of a pattern into a photosensitive material by selective exposure to a radiation source such as light. A photosensitive material is a material that experiences a change in its physical properties when exposed to a radiation source. If a photosensitive material is selectively exposed to radiation (e.g. by masking some of the radiation), the pattern of the radiation on the material is transferred to the material exposed, as the properties of the exposed and unexposed regions differ. This exposed region can then be removed or treated, providing a mask for the underlying substrate. Photolithography is typically used with metal or other thin-film deposition, and with wet and dry etching. Sometimes, photolithography is used to create structures without any kind of post-etching. One example is an SU-8-based lens, in which square blocks of SU-8 photoresist are generated and then melted to form semi-spheres that act as lenses. Electron beam lithography (often abbreviated as e-beam lithography) is the practice of scanning a beam of electrons in a patterned fashion across a surface covered with a film called the resist ("exposing" the resist) and of selectively removing either exposed or non-exposed regions of the resist ("developing").
The purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. It was developed for manufacturing integrated circuits, and is also used for creating nanotechnology architectures. The primary advantage of electron beam lithography is that it is one of the ways to beat the diffraction limit of light and make features in the nanometer range. This form of maskless lithography has found wide usage in photomask-making used in photolithography, low-volume production of semiconductor components, and research and development. The key limitation of electron beam lithography is throughput, i.e., the very long time it takes to expose an entire silicon wafer or glass substrate. A long exposure time leaves the user vulnerable to beam drift or instability which may occur during the exposure. Also, the turn-around time for reworking or re-design is lengthened unnecessarily if the pattern is not being changed the second time. It is known that focused-ion beam lithography has the capability of writing extremely fine lines (less than 50 nm line and space has been achieved) without proximity effect. However, because the writing field in ion-beam lithography is quite small, large-area patterns must be created by stitching together the small fields. Ion track technology is a deep cutting tool with a resolution limit around 8 nm applicable to radiation-resistant minerals, glasses and polymers. It is capable of generating holes in thin films without any development process. Structural depth can be defined either by ion range or by material thickness. Aspect ratios up to several 10⁴ can be reached. The technique can shape and texture materials at a defined inclination angle. Random patterns, single-ion track structures and aimed patterns consisting of individual single tracks can be generated.
X-ray lithography is a process used in the electronic industry to selectively remove parts of a thin film. It uses X-rays to transfer a geometric pattern from a mask to a light-sensitive chemical photoresist, or simply "resist", on the substrate. A series of chemical treatments then engraves the produced pattern into the material underneath the photoresist. Diamond patterning is a method of forming diamond MEMS. It is achieved by the lithographic application of diamond films to a substrate such as silicon. The patterns can be formed by selective deposition through a silicon dioxide mask, or by deposition followed by micromachining or focused ion beam milling. === Etching processes === There are two basic categories of etching processes: wet etching and dry etching. In the former, the material is dissolved when immersed in a chemical solution. In the latter, the material is sputtered or dissolved using reactive ions or a vapor-phase etchant. ==== Wet etching ==== Wet chemical etching consists of the selective removal of material by dipping a substrate into a solution that dissolves it. The chemical nature of this etching process provides good selectivity, which means the etching rate of the target material is considerably higher than that of the mask material if selected carefully. Wet etching can be performed using either isotropic or anisotropic wet etchants. Isotropic wet etchants etch in all directions of the crystalline silicon at approximately equal rates. Anisotropic wet etchants preferentially etch along certain crystal planes at faster rates than other planes, thereby allowing more complicated 3-D microstructures to be implemented. Wet anisotropic etchants are often used in conjunction with boron etch stops, wherein the surface of the silicon is heavily doped with boron, resulting in a silicon material layer that is resistant to the wet etchants. This has been used in MEMS pressure sensor manufacturing, for example.
Etching progresses at the same speed in all directions. Long and narrow holes in a mask will produce v-shaped grooves in the silicon. The surface of these grooves can be atomically smooth if the etch is carried out correctly, with dimensions and angles being extremely accurate. Some single crystal materials, such as silicon, will have different etching rates depending on the crystallographic orientation of the substrate. This is known as anisotropic etching, and one of the most common examples is the etching of silicon in KOH (potassium hydroxide), where Si <111> planes etch approximately 100 times slower than other planes (crystallographic orientations). Therefore, etching a rectangular hole in a (100)-Si wafer results in a pyramid-shaped etch pit with 54.7° walls, instead of a hole with curved sidewalls as with isotropic etching. Hydrofluoric acid is commonly used as an aqueous etchant for silicon dioxide (SiO2, also known as BOX for SOI), usually in 49% concentrated form, or as 5:1, 10:1 or 20:1 BOE (buffered oxide etchant) or BHF (buffered HF). Hydrofluoric acid was first used in medieval times for glass etching. It was used in IC fabrication for patterning the gate oxide until the process step was replaced by RIE. Hydrofluoric acid is considered one of the more dangerous acids in the cleanroom. Electrochemical etching (ECE) for dopant-selective removal of silicon is a common method to automate and to selectively control etching. An active p–n diode junction is required, and either type of dopant can be the etch-resistant ("etch-stop") material. Boron is the most common etch-stop dopant. In combination with wet anisotropic etching as described above, ECE has been used successfully for controlling silicon diaphragm thickness in commercial piezoresistive silicon pressure sensors. Selectively doped regions can be created either by implantation, diffusion, or epitaxial deposition of silicon.
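The 54.7° sidewall angle quoted above is not arbitrary: it is the angle between the slow-etching (111) planes and the (100) wafer surface, arccos(1/√3). A quick geometric check (illustrative sketch only):

```python
import math

# Angle between the (111) and (100) crystal planes from their normals:
# cos(theta) = (n1 . n2) / (|n1| * |n2|)
n111 = (1, 1, 1)
n100 = (1, 0, 0)
dot = sum(a * b for a, b in zip(n111, n100))
norm = math.sqrt(sum(a * a for a in n111)) * math.sqrt(sum(a * a for a in n100))
theta_deg = math.degrees(math.acos(dot / norm))
print(f"{theta_deg:.1f} degrees")  # -> 54.7
```

This is why KOH etching of a square mask opening on a (100) wafer self-terminates in a pyramidal pit bounded by (111) facets.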
==== Dry etching ==== Xenon difluoride (XeF2) is a dry vapor-phase isotropic etch for silicon, originally applied for MEMS in 1995 at the University of California, Los Angeles. Primarily used for releasing metal and dielectric structures by undercutting silicon, XeF2 has the advantage of a stiction-free release, unlike wet etchants. Its etch selectivity to silicon is very high, allowing it to work with photoresist, SiO2, silicon nitride, and various metals for masking. Its reaction with silicon is "plasmaless": it is purely chemical and spontaneous, and the etch is often operated in pulsed mode. Models of the etching action are available, and university laboratories and various commercial tools offer solutions using this approach. Modern VLSI processes avoid wet etching, and use plasma etching instead. Plasma etchers can operate in several modes by adjusting the parameters of the plasma. Ordinary plasma etching operates between 0.1 and 5 Torr. (This unit of pressure, commonly used in vacuum engineering, equals approximately 133.3 pascals.) The plasma produces energetic free radicals, neutrally charged, that react at the surface of the wafer. Since neutral particles attack the wafer from all angles, this process is isotropic. Plasma etching can be isotropic, i.e., exhibiting a lateral undercut rate on a patterned surface approximately the same as its downward etch rate, or can be anisotropic, i.e., exhibiting a smaller lateral undercut rate than its downward etch rate. Such anisotropy is maximized in deep reactive ion etching. The use of the term anisotropy for plasma etching should not be conflated with the use of the same term when referring to orientation-dependent etching. The source gas for the plasma usually contains small molecules rich in chlorine or fluorine. For instance, carbon tetrachloride (CCl4) etches silicon and aluminium, and trifluoromethane etches silicon dioxide and silicon nitride.
A plasma containing oxygen is used to oxidize ("ash") photoresist and facilitate its removal. Ion milling, or sputter etching, uses lower pressures, often as low as 10⁻⁴ Torr (10 mPa). It bombards the wafer with energetic ions of noble gases, often Ar+, which knock atoms from the substrate by transferring momentum. Because the etching is performed by ions, which approach the wafer approximately from one direction, this process is highly anisotropic. On the other hand, it tends to display poor selectivity. Reactive-ion etching (RIE) operates under conditions intermediate between sputter and plasma etching (between 10⁻³ and 10⁻¹ Torr). Deep reactive-ion etching (DRIE) modifies the RIE technique to produce deep, narrow features. In reactive-ion etching (RIE), the substrate is placed inside a reactor, and several gases are introduced. A plasma is struck in the gas mixture using an RF power source, which breaks the gas molecules into ions. The ions accelerate towards, and react with, the surface of the material being etched, forming another gaseous material. This is known as the chemical part of reactive ion etching. There is also a physical part, which is similar to the sputtering deposition process. If the ions have high enough energy, they can knock atoms out of the material to be etched without a chemical reaction. It is a very complex task to develop dry etch processes that balance chemical and physical etching, since there are many parameters to adjust. By changing the balance it is possible to influence the anisotropy of the etching: since the chemical part is isotropic and the physical part highly anisotropic, the combination can form sidewalls that have shapes from rounded to vertical. Deep reactive ion etching (DRIE) is a special subclass of RIE that is growing in popularity. In this process, etch depths of hundreds of micrometers are achieved with almost vertical sidewalls.
The primary technology is based on the so-called "Bosch process", named after the German company Robert Bosch, which filed the original patent; in it, two different gas compositions alternate in the reactor. Currently, there are two variations of DRIE. The first variation consists of three distinct steps (the original Bosch process) while the second variation only consists of two steps. In the first variation, the etch cycle is as follows: (i) SF6 isotropic etch; (ii) C4F8 passivation; (iii) SF6 anisotropic etch for floor cleaning. In the second variation, steps (i) and (iii) are combined. Both variations operate similarly. The C4F8 creates a polymer on the surface of the substrate, and the second gas composition (SF6 and O2) etches the substrate. The polymer is immediately sputtered away by the physical part of the etching, but only on the horizontal surfaces and not the sidewalls. Since the polymer only dissolves very slowly in the chemical part of the etching, it builds up on the sidewalls and protects them from etching. As a result, etching aspect ratios of 50 to 1 can be achieved. The process can easily be used to etch completely through a silicon substrate, and etch rates are 3–6 times higher than wet etching. After preparing a large number of MEMS devices on a silicon wafer, individual dies have to be separated, which is called die preparation in semiconductor technology. For some applications, the separation is preceded by wafer backgrinding in order to reduce the wafer thickness. Wafer dicing may then be performed either by sawing using a cooling liquid or a dry laser process called stealth dicing. == Manufacturing technologies == Bulk micromachining is the oldest paradigm of silicon-based MEMS. The whole thickness of a silicon wafer is used for building the micro-mechanical structures. Silicon is machined using various etching processes.
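The 50:1 aspect-ratio figure above translates directly into achievable trench depth for a given opening width. A rough back-of-the-envelope sketch (the trench widths below are illustrative assumptions):

```python
def max_trench_depth_um(width_um, aspect_ratio=50):
    """Maximum etch depth (in micrometres) for a given trench width
    at a fixed achievable aspect ratio (depth : width)."""
    return width_um * aspect_ratio

# A 10 um wide trench at 50:1 can reach about 500 um, which is why the
# Bosch process can etch completely through a typical silicon wafer
# (several hundred micrometres thick).
print(max_trench_depth_um(10))   # -> 500
# A 2 um trench at the same ratio is limited to about 100 um.
print(max_trench_depth_um(2))    # -> 100
```

The linear scaling is the practical constraint: narrow features cannot be etched arbitrarily deep, which drives layout choices in DRIE-based designs.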
Bulk micromachining has been essential in enabling high-performance pressure sensors and accelerometers that changed the sensor industry in the 1980s and 1990s. Surface micromachining uses layers deposited on the surface of a substrate as the structural materials, rather than using the substrate itself. Surface micromachining was created in the late 1980s to render micromachining of silicon more compatible with planar integrated circuit technology, with the goal of combining MEMS and integrated circuits on the same silicon wafer. The original surface micromachining concept was based on thin polycrystalline silicon layers patterned as movable mechanical structures and released by sacrificial etching of the underlying oxide layer. Interdigital comb electrodes were used to produce in-plane forces and to detect in-plane movement capacitively. This MEMS paradigm has enabled the manufacturing of low-cost accelerometers for, e.g., automotive air-bag systems and other applications where low performance and/or high g-ranges are sufficient. Analog Devices has pioneered the industrialization of surface micromachining and has realized the co-integration of MEMS and integrated circuits. Wafer bonding involves joining two or more substrates (usually having the same diameter) to one another to form a composite structure. There are several types of wafer bonding processes that are used in microsystems fabrication, including: direct or fusion wafer bonding, wherein two or more wafers are bonded together that are usually made of silicon or some other semiconductor material; anodic bonding, wherein a boron-doped glass wafer is bonded to a semiconductor wafer, usually silicon; thermocompression bonding, wherein an intermediary thin-film material layer is used to facilitate wafer bonding; and eutectic bonding, wherein a thin-film layer of gold is used to bond two silicon wafers. Each of these methods has specific uses depending on the circumstances.
Most wafer bonding processes rely on three basic criteria for successful bonding: the wafers to be bonded are sufficiently flat; the wafer surfaces are sufficiently smooth; and the wafer surfaces are sufficiently clean. The most stringent criteria usually apply to direct fusion wafer bonding, since even a few small particulates can render the bonding unsuccessful. In comparison, wafer bonding methods that use intermediary layers are often far more forgiving. Both bulk and surface silicon micromachining are used in the industrial production of sensors, ink-jet nozzles, and other devices. But in many cases the distinction between these two has diminished. A new etching technology, deep reactive-ion etching, has made it possible to combine good performance typical of bulk micromachining with comb structures and in-plane operation typical of surface micromachining. While it is common in surface micromachining to have structural layer thicknesses in the range of 2 μm, in HAR silicon micromachining the thickness can be from 10 to 100 μm. The materials commonly used in HAR silicon micromachining are thick polycrystalline silicon, known as epi-poly, and bonded silicon-on-insulator (SOI) wafers, although processes for bulk silicon wafers have also been created (SCREAM). Bonding a second wafer by glass frit bonding, anodic bonding or alloy bonding is used to protect the MEMS structures. Integrated circuits are typically not combined with HAR silicon micromachining. == Applications == Some common commercial applications of MEMS include: Inkjet printers, which use piezoelectrics or thermal bubble ejection to deposit ink on paper. Accelerometers in modern cars for a large number of purposes including airbag deployment and electronic stability control. Inertial measurement units (IMUs): MEMS accelerometers.
MEMS gyroscopes in remote controlled, or autonomous, helicopters, planes and multirotors (also known as drones), used for automatically sensing and balancing flying characteristics of roll, pitch and yaw. MEMS magnetic field sensors (magnetometers) may also be incorporated in such devices to provide directional heading. MEMS inertial navigation systems (INSs) of modern cars, airplanes, submarines and other vehicles to detect yaw, pitch, and roll; for example, the autopilot of an airplane. Accelerometers in consumer electronics devices such as game controllers (Nintendo Wii), personal media players / cell phones (virtually all smartphones, various HTC PDA models), augmented reality (AR) and virtual reality (VR) devices, and a number of digital cameras (various Canon Digital IXUS models). Also used in PCs to park the hard disk head when free-fall is detected, to prevent damage and data loss. MEMS speakers for headphones. MEMS barometers. MEMS microphones in portable devices, e.g., mobile phones, headsets and laptops. The market for smart microphones includes smartphones, wearable devices, smart home and automotive applications. Precision temperature-compensated resonators in real-time clocks. Silicon pressure sensors, e.g., car tire pressure sensors and disposable blood pressure sensors. Displays, e.g., the digital micromirror device (DMD) chip in a projector based on DLP technology, which has a surface with several hundred thousand micromirrors, or single micro-scanning mirrors, also called microscanners. The MEMS mirrors can also be used in conjunction with laser scanning to project an image. Optical switching technology, used for switching and alignment in data communications. RF switches and relays. Bio-MEMS applications in medical and health-related technologies, including lab-on-a-chip (taking advantage of microfluidics and micropumps), biosensors and chemosensors, as well as embedded components of medical devices, e.g. stents.
Interferometric modulator display (IMOD) applications in consumer electronics (primarily displays for mobile devices), used to create interferometric modulation − a reflective display technology as found in mirasol displays. Fluid acceleration, such as for micro-cooling. Micro-scale energy harvesting including piezoelectric, electrostatic and electromagnetic micro harvesters. Micromachined ultrasound transducers. MEMS-based loudspeakers focusing on applications such as in-ear headphones and hearing aids. MEMS oscillators. MEMS-based scanning probe microscopes including atomic force microscopes. LiDAR (light detection and ranging). == Industry structure == The global market for micro-electromechanical systems, which includes products such as automobile airbag systems, display systems and inkjet cartridges, totaled $40 billion in 2006 according to Global MEMS/Microsystems Markets and Opportunities, a research report from SEMI and Yole Development, and was forecast to reach $72 billion by 2011. Companies with strong MEMS programs come in many sizes. Larger firms specialize in manufacturing high-volume inexpensive components or packaged solutions for end markets such as automobiles, biomedical, and electronics. Smaller firms provide value in innovative solutions and absorb the expense of custom fabrication with high sales margins. Both large and small companies typically invest in R&D to explore new MEMS technology. The market for materials and equipment used to manufacture MEMS devices topped $1 billion worldwide in 2006. Materials demand is driven by substrates, which make up over 70 percent of the market, by packaging coatings, and by the increasing use of chemical mechanical planarization (CMP). While MEMS manufacturing continues to be dominated by used semiconductor equipment, there is a migration to 200 mm lines and select new tools, including etch and bonding for certain MEMS applications.
== See also == MEMS sensor generations Microoptoelectromechanical systems Microoptomechanical systems Nanoelectromechanical systems == References == == Further reading == Microsystem Technologies, published by Springer Publishing, Journal homepage Geschke, O.; Klank, H.; Telleman, P., eds. (2004). Microsystem Engineering of Lab-on-a-chip Devices. Wiley. ISBN 3-527-30733-8. == External links == Chollet, F.; Liu, HB. (10 August 2018). A (not so) short introduction to MEMS. ISBN 978-2-9542015-0-4.
https://en.wikipedia.org/wiki/MEMS
Pokhran-II (Operation Shakti) was a series of five nuclear weapon tests conducted by India in May 1998. The bombs were detonated at the Indian Army's Pokhran Test Range in Rajasthan. It was the second instance of nuclear testing conducted by India, after the first test, Smiling Buddha, in May 1974. The test consisted of five detonations, the first of which was claimed to be a two-stage fusion bomb while the remaining four were fission bombs. The first three tests were carried out simultaneously on 11 May 1998 and the last two were detonated two days later on 13 May 1998. The tests were collectively called Operation Shakti, and the five nuclear bombs were designated Shakti-I to Shakti-V. The chairman of the Atomic Energy Commission of India described each of the explosions as being equivalent to several tests carried out over the years by various nations. While announcing the tests, the Indian government declared India a nuclear state and stated that the tests had achieved the main objective of providing the capability to build fission bombs and thermonuclear weapons with yields up to 200 kilotons. While the Indian fission bombs have been documented, the design and development of thermonuclear weapons remains uncertain after the tests. As a consequence of the tests, United Nations Security Council Resolution 1172 was enacted and economic sanctions were imposed by countries including Japan and the United States. == History == === Early nuclear programme (1944–1965) === Efforts towards building a nuclear bomb, developing infrastructure, and researching related technologies have been undertaken by India since the end of the Second World War. The origins of India's nuclear programme go back to 1945, when nuclear physicist Homi Bhabha established the Tata Institute of Fundamental Research (TIFR) with the aid of the Tata Group. After Indian independence, the Atomic Energy Act was passed on 15 April 1948, which established the Indian Atomic Energy Commission (IAEC).
In 1954, the Department of Atomic Energy (DAE) was established, which was responsible for the atomic development programme and was allocated a significant amount of the defence budget in the subsequent years. In 1956, the first nuclear reactor became operational at the Bhabha Atomic Research Centre (BARC), becoming the first operating reactor in Asia. In 1961, India commissioned a reprocessing plant to produce weapon-grade plutonium. In 1962, India was engaged in a war with China, and with China conducting its own nuclear test in 1964, India accelerated its development of nuclear weapons. With two reactors operational in the early 1960s, research progressed into the manufacture of nuclear weapons. With the unexpected deaths of then Prime Minister Nehru in 1964 and Bhabha in 1966, the programme slowed down. The incoming prime minister Lal Bahadur Shastri appointed physicist Vikram Sarabhai as the head of the nuclear programme, and the direction of the programme changed towards using nuclear energy for peaceful purposes rather than military development. === Development of nuclear bomb and first test (1966–1972) === After Shastri's death in 1966, Indira Gandhi became the prime minister and work on the nuclear programme resumed. The design work on the bomb proceeded under physicist Raja Ramanna, who continued the nuclear weapons technology research after Bhabha's death in 1966. The project employed 75 scientists and progressed in secrecy. During the 1971 Indo-Pakistani War, the US government sent a carrier battle group into the Bay of Bengal in an attempt to intimidate India; the Soviet Union, which supported India, responded by sending a submarine armed with nuclear missiles. The Soviet response underlined the deterrent value and significance of nuclear weapons to India. After India gained military and political initiative over Pakistan in the war, the work on building a nuclear device continued.
The hardware began to be built in early 1972 and the Prime Minister authorised the development of a nuclear test device in September 1972. On 18 May 1974, India tested an implosion-type fission device at the Indian Army's Pokhran Test Range under the code name Smiling Buddha. The test was described as a peaceful nuclear explosion (PNE) and the yield was estimated to be between 6 and 10 kilotons. === Aftermath of nuclear tests (1973–1988) === While India continued to state that the test was for peaceful purposes, it encountered opposition from many countries. The Nuclear Suppliers Group (NSG) was formed in reaction to the Indian tests to check international nuclear proliferation. The technological embargo and sanctions affected the development of India's nuclear programme, which was hampered by the lack of indigenous resources and its dependence on imported technology in certain areas. Though India declared to the International Atomic Energy Agency (IAEA) that its nuclear programme was intended only for peaceful purposes, preliminary work on a fusion bomb was initiated. In the aftermath of the state emergency in 1975 that resulted in the collapse of the Second Indira Gandhi ministry, the programme continued under M.R. Srinivasan, but made slow progress. Though the nuclear programme did not receive much attention from incoming Prime Minister Morarji Desai at first, it gained impetus when Ramanna was appointed to the Ministry of Defence. With the discovery of Pakistan's clandestine atomic bomb programme, India realised that Pakistan was likely to succeed in its project within a few years. With the return of Indira Gandhi in 1980, the nuclear programme gained momentum. Two new underground shafts were constructed at the Pokhran test range by 1982 and Gandhi approved further nuclear tests in 1982. But the decision was reversed owing to pressure from the United States, as testing might lead to nuclear brinkmanship with Pakistan and have foreign policy implications.
Work continued towards weaponizing the nuclear bomb under V. S. R. Arunachalam and the Indian missile programme was launched under A. P. J. Abdul Kalam. Ramanna pushed forward with a uranium enrichment program and, despite the sanctions, India imported heavy water, required as a neutron moderator in the nuclear reactors, from countries like China, Norway and the Soviet Union through a middleman. Though Rajiv Gandhi, who became the Prime Minister in 1984, supported technological development and research, he was sceptical about nuclear testing as he believed it would result in further technological alienation from the developed countries. Dhruva, a new reactor with a capability to produce larger quantities of weapon-grade material, was commissioned at BARC in 1985. Other components for a nuclear fusion bomb were developed during the time, along with capabilities to air-drop nuclear weapons. In late 1985, a study group commissioned by the Prime Minister outlined a plan for the production of 70 to 100 nuclear warheads and a strict no-first-use policy. === Building towards second nuclear test (1989–1998) === In 1989, V.P. Singh formed the government, which collapsed within two years, and this period of instability caused a snag in the nuclear weapons programme. Foreign relations between India and Pakistan severely worsened when India accused Pakistan of supporting the insurgency in Jammu and Kashmir. During this time, the Indian missile programme succeeded in the development of the Prithvi missiles. India decided to observe a temporary moratorium on nuclear tests for fear of inviting international criticism. The NSG decided in 1992 to require full-scope IAEA safeguards for any new nuclear export deals, which effectively ruled out nuclear exports to India. Though India had stockpiled material and components to construct a dozen nuclear fission bombs, the delivery mechanism was still under development.
With the successful testing of the Agni missile and successful trials involving the dropping of similar bombs without fissionable material from bomber aircraft in 1994, weaponization was complete. With the Comprehensive Nuclear-Test-Ban Treaty under discussion and global pressure pushing India to sign, then Indian Prime Minister Narasimha Rao ordered preparations for further nuclear tests in 1995. At the direction of DAE director R. Chidambaram, S. K. Sikka was tasked with the development of a thermonuclear fusion device. In August, K. Santhanam, the chief technical adviser of DRDO, was appointed the director for carrying out the tests. While water was being pumped out of the shafts constructed more than ten years earlier, American spy satellites picked up the signs. Under pressure from US President Bill Clinton, the tests did not proceed. With Rao's term ending in 1996, the next two years saw multiple governments being formed. Atal Bihari Vajpayee, who was a strong advocate of nuclear weaponization, came to power following the 1998 general elections. Vajpayee had earlier declared that if voted back to power, his government would induct nuclear weapons and declare India's might to gather respect. Soon after assuming power in March 1998, Vajpayee organized a discussion with Abdul Kalam and Chidambaram to conduct nuclear tests. On 28 March 1998, he asked them to make preparations for a test. == Nuclear test == === Preparation === The Indian Intelligence Bureau had been aware of the capability of United States spy satellites to detect Indian test preparations. Therefore, the tests required complete secrecy, and the 58th Engineer Regiment of the Indian Army Corps of Engineers was tasked with preparing the test sites without being detected. Work was mostly done during the night, and equipment was returned to its original place during the day to give the impression that it was never moved.
Bomb shafts were dug under camouflage netting and the dug-out sand was shaped like natural sand dunes. Cables and sensors were either covered with sand or concealed using native vegetation. A select group was involved in the detonation process, with all personnel required to wear uniforms to preserve the secrecy of the tests. They were given pseudonyms and they traveled in smaller groups to avoid detection. Scientists and engineers of BARC, the Atomic Minerals Directorate for Exploration and Research (AMDER), and DRDO were involved in the development and assembly of the bombs. Three laboratories of the DRDO were involved in designing, testing and producing components for the bombs, including the detonators and the implosion and high-voltage trigger systems. These were also responsible for systems engineering, aerodynamics and safety. The bombs were transported from BARC at 3 am on 1 May 1998 to Bombay airport, then flown in an Indian Air Force AN-32 aircraft to Jaisalmer Airport. They were then transported to Pokhran in an army convoy of four trucks, which required three trips. The devices were delivered to the device preparation building, which was designated as the 'Prayer Hall'. === Personnel === Following were the main personnel involved in the testing: Chief Coordinators : A.P.J. Abdul Kalam, scientific adviser to the defence minister and head of the DRDO R. Chidambaram, chairman of the Atomic Energy Commission and the Department of Atomic Energy Defence Research & Development Organization (DRDO): K. Santhanam, director of test site preparations Bhabha Atomic Research Centre (BARC) : Anil Kakodkar, director Satinder Kumar Sikka, lead for thermonuclear weapon development M. S. Ramakumar, Director of Nuclear Fuel and Automation Manufacturing Group; lead for manufacture of nuclear components D.D. Sood, director of Radiochemistry and Isotope Group; director of nuclear materials acquisition S.K.
Gupta, Solid State Physics and Spectroscopy Group; director of device design and assessment G. Govindraj, associate director of Electronic and Instrumentation Group; director of field instrumentation === Testing === The test was organized into two groups to be fired separately, with all devices in a group fired at the same time. Five nuclear devices were tested during the operation. Group-I: Shakti I: Two-stage thermonuclear device with a fusion-boosted primary, test design yield 45 kt, but designed for up to 200 kt deployed yield Shakti II: A light-weight plutonium implosion fission device yielding 12 kt and intended as a warhead that could be delivered by bomber or missile Shakti III: An experimental linear implosion fission device that used reactor-grade plutonium, yielding 0.3 kt Group-II: Shakti IV: A 0.5 kt experimental fission device Shakti V: A 0.2 kt thorium/U-233 experimental fission device An additional, sixth device (Shakti VI) was developed but not detonated. The first test was planned for 11 May. The thermonuclear device was placed in a shaft code-named White House, which was approximately 230 metres (750 ft) deep; the fission bomb was placed in a 150 metres (490 ft) deep shaft code-named Taj Mahal, and the first sub-kiloton device in the shaft Kumbhkaran. The first three devices were placed in their respective shafts on 10 May. The first device to be placed was the sub-kiloton device, which was sealed by the army engineers by 8:30 PM. The thermonuclear device was lowered and sealed by 4 AM the next day, with the fission device being placed by 7:30 AM. The shafts were L-shaped, with a horizontal chamber used for the test devices. The timing of the tests depended on the local weather conditions, and the test sequence was initiated in the afternoon. Santhanam, in charge of the test site, handed over the site to M. Vasudev, the range safety officer, who was responsible for verifying the test indicators.
After the safety clearance, the countdown system was activated, and at 3:45 PM IST the three devices were detonated simultaneously. On 13 May, at 12:21 PM IST, the two sub-kiloton devices (Shakti IV and V) were detonated. Due to their very low yield, these explosions were not detected by any seismic station. === Announcement === Having tested weaponized nuclear warheads, India became the sixth country to join the nuclear club. Shortly after the tests, Prime Minister Vajpayee appeared before the press corps and made the following short statement: "Today, at 15:45 hours, India conducted three underground nuclear tests in the Pokhran range. The tests conducted today were with a fission device, a low yield device and a thermonuclear device. The measured yields are in line with expected values. Measurements have also confirmed that there was no release of radioactivity into the atmosphere. These were contained explosions like the experiment conducted in May 1974. I warmly congratulate the scientists and engineers who have carried out these successful tests." On 13 May 1998, India declared the series of tests to be over. == Reactions to tests == === Domestic === News of the tests was greeted with jubilation and large-scale approval by the general public in India. The Bombay Stock Exchange registered significant gains. The media praised the government for its decision and advocated the development of an operational nuclear arsenal for the country's armed forces. The opposition, led by the Indian National Congress, criticized the Vajpayee administration for carrying out the series of nuclear tests, accusing the government of trying to use the tests for political ends rather than to enhance the country's national security. By the time India conducted the tests, the country had a total of $44 billion in loans from the IMF and the World Bank.
The industrial sectors of the Indian economy were likely to be hurt by sanctions, with foreign companies that had invested heavily in India facing the consequences of impending sanctions. The Indian government announced that it had factored in the economic response and was willing to accept the consequences. === International === The United States issued a statement condemning India and threatened economic sanctions. The intelligence community felt humiliated by its failure to detect the preparations for the test. In keeping with its preferred approach to foreign policy in recent decades, and in compliance with the 1994 anti-proliferation law, the United States imposed economic sanctions on India. The sanctions consisted of cutting off all assistance to India except humanitarian aid, banning the export of certain defense materials and technologies, ending American credit and credit guarantees to India, and requiring the US to oppose lending by international financial institutions to India. The United States held talks with India over the issue of India joining the CTBT and NPT and pressured it to roll back its nuclear program. India did not accede to the request, stating that doing so was not consistent with its national security interests. Canada criticized India's actions. Japan imposed economic sanctions, which included freezing all new loans and grants except for humanitarian aid. A few other nations also imposed sanctions on India, primarily in the form of suspended foreign aid and government-to-government credit lines. China stated that it was seriously concerned about the tests, which were not favorable to peace and stability in the region, and called on the international community to pressure India to cease the development of nuclear weapons. It further rejected India's stated rationale of needing nuclear capabilities to counter a Chinese threat as unfounded.
However, permanent members of the United Nations Security Council such as the United Kingdom, France, and Russia refrained from making any statements condemning the tests. Pakistan issued a statement blaming India for instigating a nuclear arms race in the region, with Prime Minister Nawaz Sharif stating that his country would take appropriate action. Pakistan carried out six nuclear tests under the codenames Chagai-I on 28 May 1998 and Chagai-II on 30 May 1998. Pakistan's leading nuclear physicist, Pervez Hoodbhoy, held India responsible for Pakistan's nuclear test experiments. Pakistan's subsequent tests invited similar condemnation and economic sanctions. On 6 June, the UN Security Council adopted Resolution 1172, condemning the Indian and Pakistani tests. == Legacy and popular culture == The Government of India declared 11 May as National Technology Day to commemorate the first of the five successful nuclear weapon tests carried out on 11 May 1998. The day is celebrated by giving awards to various individuals and industries in the field of science and technology. Parmanu: The Story of Pokhran is a 2018 Bollywood movie based on the nuclear tests. War and Peace is a documentary by Anand Patwardhan that details the events of the tests. == See also == India and weapons of mass destruction Pokhran-I == References == == External links ==
https://en.wikipedia.org/wiki/Pokhran-II
Hybridoma technology is a method for producing large quantities of monoclonal antibodies by fusing antibody-producing B cells with myeloma cells (cancerous B cells). This creates hybrid cells, called hybridomas, that produce the antibody of their parent B cell while retaining the properties of the parental myeloma cell line: immortality (the ability to reproduce indefinitely) and desirable properties for cell culture. The B cells used are generally gathered from animals that have been immunized with the antigen against which an antibody is desired. After the hybridomas are formed, any non-hybrid cells are killed before screening and monoclonalization, creating hybridoma lines that are each derived from a single parental cell and thus produce the same antibody against the desired target. The production of monoclonal antibodies was invented by César Milstein and Georges J. F. Köhler in 1975. They shared the 1984 Nobel Prize in Physiology or Medicine with Niels Kaj Jerne, who made other contributions to immunology. The term hybridoma was coined by Leonard Herzenberg during his sabbatical in Milstein's laboratory in 1976–1977. == Method == Laboratory animals (mammals, e.g. mice) are first exposed to the antigen against which an antibody is to be generated. Usually this is done by a series of injections of the antigen in question over the course of several weeks. These injections are typically followed by in vivo electroporation, which significantly enhances the immune response. Once splenocytes are isolated from the mammal's spleen, the B cells are fused with immortalised myeloma cells. The fusion can be done by electrofusion, in which an applied electric field causes the B cells and myeloma cells to align and fuse. Alternatively, the B cells and myelomas can be made to fuse by chemical protocols, most often using polyethylene glycol.
The myeloma cells are selected beforehand to ensure that they do not secrete antibody themselves and that they lack the hypoxanthine-guanine phosphoribosyltransferase (HGPRT) gene, making them sensitive to HAT medium (see below). Fused cells are incubated in HAT medium (hypoxanthine-aminopterin-thymidine medium) for roughly 10 to 14 days. Aminopterin blocks the de novo pathway of nucleotide synthesis. Hence, unfused myeloma cells die, as they can produce nucleotides neither by the de novo pathway (blocked by aminopterin) nor by the salvage pathway (they lack HGPRT). Removal of the unfused myeloma cells is necessary because they have the potential to outgrow other cells, especially weakly established hybridomas. Unfused B cells die because they have a short life span. In this way, only the B cell-myeloma hybrids survive, since the HGPRT gene coming from the B cells is functional. These cells produce antibodies (a property of B cells) and are immortal (a property of myeloma cells). The incubated medium is then diluted into multi-well plates to such an extent that each well contains only one cell. Since the antibodies in a well are produced by the same B cell, they are directed towards the same epitope and are thus monoclonal antibodies. The next stage is a rapid primary screening process, which identifies and selects only those hybridomas that produce antibodies of appropriate specificity. The first screening technique commonly used is ELISA: the hybridoma culture supernatant, an enzyme-labeled secondary conjugate, and a chromogenic substrate are incubated together, and the formation of a colored product indicates a positive hybridoma. Alternatively, immunocytochemical staining, western blotting, or immunoprecipitation-mass spectrometry can be used. Unlike western blot assays, immunoprecipitation-mass spectrometry facilitates screening and ranking of clones that bind the native (non-denatured) forms of antigen proteins.
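The HAT-selection logic described above reduces to a simple rule: a cell persists in culture only if it both carries functional HGPRT (so it can use the salvage pathway while aminopterin blocks de novo synthesis) and is immortal. The following is a purely illustrative sketch of that decision rule, not a biological model; the function name and cell list are invented for this example:

```python
# Illustrative sketch of HAT-medium selection logic (not a biological model).
# A cell persists only if it can use the salvage pathway (functional HGPRT,
# contributed by the B-cell parent) AND is immortal (contributed by the
# myeloma parent). All names here are hypothetical.

def survives_hat(has_hgprt: bool, is_immortal: bool) -> bool:
    """Return True if a cell persists in HAT medium.

    Aminopterin blocks de novo nucleotide synthesis, so HGPRT is needed
    to make nucleotides via the salvage pathway; mortal cells (unfused
    B cells) die out regardless, owing to their short life span.
    """
    return has_hgprt and is_immortal

cells = {
    "unfused B cell":        dict(has_hgprt=True,  is_immortal=False),
    "unfused myeloma":       dict(has_hgprt=False, is_immortal=True),
    "B cell-myeloma hybrid": dict(has_hgprt=True,  is_immortal=True),
}

for name, props in cells.items():
    print(f"{name}: {'survives' if survives_hat(**props) else 'dies'}")
```

Only the hybrid satisfies both conditions, which is exactly why HAT incubation leaves a culture of antibody-producing, immortal hybridomas.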
Flow cytometry screening has been used for primary screening of a large number (~1000) of hybridoma clones recognizing the native form of the antigen on the cell surface. In flow cytometry-based screening, a mixture of antigen-negative and antigen-positive cells is used as the antigen to be tested for each hybridoma supernatant sample. The B cell that produces the desired antibodies can be cloned to produce many identical daughter clones. Supplemental media containing interleukin-6 (such as briclone) are essential for this step. Once a hybridoma colony is established, it will grow continually in a culture medium such as RPMI-1640 (with antibiotics and fetal bovine serum) and produce antibodies. Multiwell plates are used initially to grow the hybridomas; after selection, the cultures are moved to larger tissue culture flasks. This maintains the well-being of the hybridomas and provides enough cells for cryopreservation and enough supernatant for subsequent investigations. The culture supernatant can yield 1 to 60 μg/ml of monoclonal antibody, which is maintained at -20 °C or lower until required. Using culture supernatant or a purified immunoglobulin preparation, a potential monoclonal-antibody-producing hybridoma can be further analysed in terms of reactivity, specificity, and cross-reactivity. == Applications == The uses of monoclonal antibodies are numerous and include the prevention, diagnosis, and treatment of disease. For example, monoclonal antibodies can distinguish subsets of B cells and T cells, which is helpful in identifying different types of leukaemia. In addition, specific monoclonal antibodies have been used to define cell surface markers on white blood cells and other cell types. This led to the cluster of differentiation series of markers, often referred to as CD markers, which define several hundred different cell surface components, each specified by the binding of a particular monoclonal antibody.
Such antibodies are extremely useful for fluorescence-activated cell sorting, the specific isolation of particular types of cells. === In diagnostic histopathology === With the help of monoclonal antibodies, tissues and organs can be classified based on their expression of certain defined markers, which reflect tissue or cellular genesis. Prostate-specific antigen, placental alkaline phosphatase, human chorionic gonadotrophin, α-fetoprotein and others are organ-associated antigens, and the production of monoclonal antibodies against these antigens helps in determining the nature of a primary tumor. Monoclonal antibodies are especially useful in distinguishing morphologically similar lesions, such as pleural and peritoneal mesothelioma and adenocarcinoma, and in determining the organ or tissue origin of undifferentiated metastases. Selected monoclonal antibodies help in the detection of occult metastases (cancer of unknown primary origin) by immuno-cytological analysis of bone marrow, other tissue aspirates, and lymph nodes and other tissues, and can have increased sensitivity over normal histopathological staining. One study performed a sensitive immuno-histochemical assay on bone marrow aspirates of 20 patients with prostate cancer. Three monoclonal antibodies (T16, C26, and AE-1), capable of recognizing membrane and cytoskeletal antigens expressed by epithelial cells, were used in the assay to detect tumour cells. Bone marrow aspirates of 22% of patients with localized prostate cancer (Stage B, 0/5; Stage C, 2/4) and 36% of patients with metastatic prostate cancer (Stage D1, 0/7 patients; Stage D2, 4/4 patients) had antigen-positive cells in their bone marrow. It was concluded that immuno-histochemical staining of bone marrow aspirates is very useful for detecting occult bone marrow metastases in patients with apparently localized prostate cancer.
Although immuno-cytochemistry using tumor-associated monoclonal antibodies has led to an improved ability to detect occult breast cancer cells in bone marrow aspirates and peripheral blood, further development of this method is necessary before it can be used routinely. One major drawback of immuno-cytochemistry is that only tumor-associated and not tumor-specific monoclonal antibodies are used, and as a result, some cross-reaction with normal cells can occur. In order to effectively stage breast cancer and assess the efficacy of purging regimens prior to autologous stem cell infusion, it is important to detect even small quantities of breast cancer cells. Immuno-histochemical methods are ideal for this purpose because they are simple, sensitive, and quite specific. Franklin et al. performed a sensitive immuno-cytochemical assay by using a combination of four monoclonal antibodies (260F9, 520C9, 317G5 and BrE-3) against tumor cell surface glycoproteins to identify breast tumour cells in bone marrow and peripheral blood. They concluded from the results that immuno-cytochemical staining of bone marrow and peripheral blood is a sensitive and simple way to detect and quantify breast cancer cells. One of the main reasons for metastatic relapse in patients with solid tumours is the early dissemination of malignant cells. The use of monoclonal antibodies (mAbs) specific for cytokeratins can identify disseminated individual epithelial tumor cells in the bone marrow. One study reports on having developed an immuno-cytochemical procedure for simultaneous labeling of cytokeratin component no. 18 (CK18) and prostate specific antigen (PSA). This would help in the further characterization of disseminated individual epithelial tumor cells in patients with prostate cancer. The twelve control aspirates from patients with benign prostatic hyperplasia showed negative staining, which further supports the specificity of CK18 in detecting epithelial tumour cells in bone marrow. 
In most cases of malignant disease complicated by effusion, neoplastic cells can be easily recognized. In some cases, however, malignant cells are not easily seen, or their presence is too doubtful to call the sample positive. The use of immuno-cytochemical techniques increases diagnostic accuracy in these cases. Ghosh, Mason and Spriggs analysed 53 samples of pleural or peritoneal fluid from 41 patients with malignant disease. Conventional cytological examination had not revealed any neoplastic cells. Three monoclonal antibodies (anti-CEA, Ca 1 and HMFG-2) were used to search for malignant cells. Immunocytochemical labelling was performed on unstained smears, which had been stored at -20 °C for up to 18 months. Twelve of the forty-one cases in which immuno-cytochemical staining was performed revealed malignant cells. The result represented an increase in diagnostic accuracy of approximately 20%. The study concluded that in patients with suspected malignant disease, immuno-cytochemical labeling should be used routinely in the examination of cytologically negative samples, with important implications for patient management. Another application of immuno-cytochemical staining is the detection of two antigens in the same smear. Double staining with light chain antibodies and with T and B cell markers can indicate the neoplastic origin of a lymphoma. One study reported the isolation of a hybridoma cell line (clone 1E10), which produces a monoclonal antibody (IgM, k isotype). This monoclonal antibody shows specific immuno-cytochemical staining of nucleoli. Tissues and tumours can be classified based on their expression of certain markers, with the help of monoclonal antibodies. They help in distinguishing morphologically similar lesions and in determining the organ or tissue origin of undifferentiated metastases. Immuno-cytological analysis of bone marrow, tissue aspirates, lymph nodes etc.
with selected monoclonal antibodies helps in the detection of occult metastases. Monoclonal antibodies increase the sensitivity of detecting even small quantities of invasive or metastatic cells. Monoclonal antibodies (mAbs) specific for cytokeratins can detect disseminated individual epithelial tumour cells in the bone marrow. == References == == External links == Hybridomas at the U.S. National Library of Medicine Medical Subject Headings (MeSH) "Hybridoma Technology". Understanding Cancer Series: The Immune System. National Cancer Institute. Archived from the original on 5 October 2014. "Hybridoma Cell Culture". Archived from the original on 2018-02-20. Retrieved 2017-09-28.
https://en.wikipedia.org/wiki/Hybridoma_technology
Contemporary Amperex Technology Co., Limited (CATL) is a Chinese battery manufacturer and technology company, founded in 2011, that specializes in the manufacturing of lithium-ion batteries for electric vehicles and energy storage systems, as well as battery management systems (BMS). CATL is the biggest EV and energy storage battery manufacturer in the world, with global market shares of around 37% and 40% respectively in 2023. It is headquartered in Ningde, Fujian province. == History == CATL was founded in Ningde, which is reflected in its Chinese name (宁德时代 'Ningde era'). The company started as a spin-off of Amperex Technology Limited (ATL), a previous business founded by Robin Zeng in 1999. ATL initially manufactured lithium-polymer batteries based on licensed technology but later developed more reliable battery designs itself. In 2005, ATL was acquired by Japan's TDK, but Zeng continued as a manager for ATL. In 2011, a group of Chinese investors led by Zeng and vice-chairman Huang Shilin spun off the EV battery operations of ATL into the new company CATL after acquiring an 85% stake. Former parent TDK retained its 15% stake in CATL until 2015. Zeng has applied the management styles of TDK and Huawei to his company. === 2011–2021 === Amid the rise of electric vehicles, CATL gradually became one of the leading battery providers in the world, owing to its early investments in EV battery technologies and government subsidization of the battery industry. In 2011, China required foreign automakers to transfer crucial technology to domestic companies in order to receive subsidies for electric vehicles. In 2012, CATL established cooperation with BMW Brilliance, its first main customer. China's dominant position in the battery manufacturing supply chain, including its control over rare-earth materials, provided an ideal foundation for Chinese companies like CATL to decouple from the monopoly of Western technology.
It started to provide components to the supply chains of European and American vehicle manufacturers amid competition from Panasonic and LG Chem. In 2016, CATL was the world's third-largest provider of EV, HEV and PHEV batteries, behind Panasonic (Sanyo) and BYD. In 2017, CATL's sales of power battery systems reached 11.84 GWh, taking the worldwide lead for the first time. In January 2017, CATL announced a strategic partnership with Valmet Automotive, focusing on project management, engineering and battery pack supply, and acquired a 22% stake in Valmet Automotive. In June 2018, CATL went public on the Shenzhen Stock Exchange. BMW announced in 2018 that it would buy €4 billion worth of batteries from CATL for use in the electric Mini and iNext vehicles. In the same year, CATL announced that it would establish a new battery factory in Arnstadt, Thuringia, Germany. In June 2020, Zeng Yuqun announced that the company had achieved a battery for electric vehicles rated as good for 1 million miles (1.6 million kilometers). In 2021, the company unveiled a sodium-ion battery for the automotive market, with a battery recycling facility planned to recover some of the materials. CATL continued to invest in cobalt batteries as well, and acquired a nearly 25% stake in the Democratic Republic of the Congo's Kisanfu cobalt mine, one of the world's largest sources of cobalt. === 2022–present === In the first half of 2022, CATL ranked first in the world with a market share of 34 percent, according to SNE Research. CATL announced plans to establish a battery factory in Debrecen, Hungary. Its Yibin manufacturing plant was certified as the world's first zero-carbon battery factory. In July 2022, Ford announced that it would buy batteries from CATL for use in the Ford Mustang Mach-E and Ford F-150 Lightning models, which subsequently raised concerns with the United States House Select Committee on Strategic Competition between the United States and the Chinese Communist Party.
In October 2022, CATL expanded its deal with VinFast to provide a skateboard chassis and "enhance global footprint". On 12 August 2022, CATL announced its second European battery plant, in Hungary. In 2023, CATL received the equivalent of US$790 million in state subsidies. The same year, CATL introduced its M3P battery, offering a 15% increase in energy density, reaching 210 Wh/kg. The battery replaces the iron in the lithium iron phosphate battery with a combination of magnesium, zinc, and aluminum. Later that year, the company announced its Shenxing LFP battery. The cathode of Shenxing is fully nano-crystallized, which accelerates ion movement and the response to charging signals. The anode's second-generation fast ion ring technology increases intercalation channels and shortens the intercalation distance. Its superconducting electrolyte formula reduces viscosity and improves conductivity, and a new separator film reduces resistance. At room temperature, Shenxing can charge from 0 to 80% in 10 minutes, and in 30 minutes at -10 °C, while maintaining 0-100 km/h acceleration performance at low temperatures. Safety is enhanced by a protective coating on the electrolyte and the separator, and a real-time fault testing system allows safe, fast recharging. Ford announced a 2,500-worker battery plant in Marshall, Michigan using CATL technology. The facility would be a Ford subsidiary, and making the batteries domestically would enable Ford customers to access federal subsidies. The project was paused after lawmakers questioned the tax subsidies. In November 2023, CATL and Stellantis announced that they were considering a joint investment in the form of a joint venture with equal contributions. On 7 December 2023, CATL and the Hong Kong Science and Technology Parks Corporation (HKSTP) signed a memorandum of understanding to establish a CATL research center at the HKSTP, with an investment of over HKD 1.2 billion.
In 2023, the World Intellectual Property Organization (WIPO)'s Annual PCT Review ranked CATL 8th in the world by number of patent applications published under the PCT System, with 1,799 patent applications published during 2023. In April 2024, CATL announced Tener, a large-scale stationary energy storage system. It is claimed to feature all-round safety, zero degradation over five years, and 6.25 MWh of capacity per unit, and it incorporates biomimetic SEI (solid electrolyte interphase) and self-assembled electrolyte technologies. In August 2024, American legislators Marco Rubio and John Moolenaar asked Defense Secretary Lloyd Austin to add CATL to a list of companies prohibited from receiving U.S. military contracts. As of September 2024, CATL is the top recipient of Chinese corporate subsidies, a position it has held since 2023. In December 2024, CATL announced to its suppliers that it was willing to provide them with financial support to speed up technology innovation in battery materials and equipment. On 12 December 2024, it was reported that CATL would collaborate with Stellantis in a joint venture to build a large-scale lithium iron phosphate battery plant in Zaragoza, an investment worth €4.1 billion. This 50-50 partnership is anticipated to commence battery production in 2026, with a capacity reaching 50 GWh. On 7 January 2025, the US Department of Defense added CATL and Tencent to its list of "Chinese military companies". On 11 February 2025, CATL filed for a secondary listing on the Hong Kong Stock Exchange, aiming to raise over $5 billion to fund international expansion plans, including projects in Hungary, Spain, and Indonesia. Cornerstone investors included Sinopec, the Kuwait Investment Authority, Hillhouse Investment, Oaktree Capital Management and an Agnelli family investment fund.
== Corporate affairs == The key trends for CATL are (as of the financial year ending 31 December): == Facilities == CATL operates thirteen battery manufacturing plants worldwide. == Partnerships == Because its main competitor BYD Company prioritized battery supply to its own vehicles, CATL was able to capture partnerships with foreign automakers. CATL's battery technology is currently used by electric vehicle manufacturers in overseas markets, and CATL collaborates with companies including BMW, Daimler AG, Hyundai, Honda, Li Auto, NIO, PSA, Tesla, Toyota, Volkswagen, Volvo and XPeng. In China, its clients include BAIC Motor, Geely, GAC Group, Yutong Bus, Zhongtong Bus, Xiamen King Long, SAIC Motor and Foton Motor. CATL also partners with Valmet Automotive, BMW, Ford, VinFast, and the Hong Kong Science and Technology Parks Corporation. In August 2022, CATL and truck maker FAW Jiefang established a joint venture to develop battery technology (urban battery replacement station networks). == Assessment == According to former Tesla battery supply chain manager Vivas Kumar in 2019, CATL "are seen as the leaders of lithium iron phosphate battery (LFP battery) technology". The company employs the cell-to-pack method to reduce the inactive weight of its batteries. This increases the volume utilization rate by 15% to 20%, doubles production efficiency and reduces the number of parts in a battery pack by 40%, while the energy density of a battery pack jumps from 140–150 Wh/kg to 200 Wh/kg. According to Kumar, unlike competitors such as LG Energy Solution or SK Innovation, CATL is more willing to adopt outside technology, as opposed to applying a full in-house design. In 2024, Tu Le of the consultancy Sino Auto Insights claimed that the US was "years behind" China in batteries, and that "if the US is going to be competitive on the global stage with EVs, through 2030 they're going to have to use Chinese batteries".
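The cell-to-pack figures quoted above (pack-level energy density rising from roughly 140–150 Wh/kg to 200 Wh/kg by stripping out inactive module mass) can be illustrated with back-of-the-envelope arithmetic. In the sketch below, the total cell energy and the cell and overhead masses are hypothetical numbers chosen only so the results land in the quoted range; they are not CATL's actual figures:

```python
# Illustrative arithmetic for pack-level gravimetric energy density (Wh/kg).
# All inputs are hypothetical; only the resulting densities echo the
# 140-150 -> 200 Wh/kg range quoted in the text.

def pack_energy_density(cell_energy_wh, cell_mass_kg, overhead_mass_kg):
    """Pack Wh/kg = total cell energy / (cell mass + inactive structural mass)."""
    return cell_energy_wh / (cell_mass_kg + overhead_mass_kg)

cell_energy = 100_000.0   # 100 kWh of cells (hypothetical pack)
cell_mass = 400.0         # kg of cells (hypothetical)

# Conventional module-based pack: heavy inactive structure (module housings,
# fasteners, extra busbars), here assumed at 290 kg.
conventional = pack_energy_density(cell_energy, cell_mass, overhead_mass_kg=290.0)

# Cell-to-pack: modules eliminated, inactive mass assumed cut to 100 kg.
cell_to_pack = pack_energy_density(cell_energy, cell_mass, overhead_mass_kg=100.0)

print(f"conventional: {conventional:.0f} Wh/kg")   # prints "conventional: 145 Wh/kg"
print(f"cell-to-pack: {cell_to_pack:.0f} Wh/kg")   # prints "cell-to-pack: 200 Wh/kg"
```

The point of the sketch is that pack density improves purely by shrinking the denominator (inactive mass), even with the cells themselves unchanged, which is exactly what the cell-to-pack approach does.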
== Security concerns == In December 2023, Duke Energy disconnected CATL batteries from Marine Corps Base Camp Lejeune due to security concerns. CATL called accusations about its batteries posing espionage threats "false and misleading." The National Defense Authorization Act for Fiscal Year 2024 prohibited US defense funding for CATL products. In June 2024, a group of U.S. lawmakers asked the United States Department of Homeland Security to add CATL to an import ban list under the Uyghur Forced Labor Prevention Act. CATL said in a statement that the allegations against it were "groundless and completely false" and that it was in compliance with applicable laws and regulations. In April 2025, the United States House Select Committee on Strategic Competition between the United States and the Chinese Communist Party asked JPMorgan Chase and Bank of America to withdraw from working on CATL's Hong Kong IPO. == See also == List of electric-vehicle-battery manufacturers == References == == External links == Official website
https://en.wikipedia.org/wiki/CATL
Environmental technology (or envirotech) is the use of engineering and technological approaches to understand and address issues that affect the environment, with the aim of fostering environmental improvement. It involves the application of science and technology to environmental challenges through environmental conservation and the mitigation of human impact on the environment. The term is sometimes also used to describe sustainable energy generation technologies such as photovoltaics and wind turbines. == Purification and waste management == === Water purification === === Air purification === Air purification describes the processes used to remove contaminants and pollutants from the air to reduce their potential adverse effects on humans and the environment. Air purification may be performed using methods such as mechanical filtration, ionization, activated carbon adsorption, photocatalytic oxidation, and ultraviolet germicidal irradiation. === Sewage treatment === === Environmental remediation === Environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. The main focus is the reduction of hazardous substances within the environment. Areas involved in environmental remediation include soil contamination, hazardous waste, groundwater contamination, and oil, gas and chemical spills. The three most common types of environmental remediation are soil, water, and sediment remediation. Soil remediation consists of removing contaminants from soil, as these pose great risks to humans and the ecosystem; examples include heavy metals, pesticides, and radioactive materials. Depending on the contaminant, the remedial processes can be physical, chemical, thermal, or biological. Water remediation is one of the most important types, as water is an essential natural resource.
Depending on the source of the water, there will be different contaminants. Surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. The need for water remediation has risen with the increased discharge of industrial waste, leading to a demand for sustainable water solutions; the market for water remediation is expected to grow steadily to $19.6 billion by 2030. Sediment remediation consists of removing contaminated sediments. It is similar to soil remediation, except that it is often more sophisticated because it involves additional contaminants. Physical, chemical, and biological processes are used to reduce the contaminants and help with source control, but if these processes are not executed correctly, there is a risk of the contamination resurfacing. === Solid waste management === Solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste, undertaken by the government or the ruling bodies of a city or town. It refers to the collection, treatment, and disposal of non-soluble, solid waste material. Solid waste is associated with industrial, institutional, commercial and residential activities. Hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. Some of the most common approaches to solid waste management include landfills, vermicomposting, composting, recycling, and incineration. However, a major barrier to solid waste management practices is the high cost associated with recycling and the risk of creating more pollution. === E-Waste Recycling === The recycling of electronic waste (e-waste) has seen significant technological advancements due to increasing environmental concerns and the growing volume of electronic product disposals.
Traditional e-waste recycling methods, which often involve manual disassembly, expose workers to hazardous materials and are labor-intensive. Recent innovations have introduced automated processes that improve safety and efficiency, allowing for more precise separation and recovery of valuable materials. Modern e-waste recycling techniques now leverage automated shredding and advanced sorting technologies, which help in effectively segregating different types of materials for recycling. This not only enhances the recovery rate of precious metals but also minimizes the environmental impact by reducing the amount of waste destined for landfills. Furthermore, research into biodegradable electronics aims to reduce future e-waste through the development of electronics that can decompose more naturally in the environment. These advancements support a shift towards a circular economy, where the lifecycle of materials is extended, and environmental impacts are significantly minimized. === Bioremediation === Bioremediation is a process that uses microorganisms such as bacteria, fungi, plant enzymes, and yeast to neutralize hazardous contaminants in the environment. It can help mitigate a variety of environmental hazards, including oil spills, pesticides, heavy metals, and other pollutants. Bioremediation can be conducted either on-site ('in situ') or off-site ('ex situ'), the latter often being necessary if the climate is too cold. Factors influencing the duration of bioremediation include the extent of the contamination and the environmental conditions; timelines can range from months to years. === Examples === Biofiltration Bioreactor Bioremediation Composting toilet Desalination Thermal depolymerization Pyrolysis == Sustainable energy == Concerns over pollution and greenhouse gases have spurred the search for sustainable alternatives to fossil fuel use.
The global reduction of greenhouse gases requires the adoption of energy conservation as well as sustainable generation. Such environmental harm reduction involves global changes such as:
substantially reducing methane emissions from melting permafrost, animal husbandry, and pipeline and wellhead leakage;
virtually eliminating fossil fuels for vehicles, heat, and electricity;
carbon dioxide capture and sequestration at the point of combustion;
widespread use of public transport, battery, and fuel cell vehicles;
extensive implementation of wind-, solar-, and water-generated electricity;
reducing peak demands with carbon taxes and time-of-use pricing.
Since fuel used by industry and transportation accounts for the majority of world demand, investing in conservation and efficiency (using less fuel) can reduce pollution and greenhouse gases from these two sectors around the globe. Advanced energy-efficient electric motor (and electric generator) technologies that are cost-effective enough to encourage their application, such as variable speed generators and efficient energy use, can reduce the amount of carbon dioxide (CO2) and sulfur dioxide (SO2) that would otherwise be introduced to the atmosphere if electricity were generated using fossil fuels. Some scholars have expressed concern that the implementation of new environmental technologies in highly developed national economies may cause economic and social disruption in less-developed economies. === Renewable energy === Renewable energy is energy that can be replenished easily. For years, sources such as wood, sun, and water have been used to produce energy. Energy produced by natural phenomena such as the sun and wind is considered renewable. Technologies in use include wind power, hydropower, solar energy, geothermal energy, and biomass/bioenergy. The term refers to any form of energy that naturally regenerates over time and does not run out.
This form of energy naturally replenishes and is characterized by a low carbon footprint. Some of the most common types of renewable energy sources include solar power, wind power, hydroelectric power, and bioenergy, which is generated by burning organic matter. === Examples === Energy saving modules Heat pump Hydrogen fuel cell Hydroelectricity Ocean thermal energy conversion Photovoltaic Solar power Wave energy Wind power Wind turbine ==== Renewable Energy Innovations ==== The intersection of technology and sustainability has led to innovative solutions aimed at enhancing the efficiency of renewable energy systems. One such innovation is the integration of wind and solar power to maximize energy production. Companies like Unéole are pioneering technologies that combine solar panels with wind turbines on the same platform, which is particularly advantageous for urban environments with limited space. This hybrid system not only conserves space but also increases the energy yield by leveraging the complementary nature of solar and wind energy availability. Furthermore, advancements in offshore wind technology have significantly increased the viability and efficiency of wind energy. Modern offshore wind turbines feature improvements in structural design and aerodynamics, which enhance their energy capture and reduce costs. These turbines are now more adaptable to various marine environments, allowing for greater flexibility in location and potentially reducing visual pollution. Floating wind turbines, for example, use tension leg platforms and spar buoys that can be deployed in deeper waters, significantly expanding the potential areas for wind energy generation. Such innovations not only advance the capabilities of individual renewable technologies but also contribute to a more resilient and sustainable energy grid.
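The complementarity between solar and wind availability that such hybrid systems exploit can be sketched with synthetic numbers. The hourly profiles below are invented for illustration (they are not measured data for any real site or product); the point is only that combining a midday-peaking source with a night-leaning one narrows the swing between the best and worst hour of the day.

```python
# Illustrative sketch (synthetic numbers): why co-locating solar and wind
# can smooth output. Solar peaks at midday; wind here is assumed stronger
# at night -- both profiles are invented for the example.

solar = [0, 0, 0, 0, 0, 1, 3, 5, 7, 8, 9, 10,
         10, 9, 8, 7, 5, 3, 1, 0, 0, 0, 0, 0]   # kW, per hour of day
wind  = [6, 6, 7, 7, 6, 5, 4, 3, 2, 2, 2, 2,
         2, 2, 2, 3, 4, 5, 6, 6, 7, 7, 6, 6]    # kW, per hour of day

combined = [s + w for s, w in zip(solar, wind)]

def variability(profile):
    """Spread between the best and worst hour, relative to the mean."""
    mean = sum(profile) / len(profile)
    return (max(profile) - min(profile)) / mean

print(f"solar alone: {variability(solar):.2f}")
print(f"wind alone:  {variability(wind):.2f}")
print(f"combined:    {variability(combined):.2f}")
```

With these toy profiles, the combined output varies far less around its mean than either source alone, which is the complementarity argument in miniature.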
By optimizing the integration and efficiency of renewable resources, these technologies play a crucial role in the transition towards a sustainable energy future. === Energy conservation === Energy conservation is the utilization of devices that require smaller amounts of energy in order to reduce the consumption of electricity. Reducing the use of electricity means fewer fossil fuels are burned to provide that electricity. It also refers to the practice of using less energy through changes in individual behaviors and habits. The main emphasis of energy conservation is the prevention of wasteful use of energy, to enhance its availability. The main approaches to energy conservation involve refraining, where possible, from using devices that consume more energy. === eGain forecasting === eGain forecasting is a method that uses forecasting technology to predict the weather's future impact on a building. By adjusting the heating based on the weather forecast, the system eliminates redundant use of heat, thus reducing energy consumption and the emission of greenhouse gases. The technology was introduced by eGain International, a Swedish company that intelligently balances building power consumption. It involves forecasting the amount of heating energy a building will require within a specific period, which results in energy efficiency and sustainability. eGain lowers building energy consumption and emissions while identifying times for maintenance where inefficiencies are observed. === Solar power === == Computational sustainability == === Sustainable Agriculture === Sustainable agriculture is an approach to farming that utilizes technology in a way that ensures food production while maintaining the long-term health and productivity of agricultural systems, ecosystems, and communities. Historically, technological advancements have significantly contributed to increasing agricultural productivity and reducing physical labor.
The National Institute of Food and Agriculture (NIFA) advances sustainable agriculture through funded programs aimed at fulfilling human food and fiber needs; improving environmental quality and preserving natural resources vital to the agricultural economy; optimizing the use of both nonrenewable and on-farm resources while integrating natural biological cycles and controls as appropriate; maintaining the economic viability of farm operations; and fostering an improved quality of life for farmers and society at large. Among its initiatives, NIFA seeks to improve farm and ranch practices, integrated pest management, rotational grazing, soil conservation, water quality/wetlands, cover crops, crop/landscape diversity, nutrient management, agroforestry, and alternative marketing. == Education == Courses aimed at developing graduates with specific skills in environmental systems or environmental technology are becoming more common and fall into three broad classes: Environmental Engineering or Environmental Systems courses oriented towards a civil engineering approach in which structures and the landscape are constructed to blend with or protect the environment; Environmental chemistry, sustainable chemistry, or environmental chemical engineering courses oriented towards understanding the effects (good and bad) of chemicals in the environment. Such awards can focus on mining process pollutants and commonly also cover biochemical processes; Environmental technology courses oriented towards producing electronic, electrical, or electrotechnology graduates capable of developing devices and artifacts that can monitor, measure, model, and control environmental impact, including monitoring and managing energy generation from renewable sources and developing novel energy generation technologies. == See also == == References == == Further reading == OECD Studies on Environmental Innovation Invention and Transfer of Environmental Technologies. OECD.
September 2011. ISBN 978-92-64-11561-3. == External links ==
https://en.wikipedia.org/wiki/Environmental_technology
A hacker is a person skilled in information technology who achieves goals by non-standard means. The term has become associated in popular culture with a security hacker – someone with knowledge of bugs or exploits to break into computer systems and access data which would otherwise be inaccessible to them. In a positive sense, though, hacking can also be employed by legitimate actors in lawful contexts. For example, law enforcement agencies sometimes use hacking techniques to collect evidence on criminals and other malicious actors. This could include using anonymity tools (such as a VPN or the dark web) to mask their identities online and pose as criminals. Hacking can also have a broader sense of any roundabout solution to a problem, or programming and hardware development in general, and hacker culture has spread the term's broader usage to the general public even outside the profession or hobby of electronics (see life hack). == Definitions == Reflecting the two types of hackers, there are two definitions of the word "hacker": Originally, hacker simply meant an advanced computer technology enthusiast (both hardware and software) and an adherent of the programming subculture; see hacker culture. Someone who is able to subvert computer security. If doing so for malicious purposes, the person can also be called a cracker. Mainstream usage of "hacker" mostly refers to computer criminals, due to the mass media usage of the word since the 1990s. This includes what hacker jargon calls script kiddies, less skilled criminals who rely on tools written by others with very little knowledge about the way they work. This usage has become so predominant that the general public is largely unaware that different meanings exist.
Though the self-designation of hobbyists as hackers is generally acknowledged and accepted by computer security hackers, people from the programming subculture consider the computer intrusion-related usage incorrect, and emphasize the difference between the two by calling security breakers "crackers" (analogous to a safecracker). The controversy is usually based on the assertion that the term originally meant someone messing about with something in a positive sense, that is, using playful cleverness to achieve a goal. But then, it is supposed, the meaning of the term shifted over the decades and came to refer to computer criminals. As the security-related usage has spread more widely, the original meaning has become less known. In popular usage and in the media, "computer intruders" or "computer criminals" is the exclusive meaning of the word. In computer enthusiast and hacker culture, the primary meaning is a complimentary description for a particularly brilliant programmer or technical expert. A large segment of the technical community insist the latter is the correct usage, as in the Jargon File definition. Sometimes, "hacker" is simply used synonymously with "geek": "A true hacker is not a group person. He's a person who loves to stay up all night, he and the machine in a love-hate relationship... They're kids who tended to be brilliant but not very interested in conventional goals. It's a term of derision and also the ultimate compliment." Fred Shapiro thinks that "the common theory that 'hacker' originally was a benign term and the malicious connotations of the word were a later perversion is untrue." He found that the malicious connotations were already present at MIT in 1963 (quoting The Tech, an MIT student newspaper), and at that time referred to unauthorized users of the telephone network, that is, the phreaker movement that developed into the computer security hacker subculture of today.
=== Civic hacker === Civic hackers use their security and programming acumen to create solutions, often public and open-sourced, addressing challenges relevant to neighborhoods, cities, states or countries and the infrastructure within them. Municipalities and major government agencies such as NASA have been known to host hackathons or promote a specific date as a "National Day of Civic Hacking" to encourage participation from civic hackers. Civic hackers, though often operating autonomously and independently, may work alongside or in coordination with certain aspects of government or local infrastructure such as trains and buses. For example, in 2008, Philadelphia-based civic hacker William Entriken developed a web application that displayed a comparison of the actual arrival times of local SEPTA trains to their scheduled times after being reportedly frustrated by the discrepancy. === Security related hacking === Security hackers are people involved with circumvention of computer security. There are several types, including: White hat Hackers who work to keep data safe from other hackers by finding system vulnerabilities that can be mitigated. White hats are usually employed by the target system's owner and are typically paid (sometimes quite well) for their work. Their work is not illegal because it is done with the system owner's consent. Black hat or Cracker Hackers with malicious intentions. They often steal, exploit, and sell data, and are usually motivated by personal gain. Their work is usually illegal. A cracker is like a black hat hacker, but is specifically someone who is very skilled and tries via hacking to make profits or to benefit, not just to vandalize. Crackers find exploits for system vulnerabilities and often use them to their advantage by either selling the fix to the system owner or selling the exploit to other black hat hackers, who in turn use it to steal information or gain royalties.
Grey hat Computer security experts who may sometimes violate laws or typical ethical standards, but do not have the malicious intent typical of a black hat hacker. === Hacker culture === Hacker culture is an idea derived from a community of enthusiast computer programmers and systems designers in the 1960s around the Massachusetts Institute of Technology's (MIT's) Tech Model Railroad Club (TMRC) and the MIT Artificial Intelligence Laboratory. The concept expanded to the hobbyist home computing community, focusing on hardware in the late 1970s (e.g. the Homebrew Computer Club) and on software (video games, software cracking, the demoscene) in the 1980s/1990s. Later, this would go on to encompass many new definitions such as art, and life hacking. == Motives == Four primary motives have been proposed for why hackers attempt to break into computers and networks. First, there is a criminal financial gain to be had when hacking systems with the specific purpose of stealing credit card numbers or manipulating banking systems. Second, many hackers thrive on increasing their reputation within the hacker subculture and will leave their handles on websites they defaced or leave some other evidence as proof that they were involved in a specific hack. Third, corporate espionage allows companies to acquire information on products or services that can be stolen or used as leverage within the marketplace. Lastly, state-sponsored attacks provide nation states with both wartime and intelligence collection options conducted on, in, or through cyberspace. == Overlaps and differences == The basic difference between the programmer subculture and computer security hackers is their mostly separate historical origin and development. However, the Jargon File reports that considerable overlap existed for the early phreaking at the beginning of the 1970s.
An article from MIT's student paper The Tech used the term hacker in this pejorative sense as early as 1963, referring to someone messing with the phone system. The overlap quickly started to break down when people joined in the activity who did it in a less responsible way. This was the case after the publication of an article exposing the activities of Draper and Engressia. According to Raymond, hackers from the programmer subculture usually work openly and use their real name, while computer security hackers prefer secretive groups and identity-concealing aliases. Also, their activities in practice are largely distinct. The former focus on creating new and improving existing infrastructure (especially the software environment they work with), while the latter primarily and strongly emphasize the general act of circumventing security measures, with the effective use of the knowledge (which can be to report and help fix security bugs, or for exploitation) being only secondary. The most visible difference in these views was in the design of the MIT hackers' Incompatible Timesharing System, which deliberately did not have any security measures. There are some subtle overlaps, however, since basic knowledge about computer security is also common within the programmer subculture of hackers. For example, Ken Thompson noted during his 1983 Turing Award lecture that it is possible to add code to the UNIX "login" command that would accept either the intended encrypted password or a particular known password, allowing a backdoor into the system with the latter password. He named his invention the "Trojan horse". Furthermore, Thompson argued, the C compiler itself could be modified to automatically generate the rogue code, to make detecting the modification even harder.
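The login backdoor Thompson described is easy to sketch. The snippet below is a toy illustration in Python rather than his original C, and the master password here is invented; the essential point of his lecture was that in the real attack this extra comparison appeared nowhere in the source of login, because the compromised compiler inserted it at compile time.

```python
# Toy illustration of the backdoor Thompson described: a login check that
# accepts either the user's real password or a hidden master password.
# In his attack, the extra comparison was injected by the compiler and
# never appeared in the login source code.

import hashlib

MASTER = "correct-horse"  # the attacker's known password (hypothetical)

def check_password(supplied: str, stored_hash: str) -> bool:
    supplied_hash = hashlib.sha256(supplied.encode()).hexdigest()
    # Legitimate check against the stored password hash ...
    if supplied_hash == stored_hash:
        return True
    # ... plus the planted backdoor branch an audit of the login source
    # would never reveal, since the compiler inserts it.
    if supplied == MASTER:
        return True
    return False

alice_hash = hashlib.sha256(b"alices-real-password").hexdigest()
print(check_password("alices-real-password", alice_hash))  # legitimate login
print(check_password("correct-horse", alice_hash))         # backdoor login
print(check_password("wrong-password", alice_hash))        # rejected
```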
Because the compiler is itself a program generated from a compiler, the Trojan horse could also be automatically installed in a new compiler program, without any detectable modification to the source of the new compiler. However, Thompson disassociated himself strictly from the computer security hackers: "I would like to criticize the press in its handling of the 'hackers,' the 414 gang, the Dalton gang, etc. The acts performed by these kids are vandalism at best and probably trespass and theft at worst. ... I have watched kids testifying before Congress. It is clear that they are completely unaware of the seriousness of their acts." The programmer subculture of hackers sees secondary circumvention of security mechanisms as legitimate if it is done to get practical barriers out of the way for doing actual work. In special forms, that can even be an expression of playful cleverness. However, the systematic and primary engagement in such activities is not one of the actual interests of the programmer subculture of hackers and it does not have significance in its actual activities, either. A further difference is that, historically, members of the programmer subculture of hackers were working at academic institutions and used the computing environment there. In contrast, the prototypical computer security hacker had access exclusively to a home computer and a modem. However, since the mid-1990s, with home computers that could run Unix-like operating systems and with inexpensive internet home access being available for the first time, many people from outside of the academic world started to take part in the programmer subculture of hacking. Since the mid-1980s, there are some overlaps in ideas and members with the computer security hacking community. The most prominent case is Robert T. Morris, who was a user of MIT-AI, yet wrote the Morris worm. The Jargon File hence calls him "a true hacker who blundered". 
Nevertheless, members of the programmer subculture have a tendency to look down on and disassociate from these overlaps. They commonly refer disparagingly to people in the computer security subculture as crackers and refuse to accept any definition of hacker that encompasses such activities. The computer security hacking subculture, on the other hand, tends not to distinguish between the two subcultures as harshly, acknowledging that they have much in common including many members, political and social goals, and a love of learning about technology. They restrict the use of the term cracker to their categories of script kiddies and black hat hackers instead. All three subcultures have relations to hardware modifications. In the early days of network hacking, phreaks were building blue boxes and various variants. The programmer subculture of hackers has stories about several hardware hacks in its folklore, such as a mysterious "magic" switch attached to a PDP-10 computer in MIT's AI lab that, when switched off, crashed the computer. The early hobbyist hackers built their home computers themselves from construction kits. However, all these activities have died out during the 1980s when the phone network switched to digitally controlled switchboards, causing network hacking to shift to dialing remote computers with modems when pre-assembled inexpensive home computers were available and when academic institutions started to give individual mass-produced workstation computers to scientists instead of using a central timesharing system. The only kind of widespread hardware modification nowadays is case modding. An encounter of the programmer and the computer security hacker subculture occurred at the end of the 1980s, when a group of computer security hackers, sympathizing with the Chaos Computer Club (which disclaimed any knowledge in these activities), broke into computers of American military organizations and academic institutions. 
They sold data from these machines to the Soviet secret service, one of them in order to fund his drug addiction. The case was solved when Clifford Stoll, a scientist working as a system administrator, found ways to log the attacks and to trace them back (with the help of many others). 23, a German film adaptation with fictional elements, shows the events from the attackers' perspective. Stoll described the case in his book The Cuckoo's Egg and in the TV documentary The KGB, the Computer, and Me from the other perspective. According to Eric S. Raymond, it "nicely illustrates the difference between 'hacker' and 'cracker'. Stoll's portrait of himself, his lady Martha, and his friends at Berkeley and on the Internet paints a marvelously vivid picture of how hackers and the people around them like to live and how they think." == Representation in media == The mainstream media's current usage of the term may be traced back to the early 1980s. When the term, previously used only among computer enthusiasts, was introduced to wider society by the mainstream media in 1983, even those in the computer community referred to computer intrusion as hacking, although not as the exclusive definition of the word. In reaction to the increasing media use of the term exclusively with the criminal connotation, the computer community began to differentiate their terminology. Alternative terms such as cracker were coined in an effort to maintain the distinction between hackers within the legitimate programmer community and those performing computer break-ins. Further terms such as black hat, white hat and gray hat developed when laws against breaking into computers came into effect, to distinguish criminal activities from those activities which were legal. Network news' use of the term consistently pertains primarily to criminal activities, despite attempts by the technical community to preserve and distinguish the original meaning.
Today, the mainstream media and general public continue to describe computer criminals, with all levels of technical sophistication, as "hackers" and do not generally make use of the word in any of its non-criminal connotations. Members of the media sometimes seem unaware of the distinction, grouping legitimate "hackers" such as Linus Torvalds and Steve Wozniak along with criminal "crackers". As a result, the definition is still the subject of heated controversy. The wider dominance of the pejorative connotation is resented by many who object to the term being taken from their cultural jargon and used negatively, including those who have historically preferred to self-identify as hackers. Many advocate using the more recent and nuanced alternate terms when describing criminals and others who negatively take advantage of security flaws in software and hardware. Others prefer to follow common popular usage, arguing that the positive form is confusing and unlikely to become widespread in the general public. A minority still use the term in both senses despite the controversy, leaving context to clarify (or leave ambiguous) which meaning is intended. However, because the positive definition of hacker was widely used as the predominant form for many years before the negative definition was popularized, "hacker" can therefore be seen as a shibboleth, identifying those who use the technically oriented sense (as opposed to the exclusively intrusion-oriented sense) as members of the computing community. On the other hand, due to the variety of industries software designers may find themselves in, many prefer not to be referred to as hackers because the word holds a negative connotation in many of those industries. A possible middle ground position has been suggested, based on the observation that "hacking" describes a collection of skills and tools which are used by hackers of both descriptions for differing reasons.
The analogy is made to locksmithing, specifically picking locks, which is a skill which can be used for good or evil. The primary weakness of this analogy is the inclusion of script kiddies in the popular usage of "hacker", despite their lack of an underlying skill and knowledge base. == See also == Script kiddie, an unskilled computer security attacker Hacktivism, conducting cyber attacks on a business or organisation in order to bring social change == References == == Further reading == === Computer security === === Free software/open source === == External links == Hacking at Wikibooks The dictionary definition of Hacker at Wiktionary Media related to Hackers at Wikimedia Commons
https://en.wikipedia.org/wiki/Hacker
A technology museum is a museum devoted to applied science and technological developments. Many museums are both a science museum and a technology museum, and incorporate elements of both museum genres. The goal of technology museums is to educate the public on the history of technology, and to preserve technological heritage. They also may aim to promote local pride in technological and industrial developments, such as the manufacturing materials on display at the Newcastle Discovery Museum. Some technology museums may simply want to display technological items, while others may want to use them to demonstrate how they function. == Examples of Technology Museums == Some of the most historically significant technology museums are: the Musée des Arts et Métiers in Paris, founded in 1794; the Science Museum in London, founded in 1857; the Deutsches Museum von Meisterwerken der Naturwissenschaft und Technik in Munich, founded in 1903; and the Technisches Museum für Industrie und Gewerbe in Vienna, founded in 1918. the Computer History Museum in California, founded in the 1970s. Further technology museums in Germany include the Deutsches Technikmuseum in Berlin-Kreuzberg, the Technoseum in Mannheim, the Technik Museum Speyer, the Technik Museum Sinsheim and the Technikmuseum Magdeburg. The most prestigious of its kind in Austria is the Technisches Museum in Vienna. == Technology on Display in Museums == Many other independent museums, such as transport museums, cover certain technical genres, processes or industries, for example mining, chemistry, metrology, musical instruments, ceramics or paper. Despite concentration on other fields, if there is extensive information on the technologies related to these subjects, the museum could be considered a technology museum. For example, elements of a technology museum could be incorporated with a marine science museum, a military museum, or an industrial museum. 
Semi-technology-focused museums typically “reflect some of the variety of applications of technology and present it within interestingly different contexts”. === Museum Buildings and Structures === In some examples of this type of museum, the actual building is incorporated into the exhibition. A museum on mining technology may be housed inside a mining or colliery site, and a museum focusing on industrial technology might be inside a warehouse or former factory. Many naval and maritime museums follow this trend, such as the Patriots Point Naval and Maritime Museum in Mount Pleasant, South Carolina. The objects inside this museum are displayed inside the USS Yorktown – an aircraft carrier – and the USS Laffey—a destroyer. By housing exhibits inside relevant buildings and other structures, museums can display technology that supports their concentrations. == References == == See also == Computer Museum
https://en.wikipedia.org/wiki/Technology_museum
Biotechnology is a multidisciplinary field that involves the integration of natural sciences and engineering sciences in order to achieve the application of organisms and parts thereof for products and services. Specialists in the field are known as biotechnologists. The term biotechnology was first used by Károly Ereky in 1919 to refer to the production of products from raw materials with the aid of living organisms. The core principle of biotechnology involves harnessing biological systems and organisms, such as bacteria, yeast, and plants, to perform specific tasks or produce valuable substances. Biotechnology has had a significant impact on many areas of society, from medicine to agriculture to environmental science. One of the key techniques used in biotechnology is genetic engineering, which allows scientists to modify the genetic makeup of organisms to achieve desired outcomes. This can involve inserting genes from one organism into another, thereby creating new traits or modifying existing ones. Other important techniques used in biotechnology include tissue culture, which allows researchers to grow cells and tissues in the lab for research and medical purposes, and fermentation, which is used to produce a wide range of products such as beer, wine, and cheese. The applications of biotechnology are diverse and have led to the development of products like life-saving drugs, biofuels, genetically modified crops, and innovative materials. It has also been used to address environmental challenges, such as developing biodegradable plastics and using microorganisms to clean up contaminated sites. Biotechnology is a rapidly evolving field with significant potential to address pressing global challenges and improve the quality of life for people around the world; however, despite its numerous benefits, it also poses ethical and societal challenges, such as questions around genetic modification and intellectual property rights.
As a result, there is ongoing debate and regulation surrounding the use and application of biotechnology in various industries and fields. == Definition == The concept of biotechnology encompasses a wide range of procedures for modifying living organisms for human purposes, going back to domestication of animals, cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering, as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms, such as pharmaceuticals, crops, and livestock. As per the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services. Biotechnology is based on the basic biological sciences (e.g., molecular biology, biochemistry, cell biology, embryology, genetics, microbiology) and conversely provides methods to support and perform basic research in biology. 
Biotechnology can also be described as laboratory research and development that uses bioinformatics for the exploration, extraction, exploitation, and production of products from living organisms and other sources of biomass by means of biochemical engineering. High value-added products can be planned (reproduced by biosynthesis, for example), forecast, formulated, developed, manufactured, and marketed with two aims: sustainable operations (to recover the substantial initial investment in R&D) and durable patent rights (exclusive rights to sell, which, especially in the pharmaceutical branch of biotechnology, require prior national and international approval based on the results of animal and human trials, to prevent undetected side effects or safety concerns). The utilization of biological processes, organisms or systems to produce products that are anticipated to improve human lives is termed biotechnology. By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells, and molecules. This can be considered the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals. Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering. == History == Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". 
Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best-suited crops (e.g., those with the highest yields) to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops by introducing them to new environments and breeding them with other plants, one of the first forms of biotechnology. Similar processes were used in the early fermentation of beer, practiced in early Mesopotamia, Egypt, China and India, and brewing still relies on the same basic biological methods. In brewing, enzymes in malted grains convert starch into sugar, and specific yeasts are then added to produce beer; in this process, carbohydrates in the grains break down into alcohols, such as ethanol. Later, other cultures developed lactic acid fermentation, which produced other preserved foods, such as soy sauce. Fermentation was also used in this period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it was nonetheless the first use of biotechnology to convert one food source into another. Before Charles Darwin's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations about the ability of selective breeding to change species. These accounts contributed to Darwin's theory of natural selection. 
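The brewing chemistry described above (enzymes convert starch to sugar, then yeast converts sugar to ethanol) can be illustrated with a quick theoretical-yield calculation. This is a sketch for illustration only; the reaction equation and molar masses are standard chemistry, not taken from this article.

```python
# Theoretical ethanol yield from glucose fermentation, the reaction
# underlying brewing:
#   C6H12O6 -> 2 C2H5OH + 2 CO2
# Molar masses are standard values in g/mol.
M_GLUCOSE = 180.16
M_ETHANOL = 46.07
M_CO2     = 44.01

def theoretical_ethanol_yield(grams_glucose):
    """Maximum grams of ethanol obtainable from a given mass of glucose."""
    moles_glucose = grams_glucose / M_GLUCOSE
    # Each mole of glucose yields two moles of ethanol.
    return 2 * moles_glucose * M_ETHANOL

# 100 g of glucose can yield at most about 51.1 g of ethanol;
# real fermentations typically reach roughly 90-95% of this limit.
print(round(theoretical_ethanol_yield(100), 1))  # → 51.1
```

Note that mass is conserved: the two ethanol and two CO2 molecules together account for the full 180.16 g/mol of glucose, which is a handy sanity check on the stoichiometry.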
For thousands of years, humans have used selective breeding to improve the production of crops and livestock used for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops. In the early twentieth century, scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, fermenting corn starch with Clostridium acetobutylicum to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I. Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the mold Penicillium. His work led Howard Florey, Ernst Boris Chain and Norman Heatley to purify the antibiotic formed by the mold, known today as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans. The field of modern biotechnology is generally thought of as having been born in 1971, when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled in Diamond v. Chakrabarty that a genetically modified microorganism could be patented. Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the genus Pseudomonas) capable of breaking down crude oil, which he proposed to use in treating oil spills. 
(Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium.) The MOSFET was invented at Bell Labs between 1955 and 1960. Two years later, in 1962, Leland C. Clark and Champ Lyons invented the first biosensor. Biosensor MOSFETs were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters. The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld in 1970. It is a special type of MOSFET in which the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology. By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed. A factor influencing the biotechnology sector's success is improved intellectual property rights legislation and enforcement worldwide, as well as strengthened demand for medical and pharmaceutical products. Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating that ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans, the main inputs into biofuels, by developing genetically modified seeds that resist pests and drought. 
By increasing farm productivity, biotechnology boosts biofuel production. == Examples == Biotechnology has applications in four major industrial areas: health care (medical), crop production and agriculture, non-food (industrial) uses of crops and other products (e.g., biodegradable plastics, vegetable oil, biofuels), and environmental uses. For example, one application of biotechnology is the directed use of microorganisms for the manufacture of organic products (examples include beer and milk products). Another example is the mining industry's use of naturally occurring bacteria in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and to produce biological weapons. A series of derived terms have been coined to identify several branches of biotechnology, for example: Bioinformatics (or "gold biotechnology") is an interdisciplinary field that addresses biological problems using computational techniques, and makes the rapid organization as well as analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale". Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component of the biotechnology and pharmaceutical sector. Blue biotechnology is based on the exploitation of sea resources to create products and industrial applications. This branch of biotechnology is used mainly by the refining and combustion industries, principally for the production of bio-oils from photosynthetic micro-algae. Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. 
Another example is the designing of transgenic plants to grow under specific environments in the presence (or absence) of chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby eliminating the need for external application of pesticides; Bt corn is one such plant. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. Green biotechnology is commonly considered the next phase of the green revolution: a platform for reducing world hunger through technologies that yield plants that are more fertile and more resistant to biotic and abiotic stress, and that promote environmentally friendly fertilizers and biopesticides. It is mainly focused on the development of agriculture. Some uses of green biotechnology also involve microorganisms to clean and reduce waste. Red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and in health preservation. This branch involves the production of vaccines and antibiotics, regenerative therapies, the creation of artificial organs and new diagnostics of diseases, as well as the development of hormones, stem cells, antibodies, siRNA and diagnostic tests. White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than traditional processes used to produce industrial goods. 
Yellow biotechnology refers to the use of biotechnology in food production (the food industry), for example in making wine (winemaking), cheese (cheesemaking), and beer (brewing) by fermentation. It has also been used to refer to biotechnology applied to insects. This includes biotechnology-based approaches for the control of harmful insects, and the characterisation and utilisation of active ingredients or genes of insects for research or for application in agriculture and medicine, among other approaches. Gray biotechnology is dedicated to environmental applications, focused on the maintenance of biodiversity and the removal of pollutants. Brown biotechnology is related to the management of arid lands and deserts. One application is the creation of enhanced seeds that resist the extreme environmental conditions of arid regions, which is related to innovation, the creation of agricultural techniques and the management of resources. Violet biotechnology is related to legal, ethical and philosophical issues surrounding biotechnology. Microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and microgravity (the space bioeconomy). Dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare, which use microorganisms and toxins to cause disease and death in humans, livestock and crops. === Medicine === In medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discovery and production, pharmacogenomics, and genetic testing (or genetic screening). In 2021, nearly 40% of the total company value of pharmaceutical biotech companies worldwide was in oncology, with neurology and rare diseases being the other two major application areas. Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how genetic makeup affects an individual's response to drugs. 
Researchers in the field investigate the influence of genetic variation on drug responses in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity. The purpose of pharmacogenomics is to develop rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects. Such approaches promise the advent of "personalized medicine", in which drugs and drug combinations are optimized for each individual's unique genetic makeup. Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology – biopharmaceutics. Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic human insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreases of abattoir animals (cattle or pigs). The genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. Biotechnology has also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic science (for example through the Human Genome Project) has also dramatically improved our understanding of biology, and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well. Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child's parentage (genetic mother and father) or in general a person's ancestry. 
In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins. Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As of 2011 several hundred genetic tests were in use. Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling. === Agriculture === Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of which has been modified with genetic engineering techniques. In most cases, the main aim is to introduce a new trait that does not occur naturally in the species. Biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. Furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. Examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments (e.g. resistance to a herbicide), reduction of spoilage, or improving the nutrient profile of the crop. Examples in non-food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation. Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops had increased by a factor of 94, from 17,000 to 1,600,000 square kilometers (4,200,000 to 395,400,000 acres). 
10% of the world's crop lands were planted with GM crops in 2010. As of 2011, 11 different transgenic crops were grown commercially on 395 million acres (160 million hectares) in 29 countries, including the US, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, the Philippines, Myanmar, Burkina Faso, Mexico and Spain. Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering. These techniques have allowed for the introduction of new crop traits as well as far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding. Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed-ripening tomato. To date, most genetic modification of foods has focused on cash crops in high demand by farmers, such as soybean, corn, canola, and cottonseed oil. These have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. GM livestock have also been experimentally developed; in November 2013 none were available on the market, but in 2015 the FDA approved the first GM salmon for commercial production and consumption. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. GM crops also provide a number of ecological benefits, if not used in excess. 
Insect-resistant crops have been shown to lower pesticide usage, thereby reducing the environmental impact of pesticides as a whole. However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact that these organisms are subject to intellectual property law. Biotechnology has several applications in the realm of food security. Crops like Golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. Though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. Additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. Transgenic biofortification in cereals has been considered a promising method to combat malnutrition in India and other countries. === Industrial === Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation. It includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. In recent decades, significant progress has been made in creating genetically modified organisms (GMOs) that enhance the diversity of applications and the economic viability of industrial biotechnology. By using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical-based economy. 
Synthetic biology is considered one of the essential cornerstones of industrial biotechnology due to its financial and sustainable contribution to the manufacturing sector. Jointly, biotechnology and synthetic biology play a crucial role in generating cost-effective products with nature-friendly features by using bio-based production instead of fossil-based production. Synthetic biology can be used to engineer model microorganisms, such as Escherichia coli, with genome editing tools to enhance their ability to produce bio-based products, such as medicines and biofuels. For instance, E. coli and Saccharomyces cerevisiae could be used together as industrial microbes to produce precursors of the chemotherapeutic agent paclitaxel, applying metabolic engineering in a co-culture approach that exploits the strengths of both microbes. Another example of a synthetic biology application in industrial biotechnology is the re-engineering of the metabolic pathways of E. coli with CRISPR and CRISPRi systems toward the production of 1,4-butanediol, a chemical used in fiber manufacturing. To produce 1,4-butanediol, researchers altered the metabolic regulation of E. coli with CRISPR, inducing a point mutation in the gltA gene, knocking out the sad gene, and knocking in six genes (cat1, sucD, 4hbd, cat2, bld, and bdh), while a CRISPRi system was used to knock down three competing genes (gabD, ybgC, and tesB) that divert flux from the 1,4-butanediol biosynthesis pathway. As a result, the yield of 1,4-butanediol significantly increased, from 0.9 to 1.8 g/L. === Environmental === Environmental biotechnology includes various disciplines that play an essential role in reducing environmental waste and providing environmentally safe processes, such as biofiltration and biodegradation. The environment can be affected by biotechnologies, both positively and adversely. 
Vallero and others have argued that the difference between beneficial biotechnology (e.g., bioremediation to clean up an oil spill or hazardous chemical leak) and the adverse effects stemming from biotechnological enterprises (e.g., flow of genetic material from transgenic organisms into wild strains) can be seen as applications and implications, respectively. Cleaning up environmental wastes is an example of an application of environmental biotechnology, whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology. Many cities have installed CityTrees, which use biotechnology to filter pollutants from urban atmospheres. === Regulation === The regulation of genetic engineering concerns approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology, and the development and release of genetically modified organisms (GMOs), including genetically modified crops and genetically modified fish. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing. The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for the cultivation of GM crops differ. 
=== Database for the GMOs used in the EU === The EUginius (European GMO Initiative for a Unified Database System) database is intended to help companies, interested private users and competent authorities to find precise information on the presence, detection and identification of GMOs used in the European Union. The information is provided in English. == Learning == In 1988, after prompting from the United States Congress, the National Institute of General Medical Sciences (National Institutes of Health) (NIGMS) instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years then must be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, then stipend, tuition and health insurance support are provided for two or three years during the course of their PhD thesis work. Nineteen institutions offer NIGMS supported BTPs. Biotechnology training is also offered at the undergraduate level and in community colleges. == References and notes == == External links == What is Biotechnology? – A curated collection of resources about the people, places and technologies that have enabled biotechnology
https://en.wikipedia.org/wiki/Biotechnology
Wireless communication (or just wireless, when the context allows) is the transfer of information (telecommunication) between two or more points without the use of an electrical conductor, optical fiber or other continuous guided medium for the transfer. The most common wireless technologies use radio waves. With radio waves, intended distances can be short, such as a few meters for Bluetooth, or as far as millions of kilometers for deep-space radio communications. It encompasses various types of fixed, mobile, and portable applications, including two-way radios, cellular telephones, personal digital assistants (PDAs), and wireless networking. Other examples of applications of radio wireless technology include GPS units, garage door openers, wireless computer mouse, keyboards and headsets, headphones, radio receivers, satellite television, broadcast television and cordless telephones. Somewhat less common methods of achieving wireless communications involve other electromagnetic phenomena, such as light and magnetic or electric fields, or the use of sound. The term wireless has been used twice in communications history, with slightly different meanings. It was initially used from about 1890 for the first radio transmitting and receiving technology, as in wireless telegraphy, until the new word radio replaced it around 1920. Radio sets in the UK and the English-speaking world that were not portable continued to be referred to as wireless sets into the 1960s. The term wireless was revived in the 1980s and 1990s mainly to distinguish digital devices that communicate without wires, such as the examples listed in the previous paragraph, from those that require wires or cables. This became its primary usage in the 2000s, due to the advent of technologies such as mobile broadband, Wi-Fi, and Bluetooth. Wireless operations permit services, such as mobile and interplanetary communications, that are impossible or impractical to implement with the use of wires. 
The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls, etc.) that use some form of energy (e.g. radio waves and acoustic energy) to transfer information without the use of wires. Information is transferred in this manner over both short and long distances. == History == === Photophone === The first wireless telephone conversation occurred in 1880 when Alexander Graham Bell and Charles Sumner Tainter invented the photophone, a telephone that sent audio over a beam of light. The photophone required sunlight to operate, and a clear line of sight between the transmitter and receiver, which greatly decreased the viability of the photophone in any practical use. It would be several decades before the photophone's principles found their first practical applications in military communications and later in fiber-optic communications. === Electric wireless technology === ==== Early wireless ==== A number of wireless electrical signaling schemes including sending electric currents through water and the ground using electrostatic and electromagnetic induction were investigated for telegraphy in the late 19th century before practical radio systems became available. These included a patented induction system by Thomas Edison allowing a telegraph on a running train to connect with telegraph wires running parallel to the tracks, a William Preece induction telegraph system for sending messages across bodies of water, and several operational and proposed telegraphy and voice earth conduction systems. The Edison system was used by stranded trains during the Great Blizzard of 1888 and earth conductive systems found limited use between trenches during World War I but these systems were never successful economically. 
==== Radio waves ==== In 1894, Guglielmo Marconi began developing a wireless telegraph system using radio waves, whose existence Heinrich Hertz had proven in 1888 but which had been discounted as a communication format because they seemed, at the time, to be a short-range phenomenon. Marconi soon developed a system that transmitted signals far beyond the distances anyone could have predicted (due in part to the signals bouncing off the then-unknown ionosphere). Marconi and Karl Ferdinand Braun were awarded the 1909 Nobel Prize in Physics for their contributions to this form of wireless telegraphy. Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves when he patented the radio crystal detector in 1901. === Wireless revolution === The wireless revolution began in the 1990s, with the advent of digital wireless networks leading to a social revolution and a paradigm shift from wired to wireless technology, including the proliferation of commercial wireless technologies such as cell phones, mobile telephony, pagers, wireless computer networks, cellular networks, the wireless Internet, and laptop and handheld computers with wireless connections. The wireless revolution has been driven by advances in radio frequency (RF), microelectronics, and microwave engineering, and by the transition from analog to digital RF technology, which enabled a substantial increase in voice traffic along with the delivery of digital data such as text messaging, images and streaming media. == Modes == Wireless communications can be via: === Radio === Radio and microwave communication carry information by modulating properties of electromagnetic waves transmitted through space. 
Specifically, the transmitter generates artificial electromagnetic waves by applying time-varying electric currents to its antenna. The waves travel away from the antenna until they eventually reach the antenna of a receiver, which induces an electric current in the receiving antenna. This current can be detected and demodulated to recreate the information sent by the transmitter. === Wireless optical === ==== Free-space optical (long-range) ==== Free-space optical communication (FSO) is an optical communication technology that uses light propagating in free space to transmit wireless data for telecommunications or computer networking. "Free space" means the light beams travel through the open air or outer space. This contrasts with other communication technologies that use light beams traveling through transmission lines such as optical fiber or dielectric "light pipes". The technology is useful where physical connections are impractical due to high costs or other considerations. For example, free-space optical links are used in cities between office buildings that are not wired for networking, where the cost of running cable through the building and under the street would be prohibitive. Another widely used example is consumer IR devices such as remote controls and IrDA (Infrared Data Association) networking, which is used as an alternative to Wi-Fi networking to allow laptops, PDAs, printers, and digital cameras to exchange data. === Sonic === Sonic communication, especially short-range ultrasonic communication, involves the transmission and reception of sound. === Electromagnetic induction === Electromagnetic induction only allows short-range communication and power transmission. It has been used in biomedical situations such as pacemakers, as well as for short-range RFID tags. 
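The modulate/transmit/demodulate cycle described under Radio above can be sketched in a few lines. The following is an illustrative amplitude-modulation example with arbitrary sample rates and frequencies (none of these numbers come from the text), using a simple envelope detector as the "receiver":

```python
import math

# Illustrative sketch of the modulate/demodulate cycle: the transmitter
# varies a carrier's amplitude with a message, and the receiver recovers
# the message by rectifying and low-pass filtering. All parameters are
# assumptions chosen for clarity, not real radio values.

FS = 100_000         # samples per second
CARRIER_HZ = 10_000  # carrier frequency driven onto the "antenna"
MESSAGE_HZ = 200     # information-bearing tone

def am_modulate(n_samples):
    """Transmitter side: scale the carrier by the message (range 0..1)."""
    signal = []
    for i in range(n_samples):
        t = i / FS
        message = 0.5 * (1 + math.sin(2 * math.pi * MESSAGE_HZ * t))
        carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
        signal.append(message * carrier)
    return signal

def envelope_detect(signal, window=101):
    """Receiver side: rectify, then average out the fast carrier cycles."""
    rectified = [abs(s) for s in signal]
    half = window // 2
    out = []
    for i in range(len(rectified)):
        lo, hi = max(0, i - half), min(len(rectified), i + half + 1)
        out.append(sum(rectified[lo:hi]) / (hi - lo))
    return out

tx = am_modulate(2000)
rx = envelope_detect(tx)
# rx now rises and falls at the 200 Hz message rate, not the 10 kHz carrier rate.
```

The same detect-and-demodulate idea applies, with different modulations and filters, to FM, phase modulation, and digital schemes.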
== Services == Common examples of wireless equipment include:
Infrared and ultrasonic remote control devices
Professional LMR (Land Mobile Radio) and SMR (Specialized Mobile Radio), typically used by business, industrial, and public safety entities
Consumer two-way radio, including FRS (Family Radio Service), GMRS (General Mobile Radio Service), and Citizens band ("CB") radios
The Amateur Radio Service (ham radio)
Consumer and professional marine VHF radios
Airband and radio navigation equipment used by aviators and air traffic control
Cellular telephones and pagers: provide connectivity for portable and mobile applications, both personal and business
Global Positioning System (GPS): allows drivers of cars and trucks, captains of boats and ships, and pilots of aircraft to ascertain their location anywhere on Earth
Cordless computer peripherals: the cordless mouse is a common example; wireless headphones, keyboards, and printers can also be linked to a computer wirelessly using technologies such as Wireless USB or Bluetooth
Cordless telephone sets: these are limited-range devices, not to be confused with cell phones
Satellite television: broadcast from satellites in geostationary orbit; typical services use direct broadcast satellite to provide multiple television channels to viewers
== Electromagnetic spectrum == AM and FM radios and other electronic devices make use of the electromagnetic spectrum. The frequencies of the radio spectrum that are available for communication are treated as a public resource and are regulated by organizations such as the American Federal Communications Commission, Ofcom in the United Kingdom, the international ITU-R, and the European ETSI. Their regulations determine which frequency ranges can be used for what purpose and by whom. 
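Each allocated frequency band also fixes a service's wavelength, since wavelength and frequency are related by λ = c / f. A minimal sketch of that relationship; the example band frequencies (AM, FM, Wi-Fi) are common illustrative values, not taken from the text:

```python
# Wavelength follows directly from frequency: lambda = c / f.
C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz):
    """Wavelength in metres for a frequency in hertz."""
    return C / freq_hz

# Common illustrative band frequencies (assumed examples, not from the text).
bands = {
    "AM broadcast (~1 MHz)": 1e6,
    "FM broadcast (~100 MHz)": 100e6,
    "Wi-Fi (2.4 GHz)": 2.4e9,
}
for name, f in bands.items():
    print(f"{name}: wavelength {wavelength_m(f):.3g} m")
```

This scaling is one practical reason different services occupy different bands: antenna dimensions scale with wavelength, so lower-frequency services need physically larger antennas.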
In the absence of such control, or alternative arrangements such as a privatized electromagnetic spectrum, chaos might result if, for example, airlines did not have specific frequencies to work under and an amateur radio operator were interfering with a pilot's ability to land an aircraft. Wireless communication spans the spectrum from 9 kHz to 300 GHz. == Applications == === Mobile telephones === One of the best-known examples of wireless technology is the mobile phone, also known as a cellular phone, with more than 6.6 billion mobile cellular subscriptions worldwide as of the end of 2010. These wireless phones use radio waves from signal-transmission towers to enable their users to make phone calls from many locations worldwide. They can be used within range of the mobile telephone sites that house the equipment required to transmit and receive their radio signals. === Data communications === Wireless data communications allow wireless networking between desktop computers, laptops, tablet computers, cell phones, and other related devices. The various available technologies differ in local availability, coverage range, and performance; in some circumstances, users employ multiple connection types and switch between them using connection manager software or a mobile VPN to handle the multiple connections as a single, secure virtual network. Supporting technologies include: Wi-Fi, a wireless local area network technology that enables portable computing devices to connect easily with other devices, peripherals, and the Internet. Standardized as IEEE 802.11a, b, g, n, ac, and ax, Wi-Fi has link speeds similar to those of older wired Ethernet standards. Wi-Fi has become the de facto standard for access in private homes, within offices, and at public hotspots. Some businesses charge customers a monthly fee for service, while others have begun offering it free in an effort to increase the sales of their goods. 
Cellular data service offers coverage within a range of 10–15 miles from the nearest cell site. Speeds have increased as technologies have evolved, from earlier technologies such as GSM, CDMA and GPRS, through 3G technologies such as W-CDMA, EDGE and CDMA2000, to 4G networks such as LTE. As of 2018, the proposed next generation is 5G. Low-power wide-area networks (LPWAN) bridge the gap between Wi-Fi and cellular for low-bitrate Internet of things (IoT) applications. Mobile-satellite communications may be used where other wireless connections are unavailable, such as in largely rural areas or remote locations. Satellite communications are especially important for transportation, aviation, maritime and military use. Wireless sensor networks are responsible for sensing noise, interference, and activity in data collection networks. This allows the detection of relevant quantities, the monitoring and collection of data, the formulation of clear user displays, and the performance of decision-making functions. Wireless data communications are used to span a distance beyond the capabilities of typical cabling in point-to-point communication and point-to-multipoint communication, to provide a backup communications link in case of normal network failure, to link portable or temporary workstations, to overcome situations where normal cabling is difficult or financially impractical, or to remotely connect mobile users or networks. ==== Peripherals ==== Peripheral devices in computing can also be connected wirelessly, as part of a Wi-Fi network or directly via an optical or radio-frequency (RF) peripheral interface. Originally these units used bulky, highly local transceivers to mediate between a computer and a keyboard and mouse; however, more recent generations have used smaller, higher-performance devices. 
Radio-frequency interfaces, such as Bluetooth or Wireless USB, provide greater ranges of efficient use, usually up to 10 feet, but distance, physical obstacles, competing signals, and even human bodies can all degrade the signal quality. Concerns about the security of wireless keyboards arose at the end of 2007, when it was revealed that Microsoft's implementation of encryption in some of its 27 MHz models was highly insecure. === Energy transfer === Wireless energy transfer is a process whereby electrical energy is transmitted from a power source to an electrical load that does not have a built-in power source, without the use of interconnecting wires. There are two fundamental methods for wireless energy transfer: far-field methods that beam power via lasers or radio or microwave transmissions, and near-field methods using electromagnetic induction. Wireless energy transfer may be combined with wireless information transmission in what is known as Wireless Powered Communication. In 2015, researchers at the University of Washington demonstrated far-field energy transfer using Wi-Fi signals to power cameras. === Medical technologies === New wireless technologies, such as mobile body area networks (MBAN), have the capability to monitor blood pressure, heart rate, oxygen level, and body temperature. The MBAN works by sending low-powered wireless signals to receivers that feed into nursing stations or monitoring sites. This technology helps reduce the intentional and unintentional risks of infection or disconnection that arise from wired connections. == Categories of implementations, devices, and standards == == See also == == References == == Further reading == == External links == Bibliography - History of wireless and radio broadcasting Nets, Webs and the Information Infrastructure at Wikibooks Sir Jagadis Chandra Bose - The man who (almost) invented the radio
https://en.wikipedia.org/wiki/Wireless
An engineering technologist is a professional trained in certain aspects of the development and implementation of a respective area of technology. An education in engineering technology concentrates more on application and less on theory than does an engineering education. Engineering technologists often assist engineers, but after years of experience they can also become engineers. Like engineers, engineering technologists can work in areas including product design, fabrication, and testing. Engineering technologists sometimes rise to senior management positions in industry or become entrepreneurs. Engineering technologists are more likely than engineers to focus on post-development implementation, product manufacturing, or operation of technology. The American National Society of Professional Engineers (NSPE) makes the distinction that engineers are trained in conceptual skills, to "function as designers", while engineering technologists "apply others' designs". The mathematics, science, and other technical courses in engineering technology programs are taught with more application-based examples, whereas engineering coursework provides a more theoretical foundation in math and science. Moreover, engineering coursework tends to require higher-level mathematics, including calculus and calculus-based theoretical science courses, as well as more extensive knowledge of the natural sciences, which serves to prepare students for research (whether in graduate studies or industrial R&D). Engineering technology coursework, by contrast, focuses on algebra, trigonometry, applied calculus, and other courses that are more practical than theoretical in nature, and generally includes more labs involving hands-on application of the topics studied. 
In the United States, although some states require, without exception, a BS degree in engineering from schools with programs accredited by the Engineering Accreditation Commission (EAC) of the Accreditation Board for Engineering and Technology (ABET), about two-thirds of the states accept BS degrees in engineering technology accredited by the Engineering Technology Accreditation Commission (ETAC) of ABET for licensure as professional engineers. States have different requirements as to the years of experience needed to take the Fundamentals of Engineering (FE) and Professional Engineering (PE) exams. A few states require those sitting for the exams to have a master's degree in engineering. This education model is in line with the educational system in the United Kingdom, where an accredited MEng or MSc degree in engineering is required by the Engineering Council (EngC) for registration as a Chartered Engineer. Engineering technology graduates can earn an MS degree in engineering technology, engineering, engineering management, construction management, or a National Architectural Accrediting Board (NAAB)-accredited Master of Architecture degree. These degrees are also offered online or through distance-learning programs at various universities, both nationally and internationally, which allows individuals to continue working full-time while earning an advanced degree. == Nature of the work == Engineering technologists are more likely to work in testing, fabrication/construction or fieldwork, while engineers generally focus more on conceptual design and product development, with considerable overlap (e.g., testing and fabrication are often integral to the overall product development process and can involve engineers as well as engineering technologists). Engineering technologists are employed in a wide array of industries and areas including product development, manufacturing and maintenance. 
They may become managers, depending upon their experience and their educational emphasis on management. Entry-level positions in product design, product testing, product development, systems development, field engineering, technical operations, and quality control are common for engineering technologists. Most companies make no distinction between engineers and engineering technologists when it comes to hiring. == Education and accreditation == Beginning in the 1950s and 1960s, some post-secondary institutions in the U.S. and Canada began offering degrees in engineering technology, focusing on applied study rather than the more theoretical studies required for engineering degrees. The focus on applied study addressed a need within the scientific, manufacturing, and engineering communities, as well as other industries, for professionals with hands-on, applications-based engineering knowledge. Depending on the institution, associate's or bachelor's degrees are offered, with some institutions also offering advanced degrees in engineering technology. In general, an engineering technologist receives a broad range of applied science and applied mathematics training, as well as the fundamentals of engineering in the student's area of focus. Engineering technology programs typically include instruction in providing support to specific engineering specialties. Information technology is primarily involved with the management, operation, and maintenance of computer systems and networks, along with the application of technology in diverse fields such as architecture, engineering, graphic design, telecommunications, computer science, and network security. An engineering technologist is also expected to have had some coursework in ethics. 
In 2001, professional organizations from different countries signed a mutual recognition agreement called the Sydney Accord, which represents an understanding that the academic credentials of engineering technologists will be recognized in all signatory states. The recognition given to engineering technologists under the Sydney Accord can be compared to the Washington Accord for engineers and the Dublin Accord for engineering technicians. The Engineering Technologist Mobility Forum (ETMF) is an international forum held by signatories of the Sydney Accord to explore mutual recognition for experienced engineering technologists and to remove artificial barriers to the free movement and practice of engineering technologists amongst their countries. The ETMF can be compared to the Engineers Mobility Forum (EMF) for engineers. Graduates acquiring an associate degree or lower typically find careers as engineering technicians. According to the United States Bureau of Labor Statistics: "Many four-year colleges offer bachelor's degrees in engineering technology and graduates of these programs are hired to work as entry-level engineers or applied engineers, but not technicians." Engineering technicians typically have a two-year associate degree, while engineering technologists have a bachelor's degree. === Canada === In Canada, the new occupational category of "technologist" was established in the 1960s, in conjunction with an emerging system of community colleges and technical institutes. It was designed to bridge the gap between the increasingly theoretical nature of engineering degrees and the predominantly practical approach of technician and trades programs. Provincial associations may certify individuals as a professional technologist (P.Tech.), certified engineering technologist (C.E.T.), registered engineering technologist (R.E.T.), applied science technologist (AScT), or technologue professionnel (T.P.). 
These provincial associations are constituent members of Technology Professionals Canada (TPC), which accredits technology programs across Canada through its Technology Accreditation Canada (TAC). Nationally accredited engineering technology programs range from two to three years in length, depending on the province, and often require as many classroom hours as a four-year degree program. === United States === In the United States, the U.S. Department of Education and the Council for Higher Education Accreditation (CHEA) are at the top of the educational accreditation hierarchy. The U.S. Department of Education acknowledges regional and national accreditation, and CHEA recognizes specialty accreditation. One technology accreditor is currently recognized by CHEA: the Association of Technology, Management and Applied Engineering (ATMAE). CHEA recognizes ATMAE for accrediting associate, baccalaureate, and master's degree programs in technology, applied technology, engineering technology, and technology-related disciplines delivered by nationally or regionally accredited institutions in the United States. In March 2019, ABET withdrew from CHEA recognition. The National Institute for Certification in Engineering Technologies (NICET) awards certification at two levels, depending on work experience: the Associate Engineering Technologist (AT) and the Certified Engineering Technologist (CT). ATMAE awards two levels of certification in technology management: Certified Technology Manager (CTM) and Certified Senior Technology Manager (CSTM). ATMAE also awards two levels of certification of manufacturing specialist: Certified Manufacturing Specialist (CMS) and Certified Senior Manufacturing Specialist (CSMS). In 2020, ATMAE announced the Certified Controls Engineer (CCE) and Certified Senior Controls Engineer (CSCE) professional certifications. 
While the CTM, CMS, and CCE certifications are obtained through examination, the CSTM, CSMS and CSCE require industry experience and continuous improvement through the attainment of professional development units (PDUs). The American Society of Certified Engineering Technicians (ASCET) is a membership organization that issues Certified Member certifications to engineering technicians and engineering technologists. Professional engineers are issued Registered Member certification. === United Kingdom === The United Kingdom has a decades-long tradition of producing engineering technologists via the apprenticeship system. UK engineering technologists have always been designated as "engineers", a term which in the UK describes the entire range of skilled workers and professionals, from tradespeople through to the highly educated Chartered Engineer. In fact, up until the 1960s, professional engineers in the UK were often referred to as "technologists" to distinguish them from scientists, technicians, and craftsmen. The modern term for an engineering technologist is "incorporated engineer" (IEng), although since 2000 the normal route to achieving IEng is with a bachelor's or honors degree in engineering or technology. Modern technical apprenticeships would normally lead to the engineering technician (EngTech) professional qualification and, with further studies at higher apprenticeship level, an IEng. Since 2015, the Universities and Colleges Admissions Service (UCAS) has introduced engineering degree (bachelor's and master's) apprenticeships. The title "incorporated engineer" is protected by civil law. Prior to the title "incorporated engineer", UK technologists were known as "technician engineers", a designation introduced in the 1960s. In the United Kingdom, an incorporated engineer is accepted as a "professional engineer", registered by the EngC, although the term "professional engineer" has no legal meaning in the UK and there are no restrictions on its practice. 
In fact, anyone in the UK can call themselves an "engineer" or "professional engineer" without any qualifications or proven competence in engineering, and most UK skilled trades are sometimes referred to as "professional" or "accredited" engineers. Examples are "Registered Gas Engineer" (gas installer) or "Professional Telephone Engineer" (phone line installer or fault diagnosis). Incorporated engineers are recognized internationally under the Sydney Accord as engineering technologists. One of the professional titles recognized by the Washington Accord for engineers in the United Kingdom is the chartered engineer. The incorporated engineer is a professional engineer as recognized by the EngC of the United Kingdom. The European designation, as demonstrated by the prescribed title under 2005/36/EC, is "engineer". The incorporated engineer operates autonomously and directs activities independently. They do not necessarily need the support of chartered engineers, because they are often acknowledged as full engineers in the UK (but not in Canada or the U.S.). The United Kingdom incorporated engineer may also contribute to the design of new products and systems. The chartered engineer and incorporated engineer, whilst often undertaking similar roles, are distinct qualifications awarded by the EngC, with Chartered Engineer (CEng) status being the terminal engineering qualification. Incorporated engineers currently require an IEng-accredited bachelor's or honors degree in engineering (prior to 1997 the B.Sc. and B.Eng. degrees satisfied the academic requirements for "chartered engineer" registration); a Higher National Certificate or Diploma, a City and Guilds of London Institute higher diploma/full technological certificate, or a Foundation Degree in engineering, plus appropriate further learning to degree level; or an NVQ4 or SVQ4 qualification approved for the purpose by a licensed engineering institution. 
The academic requirements must be accompanied by the appropriate peer-reviewed experience in employment, typically 4 years post-qualification. In addition to the experience and academic requirements, the engineering candidate must have three referees (themselves CEng or IEng) who vouch for the performance of the individual being considered for professional recognition. There are a number of alternative ways to achieve IEng status for applicants who do not have the necessary qualifications but can clearly show they have achieved the same level as those with qualifications, including: writing a technical report based upon their experience that demonstrates their knowledge and understanding of engineering principles; earning the City and Guilds graduate diploma (bachelor's level) and a postgraduate diploma (master's level) accredited by the Institution of Mechanical Engineers (IMechE), Institution of Engineering and Technology (IET) and Institution of Civil Engineers (ICE); following a work-based learning program; or taking an academic program specified by the institution to which they are applying. === Germany – European Union === ==== Engineering technologist / state-certified engineer ==== The engineering technologist (state-certified technician; German: Staatlich geprüfter Techniker) is a vocational (non-academic) qualification at the tertiary level in Germany. The qualification is governed by the framework agreement of trade and technical schools (resolution of the Standing Conference of the Ministers of Education and Cultural Affairs of the states in the Federal Republic of Germany of 7 November 2002, in its respective applicable version) and is recognised by all states of the Federal Republic of Germany. It is awarded after passing state examinations at a state or state-recognised technical school or academy (German: Fachschule/Fachakademie). 
Through the Vocational Training Modernisation Act (12 December 2019), state-certified engineers have also been allowed to hold the title Bachelor Professional in Technik since 1 January 2020. To be eligible for the engineering technologist examination, candidates must fulfill the following requirements: completion of one of the school systems (Hauptschule, Realschule, Gymnasium); an apprenticeship of at least two years' duration; one year of completed professional work experience; and attendance of an educational program with a course load of 2400–3000 hours, usually completed within two years full-time or 3.5–4 years part-time at vocational colleges. ==== State-certified technicians/engineers in the EU directives ==== As of 31 January 2012, state-certified engineers, state-certified business managers and state-certified designers are at level 6 of the European Qualifications Framework (EQF), equivalent to a bachelor's degree. As such, the engineering technologist qualification constitutes an advanced entry qualification for German universities and in principle permits entry into any undergraduate academic-degree program. The qualifications are listed in EU directives as recognised, regulated professions in Germany and the EU. Annexes C and D were added to Council Directive 92/51/EEC as a second general system for the recognition of professional education and training to supplement Directive 89/48/EEC. Institutions involved included the federal government (in Germany, the Federal Ministry of Education and Research and the Federal Ministry of Economics and Technology), the EU Standing Conference and Economic Ministerial Meeting of Countries, the German Chamber of Crafts, the Confederation of German Employers' Associations, the German Chambers of Industry and Commerce, the Confederation of German Trade Unions, and the Federal Institute for Vocational Application. 
These government institutions agreed on a common position regarding the implementation of the EQF and a German qualifications framework (DQR). European Union law and other documents considered to be public include: Annexes C and D to Council Directive 92/51/EEC on a second general system for the recognition of professional education and training to supplement Directive 89/48/EEC; and EU Directive 2005L0036-EN 01.01.2007, Annex III, list of regulated education and training referred to in the third subparagraph of Article 13(2). == See also == National Council of Examiners for Engineering and Surveying American Society for Engineering Education UNESCO-UNEVOC Practical engineer Drafter == References == == Further reading == Sastry, M.K.S.; Clement K. Sankat; Harris Khan; Dave Bhajan (2008). "The need for technologists and applied technology programs: an experience from Trinidad and Tobago". International Journal of Management in Education. 2 (2): 222. doi:10.1504/IJMIE.2008.018393. Sastry, M.K.S.; C.K. Sankat; D. Exall; K.D. Srivastava; H. Khan; B. Copeland; W. Lewis; D. Bhajan (April 2007). "An Appraisal of Tertiary Level Institutional Collaboration and Joint Degree Programs in Trinidad and Tobago". Latin American and Caribbean Journal of Engineering Education. 1 (1): 27–34. ISSN 1935-0295. Retrieved 4 October 2010.
https://en.wikipedia.org/wiki/Engineering_technologist
An institute of technology (also referred to as a technological university, technical university, university of technology, or polytechnic university) is an institution of tertiary education that specializes in engineering, technology, applied science, and natural sciences. == Institutes of technology versus polytechnics == Institutes of technology and polytechnics have been in existence since at least the 18th century, but became popular after World War II with the expansion of engineering and applied science education, associated with the new needs created by industrialization. The world's first institution of technology, the Berg-Schola (whose legal successor today is the University of Miskolc), was founded by the Court Chamber of Vienna in Selmecbánya, Kingdom of Hungary (now Banská Štiavnica, Slovakia), in 1735 in order to train specialists in precious-metal and copper mining according to the requirements of the industrial revolution in Hungary. The oldest German institute of technology is the Braunschweig University of Technology, founded in 1745 as the "Collegium Carolinum". The French École Polytechnique was founded in 1794. In some cases, polytechnics or institutes of technology are engineering schools or technical colleges. In several countries, such as Germany, the Netherlands, Switzerland, Turkey and Taiwan, institutes of technology are institutions of higher education that have been accredited to award academic degrees and doctorates. Famous examples are the Istanbul Technical University, ETH Zurich, Delft University of Technology, RWTH Aachen and National Taiwan University of Science and Technology, all considered universities. In countries such as Iran, Finland, Malaysia, Portugal, Singapore and the United Kingdom, there is often a significant and sometimes confusing distinction between polytechnics and universities. 
In the UK, a binary system of higher education emerged, consisting of universities (research orientation) and polytechnics (engineering, applied science and professional practice orientation). Polytechnics offered university-equivalent degrees, mainly in STEM subjects, from bachelor's and master's to PhD, that were validated and governed at the national level by the independent UK Council for National Academic Awards (CNAA). In 1992, UK polytechnics were designated as universities, which meant they could award their own degrees, and the CNAA was disbanded. The UK's first polytechnic, the Royal Polytechnic Institution (now the University of Westminster), was founded in 1838 in Regent Street, London. In Ireland, the term "institute of technology" is the more favored synonym for a "regional technical college", though the latter is the legally correct term; however, Dublin Institute of Technology was a university in all but name, as it could confer degrees in accordance with law; Cork Institute of Technology and other institutes of technology had delegated authority from HETAC to make awards to and including master's degree level—Level 9 of Ireland's National Framework for Qualifications (NFQ)—for all areas of study, and to doctorate level in a number of others. In 2018, Ireland passed the Technological Universities Act, which allowed a number of institutes of technology to transform into technological universities. Although today generally considered similar institutions of higher learning across many countries, polytechnics and institutes of technology historically had quite different statutes, teaching competences and organizational histories. 
In many cases, "polytechnics" were elite technological universities concentrating on applied science and engineering; the term may also be a former designation for a vocational institution that had not yet been granted the exclusive right to award academic degrees and so could not yet truly be called an "institute of technology". A number of polytechnics providing higher education are simply the result of a formal upgrading from their original and historical role as intermediate technical education schools. In some situations, former polytechnics or other non-university institutions have emerged solely through an administrative change of statutes, which often included a name change with the introduction of new designations like "institute of technology", "polytechnic university", "university of applied sciences" or "university of technology" for marketing purposes. The emergence of so many upgraded polytechnics, former vocational education and technical schools converted into more university-like institutions, has caused concern where the resulting lack of specialized intermediate technical professionals leads to industrial skill shortages in some fields, and has also been associated with an increase in the graduate unemployment rate. This is mostly the case in countries where the education system is not controlled by the state and any institution can grant degrees. Evidence has also shown a decline in the general quality of teaching and in graduates' preparation for the workplace, due to the fast-paced conversion of these technical institutions into more advanced higher-level institutions. Mentz, Kotze and Van der Merwe argue that all the tools are in place to promote the debate on the place of technology in higher education in general, and in universities of technology specifically, and they posit several questions for the debate. 
== Institutes by country == === Argentina === In Argentina, the main higher institution devoted to the study of technology is the National Technological University, which has Regional Faculties throughout Argentina. The Buenos Aires Institute of Technology (ITBA) and the Balseiro Institute are other recognized institutes of technology. === Australia === 1970s–1990s During the 1970s to early 1990s, the term was used to describe state-owned and funded technical schools that offered both vocational and higher education. They were part of the College of Advanced Education system. In the 1990s most of these merged with existing universities or formed new ones of their own. These new universities often took the title University of Technology, for marketing rather than legal purposes. The most prominent such university in each state founded the Australian Technology Network a few years later. 1990s–today Since the mid-1990s, the term has been applied to some technical and further education (TAFE) institutes. A recent example is the Melbourne Polytechnic rebranding and repositioning in 2014 from Northern Melbourne Institute of TAFE. These primarily offer vocational education, although some, like Melbourne Polytechnic, are expanding into higher education, offering vocationally oriented applied bachelor's degrees. This usage of the term is most prevalent historically in NSW and the ACT. The new terminology is apt given that this category of institution is becoming very much like the institutes of the 1970s–1990s period. 
In 2009, the old college system in Tasmania and TAFE Tasmania began a three-year restructure to become the Tasmanian Polytechnic (www.polytechnic.tas.edu.au), Tasmanian Skills Institute (www.skillsinstitute.tas.edu.au) and Tasmanian Academy (www.academy.tas.edu.au). In the higher education sector, there are seven designated universities of technology in Australia (though, note, not all use the phrase "university of technology": the Universities of Canberra and South Australia, for example, were Colleges of Advanced Education before transitioning into fully fledged universities with the ability, most important of all, to confer doctorates): Curtin University, Western Australia Queensland University of Technology, Queensland Royal Melbourne Institute of Technology, Victoria Swinburne University of Technology, Victoria University of Canberra, Australian Capital Territory University of South Australia, South Australia University of Technology Sydney, New South Wales === Austria === Universities of technology These institutions are entitled to confer habilitation and doctoral degrees and focus on research. Graz University of Technology (13,454 students, founded 1811, Hochschule since 1865, doctoral degrees since 1901, university since 1975) TU Wien (27,923 students, founded 1815, Hochschule since 1872, doctoral degrees since 1901, university since 1975) University of Natural Resources and Life Sciences, Vienna, focused on agriculture (12,500 students, founded as Hochschule in 1872, doctoral degrees since 1906, university since 1975) University of Leoben, specialized in mining, metallurgy and materials (4,030 students, founded 1840, Hochschule since 1904, doctoral degrees since 1906, university since 1975) Research institutions These institutions focus only on research. 
Austrian Institute of Technology (founded 1956) Institute of Science and Technology Austria (founded 2007) Technical faculties at universities Several universities have faculties of technology that are entitled to confer habilitation and doctoral degrees and which focus on research. Johannes Kepler University Linz (Faculty of Engineering and Natural Sciences founded 1965, university since 1975) University of Innsbruck (Faculty of Civil Engineering founded 1969) Alpen-Adria-Universität Klagenfurt (Faculty of Technical Sciences founded 2007) Fachhochschulen Fachhochschule is a German type of tertiary education institution, later adopted in Austria and Switzerland. They do not focus exclusively on technology, but may also offer courses in social science, medicine, business and design. They grant bachelor's degrees and master's degrees and focus more on teaching than research, and more on specific professions than on science. In 2010, there were 20 Fachhochschulen in Austria. === Bangladesh === There are some public engineering universities in Bangladesh: Bangladesh University of Engineering and Technology (BUET) Chittagong University of Engineering and Technology (CUET). Formerly known as Bangladesh Institute of Technology, Chittagong. Khulna University of Engineering and Technology (KUET). Formerly known as Bangladesh Institute of Technology, Khulna. Rajshahi University of Engineering and Technology (RUET). Formerly known as Bangladesh Institute of Technology, Rajshahi. Dhaka University of Engineering and Technology (DUET). Formerly known as Bangladesh Institute of Technology, Dhaka. There are some general, technological and specialized universities in Bangladesh that offer engineering programs: University of Chittagong. Engineering programs are offered under the Faculty of Engineering and Technology. University of Dhaka. Engineering programs are offered under the Faculty of Engineering and Technology. University of Khulna. 
Engineering programs are offered under the Faculty of Science, Engineering and Technology. University of Rajshahi. Engineering programs are offered under the Faculty of Engineering and Technology. Islamic University, Bangladesh (IU). Engineering programs are offered under the Faculty of Applied Science and Technology. Shahjalal University of Science and Technology. Engineering programs are offered under the Faculty of Applied Science and Technology. Bangladesh University of Textiles (BUTEX). A specialized institution that offers various engineering programs with its interdisciplinary curricula. There are some private engineering universities in Bangladesh: Ahsanullah University of Science and Technology (AUST) Military Institute of Science and Technology (MIST) There is only one international engineering university in Bangladesh: Islamic University of Technology (IUT) There are numerous private and other universities, as well as science and technology universities, providing engineering education. The most prominent are: American International University-Bangladesh Bangladesh University of Business and Technology. Engineering programs are offered under the Faculty of Engineering and Technology. North South University International Islamic University Chittagong East West University BRAC University Independent University, Bangladesh European University of Bangladesh There are numerous government-funded as well as private polytechnic institutes, engineering colleges and science and technology institutes providing engineering education. The most prominent are: Bangladesh Institute of Glass and Ceramics Dhaka Polytechnic Institute Chittagong Polytechnic Institute Bangladesh Survey Institute Govt. 
Arts Graphics Institute Bangladesh Institute of Marine Technology Mymensingh Engineering College Narayangonj Technical School and College === Belarus === Belarusian National Technical University (BNTU) (Minsk, Belarus) Belarusian State Technological University (Minsk, Belarus) Belarusian State University of Informatics and Radioelectronics (Minsk, Belarus) Brest State Technical University (Brest, Belarus) Pavel Sukhoi State Technical University of Gomel (Gomel, Belarus) Vitebsk State Technological University (Vitebsk, Belarus) === Belgium and the Netherlands === In the Netherlands, there are four universities of technology, jointly known as 4TU: Delft University of Technology (TU Delft) Eindhoven University of Technology (TU Eindhoven) Universiteit Twente (U Twente) Wageningen University (Wageningen U) In Belgium and in the Netherlands, Hogescholen or Hautes écoles (also translated into colleges, university colleges or universities of applied science) are applied institutes of higher education that do not award doctorates. They are generally limited to Bachelor-level education, with degrees called professional bachelors, and only minor Master's programmes. The hogeschool thus has many similarities to the Fachhochschule in the German language areas and to the ammattikorkeakoulu in Finland. A list of all hogescholen in the Netherlands, including some which might be called polytechnics, can be found at the end of this list. 
=== Brazil === Federal: Federal Centers for Technological Education (CEFET) CEFET of Minas Gerais CEFET of Rio de Janeiro Federal Institute of Education, Science and Technology (IFET) Federal Institute of Bahia Federal Institute of São Paulo Federal Institute of Pará Federal Institute of Rio de Janeiro Federal Institute of Maranhao Federal Technological University of Paraná Service academy: Instituto Militar de Engenharia Instituto Tecnológico de Aeronáutica Private: Instituto Nacional de Telecomunicações – Inatel State: Sao Paulo State Technological College === Bulgaria === Technical University of Gabrovo Technical University of Sofia Technical University of Varna University of Chemical Technology and Metallurgy === Cambodia === In Cambodia, there are institutes of technology/polytechnic institutes and universities that offer instruction in a variety of programs that can lead to certificates, diplomas and degrees. Institutes of technology/polytechnic institutes and universities tend to be independent institutions. Institutes of technology/polytechnic institutes Institute of Technology of Cambodia (ITC) or Institut de Technologie du Cambodge (polytechnic institute in Phnom Penh, Cambodia) Phnom Penh Institute of Technology (PPIT) (polytechnic institute in Phnom Penh, Cambodia) Universities Royal University of Phnom Penh (RUPP) or Royal Université de Phnom Penh (polytechnic university in Phnom Penh, Cambodia) === Canada === In Canada, there are affiliate schools, colleges, and institutes of technology/polytechnic institutes that offer instruction in a variety of programs that can lead to the awarding of apprenticeships, citations, certificates, diplomas, and associate degrees upon successful completion. Affiliate schools are polytechnic divisions attached to a national university and offer select technical and engineering transfer programs. Colleges, institutes of technology/polytechnic institutes, and universities tend to be independent institutions. 
Credentials are typically conferred at the undergraduate level; however, university-affiliated schools like the École de technologie supérieure and the École Polytechnique de Montréal (both of which are located in Quebec), also offer graduate and postgraduate programs, in accordance with provincial higher education guidelines. Canadian higher education institutions, at all levels, undertake directed and applied research with financing allocated through public funding, private equity, or industry sources. Some of Canada's most well-known colleges and polytechnic institutions also partake in collaborative institute-industry projects, leading to technology commercialization, made possible through the scope of Polytechnics Canada, a national alliance of eleven leading research-intensive colleges and institutes of technology. Affiliate schools École de technologie supérieure (ETS) (technical school part of the Université du Québec system in Montreal, Quebec) École Polytechnique de Montréal (polytechnic school affiliated with the Université de Montréal in Montreal, Quebec) Colleges Algonquin College (Ottawa, Ontario) Conestoga College (Kitchener, Ontario) George Brown College (Toronto, Ontario) Humber College (Toronto) Red River College (college in Winnipeg, Manitoba, offering degrees) Seneca Polytechnic (Toronto) St. Clair College (Windsor) Institutes of technology/polytechnic institutes British Columbia Institute of Technology (BCIT; polytechnic institute in Burnaby, British Columbia) Kwantlen Polytechnic University (polytechnic university in Surrey, British Columbia) Northern Alberta Institute of Technology (NAIT; polytechnic institute in Edmonton, Alberta) Toronto Metropolitan University (formerly Ryerson Polytechnical Institute, university in Toronto, Ontario) – The former Ryerson University was one of the originators of applied education in Ontario and Canada. 
It became a university in 1993 and dropped the term "polytechnic" in 2002, after it gained the right to grant master's and doctoral degrees, as well as changing the names of some degree designations to bring it in line with other traditional research universities. Saskatchewan Polytechnic, formerly SIAST (polytechnic institute; multiple campuses with headquarters in Saskatoon, Saskatchewan) Sheridan College (polytechnic institute in Oakville, Ontario) Southern Alberta Institute of Technology (SAIT; polytechnic institute in Calgary, Alberta) Red Deer Polytechnic (RDP; polytechnic institute in Red Deer, Alberta) University of Ontario Institute of Technology (UOIT; university in Oshawa, Ontario) === China === China's modern higher education began in 1895 with the Imperial Tientsin University, which was a polytechnic with an additional law department. Liberal arts were not offered until three years later at Capital University. To this day, about half of China's elite universities remain essentially polytechnical. Harbin Institute of Technology is among the best engineering schools in China and the world. === Chile === Federico Santa María Technical University (UTFSM), currently the only active technical university/institute of technology in Chile, founded initially in 1931 as the School of Crafts and Arts and School of Engineering José Miguel Carrera, 18,000 students === Costa Rica === The National Technical University (UTN), founded in 2008 by merging several trade and craftsmanship schools, is a polytechnic. The Costa Rica Institute of Technology (TEC), founded in 1971, has its main campus in Cartago province and is an institute of technology. === Croatia === In Croatia there are many polytechnic institutes and colleges that offer a polytechnic education. The law on polytechnic education in Croatia was passed in 1997. 
=== Czech Republic === Technical universities Brno University of Technology (VUT), founded in 1899, 24,000 students Collegium Nobilium in Olomouc, 1725–1847 Czech Technical University in Prague (ČVUT), college founded in 1707, university since 1806, 23,000 students, among the oldest technical universities in the world Czech University of Life Sciences Prague (ČZU), founded in 1904, focused on agriculture, 18,000 students Institute of Chemical Technology in Prague (VŠCHT), founded in 1952, 3,000 students Mendel University in Brno (MENDELU), founded in 1919, focused on agriculture, 9,000 students Technical University of Liberec (TUL), founded in 1953, 8,000 students Technical University of Ostrava (VŠB TUO), founded in 1849, 22,000 students Tomáš Baťa University in Zlín (UTB), founded in 2000, 10,000 students Research institutions Academy of Sciences of the Czech Republic (AV ČR), dates back to 1784, 14,000 research staff altogether Technical faculties at universities Jan Evangelista Purkyně University in Ústí nad Labem (Faculty of Production Technology and Management; university founded in 1991) University of Pardubice (Faculty of Chemical Technology since 1950, Jan Perner Faculty of Transportation since 1991, Institute of Electrical Engineering and Informatics since 2002) University of West Bohemia (Faculty of Mechanical Engineering, Faculty of Electrical Engineering; university founded in 1991) === Denmark === Technical University of Denmark, founded in 1829 by Hans Christian Ørsted === Dominican Republic === Instituto Tecnológico de Santo Domingo Universidad Tecnológica de Santiago === Ecuador === National Polytechnic School (EPN), Quito, Ecuador EPN is known for research and education in the applied sciences, astronomy, atmospheric physics, engineering and physical sciences. The Geophysics Institute monitors the country's seismic, tectonic and volcanic activity in the continental territory and in the Galápagos Islands. 
One of the oldest observatories in South America is the Quito Astronomical Observatory. Founded in 1873, it is located 12 minutes south of the Equator in Quito, Ecuador. The Quito Astronomical Observatory is the National Observatory of Ecuador, is located in the Historic Center of Quito and is managed by the National Polytechnic School. The Nuclear Science Department at EPN is the only one in Ecuador and has large infrastructure related to irradiation facilities, such as a cobalt-60 source and electron beam processing. === Egypt === Alexandria Higher Institute of Engineering and Technology (AIET) Higher Technological Institute Institute of Aviation Engineering and Technology === Estonia === Tallinn University of Technology (TalTech), a public research university Tallinn University of Applied Sciences, a public vocational university Estonian Entrepreneurship University of Applied Sciences, a private institution in Tallinn === Finland === Universities of technology Universities of technology are categorised as universities, are allowed to grant B.Sc. (Tech.), Diplomi-insinööri M.Sc. (Tech.), Lic.Sc. (Tech.), Ph.D. and D.Sc. (Tech.) degrees, and roughly correspond in prestige to the Instituts de technologie of French-speaking areas and the Technische Universität of Germany. In addition to universities of technology, some universities, e.g. the University of Oulu and Åbo Akademi University, are allowed to grant the B.Sc. (Tech.), M.Sc. (Tech.) and D.Sc. (Tech.) degrees. Universities of technology are academically similar to other (non-polytechnic) universities. Prior to the Bologna process, an M.Sc. (Tech.) required 180 credits, whereas an M.Sc. from a normal university required 160 credits. The credits between universities of technology and normal universities are comparable. 
Some Finnish universities of technology are: Aalto University, formed from Helsinki University of Technology and other universities Lappeenranta-Lahti University of Technology LUT Polytechnics Polytechnic schools are distinct from academic universities in Finland. Ammattikorkeakoulu is the common term in Finland, as is the Swedish alternative "yrkeshögskola"; their focus is on studies leading to a degree (for instance insinööri, engineer; in international use, Bachelor of Engineering) different in kind from, but comparable in level to, an academic bachelor's degree awarded by a university. Since 2006 the polytechnics have offered studies leading to master's degrees (Master of Engineering). After January 1, 2006, some Finnish ammattikorkeakoulus switched the English term "polytechnic" to the term "university of applied sciences" in the English translations of their legal names. The ammattikorkeakoulu has many similarities to the hogeschool in Belgium and the Netherlands and to the Fachhochschule in the German language areas. Some recognized Finnish polytechnics are: Helsinki Metropolia University of Applied Sciences Lapland University of Applied Sciences Tampere University of Applied Sciences Turku University of Applied Sciences A complete list may be found in the List of polytechnics in Finland. === France and Francophone regions === Instituts de Technologie (Grandes Écoles) Collegiate universities grouping several engineering schools, or multi-site clusters of French grandes écoles, provide sciences and technology curricula as autonomous higher education engineering institutes. They include: Arts et Métiers ParisTech CentraleSupélec Graduate School Grenoble Institute of Technology Institut national des sciences appliquées Institut Supérieur de l'Aéronautique et de l'Espace Paris Institute of Technology ESTIA Institute of Technology They provide science and technology master's degrees and doctoral degrees. 
Universités de Technologie / Polytechs The universities of technology (French: universités de technologie) are public institutions awarding degrees and diplomas that are accredited by the French Ministry of Higher Education and Research. Although called "universities", the universities of technology are in fact non-university institutes (écoles extérieures aux universités), as defined by Chapter I, Section II (Articles 34 through 36) of French law 84-52 of 26 January 1984 regarding higher education (the loi Savary). They combine the assets of the engineering Grandes Écoles with those of universities, as they develop three missions simultaneously and coherently: education, research and transfer of technology. They maintain close links with the industrial world at both national and international levels, and they are reputed for their ability to innovate, adapt and provide an education that matches the ever-changing demands of industry. This network includes three institutions: The University of Technology of Belfort-Montbéliard (Université de Technologie de Belfort-Montbéliard or UTBM) The University of Technology of Compiègne (Université de Technologie de Compiègne or UTC) The University of Technology of Troyes (Université de Technologie de Troyes or UTT) 'Polytech institutes', embedded as part of eleven French universities, provide both undergraduate and graduate engineering curricula. In the French-speaking part of Switzerland there also exists the term haute école spécialisée for the type of institution called Fachhochschule in the German-speaking part of the country (see below). Écoles polytechniques Higher education systems influenced by the French education system established at the end of the 18th century use terminology derived by reference to the French École polytechnique. 
Such terms include Écoles Polytechniques (Algeria, Belgium, Canada, France, Switzerland, Tunisia), Escola Politécnica (Brazil, Spain) and Polytechnicum (Eastern Europe). In French-language higher education, écoles polytechniques provide science and engineering curricula: École polytechnique or X (near Paris) École polytechnique de Bruxelles École polytechnique de Montréal École polytechnique fédérale de Lausanne National Polytechnic Institute of Lorraine National Polytechnic Institute of Toulouse === Germany === Fachhochschule Fachhochschulen were first founded in the early 1970s. They do not focus exclusively on technology, but may also offer courses in social science, medicine, business and design. They grant bachelor's degrees and master's degrees and focus more on teaching than research, and more on specific professions than on science. In 2009/10, there existed about 200 Fachhochschulen in Germany. See the German Wikipedia for a list. Technische Universität Technische Universität (abbreviation: TU) is the common term for universities of technology. These institutions can grant habilitation and doctoral degrees and focus on research. The nine largest and most renowned Technische Universitäten in Germany have formed TU9 German Institutes of Technology as a community of interests. Technische Universitäten normally have faculties or departments of natural sciences and often of economics, but can also have units of cultural and social sciences and arts. RWTH Aachen, TU Dresden and TU München also have a faculty of medicine associated with university hospitals (Klinikum Aachen, University Hospital Dresden, Rechts der Isar Hospital). There are 20 universities of technology in Germany with about 290,000 students enrolled. The three states of Bremen, Mecklenburg-Vorpommern and Schleswig-Holstein do not have a Technische Universität. 
Saxony and Lower Saxony have the highest counts of TUs; in Saxony, three out of four universities are universities of technology. === Greece === Greece has Technical Universities (also known as Polytechnic Universities), with five years of study legally equivalent to a combined bachelor's and master's degree (300 ECTS, ISCED 7) carrying the full professional rights of the Engineer. It also had Technological Educational Institutes (TEIs) (1982–2019), also known as Higher Educational Institutes of Technology, Technological Institutes or Institutes of Technology, which provided an at least four-year undergraduate degree qualification (πτυχίο, Latinised version: Ptychion), in line with the Bologna Process legally equivalent to a bachelor's honours degree (240 ECTS, ISCED 6; from 1983 to 1995 the studies lasted three and a half years, 210 ECTS). All the Technical Universities and Technological Educational Institutes are Higher Education Institutions (HEIs) with university title (UT) and degree awarding powers (DAPs). TEIs existed from 1983 to 2019; they were reformed between 2013 and 2019 and their departments incorporated into existing higher education institutions (HEIs). The two Polytechnic Universities (Technical Universities) in Greece (Greek: Πολυτεχνείο) are the National Technical University of Athens and the Technical University of Crete. However, many other universities have a faculty of engineering that provides an equivalent diploma of engineering with an integrated master's and the same full professional rights. Many TEIs that were dismantled created engineering faculties with five years of study and 300 ECTS (ISCED 6), but those faculties do not fall under the general term Polytechnics, nor do they have an integrated master's degree yet, pending evaluation to be characterised as equivalent. These have been named Schools of Engineers for the time being, rather than Technical Universities or Polytechnics. 
In Greece, all Higher Education Institutions (HEIs) are publicly owned and government-funded, with free undergraduate programmes that can be attended without payment of any tuition fee. About one in four HEI postgraduate programmes is offered without tuition fees, and around 30% of students can be exempted from tuition fees in the HEIs' fee-charging postgraduate programmes, after being assessed on an individual basis against criteria set out by the Ministry of Education. === Hong Kong === The first polytechnic in Hong Kong is The Hong Kong Polytechnic, established in 1972 by upgrading the Hong Kong Technical College (the Government Trade School before 1947). The second polytechnic, the City Polytechnic of Hong Kong, was founded in 1984. These polytechnics award diplomas, higher diplomas, as well as academic degrees. As in the United Kingdom, the two polytechnics were granted university status in 1994 and renamed The Hong Kong Polytechnic University and the City University of Hong Kong respectively. The Hong Kong University of Science and Technology, a university with a focus on applied science, engineering and business, was founded in 1991. === Hungary === The world's first institute of technology, the Berg-Schola (Bergschule), was established in Selmecbánya, Kingdom of Hungary, by the Court Chamber of Vienna in 1735, providing further education to train specialists in precious metal and copper mining. In 1762 the institute was raised to the rank of an Academia providing higher education courses. After the Treaty of Trianon the institute had to be moved to Sopron. The University of Miskolc was re-established in 1949 as the Technical University of Heavy Industry in Miskolc and in 1990 as the University of Miskolc. The university is the successor of the University of Mining and Metallurgy of Selmecbánya (est. as Bergschule 1735). 
Budapest University of Technology and Economics, one of the oldest institutes of technology in the world, is located in Budapest (est. 1782). The BME was the first university in Europe to award engineering degrees. University of Debrecen – Faculty of Engineering University of Dunaújváros Pallasz Athéné University – GAMF University of Nyíregyháza – Institute of Technical and Agricultural Sciences University of Sopron – The university is a successor of the University of Mining and Metallurgy of Selmecbánya (est. as Bergschule 1735). University of Szeged – Faculty of Engineering Szent István University Széchenyi István University University of Pannonia University of Pécs – Faculty of Engineering and Information Technology Óbuda University === India === There are Indian Institutes of Technology, Indian Institutes of Information Technology, and National Institutes of Technology in India, which are autonomous public institutions. These institutions are Institutes of National Importance, and hence each of the institutions is autonomous. All Indian Institutes of Technology, Indian Institutes of Information Technology, and National Institutes of Technology have their own councils, which are headed by the President of India. The activities of these institutions are generally governed by the institutes alone, but sometimes they are bound to follow the directives of the Ministry of Education (India) and are answerable to the Ministry of Education (India) and the President of India. Some departments of some of these institutions are bound to follow certain guidelines of the National Board of Accreditation (NBA) if they receive accreditation from the NBA. However, unlike other institutions, it is not mandatory for these institutes to follow the guidelines of the All India Council for Technical Education (AICTE) and the NBA completely. 
The authorities controlling technical education in India, other than at the Institutes of National Importance, are the All India Council for Technical Education (AICTE) and the National Board of Accreditation (NBA). === Indonesia === There are four public institutes of technology in Indonesia that are owned by the government of Indonesia. Beyond these, there are hundreds of other institutes owned by private or other institutions. The four public institutes are: Bandung Institute of Technology, Bandung Sepuluh Nopember Institute of Technology, Surabaya Kalimantan Institute of Technology, Balikpapan Sumatera Institute of Technology, Bandar Lampung Public state-owned polytechnics are also available and provide vocational education, offering either three-year Diploma degrees, similar to an associate degree, or four-year bachelor's degrees in applied sciences (Indonesian: Sarjana Terapan). More advanced vocational master's degrees are also available, and doctoral degrees are still in progress. Some notable polytechnics in Indonesia include the State Polytechnic of Jakarta, State Polytechnic of Bandung, State Polytechnic of Malang, State Electronics Polytechnic of Surabaya, and State Naval and Shipbuilding Polytechnic of Surabaya. These polytechnics are known to have been spun off from prestigious Indonesian universities and institutes of technology; e.g. the State Polytechnic of Jakarta was spun off from the University of Indonesia, while both Surabaya polytechnics were spun off from the Sepuluh Nopember Institute of Technology. === Iran === There are 18 technological universities in Iran. Amirkabir University of Technology (Tehran Polytechnic), Tehran Sharif University of Technology, Tehran Technical and Vocational University, 172 branches in Iran Iran University of Science and Technology, Tehran K. N. 
Toosi University of Technology, Tehran Petroleum University of Technology, Tehran and Ahwaz Isfahan University of Technology, Isfahan Sahand University of Technology, Tabriz Shiraz University of Technology, Shiraz Arak University of Technology, Arak Urmia University of Technology, Urmia Babol University of Technology, Babol Shahrood University of Technology, Shahrood Hamedan University of Technology, Hamedan Kermanshah University of Technology, Kermanshah Qom University of Technology, Qom Birjand University of Technology, Birjand Jondi-Shapur University of Technology, Dezful Sirjan University of Technology, Sirjan === Iraq === University of Technology, Iraq === Ireland === An "Institute of Technology" was formerly known as a Regional Technical College (RTC). The abbreviation IT is now widely used to refer to an Institute of Technology. These institutions offer sub-degree, degree and master's level studies. Unlike the Irish university system, an Institute of Technology also offers sub-degree programmes such as the two-year Higher Certificate programme in various academic fields of study. Some institutions have "delegated authority" that allows them to make doctoral awards in their own name, after authorisation by Quality and Qualifications Ireland. Dublin Institute of Technology developed separately from the Regional Technical College system and, after several decades of association with the University of Dublin, acquired the authority to confer its own degrees before becoming a member of TU Dublin. The approval of Ireland's first Technological University, TU Dublin, was announced in July 2018 and the new university was established on 1 January 2019. It is the result of a merger of three of the ITs in the County Dublin area: Dublin Institute of Technology, IT Tallaght and IT Blanchardstown. Several Technological Universities have since been set up in the country. Munster TU was established on 1 January 2021 through a merger of Cork IT and IT Tralee (Kerry). 
The Technological University of the Shannon: Midlands Midwest was the third such university, established in October 2021 out of Limerick IT and Athlone IT. The Atlantic Technological University was formally established on 1 April 2022 out of Galway-Mayo IT, IT Sligo, and Letterkenny IT. As a fifth such institution, the South East Technological University was established on 1 May 2022 out of Carlow IT and Waterford IT. As of May 2023, the only remaining Institutes of Technology in Ireland are Dundalk IT and the Dun Laoghaire Institute of Art, Design and Technology. The Technological Higher Education Association is the representative body for the various Institutes of Technology in Ireland.
=== Israel ===
Technion – Israel Institute of Technology
Holon Institute of Technology
=== Italy ===
In Italy, the term "technical institute" generally refers to a secondary school offering a five-year course that grants access to the university system. In higher education, Politecnico refers to a technical university awarding bachelor's, master's and PhD degrees in engineering. Historically there were two Politecnici, one in each of the two largest industrial cities of the north: the Politecnico di Torino, established in 1859, and the Politecnico di Milano, established in 1863. A third, the Politecnico di Bari, was added in the south in 1990. In 2003 the Libera Università di Ancona became the Università Politecnica delle Marche (Polytechnic University of the Marches). Many other universities also have a faculty of engineering. In 2003, the Ministry of Education, Universities and Research and the Ministry of Economy and Finance jointly established the Istituto Italiano di Tecnologia (Italian Institute of Technology), headquartered in Genoa with 10 laboratories around Italy; it focuses on research, not exclusively in engineering, and does not offer undergraduate degrees.
=== Jamaica ===
University of Technology, Jamaica, in Kingston, Jamaica
=== Japan ===
In Japan, an institute of technology (工業大学, kōgyō daigaku) is a type of university that specializes in the sciences. See also the Imperial College of Engineering, the forerunner of the University of Tokyo's engineering faculty.
National:
Tokyo Institute of Technology, 1929
Kyoto Institute of Technology, 1949
Muroran Institute of Technology, 1949
Nagoya Institute of Technology, 1949
Kyushu Institute of Technology, 1949
University of Electro-Communications, 1949
Tokyo University of Agriculture and Technology, 1949
Kitami Institute of Technology, 1966
Nagaoka University of Technology, 1976
Japan Advanced Institute of Science and Technology, 1986
Nara Institute of Science and Technology, 2006
Okinawa Institute of Science and Technology, 2011
Public:
Tokyo Metropolitan Institute of Technology, 1986
Maebashi Institute of Technology, 1997
Kochi University of Technology, 1997
Advanced Institute of Industrial Technology, 2006
Private:
Chiba Institute of Technology, 1942
Osaka Institute of Technology, 1949
Shibaura Institute of Technology, 1949
Tokyo Polytechnic University, 1950
Kobe Institute of Computing, 1958
Aichi Institute of Technology, 1959
Hiroshima Institute of Technology, 1963
Fukuoka Institute of Technology, 1963
Shonan Institute of Technology, 1963
Tohoku Institute of Technology, 1964
Kanazawa Institute of Technology, 1965
Fukui University of Technology, 1965
Nippon Institute of Technology, 1967
Hokkaido Institute of Technology, 1967
Ashikaga Institute of Technology, 1967
Hachinohe Technical University, 1972
Kanagawa Institute of Technology, 1975
Toyohashi University of Technology, 1976
Saitama Institute of Technology, 1976
Tokyo University of Technology, 1986
Kobe Design University, 1989
Tohoku University of Art and Design, 1991
Shizuoka Institute of Science and Technology, 1991
Niigata Institute of Technology, 1995
Aichi University of Technology, 2000
=== Kenya ===
In
Kenya, technical universities are special universities that focus on technical and engineering courses and offer certifications at artisan, craft, diploma, higher diploma, degree, master's and doctorate levels. They are former national polytechnics and are the only institutions of learning that offer the complete spectrum of tertiary education programs. They include the Technical University of Kenya (formerly Kenya National Polytechnic) in Nairobi and the Technical University of Mombasa (formerly Mombasa National Polytechnic) in Mombasa.
=== Jordan ===
Princess Sumaya University for Technology in Amman
Jordan University of Science and Technology in Irbid
Balqa Applied University in Salt
Tafila Technical University in Tafila
=== Macau ===
The first polytechnic in Macau was the Polytechnic Institute of the University of East Asia, established in 1981 as an institute of a private university. In 1991, following the splitting of the University of East Asia into three institutions (the University of Macau, the Macao Polytechnic Institute and the Asia International Open University), the Macao Polytechnic Institute was officially established as a public and independent polytechnic. The first private technology university, the Macau University of Science and Technology, was established in 2000. The Macao Polytechnic Institute was renamed Macao Polytechnic University in 2022.
=== Malaysia ===
Polytechnics
Polytechnics in Malaysia have been in operation since 1969. The institutions provide courses for bachelor's degrees and Bachelor of Science (BSc), Advanced Diploma, Diploma and Special Skills Certificate. The first polytechnic in Malaysia, Politeknik Ungku Omar, was established by the Ministry of Education in 1969 with the help of UNESCO and RM24.5 million from the United Nations Development Programme (UNDP). At present, Malaysia has 36 polytechnics all over the country providing engineering, agriculture, commerce, hospitality and design courses.
Technical Universities
There are four technical universities in Malaysia, all belonging to the Malaysian Technical University Network:
Universiti Tun Hussein Onn Malaysia
Universiti Malaysia Perlis
Universiti Teknikal Malaysia Melaka
Universiti Malaysia Pahang
=== Mauritius ===
The only technical university in Mauritius is the University of Technology, Mauritius, with its main campus situated in La Tour Koenig, Pointe aux Sables.
=== Mexico ===
In Mexico there are various institutes and colleges of technology, most of them public institutions. The National Technological Institute of Mexico (Spanish: Tecnológico Nacional de México, TecNM) is a Mexican public university system created on 23 July 2014 by presidential decree to unify 263 public institutes of technology that had been created since 1948 and are found all around Mexico. Another important institute of technology in Mexico is the National Polytechnic Institute (Instituto Politécnico Nacional), located in the northern region of Mexico City.
=== Moldova ===
Technical University of Moldova
=== Nepal ===
Institute of Engineering
CTEVT (Council for Technical Education and Vocational Training)
=== New Zealand ===
New Zealand polytechnics are established under the Education Act 1989 as amended and are typically considered state-owned tertiary institutions along with universities, colleges of education and wānanga; today there is often much crossover in courses and qualifications offered between all these types of tertiary education institutions. Some have officially taken the title "institute of technology", a term recognized in government strategies as equal to "polytechnic". One has opted for the name Universal College of Learning (UCOL) and another Unitec New Zealand; these are legal names but not recognized terms like "polytechnic" or "institute of technology".
Many if not all now grant at least bachelor-level degrees. Some colleges of education and institutes of technology are privately owned; their qualification levels vary widely. Since the 1990s, there has been consolidation in New Zealand's state-owned tertiary education system. In the polytechnic sector, Wellington Polytechnic amalgamated with Massey University. The Central Institute of Technology explored a merger with the Waikato Institute of Technology, which was abandoned; later, after financial concerns, it controversially amalgamated with Hutt Valley Polytechnic, which in turn became the Wellington Institute of Technology. Some smaller polytechnics in the North Island, such as Wairarapa Polytechnic, amalgamated with UCOL. (The only other amalgamations have been in the colleges of education.) The Auckland University of Technology is the only polytechnic to have been elevated to university status, while Unitec has had repeated attempts blocked by government policy and consequent decisions, and has not been able to convince the courts to overturn those decisions. In mid-February 2019, the Minister of Education, Chris Hipkins, proposed merging the country's sixteen polytechnics into a "NZ Institute of Skills and Technology" in response to deficits and a decline in domestic enrollments. This commenced with branding changes to 20 establishments in 2022 in preparation for their merger into Te Pūkenga.
=== Nigeria ===
Virtually every state in Nigeria has a polytechnic operated by either the federal or state government. In Rivers State, for example, there are two state-owned polytechnics: Kenule Beeson Saro-Wiwa Polytechnic in Bori City and the Rivers State College of Arts and Science in Port Harcourt. The former was established on 13 May 1988, while the latter, though founded in 1984, was approved by the NBTE in 2006. The first private polytechnic in the state is the Eastern Polytechnic, established in 2008.
=== Pakistan ===
Polytechnic institutes in Pakistan offer a three-year Diploma of Associate Engineering (DAE) in various engineering branches. Students are admitted to the diploma program based on their results in the 10th-grade standardized exams. The main purpose of the diploma is to train people in various trades. These institutes are located throughout Pakistan and date from the early 1950s. After successful completion of a diploma at a polytechnic, students can either seek employment or enroll in Bachelor of Technology (B.Tech) and Bachelor of Engineering (BE) degree programs. Universities of Engineering & Technology in Pakistan offer undergraduate (BE/BS/BSc Engineering) and postgraduate (ME/MS/MSc Engineering and PhD) degree programs in engineering. BE/BS/BSc Engineering is a professional degree in Pakistan: a four-year full-time program after the HSSC (higher secondary school certificate).
=== Palestine ===
University College of Applied Sciences (UCAS) is a technical college in Gaza founded in 1998. The college offers undergraduate degrees in several unique specializations such as education technology, technological management and planning, and geographic information systems.
=== Philippines ===
Mapúa University, the premier engineering school of the Philippines; an internationally accredited engineering school, it consistently tops various board exams for engineering students in the Philippines. FEU Institute of Technology, a premier engineering school known for its technological academic teaching and board topnotchers, operating under the Far Eastern University system.
Mindanao State University–Iligan Institute of Technology, the premier state university in the southern Philippines and the science and technology flagship campus of the Mindanao State University System (the second biggest university system in the Philippines after the University of the Philippines). Technological University of the Philippines, the premier state university of technology education in the Philippines. Technological Institute of the Philippines, an engineering school with international accreditation. Bicol University, a well-known university and center of teaching excellence that offers IT courses. Cebu Institute of Technology – University, a premier engineering school known for highly selective admissions and for excellence in engineering research and education. Cebu Technological University. Polytechnic University of the Philippines, a state university that consistently tops various board exams for engineering students in the Philippines, also referred to as the National Comprehensive University of the Philippines. Quezon City Polytechnic University, a local university well known for engineering, IT and technical education. Rizal Technological University, the only university that offers degree courses in astronomy.
=== Poland ===
Politechnika (translated as "technical university" or "university of technology") is the designation of a technical university in Poland. Some of the larger polytechnics in Poland are:
Politechnika Śląska
Politechnika Wrocławska
Politechnika Warszawska
Politechnika Poznańska
Politechnika Krakowska
Politechnika Gdańska
Politechnika Łódzka
Politechnika Białostocka
Politechnika Lubelska
Other polytechnic universities:
Akademia Górniczo-Hutnicza
Uniwersytet Technologiczno-Przyrodniczy im. Jana i Jędrzeja Śniadeckich w Bydgoszczy (University of Technology and Life Sciences in Bydgoszcz)
Zachodniopomorski Uniwersytet Technologiczny (West Pomeranian University of Technology)
=== Portugal ===
Until recently, there was a Technical University of Lisbon (UTL). It included several of the country's most prestigious schools, including an engineering school (Instituto Superior Técnico) and one of the oldest business schools in the world (ISEG Lisbon), but UTL has since merged into the University of Lisbon. There are also a number of non-university higher educational institutions, called polytechnic institutes since the 1970s. Some of these institutions have existed since the 19th century under different designations (industrial and commercial institutes, schools for agricultural managers, elementary teachers and nurses, etc.). In theory, the polytechnic higher education system aims to provide more practical, profession-oriented training, while the university system has a stronger theoretical basis and is highly research-oriented. The polytechnics are also oriented towards shorter programmes that respond to local needs, and can be compared to US community colleges. Since the implementation of the Bologna Process in Portugal in 2007, the polytechnics offer the first cycle (licentiate degree) and second cycle (master's degree) of higher studies. Until 1998, the polytechnics only awarded bachelor (Portuguese: bacharelato) degrees (three-year short-cycle degrees) and were not authorized to award higher degrees. They did, however, grant post-bachelor diplomas in specialized higher studies (DESE, diploma de estudos superiores especializados), which could be obtained after a two-year second cycle of studies and were academically equivalent to the universities' licentiate degrees (licenciatura).
After 1998, they were allowed to confer their own licentiate degrees, which replaced the DESE diplomas.
=== Romania ===
Politehnica University of Bucharest, 1864
Polytechnic University of Timișoara, 1920
Gheorghe Asachi Technical University of Iași, 1937
Technical University of Cluj-Napoca, 1948
Technical University of Civil Engineering of Bucharest, 1948
Oil & Gas University of Ploieşti, 1948
University of Petroşani, 1948
Technical Military Academy of Bucharest, 1949
=== Russia ===
Bauman Moscow State Technical University
Saint Petersburg Polytechnical University
Novosibirsk State Technical University
Tomsk Polytechnic University
Moscow Polytechnic University
=== Singapore ===
Polytechnics in Singapore do not offer bachelor's, master's or doctoral degrees. They offer three-year diploma courses in fields ranging from applied sciences to business, information technology, humanities, social sciences, and other vocational fields such as engineering and nursing. There are five polytechnics in Singapore: Singapore Polytechnic, Ngee Ann Polytechnic, Temasek Polytechnic, Nanyang Polytechnic and Republic Polytechnic. The polytechnic diploma in Singapore is equivalent to an associate degree obtainable at community colleges in the United States. It is also considered parallel, and sometimes equivalent, to the first years at a bachelor's degree-granting institution; polytechnic graduates in Singapore may therefore be granted transfer credits or module exemptions when they apply to local or overseas universities, depending on each university's policies on transfer credits. The only university in Singapore with the term "institute of technology" in its name, the Singapore Institute of Technology, was established in 2009 as an option for polytechnic graduates who wish to pursue a bachelor's degree.
Other technological universities in Singapore include the Nanyang Technological University and the Singapore University of Technology and Design.
=== Slovakia ===
Slovak University of Technology in Bratislava. The world's first institution of technology with tertiary technical education was the Banská Akadémia in Banská Štiavnica, Slovakia, founded in 1735 and raised to the status of an academy on 13 December 1762 by Queen Maria Theresa, in order to train specialists in silver and gold mining and metallurgy for the surrounding region. Teaching started in 1764. Later, a department of Mathematics, Mechanics and Hydraulics and a department of Forestry were established. The academy's buildings still stand today and are used for teaching, and the academy published the first book on electrotechnics in the world. Other Slovak institutions include:
Technical University of Košice
University of Žilina
Technical University in Zvolen
Trenčín University in Trenčín
Dubnica Technology Institute
=== South Africa ===
In South Africa, there was a division between universities and technikons (polytechnics), as well as between institutions serving particular racial and language groupings. By the mid-2000s, former technikons had either been merged with traditional universities to form comprehensive universities or had become universities of technology; however, the universities of technology have not to date acquired all of the traditional rights and privileges of a university (such as the ability to confer a wide range of degrees).
=== Spain ===
Universidad Politécnica de Madrid
Universitat Politècnica de Catalunya
Universitat Politècnica de València
Universidad Politécnica de Cartagena
=== Sri Lanka ===
University of Moratuwa
Institute of Technology, University of Moratuwa
University of Vocational Technology
Sri Lanka Institute of Information Technology
Technical College
=== Sweden ===
KTH Royal Institute of Technology, Stockholm
Chalmers University of Technology, Gothenburg
The Institute of Technology at Linköping University, Linköping
Faculty of Engineering (LTH), Lund University, Lund
Luleå University of Technology, Luleå
Blekinge Institute of Technology, Blekinge
=== Switzerland ===
Eidgenössische Technische Hochschule Zürich (ETH Zurich)
École Polytechnique Fédérale de Lausanne (EPFL)
=== Taiwan ===
In Taiwan, students enter higher education either from ordinary high schools, which mainly prepare them for regular universities, or from technical high schools, which prepare them for work or for universities of technology. Almost all students take the same entrance test, whose score can be used to apply to both kinds of institution, and universities do not consider which type of high school an applicant attended.
National Taiwan University of Science and Technology
National Taipei University of Technology
National Taichung University of Science and Technology
National Yunlin University of Science and Technology
National Formosa University
National Kaohsiung University of Science and Technology
National Pingtung University of Science and Technology
=== Thailand ===
Most of Thailand's institutes of technology developed from technical colleges and in the past could not grant bachelor's degrees; today, however, they are university-level institutions, some of which can grant degrees up to the doctoral level.
Examples are the Pathumwan Institute of Technology (developed from the Pathumwan Technical School), King Mongkut's Institute of Technology Ladkrabang (Nondhaburi Telecommunications Training Centre) and King Mongkut's Institute of Technology North Bangkok (Thai-German Technical School). Two former institutes of technology have since changed their names to "University of Technology": Rajamangala University of Technology (formerly the Institute of Technology and Vocational Education) and King Mongkut's University of Technology Thonburi (formerly the Thonburi Technology Institute). Institutes of technology with different origins are the Asian Institute of Technology, which developed from the SEATO Graduate School of Engineering, and the Sirindhorn International Institute of Technology, an engineering school of Thammasat University. Suranaree University of Technology is the only government-owned technological university in Thailand that was established (in 1989) as such, while Mahanakorn University of Technology is the best-known private technological institute. A number of technical colleges in Thailand are associated with bitter rivalries that erupt into frequent off-campus brawls and killings of students in public places. These have gone on for nearly a decade, innocent bystanders are commonly among the injured, and even the military under martial law has been unable to stop them.
=== Turkey ===
In Turkey, with historical roots extending back to the Ottoman Empire, Istanbul Technical University is recognized as the oldest technical university, established in 1773. Karadeniz Technical University in Trabzon was established in 1955, and Middle East Technical University in Ankara followed closely, founded in 1956. More recent developments include the transformation of Yıldız University into Yıldız Technical University, along with the establishment of Gebze Technical University in Kocaeli and İzmir Institute of Technology in İzmir.
Additionally, the technical education landscape broadened with the founding of Bursa Technical University in Bursa in 2010.
=== Ukraine ===
Dnipro Polytechnic
Donbas State Technical University
Donetsk National Technical University
Kyiv Polytechnic Institute
Kharkiv Polytechnic Institute
Lviv Polytechnic
=== United Kingdom ===
Institutes of Technology
The UK Government defines institutes of technology as "Business-led Institutes of Technology [that] offer higher level technical education to help close skills gaps in key STEM areas". They deliver qualifications from level 3 (T-levels) to level 7 (master's degrees). The government is investing £300 million in developing a network of 21 institutes of technology across England, with 19 open as of September 2023 and two further institutes expected to open in September 2024.
Polytechnics
Polytechnics were tertiary education teaching institutions in England, Wales and Northern Ireland; the comparable institutions in Scotland were collectively referred to as central institutions. From 1965 to 1992, UK polytechnics operated under the binary system of education alongside universities. Polytechnics offered diplomas and degrees (bachelor's, master's, PhD) validated at the national level by the Council for National Academic Awards (CNAA). Initially they concentrated on engineering and applied science degree courses and other STEM subjects, similar to technological universities in the US and continental Europe. Polytechnics were associated with innovations including women's studies, the academic study of communications and media, sandwich degrees, and the rise of management and business studies. Britain's first polytechnic, the Royal Polytechnic Institution, later known as the Polytechnic of Central London (now the University of Westminster), was established in 1838 at Regent Street in London; its goal was to educate and popularize engineering and scientific knowledge and inventions in Victorian Britain "at little expense".
The London Polytechnic led a mass movement to create numerous polytechnic institutes across the UK in the late 19th century. Most polytechnic institutes were established at the center of major metropolitan cities, with a focus on engineering, applied science and technology education. The designation "institute of technology" was occasionally used by polytechnics (Bolton), central institutions (Dundee, Robert Gordon's) and the Cranfield Institute of Technology (now Cranfield University), most of which later adopted the designation "university". There were also two "institutes of science and technology": UMIST and UWIST (part of the University of Wales). Loughborough University was called Loughborough University of Technology from 1966 to 1996, the only institution in the UK to have had such a designation. The University of Strathclyde was the Royal Technical College from 1912 to 1956 and then the Royal College of Science and Technology from 1956 until granted university status in 1964. Polytechnics were granted university status under the Further and Higher Education Act 1992, meaning they could confer degrees without the oversight of the national CNAA organization. These institutions are sometimes referred to as post-1992 universities.
Technical colleges
In 1956, some colleges of technology received the designation "college of advanced technology". They became universities in 1966, meaning they could award their own degrees. Institutions called "technical institutes" or "technical schools", formed in the early 20th century, provided further education between high school and university or polytechnic. Most technical institutes have been merged into regional colleges, and some have been designated university colleges where they are associated with a local university.
=== United States ===
Polytechnic institutes in the United States are technological universities, many dating back to the mid-19th century.
A handful of American universities include the phrases "Institute of Technology", "Polytechnic Institute", "Polytechnic University" or similar phrasing in their names; these are generally research-intensive universities with a focus on engineering, science and technology. Conversely, schools dubbed "technical colleges" or "technical institutes" generally provide post-secondary training in technical and mechanical fields, focusing on vocational skills primarily at a community college level, parallel and sometimes equivalent to the first two years at a bachelor's degree-granting institution. Some of America's earliest institutes of technology include Rensselaer Polytechnic Institute (1824), Rochester Institute of Technology (1829), Brooklyn Collegiate and Polytechnic Institute (1854), Massachusetts Institute of Technology (1861), and Worcester Polytechnic Institute (1865).
=== Venezuela ===
Institutes of technology in Venezuela were developed in the 1950s as an option for post-secondary education in technical and scientific courses, following the French polytechnic model. At that time, technical education was considered essential for the development of a sound middle-class economy. Nowadays, most of the Institutos de Tecnología are privately run businesses of varying quality, and most award diplomas after three or three and a half years of education. The university institute of technology (IUT, from Spanish: Instituto universitario de tecnología) model began with the creation of the first IUT in Caracas, the capital city of Venezuela, called IUT Dr. Federico Rivero Palacio. It adopted the French "Institut universitaire de technologie" system, using French personnel and a study system based on three-year periods, with research and engineering facilities at the same level as the main national universities, in order to grant French-equivalent degrees.
This IUT remains the first and only one in Venezuela whose degrees are accepted as French equivalents. Following its example and the high level of its degrees, other IUTs were created in Venezuela; however, the term IUT was not always used appropriately, resulting in some institutions of mediocre quality whose degrees have no equivalent in France. Later, some private institutions sprang up using IUT in their names, but they are not regulated by the original French system and award lower-quality degrees.
=== Vietnam ===
Da Nang University of Technology
FPT University
Hanoi University of Science and Technology
Ho Chi Minh City University of Technology
Le Quy Don Technical University
VNU University of Engineering and Technology
== See also ==
Comparison of US and UK Education
Engineer's degree
List of forestry universities and colleges
List of institutions using the term "institute of technology" or "polytechnic"
List of schools of mines
Secondary Technical School
University of Science and Technology
Vocational university
== References ==
== External links ==
Fitch, Joshua Girling; Garnett, William (1911). "Polytechnic". Encyclopædia Britannica (11th ed.).
https://en.wikipedia.org/wiki/Institute_of_technology
Clean technology, also called cleantech or climate tech, is any process, product, or service that reduces negative environmental impacts through significant energy efficiency improvements, the sustainable use of resources, or environmental protection activities. Clean technology includes a broad range of technologies related to recycling, renewable energy, information technology, green transportation, electric motors, green chemistry, lighting, grey water, and more. Environmental finance is a method by which new clean technology projects can obtain financing through the generation of carbon credits. A project that is developed with concern for climate change mitigation is also known as a carbon project. Clean Edge, a clean technology research firm, describes clean technology as "a diverse range of products, services, and processes that harness renewable materials and energy sources, dramatically reduce the use of natural resources, and cut or eliminate emissions and wastes." Clean Edge notes that "clean technologies are competitive with, if not superior to, their conventional counterparts. Many also offer significant additional benefits, notably their ability to improve the lives of those in both developed and developing countries." Investments in clean technology have grown considerably since coming into the spotlight around 2000. According to the United Nations Environment Programme, wind, solar, and biofuel companies received a record $148 billion in new funding in 2007, as rising oil prices and climate change policies encouraged investment in renewable energy; $50 billion of that funding went to wind power. Overall, investment in clean-energy and energy-efficiency industries rose 60 percent from 2006 to 2007. In 2009, Clean Edge forecast that the three main clean technology sectors—solar photovoltaics, wind power, and biofuels—would have revenues of $325.1 billion by 2018.
According to an MIT Energy Initiative working paper published in July 2016, about half of the more than $25 billion in venture capital funding provided to cleantech from 2006 to 2011 was never recovered. The report cited cleantech's poor risk/return profiles and the inability of companies developing new materials, chemistries, or processes to achieve manufacturing scale as contributing factors to the sector's poor performance. Clean technology has also emerged as an essential topic among businesses and companies. It can reduce pollutants and dirty fuels for every company, regardless of industry, and using clean technology has become a competitive advantage. Through their Corporate Social Responsibility (CSR) goals, companies participate in using clean technology and other means of promoting sustainability; Fortune Global 500 firms spent around $20 billion a year on CSR activities in 2018. Silicon Valley, Tel Aviv and Stockholm have been ranked as leading ecosystems in the field of clean technology. According to data from 2024, there are over 750,000 international patent families (IPFs) focused on clean and sustainable technologies worldwide, representing approximately 12% of the total number of IPFs globally. From 1997 to 2021, over 750,000 patents for clean and sustainable technologies were published, making up almost 15% of all patents in 2021, compared to just under 8% in 1997. Japan and the US each account for over 20% of clean technology patents, though their annual numbers have stabilized at around 10,000. Between 2017 and 2021, European countries accounted for over 27% of IPFs in clean technology globally, placing Europe ahead of other major innovators such as Japan (21%), the United States (20%), and China (15%). Cleantech patenting has advanced in two major stages. The first is from 2006 to 2021, driven by the EU and Japan (accounting for 27% and 26% of the overall increase in IPFs, respectively).
The next stage is from 2017 to 2021, led by China, which accounted for 70% of the increase in IPFs. == Definition == Cleantech products or services are those that improve operational performance, productivity, or efficiency while reducing costs, inputs, energy consumption, waste, or environmental pollution. The term originates in increased consumer, regulatory, and industry interest in clean forms of energy generation—specifically, the rise in awareness of global warming, climate change, and the impact on the natural environment caused by the burning of fossil fuels. Cleantech is often associated with venture capital funds and land use organizations. The term has traditionally been differentiated from various definitions of green business, sustainability, or triple bottom line industries by its origins in the venture capital investment community, and it has grown to define a business sector that includes significant and high-growth industries such as solar, wind, water purification, and biofuels. === Nomenclature === While the expanding industry has grown rapidly in recent years and attracted billions of dollars of capital, the clean technology space has not settled on an agreed-upon term. Cleantech is used fairly widely, although variant spellings include ⟨clean-tech⟩ and ⟨clean tech⟩. In recent years, some clean technology companies have de-emphasized that aspect of their business to tap into broader trends, such as smart cities. == Origins of the concept == The idea of cleantech first emerged among a group of emerging technologies and industries based on principles of biology, resource efficiency, and second-generation production concepts in basic industries. Examples include energy efficiency, selective catalytic reduction, non-toxic materials, water purification, solar energy, wind energy, and new paradigms in energy conservation. 
Since the 1990s, interest in these technologies has increased with two trends. One is a decline in the relative cost of these technologies; another is a growing understanding of the link between the industrial designs of the 19th and early 20th centuries—such as fossil fuel power plants, the internal combustion engine, and chemical manufacturing—and an emerging understanding of human-caused impact on earth systems resulting from their use (see articles: ozone hole, acid rain, desertification, climate change, and global warming). == Investment worldwide == During the last twenty years, regulatory schemes and international treaties have been the main factors that defined the investment environment of clean technologies. Investments in renewable sources, as well as in technologies for energy efficiency, are a determining factor in the investments made under the Paris Agreement and the fight against climate change and air pollution. Among public-sector financing sources, governments have used financial incentives and regulations targeting the private sector. Collectively, these measures have driven the continued increase in clean energy capacity. Investments in renewable electricity generation technologies were over $308 billion in 2015, and by 2019 this figure had risen to $311 billion. Startups with new technology-based innovation are considered to be an attractive investment in the clean technology sector. Venture capital and crowdfunding platforms are crucial sources for developing ventures that lead to the introduction of new technologies. In the last decade, startups have contributed significantly to the increase in installed capacity for solar and wind power. These trendsetting firms design new technologies and devise strategies for the industry to excel and become more resilient in the face of threats. In 2008, clean technology venture investments in North America, Europe, China, and India totaled a record $8.4 billion. 
Cleantech venture capital firms include NTEC, Cleantech Ventures, and Foundation Capital. The preliminary 2008 total represents the seventh consecutive year of growth in venture investing, which is widely recognized as a leading indicator of overall investment patterns. Investment in clean technology has grown significantly, with a considerable impact on production costs and productivity, especially within energy-intensive industries. The World Bank notes that these investments are enhancing economic efficiency, supporting sustainable development objectives, and promoting energy security by decreasing dependence on fossil fuels. China is currently seen as a major growth market for cleantech investments, with a focus on renewable energy technologies. In 2014, Israel, Finland and the US were leading the Global Cleantech Innovation Index, out of 40 countries assessed, while Russia and Greece were last. Renewable energy investment has achieved substantial scale, with annual investments around $300 billion. This volume of investment is fundamental to the global energy transition and has persisted despite an R&D funding plateau, reflecting the sector's healthy expansion and appreciation of renewable technology's promise. Several journals offer in-depth analyses and forecasts of this investment trend, stressing its significant role in the attainment of world energy and climate targets. With regard to private investments, the investment group Element 8 received the 2014 CleanTech Achievement award from the CleanTech Alliance, a trade association focused on clean tech in the State of Washington, for its contribution to the state's cleantech industry. Strategic investments in clean technologies within supply chains are increasingly influenced by sustainable market forces. 
These investments are vital for manufacturers, not only enhancing the sustainability of production processes but also encouraging a comprehensive transition towards sustainability across the entire supply chain. Detailed case studies and industry analyses highlight the economic and environmental benefits of such strategic investments. According to the published research, the top clean technology sectors in 2008 were solar, biofuels, transportation, and wind. Solar accounted for almost 40% of total clean technology investment dollars in 2008, followed by biofuels at 11%. In 2019, sovereign wealth funds directly invested just under US$3 billion in renewable energy. The 2009 United Nations Climate Change Conference in Copenhagen, Denmark was expected to create a framework whereby limits would eventually be placed on greenhouse gas emissions. Many proponents of the cleantech industry hoped for an agreement to be established there to replace the Kyoto Protocol. As this treaty was expected, scholars had suggested a profound and inevitable shift from "business as usual." However, the participating states failed to agree on a global framework for clean technologies. The onset of the 2008 economic crisis then hampered private investments in clean technologies, which returned to their 2007 level only in 2014. The 2015 United Nations Climate Change Conference in Paris was expected to achieve a universal agreement on climate, which would foster clean technology development. On 23 September 2019, the Secretary-General of the United Nations hosted a Climate Action Summit in New York. In 2022 investment in cleantech (also called climate tech) boomed: "In fact, climate tech investment in the 12 months to Q3 2022 represented more than a quarter of every venture dollar invested, a greater proportion than 12 of the prior 16 quarters." The US leads in carbon capture technologies, with nearly 30% of patents. 
It also leads in plastic recycling and climate change adaptation technologies, but has a lower share in low-carbon energy (13%). Japan excels in hydrogen-related (29.3%) and low-carbon energy technologies (26.2%). Chinese applicants dominate the field of ICT-related clean technologies, accounting for more than 37% of patents between 2017 and 2021. Meanwhile, South Korean applicants make notable contributions in ICT with 12.6%, in hydrogen technologies with 13%, and in low-carbon energy with 15.5%. About half of the EU's clean technologies are in the launch or early revenue stage, 22% are in the scale-up stage, and 10% are mature or consolidating. The European Commission estimates that an additional €477 million in annual investment is needed for the European Union to meet its Fit-for-55 decarbonization goals. The European Green Deal has fostered policies that contributed to a 30% rise in venture capital for greentech companies in the EU from 2021 to 2023, despite a downturn in other sectors during the same period. Key areas, such as energy storage, circular economy initiatives, and agricultural technology, have benefited from increased investments, supported by the EU's ambitious goal to reduce greenhouse gas emissions by at least 55% by 2030. == Cleantech innovation hubs == === Israel === Israel has 600 companies in the cleantech sector. The Tel Aviv region was ranked second in the world by Startup Genome for cleantech ecosystems. Israel, due to its geopolitical situation and harsh climate, was forced to adopt technologies that are today considered part of the cleantech sector. Following the scarcity of oil after the 1973 embargo on Israel, the country turned to renewable energy in the 1970s, and from 1976 all new residential buildings were required to install solar water heating. As of 2020, 85% of water heating in Israel is done through renewable energy. Water scarcity also led Israelis to develop the modern drip irrigation system. 
Netafim, founded in 1965, was the company that developed the technology and is now valued at about $1.85 billion. Israel also operates Israel Cleantech Ventures, which funds cleantech startups. Jerusalem hosts a yearly cleantech conference. UBQ, an Israeli startup that converts waste into an environmentally friendly plastic substitute, secured $70 million in funding in 2023. === Silicon Valley === Silicon Valley is the world's leading cleantech ecosystem according to Startup Genome's ranking. In 2020, investments in cleantech there reached $17 billion. == Implementation worldwide == === China and Latin America === Investment in green technology and renewable energy in China is rapidly increasing. Latin America has one of the world's highest shares of renewable electricity, with 60% of its electricity coming from renewable sources. The region is rich in the minerals needed to make green technologies. Latin America needs Chinese technology to turn its abundant resources into electricity. Last year, about 99% of solar panels imported into Latin America were made in China, as were about 70% of imported electric vehicles and more than 90% of imported lithium-ion batteries. Latin America is increasingly relying on Chinese green technology, from electric buses to solar panels. === India === India is one of the countries that have achieved remarkable success in sustainable development by implementing clean technology, and it has become a global clean energy powerhouse. India, which was the third-largest emitter of greenhouse gases, advanced a scheme to shift from fossil fuels to renewable energy from sun and wind. This continuous effort has increased the country's renewable energy capacity (around 80 gigawatts of installed renewable energy capacity in 2019), with a compound annual growth rate of over 20%. India's ambitious renewable energy targets have become a model for a swift clean energy shift. 
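The capacity growth cited above is expressed as a compound annual growth rate (CAGR). As a minimal sketch, with hypothetical capacity figures rather than India's actual data:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# Hypothetical illustration: capacity growing from 20 GW to 80 GW over 7 years
rate = cagr(20, 80, 7)
print(f"{rate:.1%}")  # prints 21.9%
```

Sustained over seven years, a rate just above 20% per year corresponds to roughly a fourfold increase in installed capacity.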
The government aimed to reach 175 GW of renewable energy capacity by 2022, including large contributions from wind (60 GW) and solar energy (100 GW). By steadily increasing its renewable capacity, India is meeting its Paris Agreement commitments through a significant reduction in carbon emissions. Adopting renewable energy not only brought technological advances to India, but also impacted employment in the renewable energy sector, creating around 330,000 new jobs by 2022 and, according to the International Labour Organization, more than 24 million new jobs by 2030. In spite of global successes, the introduction of renewable energy is confronted with hurdles specific to each country or region. These challenges encompass social, economic, technological, and regulatory dimensions. Research shows that social and regulatory barriers are direct factors affecting the deployment of renewable energy, while economic barriers have a more indirect, yet substantial, effect. The research emphasises the need to remove these obstacles so that renewable energy becomes more available and attractive, benefiting all parties, such as local communities and producers. Despite the prevalence of obstacles, emerging economies have formulated creative approaches to deal with the challenges. For example, India has shown significant progress in the renewable energy sector, a trend that encourages the adoption of clean technologies in other countries. The particular approaches and problems that each country experiences in the course of sustainable growth yield useful ideas for further development. The creation of clean technologies such as battery storage, CCS, and advanced biofuels is important for the achievement of sustainable energy systems. Uninterrupted research and development is critical to improving the productivity of renewable energy sources and making them more attractive for investment. 
These developments are part of the wider goals related to sustainability and addressing climate change. A further factor that determines the success of clean technology is how it is perceived by the public, along with its social impact. Community involvement and observable benefits of these technologies can influence their adoption and popularity. The idea of shared benefits is created by making renewable energy solutions environmentally friendly, cost-effective, and beneficial to producers. === Germany === Germany has been one of the world's leaders in renewable energy, and its efforts accelerated after the nuclear power plant meltdown in Japan in 2011, when it decided to switch off all 17 of its reactors by 2022. Still, this is just one of Germany's goals: the country aims to raise the share of renewable energy to 80% by 2050, up from 47% in 2020. The Energiewende in Germany is a model of a devoted effort to renewable energy, aimed at decreasing greenhouse gas (GHG) emissions by 80% by 2050 through the accelerated adoption of renewable resources. This policy, which addresses environmental issues and the nationwide agreement on nuclear power abolition, illustrates the essential role of government policy and investment in directing technological adoption and providing a pathway towards sustainable energy use. Obstacles to making the Energiewende a model for the transportation and heating sectors include the integration of renewable energies into existing infrastructure, the economic costs associated with transitioning technologies, and the need for widespread consumer adoption of new energy solutions. Germany is also investing in renewable energy from offshore wind and anticipates that this investment will account for one-third of its total wind energy. The importance of clean technology has also affected Germany's transportation sector, which produces 17 percent of the country's emissions. 
Germany's famous carmakers, Mercedes-Benz, BMW, Volkswagen, and Audi, are also producing new electric cars to support Germany's energy transition. === Africa and the Middle East === The region has drawn worldwide attention for its potential share of the new market for solar electricity. Notably, the countries of the Middle East have been utilizing their natural resources, an abundance of oil and gas, to develop solar electricity. To promote renewable energy, the energy ministers of 14 Arab countries signed a Memorandum of Understanding for an Arab Common Market for electricity, committing to develop the electricity supply system with renewable energy. Sustainability, when combined with clean technology, focuses on the central environmental challenge of balancing demands on Earth's resources against the requirements of rapid industrialization and energy consumption. Technological innovation plays a paramount role in sustainable development across fields such as energy, agriculture, and infrastructure. Sustainability initiatives utilize contemporary science as well as green technologies such as renewable energy sources and efficient energy conversion systems to minimize environmental effects and promote economic and social welfare. This approach is consistent with sustainable development objectives, since it offers measures that do not deplete natural resources but instead supply low-emission forms of energy. == List of Clean Tech hubs == The following is a 2021 ranking of clean technology ecosystems. == United Nations: Sustainable Development Goals == The United Nations has set goals for the 2030 Agenda for Sustainable Development, known as the Sustainable Development Goals, composed of 17 goals and 232 indicators in total. These goals are designed to build a sustainable future and to be implemented by UN member states. 
Many parts of the 17 goals are related to the usage of clean technology, since it is an essential part of designing a sustainable future in areas such as land, cities, industries, and climate. Goal 6: "Ensure availability and sustainable management of water and sanitation for all" Various kinds of clean water technology are used to fulfill this goal, such as filters, desalination technology, and filtered water fountains for communities. Goal 7: "Ensure access to affordable, reliable, sustainable and modern energy for all" Efforts to promote the implementation of renewable energy are making remarkable progress; for example: "From 2012 to 2014, three quarters of the world's 20 largest energy-consuming countries had reduced their energy intensity — the ratio of energy used per unit of GDP. The reduction was driven mainly by greater efficiencies in the industry and transport sectors. However, that progress is still not sufficient to meet the target of doubling the global rate of improvement in energy efficiency." Goal 11: "Make cities and human settlements inclusive, safe, resilient and sustainable" In designing sustainable cities and communities, clean technology plays a part in architecture, transportation, and the city environment. For example: Global Fuel Economy Initiative (GFEI) - Relaunched to accelerate progress on decarbonizing road transport. Its main goal for passenger vehicles, in line with SDG 7.3, is to double the energy efficiency of new vehicles by 2030. This will also help mitigate climate change by reducing harmful CO2 emissions. Goal 13: "Take urgent action to combat climate change and its impacts" Greenhouse gas emissions have significantly impacted the climate, and this calls for rapid solutions to consistently increasing emission levels. The United Nations concluded the Paris Agreement to deal with greenhouse gas emissions mainly at the national level and to find solutions and set goals. 
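The energy-intensity measure quoted under Goal 7 is simply energy consumed divided by economic output. A minimal sketch with hypothetical figures (the country data below are made up for illustration):

```python
def energy_intensity(energy_mj, gdp_usd):
    """Energy intensity: energy used per unit of GDP (here, MJ per US dollar)."""
    return energy_mj / gdp_usd

# Hypothetical country: energy use falls while GDP grows
before = energy_intensity(5_000e9, 1.0e12)  # 5.0 MJ per dollar
after = energy_intensity(4_500e9, 1.1e12)   # about 4.09 MJ per dollar
reduction = 1 - after / before
print(f"{reduction:.1%}")  # prints 18.2%
```

A falling ratio means the economy produces more output per unit of energy, which is the improvement the quoted UN passage tracks.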
== See also == Environmental science Greentech (disambiguation) Sustainable engineering WIPO GREEN == References == == External links == Investing: Green technology has big growth potential, Los Angeles Times, 2011 The Global Cleantech Innovation Index 2014, by Cleantech Group and WWF
https://en.wikipedia.org/wiki/Clean_technology
The technology acceptance model (TAM) is an information systems theory that models how users come to accept and use a technology. Actual system use is the end-point where people use the technology. Behavioral intention is the factor that leads people to use the technology. Behavioral intention (BI) is influenced by attitude (A), the general impression of the technology. The model suggests that when users are presented with a new technology, a number of factors influence their decision about how and when they will use it, notably: Perceived usefulness (PU) – This was defined by Fred Davis as "the degree to which a person believes that using a particular system would enhance their job performance". It means whether or not someone perceives that technology to be useful for what they want to do. Perceived ease-of-use (PEOU) – Davis defined this as "the degree to which a person believes that using a particular system would be free from effort". If the technology is easy to use, then the barrier is conquered. If it is not easy to use and the interface is complicated, users are unlikely to form a positive attitude towards it. External variables, such as social influence, are also important factors in determining attitude. When these factors are in place, people will have the attitude and intention to use the technology. However, perceptions may vary depending on factors such as age and gender. The TAM has been continuously studied and expanded—the two major upgrades being the TAM 2 and the unified theory of acceptance and use of technology (or UTAUT). A TAM 3 has also been proposed in the context of e-commerce with the inclusion of the effects of trust and perceived risk on system use. == Background == TAM is one of the most influential extensions of Ajzen and Fishbein's theory of reasoned action (TRA) in the literature. 
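The causal chain just described (perceived usefulness and ease of use shape attitude, which shapes behavioral intention) can be illustrated with a toy linear sketch. The function name and all weights below are purely illustrative, not estimates from Davis's studies:

```python
def tam_sketch(pu, peou, w_pu=0.5, w_peou=0.3):
    """Toy TAM path sketch on 1-7 Likert-style scores.

    Attitude is modeled as a weighted sum of perceived usefulness (PU)
    and perceived ease of use (PEOU); behavioral intention combines
    attitude with a direct PU effect, mirroring TAM's PU -> BI path.
    All weights are hypothetical.
    """
    attitude = w_pu * pu + w_peou * peou
    intention = 0.6 * attitude + 0.4 * pu
    return attitude, intention

# A system rated useful (6/7) and fairly easy to use (5/7)
attitude, bi = tam_sketch(pu=6, peou=5)
```

In empirical TAM studies these weights would be estimated from survey data with regression or structural equation modeling; the sketch shows only the direction of the paths.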
Davis's technology acceptance model (Davis, 1989; Davis, Bagozzi, & Warshaw, 1989) is the most widely applied model of users' acceptance and usage of technology (Venkatesh, 2000). It was developed by Fred Davis and Richard Bagozzi. TAM replaces many of TRA's attitude measures with the two technology acceptance measures—ease of use and usefulness. TRA and TAM, both of which have strong behavioural elements, assume that when someone forms an intention to act, they will be free to act without limitation. In the real world there will be many constraints, such as limited freedom to act. Bagozzi, Davis and Warshaw say: Because new technologies such as personal computers are complex and an element of uncertainty exists in the minds of decision makers with respect to the successful adoption of them, people form attitudes and intentions toward trying to learn to use the new technology prior to initiating efforts directed at using. Attitudes towards usage and intentions to use may be ill-formed or lacking in conviction or else may occur only after preliminary strivings to learn to use the technology evolve. Thus, actual usage may not be a direct or immediate consequence of such attitudes and intentions. Earlier research on the diffusion of innovations also suggested a prominent role for perceived ease of use. Tornatzky and Klein analysed innovation adoption, finding that compatibility, relative advantage, and complexity had the most significant relationships with adoption across a broad range of innovation types. Eason studied perceived usefulness in terms of a fit between systems, tasks and job profiles, using the term "task fit" to describe the metric. Legris, Ingham and Collerette suggest that TAM must be extended to include variables that account for change processes, and that this could be achieved through adoption of the innovation model into TAM. 
== Usage == Several researchers have replicated Davis's original study to provide empirical evidence on the relationships that exist between usefulness, ease of use and system use. Much attention has focused on testing the robustness and validity of the questionnaire instrument used by Davis. Adams et al. replicated the work of Davis to demonstrate the validity and reliability of his instrument and his measurement scales. They also extended it to different settings and, using two different samples, demonstrated the internal consistency and replication reliability of the two scales. Hendrickson et al. found high reliability and good test-retest reliability. Szajna found that the instrument had predictive validity for intent to use, self-reported usage and attitude toward use. The sum of this research has confirmed the validity of the Davis instrument and supported its use with different populations of users and different software choices. Segars and Grover re-examined Adams et al.'s replication of the Davis work. They were critical of the measurement model used, and postulated a different model based on three constructs: usefulness, effectiveness, and ease-of-use. These findings do not yet seem to have been replicated. However, some aspects of these findings were tested and supported by Workman by separating the dependent variable into information use versus technology use. Mark Keil and his colleagues have developed (or, perhaps, popularised) Davis's model into what they call the Usefulness/EOU Grid, a 2×2 grid where each quadrant represents a different combination of the two attributes. In the context of software use, this provides a mechanism for discussing the current mix of usefulness and EOU for particular software packages, and for plotting a different course if a different mix is desired, such as the introduction of even more powerful software. The TAM model has been used in most technological and geographic contexts. 
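The internal consistency demonstrated in these replications is conventionally summarized with Cronbach's alpha. A minimal sketch on made-up Likert responses (the ratings below are hypothetical, not data from Davis or Adams et al.):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of respondent scores per item."""
    k = len(items)       # number of items in the scale
    n = len(items[0])    # number of respondents

    def var(xs):         # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Hypothetical responses of four people to three "usefulness" items (1-7 scale)
alpha = cronbach_alpha([[7, 6, 2, 5], [6, 6, 3, 5], [7, 5, 2, 6]])
print(round(alpha, 2))  # prints 0.96
```

Values this high indicate that the items are measuring a single underlying construct, which is what the replication studies found for Davis's usefulness and ease-of-use scales.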
One of these contexts is health care, which is growing rapidly. Saravanos et al. extended the TAM model to incorporate emotion and the effect it may play on the behavioral intention to accept a technology. Specifically, they looked at warm-glow. Venkatesh and Davis extended the original TAM model to explain perceived usefulness and usage intentions in terms of social influence (subjective norms, voluntariness, image) and cognitive instrumental processes (job relevance, output quality, result demonstrability, perceived ease of use). The extended model, referred to as TAM2, was tested in both voluntary and mandatory settings. The results strongly supported TAM2. Subjective norm – An individual's perception that other individuals who are important to him or her think he or she should perform a behavior. This was consistent with the theory of reasoned action (TRA). Voluntariness – This was defined by Venkatesh & Davis as the "extent to which potential adopters perceive the adoption decision to be non-mandatory". Image – This was defined by Moore & Benbasat as "the degree to which use of an innovation is perceived to enhance one's status in one's social system". Job relevance – Venkatesh & Davis defined this as the individual's perception of the extent to which the target system is suitable for the job. Output quality – Venkatesh & Davis defined this as the individual's perception of the system's ability to perform specific tasks. Result demonstrability – The production of tangible results will directly influence the system's perceived usefulness. In an attempt to integrate the main competing user acceptance models, Venkatesh et al. formulated the unified theory of acceptance and use of technology (UTAUT). This model was found to outperform each of the individual models (adjusted R-squared of 69 percent). UTAUT has been adopted by some recent studies in healthcare. In addition, authors Jun et al. 
also think that the technology acceptance model is essential to analyze the factors affecting customers' behavior towards online food delivery services. It is also a widely adopted theoretical model to demonstrate the acceptance of new technology fields. The foundation of TAM is a series of concepts that clarifies and predicts people's behaviors from their beliefs, attitudes, and behavioral intention. In TAM, perceived ease of use and perceived usefulness, considered general beliefs, play a more vital role than salient beliefs in attitudes toward utilizing a particular technology. == Alternative models == The MPT model: Independent of TAM, Scherer developed the Matching Person and Technology model in 1986 as part of her National Science Foundation-funded dissertation research. The MPT model is fully described in her 1993 text, "Living in the State of Stuck", now in its 4th edition. The MPT model has accompanying assessment measures used in technology selection and decision-making, as well as outcomes research on differences among technology users, non-users, avoiders, and reluctant users. The HMSAM: TAM has been effective for explaining many kinds of systems use (i.e. e-learning, learning management systems, web portals, etc.) (Fathema, Shannon, Ross, 2015; Fathema, Ross, Witte, 2014). However, TAM is not ideally suited to explain adoption of purely intrinsic or hedonic systems (e.g., online games, music, learning for pleasure). Thus, an alternative model to TAM, called the hedonic-motivation system adoption model (HMSAM), was proposed for these kinds of systems by Lowry et al. HMSAM is designed to improve the understanding of hedonic-motivation systems (HMS) adoption. HMS are systems used primarily to fulfill users' intrinsic motivations, such as online gaming, virtual worlds, online shopping, learning/education, online dating, digital music repositories, social networking, online pornography, gamified systems, and general gamification. 
Instead of a minor TAM extension, HMSAM is an HMS-specific system acceptance model based on an alternative theoretical perspective, which is in turn grounded in flow-based cognitive absorption (CA). HMSAM may be especially useful in understanding gamification elements of systems use. Extended TAM: Several studies proposed extensions of the original TAM (Davis, 1989) by adding external variables to it, with the aim of exploring the effects of external factors on users' attitude, behavioral intention and actual use of technology. Several factors have been examined so far, for example perceived self-efficacy, facilitating conditions, and systems quality (Fathema, Shannon, Ross, 2015; Fathema, Ross, Witte, 2014). This model has also been applied to the acceptance of health care technologies. == Criticisms == TAM has been widely criticised, despite its frequent use, leading the original proposers to attempt to redefine it several times. Criticisms of TAM as a "theory" include its questionable heuristic value, limited explanatory and predictive power, triviality, and lack of any practical value. Benbasat and Barki suggest that TAM "has diverted researchers' attention away from other important research issues and has created an illusion of progress in knowledge accumulation. Furthermore, the independent attempts by several researchers to expand TAM in order to adapt it to the constantly changing IT environments has lead [sic] to a state of theoretical chaos and confusion". In general, TAM focuses on the individual 'user' of a computer and the concept of 'perceived usefulness', extended to bring in more and more factors to explain how a user 'perceives' 'usefulness', while ignoring the essentially social processes of IS development and implementation, the question of whether more technology is actually better, and the social consequences of IS use. 
Lunceford argues that the framework of perceived usefulness and ease of use overlooks other issues, such as cost and structural imperatives that force users into adopting the technology. For a recent analysis and critique of TAM, see Bagozzi. Legris et al. claim that, together, TAM and TAM2 account for only 40% of a technological system's use. Perceived ease of use is less likely to be a determinant of attitude and usage intention according to studies of telemedicine, mobile commerce, and online banking. == See also == == Notes == == References ==
https://en.wikipedia.org/wiki/Technology_acceptance_model
DIT University (erstwhile Dehradun Institute of Technology) is a private university in Dehradun, Uttarakhand, India. The university has been accorded Grade A by the National Assessment and Accreditation Council. == Campus == DIT University's campus is located in Dehradun, in the foothills of Mussoorie, 240 kilometres northeast of Delhi. The campus covers 25 acres, of which 23 acres are developed; the prominent buildings are the Vedanta, Chanakya and Civil blocks. A two-acre ground is available for students, and parking and other facilities are also provided. The campus has classrooms equipped with ICT facilities, including projectors, screens, and other technological tools. == Academics == === Academic programmes === DIT University offers programmes in Engineering, Architecture, Pharmacy, Management Studies, and Computing. == Rankings == The National Institutional Ranking Framework (NIRF) ranked the university in the 201–300 band of its engineering rankings in 2024. == Student life == === Events === ==== Youthopia ==== Youthopia is the annual cultural and technical inter-college festival of DITU. The prominent events include Battle of Bands, RoboWars, CodeHunt and Perceptrix. ==== Sphurti ==== Sphurti is the annual sports competition at the DITU campus. DITU invites colleges throughout India to participate in events including cricket, basketball, football, volleyball, track and field, badminton, and table tennis. Since the first Sphurti, more than 69 colleges have participated. ==== Vision 2k35 ==== Aiming to promote Dr. A. P. J. Abdul Kalam's vision of an era in which the youth of India would enrich the world with their social, technical and academic brilliance, Vision 2K35 is a DIT University initiative to reach out to Young India. 
Vision 2K35 is a national-level youth summit wherein students explore and evaluate the potential of renewable-energy and energy-conservation infrastructure for the nation by implementing innovative technical ideas, energy auditing and audit presentations. The theme of the summit is the role that youth can play in bringing India into the league of superpowers by 2035. == References == == External links == Official website
https://en.wikipedia.org/wiki/DIT_University
Language technology, often called human language technology (HLT), studies methods of how computer programs or electronic devices can analyze, produce, modify or respond to human texts and speech. Working with language technology often requires broad knowledge not only about linguistics but also about computer science. It consists of natural language processing (NLP) and computational linguistics (CL) on the one hand, many application-oriented aspects of these, and more low-level aspects such as encoding and speech technology on the other hand. Note that these elementary aspects are normally not considered to be within the scope of related terms such as natural language processing and (applied) computational linguistics, which are otherwise near-synonyms. As an example, for many of the world's lesser-known languages, the foundation of language technology is providing communities with fonts and keyboard setups so their languages can be written on computers or mobile devices. == References == == External links == Johns Hopkins University Human Language Technology Center of Excellence Carnegie Mellon University Language Technologies Institute Institute for Applied Linguistics (IULA) at Universitat Pompeu Fabra. Barcelona, Spain German Research Centre for Artificial Intelligence (DFKI) Language Technology Lab CLT: Centre for Language Technology in Gothenburg, Sweden The Center for Speech and Language Technologies (CSaLT) at the Lahore University of Management Sciences (LUMS) Globalization and Localization Association (GALA) ScriptSource, a reference to the writing systems of the world and the remaining needs for supporting them in the computing realm. High Performance Language Technologies (HPLT) development funded by the European Commission.
https://en.wikipedia.org/wiki/Language_technology
Educational technology (commonly abbreviated as edutech, or edtech) is the combined use of computer hardware, software, and educational theory and practice to facilitate learning and teaching. When referred to with its abbreviation, "EdTech", it often refers to the industry of companies that create educational technology. In EdTech Inc.: Selling, Automating and Globalizing Higher Education in the Digital Age, Tanner Mirrlees and Shahid Alvi (2019) argue "EdTech is no exception to industry ownership and market rules" and "define the EdTech industries as all the privately owned companies currently involved in the financing, production and distribution of commercial hardware, software, cultural goods, services and platforms for the educational market with the goal of turning a profit. Many of these companies are US-based and rapidly expanding into educational markets across North America, and increasingly growing all over the world." In addition to the practical educational experience, educational technology is based on theoretical knowledge from various disciplines such as communication, education, psychology, sociology, artificial intelligence, and computer science. It encompasses several domains including learning theory, computer-based training, online learning, and m-learning where mobile technologies are used. == Definition == The Association for Educational Communications and Technology (AECT) has defined educational technology as "the study and ethical practice of facilitating learning and improving performance by creating, using and managing appropriate technological processes and resources". It denotes instructional technology as "the theory and practice of design, development, utilization, management, and evaluation of processes and resources for learning". 
As such, educational technology refers to all valid and reliable applied education sciences, such as equipment, as well as processes and procedures that are derived from scientific research, and in a given context may refer to theoretical, algorithmic or heuristic processes: it does not necessarily imply physical technology. Educational technology is the process of integrating technology into education in a positive manner that promotes a more diverse learning environment and gives students a way to learn how to use technology alongside their regular assignments. Accordingly, there are several discrete aspects to describing the intellectual and technical development of educational technology: Educational technology as the theory and practice of educational approaches to learning. Educational technology as technological tools and media, for instance massive online courses, that assist in the communication of knowledge, and its development and exchange. This is usually what people are referring to when they use the term "edtech". Educational technology for learning management systems (LMS), such as tools for student and curriculum management, and education management information systems (EMIS). Educational technology as back-office management, such as training management systems for logistics and budget management, and Learning Record Store (LRS) for learning data storage and analysis. Educational technology itself as an educational subject; such courses may be called "computer studies" or "information and communications technology (ICT)". === Related terms === Educational technology is an inclusive term for both the material tools and processes, and the theoretical foundations for supporting learning and teaching. Educational technology is not restricted to advanced technology but is anything that enhances classroom learning in the utilization of blended, face-to-face, or online learning.
An educational technologist is someone who is trained in the field of educational technology. Educational technologists try to analyze, design, develop, implement, and evaluate processes and tools to enhance learning. While the term educational technologist is used primarily in the United States, learning technologist is a synonymous term used in the UK as well as Canada. Modern electronic educational technology is an important part of society today. Educational technology encompasses e-learning, instructional technology, information and communication technology (ICT) in education, edtech, learning technology, multimedia learning, technology-enhanced learning (TEL), computer-based instruction (CBI), computer managed instruction, computer-based training (CBT), computer-assisted instruction or computer-aided instruction (CAI), internet-based training (IBT), flexible learning, web-based training (WBT), online education, digital educational collaboration, distributed learning, computer-mediated communication, cyber-learning, and multi-modal instruction, virtual education, personal learning environments, networked learning, virtual learning environments (VLE) (which are also called learning platforms), m-learning, and digital education. Each of these numerous terms has had its advocates, who point up potential distinctive features. However, many terms and concepts in educational technology have been defined nebulously. For example, Singh and Thurman cite over 45 definitions for online learning. Moreover, Moore saw these terminologies as emphasizing particular features such as digitization approaches, components, or delivery methods rather than being fundamentally dissimilar in concept or principle. For example, m-learning emphasizes mobility, which allows for altered timing, location, accessibility, and context of learning; nevertheless, its purpose and conceptual principles are those of educational technology. 
In practice, as technology has advanced, the particular "narrowly defined" terminological aspect that was initially emphasized by name has blended into the general field of educational technology. Initially, "virtual learning" as narrowly defined in a semantic sense implied entering an environmental simulation within a virtual world, for example in treating posttraumatic stress disorder (PTSD). In practice, a "virtual education course" refers to any instructional course in which all, or at least a significant portion, is delivered by the Internet. "Virtual" is used in that broader way to describe a course that is not taught in a classroom face-to-face but "virtually" with people not having to go to the physical classroom to learn. Accordingly, virtual education refers to a form of distance learning in which course content is delivered using various methods such as course management applications, multimedia resources, and videoconferencing. Virtual education and simulated learning opportunities, such as games or dissections, offer opportunities for students to connect classroom content to authentic situations. Educational content, pervasively embedded in objects, is all around the learner, who may not even be conscious of the learning process. The combination of adaptive learning (an individualized interface and materials that adapt to the individual learner, providing personally differentiated instruction) with ubiquitous access to digital resources and learning opportunities across a range of places and times has been termed smart learning. Smart learning is a component of the smart city concept. == History == Helping people and children learn in ways that are easier, faster, more accurate, or less expensive can be traced back to the emergence of very early tools, such as paintings on cave walls. Various types of abacus have been used. Writing slates and blackboards have been used for at least a millennium.
Since their introduction, books and pamphlets have played a prominent role in education. From the early twentieth century, duplicating machines such as the mimeograph and Gestetner stencil devices were used to produce short copy runs (typically 10–50 copies) for classroom or home use. The use of media for instructional purposes is generally traced back to the first decade of the 20th century with the introduction of educational films (the 1900s) and Sidney Pressey's mechanical teaching machines (1920s). In the mid-1960s, Stanford University psychology professors, Patrick Suppes and Richard C. Atkinson, experimented with using computers to teach arithmetic and spelling via Teletypes to elementary school students in the Palo Alto Unified School District in California. Online education originated from the University of Illinois in 1960. Although the internet would not be created for another decade, students were able to access class information with linked computer terminals. Online learning emerged in 1982 when the Western Behavioral Sciences Institute in La Jolla, California, opened its School of Management and Strategic Studies. The school employed computer conferencing through the New Jersey Institute of Technology's Electronic Information Exchange System (EIES) to deliver a distance education program to business executives. Starting in 1985, Connected Education offered the first totally online master's degree in media studies, through The New School in New York City, also via the EIES computer conferencing system. Subsequent courses were offered in 1986 by the Electronic University Network for DOS and Commodore 64 computers. In 2002, MIT began providing online classes free of charge. As of 2009, approximately 5.5 million students were taking at least one class online. Currently, one out of three college students takes at least one online course while in college. 
At DeVry University, out of all students that are earning a bachelor's degree, 80% earn two-thirds of their requirements online. Also, in 2014, 2.85 million of the 5.8 million students who took courses online took all of their courses online. From this, it can be concluded that the number of students taking classes online is steadily increasing. In 1971, Ivan Illich published a hugely influential book, Deschooling Society, in which he envisioned "learning webs" as a model for people to network the learning they needed. The 1970s and 1980s saw notable contributions in computer-based learning by Murray Turoff and Starr Roxanne Hiltz at the New Jersey Institute of Technology as well as developments at the University of Guelph in Canada. In the UK, the Council for Educational Technology supported the use of educational technology, in particular administering the government's National Development Programme in Computer Aided Learning (1973–1977) and the Microelectronics Education Programme (1980–1986). Videoconferencing was an important forerunner to the educational technologies known today. This work was especially popular with museum education. Even in recent years, videoconferencing has risen in popularity to reach over 20,000 students across the United States and Canada in 2008–2009. Disadvantages of this form of educational technology are readily apparent: image and sound quality are often grainy or pixelated; videoconferencing requires setting up a type of mini-television studio within the museum for broadcast; space becomes an issue; and specialized equipment is required for both the provider and the participant. The Open University in Britain and the University of British Columbia (where WebCT, now incorporated into Blackboard Inc., was first developed) began a revolution of using the Internet to deliver learning, making heavy use of web-based training, online distance learning, and online discussion between students.
Practitioners such as Harasim (1995) put heavy emphasis on the use of learning networks. By 1994, the first online high school had been founded. In 1997, Graziadei described criteria for evaluating products and developing technology-based courses that include being portable, replicable, scalable, affordable, and having a high probability of long-term cost-effectiveness. Improved Internet functionality enabled new schemes of communication with multimedia or webcams. The National Center for Education Statistics estimates the number of K-12 students enrolled in online distance learning programs increased by 65% from 2002 to 2005, with greater flexibility, ease of communication between teacher and student, and quick lecture and assignment feedback. According to a 2008 study conducted by the U.S. Department of Education, during the 2006–2007 academic year about 66% of postsecondary public and private schools participating in student financial aid programs offered some distance learning courses; records show 77% of enrollment in for-credit courses with an online component. In 2008, the Council of Europe passed a statement endorsing e-learning's potential to drive equality and education improvements across the EU. Computer-mediated communication (CMC) is communication between learners and instructors, mediated by the computer. In contrast, CBT/CBL usually means individualized (self-study) learning, while CMC involves educator/tutor facilitation and requires the scenarization of flexible learning activities. In addition, modern ICT provides education with tools for sustaining learning communities and associated knowledge management tasks. Students growing up in this digital age have extensive exposure to a variety of media. Major high-tech companies have funded schools to provide them with the ability to teach their students through technology.
2015 was the first year that private nonprofit organizations enrolled more online students than for-profits, although public universities still enrolled the highest number of online students. In the fall of 2015, more than 6 million students enrolled in at least one online course. In 2020, due to the COVID-19 pandemic, many schools across the world were forced to close, which left more and more grade-school students participating in online learning and university-level students enrolling in online courses as institutions shifted to distance learning. Organizations such as UNESCO have enlisted educational technology solutions to help schools facilitate distance education. The pandemic's extended lockdowns and focus on distance learning have attracted record-breaking amounts of venture capital to the ed-tech sector. In 2020, in the United States alone, ed-tech startups raised $1.78 billion in venture capital spanning 265 deals, compared to $1.32 billion in 2019. == Theory == === Behaviorism === This theoretical framework was developed in the early 20th century based on animal learning experiments by Ivan Pavlov, Edward Thorndike, Edward C. Tolman, Clark L. Hull, and B.F. Skinner. Many psychologists used these results to develop theories of human learning, but modern educators generally see behaviorism as one aspect of a holistic synthesis. Teaching in behaviorism has been linked to training, emphasizing animal learning experiments. Since behaviorism consists of the view of teaching people how to do something with rewards and punishments, it is related to training people. B.F. Skinner wrote extensively on improvements in teaching based on his functional analysis of verbal behavior and wrote "The Technology of Teaching", an attempt to dispel the myths underlying contemporary education as well as promote his system he called programmed instruction.
Ogden Lindsley developed a learning system, named Celeration, which was based on behavior analysis but substantially differed from Keller's and Skinner's models. === Cognitivism === Cognitive science underwent significant change in the 1960s and 1970s to the point that some described the period as a "cognitive revolution", particularly in reaction to behaviorism. While retaining the empirical framework of behaviorism, cognitive psychology theories look beyond behavior to explain brain-based learning by considering how human memory works to promote learning. It refers to learning as "all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used" by the human mind. The Atkinson-Shiffrin memory model and Baddeley's working memory model were established as theoretical frameworks. Computer science and information technology have had a major influence on cognitive science theory. The cognitive concepts of working memory (formerly known as short-term memory) and long-term memory have been facilitated by research and technology from the field of computer science. Another major influence on the field of cognitive science is Noam Chomsky. Today researchers are concentrating on topics like cognitive load, information processing, and media psychology. These theoretical perspectives influence instructional design. There are two separate schools of cognitivism, and these are the cognitivist and social cognitivist. The former focuses on the understanding of the thinking or cognitive processes of an individual while the latter includes social processes as influences in learning besides cognition. These two schools, however, share the view that learning is more than a behavioral change but is rather a mental process used by the learner. 
=== Constructivism === Educational psychologists distinguish between several types of constructivism: individual (or psychological) constructivism, such as Piaget's theory of cognitive development, and social constructivism. This form of constructivism has a primary focus on how learners construct their own meaning from new information, as they interact with reality and with other learners who bring different perspectives. Constructivist learning environments require students to use their prior knowledge and experiences to formulate new, related, and/or adaptive concepts in learning (Termos, 2012). Under this framework, the role of the teacher becomes that of a facilitator, providing guidance so that learners can construct their own knowledge. Constructivist educators must make sure that the prior learning experiences are appropriate and related to the concepts being taught. Jonassen (1997) suggests "well-structured" learning environments are useful for novice learners and that "ill-structured" environments are only useful for more advanced learners. Educators utilizing a constructivist perspective may emphasize an active learning environment that may incorporate learner-centered problem-based learning, project-based learning, and inquiry-based learning, ideally involving real-world scenarios, in which students are actively engaged in critical thinking activities. An illustrative discussion and example can be found in the 1980s deployment of constructivist cognitive learning in computer literacy, which involved programming as an instrument of learning.: 224  LOGO, a programming language, embodied an attempt to integrate Piagetian ideas with computers and technology. 
Initially there were broad, hopeful claims, including "perhaps the most controversial claim" that it would "improve general problem-solving skills" across disciplines.: 238  However, LOGO programming skills did not consistently yield cognitive benefits.: 238  It was "not as concrete" as advocates claimed, it privileged "one form of reasoning over all others", and it was difficult to apply the thinking activity to non-LOGO-based activities. By the late 1980s, LOGO and other similar programming languages had lost their novelty and dominance and were gradually de-emphasized amid criticisms. == Practice == The extent to which e-learning assists or replaces other learning and teaching approaches is variable, ranging on a continuum from none to fully online distance learning. A variety of descriptive terms have been employed (somewhat inconsistently) to categorize the extent to which technology is used. For example, "hybrid learning" or "blended learning" may refer to classroom aids and laptops, or may refer to approaches in which traditional classroom time is reduced but not eliminated, and is replaced with some online learning. "Distributed learning" may describe either the e-learning component of a hybrid approach, or fully online distance learning environments. === Synchronous and asynchronous === E-learning may either be synchronous or asynchronous. Synchronous learning occurs in real-time, with all participants interacting at the same time. In contrast, asynchronous learning is self-paced and allows participants to engage in the exchange of ideas or information without the dependency on other participants' involvement at the same time. Synchronous learning refers to exchanging ideas and information with one or more participants during the same period. 
Examples are face-to-face discussion, online real-time live teacher instruction and feedback, Skype conversations, and chat rooms or virtual classrooms where everyone is online and working collaboratively at the same time. Since students are working collaboratively, synchronized learning helps students become more open-minded because they have to actively listen and learn from their peers. Synchronized learning fosters online awareness and improves many students' writing skills. Asynchronous learning may use technologies such as learning management systems, email, blogs, wikis, and discussion boards, as well as web-supported textbooks, hypertext documents, audio and video courses, and social networking using web 2.0. At the professional educational level, training may include virtual operating rooms. Asynchronous learning is beneficial for students who have health problems or who have childcare responsibilities. They have the opportunity to complete their work in a low-stress environment and within a more flexible time frame. In asynchronous online courses, students are allowed the freedom to complete work at their own pace. Being non-traditional students, they can manage their daily life and school and still have the social aspect. Asynchronous collaborations allow the student to reach out for help when needed and provide helpful guidance, depending on how long it takes them to complete the assignment. Tools used for these courses include, but are not limited to, videos, class discussions, and group projects. === Linear learning === Computer-based training (CBT) refers to self-paced learning activities delivered on a computer or handheld devices such as a tablet or smartphone. CBT initially delivered content via CD-ROM, and typically presented content linearly, much like reading an online book or manual. For this reason, CBT is often used to teach static processes, such as using software or completing mathematical equations.
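A minimal sketch of the kind of self-paced, machine-scorable exercise a CBT might present; the questions, answer key, and pass threshold below are hypothetical examples, not any particular product's format:

```python
# Minimal sketch of automated scoring for a multiple-choice CBT exercise.
# The answer key and pass threshold are hypothetical examples.

ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}
PASS_THRESHOLD = 0.6  # fraction of correct answers counted as "complete"

def score_assessment(responses):
    """Return (score_fraction, completed) for a dict of learner responses."""
    correct = sum(1 for q, ans in ANSWER_KEY.items() if responses.get(q) == ans)
    score = correct / len(ANSWER_KEY)
    return score, score >= PASS_THRESHOLD

score, completed = score_assessment({"q1": "b", "q2": "d", "q3": "c"})
print(f"score={score:.2f} completed={completed}")
```

Because every response maps to a single correct choice, the computer can score and record the attempt immediately, which is what makes this format attractive for self-paced delivery.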
Computer-based training is conceptually similar to web-based training (WBT), which is delivered via the Internet using a web browser. Learning in a CBT is often assessed using questions that can be easily scored by a computer, such as multiple-choice questions, drag-and-drop, radio button, simulation, or other interactive means. Assessments are easily scored and recorded via online software, providing immediate end-user feedback and completion status. Users are often able to print completion records in the form of certificates. CBTs provide learning stimulus beyond traditional learning methodology from textbook, manual, or classroom-based instruction. CBTs can be a good alternative to printed learning materials since rich media, including videos or animations, can be embedded to enhance learning. However, CBTs pose some learning challenges. Typically, the creation of effective CBTs requires enormous resources. The software for developing CBTs is often more complex than a subject matter expert or teacher is able to use. === Collaborative learning === Computer-supported collaborative learning (CSCL) uses instructional methods designed to encourage or require students to work together on learning tasks, allowing social learning. CSCL is similar in concept to the terms "e-learning 2.0" and "networked collaborative learning" (NCL). With Web 2.0 advances, sharing information between multiple people in a network has become much easier and use has increased.: 1  One of the main reasons cited for its usage is that it is "a breeding ground for creative and engaging educational endeavors.": 2  Learning takes place through conversations about content and grounded interaction about problems and actions. This collaborative learning differs from instruction in which the instructor is the principal source of knowledge and skills. The neologism "e-learning 1.0" refers to direct instruction used in early computer-based learning and training systems (CBL).
In contrast to that linear delivery of content, often directly from the instructor's material, CSCL uses social software such as blogs, social media, wikis, podcasts, cloud-based document portals, discussion groups and virtual worlds. This phenomenon has been referred to as Long Tail Learning. Advocates of social learning claim that one of the best ways to learn something is to teach it to others. Social networks have been used to foster online learning communities around subjects as diverse as test preparation and language education. Mobile-assisted language learning (MALL) is the use of handheld computers or cell phones to assist in language learning. Collaborative apps allow students and teachers to interact while studying. Apps are designed after games, which provide a fun way to revise. When the experience is enjoyable, the students become more engaged. Games also usually come with a sense of progression, which can help keep students motivated and consistent while trying to improve. Classroom 2.0 refers to online multi-user virtual environments (MUVEs) that connect schools across geographical frontiers. Known as "eTwinning", computer-supported collaborative learning (CSCL) allows learners in one school to communicate with learners in another that they would not get to know otherwise, enhancing educational outcomes and cultural integration. Further, many researchers distinguish between collaborative and cooperative approaches to group learning. For example, Roschelle and Teasley (1995) argue that "cooperation is accomplished by the division of labor among participants, as an activity where each person is responsible for a portion of the problem solving", in contrast with collaboration that involves the "mutual engagement of participants in a coordinated effort to solve the problem together." Social technology, and social media specifically, provides avenues for student learning that would not be available otherwise. 
For example, it provides ordinary students a chance to exist in the same room as, and share a dialogue with, researchers, politicians, and activists. This is because it dissolves the geographical barriers that would otherwise separate people. Put simply, social media gives students a reach that provides them with opportunities and conversations that allow them to grow as communicators. Social technologies like Twitter can provide students with an archive of free data that goes back multiple decades. Many classrooms and educators are already taking advantage of this free resource; for example, researchers and educators at the University of Central Florida in 2011 used Tweets posted relating to emergencies like Hurricane Irene as data points, in order to teach their students how to code data. Social media technologies also allow instructors to show students how professional networks facilitate work on a technical level. === Flipped classroom === This is an instructional strategy in which the majority of the initial learning occurs at home using technology. Students then engage with higher-order learning tasks in the classroom with the teacher. Online tools are often used for the individual at-home learning, such as educational videos, learning management systems, interactive tools, and other web-based resources. Some advantages of flipped learning include improved learning performance, enhanced student satisfaction and engagement, flexibility in learning, and increased interaction opportunities between students and instructors. On the other hand, the disadvantages of flipped learning involve challenges related to student motivation, internet accessibility, quality of videos, and increased workload for teachers. == Technologies == Numerous types of physical technology are currently used: digital cameras, video cameras, interactive whiteboard tools, document cameras, electronic media, and LCD projectors.
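The data-coding exercise mentioned above (assigning category labels to short messages such as tweets) can be sketched in a few lines of Python; the codebook of categories and keywords below is a hypothetical example, not the scheme used in the University of Central Florida course:

```python
# Minimal sketch of keyword-based "coding" of short messages into categories,
# as students might do with emergency-related tweets. The categories and
# keywords are hypothetical examples.

CODEBOOK = {
    "warning":  {"evacuate", "alert", "warning"},
    "damage":   {"flood", "outage", "damage"},
    "recovery": {"shelter", "volunteer", "donate"},
}

def code_message(text):
    """Return the sorted list of category codes whose keywords appear in text."""
    words = set(text.lower().split())
    return sorted(cat for cat, kws in CODEBOOK.items() if words & kws)

print(code_message("Evacuate now: flood warning in effect"))
```

A real coding exercise would refine this with hand-checked labels and inter-rater agreement, but the keyword pass illustrates how free social-media text becomes analyzable data points.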
Combinations of these techniques include blogs, collaborative software, ePortfolios, and virtual classrooms. The current design of such applications includes evaluation through cognitive-analysis tools that identify which elements optimize the use of these platforms. === Audio and video === Video technology has included VHS tapes and DVDs, as well as on-demand and synchronous methods with digital video via server or web-based options such as streamed video and webcams. Videotelephony can connect learners with speakers and other experts. Interactive digital video games are being used at K-12 and higher education institutions. Screencasting allows users to share their screens directly from their browser and make the video available online so that other viewers can stream the video directly. Webcams and webcasting have enabled the creation of virtual classrooms and virtual learning environments. Webcams are also being used to counter plagiarism and other forms of academic dishonesty that might occur in an e-learning environment. === Computers, tablets, and mobile devices === Computers and tablets enable learners and educators to access websites as well as applications. Many mobile devices support m-learning. Mobile devices such as clickers and smartphones can be used for interactive audience response feedback. Mobile learning can provide performance support for checking the time, setting reminders, retrieving worksheets, and instruction manuals. Such devices as iPads are used for helping disabled (visually impaired or with multiple disabilities) children in communication development as well as in improving physiological activity, according to the iStimulation Practice Report. Studies in pre-school (early learning), primary and secondary education have explored how digital devices are used to enable effective learning outcomes, and create systems that can support teachers.
Digital technology can improve teaching and learning by motivating students with engaging, interactive, and fun learning environments. These online interactions enable further opportunities to develop digital literacy, 21st century skills, and digital citizenship. === Single-board computers and Internet of Things === Embedded single-board computers and microcontrollers such as the Raspberry Pi, Arduino, and BeagleBone are easy to program; some can run Linux, and they connect to devices such as sensors, displays, LEDs, and robotics. These are cost-effective computing devices that are ideal for learning programming and that work with cloud computing and the Internet of things. The Internet of things refers to networks that connect objects to the Internet through information-sensing equipment, based on stipulated protocols, so that they can exchange information and communicate to achieve smart recognition, positioning, tracking, monitoring, and administration. These devices are part of a Maker culture that embraces tinkering with electronics and programming to achieve software and hardware solutions, and because of that culture, a large amount of training and community support is available. === Collaborative and social learning === Group webpages, blogs, wikis, and Twitter allow learners and educators to post thoughts, ideas, and comments on a website in an interactive learning environment. Social networking sites are virtual communities for people interested in a particular subject to communicate by voice, chat, instant message, video conference, or blogs. The National School Boards Association found that 96% of students with online access have used social networking technologies and more than 50% talk online about schoolwork. Social networking encourages collaboration and engagement and can be a motivational tool for self-efficacy amongst students. === Whiteboards === There are three types of whiteboards. The initial whiteboards, analogous to blackboards, date from the late 1950s.
The term whiteboard is also used metaphorically to refer to virtual whiteboards in which computer software applications simulate whiteboards by allowing writing or drawing. This is a common feature of groupware for virtual meetings, collaboration, and instant messaging. Interactive whiteboards allow learners and instructors to write on the touch screen. The screen markup can be on either a blank whiteboard or any computer screen content. Depending on permission settings, this visual learning can be interactive and participatory, including writing and manipulating images on the interactive whiteboard. === Virtual classroom === A virtual learning environment (VLE), also known as a learning platform, simulates a virtual classroom or meeting by simultaneously mixing several communication technologies. Web conferencing software enables students and instructors to communicate with each other via webcam, microphone, and real-time chatting in a group setting. Participants can raise their hands, answer polls, or take tests. Students can whiteboard and screencast when given rights by the instructor, who sets permission levels for text notes, microphone rights, and mouse control. A virtual classroom provides an opportunity for students to receive direct instruction from a qualified teacher in an interactive environment. Learners can have direct and immediate access to their instructor for instant feedback and direction. The virtual classroom provides a structured schedule of classes, which can be helpful for students who may find the freedom of asynchronous learning to be overwhelming. In addition, the virtual classroom provides a social learning environment that replicates the traditional "brick and mortar" classroom.
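The instructor-managed rights described above (text notes, microphone, mouse control) amount to a small permission table keyed by participant. A minimal sketch under that assumption; the class and method names are hypothetical, not any specific web-conferencing product's API:

```python
class VirtualClassroom:
    """Illustrative model of instructor-granted participant rights."""

    RIGHTS = {"text_notes", "microphone", "mouse_control", "whiteboard", "screencast"}

    def __init__(self):
        self.permissions = {}  # participant name -> set of granted rights

    def grant(self, student, right):
        """Instructor grants a right to one participant."""
        if right not in self.RIGHTS:
            raise ValueError(f"unknown right: {right}")
        self.permissions.setdefault(student, set()).add(right)

    def revoke(self, student, right):
        """Instructor withdraws a previously granted right (no-op if absent)."""
        self.permissions.get(student, set()).discard(right)

    def can(self, student, right):
        """Check whether a participant currently holds a right."""
        return right in self.permissions.get(student, set())
```

For example, a student can only screencast after `grant("ana", "screencast")`, and loses the ability as soon as the instructor calls `revoke`.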
In higher education especially, a virtual learning environment (VLE) is sometimes combined with a management information system (MIS) to create a managed learning environment, in which all aspects of a course are handled through a consistent user interface throughout the institution. Physical universities and newer online-only colleges offer select academic degrees and certificate programs via the Internet. Some programs require students to attend some campus classes or orientations, but many are delivered completely online. Several universities offer online student support services, such as online advising and registration, e-counseling, online textbook purchases, student governments, and student newspapers. Due to the COVID-19 pandemic, many schools have been forced to move online. As of April 2020, an estimated 90% of high-income countries were offering online learning, compared with only 25% of low-income countries. ==== Augmented reality ==== AR technology is expected to play an important role in the future of the classroom, where human and AI co-orchestration of learning activities takes place seamlessly. === Learning management system === A learning management system (LMS) is software used for delivering, tracking, and managing training and education. It tracks data about attendance, time on task, and student progress. Educators can post announcements, grade assignments, check on course activities, and participate in class discussions. Students can submit their work, read and respond to discussion questions, and take quizzes. An LMS may allow teachers, administrators, students, and other permitted parties (such as parents, if appropriate) to track various metrics. LMSs range from systems for managing training/educational records to software for distributing courses over the Internet and offering features for online collaboration. The creation and maintenance of comprehensive learning content require substantial initial and ongoing investments in human labor.
Effective translation into other languages and cultural contexts requires even more investment by knowledgeable personnel. ==== Learning content management system ==== A learning content management system (LCMS) is software for authoring content (courses, reusable content objects). An LCMS may be solely dedicated to producing and publishing content that is hosted on an LMS, or it can host the content itself. The Aviation Industry Computer-Based Training Committee (AICC) specification provides support for content that is hosted separately from the LMS. ==== Computer-aided assessment ==== Computer-aided assessment (e-assessment) ranges from automated multiple-choice tests to more sophisticated systems. With some systems, feedback can be geared towards a student's specific mistakes, or the computer can navigate the student through a series of questions adapting to what the student appears to have learned or not learned. Formative assessment sifts out the incorrect answers, and these questions are then explained by the teacher. The learner then practices with slight variations of the sifted-out questions. The learning cycle often concludes with summative assessment, using a new set of questions that cover the topics previously taught. ==== Training management system ==== A training management system or training resource management system is software designed to optimize instructor-led training management. Similar to an enterprise resource planning (ERP) system, it is a back office tool that aims at streamlining every aspect of the training process: planning (training plan and budget forecasting), logistics (scheduling and resource management), financials (cost tracking, profitability), reporting, and sales (for for-profit training providers). == Standards and ecosystem == === Learning objects === === Content === Content and design architecture issues include pedagogy and learning object re-use. One approach looks at five aspects:
Fact – unique data (e.g. symbols for an Excel formula, or the parts that make up a learning objective)
Concept – a category that includes multiple examples (e.g. Excel formulas, or the various types/theories of instructional design)
Process – a flow of events or activities (e.g. how a spreadsheet works, or the five phases in ADDIE)
Procedure – a step-by-step task (e.g. entering a formula into a spreadsheet, or the steps that should be followed within a phase in ADDIE)
Strategic principle – a task performed by adapting guidelines (e.g. doing a financial projection in a spreadsheet, or using a framework for designing learning environments)
=== Artificial intelligence === The academic study and development of artificial intelligence can be dated to at least 1956, when cognitive scientists began to investigate thought and learning processes in humans and machines. The earliest uses of AI in education can be traced to the development of intelligent tutoring systems (ITS) and their application in enhancing educational experiences. They are designed to provide immediate and personalized feedback to students. The incentive to develop ITS comes from educational studies showing that individual tutoring is much more effective than group teaching, in addition to the need for promoting learning on a larger scale. Over the years, a combination of cognitive science and data-driven techniques has enhanced the capabilities of ITS, allowing them to model a wide range of students' characteristics, such as knowledge, affect, off-task behavior, and wheel spinning. There is ample evidence that ITS are highly effective in helping students learn. ITS can be used to keep students in the zone of proximal development (ZPD): the space wherein students may learn with guidance. Such systems can guide students through tasks slightly above their ability level. Generative artificial intelligence (GenAI) gained widespread public attention with the introduction of ChatGPT in November 2022.
This caused alarm among K-12 and higher education institutions, with a few large school districts quickly banning GenAI due to concerns about potential academic misconduct. However, as the debate developed, these bans were largely reversed within a few months. To combat academic misconduct, detection tools have been developed, but their accuracy is limited. There have been various use cases in education, including providing personalized feedback, brainstorming classroom activities, support for students with special needs, streamlining administrative tasks, and simplifying assessment processes. However, GenAI can output incorrect information, a failure known as hallucination. Its outputs can also be biased, leading to calls for transparency regarding the data used to train GenAI models and their use. Providing professional development for teachers and developing policies and regulations can help mitigate the ethical concerns of GenAI. And while AI systems can provide individualized instruction and adaptive feedback to students, they have the potential to affect students' sense of classroom community. == Settings and sectors == === Preschool === Various forms of electronic media can be a feature of preschool life. Although parents report a positive experience, the impact of such use has not been systematically assessed. The age at which a given child might start using a particular technology, such as a cellphone or computer, might depend on matching a technological resource to the recipient's developmental capabilities, such as the age-anticipated stages labeled by the Swiss psychologist Jean Piaget. Parameters such as age-appropriateness, coherence with sought-after values, and concurrent entertainment and educational aspects have been suggested for choosing media. At the preschool level, technology can be introduced in several ways. At the most basic level is the use of computers, tablets, and audio and video resources in classrooms.
Additionally, there are many resources available for parents and educators to introduce technology to young children or to use technology to augment lessons and enhance learning. Some age-appropriate options are video- or audio-recording of their creations, introducing them to the use of the internet through browsing age-appropriate websites, providing assistive technology to allow disabled children to participate with the rest of their peers, educational apps, electronic books, and educational videos. There are many free and paid educational websites and apps that directly target the educational needs of preschool children. These include Starfall, ABC mouse, PBS Kids Video, Teach me, and Montessori crosswords. Educational technology in the form of electronic books offers preschool children the option to store and retrieve several books on one device, thus bringing together the traditional action of reading along with the use of educational technology. Educational technology is also thought to improve hand-eye coordination, language skills, visual attention, and motivation to complete educational tasks, and it allows children to experience things they otherwise would not. There are several keys to making the most educational use of introducing technology at the preschool level: technology must be used appropriately, should allow access to learning opportunities, should include the interaction of parents and other adults with the preschool children, and should be developmentally appropriate. Allowing access to learning opportunities means, in particular, allowing disabled children to take part in learning opportunities, giving bilingual children the opportunity to communicate and learn in more than one language, bringing in more information about STEM subjects, and bringing in images of diversity that may be lacking in the child's immediate environment.
Coding is also becoming part of the early learning curriculum, and preschool-aged children can benefit from experiences that teach coding skills even in a screen-free way. There are activities and games that teach hands-on coding skills and prepare students for the coding concepts they will encounter and use in the future. Minecraft and Roblox are two popular coding and programming apps being adopted by institutions that offer free or low-cost access. === Primary and secondary === E-learning is increasingly being utilized by students who may not want to go to traditional brick-and-mortar schools due to severe allergies or other medical issues, fear of school violence and school bullying, or because their parents would like to homeschool but do not feel qualified. Online schools create a haven for students to receive a quality education while almost completely avoiding these common problems. Online charter schools also often are not limited by location, income level, or class size in the way brick-and-mortar charter schools are. E-learning has also been rising as a supplement to the traditional classroom. Students with special talents or interests outside of the available curricula use e-learning to advance their skills or exceed grade restrictions. Virtual education in K-12 schooling often refers to virtual schools, and in higher education to virtual universities. Virtual schools are "cybercharter schools" with innovative administrative models and course delivery technology. Education technology also seems to be an interesting method of engaging gifted youths who are under-stimulated in their current educational program. This can be achieved with after-school programs or even technologically integrated curricula. 3D printing integrated courses (3dPIC) can also give youths the stimulation they need in their educational journey. Université de Montréal's Projet SEUR, in collaboration with Collège Mont-Royal and La Variable, is actively developing this field.
=== Higher education === Online college course enrollment has increased by 29%, with nearly one-third of all college students, an estimated 6.7 million, currently enrolled in online classes. In 2009, 44% of post-secondary students in the US were taking some or all of their courses online, a share projected to rise to 81% by 2014. Although a large proportion of for-profit higher education institutions now offer online classes, only about half of private, non-profit schools do so. Private institutions may become more involved with online presentations as the costs decrease. Properly trained staff must also be hired to work with students online. These staff members need to understand the content area and also be highly trained in the use of the computer and Internet. Online education is rapidly increasing, and online doctoral programs have even developed at leading research universities. Although massive open online courses (MOOCs) may have limitations that preclude them from fully replacing college education, such programs have significantly expanded. MIT, Stanford, and Princeton University offer classes to a global audience, but not for college credit. University-level programs, like edX founded by the Massachusetts Institute of Technology and Harvard University, offer a wide range of disciplines at no charge, while others permit students to audit a course at no charge but require a small fee for accreditation. MOOCs have not had a significant impact on higher education and declined after the initial expansion, but they are expected to remain in some form. Lately, MOOCs have been used by smaller universities to profile themselves with highly specialized courses for special-interest audiences, for example in a course on technological privacy compliance. MOOCs have been observed to lose the majority of their initial course participants.
In a study performed by Cornell and Stanford universities, student dropout rates from MOOCs were attributed to student anonymity, the solitude of the learning experience, and the lack of interaction with peers and with teachers. Effective student engagement measures that reduce dropouts are forum interactions and virtual teacher or teaching assistant presence, measures that incur staff costs that grow with the number of participating students. === Corporate and professional === E-learning is being used by companies to deliver mandatory compliance training and updates for regulatory compliance, soft skills and IT skills training, continuing professional development (CPD), and other valuable workplace skills. Companies with spread-out distribution chains use e-learning for delivering information about the latest product developments. Most corporate e-learning is asynchronous and delivered and managed via learning management systems. The big challenge in corporate e-learning is to engage the staff, especially on compliance topics for which periodic staff training is mandated by law or regulations. === Government and public === E-learning and educational technology are used by governmental bodies to train staff and civil servants. However, government agencies also have an interest in promoting digital technology use and improving skills amongst the people they serve. Educational technology has been used in such training provision. For example, in the UK, the Skills Bootcamp scheme aims to improve the skillset of the general population through the use of educational technological training. == Benefits == Effective technology use deploys multiple evidence-based strategies concurrently (e.g. adaptive content, frequent testing, immediate feedback), as do effective teachers. Using computers or other forms of technology can give students practice on core content and skills while the teacher can work with others, conduct assessments, or perform other tasks.
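One of the evidence-based strategies named above, adaptive content with frequent testing and immediate feedback, can be sketched as a simple difficulty ladder: step up after a correct answer, step down after a miss. The level range and one-step policy below are illustrative assumptions, not a description of any particular product:

```python
def adjust_difficulty(level, correct, min_level=1, max_level=5):
    """Move one difficulty step up after a correct answer, one step down after a miss."""
    step = 1 if correct else -1
    return max(min_level, min(max_level, level + step))

def practice_session(answers, start_level=3):
    """Track the difficulty level across a sequence of right/wrong answers."""
    levels = [start_level]
    for correct in answers:
        levels.append(adjust_difficulty(levels[-1], correct))
    return levels

# A student who answers right, right, wrong moves 3 -> 4 -> 5 -> 4.
print(practice_session([True, True, False]))  # [3, 4, 5, 4]
```

Real adaptive systems use far richer student models, but the feedback loop (assess, respond immediately, re-select content) has this same shape.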
Through the use of educational technology, education can be individualized for each student, allowing for better differentiation and allowing students to work for mastery at their own pace. In India, the National Level Common Entrance Examination (NLCEE) utilized educational technology to provide free online coaching and scholarship opportunities. By leveraging digital platforms during the COVID-19 pandemic, NLCEE ensured students, especially those from underprivileged backgrounds, could access quality education and career guidance remotely. Modern educational technology can improve access to education, including full degree programs. It enables better integration for non-full-time students, particularly in continuing education, and improved interactions between students and instructors. Learning materials can be used for long-distance learning and are accessible to a wider audience. Course materials are easy to access. In 2010, 70.3% of American family households had access to the internet. In 2013, according to the Canadian Radio-television and Telecommunications Commission, 79% of Canadian homes had access to the internet. Students can access and engage with numerous online resources at home. Using online resources at home can help students spend more time on specific aspects of what they are learning in school. Schools like the Massachusetts Institute of Technology (MIT) have made certain course materials free online. Students appreciate the convenience of e-learning but report greater engagement in face-to-face learning environments. Colleges and universities are working to combat this issue by utilizing Web 2.0 technologies as well as incorporating more mentorships between students and faculty members.
According to James Kulik, who studies the effectiveness of computers used for instruction, students usually learn more in less time when receiving computer-based instruction, and they like classes more and develop more positive attitudes toward computers in computer-based classes. Students can independently solve problems. There are no intrinsic age-based restrictions on difficulty level, i.e. students can go at their own pace. Students who edit their written work on word processors improve the quality of their writing. According to some studies, students are better at critiquing and editing written work that is exchanged over a computer network with students they know. Studies completed in "computer intensive" settings found increases in student-centric, cooperative, and higher-order learning, writing skills, problem-solving, and using technology. In addition, attitudes toward technology as a learning tool by parents, students, and teachers are also improved. Employers' acceptance of online education has risen over time. More than 50% of the human resource managers SHRM surveyed for an August 2010 report said that if two candidates with the same level of experience were applying for a job, whether a candidate's degree was acquired through an online or a traditional school would have no effect. Seventy-nine percent said they had employed a candidate with an online degree in the past 12 months. However, 66% said candidates who get degrees online were not seen as positively as job applicants with traditional degrees. The use of educational apps generally has a positive effect on learning. Pre- and post-tests have revealed that the use of educational apps on mobile devices reduces the achievement gap between struggling and average students.
== Disadvantages == Globally, factors like change management, technology obsolescence, and vendor-developer partnerships are major restraints hindering the growth of the educational technology market. In the US, increased state and federal government funding, as well as private venture capital, has been flowing into the education sector. However, as of 2013, none were looking at technology return on investment (ROI) to connect expenditures on technology with improved student outcomes. New technologies are frequently accompanied by unrealistic hype and promise regarding their transformative power to change education for the better or to allow better educational opportunities to reach the masses. Examples include silent film, broadcast radio, and television, none of which have maintained much of a foothold in the daily practices of mainstream, formal education. Technology, in and of itself, does not necessarily result in fundamental improvements to educational practice. The focus needs to be on the learner's interaction with technology, not the technology itself. It needs to be recognized as "ecological" rather than "additive" or "subtractive". In this ecological change, one significant change will create total change. According to Bransford et al., "technology does not guarantee effective learning", and inappropriate use of technology can even hinder it. A 2007 University of Washington study, published in the Journal of Pediatrics, found that infant vocabulary slips with exposure to educational baby DVDs; the study surveyed over 1,000 parents in Washington and Minnesota. It found that for every hour that babies 8–16 months of age watched DVDs and videos, they knew 6–8 fewer of 90 common baby words than babies who did not watch them.
Andrew Meltzoff, a researcher on this study, states that the result makes sense: if a baby's "alert time" is spent in front of DVDs and TV instead of with people speaking, the baby is not going to get the same linguistic experience. Dimitri Christakis, another researcher on the study, reported that the evidence is mounting that baby DVDs are of no value and may be harmful. Adaptive instructional materials tailor questions to each student's ability and calculate their scores, but this encourages students to work individually rather than socially or collaboratively (Kruse, 2013). Social relationships are important, but high-tech environments may compromise the balance of trust, care, and respect between teacher and student. Massive open online courses (MOOCs), although quite popular in discussions of technology and education in developed countries (more so in the US), are not a major concern in most developing or low-income countries. One of the stated goals of MOOCs is to provide less fortunate populations (i.e., in developing countries) an opportunity to experience courses with US-style content and structure. However, research shows only 3% of the registrants are from low-income countries, and although many courses have thousands of registered students, only 5–10% of them complete the course. This can be attributed to lack of staff support, course difficulty, and low levels of engagement with peers. MOOCs also imply that certain curricula and teaching methods are superior, and this could eventually wash over (or possibly wash out) local educational institutions, cultural norms, and educational traditions. With the Internet and social media, using educational apps makes students highly susceptible to distraction and sidetracking. Even though proper use has been shown to increase student performance, being distracted would be detrimental. Another disadvantage is an increased potential for cheating.
A disadvantage of e-learning is that it can cause depression, according to a study conducted during the 2021 COVID-19 quarantines. === Over-stimulation === Electronic devices such as cell phones and computers facilitate rapid access to a stream of sources, each of which may receive cursory attention. Michael Rich, an associate professor at Harvard Medical School and executive director of the Center on Media and Child Health in Boston, said of the digital generation, "Their brains are rewarded not for staying on task, but for jumping to the next thing. The worry is we're raising a generation of kids in front of screens whose brains are going to be wired differently." Students have always faced distractions; computers and cell phones are a particular challenge because the stream of data can interfere with focusing and learning. Although these technologies affect adults too, young people may be more influenced by them, as their developing brains can easily become habituated to switching tasks and become unaccustomed to sustaining attention. Too much information, coming too rapidly, can overwhelm thinking. Technology is "rapidly and profoundly altering our brains." High exposure levels stimulate brain cell alteration and release neurotransmitters, which causes the strengthening of some neural pathways and the weakening of others. This leads to heightened stress levels on the brain that, at first, boost energy levels but, over time, impair memory and cognition, lead to depression, and alter the neural circuitry of the hippocampus, amygdala, and prefrontal cortex. These are the brain regions that control mood and thought. If unchecked, the underlying structure of the brain could be altered. Overstimulation due to technology may begin too young. When children are exposed before the age of seven, important developmental tasks may be delayed, and bad learning habits might develop, which "deprives children of the exploration and play that they need to develop."
Media psychology is an emerging specialty field that studies electronic devices and the sensory behaviors arising from the use of educational technology in learning. === Sociocultural criticism === According to Lai, "the learning environment is a complex system where the interplay and interactions of many things impact the outcome of learning." When technology is brought into an educational setting, the pedagogical setting changes in that technology-driven teaching can change the entire meaning of an activity without adequate research validation. If technology monopolizes an activity, students can begin to develop the sense that "life would scarcely be thinkable without technology." Leo Marx considered the word "technology" itself problematic, susceptible to reification and "phantom objectivity", which conceals its fundamental nature as something that is only valuable insofar as it benefits the human condition. Technology ultimately comes down to affecting the relations between people, but this notion is obfuscated when technology is treated as an abstract notion devoid of good and evil. Langdon Winner makes a similar point by arguing that the underdevelopment of the philosophy of technology leaves us with an overly simplistic reduction in our discourse to the supposedly dichotomous notions of the "making" versus the "uses" of new technologies, and that a narrow focus on "use" leads us to believe that all technologies are neutral in moral standing.: ix–39  Winner viewed technology as a "form of life" that not only aids human activity, but also represents a powerful force in reshaping that activity and its meaning.: ix–39  By far, the greatest latitude of choice exists the very first time a particular instrument, system, or technique is introduced. Because choices tend to become strongly fixed in material equipment, economic investment, and social habit, the original flexibility vanishes for all practical purposes once the initial commitments are made.
In that sense, technological innovations are similar to legislative acts or political findings that establish a framework for public order that will endure over many generations. (p. 29) When adopting new technologies, there may be one best chance to "get it right". Seymour Papert (p. 32) points out a good example of a (bad) choice that has become strongly fixed in social habit and material equipment: our "choice" to use the QWERTY keyboard. Neil Postman endorsed the notion that technology impacts human cultures, including the culture of classrooms, and that this is a consideration even more important than considering the efficiency of new technology as a tool for teaching. Regarding the computer's impact on education, Postman writes (p. 19): What we need to consider about the computer has nothing to do with its efficiency as a teaching tool. We need to know in what ways it is altering our conception of learning, and how in conjunction with television, it undermines the old idea of school. There is an assumption that technology is inherently interesting, so it must be helpful in education; based on research by Daniel Willingham, that is not always the case. He argues that it does not necessarily matter what the technological medium is, but whether or not the content is engaging and utilizes the medium in a beneficial way. ==== Digital divide ==== The concept of the digital divide is a gap between those who have access to digital technologies and those who do not. Access may be associated with age, gender, socio-economic status, education, income, ethnicity, and geography. === Data protection === According to a report by the Electronic Frontier Foundation, large amounts of personal data on children are collected by electronic devices that are distributed in schools in the United States. Often, far more information than necessary is collected, uploaded, and stored indefinitely.
Aside from name and date of birth, this information can include the child's browsing history, search terms, location data, contact lists, as well as behavioral information.: 5  Parents are not informed or, if informed, have little choice.: 6  According to the report, this constant surveillance resulting from educational technology can "warp children's privacy expectations, lead them to self-censor, and limit their creativity".: 7  In a 2018 public service announcement, the FBI warned that widespread collection of student information by educational technologies, including web browsing history, academic progress, medical information, and biometrics, created the potential for privacy and safety threats if such data was compromised or exploited. The transition from in-person learning to distance education in higher education due to the COVID-19 pandemic has led to enhanced extraction of student data enabled by complex data infrastructures. These infrastructures collect information such as learning management system logins, library metrics, impact measurements, teacher evaluation frameworks, assessment systems, learning analytic traces, longitudinal graduate outcomes, attendance records, social media activity, and so on. The copious amounts of information collected are quantified for the marketization of higher education, employing this data as a means to demonstrate and compare student performance across institutions to attract prospective students, mirroring the capitalistic notion of ensuring efficient market functioning and constant improvement through measurement. This desire for data has fueled the exploitation of higher education by platform companies and data service providers to whom institutions outsource these services. The monetization of student data in order to integrate corporate models of marketization further pushes higher education, widely regarded as a public good, into a privatized commercial sector.
== Teacher training == Since technology is not the end goal of education, but rather a means by which it can be accomplished, educators must have a good grasp of the technology and its advantages and disadvantages. Teacher training aims for the effective integration of classroom technology. The evolving nature of technology may unsettle teachers, who may feel like perpetual novices. Finding quality materials to support classroom objectives is often difficult, and sporadic professional development days are inadequate. According to Jenkins, "Rather than dealing with each technology in isolation, we would do better to take an ecological approach, thinking about the interrelationship among different communication technologies, the cultural communities that grow up around them, and the activities they support." Jenkins also suggested that the traditional school curriculum guided teachers to train students to be autonomous problem solvers. However, today's workers are increasingly asked to work in teams, drawing on different sets of expertise, and collaborating to solve problems. Learning styles and the methods of collecting information have evolved, and "students often feel locked out of the worlds described in their textbooks through the depersonalized and abstract prose used to describe them". These twenty-first-century skills can be attained through the incorporation of, and engagement with, technology. Changes in instruction and use of technology can also promote a higher level of learning among students with different types of intelligence. == Assessment == There are two distinct issues of assessment: the assessment of educational technology and assessment with technology. Assessments of educational technology have included the Follow Through project. Educational assessment with technology may be either formative assessment or summative assessment. Instructors use both types of assessments to understand student progress and learning in the classroom.
Technology has helped teachers create better assessments and identify where students are struggling with the material. Formative assessment is harder to implement, as ideally it is ongoing and allows students to show their learning in different ways depending on their learning styles. Technology has helped some teachers make their formative assessments better, particularly through the use of a classroom response system (CRS). A CRS is a tool in which each student has a handheld device that pairs with the teacher's computer. The instructor then asks multiple-choice or true/false questions and the students answer on their devices. Depending on the software used, the answers may then be shown on a graph so students and the teacher can see the percentage of students who gave each answer, and the teacher can focus on what went wrong. Classroom response systems have a history going back to the late 1960s and early 1970s, when analogue electronics were used in their implementations. There were a few commercial products available, but they were costly and some universities preferred to build their own. The first such system appears to have been put into place at Stanford University, but it suffered from difficulties in use. Another early system was one designed and built by Raphael M. Littauer, a professor of physics at Cornell University, and used for large lecture courses. It was more successful than most of the other early systems, in part because the designer of the system was also the instructor using it. Subsequent classroom response technologies included H-ITT's infrared devices. Summative assessments are more common in classrooms and are usually set up to be more easily graded, as they take the form of tests or projects with specific grading schemes. One major benefit of tech-based testing is the option to give students immediate feedback on their answers.
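The aggregation step a CRS performs before charting results is simple to sketch. Below is a minimal, hypothetical illustration (the function name and the 20-student class are illustrative assumptions, not taken from any real CRS product):

```python
from collections import Counter

def tally_responses(responses):
    """Count clicker answers and convert them to percentages,
    as a classroom response system might before charting them."""
    counts = Counter(responses)
    total = len(responses)
    return {choice: round(100 * n / total, 1) for choice, n in counts.items()}

# Hypothetical class of 20 students answering a multiple-choice question.
answers = ["A"] * 11 + ["B"] * 5 + ["C"] * 4
print(tally_responses(answers))  # {'A': 55.0, 'B': 25.0, 'C': 20.0}
```

A real system would feed these percentages into a bar chart so the class can see at a glance which wrong answers were common.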
When students get these responses, they are able to know how they are doing in the class, which can help push them to improve or give them confidence that they are doing well. Technology also allows for different kinds of summative assessment, such as digital presentations, videos, or anything else the teacher or students may come up with, which allows different learners to show what they learned more effectively. Teachers can also use technology to post graded assessments online so that students have a better idea of what a good project is. Electronic assessment uses information technology. It encompasses several potential applications, which may be teacher- or student-oriented, including educational assessment throughout the continuum of learning, such as computerized classification testing, computerized adaptive testing, student testing, and grading an exam. E-marking is an examiner-led activity, closely related to student-led e-assessment activities such as e-testing and e-learning. E-marking allows markers to mark a scanned script or online response on a computer screen rather than on paper. There are no restrictions on the types of tests that can use e-marking, with e-marking applications designed to accommodate multiple-choice, written, and even video submissions for performance examinations. E-marking software is used by individual educational institutions and can also be rolled out to the participating schools of awarding exam organizations. E-marking has been used to mark many well-known high-stakes examinations, which in the United Kingdom include A levels and GCSE exams, and in the US include the SAT test for college admissions. Ofqual reports that e-marking is the main type of marking used for general qualifications in the United Kingdom. In 2014, the Scottish Qualifications Authority (SQA) announced that most of the National 5 question papers would be e-marked.
In June 2015, the Odisha state government in India announced that it planned to use e-marking for all Plus II papers from 2016. == Analytics == The importance of self-assessment through tools made available on educational technology platforms has been growing. Self-assessment in education technology relies on students analyzing their strengths, weaknesses, and areas where improvement is possible to set realistic goals in learning, improve their educational performance, and track their progress. One of the unique tools for self-assessment made possible by educational technology is analytics: data gathered on the student's activities on the learning platform, drawn into meaningful patterns that lead to valid conclusions, usually through the medium of data visualization such as graphs. Learning analytics is the field that focuses on analyzing and reporting data about students' activities in order to facilitate learning. == Expenditure == The five key sectors of the e-learning industry are consulting, content, technologies, services, and support. Worldwide, e-learning was estimated in 2000 to be over $48 billion according to conservative estimates. Commercial growth has been brisk. In 2014, worldwide commercial market activity was estimated at $6 billion of venture capital over the previous five years,: 38  with self-paced learning generating $35.6 billion in 2011.: 4  North American e-learning generated $23.3 billion in revenue in 2013, with a 9% growth rate in cloud-based authoring tools and learning platforms.: 19  == See also == == References == == Further reading == Betts, Kristen, et al. "Historical review of distance and online education from 1700s to 2021 in the United States: Instructional design and pivotal pedagogy in higher education." Journal of Online Learning Research and Practice 8.1 (2021), pp. 3–55.
== External links == Media related to Educational technology at Wikimedia Commons "Schools of the Future: Learning On-Line" 1994 documentary from KETC
https://en.wikipedia.org/wiki/Educational_technology
Information and communications technology (ICT) is an extensional term for information technology (IT) that stresses the role of unified communications and the integration of telecommunications (telephone lines and wireless signals) and computers, as well as necessary enterprise software, middleware, storage, and audiovisual systems, that enable users to access, store, transmit, understand and manipulate information. ICT is also used to refer to the convergence of audiovisuals and telephone networks with computer networks through a single cabling or link system. There are large economic incentives to merge the telephone networks with the computer network system using a single unified system of cabling, signal distribution, and management. ICT is an umbrella term that includes any communication device, encompassing radio, television, cell phones, computer and network hardware, satellite systems and so on, as well as the various services and appliances associated with them, such as video conferencing and distance learning. ICT also includes analog technology, such as paper communication, and any mode that transmits communication. ICT is a broad subject and the concepts are evolving. It covers any product that will store, retrieve, manipulate, process, transmit, or receive information electronically in a digital form (e.g., personal computers including smartphones, digital television, email, or robots). The Skills Framework for the Information Age is one of many models for describing and managing competencies for ICT professionals in the 21st century. == Etymology == The phrase "information and communication technologies" has been used by academic researchers since the 1980s. The abbreviation "ICT" became popular after it was used in a report to the UK government by Dennis Stevenson in 1997, and then in the revised National Curriculum for England, Wales and Northern Ireland in 2000.
However, in 2012, the Royal Society recommended that the use of the term "ICT" should be discontinued in British schools "as it has attracted too many negative connotations". From 2014, the National Curriculum has used the word computing, which reflects the addition of computer programming into the curriculum. Variations of the phrase have spread worldwide. The United Nations has created a "United Nations Information and Communication Technologies Task Force" and an internal "Office of Information and Communications Technology". == Monetization == The money spent on IT worldwide has been estimated as US$3.8 trillion in 2017 and has been growing at less than 5% per year since 2009. The estimated 2018 growth of the entire ICT sector is 5%. The biggest growth, 16%, is expected in the area of new technologies (IoT, robotics, AR/VR, and AI). The 2014 IT budget of the US federal government was nearly $82 billion. IT costs, as a percentage of corporate revenue, have grown 50% since 2002, putting a strain on IT budgets. When looking at current companies' IT budgets, 75% are recurrent costs, used to "keep the lights on" in the IT department, and 25% are the cost of new initiatives for technology development. The average IT budget breaks down as follows: 34% personnel costs (internal; 31% after correction), 16% software costs (external/purchasing category; 29% after correction), 33% hardware costs (external/purchasing category; 26% after correction), and 17% costs of external service providers (external/services; 14% after correction). The estimated amount of money spent in 2022 is just over US$6 trillion. == Technological capacity == The world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986 to 15.8 in 1993, over 54.5 in 2000, and to 295 (optimally compressed) exabytes in 2007, and some 5 zettabytes in 2014.
This is the informational equivalent to 1.25 stacks of CD-ROM from the earth to the moon in 2007, and the equivalent of 4,500 stacks of printed books from the earth to the sun in 2014. The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986, 715 (optimally compressed) exabytes in 1993, 1.2 (optimally compressed) zettabytes in 2000, and 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of (optimally compressed) information in 1986, 471 petabytes in 1993, 2.2 (optimally compressed) exabytes in 2000, 65 (optimally compressed) exabytes in 2007, and some 100 exabytes in 2014. The world's technological capacity to compute information with humanly guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986, to 6.4 x 10^12 MIPS in 2007. == Sector in the OECD == The following is a list of OECD countries by share of ICT sector in total value added in 2013. == ICT Development Index == The ICT Development Index ranks and compares the level of ICT use and access across the various countries around the world. In 2014 ITU (International Telecommunication Union) released the latest rankings of the IDI, with Denmark attaining the top spot, followed by South Korea. The top 30 countries in the rankings include most high-income countries where the quality of life is higher than average, which includes countries from Europe and other regions such as "Australia, Bahrain, Canada, Japan, Macao (China), New Zealand, Singapore, and the United States; almost all countries surveyed improved their IDI ranking this year." 
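The storage-capacity figures quoted above imply a steep compound growth rate. As a quick back-of-the-envelope check, here is a small sketch using only the 1986 and 2007 figures from the text (the function name is illustrative):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two data points."""
    return (end / start) ** (1 / years) - 1

# World storage capacity: 2.6 optimally compressed exabytes in 1986,
# 295 in 2007 (figures quoted in the text above).
rate = cagr(2.6, 295, 2007 - 1986)
print(f"{rate:.1%}")  # 25.3%
```

At roughly 25% per year, storage capacity doubled about every three years over that period.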
== The WSIS process and development goals == On 21 December 2001, the United Nations General Assembly approved Resolution 56/183, endorsing the holding of the World Summit on the Information Society (WSIS) to discuss the opportunities and challenges facing today's information society. According to this resolution, the General Assembly related the Summit to the United Nations Millennium Declaration's goal of implementing ICT to achieve the Millennium Development Goals. It also emphasized a multi-stakeholder approach to achieve these goals, involving all stakeholders, including civil society and the private sector, in addition to governments. To help anchor and expand ICT to every habitable part of the world, "2015 is the deadline for achievements of the UN Millennium Development Goals (MDGs), which global leaders agreed upon in the year 2000." == In education == There is evidence that, to be effective in education, ICT must be fully integrated into the pedagogy. Specifically, when teaching literacy and math, using ICT in combination with Writing to Learn produces better results than traditional methods alone or ICT alone. The United Nations Educational, Scientific and Cultural Organisation (UNESCO), a specialized agency of the United Nations, has made integrating ICT into education part of its efforts to ensure equity and access to education. The following, taken directly from a UNESCO publication on educational ICT, explains the organization's position on the initiative: Information and Communication Technology can contribute to universal access to education, equity in education, the delivery of quality learning and teaching, teachers' professional development and more efficient education management, governance, and administration. UNESCO takes a holistic and comprehensive approach to promote ICT in education. Access, inclusion, and quality are among the main challenges they can address.
The Organization's Intersectoral Platform for ICT in education focuses on these issues through the joint work of three of its sectors: Communication & Information, Education and Science. Despite the power of computers to enhance and reform teaching and learning practices, improper implementation is a widespread issue that increased funding and technological advances alone have not solved, and there is little evidence that teachers and tutors are properly integrating ICT into everyday learning. Intrinsic barriers, such as a belief in more traditional teaching practices, individual attitudes towards computers in education, and teachers' own comfort with computers and ability to use them, all result in varying effectiveness in the integration of ICT in the classroom. === Mobile learning for refugees === School environments play an important role in facilitating language learning. However, language and literacy barriers are obstacles preventing refugees from accessing and attending school, especially outside camp settings. Mobile-assisted language learning apps are key tools for language learning. Mobile solutions can provide support for refugees' language and literacy challenges in three main areas: literacy development, foreign language learning and translations. Mobile technology is relevant because communicative practice is a key asset for refugees and immigrants as they immerse themselves in a new language and a new society. Well-designed mobile language learning activities connect refugees with mainstream cultures, helping them learn in authentic contexts. === Developing countries === ==== Africa ==== ICT has been employed as an educational enhancement in Sub-Saharan Africa since the 1960s. Beginning with television and radio, it extended the reach of education from the classroom to the living room, and to geographical areas that had been beyond the reach of the traditional classroom.
As the technology evolved and became more widely used, efforts in Sub-Saharan Africa were also expanded. In the 1990s a massive effort to push computer hardware and software into schools was undertaken, with the goal of familiarizing both students and teachers with computers in the classroom. Since then, multiple projects have endeavoured to continue the expansion of ICT's reach in the region, including the One Laptop Per Child (OLPC) project, which by 2015 had distributed over 2.4 million laptops to nearly two million students and teachers. The inclusion of ICT in the classroom, often referred to as M-learning, has expanded the reach of educators and improved their ability to track student progress in Sub-Saharan Africa. In particular, the mobile phone has been most important in this effort. Mobile phone use is widespread, and mobile networks cover a wider area than internet networks in the region. The devices are familiar to students, teachers, and parents, and allow increased communication and access to educational materials. In addition to benefits for students, M-learning also offers the opportunity for better teacher training, which leads to a more consistent curriculum across the educational service area. In 2011, UNESCO started a yearly symposium called Mobile Learning Week with the purpose of gathering stakeholders to discuss the M-learning initiative. Implementation is not without its challenges. While mobile phone and internet use are increasing much more rapidly in Sub-Saharan Africa than in other developing countries, the progress is still slow compared to the rest of the developed world, with smartphone penetration only expected to reach 20% by 2017. Additionally, there are gender, social, and geo-political barriers to educational access, and the severity of these barriers varies greatly by country.
Overall, 29.6 million children in Sub-Saharan Africa were not in school in the year 2012, owing not just to the geographical divide, but also to political instability, the importance of social origins, social structure, and gender inequality. Once in school, students also face barriers to quality education, such as teacher competency, training and preparedness, access to educational materials, and lack of information management. ==== Growth in modern society and developing countries ==== In modern society, ICT is ever-present, with over three billion people having access to the Internet. With approximately 8 out of 10 Internet users owning a smartphone, information and data are increasing by leaps and bounds. This rapid growth, especially in developing countries, has led ICT to become a keystone of everyday life, in which life without some facet of technology renders most clerical and routine work tasks dysfunctional. The most recent authoritative data, released in 2014, shows "that Internet use continues to grow steadily, at 6.6% globally in 2014 (3.3% in developed countries, 8.7% in the developing world); the number of Internet users in developing countries has doubled in five years (2009–2014), with two-thirds of all people online now living in the developing world." ==== Limitations ==== However, hurdles are still large. "Of the 4.3 billion people not yet using the Internet, 90% live in developing countries. In the world's 42 Least Connected Countries (LCCs), which are home to 2.5 billion people, access to ICTs remains largely out of reach, particularly for these countries' large rural populations." ICT has yet to penetrate the remote areas of some countries, with many developing countries lacking any type of Internet access. This also includes the availability of telephone lines, particularly cellular coverage, and other forms of electronic transmission of data.
The latest "Measuring the Information Society Report" cautiously stated that the increase in the aforementioned cellular data coverage is ostensible, as "many users have multiple subscriptions, with global growth figures sometimes translating into little real improvement in the level of connectivity of those at the very bottom of the pyramid; an estimated 450 million people worldwide live in places which are still out of reach of mobile cellular service." Favourably, the gap between access to the Internet and mobile coverage has decreased substantially in the last fifteen years, in which "2015 was the deadline for achievements of the UN Millennium Development Goals (MDGs), which global leaders agreed upon in the year 2000, and the new data show ICT progress and highlight remaining gaps." ICT continues to take on new forms, with nanotechnology set to usher in a new wave of ICT electronics and gadgets. ICT's newest additions to the modern electronic world include smartwatches, such as the Apple Watch, smart wristbands such as the Nike+ FuelBand, and smart TVs such as Google TV. With desktops soon becoming part of a bygone era, and laptops becoming the preferred method of computing, ICT continues to insinuate and alter itself in the ever-changing globe. Information communication technologies play a role in facilitating accelerated pluralism in new social movements today. The internet, according to Bruce Bimber, is "accelerating the process of issue group formation and action"; he coined the term accelerated pluralism to explain this new phenomenon. ICTs are tools for "enabling social movement leaders and empowering dictators", in effect promoting societal change. ICTs can be used to garner grassroots support for a cause, since the internet allows for political discourse and direct interventions with state policy, as well as to change the way complaints from the populace are handled by governments.
Furthermore, ICTs in a household are associated with women rejecting justifications for intimate partner violence. According to a study published in 2017, this is likely because "access to ICTs exposes women to different ways of life and different notions about women's role in society and the household, especially in culturally conservative regions where traditional gender expectations contrast observed alternatives." == In health care == Telehealth A review found that, in general, outcomes of such ICT use (which was envisioned as early as 1925) are or can be as good as in-person care, with health care use staying similar. Artificial intelligence in healthcare Software for COVID-19 pandemic mitigation mHealth Clinical decision support system and expert system Health administration and hospital information system Other health information technology and health informatics == In science == Applications of ICTs in science, research and development, and academia include: Internet research Online research methods Science communication and communication between scientists Scholarly databases Applied metascience == Models of access == Scholar Mark Warschauer defines a "models of access" framework for analyzing ICT accessibility. In the second chapter of his book, Technology and Social Inclusion: Rethinking the Digital Divide, he describes three models of access to ICTs: devices, conduits, and literacy. Devices and conduits are the most common descriptors for access to ICTs, but they are insufficient for meaningful access without the third model, literacy.
Combined, these three models roughly incorporate all twelve of the criteria of "Real Access" to ICT use, conceptualized by a non-profit organization called Bridges.org in 2005: Physical access to technology Appropriateness of technology Affordability of technology and technology use Human capacity and training Locally relevant content, applications, and services Integration into daily routines Socio-cultural factors Trust in technology Local economic environment Macro-economic environment Legal and regulatory framework Political will and public support === Devices === The most straightforward model of access for ICT in Mark Warschauer's theory is devices. In this model, access is defined most simply as the ownership of a device such as a phone or computer. Warschauer identifies many flaws with this model, including its inability to account for additional costs of ownership such as software, access to telecommunications, knowledge gaps surrounding computer use, and the role of government regulation in some countries. Therefore, Warschauer argues that considering only devices understates the magnitude of digital inequality. For example, the Pew Research Center notes that 96% of Americans own a smartphone, although most scholars in this field would contend that comprehensive access to ICT in the United States is likely much lower than that. === Conduits === A conduit requires a connection to a supply line, which for ICT could be a telephone line or Internet line. Accessing the supply requires investment in the proper infrastructure from a commercial company or local government and recurring payments from the user once the line is set up. For this reason, conduits usually divide people based on their geographic locations. As a Pew Research Center poll reports, Americans in rural areas are 12% less likely to have broadband access than other Americans, thereby making them less likely to own the devices. 
Additionally, these costs can be prohibitive to lower-income families accessing ICTs. These difficulties have led to a shift toward mobile technology; fewer people are purchasing broadband connection and are instead relying on their smartphones for Internet access, which can be found for free at public places such as libraries. Indeed, smartphones are on the rise, with 37% of Americans using smartphones as their primary medium for internet access and 96% of Americans owning a smartphone. === Literacy === In 1981, Sylvia Scribner and Michael Cole studied a tribe in Liberia, the Vai people, who have their own local script. Since about half of those literate in Vai have never had formal schooling, Scribner and Cole were able to test more than 1,000 subjects to measure the mental capabilities of literates over non-literates. This research, which they laid out in their book The Psychology of Literacy, allowed them to study whether the literacy divide exists at the individual level. Warschauer applied their literacy research to ICT literacy as part of his model of ICT access. Scribner and Cole found no generalizable cognitive benefits from Vai literacy; instead, individual differences on cognitive tasks were due to other factors, like schooling or living environment. The results suggested that there is "no single construct of literacy that divides people into two cognitive camps; [...] rather, there are gradations and types of literacies, with a range of benefits closely related to the specific functions of literacy practices." Furthermore, literacy and social development are intertwined, and the literacy divide does not exist on the individual level. Warschauer draws on Scribner and Cole's research to argue that ICT literacy functions similarly to literacy acquisition, as they both require resources rather than a narrow cognitive skill. 
Conclusions about literacy serve as the basis for a theory of the digital divide and ICT access, as detailed below: There is not just one type of ICT access, but many types. The meaning and value of access varies in particular social contexts. Access exists in gradations rather than in a bipolar opposition. Computer and Internet use brings no automatic benefit outside of its particular functions. ICT use is a social practice, involving access to physical artifacts, content, skills, and social support. And acquisition of ICT access is a matter not only of education but also of power. Therefore, Warschauer concludes that access to ICT cannot rest on devices or conduits alone; it must also engage physical, digital, human, and social resources. Each of these categories of resources has iterative relations with ICT use. If ICT is used well, it can promote these resources, but if it is used poorly, it can contribute to a cycle of underdevelopment and exclusion. == Environmental impact == === Progress during the century === In the early 21st century a rapid development of ICT services and electronic devices took place, in which the number of internet servers multiplied by a factor of 1,000 to 395 million, and it is still increasing. This increase has been explained by reference to Moore's law, taken here to mean that ICT development increases by 16–20% every year, so that it doubles every four to five years. Alongside this development and the high investment driven by increasing demand for ICT-capable products came a high environmental impact: by 2008, software and hardware development and production were already causing the same amount of CO2 emissions as global air travel. There are two sides to ICT: positive environmental possibilities and a shadow side. On the positive side, studies have shown, for instance, that in the OECD countries a 1% increase in ICT capital causes a 0.235% reduction in energy use.
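The doubling claim above is easy to verify: a quantity growing at a fixed annual rate r doubles in ln 2 / ln(1 + r) years. A minimal sketch (the function name is illustrative):

```python
import math

def doubling_time(annual_growth):
    """Years for a quantity growing at a fixed annual rate to double."""
    return math.log(2) / math.log(1 + annual_growth)

# The 16-20% annual growth quoted above:
print(round(doubling_time(0.16), 1))  # 4.7
print(round(doubling_time(0.20), 1))  # 3.8
```

So 16–20% annual growth is broadly consistent with the four-to-five-year doubling cited above.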
On the other hand, the more digitization takes place, the more energy is consumed: for OECD countries, a 1% increase in internet users causes a rise of 0.026% in electricity consumption per capita, and for emerging countries the impact is more than four times as high. Current scientific forecasts show an increase to as much as 30,700 TWh in 2030, which is 20 times more than in 2010. === Implication === To tackle the environmental issues of ICT, the EU commission plans proper monitoring and reporting of the GHG emissions of different ICT platforms, countries, and infrastructure in general. Furthermore, the establishment of international norms for reporting and compliance is promoted to foster transparency in this sector. Moreover, scientists suggest making more ICT investments to exploit the potential of ICT to alleviate CO2 emissions in general, and implementing more effective coordination of ICT, energy, and growth policies. Consequently, applying the principle of the Coase theorem makes sense: it recommends making investments where the marginal costs of avoiding emissions are lowest, which is in developing countries with comparatively lower technological standards and policies than high-tech countries. With these measures, ICT can reduce environmental damage from economic growth and energy consumption by facilitating communication and infrastructure. === In problem-solving === ICTs could also be used to address environmental issues, including climate change, in various ways, including ways beyond education. == See also == == References == == Sources == This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 IGO. Text taken from A Lifeline to Learning: Leveraging Mobile Technology to Support Education for Refugees, UNESCO. == Further reading == == External links == ICT Facts and Figures ICT Industry Statistics Teciza.net
https://en.wikipedia.org/wiki/Information_and_communications_technology
Push technology, also known as server push, refers to a communication method in which the communication is initiated by a server rather than a client. This approach differs from the "pull" method, in which the communication is initiated by a client. In push technology, clients can express their preferences for certain types of information or data, typically through a process known as the publish–subscribe model. In this model, a client "subscribes" to specific information channels hosted by a server. When new content becomes available on these channels, the server automatically sends, or "pushes," this information to the subscribed client. Under certain conditions, such as restrictive security policies that block incoming HTTP requests, push technology is sometimes simulated using a technique called polling. In these cases, the client periodically checks with the server to see if new information is available, rather than receiving automatic updates. == General use == Synchronous conferencing and instant messaging are examples of push services. Chat messages and sometimes files are pushed to the user as soon as they are received by the messaging service. Both decentralized peer-to-peer programs (such as WASTE) and centralized programs (such as IRC or XMPP) allow pushing files, which means the sender initiates the data transfer rather than the recipient. Email may also be a push system: SMTP is a push protocol (see Push e-mail). However, the last step—from mail server to desktop computer—typically uses a pull protocol like POP3 or IMAP. Modern e-mail clients make this step seem instantaneous by repeatedly polling the mail server, frequently checking it for new mail. The IMAP protocol includes the IDLE command, which allows the server to tell the client when new messages arrive. The original BlackBerry was the first popular example of push-email in a wireless context. Another example is the PointCast Network, which was widely covered in the 1990s. 
It delivered news and stock market data as a screensaver. Both Netscape and Microsoft integrated push technology through the Channel Definition Format (CDF) into their software at the height of the browser wars, but it was never very popular. CDF faded away and was removed from the browsers of the time, replaced in the 2000s with RSS (a pull system). Other uses of push-enabled web applications include software update distribution ("push updates"), market data distribution (stock tickers), online chat/messaging systems (webchat), auctions, online betting and gaming, sport results, monitoring consoles, and sensor network monitoring. == Examples == === Web push === The Web push proposal of the Internet Engineering Task Force is a simple protocol using HTTP version 2 to deliver real-time events, such as incoming calls or messages, which can be delivered (or "pushed") in a timely fashion. The protocol consolidates all real-time events into a single session, which ensures more efficient use of network and radio resources. A single service consolidates all events, distributing those events to applications as they arrive. This requires just one session, avoiding duplicated overhead costs. Web Notifications are part of the W3C standard and define an API for end-user notifications. A notification allows alerting the user of an event, such as the delivery of an email, outside the context of a web page. As part of this standard, the Push API is fully implemented in Chrome, Firefox, and Edge, and partially implemented in Safari as of February 2023. === HTTP server push === HTTP server push (also known as HTTP streaming) is a mechanism for sending unsolicited (asynchronous) data from a web server to a web browser. HTTP server push can be achieved through any of several mechanisms. As part of HTML5, the WebSocket API allows a web server and client to communicate over a full-duplex TCP connection. 
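The push idea behind such a full-duplex channel can be sketched in a few lines of Python using only the standard library. This is a toy illustration of server-initiated delivery over a raw socket pair, not an implementation of the WebSocket protocol; the function names and event strings are invented for the example.

```python
import socket
import threading
import time

def push_server(conn, events):
    """Server side: keep the connection open and push each event as soon
    as it occurs, instead of waiting for the client to request anything."""
    for event in events:
        time.sleep(0.01)                    # an event "occurs" asynchronously
        conn.sendall((event + "\n").encode())
    conn.close()                            # signal end of stream

def push_client(conn):
    """Client side: block on the open connection and receive events the
    moment the server sends them -- no polling loop required."""
    received, buf = [], b""
    while True:
        chunk = conn.recv(1024)
        if not chunk:                       # server closed the connection
            break
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            received.append(line.decode())
    return received

server_sock, client_sock = socket.socketpair()
t = threading.Thread(target=push_server,
                     args=(server_sock, ["price:101", "price:99", "news:update"]))
t.start()
events = push_client(client_sock)
t.join()
print(events)   # all three events arrive without the client ever polling
```

The key property is that the connection stays open and the server decides when data flows, which is exactly what distinguishes push from the request/response pull model.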
Generally, the web server does not terminate a connection after response data has been served to a client. The web server leaves the connection open so that if an event occurs (for example, a change in internal data which needs to be reported to one or multiple clients), it can be sent out immediately; otherwise, the event would have to be queued until the client's next request is received. Most web servers offer this functionality via CGI (e.g., Non-Parsed Headers scripts on Apache HTTP Server). The underlying mechanism for this approach is chunked transfer encoding. Another mechanism is related to a special MIME type called multipart/x-mixed-replace, which was introduced by Netscape in 1995. Web browsers interpret this as a document that changes whenever the server pushes a new version to the client. It is still supported by Firefox, Opera, and Safari today, but it is ignored by Internet Explorer and is only partially supported by Chrome. It can be applied to HTML documents, and also to streaming images in webcam applications. The WHATWG Web Applications 1.0 proposal includes a mechanism to push content to the client. On September 1, 2006, the Opera web browser implemented this new experimental system in a feature called "Server-Sent Events". It is now part of the HTML5 standard. === Pushlet === In this technique, the server takes advantage of persistent HTTP connections, leaving the response perpetually "open" (i.e., the server never terminates the response), effectively fooling the browser into remaining in "loading" mode after the initial page load could be considered complete. The server then periodically sends snippets of JavaScript to update the content of the page, thereby achieving push capability. By using this technique, the client doesn't need Java applets or other plug-ins in order to keep an open connection to the server; the client is automatically notified about new events, pushed by the server. 
One serious drawback to this method, however, is the lack of control the server has over the browser timing out; a page refresh is always necessary if a timeout occurs on the browser end. === Long polling === Long polling is itself not a true push; it is a variation of the traditional polling technique that allows emulating a push mechanism under circumstances where a real push is not possible, such as sites with security policies that require rejection of incoming HTTP requests. With long polling, the client requests more information from the server exactly as in normal polling, but with the expectation that the server may not respond immediately. If the server has no new information for the client when the poll is received, then instead of sending an empty response, the server holds the request open and waits for response information to become available. Once it does have new information, the server immediately sends an HTTP response to the client, completing the open HTTP request. Upon receipt of the server response, the client often immediately issues another server request. In this way the usual response latency (the time between when the information first becomes available and the next client request) otherwise associated with polling clients is eliminated. For example, BOSH is a popular, long-lived HTTP technique used as a long-polling alternative to a continuous TCP connection when such a connection is difficult or impossible to employ directly (e.g., in a web browser); it is also an underlying technology in XMPP, which Apple uses for its iCloud push support. === Flash XML Socket relays === This technique, used by chat applications, makes use of the XML Socket object in a single-pixel Adobe Flash movie. Under the control of JavaScript, the client establishes a TCP connection to a unidirectional relay on the server. The relay server does not read anything from this socket; instead, it immediately sends the client a unique identifier. 
Next, the client makes an HTTP request to the web server, including this identifier with it. The web application can then push messages addressed to the client to a local interface of the relay server, which relays them over the Flash socket. The advantage of this approach is that it exploits the natural read-write asymmetry that is typical of many web applications, including chat, and as a consequence it offers high efficiency. Since it does not accept data on outgoing sockets, the relay server does not need to poll outgoing TCP connections at all, making it possible to hold open tens of thousands of concurrent connections. In this model, the limit to scale is the TCP stack of the underlying server operating system. === Reliable Group Data Delivery (RGDD) === In services such as cloud computing, to increase reliability and availability of data, it is usually pushed (replicated) to several machines. For example, the Hadoop Distributed File System (HDFS) makes two extra copies of any object stored. RGDD focuses on efficiently casting an object from one location to many while saving bandwidth by sending a minimal number of copies (only one in the best case) of the object over any link across the network. For example, Datacast is a scheme for delivery to many nodes inside data centers that relies on regular and structured topologies, and DCCast is a similar approach for delivery across data centers. === Push notification === A push notification is a message that is "pushed" from a back-end server or application to a user interface, e.g. mobile applications or desktop applications. Apple introduced push notifications for iPhone in 2009, and in 2010 Google released "Google Cloud to Device Messaging" (superseded by Google Cloud Messaging and then by Firebase Cloud Messaging). 
In November 2015, Microsoft announced that the Windows Notification Service would be expanded to make use of the Universal Windows Platform architecture, allowing for push data to be sent to Windows 10, Windows 10 Mobile, Xbox, and other supported platforms using universal API calls and POST requests. Push notifications are mainly divided into two approaches: local notifications and remote notifications. For local notifications, the application schedules the notification with the local device's OS. The application sets a timer in the application itself, provided it is able to continuously run in the background. When the event's scheduled time is reached, or the event's programmed condition is met, the message is displayed in the application's user interface. Remote notifications are handled by a remote server. Under this scenario, the client application needs to be registered on the server with a unique key (e.g., a UUID). The server then fires the message against the unique key to deliver it to the client via an agreed client/server protocol such as HTTP or XMPP, and the client displays the message received. When the push notification arrives, it can transmit short notifications and messages, set badges on application icons, blink or continuously light up the notification LED, or play alert sounds to attract the user's attention. Push notifications are usually used by applications to bring information to users' attention. The content of the messages can be classified in the following example categories: Chat messages from a messaging application such as Facebook Messenger sent by other users. Vendor special offers: A vendor may want to advertise their offers to customers. Event reminders: Some applications may allow the customer to create a reminder or alert for a specific time. Subscribed topic changes: Users may want to get updates regarding the weather in their location, or monitor a web page to track changes, for instance. 
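The remote-notification flow described above (register with a unique key, then have the server fire messages against that key) can be sketched as a toy in-process broker. The class and method names are invented for illustration; real services such as APNs or FCM expose different APIs and deliver over persistent network connections.

```python
import uuid

class PushService:
    """Toy notification broker: a client registers and receives a unique
    key; the server later fires messages against that key so that only
    the matching client sees them."""

    def __init__(self):
        self.inboxes = {}            # unique key -> pending messages

    def register(self):
        token = str(uuid.uuid4())    # unique key (a UUID, as in the text)
        self.inboxes[token] = []
        return token

    def push(self, token, message):
        """Fire a message against one client's unique key."""
        if token not in self.inboxes:
            raise KeyError("unknown device token")
        self.inboxes[token].append(message)

    def fetch(self, token):
        """Drain and return the pending messages for one client."""
        messages, self.inboxes[token] = self.inboxes[token], []
        return messages

service = PushService()
phone = service.register()
service.push(phone, "Chat: you have a new message")
service.push(phone, "Reminder: meeting at 15:00")
print(service.fetch(phone))   # the two messages, in delivery order
```

In a real deployment the `fetch` step would disappear: the server would deliver over HTTP or XMPP the moment `push` is called, which is what makes the mechanism push rather than pull.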
Real-time push notifications may raise privacy issues since they can be used to bind virtual identities of social network pseudonyms to the real identities of the smartphone owners. The use of unnecessary push notifications for promotional purposes has been criticized as an example of attention theft. == See also == == References == == External links == W3C Push Workshop. A 1997 workshop that discussed push technology and some early examples thereof HTTP Streaming with Ajax A description of HTTP Streaming from the Ajax Patterns website The Web Socket API candidate recommendation HTML5 Server-Sent Events draft specification
https://en.wikipedia.org/wiki/Push_technology
The history of technology is the history of the invention of tools and techniques by humans. Technology includes methods ranging from simple stone tools to the complex genetic engineering and information technology that has emerged since the 1980s. The term technology comes from the Greek word techne, meaning art and craft, and the word logos, meaning word and speech. It was first used to describe applied arts, but it is now used to describe advancements and changes that affect the environment around us. New knowledge has enabled people to create new tools, and conversely, many scientific endeavors are made possible by new technologies, for example scientific instruments which allow us to study nature in more detail than our natural senses allow. Since much of technology is applied science, technical history is connected to the history of science. Since technology uses resources, technical history is tightly connected to economic history. From those resources, technology produces other resources, including technological artifacts used in everyday life. Technological change affects, and is affected by, a society's cultural traditions. It is a force for economic growth and a means to develop and project economic, political, and military power and wealth. == Measuring technological progress == Many sociologists and anthropologists have created social theories dealing with social and cultural evolution. Some, like Lewis H. Morgan, Leslie White, and Gerhard Lenski have declared technological progress to be the primary factor driving the development of human civilization. Morgan's concept of three major stages of social evolution (savagery, barbarism, and civilization) can be divided by technological milestones, such as fire. White argued the measure by which to judge the evolution of culture is energy. For White, "the primary function of culture" is to "harness and control energy." 
White differentiates between five stages of human development: In the first, people use the energy of their own muscles. In the second, they use the energy of domesticated animals. In the third, they use the energy of plants (agricultural revolution). In the fourth, they learn to use the energy of natural resources: coal, oil, gas. In the fifth, they harness nuclear energy. White introduced the formula P=E/T, where P is the development index, E is a measure of energy consumed, and T is the measure of the efficiency of technical factors using the energy. In his own words, "culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased". Nikolai Kardashev extrapolated his theory, creating the Kardashev scale, which categorizes the energy use of advanced civilizations. Lenski's approach focuses on information. The more information and knowledge (especially allowing the shaping of natural environment) a given society has, the more advanced it is. He identifies four stages of human development, based on advances in the history of communication. In the first stage, information is passed by genes. In the second, when humans gain sentience, they can learn and pass information through experience. In the third, the humans start using signs and develop logic. In the fourth, they can create symbols, develop language and writing. Advancements in communications technology translate into advancements in the economic system and political system, distribution of wealth, social inequality and other spheres of social life. He also differentiates societies based on their level of technology, communication, and economy: hunter-gatherer, simple agricultural, advanced agricultural, industrial, special (such as fishing societies). In economics, productivity is a measure of technological progress. 
Productivity increases when fewer inputs (classically labor and capital but some measures include energy and materials) are used in the production of a unit of output. Another indicator of technological progress is the development of new products and services, which is necessary to offset unemployment that would otherwise result as labor inputs are reduced. In developed countries productivity growth has been slowing since the late 1970s; however, productivity growth was higher in some economic sectors, such as manufacturing. For example, employment in manufacturing in the United States declined from over 30% in the 1940s to just over 10% 70 years later. Similar changes occurred in other developed countries. This stage is referred to as post-industrial. In the late 1970s, sociologists and anthropologists like Alvin Toffler (author of Future Shock), Daniel Bell, and John Naisbitt advanced theories of post-industrial societies, arguing that the current era of industrial society is coming to an end, and services and information are becoming more important than industry and goods. Some extreme visions of the post-industrial society, especially in fiction, are strikingly similar to the visions of near and post-singularity societies. == By period and geography == The following is a summary of the history of technology by time period and geography: === Prehistory === ==== Stone Age ==== During most of the Paleolithic – the bulk of the Stone Age – all humans had a lifestyle which involved limited tools and few permanent settlements. The first major technologies were tied to survival, hunting, and food preparation. Stone tools and weapons, fire, and clothing were technological developments of major importance during this period. Human ancestors have been using stone and other tools since long before the emergence of Homo sapiens approximately 300,000 years ago. 
The earliest direct evidence of tool usage was found in Ethiopia within the Great Rift Valley, dating back to 2.5 million years ago. The earliest methods of stone tool making, known as the Oldowan "industry", date back to at least 2.3 million years ago. This era of stone tool use is called the Paleolithic, or "Old stone age", and spans all of human history up to the development of agriculture approximately 12,000 years ago. To make a stone tool, a "core" of hard stone with specific flaking properties (such as flint) was struck with a hammerstone. This flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. These tools greatly aided the early humans in their hunter-gatherer lifestyle to perform a variety of tasks including butchering carcasses (and breaking bones to get at the marrow); chopping wood; cracking open nuts; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. The earliest stone tools were crude, being little more than a fractured rock. In the Acheulian era, beginning approximately 1.65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. This early Stone Age is described as the Lower Paleolithic. The Middle Paleolithic, approximately 300,000 years ago, saw the introduction of the prepared-core technique, where multiple blades could be rapidly formed from a single core stone. The Upper Paleolithic, beginning approximately 40,000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. The end of the last Ice Age about 10,000 years ago is taken as the end point of the Upper Paleolithic and the beginning of the Epipaleolithic / Mesolithic. The Mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. 
The later Stone Age, during which the rudiments of agricultural technology were developed, is called the Neolithic period. During this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. The polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. These stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. Stone Age cultures developed music and engaged in organized warfare. Stone Age humans developed ocean-worthy outrigger canoe technology, leading to migration across the Malay Archipelago, across the Indian Ocean to Madagascar and also across the Pacific Ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. Although Paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. Such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the Venus of Willendorf. Human remains also provide direct evidence, both through the examination of bones, and the study of mummies. Scientists and historians have been able to form significant inferences about the lifestyle and culture of various prehistoric peoples, and especially their technology. === Ancient === ==== Copper and Bronze Ages ==== Metallic copper occurs on the surface of weathered copper ore deposits and copper was used before copper smelting was known. Copper smelting is believed to have originated when the technology of pottery kilns allowed sufficiently high temperatures. 
The concentration of various elements such as arsenic increases with depth in copper ore deposits, and smelting of these ores yields arsenical bronze, which can be sufficiently work hardened to be suitable for making tools. Bronze is an alloy of copper with tin; because tin is found in relatively few deposits globally, a long time elapsed before true tin bronze became widespread. (See: Tin sources and trade in ancient times) Bronze was a major advancement over stone as a material for making tools, both because of its mechanical properties like strength and ductility and because it could be cast in molds to make intricately shaped objects. Bronze significantly advanced shipbuilding technology with better tools and bronze nails. Bronze nails replaced the old method of attaching boards of the hull with cord woven through drilled holes. Better ships enabled long-distance trade and the advance of civilization. This technological trend apparently began in the Fertile Crescent and spread outward over time. These developments were not, and still are not, universal. The three-age system does not accurately describe the technology history of groups outside of Eurasia, and does not apply at all in the case of some isolated populations, such as the Spinifex People, the Sentinelese, and various Amazonian tribes, which still make use of Stone Age technology, and have not developed agricultural or metal technology. These groups preserve traditional customs in the face of global modernity, exhibiting a remarkable resistance to the rapid advancement of technology. ==== Iron Age ==== Before iron smelting was developed, the only iron was obtained from meteorites; such meteoric iron is usually identified by its nickel content. Meteoric iron was rare and valuable, but was sometimes used to make tools and other implements, such as fish hooks. The Iron Age involved the adoption of iron smelting technology. 
Iron generally replaced bronze and made it possible to produce tools which were stronger, lighter and cheaper to make than bronze equivalents. The raw materials to make iron, such as ore and limestone, are far more abundant than copper and especially tin ores. Consequently, iron was produced in many areas. It was not possible to mass manufacture steel or pure iron because of the high temperatures required. Furnaces could reach melting temperature but the crucibles and molds needed for melting and casting had not been developed. Steel could be produced by forging bloomery iron to reduce the carbon content in a somewhat controllable way, but steel produced by this method was not homogeneous. In many Eurasian cultures, the Iron Age was the last major step before the development of written language, though again this was not universally the case. In Europe, large hill forts were built either as a refuge in time of war or sometimes as permanent settlements. In some cases, existing forts from the Bronze Age were expanded and enlarged. The pace of land clearance using the more effective iron axes increased, providing more farmland to support the growing population. ==== Mesopotamia ==== Mesopotamia (modern Iraq) and its peoples (Sumerians, Akkadians, Assyrians and Babylonians) lived in cities from c. 4000 BC, and developed a sophisticated architecture in mud-brick and stone, including the use of the true arch. The walls of Babylon were so massive they were cited as a Wonder of the World. They developed extensive water systems; canals for transport and irrigation in the alluvial south, and catchment systems stretching for tens of kilometers in the hilly north. Their palaces had sophisticated drainage systems. Writing was invented in Mesopotamia, using the cuneiform script. Many records on clay tablets and stone inscriptions have survived. These civilizations were early adopters of bronze technologies which they used for tools, weapons and monumental statuary. 
By 1200 BC they could cast objects 5 m long in a single piece. Several of the six classic simple machines were invented in Mesopotamia. Mesopotamians have been credited with the invention of the wheel. The wheel and axle mechanism first appeared with the potter's wheel, invented in Mesopotamia (modern Iraq) during the 5th millennium BC. This led to the invention of the wheeled vehicle in Mesopotamia during the early 4th millennium BC. Depictions of wheeled wagons found on clay tablet pictographs at the Eanna district of Uruk are dated between 3700 and 3500 BC. The lever was used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC, and then in ancient Egyptian technology circa 2000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC. The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC). The Assyrian King Sennacherib (704–681 BC) claims to have invented automatic sluices and to have been the first to use water screw pumps, weighing up to 30 tons, which were cast using two-part clay molds rather than by the 'lost wax' process. The Jerwan Aqueduct (c. 688 BC) is made with stone arches and lined with waterproof concrete. The Babylonian astronomical diaries spanned 800 years. They enabled meticulous astronomers to plot the motions of the planets and to predict eclipses. The earliest evidence of water wheels and watermills dates back to the ancient Near East in the 4th century BC, specifically in the Persian Empire before 350 BC, in the regions of Mesopotamia (Iraq) and Persia (Iran). This pioneering use of water power constituted the first human-devised motive force not to rely on muscle power (besides the sail). 
==== Egypt ==== The Egyptians, known for building pyramids centuries before the creation of modern tools, invented and used many simple machines, such as the ramp, to aid construction processes. Historians and archaeologists have found evidence that the pyramids were built using three of what are called the six simple machines, on which all machines are based. These machines are the inclined plane, the wedge, and the lever, which allowed the ancient Egyptians to move millions of limestone blocks which weighed approximately 3.5 tons (7,000 lbs.) each into place to create structures like the Great Pyramid of Giza, which is 481 feet (147 meters) high. They also made a writing medium similar to paper from papyrus, which Joshua Mark states is the foundation for modern paper. Papyrus is a plant (Cyperus papyrus) which grew in plentiful amounts in the Egyptian Delta and throughout the Nile River Valley during ancient times. The papyrus was harvested by field workers and brought to processing centers, where it was cut into thin strips. The strips were then laid out side by side and covered in plant resin. A second layer of strips was laid on perpendicularly, then both were pressed together until the sheet was dry. The sheets were then joined to form a roll and later used for writing. Egyptian society made several significant advances during dynastic periods in many areas of technology. According to Hossam Elanzeery, they were the first civilization to use timekeeping devices such as sundials, shadow clocks, and obelisks and successfully leveraged their knowledge of astronomy to create a calendar model that society still uses today. They developed shipbuilding technology that saw them progress from papyrus reed vessels to cedar wood ships while also pioneering the use of rope trusses and stem-mounted rudders. The Egyptians also used their knowledge of anatomy to lay the foundation for many modern medical techniques and practiced the earliest known version of neuroscience. 
Elanzeery also states that they used and furthered mathematical science, as evidenced in the building of the pyramids. Ancient Egyptians also invented and pioneered many food technologies that have become the basis of modern food technology processes. Based on paintings and reliefs found in tombs, as well as archaeological artifacts, scholars like Paul T. Nicholson believe that the Ancient Egyptians established systematic farming practices, engaged in cereal processing, brewed beer and baked bread, processed meat, practiced viticulture and created the basis for modern wine production, and created condiments to complement, preserve and mask the flavors of their food. ==== Indus Valley ==== The Indus Valley Civilization, situated in a resource-rich area (in modern Pakistan and northwestern India), is notable for its early application of city planning, sanitation technologies, and plumbing. Indus Valley construction and architecture, called 'Vaastu Shastra', suggests a thorough understanding of materials engineering, hydrology, and sanitation. ==== China ==== The Chinese made many first-known discoveries and developments. Major technological contributions from China include the earliest known form of the binary code and epigenetic sequencing, early seismological detectors, matches, paper, the helicopter rotor, the raised-relief map, the double-action piston pump, cast iron, water powered blast furnace bellows, the iron plough, the multi-tube seed drill, the wheelbarrow, the parachute, the compass, the rudder, the crossbow, the South Pointing Chariot and gunpowder. China also developed deep well drilling, which they used to extract brine for making salt. Some of these wells, which were as deep as 900 meters, produced natural gas which was used for evaporating brine. Other Chinese discoveries and inventions from the medieval period include block printing, movable type printing, phosphorescent paint, endless power chain drive and the clock escapement mechanism. 
The solid-fuel rocket was invented in China about 1150, nearly 200 years after the invention of gunpowder (which acted as the rocket's fuel). Decades before the West's age of exploration, the Chinese emperors of the Ming Dynasty also sent large fleets on maritime voyages, some reaching Africa. ==== Hellenistic Mediterranean ==== The Hellenistic period of Mediterranean history began in the 4th century BC with Alexander's conquests, which led to the emergence of a Hellenistic civilization representing a synthesis of Greek and Near-Eastern cultures in the Eastern Mediterranean region, including the Balkans, Levant and Egypt. With Ptolemaic Egypt as its intellectual center and Greek as the lingua franca, the Hellenistic civilization included Greek, Egyptian, Jewish, Persian and Phoenician scholars and engineers who wrote in Greek. Hellenistic engineers of the Eastern Mediterranean were responsible for a number of inventions and improvements to existing technology. The Hellenistic period saw a sharp increase in technological advancement, fostered by a climate of openness to new ideas, the blossoming of a mechanistic philosophy, and the establishment of the Library of Alexandria in Ptolemaic Egypt and its close association with the adjacent Museion. In contrast to the typically anonymous inventors of earlier ages, ingenious minds such as Archimedes, Philo of Byzantium, Heron, Ctesibius, and Archytas remain known by name to posterity. Agriculture, the primary mode of production and subsistence in any period before the modern age, was considerably advanced, along with its irrigation methods, by the invention and widespread application of a number of previously unknown water-lifting devices: the vertical water-wheel, the compartmented wheel, the water turbine, Archimedes' screw, the bucket-chain and pot-garland, the force pump, the suction pump, the double-action piston pump and quite possibly the chain pump.
In music, the water organ, invented by Ctesibius and subsequently improved, constituted the earliest instance of a keyboard instrument. In time-keeping, the introduction of the inflow clepsydra and its mechanization by the dial and pointer, the application of a feedback system and the escapement mechanism far superseded the earlier outflow clepsydra. Innovations in mechanical technology included the newly devised right-angled gear, which would become particularly important to the operation of mechanical devices. Hellenistic engineers also devised automata such as suspended ink pots, automatic washstands, and doors, primarily as toys; these, however, featured new useful mechanisms such as the cam and gimbals. The Antikythera mechanism, a kind of analog computer working with a differential gear, and the astrolabe both show great refinement in astronomical science. In other fields, ancient Greek innovations include the catapult and the gastraphetes crossbow in warfare, hollow bronze-casting in metallurgy, the dioptra for surveying, in infrastructure the lighthouse, central heating, a tunnel excavated from both ends by scientific calculations, and the ship trackway. In transport, great progress resulted from the invention of the winch and the odometer. Further newly created techniques and items were spiral staircases, the chain drive, sliding calipers and showers. ==== Roman Empire ==== The Roman Empire expanded from Italia across the entire Mediterranean region between the 1st century BC and 1st century AD. Its most advanced and economically productive provinces outside of Italia were the Eastern Roman provinces in the Balkans, Asia Minor, Egypt, and the Levant, with Roman Egypt in particular being the wealthiest Roman province outside of Italia.
The Roman Empire developed an intensive and sophisticated agriculture, expanded upon existing iron working technology, created laws providing for individual ownership, advanced stone masonry technology, advanced road-building (exceeded only in the 19th century), military engineering, civil engineering, spinning and weaving and several different machines like the Gallic reaper that helped to increase productivity in many sectors of the Roman economy. Roman engineers were the first to build monumental arches, amphitheatres, aqueducts, public baths, true arch bridges, harbours, reservoirs and dams, vaults and domes on a very large scale across their Empire. Notable Roman inventions include the book (codex), glass blowing and concrete. Because Rome was located on a volcanic peninsula, with sand containing suitable crystalline grains, the concrete which the Romans formulated was especially durable. Some of their buildings have lasted 2,000 years, to the present day. In Roman Egypt, the inventor Hero of Alexandria was the first to experiment with a wind-powered mechanical device (see Heron's windwheel) and even created the earliest steam-powered device (the aeolipile), opening up new possibilities in harnessing natural forces. He also devised a vending machine. However, his inventions were primarily toys, rather than practical machines. ==== Inca, Maya, and Aztec ==== The engineering skills of the Inca and Maya were remarkable, even by today's standards. An example of this exceptional engineering is the use of stones weighing upwards of one ton in their stonework, placed together so closely that not even a blade can fit into the cracks. Inca villages used irrigation canals and drainage systems, making agriculture very efficient. While some claim that the Incas were the first inventors of hydroponics, their agricultural technology was still soil-based, albeit advanced.
Though the Maya civilization did not incorporate metallurgy or wheel technology in their architectural constructions, they developed complex writing and astronomical systems, and created beautiful sculptural works in stone and flint. Like the Inca, the Maya also had command of fairly advanced agricultural and construction technology. The Maya are also responsible for creating the first pressurized water system in Mesoamerica, located in the Maya site of Palenque. The main contribution of the Aztec rule was a system of communications between the conquered cities and the ubiquity of the ingenious agricultural technology of chinampas. In Mesoamerica, without draft animals for transport (nor, as a result, wheeled vehicles), the roads were designed for travel on foot, just as in the Inca and Mayan civilizations. The Aztec, subsequently to the Maya, inherited many of the technologies and intellectual advancements of their predecessors, the Olmec (see Native American inventions and innovations). === Medieval to early modern === One of the most significant developments of the medieval era was the rise of economies in which water and wind power were more significant than animal and human muscle power. Most water and wind power was used for milling grain. Water power was also used for blowing air into blast furnaces, pulping rags for papermaking and for felting wool. The Domesday Book recorded 5,624 water mills in Great Britain in 1086, about one per thirty families. ==== East Asia ==== ==== Indian subcontinent ==== ==== Islamic world ==== The Muslim caliphates united in trade large areas that had previously traded little, including the Middle East, North Africa, Central Asia, the Iberian Peninsula, and parts of the Indian subcontinent.
The science and technology of previous empires in the region, including the Mesopotamian, Egyptian, Persian, Hellenistic and Roman empires, were inherited by the Muslim world, where Arabic replaced Syriac, Persian and Greek as the lingua franca of the region. Significant advances were made in the region during the Islamic Golden Age (8th–16th centuries). The Arab Agricultural Revolution occurred during this period. It was a transformation in agriculture from the 8th to the 13th century in the Islamic region of the Old World. The economy established by Arab and other Muslim traders across the Old World enabled the diffusion of many crops and farming techniques throughout the Islamic world, as well as the adaptation of crops and techniques from and to regions outside it. Advances were made in animal husbandry, irrigation, and farming, with the help of new technology such as the windmill. These changes made agriculture much more productive, supporting population growth, urbanisation, and increased stratification of society. Muslim engineers in the Islamic world made wide use of hydropower, along with early uses of tidal power, wind power, fossil fuels such as petroleum, and large factory complexes (tiraz in Arabic). A variety of industrial mills were employed in the Islamic world, including fulling mills, gristmills, hullers, sawmills, ship mills, stamp mills, steel mills, and tide mills. By the 11th century, every province throughout the Islamic world had these industrial mills in operation. Muslim engineers also employed water turbines and gears in mills and water-raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water-raising machines. Many of these technologies were transferred to medieval Europe. Wind-powered machines used to grind grain and pump water, the windmill and wind pump, first appeared in what are now Iran, Afghanistan and Pakistan by the 9th century. 
They were used to grind grains and draw up water, and used in the gristmilling and sugarcane industries. Sugar mills first appeared in the medieval Islamic world. They were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today Afghanistan, Pakistan and Iran. Crops such as almonds and citrus fruit were brought to Europe through Al-Andalus, and sugar cultivation was gradually adopted across Europe. Arab merchants dominated trade in the Indian Ocean until the arrival of the Portuguese in the 16th century. The Muslim world adopted papermaking from China. The earliest paper mills appeared in Abbasid-era Baghdad during 794–795. The knowledge of gunpowder was also transmitted from China via predominantly Islamic countries, where formulas for pure potassium nitrate were developed. The spinning wheel was invented in the Islamic world by the early 11th century. It was later widely adopted in Europe, where it was adapted into the spinning jenny, a key device during the Industrial Revolution. The crankshaft was invented by Al-Jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. The camshaft was also first described by Al-Jazari in 1206. Early programmable machines were also invented in the Muslim world. The first music sequencer, a programmable musical instrument, was an automated flute player invented by the Banu Musa brothers, described in their Book of Ingenious Devices, in the 9th century. In 1206, Al-Jazari invented programmable automata/robots. He described four automaton musicians, including two drummers operated by a programmable drum machine, where the drummer could be made to play different rhythms and different drum patterns. The castle clock, a hydropowered mechanical astronomical clock invented by Al-Jazari, was an early programmable analog computer. 
In the Ottoman Empire, a practical impulse steam turbine was invented in 1551 by Taqi ad-Din Muhammad ibn Ma'ruf in Ottoman Egypt. He described a method for rotating a spit by means of a jet of steam playing on rotary vanes around the periphery of a wheel. Known as a steam jack, a similar device for rotating a spit was also later described by John Wilkins in 1648. ==== Medieval Europe ==== While medieval technology has long been depicted as a step backward in the evolution of Western technology, a generation of medievalists (like the American historian of science Lynn White) stressed from the 1940s onwards the innovative character of many medieval techniques. Genuine medieval contributions include, for example, mechanical clocks, spectacles and vertical windmills. Medieval ingenuity was also displayed in the invention of seemingly inconspicuous items like the watermark or the functional button. In navigation, the foundation for the subsequent Age of Discovery was laid by the introduction of pintle-and-gudgeon rudders, lateen sails, the dry compass, the horseshoe and the astrolabe. Significant advances were also made in military technology with the development of plate armour, steel crossbows and cannon. The Middle Ages are perhaps best known for their architectural heritage: while the invention of the rib vault and pointed arch gave rise to the high-rising Gothic style, the ubiquitous medieval fortifications gave the era the almost proverbial title of the 'age of castles'. Papermaking, a 2nd-century Chinese technology, was carried to the Middle East when a group of Chinese papermakers were captured in the 8th century. Papermaking technology was spread to Europe by the Umayyad conquest of Hispania. A paper mill was established in Sicily in the 12th century. In Europe the fiber to make pulp for making paper was obtained from linen and cotton rags. Lynn Townsend White Jr.
credited the spinning wheel with increasing the supply of rags, which led to cheap paper, which was a factor in the development of printing. ==== Renaissance technology ==== Before the development of modern engineering, mathematics was used by artisans and craftsmen, such as millwrights, clock makers, instrument makers and surveyors. Aside from these professions, universities were not believed to have had much practical significance to technology. A standard reference for the state of mechanical arts during the Renaissance is given in the mining engineering treatise De re metallica (1556), which also contains sections on geology, mining and chemistry. De re metallica remained the standard chemistry reference for the next 180 years. Among the water-powered mechanical devices in use were ore stamping mills, forge hammers, blast bellows, and suction pumps. Due to the casting of cannon, the blast furnace came into widespread use in France in the mid-15th century. The blast furnace had been used in China since the 4th century BC. The invention of the movable cast metal type printing press (c. 1441), whose pressing mechanism was adapted from an olive screw press, led to a tremendous increase in the number of books and the number of titles published. Movable ceramic type had been used in China for a few centuries, and woodblock printing dated back even further. The era is marked by such profound technical advancements as linear perspective, double-shell domes and bastion fortresses. Notebooks of Renaissance artist-engineers such as Taccola and Leonardo da Vinci give a deep insight into the mechanical technology then known and applied. Architects and engineers were inspired by the structures of Ancient Rome, and men like Brunelleschi created the large dome of Florence Cathedral as a result. He was awarded one of the first patents ever issued to protect an ingenious crane he designed to raise the large masonry stones to the top of the structure.
Military technology developed rapidly with the widespread use of the cross-bow and ever more powerful artillery, as the city-states of Italy were usually in conflict with one another. Powerful families like the Medici were strong patrons of the arts and sciences. Renaissance science spawned the Scientific Revolution; science and technology began a cycle of mutual advancement. ==== Age of Exploration ==== An improved sailing ship, the nau or carrack, enabled the Age of Exploration with the European colonization of the Americas, epitomized by Francis Bacon's New Atlantis. Pioneers like Vasco da Gama, Cabral, Magellan and Christopher Columbus explored the world in search of new trade routes for their goods and contacts with Africa, India and China to shorten the journey compared with traditional routes overland. They produced new maps and charts which enabled following mariners to explore further with greater confidence. Navigation was generally difficult, however, owing to the problem of longitude and the absence of accurate chronometers. European powers rediscovered the idea of the civil code, lost since the time of the Ancient Greeks. ==== Pre–Industrial Revolution ==== The stocking frame, which was invented in 1598, increased a knitter's number of knots per minute from 100 to 1000. Mines were becoming increasingly deep and were expensive to drain with horse powered bucket and chain pumps and wooden piston pumps. Some mines used as many as 500 horses. Horse-powered pumps were replaced by the Savery steam pump (1698) and the Newcomen steam engine (1712). === Industrial Revolution (1760–1830s) === The revolution was driven by cheap energy in the form of coal, produced in ever-increasing amounts from the abundant resources of Britain. The British Industrial Revolution is characterized by developments in the areas of textile machinery, mining, metallurgy, transport and the invention of machine tools. 
Before the invention of machinery to spin yarn and weave cloth, spinning was done using the spinning wheel and weaving was done on a hand-and-foot-operated loom. It took from three to five spinners to supply one weaver. The invention of the flying shuttle in 1733 doubled the output of a weaver, creating a shortage of spinners. The spinning frame for wool was invented in 1738. The spinning jenny, invented in 1764, was a machine that used multiple spinning wheels; however, it produced low-quality thread. The water frame, patented by Richard Arkwright in 1767, produced a better-quality thread than the spinning jenny. The spinning mule, patented in 1779 by Samuel Crompton, produced a high-quality thread. The power loom was invented by Edmund Cartwright in 1787. In the mid-1750s, the steam engine was applied to the water-power-constrained iron, copper and lead industries for powering blast bellows. These industries were located near the mines, some of which were using steam engines for mine pumping. Steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. Steam-powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. (Lime-rich slag was not free-flowing at the previously used temperatures.) With a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. Coal and coke were cheaper and more abundant fuels. As a result, iron production rose significantly during the last decades of the 18th century. Coal converted to coke fueled higher-temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as The Iron Bridge. Cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power.
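The spinner-to-weaver arithmetic described at the start of this period can be sketched in a few lines. The ratios below come from the text (three to five spinners per weaver, and a doubling of weaver output by the flying shuttle); the yarn units and the choice of midpoint are illustrative simplifications.

```python
# Back-of-the-envelope sketch of the spinner/weaver bottleneck.
# Units are arbitrary "yarn units per day"; the ratios come from the text.
SPINNERS_PER_WEAVER = 4            # text gives three to five; midpoint taken
weaver_demand = 1.0                # yarn one weaver consumed per day, pre-1733
spinner_output = weaver_demand / SPINNERS_PER_WEAVER

# The flying shuttle (1733) doubled a weaver's output, doubling yarn demand:
post_shuttle_demand = 2 * weaver_demand
spinners_needed = post_shuttle_demand / spinner_output
print(spinners_needed)             # spinners now required per weaver
```

On these assumptions each weaver suddenly needed roughly eight spinners instead of four, which is the shortage that drove the invention of the spinning jenny, water frame and mule over the following decades.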
The steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. The development of the high-pressure steam engine made locomotives possible, and a transport revolution followed. The steam engine, which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. The Liverpool and Manchester Railway, the first purpose-built railway line, opened in 1830, with Robert Stephenson's Rocket among its first working locomotives. Manufacture of ships' pulley blocks by all-metal machines at the Portsmouth Block Mills in 1803 instigated the age of sustained mass production. The development of machine tools used by engineers to manufacture parts began in the first decade of the century, notably in the work of Richard Roberts and Joseph Whitworth. The development of interchangeable parts through what is now called the American system of manufacturing began in the firearms industry at the U.S. federal arsenals in the early 19th century, and became widely used by the end of the century. Until the Enlightenment era, little progress was made in water supply and sanitation, and the engineering skills of the Romans were largely neglected throughout Europe. The first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in Paisley, Scotland, John Gibb, installed an experimental filter, selling his unwanted surplus to the public. The first treated public water supply in the world was installed by engineer James Simpson for the Chelsea Waterworks Company in London in 1829. The first screw-down water tap was patented in 1845 by Guest and Chrimes, a brass foundry in Rotherham.
The practice of water treatment soon became mainstream, and the virtues of the system were made starkly apparent after the investigations of the physician John Snow during the 1854 Broad Street cholera outbreak demonstrated the role of the water supply in spreading the cholera epidemic. === Second Industrial Revolution (1860s–1914) === The 19th century saw astonishing developments in transportation, construction, manufacturing and communication technologies originating in Europe. After a recession at the end of the 1830s and a general slowdown in major inventions, the Second Industrial Revolution was a period of rapid innovation and industrialization that began in the 1860s or around 1870 and lasted until World War I. It included rapid development of chemical, electrical, petroleum, and steel technologies connected with highly structured technology research. Telegraphy developed into a practical technology in the 19th century to help run the railways safely. Alongside the development of telegraphy came the patenting of the first telephone. March 1876 marks the date that Alexander Graham Bell officially patented his version of an "electric telegraph". Although Bell is credited with the creation of the telephone, it is still debated who actually developed the first working model. Building on improvements in vacuum pumps and materials research, incandescent light bulbs became practical for general use in the late 1870s. The Edison Electric Illuminating Company, founded by Thomas Edison with financial backing from Spencer Trask, built and managed the first electricity network. Electrification was rated the most important technical development of the 20th century, as the foundational infrastructure for modern civilization. This invention had a profound effect on the workplace because factories could now have second- and third-shift workers. Shoe production was mechanized during the mid-19th century.
Mass production of sewing machines and agricultural machinery such as reapers occurred in the mid to late 19th century. Bicycles were mass-produced beginning in the 1880s. Steam-powered factories became widespread, although the conversion from water power to steam occurred in England earlier than in the U.S. Ironclad warships first saw battle in the 1860s, and played a role in the opening of Japan and China to trade with the West. Between 1825 and 1840, the technology of photography was introduced. For much of the rest of the century, many engineers and inventors tried to combine it with the much older technique of projection to create a complete illusion or a complete documentation of reality. Colour photography was usually included in these ambitions, and the introduction of the phonograph in 1877 seemed to promise the addition of synchronized sound recordings. Between 1887 and 1894, the first successful short cinematographic presentations were established. === 20th century === Mass production brought automobiles and other high-tech goods to masses of consumers. Military research and development sped advances including electronic computing and jet engines. Radio and telephony greatly improved and spread to larger populations of users, though near-universal access would not be possible until mobile phones became affordable to developing world residents in the late 2000s and early 2010s. Energy and engine technology improvements included nuclear power, developed after the Manhattan Project, which heralded the new Atomic Age. Rocket development led to long-range missiles and the first space age, which lasted from the 1950s with the launch of Sputnik to the mid-1980s. Electrification spread rapidly in the 20th century. At the beginning of the century electric power was for the most part only available to wealthy people in a few major cities. By 2019, an estimated 87 percent of the world's population had access to electricity.
Birth control also became widespread during the 20th century. Electron microscopes were very powerful by the late 1970s, and genetic theory and knowledge were expanding, leading to developments in genetic engineering. The first "test tube baby", Louise Brown, was born in 1978, which led to the first successful gestational surrogacy pregnancy in 1985 and the first pregnancy by ICSI in 1991, the injection of a single sperm into an egg. Preimplantation genetic diagnosis was first performed in late 1989 and led to successful births in July 1990. These procedures have become relatively common. Computers were connected by means of local area, telecom and fiber optic networks, powered by the optical amplifier that ushered in the Information Age. This optical networking technology exploded the capacity of the Internet beginning in 1996 with the launch of the first high-capacity wavelength-division multiplexing (WDM) system by Ciena Corp. WDM, as the common basis for telecom backbone networks, increased transmission capacity by orders of magnitude, thus enabling the mass commercialization and popularization of the Internet and its widespread impact on culture, economics, business, and society. The commercial availability of the first portable cell phone in 1981 and the first pocket-sized phone in 1985, both developed by Comvik in Sweden, coupled with the first transmission of data over a cellular network by Vodafone (formerly Racal-Millicom) in 1992, were the breakthroughs that led directly to the form and function of smartphones today. By 2014, there were more cell phones in use than people on Earth, and the Supreme Court of the United States has ruled that a mobile phone is a private part of a person. Providing consumers wireless access to each other and to the Internet, the mobile phone stimulated one of the most important technology revolutions in human history.
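The capacity multiplication that WDM provides, mentioned above, can be sketched with a toy calculation. The channel count and per-channel data rate below are illustrative assumptions, not figures from the text.

```python
# Illustrative sketch: wavelength-division multiplexing (WDM) lets N
# independent wavelengths share one fiber, multiplying its capacity.
def wdm_capacity_gbps(channels: int, per_channel_gbps: float) -> float:
    """Aggregate capacity of one fiber carrying `channels` wavelengths."""
    return channels * per_channel_gbps

single_carrier = wdm_capacity_gbps(1, 2.5)   # one 2.5 Gb/s signal per fiber
early_wdm = wdm_capacity_gbps(16, 2.5)       # a hypothetical 16-channel system
print(early_wdm / single_carrier)            # capacity multiplier
```

The multiplier scales linearly with the channel count, which is why moving from one wavelength per fiber to tens and later hundreds of wavelengths raised backbone capacity by orders of magnitude without laying new fiber.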
The Human Genome Project sequenced and identified all three billion chemical units in human DNA with a goal of finding the genetic roots of disease and developing treatments. The project became feasible due to two technical advances made during the late 1970s: gene mapping by restriction fragment length polymorphism (RFLP) markers and DNA sequencing. Sequencing was invented by Frederick Sanger and, separately, by Walter Gilbert. Gilbert also conceived of the Human Genome Project on May 27, 1985, and first publicly advocated it in August 1985 at the first International Conference on Genes and Computers. The U.S. federal government-sponsored Human Genome Project began October 1, 1990, and was declared complete in 2003. The massive data analysis resources necessary for running transatlantic research programs such as the Human Genome Project and the Large Electron–Positron Collider led to a necessity for distributed communications, causing Internet protocols to be more widely adopted by researchers and also creating a justification for Tim Berners-Lee to create the World Wide Web. Vaccination spread rapidly to the developing world from the 1980s onward due to many successful humanitarian initiatives, greatly reducing childhood mortality in many poor countries with limited medical resources.
The US National Academy of Engineering, by expert vote, established a ranking of the most important technological developments of the 20th century. === 21st century === In the early 21st century, research is ongoing into quantum computers, gene therapy (introduced 1990), 3D printing (introduced 1981), nanotechnology (introduced 1985), bioengineering/biotechnology, nuclear technology, advanced materials (e.g., graphene), the scramjet and drones (along with railguns and high-energy laser beams for military uses), superconductivity, the memristor, and green technologies such as alternative fuels (e.g., fuel cells, self-driving electric and plug-in hybrid cars), augmented reality devices and wearable electronics, artificial intelligence, and more efficient and powerful LEDs, solar cells, integrated circuits, wireless power devices, engines, and batteries. The Large Hadron Collider, the largest single machine ever built, was constructed between 1998 and 2008. The understanding of particle physics is expected to expand with better instruments, including larger particle accelerators such as the LHC and better neutrino detectors. Dark matter is sought via underground detectors, and observatories like LIGO have started to detect gravitational waves. Genetic engineering technology continues to improve, and the importance of epigenetics on development and inheritance has also become increasingly recognized. New spaceflight technology and spacecraft are also being developed, like Boeing's Orion and SpaceX's Dragon 2. New, more capable space telescopes, such as the James Webb Space Telescope, which was launched to orbit in December 2021, and the Colossus Telescope, have been designed. The International Space Station was completed in the 2000s, and NASA and ESA plan a human mission to Mars in the 2030s. The Variable Specific Impulse Magnetoplasma Rocket (VASIMR) is an electromagnetic thruster for spacecraft propulsion and was expected to be tested in 2015.
The Breakthrough Initiatives project plans to send the first ever spacecraft to visit another star, which will consist of numerous super-light chips driven by electric propulsion in the 2030s, and receive images of the Proxima Centauri system, along with, possibly, the potentially habitable planet Proxima Centauri b, by midcentury. The first crewed commercial spaceflight took place on June 21, 2004, when Mike Melvill crossed the boundary of space. == By type == === Biotechnology === Timeline of agriculture and food technology Timeline of biotechnology === Civil engineering === Civil engineering Architecture and building construction Bridges, harbors, tunnels, dams Surveying, instruments and maps, cartography, urban engineering, water supply and sewerage === Communication === === Computing === === Consumer technology === === Electrical engineering === Timeline of electrical and electronic engineering === Energy === === Materials science === Timeline of materials technology Metallurgy === Measurement === History of time in the United States History of timekeeping devices === Medicine === === Military === Military history#Technological evolution Category:Military history – articles on history of specific technologies === Nuclear === Manhattan Project Atomic Age Nuclear testing Nuclear arms race === Science and technology === === Transport === == See also == === Related history === === Related disciplines === === Related subjects === == References == == Further reading == == External links == Electropaedia on the History of Technology Archived 2011-05-12 at the Wayback Machine MIT 6.933J – The Structure of Engineering Revolutions. From MIT OpenCourseWare, course materials (graduate level) for a course on the history of technology through a Thomas Kuhn-ian lens. Concept of Civilization Events. From Jaroslaw Kessler, a chronology of "civilizing events". 
Ancient and Medieval City Technology Society for the History of Technology Giants of Science (website of the Institute of National Remembrance)
https://en.wikipedia.org/wiki/History_of_technology
Creative technology is a broadly interdisciplinary and transdisciplinary field combining computing, design, art and the humanities. The field of creative technology encompasses art, digital product design, and digital media or advertising and media made with a software-based, electronic and/or data-driven engine. Examples include multi-sensory experiences made using computer graphics, video production, digital music, digital cinematography, virtual reality, augmented reality, video editing, software engineering, 3D printing, the Internet of Things, CAD/CAM and wearable technology. In the artistic field, new media art and internet art are examples of work being done using creative technology. Performances, interactive installations and other immersive experiences take museum-going to the next level and may serve as research processes for humans' artistic and emotional integration with machines. Some believe that "creativity has the potential to be revolutionised with technology", or view the field of creative technology as helping to "disrupt" the way people today interact with computers, ushering in a more integrated, immersive experience. == Description == Creative technology has been defined as "the blending of knowledge across multiple disciplines to create new experiences or products" that meet end user and organizational needs. A more specific conceptualization describes it as the combination of information, holographic systems, sensors, audio technologies, and image and video technologies, among others, with artistic practices and methods. The central characteristic is identified as an ability to do things better. Creative technology is also seen as the intersection of new technology with creative initiatives such as fashion, art, advertising, media and entertainment.
As such, it is a way to make connections between countries seeking to update their culture; a winter 2015 Forbes article tells of 30 creative technology startups from the UK making the rounds in Singapore, Kuala Lumpur and New York City in an effort to raise funds and make connections. == Applications == Creative technology facilities may be organized as arts, research or job development entities, such as the UK's Foundation for Art and Creative Technology, which has presented hundreds of new media and digital artworks from around the world, or a recently established $20.5 million project in Hawaii specializing in film industry job training and workforce development programs, which plans to offer robotics, computer labs, recording studios and editing bays, pitched as a "game-changing" opportunity to bring new skills and jobs to Kauai. Degrees in this field were designed to address needs for cross-disciplinary interaction and aim to develop lateral thinking skills across more rigidly defined academic areas. Some educators have complained that creative technology tools, though "widely available", are difficult to use for young populations. The first major corporation to have a corporate officer with a creative technology title was The Walt Disney Company, which gave the title to Imagineer Bran Ferren in 1993; he eventually became Disney's president of creative technology in 1998. At about the same time, the first educational research center in the United States was created to bridge these disciplines across industry, academia and the defense communities: the University of Southern California's Institute for Creative Technologies (ICT). The ICT was established with funding from the US Army. Marketers and advertisers are also looking toward the power of creative technology to re-engage customers. 
The UK's Marketing Agencies Association, which launched a Creative Technology Initiative in early 2015, is promoting creative technology as a way to build a more connected and personalized engagement with prospective customers. Industry associations and developers, arts organizations and agency creatives alike call for more investment in technology, which has lagged behind the industry-wide sea change, driven by companies such as Google, that is introducing more technology into creative fields. Many advertising agencies and other businesses have begun to create internal labs for research in creative technology. For example, Unilever created its Foundry Project as a way for the company to "embrace the mentality of hacking, deploying and scaling"; they share their discoveries and view the lab as a way to incorporate technology into the company, drive experimentation and engage with strategic partners. In one of the most notable endeavors in the creative technology field, the Adobe Creative Technologies Lab collaborated with the MIT Media Lab to give artists the ability to draw geometric designs with a computer without having to master text-based programming or math. == Examples == "Creativeapplications.Net (CAN) is a community of creative practitioners working at the intersection of art, media and technology." A pepper grinder that disabled Wi-Fi in the household when twisted was introduced by the head of creative technology at agency Clemenger BBDO. ZKM has an annual prize for creative apps, the App Art Awards. ITP (the Interactive Telecommunications Program) has a class in "Creative Computing". "The Eyeo Festival brings together a rich intersection of people doing fascinating things with technology. 
Artists, data designers, creative coders, AI & XR explorers, storytellers, researchers, technology & platform developers all cross paths and share inspiration at Eyeo" Artist Jake Lee-High created an interactive street experience for the premiere of Showtime's Penny Dreadful. "Fake Love promotes major brands with immersive, wildly imaginative multimedia spectacles, from light-projected racetracks for Lexus to virtual reality (VR) videos for the New York Times Company" The works of Becky Stern, an electronics and fashion artist based in New York City. "You’re waving your hands in front of a big screen and the designs and patterns mimic your movements; you walk into a company's lobby and a digital wall displays a beautiful, abstract visualization based on the company's sales data" FIT has an annual Creative Technology Exhibition. == Careers == Professionals who work in the field of creative technology tend to have a background as developers and may work in digital or entertainment media, with an advertising agency, or in a new electronic product development role. In an advertising agency setting, a professional with a job description including creative technology may be a designer who became interested in technology, or a developer who focuses on the bigger picture of experience design. Department heads in creative technology may be charged with integrating new technologies into the agency's departments and leveraging partnerships with cutting-edge providers and platforms. For example, the head of creative technology at Grey Global Group in New York "created an in-house lab... which highlights new tech each month with exhibits, events and workshops." Members of the team may have the ability both to write computer code and to build electronics for prototypes. The creative technologist job title is likely to refer to a developer who understands the creative process and (often) the world of advertising. 
The person is actually making and coding and may be building web projects, mobile apps and other digital experiences. They are trying out new concepts and ideas, and modifying them; this is recognized as similar to the artistic process but applied to media, advertising and other creative industries. Creative technologists have been referred to as technology-focussed individuals who either sit within or work closely with the creative team, recognizing that siloed departments of technology and design have historically led to bad agency work. Responsibilities described in a 2014 job posting for "Creative Technologist" at Google included "collaborating on the ideation and development of 'never been done before' digital experiences in partnership with top brands and agencies", and "contributing to the development of cutting edge prototypes in the field of creative technology". There are several resources that list companies working in the field of creative technology. == Academic degree == A master's or bachelor's degree in creative technologies or creative technology is a broadly interdisciplinary and transdisciplinary course of study combining fields of computer technology, design, art and the humanities. Established as modern degrees addressing needs for cross-disciplinary interaction, these programs have as a fundamental objective the development of lateral thinking skills across more rigidly defined academic areas, recognized as a valuable component in expanding technological horizons. The Creative Technology & Design (CT&D) subject area at the Fashion Institute of Technology offers specialized courses and both credit and non-credit programs. According to FIT's web site, the mission of this transdisciplinary subject area is to elevate students' understanding of advanced design concepts, as well as their command of cutting-edge technologies. 
The description of the two-year Creative Technology portfolio program at Miami Ad School reads, "You are a techie with creative passion and talent – or – a creative with a knack for tech...It's about how we integrate machine learning and artificial intelligence into a creative environment". Creative technology is also seen as an industry and skill set for the emerging economy, as in this quote by a University of Texas at Austin dean at the opening of a new school at the university, presumed to become the largest academic unit in the college: "The School of Design and Creative Technologies moves UT Austin more assertively into emerging creative, commercial disciplines that are driving culture and economies in the 21st century". == Tools == A wide range of tools is used by creative technologists. Below is a short list, but several other extensive lists are available.
Arduino – an open-source electronics platform based on easy-to-use hardware and software.
Raspberry Pi – a low-cost computer the size of a credit card that runs Linux.
Processing – a Java-family programming language and development environment promoting software literacy within the visual arts and visual literacy within technology.
p5.js – the JavaScript implementation of Processing.
Cinder – a professional library for creative coding in C++.
openFrameworks – a C++ toolkit for teaching creative coding.
Max – a visual data-flow programming language for music and multimedia.
JavaScript – the language of web browsers, including HTML5 applications.
TouchDesigner – a visual programming language for creative technology applications.
== References == == External links == Creative Technology & Design Subject Area at Fashion Institute of Technology Minor in Creative Technologies for Performative Practice at The New School Creative Technology degrees Archived 2018-02-22 at the Wayback Machine at Auckland University of Technology School of Design and Creative Technologies at University of Texas at Austin Art & Technology Major at Sogang University Creative Technology Degree at University of Twente Art and Technology Studies at School of the Art Institute of Chicago
https://en.wikipedia.org/wiki/Creative_technology
The Rockstar Advanced Game Engine (RAGE) is a proprietary game engine of Rockstar Games, developed by the RAGE Technology Group division of Rockstar San Diego (formerly Angel Studios) and based on the Angel Game Engine. Since its first game, Rockstar Games Presents Table Tennis (2006), the engine has been used by Rockstar Games's internal studios to develop advanced open-world games for computers and consoles. == History == === Early history === Angel Studios previously used the game engine Angel Real Time Simulation (ARTS) for Major League Baseball Featuring Ken Griffey Jr. (1998) and Midtown Madness (1999). The following year, Angel Studios developed Midtown Madness 2 (2000), the first title to use the new Angel Game Engine (AGE). In 2002, Angel Studios was sold to Take-Two Interactive, moved under Rockstar Games, and rebranded Rockstar San Diego. The sale also included AGE, later renamed the Rockstar Advanced Game Engine (RAGE). === Development === Prior to developing RAGE, Rockstar Games mostly used Criterion Games's RenderWare engine to develop games for PlayStation 2, Windows, and Xbox, such as the early 3D installments in the Grand Theft Auto franchise. In 2004, Criterion Games was acquired by Electronic Arts, which led Rockstar Games to move away from RenderWare and open the RAGE Technology Group as a division of Rockstar San Diego. The RAGE Technology Group began developing what would become RAGE, based on Rockstar San Diego's AGE, to facilitate game development on Windows and seventh-generation consoles. The first game to use the engine was Rockstar San Diego's Rockstar Games Presents Table Tennis, released for the Xbox 360 on May 23, 2006, and ported to the Wii more than a year later. Since then, RAGE has integrated the third-party middleware components Euphoria and Bullet as its character animation engine and physics engine, respectively. 
On PlayStation 3 and Xbox 360, RAGE titles often showed a disparity in hardware optimization: major titles usually ran at lower resolution with reduced graphical effects on PlayStation 3, as in Grand Theft Auto IV (720p on Xbox 360 vs. 640p), Midnight Club: Los Angeles (1280×720 vs. 960×720) and Red Dead Redemption (720p vs. 640p). Despite this uneven optimization, in July 2009 Chris Stead of IGN named RAGE one of the "10 Best Game Engines of [the 7th] Generation", saying: "RAGE's strengths are many. Its ability to handle large streaming worlds, complex A.I. arrangements, weather effects, fast network code and a multitude of gameplay styles will be obvious to anyone who has played GTA IV." Since the release of Max Payne 3, the engine has supported DirectX 11 and stereoscopic 3D rendering on personal computers. Max Payne 3 also marked the first time RAGE rendered a game at the same 720p resolution on both PlayStation 3 and Xbox 360, a parity also achieved in Grand Theft Auto V, which renders at 720p on both consoles. For the remastered versions of Grand Theft Auto V, RAGE was reworked for the eighth generation of video game consoles, with 1080p resolution support on both the PlayStation 4 and Xbox One. The PC version of the game, released in 2015, showed RAGE supporting 4K resolution and frame rates of 60 frames per second, as well as greater draw distances, improved texture filtering, and higher-quality shadow mapping and tessellation. RAGE was further refined for the release of Red Dead Redemption 2 in 2018, supporting physically based rendering, volumetric clouds and fog, and pre-calculated global illumination, as well as a Vulkan renderer in the Windows version in addition to DirectX 12. The Euphoria engine was overhauled to create advanced AI as well as enhanced physics and animations for the game. HDR support was added in May 2019. 
Support for Nvidia's Deep Learning Super Sampling (DLSS) and AMD's FidelityFX Super Resolution (FSR) was added in July 2021 and September 2022, respectively. The 2022 release of Grand Theft Auto V for the ninth generation of video game consoles introduced several enhancements, incorporating features from later RAGE titles: ray-traced reflections, native 4K resolution on the PlayStation 5 and Xbox Series X, upscaled 4K on the Xbox Series S, and HDR support. == Games using RAGE == == References ==
https://en.wikipedia.org/wiki/Rockstar_Advanced_Game_Engine
In computing, multi-touch is technology that enables a surface (a touchpad or touchscreen) to recognize the presence of more than one point of contact with the surface at the same time. The origins of multi-touch began at CERN, MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s. CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Multi-touch may be used to implement additional functionality, such as pinch to zoom, or to activate certain subroutines attached to predefined gestures using gesture recognition. Several uses of the term multi-touch resulted from the quick developments in this field, with many companies using the term to market older technology, called gesture-enhanced single-touch or several other terms by other companies and researchers. Several other similar or related terms attempt to differentiate between whether a device can exactly determine or only approximate the location of different points of contact, to further differentiate between the various technological capabilities, but they are often used as synonyms in marketing. Multi-touch is commonly implemented using capacitive sensing technology in mobile devices and smart devices. A capacitive touchscreen typically consists of a capacitive touch sensor, an application-specific integrated circuit (ASIC) controller and a digital signal processor (DSP) fabricated with CMOS (complementary metal–oxide–semiconductor) technology. A more recent alternative approach is optical touch technology, based on image sensor technology. == Definition == In computing, multi-touch is technology which enables a touchpad or touchscreen to recognize more than one (or more than two) points of contact with the surface. 
Apple popularized the term "multi-touch" in 2007, with which it implemented additional functionality, such as pinch to zoom or the activation of certain subroutines attached to predefined gestures. The two different uses of the term resulted from the quick developments in this field, with many companies using the term to market older technology, called gesture-enhanced single-touch or several other terms by other companies and researchers. Several other similar or related terms attempt to differentiate between whether a device can exactly determine or only approximate the location of different points of contact, to further differentiate between the various technological capabilities, but they are often used as synonyms in marketing. == History == === 1960–2000 === The use of touchscreen technology predates both multi-touch technology and the personal computer. Early synthesizer and electronic instrument builders like Hugh Le Caine and Robert Moog experimented with touch-sensitive capacitance sensors to control the sounds made by their instruments. IBM began building the first touch screens in the late 1960s. In 1972, Control Data released the PLATO IV computer, an infrared terminal used for educational purposes, which employed single-touch points in a 16×16 array user interface. These early touchscreens only registered one point of touch at a time. On-screen keyboards (a well-known feature today) were thus awkward to use, because key rollover and holding down a shift key while typing another key were not possible. Exceptions were a "cross-wire" multi-touch reconfigurable touchscreen keyboard/display developed at the Massachusetts Institute of Technology in the early 1970s and the 16-button capacitive multi-touch screen developed at CERN in 1972 for the controls of the Super Proton Synchrotron, then under construction. 
In 1976, a new x-y capacitive screen, based on the capacitance touch screens developed in 1972 by Danish electronics engineer Bent Stumpe, was developed at CERN. This technology, allowing exact location of the different touch points, was used to develop a new type of human-machine interface (HMI) for the control room of the Super Proton Synchrotron particle accelerator. In a handwritten note dated 11 March 1972, Stumpe presented his proposed solution – a capacitive touch screen with a fixed number of programmable buttons presented on a display. The screen was to consist of a set of capacitors etched into a film of copper on a sheet of glass, each capacitor being constructed so that a nearby flat conductor, such as the surface of a finger, would increase the capacitance by a significant amount. The capacitors were to consist of fine lines etched in copper on a sheet of glass – fine enough (80 μm) and sufficiently far apart (80 μm) to be invisible. In the final device, a simple lacquer coating prevented the fingers from actually touching the capacitors. In the same year, MIT described a keyboard with variable graphics capable of multi-touch detection. In the early 1980s, the University of Toronto's Input Research Group were among the earliest to explore the software side of multi-touch input systems. A 1982 system at the University of Toronto used a frosted-glass panel with a camera placed behind the glass. When one or more fingers pressed on the glass, the camera would detect the action as one or more black spots on an otherwise white background, allowing it to be registered as an input. Since the size of a dot depended on pressure (how hard the person was pressing on the glass), the system was somewhat pressure-sensitive as well. Notably, this system was input-only and not able to display graphics. In 1983, Bell Labs at Murray Hill published a comprehensive discussion of touch-screen based interfaces, though it made no mention of multiple fingers. 
In the same year, the video-based Video Place/Video Desk system of Myron Krueger was influential in the development of multi-touch gestures such as pinch-to-zoom, though this system had no touch interaction itself. By 1984, both Bell Labs and Carnegie Mellon University had working multi-touch-screen prototypes – both input and graphics – that could respond interactively to multiple finger inputs. The Bell Labs system was based on capacitive coupling of fingers, whereas the CMU system was optical. In 1985, the canonical multi-touch pinch-to-zoom gesture was demonstrated, with coordinated graphics, on CMU's system. In October 1985, Steve Jobs signed a non-disclosure agreement to tour CMU's Sensor Frame multi-touch lab. In 1990, Sears et al. published a review of academic research on single- and multi-touch touchscreen human–computer interaction of the time. It described single-touch gestures such as rotating knobs, swiping the screen to activate a switch (or a U-shaped gesture for a toggle switch), and touchscreen keyboards (including a study showing that users could type at 25 words per minute on a touchscreen keyboard compared with 58 words per minute on a standard keyboard, with multi-touch hypothesized to improve data entry rates); multi-touch gestures such as selecting a range of a line, connecting objects, and a "tap-click" gesture to select while maintaining location with another finger are also described. In 1991, Pierre Wellner advanced the topic by publishing about his multi-touch "Digital Desk", which supported multi-finger and pinching motions. Various companies expanded upon these inventions in the beginning of the twenty-first century. === 2000–present === Between 1999 and 2005, the company FingerWorks developed various multi-touch technologies, including TouchStream keyboards and the iGesture Pad. In the early 2000s, Alan Hedge, professor of human factors and ergonomics at Cornell University, published several studies about this technology. 
In 2005, Apple acquired FingerWorks and its multi-touch technology. In 2004, French start-up JazzMutant developed the Lemur Input Device, a music controller that in 2005 became the first commercial product to feature a proprietary transparent multi-touch screen, allowing direct, ten-finger manipulation on the display. In January 2007, multi-touch technology became mainstream with the iPhone; in its iPhone announcement Apple even stated it "invented multi touch". However, both the function and the term predate the announcement and patent requests, except in the area of capacitive mobile screens, which did not exist before FingerWorks/Apple's technology (FingerWorks filed patents in 2001–2005; subsequent multi-touch refinements were patented by Apple). However, the U.S. Patent and Trademark Office declared that the "pinch-to-zoom" functionality was anticipated by U.S. Patent No. 7,844,915, relating to gestures on touch screens, filed by Bran Ferren and Daniel Hillis in 2005, as was inertial scrolling, thus invalidating key claims of Apple's patent. In 2001, Microsoft's table-top touch platform, Microsoft PixelSense (formerly Surface), started development; it interacts with both the user's touch and their electronic devices, and became commercial on May 29, 2007. Similarly, in 2001, Mitsubishi Electric Research Laboratories (MERL) began development of a multi-touch, multi-user system called DiamondTouch. In 2008, DiamondTouch became a commercial product; it is also based on capacitance, but able to differentiate between multiple simultaneous users, or rather between the chairs in which the users are seated or the floor pads on which they are standing. In 2007, NORTD Labs offered its open-source CUBIT multi-touch system. Small-scale touch devices rapidly became commonplace in 2008. The number of touch-screen telephones was expected to increase from 200,000 shipped in 2006 to 21 million in 2012. 
In May 2015, Apple was granted a patent for a "fusion keyboard", which turns individual physical keys into multi-touch buttons. == Applications == Apple has retailed and distributed numerous products using multi-touch technology, most prominently its iPhone smartphone and iPad tablet. Apple also holds several patents related to the implementation of multi-touch in user interfaces, although the legitimacy of some patents has been disputed. Apple additionally attempted to register "Multi-touch" as a trademark in the United States, but its request was denied by the United States Patent and Trademark Office because it considered the term generic. Multi-touch sensing and processing occur via an ASIC sensor attached to the touch surface. Usually, separate companies make the ASIC and the screen that combine into a touch screen; conversely, a touchpad's surface and ASIC are usually manufactured by the same company. In recent years, large companies have expanded into the growing multi-touch industry, with systems designed for everything from the casual user to multinational organizations. It is now common for laptop manufacturers to include multi-touch touchpads on their laptops; tablet computers respond to touch input rather than traditional stylus input, and this is supported by many recent operating systems. A few companies are focusing on large-scale surface computing rather than personal electronics: either large multi-touch tables or wall surfaces. These systems are generally used by government organizations, museums, and companies as a means of information or exhibit display. == Implementations == Multi-touch has been implemented in several different ways, depending on the size and type of interface. The most popular forms are mobile devices, tablets, touchtables and walls. Both touchtables and touch walls project an image through acrylic or glass, and then back-light the image with LEDs. 
Touch surfaces can also be made pressure-sensitive by the addition of a pressure-sensitive coating that flexes differently depending on how firmly it is pressed, altering the reflection. Handheld technologies use a panel that carries an electrical charge. When a finger touches the screen, the touch disrupts the panel's electrical field. The disruption is registered as a computer event (gesture) and may be sent to the software, which may then initiate a response to the gesture event. In the past few years, several companies have released products that use multi-touch. In an attempt to make the expensive technology more accessible, hobbyists have also published methods of constructing DIY touchscreens. === Capacitive === Capacitive technologies include:
Surface Capacitive Technology or Near Field Imaging (NFI)
Projected Capacitive Touch (PCT)
Mutual capacitance
Self-capacitance
In-cell Capacitive
=== Resistive === Resistive technologies include:
Analog Resistive
Digital Resistive or In-Cell Resistive
=== Optical === Optical touch technology is based on image sensor technology. It functions when a finger or an object touches the surface, causing the light to scatter; the reflection is caught by sensors or cameras that send the data to software, which dictates the response to the touch, depending on the type of reflection measured. 
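Whatever the sensing layer, once contacts reach the software as per-frame coordinate events, recognizing a gesture such as pinch-to-zoom reduces to simple geometry on the tracked points. The following is a minimal illustrative sketch; the function names are hypothetical and not taken from any real touch API:

```python
import math

def distance(p, q):
    """Euclidean distance between two touch points given as (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(prev_touches, curr_touches):
    """Zoom factor implied by the same two tracked contacts on
    consecutive frames: >1 means the fingers are spreading (zoom in),
    <1 means they are closing (zoom out)."""
    d_prev = distance(*prev_touches)
    d_curr = distance(*curr_touches)
    if d_prev == 0:  # coincident points: no meaningful scale change
        return 1.0
    return d_curr / d_prev

# Two fingers 100 px apart move to 200 px apart: zoom in by a factor of 2
scale = pinch_scale([(100, 100), (200, 100)], [(50, 100), (250, 100)])
print(scale)  # 2.0
```

A real gesture recognizer additionally tracks contact identity across frames and applies thresholds before emitting an event, but the core computation is this distance ratio.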
Optical technologies include:
Optical Imaging or Infrared technology
Rear Diffused Illumination (DI)
Infrared Grid Technology (opto-matrix) or Digital Waveguide Touch (DWT) or Infrared Optical Waveguide
Frustrated Total Internal Reflection (FTIR)
Diffused Surface Illumination (DSI)
Laser Light Plane (LLP)
In-Cell Optical
=== Wave === Acoustic and radio-frequency wave-based technologies include:
Surface Acoustic Wave (SAW)
Bending Wave Touch (BWT)
Dispersive Signal Touch (DST)
Acoustic Pulse Recognition (APR)
Force-Sensing Touch Technology
== Multi-touch gestures == Multi-touch touchscreen gestures enable predefined motions to interact with the device and software. An increasing number of devices like smartphones, tablet computers, laptops and desktop computers have functions that are triggered by multi-touch gestures. == Popular culture == === Before 2007 === Years before it was a viable consumer product, popular culture portrayed potential uses of multi-touch technology in the future, including in several installments of the Star Trek franchise. In the 1982 Disney sci-fi film Tron, a device similar to the Microsoft Surface was shown. It took up an executive's entire desk and was used to communicate with the Master Control computer. In the 2002 film Minority Report, Tom Cruise uses a set of gloves that resemble a multi-touch interface to browse through information. In the 2005 film The Island, another form of multi-touch computer was seen where the professor, played by Sean Bean, has a multi-touch desktop to organize files, based on an early version of Microsoft Surface (not to be confused with the tablet computers that now bear that name). In 2007, the television series CSI: Miami introduced both surface and wall multi-touch displays in its sixth season. === After 2007 === Multi-touch technology can be seen in the 2008 James Bond film Quantum of Solace, where MI6 uses a touch interface to browse information about the criminal Dominic Greene. 
In the 2008 film The Day the Earth Stood Still, Microsoft's Surface was used. The television series NCIS: Los Angeles, which premiered in 2009, makes use of multi-touch surfaces and wall panels as an initiative to go digital. In a 2008 episode of the television series The Simpsons, Lisa Simpson travels to the underwater headquarters of Mapple to visit Steve Mobbs, who is shown performing multiple multi-touch hand gestures on a large touch wall. In the 2009 film District 9, the interface used to control the alien ship features similar technology. == 10/GUI == 10/GUI is a proposed new user interface paradigm. Created in 2009 by R. Clayton Miller, it combines multi-touch input with a new windowing manager. It splits the touch surface away from the screen, so that user fatigue is reduced and the user's hands don't obstruct the display. Instead of placing windows all over the screen, the windowing manager, Con10uum, uses a linear paradigm, with multi-touch used to navigate between and arrange the windows. An area at the right side of the touch screen brings up a global context menu, and a similar strip at the left side brings up application-specific menus. An open-source community preview of the Con10uum window manager was made available in November 2009. == See also == == References == == External links == Multi-Touch Systems that I Have Known and Loved – An overview by researcher Bill Buxton of Microsoft Research, formerly at University of Toronto and Xerox PARC. The Unknown History of Pen Computing contains a history of pen computing, including touch and gesture technology, from approximately 1917 to 1992. 
Annotated bibliography of references to pen computing Annotated bibliography of references to tablet and touch computers Video: Notes on the History of Pen-based Computing on YouTube Multi-Touch Interaction Research @ NYU Camera-based multi-touch for wall-sized displays David Wessel Multitouch Jeff Han's Multi Touch Screen's chronology archive De Force-Sensing, Multi-Touch, User Interaction Technology Archived 2013-01-22 at the Wayback Machine LCD In-Cell Touch by Geoff Walker and Mark Fihn Archived 2017-05-01 at the Wayback Machine Touch technologies for large-format applications by Geoff Walker Archived 2017-05-01 at the Wayback Machine Video: Surface Acoustic Wave Touch Screens on YouTube Video: How 3M™ Dispersive Signal Technology Works on YouTube Video: Introduction to mTouch Capacitive Touch Sensing on YouTube
https://en.wikipedia.org/wiki/Multi-touch
A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunications network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies. The nodes of a computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses and may have hostnames. Hostnames serve as memorable labels for the nodes and are rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol. Computer networks may be classified by many criteria, including the transmission medium used to carry signals, bandwidth, communications protocols to organize network traffic, the network size, the topology, traffic control mechanisms, and organizational intent. Computer networks support many applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers, printers and fax machines, and use of email and instant messaging applications. == History == Computer networking may be considered a branch of computer science, computer engineering, and telecommunications, since it relies on the theoretical and practical application of the related disciplines. Computer networking was influenced by a wide array of technological developments and historical milestones. In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. 
The modem allowed digital data to be transmitted over regular unconditioned telephone lines at a speed of 110 bits per second (bit/s). In 1959, Christopher Strachey filed a patent application for time-sharing in the United Kingdom and John McCarthy initiated the first project to implement time-sharing of user programs at MIT. Strachey passed the concept on to J. C. R. Licklider at the inaugural UNESCO Information Processing Conference in Paris that year. McCarthy was instrumental in the creation of three of the earliest time-sharing systems (the Compatible Time-Sharing System in 1961, the BBN Time-Sharing System in 1962, and the Dartmouth Time-Sharing System in 1963). In 1959, Anatoly Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organization of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centers. Kitov's proposal was rejected, as later was the 1962 OGAS economy management network project. In 1960, the commercial airline reservation system semi-automatic business research environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users. In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control in the switching fabric. Throughout the 1960s, Paul Baran and Donald Davies independently invented the concept of packet switching for data communication between computers over a network. Baran's work addressed adaptive routing of message blocks across a distributed network, but did not include routers with software switches, nor the idea that users, rather than the network itself, would provide the reliability. 
Davies' hierarchical network design included high-speed routers, communication protocols and the essence of the end-to-end principle. The NPL network, a local area network at the National Physical Laboratory (United Kingdom), pioneered the implementation of the concept in 1968–69 using 768 kbit/s links. Both Baran's and Davies' inventions were seminal contributions that influenced the development of computer networks. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah. Designed principally by Bob Kahn, the network's routing, flow control, software design and network control were developed by the IMP team working for Bolt Beranek & Newman. In the early 1970s, Leonard Kleinrock carried out mathematical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services were first deployed on experimental public data networks in Europe. In 1973, the French CYCLADES network, directed by Louis Pouzin, was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Peter Kirstein put internetworking into practice at University College London (UCL), connecting the ARPANET to British academic networks, the first international heterogeneous computer network. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a local area networking system he created with David Boggs. It was inspired by the packet radio ALOHAnet, started by Norman Abramson and Franklin Kuo at the University of Hawaii in the late 1960s. 
Metcalfe and Boggs, with John Shoch and Edward Taft, also developed the PARC Universal Packet for internetworking. In 1974, Vint Cerf and Bob Kahn published their seminal paper on internetworking, A Protocol for Packet Network Intercommunication. Later that year, Cerf, Yogen Dalal, and Carl Sunshine wrote the first Transmission Control Protocol (TCP) specification, RFC 675, coining the term Internet as a shorthand for internetworking. In July 1976, Metcalfe and Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and in December 1977, together with Butler Lampson and Charles P. Thacker, they received U.S. patent 4063220A for their invention. Public data networks in Europe, North America and Japan began using X.25 in the late 1970s and interconnected with X.75. This underlying infrastructure was used for expanding TCP/IP networks in the 1980s. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1977, the first long-distance fiber network was deployed by GTE in Long Beach, California. In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1980, Ethernet was upgraded from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was developed by Ron Crane, Bob Garner, Roy Ogus, and Yogen Dalal. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added (as of 2018). The scaling of Ethernet has been a contributing factor to its continued use. == Use == Computer networks enhance how users communicate with each other by using various electronic methods like email, instant messaging, online chat, voice and video calls, and video conferencing. Networks also enable the sharing of computing resources. 
For example, a user can print a document on a shared printer or use shared storage devices. Additionally, networks allow for the sharing of files and information, giving authorized users access to data stored on other computers. Distributed computing leverages resources from multiple computers across a network to perform tasks collaboratively. == Network packet == Most modern computer networks use protocols based on packet-mode transmission. A network packet is a formatted unit of data carried by a packet-switched network. Packets consist of two types of data: control information and user data (payload). The control information provides data the network needs to deliver the user data, for example, source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between. With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link is not overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and waits until a link is free. The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred and once the packets arrive, they are reassembled to construct the original message. == Network topology == The physical or geographic locations of network nodes and links generally have relatively little effect on a network, but the topology of interconnections of a network can significantly affect its throughput and reliability. 
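The fragmentation and reassembly described in the packet section above can be sketched in a few lines. This is an illustrative model only; real IP fragmentation also records offsets and identifiers in packet headers so fragments can arrive out of order and still be reassembled.

```python
def fragment(message: bytes, mtu: int) -> list[bytes]:
    """Split a message into fragments no larger than the link's MTU."""
    return [message[i:i + mtu] for i in range(0, len(message), mtu)]

def reassemble(fragments: list[bytes]) -> bytes:
    """Reconstruct the original message from in-order fragments."""
    return b"".join(fragments)

message = b"a message longer than the maximum transmission unit"
fragments = fragment(message, mtu=8)
assert all(len(f) <= 8 for f in fragments)   # every fragment fits the MTU
assert reassemble(fragments) == message      # reassembly restores the original
```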
With many technologies, such as bus or star networks, a single failure can cause the network to fail entirely. In general, the more interconnections there are, the more robust the network is, but the more expensive it is to install. Therefore, most network diagrams are arranged by their network topology, which is the map of logical interconnections of network hosts. Common topologies are:
Bus network: all nodes are connected to a common medium, attaching at points along it. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2. This is still a common topology on the data link layer, although modern physical layer variants use point-to-point links instead, forming a star or a tree.
Star network: all nodes are connected to a special central node. This is the typical layout found in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point.
Ring network: each node is connected to its left and right neighbor node, such that all nodes are connected and each node can reach every other node by traversing nodes leftwards or rightwards. Token ring networks, and the Fiber Distributed Data Interface (FDDI), made use of such a topology.
Mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
Fully connected network: each node is connected to every other node in the network.
Tree network: nodes are arranged hierarchically. This is the natural topology for a larger Ethernet network with multiple switches and without redundant meshing.
The physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring, but the physical topology is often a star, because all neighboring connections can be routed via a central physical location. 
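The robustness point above can be made concrete with a small sketch: model a topology as a list of links, then test whether every node can still reach every other after any single link fails. The node numbers and topologies below are arbitrary illustrations.

```python
from itertools import combinations

def connected(nodes, links):
    """Breadth-first check that all nodes are mutually reachable."""
    nodes = set(nodes)
    if not nodes:
        return True
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        n = frontier.pop()
        if n in seen:
            continue
        seen.add(n)
        frontier += [b for a, b in links if a == n]
        frontier += [a for a, b in links if b == n]
    return seen == nodes

def survives_any_single_link_failure(nodes, links):
    """True if the network stays connected after any one link is removed."""
    return all(connected(nodes, [l for l in links if l != down]) for down in links)

star = [(0, i) for i in range(1, 5)]          # every spoke connects to hub 0
mesh = list(combinations(range(5), 2))        # full mesh on the same five nodes

print(survives_any_single_link_failure(range(5), star))  # cutting a spoke isolates a node
print(survives_any_single_link_failure(range(5), mesh))  # the mesh tolerates any single cut
```

This also illustrates the cost trade-off in the text: the star needs 4 links but has single points of failure, while the full mesh needs 10 links for the same five nodes.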
Physical layout is not completely irrelevant, however, as common ducting and equipment locations can represent single points of failure due to issues like fires, power failures and flooding. === Overlay network === An overlay network is a virtual network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet. Overlay networks have been used since the early days of networking, back when computers were connected via telephone lines using modems, even before data networks were developed. The most striking example of an overlay network is the Internet itself, which was initially built as an overlay on the telephone network. Even today, each Internet node can communicate with virtually any other through an underlying mesh of sub-networks of wildly different topologies and technologies. Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network. Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys. Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance largely because they require modification of all routers in the network. 
On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination. For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others. == Network links == The transmission media (often referred to in the literature as the physical medium) used to link devices to form a computer network include electrical cable, optical fiber, and free space. In the OSI model, the software to handle the media is defined at layers 1 and 2: the physical layer and the data link layer. A widely adopted family of technologies that uses copper and fiber media in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Wireless LAN standards use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data. === Wired === The following classes of wired technologies are used in computer networking. Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second. ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed local area network. Twisted pair cabling is used for wired Ethernet and other standards. 
It typically consists of four pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 Mbit/s to 10 Gbit/s. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios. An optical fiber is a glass fiber. It carries pulses of light that represent data via lasers and optical amplifiers. Some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. Using dense wavelength-division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate that data can be sent, up to trillions of bits per second. Optical fibers can be used for long cable runs carrying very high data rates, and are used for undersea communications cables to interconnect continents. There are two basic types of fiber optics, single-mode optical fiber (SMF) and multi-mode optical fiber (MMF). Single-mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. Multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozen meters, depending on the data rate and cable grade. === Wireless === Network connections can be established wirelessly using radio or other electromagnetic means of communication. Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 40 miles (64 km) apart. Communications satellites – Satellites also communicate via microwave. 
The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals. Cellular networks use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area is served by a low-power transceiver. Radio and spread spectrum technologies – Wireless LANs use a high-frequency radio technology similar to digital cellular. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi. Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices. The Interplanetary Internet extends the Internet to interplanetary distances via radio waves and optical means. IP over Avian Carriers was a humorous April Fools' Request for Comments, issued as RFC 1149. It was implemented in real life in 2001. The last two cases have a large round-trip delay time, which gives slow two-way communication but does not prevent sending large amounts of information (they can have high throughput). == Network nodes == Apart from any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers, repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and so may perform multiple functions. === Network interfaces === A network interface controller (NIC) is computer hardware that connects the computer to the network media and has the ability to process low-level network information. 
For example, the NIC may have a connector for plugging in a cable, or an aerial for wireless transmission and reception, and the associated circuitry. In Ethernet networks, each NIC has a unique Media Access Control (MAC) address, usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce. === Repeaters and hubs === A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart. Repeaters work on the physical layer of the OSI model but still require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters used in a network, e.g., the Ethernet 5-4-3 rule. An Ethernet repeater with multiple ports is known as an Ethernet hub. In addition to reconditioning and distributing network signals, a repeater hub assists with collision detection and fault isolation for the network. Hubs and repeaters in LANs have been largely obsoleted by modern network switches. 
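The MAC address structure described earlier in this section, a three-octet IEEE-assigned manufacturer prefix (OUI) followed by three octets assigned by that manufacturer, can be sketched as follows. The address shown is a made-up example, not a real assignment.

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a six-octet Ethernet MAC address into its manufacturer
    prefix (OUI) and the manufacturer-assigned device part."""
    octets = mac.lower().split(":")
    if len(octets) != 6:
        raise ValueError("an Ethernet MAC address has six octets")
    return ":".join(octets[:3]), ":".join(octets[3:])

# Hypothetical address used purely for illustration.
oui, device = split_mac("00:1A:2B:3C:4D:5E")
print(oui)     # 00:1a:2b  (identifies the NIC manufacturer)
print(device)  # 3c:4d:5e  (unique per interface within that prefix)
```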
=== Bridges and switches === Network bridges and network switches are distinct from a hub in that they only forward frames to the ports involved in the communication, whereas a hub forwards to all ports. A bridge has only two ports, but a switch can be thought of as a multi-port bridge. Switches normally have numerous ports, facilitating a star topology for devices, and for cascading additional switches. Bridges and switches operate at the data link layer (layer 2) of the OSI model and bridge traffic between two or more network segments to form a single local network. Both are devices that forward frames of data between ports based on the destination MAC address in each frame. They learn the association of physical ports to MAC addresses by examining the source addresses of received frames and only forward the frame when necessary. If an unknown destination MAC is targeted, the device broadcasts the request to all ports except the source, and discovers the location from the reply. Bridges and switches divide the network's collision domain but maintain a single broadcast domain. Network segmentation through bridging and switching helps break down a large, congested network into an aggregation of smaller, more efficient networks. === Routers === A router is an internetworking device that forwards packets between networks by processing the addressing or routing information included in the packet. The routing information is often processed in conjunction with the routing table. A router uses its routing table to determine where to forward packets and does not require broadcasting packets, which is inefficient for very large networks. === Modems === Modems (modulator-demodulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. 
Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used on telephone lines, via digital subscriber line technology, and in cable television systems, via DOCSIS technology. === Firewalls === A firewall is a network device or software for controlling network security and access rules. Firewalls are inserted in connections between secure internal networks and potentially insecure external networks such as the Internet. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks. == Communication protocols == A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing. In a protocol stack, often constructed per the OSI model, communications functions are divided up into protocol layers, where each layer leverages the services of the layer below it until the lowest layer controls the hardware that sends information across the media. The use of protocol layering is ubiquitous across the field of computer networking. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web. There are many communication protocols, a few of which are described below. === Common protocols === ==== Internet protocol suite ==== The Internet protocol suite, also called TCP/IP, is the foundation of all modern networking. 
It offers connectionless and connection-oriented services over an inherently unreliable network traversed by datagram transmission using Internet protocol (IP). At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability. The Internet protocol suite is the defining set of protocols for the Internet. ==== IEEE 802 ==== IEEE 802 is a family of IEEE standards dealing with local area networks and metropolitan area networks. The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at layers 1 and 2 of the OSI model. For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based network access control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key". ===== Ethernet ===== Ethernet is a family of technologies used in wired LANs. It is described by a set of standards together called IEEE 802.3, published by the Institute of Electrical and Electronics Engineers. ===== Wireless LAN ===== Wireless LAN, based on the IEEE 802.11 standards and also widely known as WLAN or Wi-Fi, is probably the most well-known member of the IEEE 802 protocol family for home users today. IEEE 802.11 shares many properties with wired Ethernet. ==== SONET/SDH ==== Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. 
They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support circuit-switched digital telephony. However, due to its protocol neutrality and transport-oriented features, SONET/SDH was also the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames. ==== Asynchronous Transfer Mode ==== Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet protocol suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit and packet switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic, and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins. ATM still plays a role in the last mile, which is the connection between an Internet service provider and the home user. ==== Cellular standards ==== There are a number of different digital cellular standards, including: Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN). === Routing === Routing is the process of selecting network paths to carry network traffic. Routing is performed for many kinds of networks, including circuit switching networks and packet switched networks. In packet-switched networks, routing protocols direct packet forwarding through intermediate nodes. 
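As a minimal sketch of how a routing table drives packet forwarding, the following models a router choosing a next hop by matching a destination address against destination prefixes. The next-hop names are hypothetical; the most-specific (longest) matching prefix wins, as in IP routing, which is what lets one table entry cover a whole group of addresses.

```python
import ipaddress

# A toy routing table: destination prefix -> next hop (names are made up).
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "gateway-a",
    ipaddress.ip_network("10.1.0.0/16"): "gateway-b",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def next_hop(destination: str) -> str:
    """Pick the matching route with the longest prefix (most specific)."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.3"))   # most specific match wins: gateway-b
print(next_hop("10.9.9.9"))   # falls back to the /8 route: gateway-a
print(next_hop("192.0.2.1"))  # only the default route matches
```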
Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, firewalls, or switches. General-purpose computers can also forward packets and perform routing, though, because they lack specialized hardware, they may offer limited performance. The routing process directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. Most routing algorithms use only one network path at a time. Multipath routing techniques enable the use of multiple alternative paths. Routing can be contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, the structured addressing used by routers outperforms the unstructured addressing used by bridging. Structured IP addresses are used on the Internet. Unstructured MAC addresses are used for bridging on Ethernet and similar local area networks. == Geographic scale == Networks may be characterized by many properties or features, such as physical capacity, organizational purpose, user authorization, access rights, and others. Another distinct classification method is that of the physical extent or geographic scale. === Nanoscale network === A nanoscale network has key components implemented at the nanoscale, including message carriers, and leverages physical principles that differ from macroscale communication mechanisms. Nanoscale communication extends communication to very small sensors and actuators such as those found in biological systems and also tends to operate in environments that would be too harsh for other communication techniques. === Personal area network === A personal area network (PAN) is a computer network used for communication among computers and other information technology devices close to one person. 
Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and FireWire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN. === Local area network === A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as a home, school, office building, or closely positioned group of buildings. Wired LANs are most commonly based on Ethernet technology. Other networking technologies such as ITU-T G.hn also provide a way to create a wired LAN using existing wiring, such as coaxial cables, telephone lines, and power lines. A LAN can be connected to a wide area network (WAN) using a router. The defining characteristics of a LAN, in contrast to a WAN, include higher data transfer rates, limited geographic range, and lack of reliance on leased lines to provide connectivity. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to and in excess of 100 Gbit/s, standardized by IEEE in 2010. === Home area network === A home area network (HAN) is a residential LAN used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable Internet access or digital subscriber line (DSL) provider. === Storage area network === A storage area network (SAN) is a dedicated network that provides access to consolidated, block-level data storage. 
SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the storage appears as locally attached devices to the operating system. A SAN typically has its own network of storage devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments. === Campus area network === A campus area network (CAN) is made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner (an enterprise, university, government, etc.). For example, a university campus network is likely to link a variety of campus buildings to connect academic colleges or departments, the library, and student residence halls. === Backbone network === A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it. For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. 
Another example of a backbone network is the Internet backbone, which is a massive, global system of fiber-optic cable and optical networking that carries the bulk of data between wide area networks (WANs), metro, regional, national and transoceanic networks. === Metropolitan area network === A metropolitan area network (MAN) is a large computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area. === Wide area network === A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or even spans intercontinental distances. A WAN uses a communications channel that combines many types of media such as telephone lines, cables, and airwaves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI model: the physical layer, the data link layer, and the network layer. === Enterprise private network === An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources. === Virtual private network === A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
VPN may have best-effort performance or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. === Global area network === A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs. == Organizational scope == Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity. === Intranet === An intranet is a set of networks that are under the control of a single administrative entity. An intranet typically uses the Internet Protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information. === Extranet === An extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. The network connection to an extranet is often, but not always, implemented via WAN technology. 
=== Internet === An internetwork is the connection of multiple different types of computer networks to form a single computer network using higher-layer network protocols and connecting them together using routers. The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services. Participants on the Internet use several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths. === Darknet === A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers — sometimes called friends (F2F) — using non-standard protocols and ports. Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.
== Network service == Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate. The World Wide Web, e-mail, printing and network file sharing are examples of well-known network services. Network services such as the Domain Name System (DNS) give names for IP addresses (people remember names like nm.lan better than numbers like 210.121.67.18), and the Dynamic Host Configuration Protocol (DHCP) ensures that the equipment on the network has a valid IP address. Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service. == Network performance == === Bandwidth === Bandwidth in bit/s may refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The throughput is affected by processes such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation (using, for example, bandwidth allocation protocol and dynamic bandwidth allocation). === Network delay === Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints.
Engineers usually report both the maximum and average delay, and they divide the delay into several components, the sum of which is the total delay:
Processing delay – time it takes a router to process the packet header
Queuing delay – time the packet spends in routing queues
Transmission delay – time it takes to push the packet's bits onto the link
Propagation delay – time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds. === Performance metrics === The parameters that affect performance typically include throughput, jitter, bit error rate and latency. In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo. In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements. There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed. === Network congestion === Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, resulting in a deterioration of its quality of service.
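The four delay components can be summed in a short sketch. The link parameters below (processing and queuing times, bandwidth, distance) are illustrative example values, not measurements from any real network:

```python
# Illustrative calculation of the four delay components of one-way network delay.
# Default processing and queuing delays are made-up example values.

def total_delay(packet_bits, link_bandwidth_bps, distance_m,
                propagation_speed_mps=2e8,   # roughly 2/3 the speed of light in fiber
                processing_s=50e-6, queuing_s=200e-6):
    """Return one-way delay in seconds as the sum of the four components."""
    transmission = packet_bits / link_bandwidth_bps   # pushing the bits onto the link
    propagation = distance_m / propagation_speed_mps  # signal travel time through the medium
    return processing_s + queuing_s + transmission + propagation

# A 1500-byte packet over a 100 Mbit/s link spanning 1000 km:
d = total_delay(1500 * 8, 100e6, 1_000_000)
print(f"{d * 1000:.3f} ms")
```

Note how propagation delay dominates over this distance: the 5 ms of signal travel time dwarfs the 0.12 ms needed to serialize the packet onto a 100 Mbit/s link.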
When networks are congested and queues become too full, packets have to be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput. Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse. Modern networks use congestion control, congestion avoidance and traffic control techniques where endpoints typically slow down or sometimes even stop transmission entirely when the network is congested to try to avoid congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing quality of service priority schemes allowing selected traffic to bypass congestion. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for critical services. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard. For the Internet, RFC 2914 addresses the subject of congestion control in detail. 
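As a concrete illustration of the exponential backoff mentioned above, here is a sketch of the binary exponential backoff used by classic Ethernet: after the n-th consecutive collision, a station waits a random number of slot times drawn from an interval that doubles with each collision, up to a cap (the cap of 10 doublings follows the Ethernet convention):

```python
import random

def backoff_slots(collisions, max_exponent=10, rng=random):
    """Slot times to wait after `collisions` consecutive collisions.

    The wait is drawn uniformly from [0, 2**min(collisions, max_exponent) - 1],
    so the expected wait roughly doubles with each collision until the cap.
    """
    upper = 2 ** min(collisions, max_exponent) - 1
    return rng.randint(0, upper)

# The range of possible waits doubles with each collision until the cap:
for n in (1, 2, 3, 16):
    print(n, 2 ** min(n, 10) - 1)
```

Randomizing the wait is what breaks the symmetry between colliding stations; if both simply retried after a fixed interval, they would collide again indefinitely.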
=== Network resilience === Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." == Security == Computer networks are also used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack. === Network security === Network Security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies, and individuals. === Network surveillance === Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency. Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity. Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens. 
However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance". === End to end encryption === End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity. Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio. Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox. The end-to-end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent. 
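To make the integrity half of end-to-end protection concrete, here is a minimal sketch using an HMAC from Python's standard library. The shared key is hypothetical, and a real E2EE system would also encrypt the message for confidentiality and agree on keys via a key-exchange protocol; this only shows why an intermediary without the key cannot tamper undetected:

```python
import hashlib
import hmac

# Hypothetical key the two endpoints are assumed to have agreed on already.
key = b"shared-secret-known-only-to-endpoints"

def tag(message: bytes) -> bytes:
    """Compute an integrity tag over the message with the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Constant-time check that the tag matches the received message."""
    return hmac.compare_digest(tag(message), received_tag)

t = tag(b"meet at noon")
print(verify(b"meet at noon", t))  # True: message arrived unmodified
print(verify(b"meet at ONE", t))   # False: modified in transit, tag no longer matches
```

An intermediary can still see when and how much the endpoints communicate, which is exactly the traffic-analysis limitation noted above.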
=== SSL/TLS === The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called secure socket layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an exhaustive list of root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session is now in a very secure encrypted tunnel between the SSL server and the SSL client. == Views of networks == Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest has less of a connection of being in a local area and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies. Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application-layer gateways) that interconnect via the transmission media. Logical networks, called, in the TCP/IP architecture, subnets, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using VLANs. 
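The certificate check and symmetric-cipher negotiation described in the SSL/TLS passage above are what Python's ssl module performs by default. This sketch only builds the client-side context; the server name in the commented-out connection code is a placeholder, not a recommendation:

```python
import ssl

# create_default_context() loads the platform's preinstalled root certificates
# and requires the server's certificate to verify and to match the hostname
# before the encrypted session is established.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # the certificate must check out
print(ctx.check_hostname)                    # and must match the requested name

# A client would then wrap a TCP socket, e.g.:
#   import socket
#   with socket.create_connection(("example.org", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
#           print(tls.version())  # negotiated protocol, e.g. "TLSv1.3"
```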
Users and administrators are aware, to varying extents, of a network's trust and scope characteristics. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers). Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, that share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS). Over the Internet, there can be business-to-business, business-to-consumer and consumer-to-consumer communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure VPN technology. == See also == == References == This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22. == Further reading == James F. Kurose and Keith W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, Pearson Education, 2005. William Stallings, Computer Networking with Internet Protocols and Technology, Pearson Education, 2004.
Dimitri Bertsekas and Robert Gallager, Data Networks, Prentice Hall, 1992.
https://en.wikipedia.org/wiki/Computer_network
Saskatchewan Polytechnic (formerly the Saskatchewan Institute of Applied Science and Technology or SIAST) is Saskatchewan's primary public post-secondary institution for technical education and skills training, recognized nationally and internationally for its expertise and innovation. Through program and course registrations, Saskatchewan Polytechnic serves 26,000 distinct students with programs that touch every sector of the economy. It operates campuses in Moose Jaw, Prince Albert, Regina and Saskatoon, and provides a number of courses and programs through distance education. Saskatchewan Polytechnic maintains reciprocal arrangements with partner institutions, including: Dumont Technical Institute, First Nations University of Canada, Saskatchewan Indian Institute of Technologies, University of Regina, and the University of Saskatchewan. == Programs == Saskatchewan Polytechnic offers over 150 programs in applied/visual media, aviation, basic education, business, community/human services, engineering technology, health services, hospitality/food services, industrial/trades, natural resources, nursing, technology, recreation and tourism, and science. In addition, Saskatchewan Polytechnic provides training to apprentices in several trades. == Campus == Saskatchewan Polytechnic comprises four campuses in Saskatchewan: Saskatoon (formerly SIAST Kelsey Campus), located on Treaty 6 territory. Located at Idylwyld Drive North and 33rd Street East (southeast corner) in Saskatoon, the campus is named for Henry Kelsey, a famous fur trader and explorer. The institute in Saskatoon dates back to 1941 when The Canadian Vocational Training School was established to train veterans returning from the war. The campus contains over 13 acres (5.3 ha) of instructional floor space. Moose Jaw (formerly SIAST Palliser Campus), located on Treaty 4 territory. Regina (formerly SIAST Wascana Campus), also located on Treaty 4 territory.
Prince Albert (formerly SIAST Woodland Campus), also located on Treaty 6 territory. == History == The four schools that make up Saskatchewan Polytechnic started off as four individual schools. The Moose Jaw Campus started off as the Saskatchewan Technical Institute in 1959. Saskatoon began as the Central Saskatchewan Technical Institute in 1963. Regina began as the Saskatchewan Institute of Applied Arts and Sciences in 1972. Prince Albert began as the Northern Institute of Technology in 1986. On January 1, 1988, The Institute Act and the Regional Colleges Act amalgamated Saskatchewan's technical institutes, urban community colleges and the Advanced Technology Training Centre to form the Saskatchewan Institute of Applied Science and Technology (SIAST). The institution was named Saskatchewan Polytechnic on September 24, 2014. == Scholarships == Saskatchewan Polytechnic joined Project Hero, a scholarship program cofounded by General (Ret'd) Rick Hillier, for the families of fallen Canadian Forces members. == See also == Higher education in Saskatchewan List of colleges in Canada § Saskatchewan == References == == External links == Official website
https://en.wikipedia.org/wiki/Saskatchewan_Polytechnic
Micron Technology, Inc. is an American producer of computer memory and computer data storage including dynamic random-access memory, flash memory, and solid-state drives (SSDs). It is headquartered in Boise, Idaho. Micron's consumer products, including the Ballistix line of consumer and gaming memory modules, are marketed under the Crucial brand. Micron and Intel together created IM Flash Technologies, which produced NAND flash memory. It owned Lexar between 2006 and 2017. Micron is the only U.S.-based manufacturer of memory. == History == === 1978–1999 === Micron was founded in Boise, Idaho, in 1978 by Ward Parkinson, Joe Parkinson, Dennis Wilson, and Doug Pitman as a semiconductor design consulting company. Startup funding was provided by local Idaho businessmen Tom Nicholson, Allen Noble, Rudolph Nelson, and Ron Yanke. Later it received funding from Idaho billionaire J. R. Simplot, whose fortune was made in the potato business. In 1981, the company moved from consulting to manufacturing with the completion of its first wafer fabrication unit ("Fab 1"), producing 64K DRAM chips. In 1984, the company had its initial public offering. Micron sought to enter the market for RISC processors in 1991 with a product known as FRISC, targeting embedded control and signal processing applications. Running at 80 MHz and described as "a 64-bit processor with fast context-switching time and high floating-point performance", the design supported various features for timely interrupt handling and featured an arithmetic unit capable of handling both integer and floating-point calculations with a claimed throughput of 80 MFLOPS for double-precision arithmetic. Micron aimed to provide a "board-level demonstration supercomputer" in configurations with 256 MB or 1 GB of RAM. 
Having set up a subsidiary and with the product being designed into graphics cards and accelerators, Micron concluded in 1992 that the effort would not deliver the "best bang for the buck", reassigning engineers to other projects and discontinuing the endeavour. In 1994 founder Joe Parkinson retired as CEO and Steve Appleton took over as Chairman, President, and CEO. A 1996 3-way merger among ZEOS International, Micron Computer, and Micron Custom Manufacturing Services (MCMS) increased the size and scope of the company; this was followed rapidly by the 1997 acquisition of NetFrame Systems, in a bid to enter the mid-range server industry. Between 1998 and 2000, the company was the main sponsor of the MicronPC Bowl, or MicronPC.com Bowl. === Since 2000 === In 2000 Gurtej Singh Sandhu and Trung T. Doan at Micron initiated the development of atomic layer deposition high-k films for DRAM memory devices. This helped drive cost-effective implementation of semiconductor memory, starting with 90 nm node DRAM. Pitch double-patterning was also pioneered by Gurtej Singh Sandhu at Micron during the 2000s, leading to the development of 30-nm class NAND flash memory, and it has since been widely adopted by NAND flash and RAM manufacturers worldwide. In 2002 Micron spun off its personal computer business as MPC Corporation and put it up for sale. The company found the business difficult as the number 12 American computer maker with only 1.3 percent of the market. Micron and Intel created a joint venture in 2005, IM Flash Technologies, based in Lehi, Utah. The two companies formed another joint venture in 2011, IM Flash Singapore, in Singapore. In 2012 Micron became sole owner of this second joint venture. In 2006 Micron acquired Lexar, an American manufacturer of digital media products. The company again changed leadership in June 2007 with COO Mark Durcan becoming president.
In 2008 Micron converted the Avezzano chip fab, formerly a Texas Instruments DRAM fab, into a production facility for CMOS image sensors sold by Aptina Imaging. In 2008 Micron spun off Aptina Imaging, which was acquired by ON Semiconductor in 2014. Micron retained a stake in the spinoff. However, the core company suffered setbacks and had to lay off 15 percent of its workforce in October 2008, during which period the company also announced the purchase of Qimonda's 35.6 percent stake in Inotera Memories for $400 million. The trend of layoffs and acquisitions continued in 2009 with the termination of an additional 2,000 employees, and the acquisition of the FLCOS microdisplay company Displaytech. Micron agreed to buy flash-chip maker Numonyx for $1.27 billion in stock in February 2010. On 3 February 2012 CEO Appleton died in a plane crash shortly after takeoff from the Boise Airport. He was the pilot and sole occupant of the Lancair IV aircraft. Mark Durcan replaced Appleton as the CEO shortly thereafter, eliminating his former title of President. In 2013 the Avezzano chip fab was sold to LFoundry. In the 2012 to 2014 period, Micron again went through an acquisition-layoff cycle, becoming the majority shareholder of Inotera Memories, purchasing Elpida Memory for $2 billion and the remaining shares in Rexchip, a PC memory chip manufacturing venture between Powerchip and Elpida Memory, for $334 million, while announcing plans to lay off approximately 3,000 workers. Through the Elpida acquisition, Micron became a major supplier to Apple Inc. for the iPhone and iPad. In December 2016 Micron finished acquiring the remaining 67 percent of Inotera, making it a 100 percent subsidiary of Micron. In April 2017 Micron announced Sanjay Mehrotra as the new president and CEO to replace Mark Durcan. In June 2017 Micron announced it was discontinuing the Lexar retail removable media storage business and putting some or all of it up for sale.
In August of that year the Lexar brand was acquired by Longsys, a flash memory company based in Shenzhen, China. In May 2018 Micron Technology and Intel launched QLC NAND memory to increase storage density. The company ranked 150th on the Fortune 500 list of largest United States corporations by revenue. In February 2019 the first microSD card with a storage capacity of 1 terabyte (TB) was announced by Micron. As of March 2020, the 3.84 TB Micron 5210 Ion was the cheapest large-capacity SSD in the world. In September 2020 the company introduced the world's fastest discrete graphics memory solution. Working with computing technology leader Nvidia, Micron debuted GDDR6X in the Nvidia GeForce RTX 3090 and GeForce RTX 3080 graphics processing units (GPUs). In November 2020, the company unveiled a new 176-layer 3D NAND module. It offers improved read and write latency and is slated to be used in the production of a new generation of solid-state drives. On 22 October 2021, Micron closed the sale of IM Flash's Lehi, Utah fab to Texas Instruments for a sale price of US$900 million. With the passage of the CHIPS and Science Act, Micron announced its pledge to invest billions in new manufacturing within the US. In September 2022, Micron announced it would invest $15 billion in a new facility in Boise, Idaho. In October 2022 Micron announced a $100 billion expansion in Clay, New York. A jury found that Micron Technology owed Netlist $445 million in damages for infringing Netlist's patents related to memory-module technology for high-performance computing, and that Micron's semiconductor-memory products violated two of Netlist's patents willfully, potentially allowing the judge to triple the damages. Netlist had sued Micron in 2022, accusing three of its memory-module lines of patent infringement, which Micron denied, also arguing the patents' invalidity. The U.S. Patent and Trademark Office invalidated one patent in April 2024.
=== Lawsuits === ==== Fujian Jinhua ==== On 5 December 2017 Micron sued rivals United Microelectronics Corporation and Fujian Jinhua Integrated Circuit Co. (JHICC) in the United States District Court for the Northern District of California, alleging infringement of its DRAM patents and intellectual property rights. The U.S. Justice Department in 2018 announced an indictment against Fujian Jinhua, and authorities added the Chinese firm to the Entity List the same year. Fujian Jinhua vehemently denied the claims, saying it had not stolen any technology, and that "Micron regards the development of Fujian Jinhua as a threat and adopts various means to hamper and destroy the development of Fujian Jinhua." In May 2023, the Cyberspace Administration of China barred major Chinese information infrastructure firms from purchasing Micron products, citing significant national security risks. The move was seen as retaliation against US sanctions on China's semiconductor industry and related export controls. In November 2023 Chinese chipmaker Yangtze Memory Technologies Corp (YMTC) filed a lawsuit against Micron alleging infringement of eight of its patents. On February 27, 2024, Judge Maxine Chesney of the U.S. Federal District Court in San Francisco acquitted Fujian Jinhua Integrated Circuit, which Micron had sued for IP theft, of the charge in a non-jury verdict, finding that there was insufficient evidence to support the charge. == See also == List of companies based in Idaho List of semiconductor fabrication plants == References == == External links == Official website Crucial Micron Business data for Micron Technology:
https://en.wikipedia.org/wiki/Micron_Technology
Computing is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes the study and experimentation of algorithmic processes, and the development of both hardware and software. Computing has scientific, engineering, mathematical, technological, and social aspects. Major computing disciplines include computer engineering, computer science, cybersecurity, data science, information systems, information technology, and software engineering. The term computing is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. == History == The history of computing is longer than the history of computing hardware and includes the history of methods intended for pen and paper (or for chalk and slate) with or without the aid of tables. Computing is intimately tied to the representation of numbers, though mathematical concepts necessary for computing existed before numeral systems. The earliest known tool for use in computation is the abacus, and it is thought to have been invented in Babylon between 2700 and 2300 BC. Abaci, of a more modern design, are still used as calculation tools today. The first recorded proposal for using digital electronics in computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947. In 1953, the University of Manchester built the first transistorized computer, the Transistor Computer.
However, early junction transistors were relatively bulky devices that were difficult to mass-produce, which limited them to a number of specialised applications. In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field-effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, a team demonstrated a working MOSFET at Bell Labs in 1960. The MOSFET made it possible to build high-density integrated circuits, leading to what is known as the computer revolution or microcomputer revolution. == Computer == A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the CPU type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer. They trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions. === Computer hardware === Computer hardware includes the physical parts of a computer, including the central processing unit, memory, and input/output. Computational logic and computer architecture are key topics in the field of computer hardware. === Computer software === Computer software, or just software, is a collection of computer programs and related data, which provides instructions to a computer. Software refers to one or more computer programs and data held in the storage of the computer. 
It is a set of programs, procedures, and algorithms, together with their documentation, concerned with the operation of a data processing system. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term hardware (meaning physical devices). In contrast to hardware, software is intangible. Software is also sometimes used in a narrower sense, meaning application software only. ==== System software ==== System software, or systems software, is computer software designed to operate and control computer hardware, and to provide a platform for running application software. System software includes operating systems, utility software, device drivers, window systems, and firmware. Frequently used development tools such as compilers, linkers, and debuggers are classified as system software. System software and middleware manage and integrate a computer's capabilities, but typically do not directly apply them in the performance of tasks that benefit the user, unlike application software. ==== Application software ==== Application software, also known as an application or an app, is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software, and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or may be published separately. Some users are satisfied with the bundled apps and need never install additional applications. The system software manages the hardware and serves the application, which in turn serves the user. Application software applies the power of a particular computing platform or system software to a particular purpose. 
Some apps, such as Microsoft Office, are developed in multiple versions for several different platforms; others have narrower requirements and are generally referred to by the platform they run on, such as a geography application for Windows, an educational app for Android, or a game for Linux. Applications that run on only one platform and increase the desirability of that platform due to their popularity are known as killer applications. === Computer network === A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow the sharing of resources and information. When at least one process in one device is able to send or receive data to or from at least one process residing in a remote device, the two devices are said to be in a network. Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope. Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. One well-known communications protocol is Ethernet, a hardware and link layer standard that is ubiquitous in local area networks. Another common protocol is the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, host-to-host data transfer, and application-specific data transmission formats. Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology, or computer engineering, since it relies upon the theoretical and practical application of these disciplines. 
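The host-to-host data transfer described above can be seen in miniature with the standard sockets API. The sketch below is a minimal illustration, not tied to any particular system mentioned here: two processes on one machine exchange a few bytes over TCP (part of the Internet Protocol Suite) via the loopback interface.

```python
import socket
import threading

def serve_once(server_sock):
    # Accept exactly one connection and echo back an uppercased reply.
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

# A TCP listener on the loopback interface; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

# The "remote" process: connect, send, and read the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, network")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())  # HELLO, NETWORK
```

The same calls work unchanged across machines; only the address changes, which is the point of a layered protocol suite.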
==== Internet ==== The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users. This includes millions of private, public, academic, business, and government networks, ranging in scope from local to global. These networks are linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web and the infrastructure to support email. === Computer programming === Computer programming is the process of writing, testing, debugging, and maintaining the source code and documentation of computer programs. This source code is written in a programming language, which is an artificial language that is often more restrictive than natural languages, but easily translated by the computer. Programming is used to invoke some desired behavior (customization) from the machine. Writing high-quality source code requires knowledge of both the computer science domain and the domain in which the application will be used. The highest-quality software is thus often developed by a team of domain experts, each a specialist in some area of development. However, the term programmer may apply to a range of program quality, from hacker to open source contributor to professional. It is also possible for a single programmer to do most or all of the computer programming needed to generate the proof of concept to launch a new killer application. ==== Computer programmer ==== A programmer, computer programmer, or coder is a person who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. 
A programmer's primary computer language (C, C++, Java, Lisp, Python, etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with Web. The term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming. === Computer industry === The computer industry is made up of businesses involved in developing computer software, designing computer hardware and computer networking infrastructures, manufacturing computer components, and providing information technology services, including system administration and maintenance. The software industry includes businesses engaged in development, maintenance, and publication of software. The industry also includes software services, such as training, documentation, and consulting. == Sub-disciplines of computing == === Computer engineering === Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration, rather than just software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering includes not only the design of hardware within its own domain, but also the interactions between hardware and the context in which it operates. === Software engineering === Software engineering is the application of a systematic, disciplined, and quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches. 
That is, it is the application of engineering to software: the act of using insights to conceive, model, and scale a solution to a problem. The term was first used at the 1968 NATO Software Engineering Conference and was intended to provoke thought regarding the perceived software crisis of the time. Software development, a widely used and more generic term, does not necessarily subsume the engineering paradigm. The generally accepted concepts of software engineering as an engineering discipline have been specified in the Guide to the Software Engineering Body of Knowledge (SWEBOK). The SWEBOK has become an internationally accepted standard in ISO/IEC TR 19759:2015. === Computer science === Computer science or computing science (abbreviated CS or Comp Sci) is the scientific and practical approach to computation and its applications. A computer scientist specializes in the theory of computation and the design of computational systems. Its subfields can be divided into practical techniques for its implementation and application in computer systems, and purely theoretical areas. Some, such as computational complexity theory, which studies fundamental properties of computational problems, are highly abstract, while others, such as computer graphics, emphasize real-world applications. Others focus on the challenges in implementing computations. For example, programming language theory studies approaches to the description of computations, while the study of computer programming investigates the use of programming languages and complex systems. The field of human–computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to humans. === Cybersecurity === The field of cybersecurity pertains to the protection of computer systems and networks. This includes information and data privacy, preventing disruption of IT services and prevention of theft of and damage to hardware, software, and data. 
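One elementary building block behind the data-protection goals just described is the cryptographic hash function. The sketch below is a minimal illustration (the messages are invented for the example) using Python's standard `hashlib` to detect that data has been tampered with:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest; it changes if the data changes at all."""
    return hashlib.sha256(data).hexdigest()

original = b"transfer $100 to account 42"
tampered = b"transfer $900 to account 42"

# Store the digest of the trusted message.
stored = fingerprint(original)

# Integrity check: any modification yields a different digest.
print(fingerprint(original) == stored)  # True
print(fingerprint(tampered) == stored)  # False
```

A hash alone does not provide secrecy or authentication; real systems combine it with encryption and keyed signatures, but the tamper-evidence idea is the same.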
=== Data science === Data science is a field that uses scientific and computing tools to extract information and insights from data, driven by the increasing volume and availability of data. Data mining, big data, statistics, machine learning and deep learning are all interwoven with data science. === Information systems === Information systems (IS) is the study of complementary networks of hardware and software (see information technology) that people and organizations use to collect, filter, process, create, and distribute data. The ACM's Computing Careers describes IS as: "A majority of IS [degree] programs are located in business schools; however, they may have different names such as management information systems, computer information systems, or business information systems. All IS degrees combine business and computing topics, but the emphasis between technical and organizational issues varies among programs. For example, programs differ substantially in the amount of programming required." The study of IS bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes within a computer science discipline. The field of Computer Information Systems (CIS) studies computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society while IS emphasizes functionality over design. === Information technology === Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit, and manipulate data, often in the context of a business or other enterprise. The term is commonly used as a synonym for computers and computer networks, but also encompasses other information distribution technologies such as television and telephones. 
Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, e-commerce, and computer services. == Research and emerging technologies == DNA-based computing and quantum computing are areas of active research for both computing hardware and software, such as the development of quantum algorithms. Potential infrastructure for future technologies includes DNA origami on photolithography and quantum antennae for transferring information between ion traps. By 2011, researchers had entangled 14 qubits. Fast digital circuits, including those based on Josephson junctions and rapid single flux quantum technology, are moving closer to practical realization with the discovery of nanoscale superconductors. Fiber-optic and photonic (optical) devices, which have already been used to transport data over long distances, are starting to be used by data centers, along with CPU and semiconductor memory components. This allows the separation of RAM from CPU by optical interconnects. IBM has created an integrated circuit with both electronic and optical information processing in one chip. This is denoted CMOS-integrated nanophotonics (CINP). One benefit of optical interconnects is that motherboards, which formerly required a certain kind of system on a chip (SoC), can now move formerly dedicated memory and network controllers off the motherboards, spreading the controllers out onto the rack. This allows standardization of backplane interconnects and motherboards for multiple types of SoCs, which allows more timely upgrades of CPUs. Another field of research is spintronics. Spintronics can provide computing power and storage without heat buildup. Some research is being done on hybrid chips, which combine photonics and spintronics. There is also research ongoing on combining plasmonics, photonics, and electronics. 
=== Cloud computing === Cloud computing is a model that allows for the use of computing resources, such as servers or applications, without the need for interaction between the owner of these resources and the end user. It is typically offered as a service, making it an example of Software as a Service, Platform as a Service, and Infrastructure as a Service, depending on the functionality offered. Key characteristics include on-demand access, broad network access, and the capability of rapid scaling. It allows individual users or small businesses to benefit from economies of scale. One area of interest in this field is its potential to support energy efficiency. Allowing thousands of instances of computation to occur on one single machine instead of thousands of individual machines could help save energy. It could also ease the transition to renewable energy sources, since it would suffice to power one server farm with renewable energy, rather than millions of homes and offices. However, this centralized computing model poses several challenges, especially in security and privacy. Current legislation does not sufficiently protect users from companies mishandling their data on company servers. This suggests potential for further legislative regulation of cloud computing and tech companies. === Quantum computing === Quantum computing is an area of research that brings together the disciplines of computer science, information theory, and quantum physics. While the idea of information as part of physics is relatively new, there appears to be a strong tie between information theory and quantum mechanics. Whereas traditional computing operates on a binary system of ones and zeros, quantum computing uses qubits. Qubits are capable of being in a superposition, i.e. in both states of one and zero simultaneously. Thus, the value of the qubit is not between 1 and 0, but changes depending on when it is measured. 
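In standard Dirac notation (a textbook formulation, not specific to any machine discussed here), the superposition just described is written:

```latex
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
```

Measuring the qubit collapses this state, returning 0 with probability |α|² and 1 with probability |β|², which is why the qubit's value depends on when it is measured.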
This superposition, combined with quantum entanglement between qubits, is the core idea of quantum computing that allows quantum computers to perform large-scale computations. Quantum computing is often used for scientific research in cases where traditional computers do not have the computing power to do the necessary calculations, such as in molecular modeling. Large molecules and their reactions are far too complex for traditional computers to calculate, but the computational power of quantum computers could provide a tool to perform such calculations. == See also == Artificial intelligence Computational science Computational thinking Computer algebra Confidential computing Creative computing Data-centric computing Electronic data processing Enthusiast computing Index of history of computing articles Instruction set architecture Lehmer sieve Liquid computing List of computer term etymologies Mobile computing Outline of computers Outline of computing Scientific computing Spatial computing Ubiquitous computing Unconventional computing Urban computing Virtual reality == References == == External links == FOLDOC: the Free On-Line Dictionary Of Computing
https://en.wikipedia.org/wiki/Computing
Kingston Technology Corporation is an American multinational computer technology corporation that develops, manufactures, sells and supports flash memory products, other computer-related memory products, as well as the HyperX gaming division (now owned by HP). Headquartered in Fountain Valley, California, United States, Kingston Technology employed more than 3,000 people worldwide as of Q1 2016. The company has manufacturing and logistics facilities in the United States, United Kingdom, Ireland, Taiwan, and China. It is the largest independent producer of DRAM memory modules, owning approximately 68% of the third-party worldwide DRAM module market share in 2017, according to DRAMeXchange. In 2018, the company generated $7.5 billion in revenue and made No. 53 on the Forbes Lists of "America's Largest Private Companies 2019." Kingston serves an international network of distributors, resellers, retailers and OEM customers on six continents. The company also provides contract manufacturing and supply chain management services for semiconductor manufacturers and system OEMs. == History == Kingston Technology was founded on October 17, 1987, in response to a severe shortage of 1 Mbit surface-mount memory chips. Taiwanese immigrant John Tu designed a new single in-line memory module (SIMM) that used readily available, older-technology through-hole components. In 1990 the company branched out into its first non-memory product line, processor upgrades. By 1992, the firm was ranked No. 1 by Inc. as the fastest-growing privately held company in America. The company expanded into networking and storage product lines, and introduced DataTraveler and DataPak portable products. In September 1994, Kingston became ISO 9000 certified on its first assessment attempt. In 1995, Kingston opened a branch office in Munich, Germany to provide technical support and marketing capabilities for its European distributors and customers. 
In October 1995, the company joined the "Billion-Dollar Club". After the company's 1995 sales exceeded $1.3 billion, ads ran thanking the employees ("Thanks a Billion!") with each individual employee's name in The Wall Street Journal, The Orange County Register and The Los Angeles Times. Ads also appeared in trade publications and The Wall Street Journal thanking the company's suppliers and distributors. On August 15, 1996, SoftBank Corporation of Japan acquired 80 percent of Kingston for a total of $1.8 billion. In November of the same year, Kingston and Toshiba co-marketed memory upgrades for Toshiba PCs - the first time that a PC OEM and a memory manufacturer had teamed up to create a co-branded module. On December 14, 1996, John Tu and David Sun allocated $71.5 million for employee bonuses as a result of the acquisition, averaging $130,000 for each of the company's 550 workers. Kingston announced a 49% increase in unit sales for its memory module products in calendar year 1996 over calendar year 1995. In 1996, Kingston opened its European headquarters in London, United Kingdom. In January 1997, Kingston opened a manufacturing facility/office in Taiwan, a sales office in Japan, and a manufacturing facility and offices in Dublin, Ireland. The company also expanded its American manufacturing capacity by purchasing PC-OEM manufacturing buildings in Fountain Valley, California. Kingston also introduced ValueRAM, a high-quality, low-cost memory line designed for system integrators to use in white box systems. In 1999, Tu and Sun bought back the 80 percent of Kingston owned by SoftBank for $450 million. That same year, Kingston launched Advanced Validation Labs, Inc. (AVL), a sister company that provides memory validation services. === 2000s === Kingston began manufacturing removable disk drive storage products in 1989 in their Kingston Storage Products Division. 
By 2000, Kingston had decided to spin the product line off as a sister company, StorCase Technology, Inc. StorCase ceased operations in 2006 after selling the designs and rights to manufacture its products to competitor CRU-DataPort. In June 2000, Kingston announced a new supply chain management model for its memory manufacturing process. Payton Technology Inc. was established to help support this new model. Forbes listed Kingston as number 141 on its list of "The 500 Largest Private Companies in the U.S.," with revenues of $1.5 billion for 1999. In March 2001, Kingston announced the formation of the Consumer Markets Division (CMD), a new division focusing on the retail and e-tail channel. In 2002, Kingston launched a patented memory tester and a new HyperX line of high-performance memory modules, and also patented EPOC chip-stacking technology. In August of that year, Kingston made a $50 million investment in Elpida and launched a green initiative for module manufacturing. In 2004, Kingston announced revenues of $1.8B for 2003. In September, Kingston announced new DataTraveler Elite USB drives with hardware-based security encryption. In October, Advanced Micro Devices named Kingston "Outstanding Partner" for contributions to the AMD Athlon 64 and Opteron launches. The following year, Kingston reported revenues of $2.4B for 2004. In May, Kingston launched a line of validated ValueRAM modules for Intel-based servers. The company was later granted a U.S. patent on a dynamic burn-in tester for server memory. It also announced a $26M investment in Tera Probe, the newest and largest wafer testing company in the world, and opened the world's largest memory module manufacturing facility in Shanghai, China. In 2006, Kingston reported revenues of $3.0B for 2005. In March, Kingston introduced the first fully secure, 100% privacy USB drive with 128-bit hardware encryption, and later with 256-bit hardware encryption. 
The company also launched fully buffered DIMMs (FB-DIMMs), which broke the 16 GB barrier. The company entered the portable media market with KPEX (Kingston Portable Entertainment eXperience). In 2007, Kingston reported revenues of $3.7B for 2006. Forbes listed Kingston as No. 83 on its list of "The 500 Largest Private Companies in the U.S". Inc. ranked Kingston as the No. 1 fastest-growing private company by revenue. In 2008, Kingston reported revenues of $4.5B for 2007. In August, Inc.com's "Top 100 Inc. 5000 Companies" ranked Kingston No. 2 in both gross dollars of growth and overall revenue. Forbes listed Kingston as No. 79 on its list of "The 500 Largest Private Companies in the U.S." In 2009, Kingston reported revenues of $4.0B for 2008. Memory unit shipments increased 41% from 2007. iSuppli ranked Kingston as the world's number-one memory module manufacturer for the third-party memory market for the sixth consecutive year. In August, Inc.com's "Top 100 Inc. 5000 Companies" ranked Kingston No. 5 in private companies by revenue and No. 1 in the computer hardware category. In October, Forbes listed Kingston as No. 97 on its list of "The 500 Largest Private Companies in the U.S." In 2010, Kingston reported revenues of $4.1B for 2009. iSuppli ranked Kingston as the world's number-one memory module manufacturer for the third-party memory market with 40.3% market share, up from 32.8% in 2008 and 27.5% in 2007. In August, Inc.com's "Top 100 Inc. 5000 Companies" ranked Kingston No. 6 in private companies by revenue and No. 1 in the computer hardware category. In November, Forbes listed Kingston as No. 77 on its list of "The 500 Largest Private Companies in the U.S." In 2011, Kingston reported revenues of $6.5B for 2010. iSuppli ranked Kingston as the world's number-one memory module manufacturer for the third-party memory market, with 46% market share. Kingston also launched the Wi-Drive line of wireless storage products. 
Forbes ranked Kingston as the 51st largest private company in the US, up from No. 77. Inc. ranked Kingston No. 4 by revenue in the top 100 companies and No. 1 in the computer hardware category. Gartner Research ranked Kingston as the No. 1 USB drive manufacturer in the world. In 2012, Kingston celebrated 25 years in the memory business. iSuppli ranked Kingston as the world's number-one memory module manufacturer for the third-party memory market for the 9th consecutive year. Kingston celebrated 10 years of HyperX gaming memory, released HyperX-branded SSDs, and released the first Windows To Go USB drive. Forbes listed Kingston as No. 48 on its list of "The 500 Largest Private Companies in the U.S." Gartner Research again ranked Kingston the No. 1 USB drive manufacturer in the world. In 2013, Kingston shipped the DataTraveler HyperX Predator 3.0, its fastest and the world's largest-capacity USB 3.0 flash drive, available in capacities up to 1 TB. Kingston launched the MobileLite Wireless reader line of storage products for smartphones and tablets. iSuppli ranked Kingston as the world's number-one memory module manufacturer for the third-party memory market for the 10th consecutive year. Gartner Research ranked Kingston the No. 1 USB flash drive manufacturer in the world for the 6th straight year. Forbes listed Kingston as No. 94 on its list of "The 500 Largest Private Companies in the U.S." In 2014, Kingston HyperX released the FURY memory line for entry-level overclocking and gaming enthusiasts. HyperX then released its Cloud headset. iSuppli (IHS) ranked Kingston as the world's number-one memory module manufacturer for the third-party memory market for the 11th consecutive year. HyperX set a DDR3 overclocking world record of 4620 MHz, using one 4 GB HyperX Predator 2933 MHz DDR3 module. Kingston shipped M.2 SATA SSDs for new notebook platforms, small-form-factor devices and Z97 motherboards. Kingston released the MobileLite Wireless G2, the second-generation media streamer for smartphones and tablets. 
HyperX demonstrated DDR4 memory at PAX Prime, allowing for faster speeds at a lower voltage. Forbes listed Kingston as No. 69 on its list of "The 500 Largest Private Companies in the U.S." In 2015, IHS ranked Kingston as the world's number-one memory module manufacturer for the third-party memory market for the 12th consecutive year. In January, HyperX reclaimed the top DDR4 overclocking mark in the world at 4351 MHz. HyperX launched its highest-end PCIe SSD, the fastest drive in the HyperX lineup. HyperX released the enhanced Cloud II headset with a USB sound-card audio control box and virtual 7.1 surround sound. HyperX built the world's fastest 128 GB DDR4 memory kit, running at 3000 MHz with ultra-tight timings on HyperX Predator modules. Gartner ranked Kingston as the No. 2 aftermarket PC SSD manufacturer in the world for 2014. Forbes listed Kingston as No. 54 on its list of "The 500 Largest Private Companies in the U.S." In 2016, Kingston Digital, the Flash memory affiliate of Kingston Technology Company, acquired the USB technology and assets of IronKey from Imation Corp. Forbes listed Kingston as No. 51 on its list of "The 500 Largest Private Companies in the U.S." Kingston Technology sold HyperX to HP Inc. in June 2021 for $425 million. The deal included only computer peripherals branded as HyperX, not memory or storage. Kingston retains ownership of the memory and storage products, which it has rebranded as Kingston FURY. == Awards and recognition == iSuppli (IHS) has ranked Kingston as the world's number-one memory module manufacturer for the third-party memory market for 12 consecutive years, the most recent being in June 2015. In 2007, Inc. presented Kingston Technology's founders with the inaugural Distinguished Alumni Goldhirsh Award. In September 2006, Kingston received Intel's "Outstanding Supplier Award for Exceptional Support, Quality and Timely Delivery of FB-DIMM Products". 
In April 2003, Kingston received the "Diverse Supplier Award for Best Overall Performance" from Dell. It was also honored for "Excellence in Fairness" by the Great Place to Work Institute. The company also appeared on Fortune's list of "100 Best Companies to Work For" for five consecutive years (1998–2002). In 2001, it was listed by IndustryWeek as a "Top 5 Global Manufacturing Company". Forbes has ranked Kingston as No. 51 on its list of America's Largest Private Companies. The HyperX line of products is used by over 20% of professional gamers. == Products == Computer - System Specific memory upgrades, ValueRam for system builders and OEMs Digital audio players - K-PEX 100, Mini-Secure Digital, Micro-Secure Digital, MMC Flash memory - Such as Secure Digital, Compact Flash, USB Flash Drives, Solid-state drives and various other form factors Mobile phones - Mini-Secure Digital, Micro-Secure Digital, MMC Printer - LaserJet memory, Lexmark printer memory, etc. Server - Memory for both branded (i.e. IBM, HP, etc.) and white box servers (ValueRAM, Server Premier) Wireless storage products - Wi-Drive wireless storage and MobileLite Wireless readers == References == == External links == Official website – Kingston On MicroSD problems blog
https://en.wikipedia.org/wiki/Kingston_Technology
Boom Technology, Inc. (trade name Boom Supersonic) is an American company developing the Overture, a supersonic airliner. It has also flight-tested a one-third-scale demonstrator, the Boom XB-1 "Baby Boom", which broke the sound barrier for the first time on January 28, 2025, during a flight from the Mojave Air and Space Port. == History == The company was founded in Denver in 2014. It participated in a Y Combinator startup incubation program in early 2016, and has been funded by Y Combinator, Sam Altman, Seraph Group, Eight Partners, and others. In March 2017, $33 million was invested by several venture funds: Continuity Fund, RRE Ventures, Palm Drive Ventures, 8VC and Caffeinated Capital. Boom secured $41 million of total financing by April 2017. In December 2017, Japan Airlines invested $10 million, raising the company's capital to $51 million: enough to build the XB-1 "Baby Boom" demonstrator and complete its testing, and to start early design work on the 55-seat airliner. In January 2019, Boom raised a further $100 million, bringing the total to $151 million, then planning the demonstrator's first flight for later in 2019. In January 2022, the company announced plans to build a 400,000-square-foot (37,000 m2) manufacturing facility on a 65-acre (260,000 m2) site at Piedmont Triad International Airport in Greensboro, North Carolina. In November 2023, a representative of the NEOM Investment Fund announced its investment in Boom at an undisclosed amount, following Boom's earlier announcement of a "strategic investment" in the company from the fund. The 64–80-seat Overture is planned to be the first supersonic passenger jet since the British-French Concorde, which was retired in 2003. 
== Projects == === XB-1 "Baby Boom" demonstrator === The Boom XB-1 "Baby Boom" is a one-third-scale supersonic demonstrator, designed to maintain Mach 2.2, with over 1,000 nautical miles (1,900 km; 1,200 mi) of range, and powered by three General Electric J85-15 engines with 4,300 pounds-force (19 kN) of thrust. It was rolled out in October 2020. It was expected to be flight tested in 2022, but delays pushed the first flight test to March 22, 2024. During that test flight, the aircraft reached speeds of up to 238 knots (441 km/h; 274 mph) and achieved an altitude of over 7,000 feet (2,100 m). In the test flight on 13 December 2024, the aircraft reached speeds of up to 517 knots (957 km/h; 595 mph) and achieved an altitude of over 27,000 feet (8,200 m). In the test flight on 28 January 2025, the aircraft broke the sound barrier, reaching speeds of up to 650 knots (1,200 km/h; 750 mph) and achieving an altitude of over 35,000 feet (11,000 m). It became the first privately funded aircraft to break the sound barrier, reaching a speed of Mach 1.122. After refining its sonic boom models and improving its algorithms for predicting Mach cutoff conditions, the company said XB-1 achieved supersonic flight without generating an audible sonic boom that reached the ground. === Overture airliner === The Boom Overture is a supersonic transport in development, intended to cruise at Mach 1.7 (1,000 kn; 1,800 km/h; 1,100 mph), carry 65 to 88 passengers, and achieve a planned range of 4,250 nmi (7,870 km; 4,890 mi). With 500 viable routes, Boom suggests there could be a market for 1,000 supersonic airliners with business-class fares. It had gathered 76 commitments by December 2017. Boom decided to use the delta wing configuration of Concorde and to make use of composite materials. The aircraft is to be powered by three 15,000–20,000 lbf (67–89 kN) dry turbofan engines. 
In January 2021, Boom announced plans to begin Overture test flights in 2027 and Boom CEO Blake Scholl "estimates that flights on Overture will be available in 2030." United Airlines announced in June 2021 that it had signed a deal to purchase 15 Boom Overture aircraft, with an option to buy 35 more. American Airlines announced in August 2022 it had agreed to purchase 20 Boom Overture aircraft. === Symphony engine === In December 2022, Boom announced the Symphony, a new propulsion system to be designed for the Overture. Boom will work with three companies to develop Symphony: Florida Turbine Technologies for engine design, GE Additive for additive technology design consulting, and StandardAero for maintenance. In April 2025, Boom acquired a former Reaction Engines hypersonic test facility at Colorado Air and Space Port, to serve as the dedicated test site for the Symphony engine. === Mach 4 airliner concept === Boom Supersonic is participating in a NASA-led study to develop concept designs and technology roadmaps for a Mach 4 airliner. Boom is part of a team led by Northrop Grumman Aeronautics Systems, alongside Blue Ridge Research and Consulting and Rolls-Royce North American Technologies. == See also == Supersonic business jet Aerion Concorde Exosonic Spike S-512 == References ==
https://en.wikipedia.org/wiki/Boom_Technology
Wi-Fi () is a family of wireless network protocols based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access, allowing nearby digital devices to exchange data by radio waves. These are the most widely used computer networks, used globally in home and small office networks to link devices and to provide Internet access with wireless routers and wireless access points in public places such as coffee shops, restaurants, hotels, libraries, and airports. Wi-Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term "Wi-Fi Certified" to products that successfully complete interoperability certification testing. Non-compliant hardware is simply referred to as WLAN, and it may or may not work with "Wi-Fi Certified" devices. As of 2017, the Wi-Fi Alliance consisted of more than 800 companies from around the world. As of 2019, over 3.05 billion Wi-Fi-enabled devices are shipped globally each year. Wi-Fi uses multiple parts of the IEEE 802 protocol family and is designed to work well with its wired sibling, Ethernet. Compatible devices can network through wireless access points with each other as well as with wired devices and the Internet. Different versions of Wi-Fi are specified by various IEEE 802.11 protocol standards, with different radio technologies determining radio bands, maximum ranges, and speeds that may be achieved. Wi-Fi most commonly uses the 2.4 gigahertz (125 mm) UHF and 5 gigahertz (60 mm) SHF radio bands, with the 6 gigahertz SHF band used in newer generations of the standard; these bands are subdivided into multiple channels. Channels can be shared between networks, but, within range, only one transmitter can transmit on a channel at a time. Wi-Fi's radio bands work best for line-of-sight use. 
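The wavelengths quoted for these bands follow directly from lambda = c / f. As an illustrative check (not part of any standard, and the helper name is hypothetical):

```python
# Free-space wavelength of each Wi-Fi band from lambda = c / f,
# rounded to the nearest millimetre.

C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Free-space wavelength in millimetres for a frequency in GHz."""
    return C / (freq_ghz * 1e9) * 1000

for band in (2.4, 5.0, 6.0, 60.0):
    print(f"{band} GHz -> {wavelength_mm(band):.0f} mm")
# 2.4 GHz -> 125 mm, 5 GHz -> 60 mm, 6 GHz -> 50 mm, 60 GHz -> 5 mm
```

This is why the higher-frequency bands are sometimes described by their millimetre-scale wavelengths.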
Common obstructions, such as walls, pillars, home appliances, etc., may greatly reduce range, but this also helps minimize interference between different networks in crowded environments. The range of an access point is about 20 m (66 ft) indoors, while some access points claim up to a 150 m (490 ft) range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves or as large as many square kilometers using multiple overlapping access points with roaming permitted between them. Over time, the speed and spectral efficiency of Wi-Fi have increased. As of 2019, some versions of Wi-Fi, running on suitable hardware at close range, can achieve speeds of 9.6 Gbit/s (gigabit per second). == History == A 1985 ruling by the U.S. Federal Communications Commission released parts of the ISM bands for unlicensed use for communications. These frequency bands include the same 2.4 GHz bands used by equipment such as microwave ovens, and are thus subject to interference. In 1991 in Nieuwegein, the Netherlands, the NCR Corporation and AT&T invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. NCR's Vic Hayes, who held the chair of IEEE 802.11 for ten years, along with Bell Labs engineer Bruce Tuch, approached the Institute of Electrical and Electronics Engineers (IEEE) to create a standard and were involved in designing the initial 802.11b and 802.11a specifications within the IEEE. They have both been subsequently inducted into the Wi-Fi NOW Hall of Fame. In 1989 in Australia, a team of scientists began working on wireless LAN technology. A prototype test bed for a wireless local area network (WLAN) was developed in 1992 by a team of researchers from the Radiophysics Division of the CSIRO (Commonwealth Scientific and Industrial Research Organisation) in Australia, led by John O'Sullivan. A patent for Wi-Fi was lodged by the CSIRO in 1992. 
The first version of the 802.11 protocol was released in 1997, and provided up to 2 Mbit/s link speeds. This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most IEEE 802.11 products are sold. The major commercial breakthrough came with Apple Inc. adopting Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort. This was in collaboration with the same group that helped create the standard: Vic Hayes, Bruce Tuch, Cees Links, Rich McGinn, and others from Lucent. In 2000, Radiata, a group of Australian scientists connected to the CSIRO, were the first to use the 802.11a standard on chips connected to a Wi-Fi network. Wi-Fi uses a large number of patents held by multiple different organizations. Australia, the United States and the Netherlands simultaneously claim the invention of Wi-Fi, and a consensus has not been reached globally. In 2009, the Australian CSIRO was awarded $200 million after a patent settlement with 14 technology companies, with a further $220 million awarded in 2012 after legal proceedings with 23 companies. In 2016, the CSIRO's WLAN prototype test bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects held in the National Museum of Australia. == Etymology and terminology == The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." According to Phil Belanger, a founding member of the Wi-Fi Alliance, the term Wi-Fi was chosen from a list of ten names that Interbrand proposed. Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. 
The name is often written as WiFi, Wifi, or wifi, but these are not approved by the Wi-Fi Alliance. The name Wi-Fi is not short-form for 'Wireless Fidelity', although the Wi-Fi Alliance did use the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, and the Wi-Fi Alliance was also called the "Wireless Fidelity Alliance Inc." in some publications. IEEE is a separate, but related, organization and their website has stated "WiFi is a short name for Wireless Fidelity". The name Wi-Fi was partly chosen because it sounds similar to Hi-Fi, which consumers take to mean high fidelity or high quality. Interbrand hoped consumers would find the name catchy, and that they would assume this wireless protocol has high fidelity because of its name. Other technologies intended for fixed points, including Motorola Canopy, are usually called fixed wireless. Alternative wireless technologies include Zigbee, Z-Wave, Bluetooth and mobile phone standards. To connect to a Wi-Fi LAN, a computer must be equipped with a wireless network interface controller. The combination of a computer and an interface controller is called a station. Stations are identified by one or more MAC addresses. Wi-Fi nodes often operate in infrastructure mode in which all communications go through a base station. Ad hoc mode refers to devices communicating directly with each other, without communicating with an access point. A service set is the set of all the devices associated with a particular Wi-Fi network. Devices in a service set need not be on the same wavebands or channels. A service set can be local, independent, extended, mesh, or a combination. Each service set has an associated identifier, a 32-byte service set identifier (SSID), which identifies the network. The SSID is configured within the devices that are part of the network. 
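Because the SSID is a 32-byte (octet) field rather than a 32-character string, network names using multi-byte UTF-8 characters reach the limit sooner. A hypothetical validation helper illustrating this (the function name is not from any API):

```python
def is_valid_ssid(name: str) -> bool:
    """True if the name fits in the 32-octet SSID field.
    The empty string is excluded here for simplicity, although
    802.11 treats a zero-length SSID as a wildcard."""
    return 0 < len(name.encode("utf-8")) <= 32

print(is_valid_ssid("HomeNetwork"))  # True
print(is_valid_ssid("x" * 33))       # False: 33 octets
print(is_valid_ssid("café" * 7))     # False: 35 octets ("é" is 2 bytes)
```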
A basic service set (BSS) is a group of stations that share the same wireless channel, SSID, and other settings that have wirelessly connected, usually to the same access point. Each BSS is identified by a MAC address called the BSSID. == Certification == The IEEE does not test equipment for compliance with their standards. The Wi-Fi Alliance was formed in 1999 to establish and enforce standards for interoperability and backward compatibility, and to promote wireless local-area-network technology. The Wi-Fi Alliance restricts the use of the Wi-Fi brand to technologies based on the IEEE 802.11 standards from the IEEE. Manufacturers with membership in the Wi-Fi Alliance, whose products pass the certification process, gain the right to mark those products with the Wi-Fi logo. Specifically, the certification process requires conformance to the IEEE 802.11 radio standards, the WPA and WPA2 security standards, and the EAP authentication standard. Certification may optionally include tests of IEEE 802.11 draft standards, interaction with cellular-phone technology in converged devices, and features relating to security set-up, multimedia, and power-saving. Not every Wi-Fi device is submitted for certification. The lack of Wi-Fi certification does not necessarily imply that a device is incompatible with other Wi-Fi devices. The Wi-Fi Alliance may or may not sanction derivative terms, such as Super Wi-Fi, coined by the US Federal Communications Commission (FCC) to describe proposed networking in the UHF TV band in the US. == Versions and generations == Equipment frequently supports multiple versions of Wi-Fi. To communicate, devices must use a common Wi-Fi version. The versions differ between the radio wavebands they operate on, the radio bandwidth they occupy, the maximum data rates they can support and other details. Some versions permit the use of multiple antennas, which permits greater speeds as well as reduced interference. 
Historically, the equipment listed the versions of Wi-Fi supported using the name of the IEEE standards. In 2018, the Wi-Fi Alliance introduced simplified Wi-Fi generational numbering to indicate equipment that supports Wi-Fi 4 (802.11n), Wi-Fi 5 (802.11ac) and Wi-Fi 6 (802.11ax). These generations have a high degree of backward compatibility with previous versions. The alliance has stated that the generational level 4, 5, or 6 can be indicated in the user interface when connected, along with the signal strength. The most important standards affecting Wi-Fi are: 802.11a, 802.11b, 802.11g, 802.11n (Wi-Fi 4), 802.11h, 802.11i, 802.11-2007, 802.11-2012, 802.11ac (Wi-Fi 5), 802.11ad, 802.11af, 802.11-2016, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax (Wi-Fi 6), 802.11ay. == Uses == === Internet === Wi-Fi technology may be used to provide local network and Internet access to devices that are within Wi-Fi range of one or more routers that are connected to the Internet. The coverage of one or more interconnected access points can extend from an area as small as a few rooms to as large as many square kilometres. Coverage in the larger area may require a group of access points with overlapping coverage. For example, public outdoor Wi-Fi technology has been used successfully in wireless mesh networks in London. An international example is Fon. Wi-Fi provides services in private homes, businesses, as well as in public spaces. Wi-Fi hotspots may be set up either free of charge or commercially, often using a captive portal webpage for access. Organizations, enthusiasts, authorities and businesses, such as airports, hotels, and restaurants, often provide free or paid-use hotspots to attract customers, to provide services to promote business in selected areas. Routers often incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point, and are frequently set up in homes and other buildings to provide Internet access for the structure. 
Similarly, battery-powered routers may include a mobile broadband modem and a Wi-Fi access point. When subscribed to a cellular data carrier, they allow nearby Wi-Fi stations to access the Internet. A number of smartphones have a built-in mobile hotspot capability of this sort, though carriers often disable the feature, or charge a separate fee to enable it. Standalone devices such as MiFi- and WiBro-branded devices provide the capability. Some laptops that have a cellular modem card can also act as mobile Internet Wi-Fi access points. Multiple traditional university campuses in the developed world provide at least partial Wi-Fi coverage. Carnegie Mellon University built the first campus-wide wireless Internet network, called Wireless Andrew, at its Pittsburgh campus in 1993 before Wi-Fi branding existed. A number of universities collaborate in providing Wi-Fi access to students and staff through the Eduroam international authentication infrastructure. === City-wide === In the early 2000s, multiple cities around the world announced plans to construct citywide Wi-Fi networks. There are a number of successful examples; in 2004, Mysore (Mysuru) became India's first Wi-Fi-enabled city. A company called WiFiyNet has set up hotspots in Mysore, covering the whole city and a few nearby villages. In 2005, St. Cloud, Florida and Sunnyvale, California, became the first cities in the United States to offer citywide free Wi-Fi (from MetroFi). Minneapolis has generated $1.2 million in profit annually for its provider. In May 2010, the then London mayor Boris Johnson pledged to have London-wide Wi-Fi by 2012. Several boroughs including Westminster and Islington already had extensive outdoor Wi-Fi coverage at that point. New York City announced a city-wide campaign to convert old phone booths into digital kiosks in 2014. The project, titled LinkNYC, has created a network of kiosks that serve as public Wi-Fi hotspots, high-definition screens and landlines. 
Installation of the screens began in late 2015. The city government plans to implement more than seven thousand kiosks over time, eventually making LinkNYC the largest and fastest public, government-operated Wi-Fi network in the world. The UK has planned a similar project across major cities of the country, with the project's first implementation in the London Borough of Camden. Officials in South Korea's capital Seoul were moving to provide free Internet access at more than 10,000 locations around the city, including outdoor public spaces, major streets, and densely populated residential areas. Seoul was planning to grant leases to KT, LG Telecom, and SK Telecom. The companies were supposed to invest $44 million in the project, which was to be completed in 2015. === Geolocation === Wi-Fi positioning systems use known positions of Wi-Fi hotspots to identify a device's location. It is used when GPS is not suitable due to issues such as signal interference or slow satellite acquisition. This includes assisted GPS, urban hotspot databases, and indoor positioning systems. Wi-Fi positioning relies on measuring signal strength (RSSI) and fingerprinting. Parameters like SSID and MAC address are crucial for identifying access points. The accuracy depends on the number of nearby access points in the database. Signal fluctuations can cause errors, which can be reduced with noise-filtering techniques. For low precision, integrating Wi-Fi data with geographical and time information has been proposed. The Wi-Fi RTT capability introduced in IEEE 802.11mc allows for positioning based on round trip time measurement, an improvement over the RSSI method. The IEEE 802.11az standard promises further improvements in geolocation accuracy. === Motion detection === Wi-Fi sensing is used in applications such as motion detection and gesture recognition. 
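The RSSI-based ranging used in Wi-Fi positioning can be sketched with the standard log-distance path-loss model. The reference power and path-loss exponent below are illustrative assumptions, not values from any particular positioning system; in practice such estimates are combined with fingerprinting against a database of known access points, as described above.

```python
def estimate_distance(rssi_dbm: float,
                      rssi_at_1m_dbm: float = -40.0,   # assumed reference power
                      path_loss_exponent: float = 3.0  # assumed indoor environment
                      ) -> float:
    """Rough distance in metres from a received signal strength,
    using the log-distance path-loss model."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(estimate_distance(-40.0))  # 1.0 m at the reference power
print(estimate_distance(-70.0))  # 10.0 m: 30 dB weaker is ~10x the distance here
```

The signal fluctuations mentioned above translate directly into distance error, which is why noise filtering and round-trip-time methods (802.11mc) improve on plain RSSI.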
== Operational principles == Wi-Fi stations communicate by sending each other data packets, blocks of data individually sent and delivered over radio on various channels. As with all radio, this is done by the modulation and demodulation of carrier waves. Different versions of Wi-Fi use different techniques: 802.11b uses direct-sequence spread spectrum on a single carrier, whereas 802.11a, Wi-Fi 4, 5 and 6 use orthogonal frequency-division multiplexing. Channels are used half duplex and can be time-shared by multiple networks. Any packet sent by one computer is locally received by stations tuned to that channel, even if that information is intended for just one destination. Stations typically ignore information not addressed to them. The use of the same channel also means that the data bandwidth is shared, so for example, available throughput to each device is halved when two stations are actively transmitting. As with other IEEE 802 LANs, stations come programmed with a globally unique 48-bit MAC address. The MAC addresses are used to specify both the destination and the source of each data packet. On the reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A scheme known as carrier-sense multiple access with collision avoidance (CSMA/CA) governs the way stations share channels. With CSMA/CA, stations attempt to avoid collisions by beginning transmission only after the channel is sensed to be idle, but then transmit their packet data in its entirety. CSMA/CA cannot completely prevent collisions, as two stations may sense the channel to be idle at the same time and thus begin transmission simultaneously. A collision happens when a station receives signals from multiple stations on a channel at the same time. This corrupts the transmitted data and can require stations to re-transmit. The lost data and re-transmission reduces throughput, in some cases severely. 
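The CSMA/CA behaviour described above (listen first, back off while the channel is busy, then send the whole packet) can be illustrated with a toy model. This is a simplification for illustration only, not the real 802.11 DCF state machine:

```python
import random

def try_transmit(channel_is_idle, max_slots: int = 15) -> int:
    """Sense the channel; while it is busy, wait a random backoff.
    Returns the number of backoff slots waited before transmitting.
    (Real 802.11 uses binary exponential backoff; the uniform random
    wait here is an assumption made for brevity.)"""
    waited = 0
    while not channel_is_idle():
        waited += random.randint(1, max_slots)
    return waited

# A channel that is sensed busy twice, then idle:
states = iter([False, False, True])
slots = try_transmit(lambda: next(states))
print(slots >= 2)  # True: at least one slot waited per busy sensing
```

Even with this scheme, two stations can sense the channel idle at the same moment and collide, which is why collisions are reduced but not eliminated.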
=== Waveband === The 802.11 standard provides several distinct radio frequency ranges for use in Wi-Fi communications: 900 MHz, 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, 6 GHz and 60 GHz bands. Each range is divided into a multitude of channels. In the standards, channels are numbered at 5 MHz spacing within a band (except in the 60 GHz band, where they are 2.16 GHz apart), and the number refers to the centre frequency of the channel. Although channels are numbered at 5 MHz spacing, transmitters generally occupy at least 20 MHz, and standards allow for neighbouring channels to be bonded together to form a wider channel for higher throughput. Countries apply their own regulations to the allowable channels, allowed users and maximum power levels within these frequency ranges. 802.11b/g/n can use the 2.4 GHz band, operating in the United States under FCC Part 15 rules and regulations. In this frequency band, equipment may occasionally suffer interference from microwave ovens, cordless telephones, USB 3.0 hubs, Bluetooth and other devices. Spectrum assignments and operational limitations are not consistent worldwide: Australia and Europe allow for an additional two channels (12, 13) beyond the 11 permitted in the United States for the 2.4 GHz band, while Japan has three more (12–14). 802.11a/h/j/n/ac/ax can use the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping 20 MHz channels. This is in contrast to the 2.4 GHz frequency band where the channels are only 5 MHz wide. In general, lower frequencies have longer range but have less capacity. The 5 GHz bands are absorbed to a greater degree by common building materials than the 2.4 GHz bands and usually give a shorter range. As 802.11 specifications evolved to support higher throughput, the protocols have become much more efficient in their bandwidth use. 
Additionally, they have gained the ability to aggregate channels together to gain still more throughput where the bandwidth for additional channels is available. 802.11n allows for double radio spectrum bandwidth (40 MHz) per channel compared to 802.11a or 802.11g (20 MHz). 802.11n can be set to limit itself to 20 MHz bandwidth to prevent interference in dense communities. In the 5 GHz band, 20 MHz, 40 MHz, 80 MHz, and 160 MHz channels are permitted with some restrictions, giving much faster connections. === Communication stack === Wi-Fi is part of the IEEE 802 protocol family. The data is organized into 802.11 frames that are very similar to Ethernet frames at the data link layer, but with extra address fields. MAC addresses are used as network addresses for routing over the LAN. Wi-Fi's MAC and physical layer (PHY) specifications are defined by IEEE 802.11 for modulating and receiving one or more carrier waves to transmit the data in the infrared, and 2.4, 3.6, 5, 6, or 60 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had many subsequent amendments. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand. While each amendment is officially revoked when incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote capabilities of their products. As a result, in the market place, each revision tends to become its own standard. In addition to 802.11, the IEEE 802 protocol family has specific provisions for Wi-Fi. These are required because Ethernet's cable-based media are not usually shared, whereas with wireless all transmissions are received by all stations within the range that employ that radio channel. While Ethernet has essentially negligible error rates, wireless communication media are subject to significant interference. 
Accurate transmission is therefore not guaranteed, so delivery is a best-effort mechanism. Because of this, for Wi-Fi, the Logical Link Control (LLC) specified by IEEE 802.2 employs Wi-Fi's media access control (MAC) protocols to manage retries without relying on higher levels of the protocol stack. For internetworking purposes, Wi-Fi is usually layered as a link layer below the internet layer of the Internet Protocol. This means that nodes have an associated internet address and, with suitable connectivity, this allows full Internet access. === Modes === ==== Infrastructure ==== In infrastructure mode, which is the most common mode used, all communications go through a base station. For communications within the network, this introduces an extra use of the airwaves but has the advantage that any two stations that can communicate with the base station can also communicate through the base station, which limits issues associated with the hidden node problem and simplifies the protocols. ==== Ad hoc and Wi-Fi direct ==== Wi-Fi also allows communications directly from one computer to another without an access point intermediary. This is called ad hoc Wi-Fi transmission. Different types of ad hoc networks exist. In the simplest case, network nodes must talk directly to each other. In more complex protocols, nodes may forward packets, and nodes keep track of how to reach other nodes, even if they move around. Ad hoc mode was first described by Chai Keong Toh in his 1996 patent of wireless ad hoc routing, implemented on Lucent WaveLAN 802.11a wireless on IBM ThinkPads over a multi-node scenario spanning a region of over a mile. The success was recorded in Mobile Computing magazine (1999) and later published formally in IEEE Transactions on Wireless Communications, 2002 and ACM SIGMETRICS Performance Evaluation Review, 2001. 
This wireless ad hoc network mode has proven popular with multiplayer video games on handheld game consoles, such as the Nintendo DS and PlayStation Portable. It is also popular on digital cameras, and other consumer electronics devices. Some devices can also share their Internet connection using ad hoc, becoming hotspots or virtual routers. Similarly, the Wi-Fi Alliance promotes the specification Wi-Fi Direct for file transfers and media sharing through a new discovery and security methodology. Wi-Fi Direct launched in October 2010. Another mode of direct communication over Wi-Fi is Tunneled Direct Link Setup (TDLS), which enables two devices on the same Wi-Fi network to communicate directly, instead of via the access point. === Multiple access points === An Extended Service Set may be formed by deploying multiple access points that are configured with the same SSID and security settings. Wi-Fi client devices typically connect to the access point that can provide the strongest signal within that service set. Increasing the number of Wi-Fi access points for a network provides redundancy, better range, support for fast handover, and increased overall network capacity by using more channels or by defining smaller cells. Except for the smallest implementations (such as home or small office networks), Wi-Fi implementations have moved toward thin access points, with more of the network intelligence housed in a centralized network appliance, relegating individual access points to the role of dumb transceivers. Outdoor applications may use mesh topologies. == Performance == Wi-Fi operational range depends on factors such as the frequency band, modulation technique, transmitter power output, receiver sensitivity, antenna gain and type, and propagation and interference characteristics in the environment. At longer distances, speed is typically reduced. === Transmitter power === Compared to cell phones and similar technology, Wi-Fi transmitters are low-power devices. 
In general, the maximum amount of power that a Wi-Fi device can transmit is limited by local regulations, such as FCC Part 15 in the US. Equivalent isotropically radiated power (EIRP) in the European Union is limited to 20 dBm (100 mW). Wi-Fi, however, has higher power compared to some other standards designed to support wireless personal area network applications. For example, Bluetooth provides a much shorter propagation range of between 1 and 100 metres (1 and 100 yards) and so in general has a lower power consumption. Other low-power technologies such as Zigbee have fairly long range, but much lower data rate. The high power consumption of Wi-Fi makes battery life in some mobile devices a concern. === Antenna === An access point compliant with either 802.11b or 802.11g, using the stock omnidirectional antenna, might have a range of 0.1 km. The same radio with an external semi-parabolic antenna (15 dB gain) with a similarly equipped receiver at the far end might have a range of over 32 km. Higher gain rating (dBi) indicates further deviation (generally toward the horizontal) from a theoretical, perfect isotropic radiator, and therefore the antenna can project or accept a usable signal further in particular directions, as compared to a similar output power on a more isotropic antenna. For example, an 8 dBi antenna used with a 100 mW driver has a similar horizontal range to a 6 dBi antenna being driven at 500 mW. This assumes that radiation in the vertical is lost; this may not be the case in some situations, especially in large buildings or within a waveguide. In the above example, a directional waveguide could cause the low-power 6 dBi antenna to project much further in a single direction than the 8 dBi antenna, which is not in a waveguide, even if they are both driven at 100 mW. On wireless routers with detachable antennas, it is possible to improve range by fitting upgraded antennas that provide a higher gain in particular directions. 
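The power figures above are easy to relate: dBm is decibels referenced to 1 mW, and EIRP is, to first order, transmitter power in dBm plus antenna gain in dBi. A small illustrative sketch (helper names are hypothetical, and cable losses are ignored):

```python
def dbm_to_mw(dbm: float) -> float:
    """Convert a power level in dBm to milliwatts."""
    return 10 ** (dbm / 10)

def eirp_dbm(tx_power_dbm: float, antenna_gain_dbi: float) -> float:
    """Simplified EIRP: transmit power plus antenna gain,
    ignoring cable and connector losses."""
    return tx_power_dbm + antenna_gain_dbi

print(dbm_to_mw(20))    # 100.0 -- the EU 20 dBm EIRP limit equals 100 mW
print(eirp_dbm(14, 6))  # 20 -- a ~25 mW driver with a 6 dBi antenna hits the limit
```

This is why fitting a higher-gain antenna may require turning the transmitter power down to stay within a regulatory EIRP limit.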
Outdoor ranges can be improved to many kilometres through the use of high gain directional antennas at the router and remote device(s). === MIMO (multiple-input and multiple-output) === Wi-Fi 4 and higher standards allow devices to have multiple antennas on transmitters and receivers. Multiple antennas enable the equipment to exploit multipath propagation on the same frequency bands giving much higher speeds and longer range. Wi-Fi 4 can more than double the range over previous standards. The Wi-Fi 5 standard uses the 5 GHz band exclusively, and is capable of multi-station WLAN throughput of at least 1 gigabit per second, and a single station throughput of at least 500 Mbit/s. As of the first quarter of 2016, the Wi-Fi Alliance certifies devices compliant with the 802.11ac standard as "Wi-Fi CERTIFIED ac". This standard uses several signal processing techniques such as multi-user MIMO and 4×4 spatial multiplexing streams, and wide channel bandwidth (160 MHz) to achieve its gigabit throughput. According to a study by IHS Technology, 70% of all access point sales revenue in the first quarter of 2016 came from 802.11ac devices. === Radio propagation === With Wi-Fi signals, line-of-sight usually works best, but signals can transmit, absorb, reflect, refract, diffract and up and down fade through and around structures, both man-made and natural. Wi-Fi signals are very strongly affected by metallic structures (including rebar in concrete, low-e coatings in glazing), rock structures (including marble) and water (such as found in vegetation). Due to the complex nature of radio propagation at typical Wi-Fi frequencies, particularly around trees and buildings, algorithms can only approximately predict Wi-Fi signal strength for any given area in relation to a transmitter. This effect does not apply equally to long-range Wi-Fi, since longer links typically operate from towers that transmit above the surrounding foliage. 
Mobile use of Wi-Fi over wider ranges is limited, for instance, to uses such as in an automobile moving from one hotspot to another. Other wireless technologies are more suitable for communicating with moving vehicles. ==== Distance records ==== Distance records (using non-standard devices) include 382 km (237 mi) in June 2007, held by Ermanno Pietrosemoli and EsLaRed of Venezuela, transferring about 3 MB of data between the mountain-tops of El Águila and Platillon. The Swedish National Space Agency transferred data 420 km (260 mi), using 6-watt amplifiers to reach an overhead stratospheric balloon. === Interference === Wi-Fi connections can be disrupted, or the Internet speed lowered, by other devices operating in the same area. Wi-Fi protocols are designed to share the wavebands reasonably fairly, and this often works with little to no disruption. To minimize collisions with Wi-Fi and non-Wi-Fi devices, Wi-Fi employs carrier-sense multiple access with collision avoidance (CSMA/CA), where transmitters listen before transmitting and delay transmission of packets if they detect that other devices are active on the channel, or if noise is detected from adjacent channels or non-Wi-Fi sources. Nevertheless, Wi-Fi networks are still susceptible to the hidden node and exposed node problems. A standard-speed Wi-Fi signal occupies five channels in the 2.4 GHz band. Interference can be caused by overlapping channels. Any two channel numbers that differ by five or more, such as 2 and 7, do not overlap (no adjacent-channel interference). The oft-repeated adage that channels 1, 6, and 11 are the only non-overlapping channels is, therefore, not accurate; they are, however, the only group of three non-overlapping channels in North America. Whether the overlap is significant depends on physical spacing.
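The five-channel rule above can be stated directly: two 2.4 GHz channels are non-overlapping when their channel numbers differ by five or more. The helper below is a sketch of that rule, together with the standard 2407 + 5 × n MHz centre-frequency spacing for channels 1–13:

```python
def channel_center_mhz(channel):
    """Centre frequency of a 2.4 GHz band Wi-Fi channel (channels 1-13)."""
    return 2407 + 5 * channel

def channels_overlap(a, b):
    """Per the rule above: two channels overlap unless they differ by 5 or more."""
    return abs(a - b) < 5

print(channels_overlap(2, 7))  # False: five apart, no adjacent-channel interference
print(channels_overlap(1, 4))  # True: only three apart
# Channels 1, 6, and 11 are mutually non-overlapping, as the text notes:
print(all(not channels_overlap(a, b)
          for a, b in [(1, 6), (6, 11), (1, 11)]))  # True
```

The 1/5/9/13 plan used in Europe and Japan spaces channels only four apart, which this strict rule still flags as overlapping; as the following paragraph explains, that residual overlap is negligible in practice when transmitters are a few metres apart.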
Channels that are four apart interfere a negligible amount – much less than reusing channels (which causes co-channel interference) – if transmitters are at least a few metres apart. In Europe and Japan, where channel 13 is available, using channels 1, 5, 9, and 13 for 802.11g and 802.11n is viable and recommended. However, many 2.4 GHz 802.11b and 802.11g access points default to the same channel on initial startup, contributing to congestion on certain channels. Wi-Fi pollution, or an excessive number of access points in the area, can prevent access and interfere with other devices' use of other access points, as well as decreasing the signal-to-noise ratio (SNR) between access points. These issues can become a problem in high-density areas, such as large apartment complexes or office buildings with multiple Wi-Fi access points. Other devices use the 2.4 GHz band: microwave ovens, ISM band devices, security cameras, Zigbee devices, Bluetooth devices, video senders, cordless phones, baby monitors, and, in some countries, amateur radio, all of which can cause significant additional interference. Interference is also an issue when municipalities or other large entities (such as universities) seek to provide large-area coverage. On some 5 GHz bands, interference from radar systems can occur in some places. Base stations that support those bands employ Dynamic Frequency Selection, which listens for radar and, if radar is found, does not permit a network to operate on that band. These bands can be used by low-power transmitters without a licence, and with few restrictions. However, while unintended interference is common, users who have been found to cause deliberate interference (particularly for attempting to locally monopolize these bands for commercial purposes) have been issued large fines. === Throughput === Various layer-2 variants of IEEE 802.11 have different characteristics.
Across all flavours of 802.11, maximum achievable throughputs are given either based on measurements under ideal conditions or in the layer-2 data rates. This, however, does not apply to typical deployments in which data are transferred between two endpoints, of which at least one is typically connected to a wired infrastructure and the other is connected to an infrastructure via a wireless link. This means that, typically, data frames pass an 802.11 (WLAN) medium and are converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the packet size of an application determines the speed of the data transfer. This means that an application that uses small packets (e.g. VoIP) creates a data flow with high overhead traffic (low goodput). Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e. the data rate) and the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices. The same references apply to the attached throughput graphs, which show measurements of UDP throughput. Each represents an average throughput of 25 measurements (the error bars are there, but barely visible due to the small variation), is with a specific packet size (small or large), and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. This text and measurements do not cover packet errors, but information about this can be found at the above references. The table below shows the maximum achievable (application-specific) UDP throughput in the same scenarios (same references again) with various WLAN (802.11) flavours. The measurement hosts were 25 metres apart from each other; loss is again ignored.
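The effect of packet size on goodput described above can be illustrated with a toy efficiency model. The 60-byte per-packet overhead used here is a hypothetical round figure standing in for the combined framing and header cost, not a value from the measurements referenced in the text:

```python
def goodput_fraction(payload_bytes, overhead_bytes=60):
    """Fraction of transferred bytes that are application payload (toy model).

    overhead_bytes is an assumed fixed per-packet cost (headers, framing);
    real 802.11 overhead also includes ACKs, preambles, and contention time.
    """
    return payload_bytes / (payload_bytes + overhead_bytes)

# A small VoIP-sized payload spends a large share of the airtime on overhead ...
print(round(goodput_fraction(160), 2))   # 0.73
# ... while a large payload amortises the fixed per-packet cost.
print(round(goodput_fraction(1460), 2))  # 0.96
```

Even this crude model shows why small-packet applications see low goodput: the fixed per-packet overhead dominates when payloads shrink.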
== Hardware == Wi-Fi allows wireless deployment of local area networks (LANs). Also, spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs. However, building walls of certain materials, such as stone with high metal content, can block Wi-Fi signals. A Wi-Fi device is a short-range wireless device. Wi-Fi devices are fabricated on RF CMOS integrated circuit (RF circuit) chips. Since the early 2000s, manufacturers have been building wireless network adapters into most laptops. The price of chipsets for Wi-Fi continues to drop, making it an economical networking option included in ever more devices. Different competing brands of access points and client network interfaces can interoperate at a basic level of service. Products designated as "Wi-Fi Certified" by the Wi-Fi Alliance are backward compatible. Unlike mobile phones, any standard Wi-Fi device works anywhere in the world. === Access point === A wireless access point (WAP) connects a group of wireless devices to an adjacent wired LAN. An access point resembles a network hub, relaying data between connected wireless devices in addition to a (usually) single connected wired device, most often an Ethernet hub or switch, allowing wireless devices to communicate with other wired devices. === Wireless adapter === Wireless adapters allow devices to connect to a wireless network. These adapters connect to devices using various external or internal interconnects such as mini PCIe (mPCIe, M.2), USB, and ExpressCard, and previously PCI, CardBus, and PC Card. As of 2010, most newer laptop computers come equipped with built-in internal adapters. === Router === Wireless routers integrate a wireless access point, an Ethernet switch, and internal router firmware that provides IP routing, NAT, and DNS forwarding through an integrated WAN interface.
A wireless router allows wired and wireless Ethernet LAN devices to connect to a (usually) single WAN device such as a cable modem, DSL modem, or optical modem. A wireless router allows all of these components, mainly the access point and router, to be configured through one central utility. This utility is usually an integrated web server that is accessible to wired and wireless LAN clients and often, optionally, to WAN clients. This utility may also be an application run on a computer, as is the case with Apple's AirPort, which is managed with the AirPort Utility on macOS and iOS. === Bridge === Wireless network bridges can connect two networks to form a single network at the data-link layer over Wi-Fi. The main standard is the wireless distribution system (WDS). Wireless bridging can connect a wired network to a wireless network. A bridge differs from an access point: an access point typically connects wireless devices to one wired network. Two wireless bridge devices may be used to connect two wired networks over a wireless link, which is useful in situations where a wired connection may be unavailable, such as between two separate homes, or for devices that have wired but no wireless networking capability, such as consumer entertainment devices. Alternatively, a wireless bridge can be used to let a device that supports a wired connection operate at a faster wireless networking standard than its own wireless connectivity feature (external dongle or inbuilt) supports, for example enabling Wireless-N speeds (up to the maximum supported speed on the wired Ethernet port on both the bridge and connected devices, including the wireless access point) for a device that only supports Wireless-G. A dual-band wireless bridge can also be used to enable 5 GHz wireless network operation on a device that only supports 2.4 GHz wireless and has a wired Ethernet port.
=== Repeater === Wireless range extenders or wireless repeaters can extend the range of an existing wireless network. Strategically placed range extenders can elongate a signal area or allow the signal area to reach around barriers, such as those in L-shaped corridors. Wireless devices connected through repeaters suffer increased latency for each hop, and there may be a reduction in the maximum available data throughput. In addition, additional users on a network employing wireless range extenders consume the available bandwidth faster than a single user migrating around the same network would. For this reason, wireless range extenders work best in networks with low traffic throughput requirements, such as when a single user with a Wi-Fi-equipped tablet migrates around the combined extended and non-extended portions of the total connected network. Also, a wireless device connected to any of the repeaters in the chain has data throughput limited by the "weakest link" in the chain between the connection origin and connection end. Networks using wireless extenders are more prone to degradation from interference from neighbouring access points that border portions of the extended network and that happen to occupy the same channel as the extended network. === Embedded systems === The security standard Wi-Fi Protected Setup allows embedded devices with a limited graphical user interface to connect to the Internet with ease. Wi-Fi Protected Setup has two configurations: the push-button configuration and the PIN configuration. These embedded devices, also part of the Internet of things, are low-power, battery-operated embedded systems. Several Wi-Fi manufacturers design chips and modules for embedded Wi-Fi, such as GainSpan.
Since around 2007, embedded Wi-Fi modules have become available that incorporate a real-time operating system and provide a simple means of wirelessly enabling any device that can communicate via a serial port. This allows the design of simple monitoring devices. An example is a portable ECG device monitoring a patient at home. This Wi-Fi-enabled device can communicate via the Internet. These Wi-Fi modules are designed by OEMs so that implementers need only minimal Wi-Fi knowledge to provide Wi-Fi connectivity for their products. In June 2014, Texas Instruments introduced the first ARM Cortex-M4 microcontroller with an onboard dedicated Wi-Fi MCU, the SimpleLink CC3200. This makes it possible to build embedded systems with Wi-Fi connectivity as single-chip devices, which reduces their cost and minimum size, making it more practical to build wireless-networked controllers into inexpensive ordinary objects. == Security == The main issue with wireless network security is its simplified access to the network compared to traditional wired networks such as Ethernet. With wired networking, one must either gain access to a building (physically connecting into the internal network) or break through an external firewall. To access Wi-Fi, one must merely be within range of the Wi-Fi network. Most business networks protect sensitive data and systems by attempting to disallow external access. Enabling wireless connectivity reduces security if the network uses inadequate or no encryption. An attacker who has gained access to a Wi-Fi network router can initiate a DNS spoofing attack against any other user of the network by forging a response before the queried DNS server has a chance to reply. === Securing methods === A common measure to deter unauthorized users involves hiding the access point's name by disabling the SSID broadcast.
While effective against the casual user, it is ineffective as a security method because the SSID is broadcast in the clear in response to a client SSID query. Another method is to allow only computers with known MAC addresses to join the network, but determined eavesdroppers may be able to join the network by spoofing an authorized address. Wired Equivalent Privacy (WEP) encryption was designed to protect against casual snooping, but it is no longer considered secure. Tools such as AirSnort or Aircrack-ng can quickly recover WEP encryption keys. Because of WEP's weaknesses, the Wi-Fi Alliance approved Wi-Fi Protected Access (WPA), which uses TKIP. WPA was specifically designed to work with older equipment, usually through a firmware upgrade. Though more secure than WEP, WPA has known vulnerabilities. The more secure WPA2, using the Advanced Encryption Standard, was introduced in 2004 and is supported by most new Wi-Fi devices. WPA2 is fully compatible with WPA. In 2017, a flaw in the WPA2 protocol was discovered, allowing a key reinstallation attack, known as KRACK. A flaw in a feature added to Wi-Fi in 2007, called Wi-Fi Protected Setup (WPS), let WPA and WPA2 security be bypassed. The only remedy as of 2011 was to turn off Wi-Fi Protected Setup, which is not always possible. Virtual private networks can be used to improve the confidentiality of data carried through Wi-Fi networks, especially public Wi-Fi networks. A URI using the WIFI scheme can specify the SSID, encryption type, password/passphrase, and whether the SSID is hidden, so users can follow links from QR codes, for instance, to join networks without having to enter the data manually. A MeCard-like format is supported by Android and iOS 11+.
Common format: WIFI:S:<SSID>;T:<WEP|WPA|blank>;P:<PASSWORD>;H:<true|false|blank>;
Sample: WIFI:S:MySSID;T:WPA;P:MyPassW0rd;;
=== Data security risks === Wi-Fi access points typically default to an encryption-free (open) mode.
Novice users benefit from a zero-configuration device that works out of the box, but this default does not enable any wireless security, providing open wireless access to a LAN. To turn security on requires the user to configure the device, usually via a software graphical user interface (GUI). On unencrypted Wi-Fi networks, connecting devices can monitor and record data (including personal information). Such networks can only be secured by using other means of protection, such as a VPN or Hypertext Transfer Protocol over Transport Layer Security (HTTPS). The older wireless-encryption standard, Wired Equivalent Privacy (WEP), has been shown to be easily breakable even when correctly configured. Wi-Fi Protected Access (WPA) encryption, which became available in devices in 2003, aimed to solve this problem. Wi-Fi Protected Access 2 (WPA2), ratified in 2004, is considered secure, provided a strong passphrase is used. The 2003 version of WPA has not been considered secure since it was superseded by WPA2 in 2004. In 2018, WPA3 was announced as a replacement for WPA2, increasing security; it rolled out on 26 June of that year. === Piggybacking === Piggybacking refers to access to a wireless Internet connection by bringing one's computer within the range of another's wireless connection and using that service without the subscriber's explicit permission or knowledge. During the early popular adoption of 802.11, providing open access points for anyone within range to use was encouraged to cultivate wireless community networks, particularly since people on average use only a fraction of their downstream bandwidth at any given time. Recreational logging and mapping of other people's access points has become known as wardriving. Indeed, many access points are intentionally installed without security turned on so that they can be used as a free service. Providing access to one's Internet connection in this fashion may breach the Terms of Service or contract with the ISP.
These activities do not result in sanctions in most jurisdictions; however, legislation and case law differ considerably across the world. A proposal to leave graffiti describing available services was called warchalking. Piggybacking often occurs unintentionally – a technically unfamiliar user might not change the default "unsecured" settings of their access point, and operating systems can be configured to connect automatically to any available wireless network. A user who happens to start up a laptop in the vicinity of an access point may find the computer has joined the network without any visible indication. Moreover, a user intending to join one network may instead end up on another one if the latter has a stronger signal. In combination with automatic discovery of other network resources (see DHCP and Zeroconf), this could lead wireless users to send sensitive data to the wrong middle-man when seeking a destination (see man-in-the-middle attack). For example, a user could inadvertently use an unsecured network to log into a website, thereby making the login credentials available to anyone listening, if the website uses an insecure protocol such as plain HTTP without TLS. On an unsecured access point, an unauthorized user can obtain security information (factory-preset passphrase or Wi-Fi Protected Setup PIN) from a label on a wireless access point and use this information (or connect by the Wi-Fi Protected Setup pushbutton method) to commit unauthorized or unlawful activities. == Societal aspects == Wireless Internet access has become much more embedded in society. It has thus changed how society functions in a number of ways. === Influence on developing countries === As of 2017, over half the world's population did not have access to the Internet, prominently in rural areas of developing nations. Technology that has been implemented in more developed nations is often costly and energy-inefficient.
This has led developing nations to use more low-tech networks, frequently implementing renewable power sources that can be maintained solely through solar power, creating networks that are resistant to disruptions such as power outages. For instance, in 2007 a 450-kilometre (280 mi) network between Cabo Pantoja and Iquitos in Peru was erected in which all equipment is powered only by solar panels. These long-range Wi-Fi networks have two main uses: offering Internet access to populations in isolated villages, and providing healthcare to isolated communities. In the case of the latter, the network connects the central hospital in Iquitos to 15 medical outposts intended for remote diagnosis. === Work habits === Access to Wi-Fi in public spaces such as cafes or parks allows people, in particular freelancers, to work remotely. While the accessibility of Wi-Fi is the strongest factor when choosing a place to work (75% of people would choose a place that provides Wi-Fi over one that does not), other factors influence the choice of a specific hotspot. These vary from the accessibility of other resources, like books, to the location of the workplace and the social aspect of meeting other people in the same place. Moreover, the increase in people working from public places results in more customers for local businesses, thus providing an economic stimulus to the area. Additionally, the same study noted that a wireless connection provides more freedom of movement while working. Whether working at home or at the office, it allows movement between different rooms or areas. In some offices (notably Cisco offices in New York), employees do not have assigned desks but can work from any office by connecting their laptops to a Wi-Fi hotspot. === Housing === The Internet has become an integral part of living. As of 2016, 81.9% of American households have Internet access.
Additionally, 89% of American households with broadband connect via wireless technologies, and 72.9% of American households have Wi-Fi. Wi-Fi networks have also affected how the interiors of homes and hotels are arranged. For instance, architects have described that their clients no longer want only one room as a home office, but want to work near the fireplace or to be able to work in different rooms. This contradicts architects' pre-existing ideas of how the rooms they design will be used. Additionally, some hotels have noted that guests prefer to stay in certain rooms because they receive a stronger Wi-Fi signal there. == Health concerns == The World Health Organization (WHO) says, "no health effects are expected from exposure to RF fields from base stations and wireless networks", but notes that it promotes research into effects from other RF sources. In 2011, the WHO's International Agency for Research on Cancer (IARC) classified radiofrequency electromagnetic fields as possibly carcinogenic to humans, Group 2B (a category used when "a causal association is considered credible, but when chance, bias or confounding cannot be ruled out with reasonable confidence"); this classification was based on risks associated with wireless phone use rather than Wi-Fi networks. The United Kingdom's Health Protection Agency reported in 2007 that a year's exposure to Wi-Fi results in the "same amount of radiation from a 20-minute mobile phone call". A review of studies involving 725 people who claimed electromagnetic hypersensitivity "...suggests that 'electromagnetic hypersensitivity' is unrelated to the presence of an EMF, although more research into this phenomenon is required."
== Alternatives == Several other wireless technologies provide alternatives to Wi-Fi for different use cases:
Bluetooth Low Energy, a low-power variant of Bluetooth
Bluetooth, a short-distance network
Cellular networks, used by smartphones
LoRa, for long-range wireless with a low data rate
NearLink, a short-range wireless technology standard
WiMAX, for providing long-range wireless Internet connectivity
Zigbee, a low-power, low-data-rate, short-distance communication protocol
Some alternatives are "no new wires", re-using existing cable:
G.hn, which uses existing home wiring, such as phone and power lines
Several wired technologies for computer networking provide viable alternatives to Wi-Fi:
Ethernet over twisted pair
== See also == == Explanatory notes == == References == == Further reading == The WNDW Authors (2013). Butler, Jane (ed.). Wireless Networking in the Developing World (Third ed.). CreateSpace Independent Publishing Platform. ISBN 978-1-4840-3935-9.
https://en.wikipedia.org/wiki/Wi-Fi
Haptic technology (also kinaesthetic communication or 3D touch) is technology that can create an experience of touch by applying forces, vibrations, or motions to the user. These technologies can be used to create virtual objects in a computer simulation, to control virtual objects, and to enhance remote control of machines and devices (telerobotics). Haptic devices may incorporate tactile sensors that measure forces exerted by the user on the interface. The word haptic, from the Ancient Greek: ἁπτικός (haptikos), means "tactile, pertaining to the sense of touch". Simple haptic devices are common in the form of game controllers, joysticks, and steering wheels. Haptic technology facilitates investigation of how the human sense of touch works by allowing the creation of controlled haptic virtual objects. Vibrations and other tactile cues have also become an integral part of mobile user experience and interface design. Most researchers distinguish three sensory systems related to sense of touch in humans: cutaneous, kinaesthetic and haptic. All perceptions mediated by cutaneous and kinaesthetic sensibility are referred to as tactual perception. The sense of touch may be classified as passive and active, and the term "haptic" is often associated with active touch to communicate or recognize objects. == History == One of the earliest applications of haptic technology was in large aircraft that use servomechanism systems to operate control surfaces. In lighter aircraft without servo systems, as the aircraft approached a stall, the aerodynamic buffeting (vibrations) was felt in the pilot's controls. This was a useful warning of a dangerous flight condition. Servo systems tend to be "one-way", meaning external forces applied aerodynamically to the control surfaces are not perceived at the controls, resulting in the lack of this important sensory cue. To address this, the missing normal forces are simulated with springs and weights. 
The angle of attack is measured, and as the critical stall point approaches a stick shaker is engaged which simulates the response of a simpler control system. Alternatively, the servo force may be measured and the signal directed to a servo system on the control, also known as force feedback. Force feedback has been implemented experimentally in some excavators and is useful when excavating mixed material such as large rocks embedded in silt or clay. It allows the operator to "feel" and work around unseen obstacles. In the 1960s, Paul Bach-y-Rita developed a vision substitution system using a 20x20 array of metal rods that could be raised and lowered, producing tactile "dots" analogous to the pixels of a screen. People sitting in a chair equipped with this device could identify pictures from the pattern of dots poked into their backs. The first US patent for a tactile telephone was granted to Thomas D. Shannon in 1973. An early tactile man-machine communication system was constructed by A. Michael Noll at Bell Telephone Laboratories, Inc. in the early 1970s and a patent was issued for his invention in 1975. In 1994, the Aura Interactor vest was developed. The vest is a wearable force-feedback device that monitors an audio signal and uses electromagnetic actuator technology to convert bass sound waves into vibrations that can represent such actions as a punch or kick. The vest plugs into the audio output of a stereo, TV, or VCR and the audio signal is reproduced through a speaker embedded in the vest. In 1995, Thomas Massie developed the PHANToM (Personal HAptic iNTerface Mechanism) system. It used thimble-like receptacles at the end of computerized arms into which a person's fingers could be inserted, allowing them to "feel" an object on a computer screen. In 1995, Norwegian Geir Jensen described a wristwatch haptic device with a skin tap mechanism, termed Tap-in. 
The wristwatch would connect to a mobile phone via Bluetooth, and tapping-frequency patterns would enable the wearer to respond to callers with selected short messages. In 2015, the Apple Watch was launched. It uses skin tap sensing to deliver notifications and alerts from the mobile phone of the watch wearer. == Types of mechanical touch sensing == Human sensing of mechanical loading in the skin is mediated by mechanoreceptors. There are a number of types of mechanoreceptors, but those present in the finger pad are typically placed into two categories: fast-adapting (FA) and slow-adapting (SA). SA mechanoreceptors are sensitive to relatively large stresses at low frequencies, while FA mechanoreceptors are sensitive to smaller stresses at higher frequencies. The result is that, generally, SA sensors can detect textures with amplitudes greater than 200 micrometres, while FA sensors can detect textures with amplitudes from 200 micrometres down to about 1 micrometre, though some research suggests that FA receptors can only detect textures smaller than the fingerprint wavelength. FA mechanoreceptors achieve this high resolution of sensing by detecting vibrations produced by friction and by the interaction of the fingerprint texture moving over fine surface texture. == Implementation == Haptic feedback (often shortened to just haptics) is the use of controlled vibrations at set frequencies and intervals to provide a sensation representative of an in-game action; this includes 'bumps', 'knocks', and 'taps' of one's hand or fingers. The majority of electronics offering haptic feedback use vibrations, and most use a type of eccentric rotating mass (ERM) actuator, consisting of an unbalanced weight attached to a motor shaft. As the shaft rotates, the spinning of this irregular mass causes the actuator and the attached device to shake.
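The shaking an ERM actuator produces follows from simple rotational mechanics: the unbalanced mass exerts a centripetal force proportional to the square of the rotation frequency. The mass, offset, and speed below are illustrative guesses for a phone-sized motor, not specifications of any real part:

```python
import math

def erm_force_newtons(eccentric_mass_kg, offset_m, freq_hz):
    """Centripetal force of a rotating unbalanced mass: F = m * r * (2*pi*f)^2."""
    omega = 2 * math.pi * freq_hz  # angular velocity in rad/s
    return eccentric_mass_kg * offset_m * omega ** 2

# A hypothetical 0.5 g mass offset 1 mm from the shaft, spinning at 200 Hz:
print(round(erm_force_newtons(0.5e-3, 1e-3, 200), 2))  # 0.79 (newtons)
```

Because this force vector rotates with the shaft and scales with the square of the motor speed, an ERM cannot vary vibration amplitude and frequency independently: both are set by how fast the motor spins.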
Piezoelectric actuators are also employed to produce vibrations, and offer even more precise motion than linear resonant actuators (LRAs), with less noise and in a smaller platform, but require higher voltages than ERMs and LRAs do. === Controller rumble === One of the most common forms of haptic feedback in video games is controller rumble. In 1976, Sega's motorbike game Moto-Cross, also known as Fonz, was the first game to use haptic feedback, causing the handlebars to vibrate during a collision with another vehicle. === Force feedback === Force feedback devices use motors to manipulate the movement of an item held by the user. A common use is in automobile driving video games and simulators, which turn the steering wheel to simulate forces experienced when cornering a real vehicle. Direct-drive wheels, introduced in 2013, are based on servomotors and are the most high-end type of force feedback racing wheel in strength and fidelity. In 2007, Novint released the Falcon, the first consumer 3D touch device with high-resolution three-dimensional force feedback. This allowed the haptic simulation of objects, textures, recoil, momentum, and the physical presence of objects in games. === Air vortex rings === Air vortex rings are donut-shaped air pockets made up of concentrated gusts of air. Focused air vortices can have the force to blow out a candle or disturb papers from a few yards away. Both Microsoft Research (AirWave) and Disney Research (AIREAL) have used air vortices to deliver non-contact haptic feedback. === Ultrasound === Focused ultrasound beams can be used to create a localized sense of pressure on a finger without touching any physical object. The focal point that creates the sensation of pressure is generated by individually controlling the phase and intensity of each transducer in an array of ultrasound transducers. These beams can also be used to deliver sensations of vibration, and to give users the ability to feel virtual 3D objects.
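Focusing such an array amounts to timing each transducer so that every wavefront arrives at the focal point in step. The sketch below computes per-transducer firing delays from geometry alone; the three-element layout and the use of time delays rather than explicit phase offsets are illustrative simplifications, not how any particular commercial array is driven:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air, approximately, at room temperature

def focus_delays(transducer_positions, focal_point):
    """Firing delay per transducer so every wavefront reaches the focus together.

    The farthest transducer fires first (zero delay); nearer ones wait so that
    all emissions arrive at the focal point simultaneously.
    """
    distances = [math.dist(p, focal_point) for p in transducer_positions]
    farthest = max(distances)
    return [(farthest - d) / SPEED_OF_SOUND for d in distances]

# A toy 3-element linear array focusing 20 cm above its centre element:
array = [(-0.05, 0.0, 0.0), (0.0, 0.0, 0.0), (0.05, 0.0, 0.0)]
delays = focus_delays(array, (0.0, 0.0, 0.2))
# Arrival time (propagation + firing delay) is identical for every element:
arrivals = [math.dist(p, (0.0, 0.0, 0.2)) / SPEED_OF_SOUND + t
            for p, t in zip(array, delays)]
print(max(arrivals) - min(arrivals) < 1e-12)  # True
```

Modulating these delays (equivalently, the phases) and the per-element intensities over time is what lets an array steer the focal point and deliver the vibration sensations described above.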
The first commercially available ultrasound haptic device was the Stratos Explore by Ultrahaptics, which consisted of a 256-transducer array board and a Leap Motion controller for hand tracking. Another form of tactile feedback results from active touch, when a human scans (runs their finger over) a surface to gain information about its texture. A significant amount of information about a surface's texture on the micrometre scale can be gathered through this action, as vibrations resulting from friction and texture activate mechanoreceptors in the human skin. Toward this goal, plates can be made to vibrate at an ultrasonic frequency, which reduces the friction between the plate and the skin. === Electrical stimulation === Electrical muscle stimulation (EMS) and transcutaneous electrical nerve stimulation (TENS) can be used to create haptic sensations in the skin or muscles. Notable examples include the haptic suits Teslasuit and Owo haptic vest and the wearable armbands Valkyrie EIR. In addition to improving immersion, e.g. by simulating bullet hits, these technologies are sought to create sensations similar to weight and resistance, and can promote muscle training.
Since then, the use of force feedback has become more widespread in other kinds of teleoperators, such as remote-controlled underwater exploration devices. Devices such as medical simulators and flight simulators ideally provide the force feedback that would be felt in real life. Simulated forces are generated using haptic operator controls, allowing data representing touch sensations to be saved or played back. ==== Medicine and dentistry ==== Haptic interfaces for medical simulation are being developed for training in minimally invasive procedures such as laparoscopy and interventional radiology, and for training dental students. A Virtual Haptic Back (VHB) was successfully integrated into the curriculum at the Ohio University College of Osteopathic Medicine. Haptic technology has enabled the development of telepresence surgery, allowing expert surgeons to operate on patients from a distance. As the surgeon makes an incision, they feel tactile and resistance feedback as if working directly on the patient. ==== Automotive ==== With the introduction of large touchscreen control panels in vehicle dashboards, haptic feedback technology is used to provide confirmation of touch commands without requiring the driver to take their eyes off the road. Additional contact surfaces, for example the steering wheel or seat, can also provide haptic information to the driver, for example, a warning vibration pattern when close to other vehicles. ==== Aviation ==== Force feedback can be used to increase adherence to a safe flight envelope and thus reduce the risk of pilots entering dangerous states of flight outside the operational borders, while maintaining the pilots' final authority and increasing their situation awareness. === Electronic devices === ==== Video games ==== Haptic feedback is commonly used in arcade games, especially racing video games.
In 1976, Sega's motorbike game Moto-Cross, also known as Fonz, was the first game to use haptic feedback, causing the handlebars to vibrate during a collision with another vehicle. Tatsumi's TX-1 introduced force feedback to car driving games in 1983. The game Earthshaker! added haptic feedback to a pinball machine in 1989. Simple haptic devices are common in the form of game controllers, joysticks, and steering wheels. Early implementations were provided through optional components, such as the Nintendo 64 controller's Rumble Pak in 1997. In the same year, the Microsoft SideWinder Force Feedback Pro, with built-in feedback from Immersion Corporation, was released. Many console controllers and joysticks feature built-in feedback devices, which are motors with unbalanced weights that spin, causing the controller to vibrate, including Sony's DualShock technology and Microsoft's Impulse Trigger technology. Some automobile steering wheel controllers, for example, are programmed to provide a "feel" of the road. As the user makes a turn or accelerates, the steering wheel responds by resisting turns or slipping out of control. Notable introductions include: 2013: The first direct-drive wheel for sim racing is introduced. 2014: A new type of haptic cushion that responds to multimedia inputs, by LG Electronics. 2015: Steam Machines (console-like PCs) by Valve include a new Steam Controller that uses weighted electromagnets capable of delivering a wide range of haptic feedback via the unit's trackpads. These controllers' feedback systems are user-configurable, delivering precise feedback with haptic force actuators on both sides of the controller. 2017: The Nintendo Switch's Joy-Con introduced the HD Rumble feature, developed with Immersion Corporation, using actuators from Alps Electric. 2018: The Razer Nari Ultimate, gaming headphones using a pair of wide-frequency haptic drivers, developed by Lofelt.
2020: The Sony PlayStation 5 DualSense controller supports vibrotactile haptics provided by voice coil actuators integrated in the palm grips, and force feedback for the Adaptive Triggers provided by two DC rotary motors. The actuators in the hand grip are able to give varied and intuitive feedback about in-game actions; for example, in a sandstorm, the player can feel the wind and sand, and the motors in the Adaptive Triggers support experiences such as virtually drawing an arrow from a bow. 2021: SuperTuxKart 1.3 was released, adding support for force feedback. Force feedback is extremely uncommon for free software games. ==== Mobile devices ==== Tactile haptic feedback is common in cellular devices. In most cases, this takes the form of vibration response to touch. Alpine Electronics uses a haptic feedback technology named PulseTouch on many of their touch-screen car navigation and stereo units. The Nexus One features haptic feedback, according to its specifications. Samsung first launched a phone with haptics in 2007. Surface haptics refers to the production of variable forces on a user's finger as it interacts with a surface such as a touchscreen. Notable introductions include: Tanvas uses an electrostatic technology to control the in-plane forces experienced by a fingertip, as a programmable function of the finger's motion. The TPaD Tablet Project uses an ultrasonic technology to modulate the apparent slipperiness of a glass touchscreen. In 2013, Apple Inc. was awarded a patent for a haptic feedback system that is suitable for multitouch surfaces. Apple's U.S. Patent for a "Method and apparatus for localization of haptic feedback" describes a system where at least two actuators are positioned beneath a multitouch input device, providing vibratory feedback when a user makes contact with the unit.
Specifically, the patent provides for one actuator to induce a feedback vibration, while at least one other actuator uses its vibrations to localize the haptic experience by preventing the first set of vibrations from propagating to other areas of the device. The patent gives the example of a "virtual keyboard"; however, it is also noted that the invention can be applied to any multitouch interface. Apple's iPhones (and MacBooks) featuring the "Taptic Engine" accomplish their vibrations with a linear resonant actuator (LRA), which moves a mass in a reciprocal manner by means of a magnetic voice coil, similar to how AC electrical signals are translated into motion in the cone of a loudspeaker. LRAs are capable of quicker response times than ERMs, and thus can transmit more accurate haptic imagery. ==== Virtual reality ==== Haptics are gaining widespread acceptance as a key part of virtual reality systems, adding the sense of touch to previously visual-only interfaces. Systems are being developed to use haptic interfaces for 3D modeling and design, including systems that allow holograms to be both seen and felt. Several companies are making full-body or torso haptic vests or haptic suits for use in immersive virtual reality to allow users to feel explosions and bullet impacts. ==== Personal computers ==== In 2015, Apple Inc.'s MacBook and MacBook Pro started incorporating a "Tactile Touchpad" design with button functionality and haptic feedback incorporated into the tracking surface. The tactile touchpad allows for a feeling of "give" when clicking even though the touchpad no longer moves. === Sensory substitution === ==== Sound substitution ==== In December 2015, David Eagleman demonstrated a wearable vest that "translates" speech and other audio signals into a series of vibrations. This allowed hearing-impaired people to "feel" sounds on their body; it has since been commercialized as a wristband.
==== Tactile electronic displays ==== A tactile electronic display is a display device that delivers text and graphical information using the sense of touch. Devices of this kind have been developed to assist blind or deaf users by providing an alternative to visual or auditory sensation. === Teledildonics === Haptic feedback is used within teledildonics, or "sex-technology", to remotely connect sex toys and allow users to engage in virtual sex or allow a remote server to control their sex toy. The term was coined by Ted Nelson in 1975, when discussing the future of love, intimacy and technology. In recent years, teledildonics and sex-technology have expanded to include toys with a two-way connection that allow virtual sex through the communication of vibrations, pressures and sensations. Many "smart" vibrators allow a one-way connection through which either the user or a remote partner can control the toy. === Neurorehabilitation and balance === For individuals with upper limb motor dysfunction, robotic devices utilizing haptic feedback could be used for neurorehabilitation. Robotic devices, such as end-effectors, and both grounded and ungrounded exoskeletons have been designed to assist in restoring control over several muscle groups. Haptic feedback applied by these robotic devices helps in the recovery of sensory function due to its more immersive nature. Haptic technology can also provide sensory feedback to ameliorate age-related impairments in balance control and prevent falls in the elderly and balance-impaired. The Haptic Cow and Haptic Horse simulators are used in veterinary training. === Puzzles === Haptic puzzles have been devised in order to investigate goal-oriented haptic exploration, search, learning and memory in complex 3D environments. The goal is both to equip multi-fingered robots with a sense of touch and to gain more insight into human meta-learning.
=== Art === Haptic technologies have been explored in the virtual arts, such as sound synthesis, graphic design, and animation. Haptic technology was used to enhance existing art pieces in the Tate Sensorium exhibit in 2015. In music creation, Swedish synthesizer manufacturer Teenage Engineering introduced a haptic subwoofer module for their OP-Z synthesizer, allowing musicians to feel the bass frequencies directly on their instrument. === Space === Haptic technologies may be useful in space exploration, including visits to the planet Mars, according to news reports. == See also == Haptics (disambiguation) Haptic perception Linkage (mechanical) Organic user interface Sonic interaction design Stylus (computing) Tactile imaging Wired glove == References == == Further reading == == External links == Haptic technology at HowStuffWorks What Vibration Frequency Is Best For Haptic Feedback? Archived 2021-09-26 at the Wayback Machine
https://en.wikipedia.org/wiki/Haptic_technology
The technological singularity—or simply the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing a rapid increase ("explosion") in intelligence which would culminate in a powerful superintelligence, far surpassing all human intelligence. The Hungarian-American mathematician John von Neumann (1903–1957) became the first known person to use the concept of a "singularity" in the technological context. Alan Turing, often regarded as the father of modern computer science, laid a crucial foundation for contemporary discourse on the technological singularity. His pivotal 1950 paper, "Computing Machinery and Intelligence", introduced the idea of a machine's ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human. Stanislaw Ulam reported in 1958 an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". Subsequent authors have echoed this viewpoint.
The concept and the term "singularity" were popularized by Vernor Vinge: first in 1983, in an article that claimed that, once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole"; and later in his 1993 essay "The Coming Technological Singularity", in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate, and he would be surprised if it occurred before 2005 or after 2030. Another significant contribution to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity Is Near, predicting singularity by 2045. Some scientists, including Stephen Hawking, have expressed concerns that artificial superintelligence (ASI) could result in human extinction. The consequences of a technological singularity and its potential benefit or harm to the human race have been intensely debated. Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, Gordon Moore, and Roger Penrose. One claim made was that artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies. == Intelligence explosion == Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans. 
If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would, in theory, vastly improve over human problem-solving and inventive skills. Such an AI is referred to as Seed AI because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities. I. J. Good speculated that superhuman intelligence might bring about an intelligence explosion: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. One version of intelligence explosion is where computing power approaches infinity in a finite amount of time. In this version, once AIs are performing the research to improve themselves, speed doubles e.g. after 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, etc., where the infinite sum of the doubling periods is 4 years. 
Unless prevented by physical limits of computation and time quantization, this process would achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996). == Emergence of superintelligence == A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world. The related concept "speed superintelligence" describes an AI that can function like a human mind, only much faster. For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds. Such a difference in information processing speed could drive the singularity. Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that bypass human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies focus on scenarios that combine these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.
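The "subjective year in roughly 30 physical seconds" figure for a speed superintelligence above is plain unit arithmetic, easily checked:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.6 million seconds
SPEEDUP = 1_000_000                    # the million-fold speedup assumed above

# Physical time in which such a mind would experience one subjective year:
physical_seconds = SECONDS_PER_YEAR / SPEEDUP  # roughly 31.6 seconds
```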
The 2016 book The Age of Em by Robin Hanson describes a hypothetical future scenario in which human brains are scanned and digitized, creating "uploads" or digital versions of human consciousness. In this future, the development of these uploads may precede or coincide with the emergence of superintelligent artificial intelligence. == Variations == === Non-AI singularity === Some writers use "the singularity" in a broader way to refer to any radical changes in society brought about by new technology (such as molecular nanotechnology), although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity. == Predictions == There have been numerous dates predicted for the attainment of singularity. In 1965, Good wrote that it was more probable than not that an ultra-intelligent machine would be built within the twentieth century. That computing capabilities for human-level AI would be available in supercomputers before 2010 was predicted in 1988 by Moravec, assuming that the current rate of improvement continued. The attainment of greater-than-human intelligence between 2005 and 2030 was predicted by Vinge in 1993. A singularity in 2021 was predicted by Yudkowsky in 1996. Human-level AI around 2029 and the singularity in 2045 were predicted by Kurzweil in 2005. He reaffirmed these predictions in 2024 in The Singularity Is Nearer. Human-level AI by 2040, and intelligence far beyond human by 2050, were predicted in 1998 by Moravec, revising his earlier prediction. A confidence of 50% that human-level AI would be developed by 2040–2050 was the outcome of four polls of AI researchers, conducted in 2012 and 2013 by Bostrom and Müller. Elon Musk in March 2025 predicted that AI would be smarter than any individual human "in the next year or two" and that AI would be smarter than all humans combined by 2029 or 2030.
He also estimated an 80 percent chance that AI would have a "good outcome" and a 20 percent chance of "annihilation." == Plausibility == Prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, and Gordon Moore, whose law is often cited in support of the concept. Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to augment human intelligence include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading. These multiple possible paths to an intelligence explosion, all of which will presumably be pursued, make a singularity more likely. Robin Hanson expressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult. Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity. The possibility of an intelligence explosion depends on three factors. The first accelerating factor is the new intelligence enhancements made possible by each previous improvement. However, as the intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement should generate at least one more improvement, on average, for movement towards singularity to continue. Finally, the laws of physics may eventually prevent further improvement.
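The condition that each improvement should generate "at least one more improvement, on average" is the criticality threshold of a simple branching process. A toy sketch of the expected cascade size (the numeric values are purely illustrative assumptions):

```python
def expected_total_improvements(mean_per_improvement, generations):
    """Expected cumulative number of improvements after a given number of
    generations, when each improvement enables `mean_per_improvement`
    further improvements on average (a simple branching process)."""
    return sum(mean_per_improvement ** g for g in range(generations + 1))

# Below the threshold of 1, the cascade fizzles out at a finite total
# (for a mean of 0.9 the expected total converges toward 10) ...
subcritical = expected_total_improvements(0.9, 500)
# ... while above it, the expected total grows without bound.
supercritical = expected_total_improvements(1.1, 200)
```

For a mean below 1 the closed form of the infinite sum is 1 / (1 - mean), so the cascade stalls; only a mean of at least 1 sustains movement toward a singularity, which is exactly the condition stated above.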
There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to the algorithms used. The former is predicted by Moore's Law and the forecasted improvements in hardware, and is comparatively similar to previous technological advances. But Schulman and Sandberg argue that software will present more complex challenges than simply operating on hardware capable of running at human intelligence levels or beyond. A 2017 email survey of authors with publications at the 2015 NeurIPS and ICML machine learning conferences asked about the chance that "the intelligence explosion argument is broadly correct". Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely". == Speed improvements == Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. An analogy to Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months; thereafter 4.5 months, 2.25 months, and so on towards a speed singularity. Some upper limit on speed may eventually be reached. Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity." It is difficult to directly compare silicon-based hardware with neurons. But Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain, as well as taking up a lot less space.
However, the costs of training systems with deep learning may be larger. === Exponential growth === The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. Computer scientist and futurist Hans Moravec proposed in a 1998 book that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit. Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others. Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months. On the other hand, it has been argued that the global acceleration pattern having the 21st century singularity as its parameter should be characterized as hyperbolic rather than exponential. Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine". 
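The per-capita doubling times quoted above compound to very different totals over the 1986–2007 window. A quick illustrative check (the labels and the 21-year span are taken from the figures in the text; the function is a generic compound-growth helper, not a published model):

```python
def growth_factor(months_elapsed, doubling_time_months):
    """Total multiplicative growth over a period, given a constant doubling time."""
    return 2 ** (months_elapsed / doubling_time_months)

SPAN = 21 * 12  # 1986 to 2007 is 21 years, i.e. 252 months
capacities = {
    "application-specific compute (14-month doubling)": growth_factor(SPAN, 14),
    "general-purpose compute (18-month doubling)": growth_factor(SPAN, 18),
    "telecommunication (34-month doubling)": growth_factor(SPAN, 34),
    "storage (40-month doubling)": growth_factor(SPAN, 40),
}
# A 14-month doubling time compounds to 2**18 = 262,144-fold growth over the
# span, while a 40-month doubling time yields only about 79-fold growth.
```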
He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence." === Accelerating change === Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change: One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history". Kurzweil believes that the singularity will occur by approximately 2045. His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence. Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's April 2000 Wired magazine article "Why The Future Doesn't Need Us". 
== Algorithm improvements == Some intelligence technologies, like "seed AI", may also have the potential to not just make themselves faster, but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on. The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately. An AI rewriting its own source code could do so while contained in an AI box. Second, as with Vernor Vinge's conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times quicker than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again. There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might self-modify, potentially causing the AI to optimise for something other than what was originally intended. Secondly, AIs could compete for the same scarce resources humankind uses to survive. While not actively malicious, AIs would promote the goals of their programming, not necessarily broader human goals, and thus might crowd out humans. 
Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained. An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang". == Criticism == Some critics, like philosophers Hubert Dreyfus and John Searle, assert that computers or machines cannot achieve human intelligence. Others, like physicist Stephen Hawking, object that whether machines can achieve a true intelligence or merely something similar to intelligence is irrelevant if the net result is the same. Psychologist Steven Pinker stated in 2008: "There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems." Martin Ford postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. 
This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the singularity. Job displacement is increasingly no longer limited to those types of work traditionally considered to be "routine". Theodore Modis and Jonathan Huebner argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors. Theodore Modis holds that the singularity cannot happen. He claims that the "technological singularity" and especially Kurzweil lack scientific rigor; Kurzweil is alleged to mistake the logistic function (S-function) for an exponential function, and to see a "knee" in an exponential function where there can in fact be no such thing. In a 2021 article, Modis pointed out that no milestones – breaks in historical perspective comparable in importance to the Internet, DNA, the transistor, or nuclear energy – had been observed in the previous twenty years while five of them would have been expected according to the exponential trend advocated by the proponents of the technological singularity.
Microsoft co-founder Paul Allen argued the opposite of accelerating returns, the complexity brake: the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies, a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since. The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse". Hofstadter (2006) raises concern that Ray Kurzweil is not sufficiently scientifically rigorous, that an exponential tendency of technology is not a scientific law like one of physics, and that exponential curves have no "knees". Nonetheless, he did not rule out the singularity in principle in the distant future, and in the light of ChatGPT and other recent advancements has revised his opinion significantly towards dramatic technological change in the near future. Jaron Lanier denies that the singularity is inevitable: "I do not think the technology is creating itself. It's not an autonomous process." Furthermore: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics." Economist Robert J. Gordon points out that measured economic growth slowed around 1970 and has slowed even further since the 2008 financial crisis, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.
Philosopher and cognitive scientist Daniel Dennett said in 2017: "The whole singularity stuff, that's preposterous. It distracts us from much more pressing problems", adding "AI tools that we become hyper-dependent on, that is going to happen. And one of the dangers is that we will give them more authority than they warrant." In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily. Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. Kelly (2006) argues that, because of the way the Kurzweil chart is constructed, with the x-axis showing time before the present, it always points to the singularity being "now" for any date on which one would construct such a chart, and shows this visually on Kurzweil's chart. Some critics suggest religious motivations or implications of the singularity, especially Kurzweil's version of it. The buildup towards the singularity is compared with Christian end-of-time scenarios. Beam calls it "a Buck Rogers vision of the hypothetical Christian Rapture". John Gray says "the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event". David Streitfeld in The New York Times questioned whether "it might manifest first and foremost—thanks, in part, to the bottom-line obsession of today’s Silicon Valley—as a tool to slash corporate America’s head count." Astrophysicist and scientific philosopher Adam Becker rejects Kurzweil's concept of uploading human minds to computers on the grounds that minds and computers are too fundamentally different to be compatible.
== Potential impacts == Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis. === Uncertainty and risk === The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate. It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an existential threat. Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the Future of Humanity Institute (until 2024), the Machine Intelligence Research Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute. Physicist Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." 
Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity: So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Berglas (2008) claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments. AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources, and humans would be powerless to stop them. Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity. Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause: When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
According to Eliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. Bill Hibbard (2014) proposes an AI design that avoids several dangers including self-delusion, unintended instrumental actions, and corruption of the reward generator. He also discusses social impacts of AI and testing AI. His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator. === Next step of sociobiological evolution === While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... 
With one in three courtships leading to marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction". The article further argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10²¹ bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotides, the individual genomes of every human on the planet could be encoded by approximately 1×10¹⁹ bytes. The digital realm stored 500 times more information than this in 2014 (see figure). The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10³⁷ base pairs, equivalent to 1.325×10³⁷ bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth, it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years.
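The magnitudes quoted above can be checked with a few lines of arithmetic. The sketch below uses only the estimates given in the text (population, genome size, storage totals, growth rate); the variable names are illustrative, not drawn from any cited source.

```python
import math

# Figures quoted in the text above (estimates, not measurements).
population = 7.2e9             # humans on the planet
genome_nucleotides = 6.2e9     # nucleotides per human genome
nucleotides_per_byte = 4       # one byte encodes four nucleotides (2 bits each)

# Bytes needed to encode the individual genomes of every human.
human_genomes_bytes = population * genome_nucleotides / nucleotides_per_byte
print(f"all human genomes: {human_genomes_bytes:.1e} bytes")  # ~1×10^19

# Digital storage in 2014 (~5 zettabytes) compared with that figure.
digital_2014_bytes = 5e21
print(f"digital/genomic ratio: {digital_2014_bytes / human_genomes_bytes:.0f}")  # a few hundred

# Total DNA on Earth: 5.3×10^37 base pairs, at four per byte.
biosphere_dna_bytes = 5.3e37 / nucleotides_per_byte

# Years until digital storage rivals biosphere DNA at 38% compound growth.
years = math.log(biosphere_dna_bytes / digital_2014_bytes, 1.38)
print(f"years to parity at 38% CAGR: {years:.0f}")  # ~110
```

Running the numbers this way reproduces the article's own figures: roughly 10¹⁹ bytes for all human genomes, a few-hundred-fold gap to 2014 digital storage, and parity with total biosphere DNA in about 110 years at the upper growth rate.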
=== Implications for human society === In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at the Asilomar conference center in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards. Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist. Frank S. Robinson predicts that once humans build a machine with human-level intelligence, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability. Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. == Hard or soft takeoff == In a hard takeoff scenario, an artificial superintelligence rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the agent's goals.
In a soft takeoff scenario, the AI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AI's development. Ramez Naam argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. Intel, for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law. Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1." J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world. Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. 
Goertzel is skeptical of a hard five-minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. He refers to this scenario as a "semihard takeoff". Max More disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years." == Relation to immortality and aging == Eric Drexler, one of the founders of nanotechnology, theorized in 1986 the possibility of cell repair devices, including ones operating within cells and using as yet hypothetical biological machines. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.
Moravec predicted in 1988 the possibility of "uploading" a human mind into a human-like robot, achieving quasi-immortality by extreme longevity via transfer of the human mind between successive new robots as the old ones wear out; beyond that, he predicts later exponential acceleration of subjective experience of time leading to a subjective sense of immortality. Kurzweil suggested in 2005 that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age. Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after creating synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes. Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious." == History of the concept == A paper by Mahendra Prasad, published in AI Magazine, asserts that the 18th-century mathematician Marquis de Condorcet was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity. An early description of the idea was made in John W. Campbell's 1932 short story "The Last Evolution". In his 1958 obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence. In 1977, Hans Moravec wrote an article with unclear publishing status where he envisioned a development of self-improving thinking machines, a creation of "super-consciousness, the synthesis of terrestrial life, and perhaps jovian and martian life as well, constantly improving and extending itself, spreading outwards from the solar system, converting non-life into mind." The article describes the human mind uploading later covered in Moravec (1988). The machines are expected to reach human level and then improve themselves beyond that ("Most significantly of all, they [the machines] can be put to work as programmers and engineers, with the task of optimizing the software and hardware which make them what they are. The successive generations of machines produced this way will be increasingly smarter and more cost effective.") Humans will no longer be needed, and their abilities will be overtaken by the machines: "In the long run the sheer physical inability of humans to keep up with these rapidly evolving progeny of our minds will ensure that the ratio of people to machines approaches zero, and that a direct descendant of our culture, but not our genes, inherits the universe." While the word "singularity" is not used, the notion of human-level thinking machines thereafter improving themselves beyond human level is there. In this view, there is no intelligence explosion in the sense of a very rapid intelligence increase once human equivalence is reached. An updated version of the article was published in 1979 in Analog Science Fiction and Fact. In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. 
Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency. In 1983, Vernor Vinge addressed Good's intelligence explosion in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" (although not "technological singularity") in a way that was specifically tied to the creation of intelligent machines: We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible. In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time. In 1986, Vernor Vinge published Marooned in Realtime, a science-fiction novel where a few remaining humans traveling forward in the future have survived an unknown extinction event that might well be a singularity. In a short afterword, the author states that an actual technological singularity would not be the end of the human species: "of course it seems very unlikely that the Singularity would be a clean vanishing of the human race. 
(On the other hand, such a vanishing is the timelike analog of the silence we find all across the sky.)". In 1988, Vinge used the phrase "technological singularity" (including "technological") in the short story collection Threats and Other Promises, writing in the introduction to his story "The Whirligig of Time" (p. 72): Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, and soon. When we raise our own intelligence and that of our creations, we are no longer in a world of human-sized characters. At that point we have fallen into a technological "black hole", a technological singularity. In 1988, Hans Moravec published Mind Children, in which he predicted human-level intelligence in supercomputers by 2010, self-improving intelligent machines far surpassing human intelligence later, human mind uploading into human-like robots later, intelligent machines leaving humans behind, and space colonization. He did not mention "singularity", though, and he did not speak of a rapid explosion of intelligence immediately after the human level is achieved. Nonetheless, the overall singularity tenor is there in predicting both human-level artificial intelligence and further artificial intelligence far surpassing humans later. Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era", spread widely on the internet and helped to popularize the idea. This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express. 
Minsky's 1994 article says robots will "inherit the Earth", possibly with the use of nanotechnology, and proposes to think of robots as human "mind children", drawing the analogy from Moravec. The rhetorical effect of that analogy is that if humans are fine to pass the world to their biological children, they should be equally fine to pass it to robots, their "mind" children. According to Minsky, 'we could design our "mind-children" to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.' The feature of the singularity present in Minsky is the development of superhuman artificial intelligence ("million times faster"), but there is no talk of sudden intelligence explosion, self-improving thinking machines or unpredictability beyond any specific event, and the word "singularity" is not used. Tipler's 1994 book The Physics of Immortality predicts a future where super-intelligent machines will build enormously powerful computers, people will be "emulated" in computers, life will reach every galaxy and people will achieve immortality when they reach the Omega Point. There is no talk of Vingean "singularity" or sudden intelligence explosion, but intelligence much greater than human is there, as well as immortality. In 1996, Yudkowsky predicted a singularity by 2021. His version of singularity involves intelligence explosion: once AIs are doing the research to improve themselves, speed doubles after 2 years, then after 1 year, then after 6 months, then after 3 months, then after 1.5 months, and after more iterations, the "singularity" is reached. This construction implies that the speed reaches infinity in finite time. In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of robotics, genetic engineering, and nanotechnology. In 2005, Kurzweil published The Singularity Is Near.
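The converging schedules described above (Solomonoff's doubling intervals of four years, two, one, and so on; Yudkowsky's of two years, one year, six months, and so on) are geometric series, which is why the total time to the postulated "infinity point" is finite. A minimal numerical check, with an illustrative function name:

```python
def time_to_infinity(first_interval: float, terms: int = 60) -> float:
    """Sum a doubling schedule whose intervals halve at each step.

    The closed form is first_interval / (1 - 1/2) = 2 * first_interval,
    so the infinite schedule still fits in a finite span of time.
    """
    return sum(first_interval * 0.5**k for k in range(terms))

# Solomonoff (1985): 4 + 2 + 1 + ... years, approaching 8 years in total.
print(time_to_infinity(4.0))
# Yudkowsky (1996): 2 + 1 + 0.5 + ... years, approaching 4 years in total.
print(time_to_infinity(2.0))
```

In both cases the partial sums approach twice the first interval, so infinitely many doublings are packed into a finite horizon of eight or four years respectively.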
Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart. From 2006 to 2012, an annual Singularity Summit conference was organized by Machine Intelligence Research Institute, founded by Eliezer Yudkowsky. In 2007, Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability. In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges." Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year. == In politics == In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including possible technological singularity. Former President of the United States Barack Obama spoke about singularity in his interview to Wired in 2016: One thing that we haven't talked about too much, and I just want to go back to, is we really have to think through the economic implications. 
Because most people aren't spending a lot of time right now worrying about singularity—they are worrying about "Well, is my job going to be replaced by a machine?" == Notes == == See also == Artificial consciousness – Field in cognitive science Ephemeralization – Technological advancement theory Global brain – Futuristic concept of a global interconnected network Technological revolution – Period of rapid technological change Technophobia – Fear or discomfort with advanced technology Neo-Luddism – Philosophy opposing modern technology == References == === Citations === === Sources === == Further reading == Krüger, Oliver, Virtual Immortality. God, Evolution, and the Singularity in Post- and Transhumanism., Bielefeld: transcript 2021. ISBN 978-3-8376-5059-4. Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. Multiple tests of artificial-intelligence efficacy are needed because, "just as there is no single test of athletic prowess, there cannot be one ultimate test of intelligence." One such test, a "Construction Challenge", would test perception and physical action—"two important elements of intelligent behavior that were entirely absent from the original Turing test." Another proposal has been to give machines the same standardized tests of science and other disciplines that schoolchildren take. A so far insuperable stumbling block to artificial intelligence is an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways." A prominent example is known as the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers. 
== External links == singularity | technology, britannica.com The Coming Technological Singularity: How to Survive in the Post-Human Era (on Vernor Vinge's web site, retrieved Jul 2019) Intelligence Explosion FAQ by the Machine Intelligence Research Institute Blog on bootstrapping artificial intelligence by Jacques Pitrat Why an Intelligence Explosion is Probable (Mar 2011) Why an Intelligence Explosion is Impossible (Nov 2017) How Close are We to Technological Singularity and When? The AI Revolution: Our Immortality or Extinction – Part 1 and Part 2 (Tim Urban, Wait But Why, January 22/27, 2015)
https://en.wikipedia.org/wiki/Technological_singularity
Avatr Technology Co., Ltd. (Chinese: 阿维塔; pinyin: Ā wéi tǎ; pronounced "Avatar") is a Chinese electric vehicle manufacturer headquartered in Chongqing. Established in 2018, Avatr Technology is a premium EV brand created through a joint venture led by Changan Automobile in collaboration with various Chinese domestic entities. The brand benefits from technological support provided by Huawei and battery technology supplied by CATL. == History == === Changan-Nio === In 2018, Changan aimed to establish a company dedicated to developing modern, technologically advanced electric vehicles in partnership with Nio Inc. To achieve this, a joint venture named Changan-Nio was formed with an equal 50:50 share ratio. However, the partnership did not materialize, as Nio withdrew from the alliance two years later. === Establishment of Avatr === Following Nio's withdrawal, Changan Automobile as the primary shareholder partnered with two major Chinese technology companies: battery manufacturer CATL and technology corporation Huawei. In May 2021, the company was renamed from Changan-Nio to Avatr Technology. In November 2021, Avatr conducted its first round of capital increase and share expansion, resulting in a dilution of Changan Automobile's ownership from 95.38% to 39.02%. CATL acquired a 23.99% stake, while the remaining shares were held by various investment entities. Although Huawei did not become a shareholder, it collaborated closely with Avatr, providing comprehensive technological solutions. In August 2022, Avatr initiated its Series A funding round, attracting three additional investors supported by Chinese private enterprises and local governments. The total financing scale reached nearly 5 billion yuan. As a result, Changan Automobile's ownership stake increased from 39.02% to 40.99%. CATL, which did not participate in the capital increase, saw its ownership stake diluted from 23.99% to 17.10%. 
In August 2023, Avatr completed its Series B financing round, achieving a valuation of nearly 20 billion RMB. Changan Automobile, China Southern Industrial Asset Management, and Liangjiang Industrial Fund continued to increase their investments. Additionally, it attracted state-owned capital from Chongqing Industrial Investment Fund, China Everbright Investment, and Guangkai Holdings. Changan Automobile remains the largest shareholder, with its ownership stake unchanged at 40.99%. CATL is the second-largest shareholder, with its ownership stake decreasing from 17.10% to 14.10%, and Chongqing Chengan Foundation, a state-owned foundation, is the third-largest shareholder, with its ownership stake decreasing from 13.55% to 11.17%. In August 2024, Avatr announced an investment in Huawei's subsidiary, Yinwang (Shenzhen Yinwang Intelligent Technology Co., Ltd.), acquiring a 10% stake for RMB 11.5 billion and becoming Yinwang's second-largest shareholder. Yinwang, formerly known as Huawei Intelligent Automotive Solution, serves as Huawei's automotive business unit. In the second half of 2023, Huawei opted to operate its business unit independently and open it to public equity investment. Avatr became the first company to invest in Huawei's new unit, Yinwang. This transition upgraded the previous "HI" (Huawei Inside) model to the enhanced "HI Plus" model, allowing Huawei to play a more integral role in defining Avatr's products. In December 2024, Avatr secured over 11 billion yuan (US$1.5 billion) in its Series C financing round. After the capital increase, Changan Automobile's shareholding ratio remained unchanged at 40.99%, China Southern Assets' shareholding ratio decreased from 7.81% to 6.34%, Anyu Fund's shareholding ratio was 8.81%, and BoCom Investment's shareholding ratio increased from 1.76% to 3.34%. 
== Products == The first Avatr vehicle was the large, fully electric SUV E11, which stands out from competing designs with a long single-charge range of approximately 700 km (430 mi). The start of production of the first model for the domestic Chinese market was scheduled for the second quarter of 2022, with deliveries of the first units scheduled for the end of the same year. The production model, named Avatr 11 with the letter "E" dropped from its name, officially debuted in August 2022. Sales of the luxury electric car began in November 2022, a year after the official launch of the new company, positioning it as a premium product. At the same time, Avatr Technology expressed its intention to expand its model lineup with four new cars by 2025. The first model in this expansion of the Avatr brand is an executive car, dubbed the 12, which was officially presented in July 2023. === Current === Avatr 11 (2022–present), mid-size SUV, BEV/REEV Avatr 12 (2023–present), mid-size sedan, BEV/REEV Avatr 07 (2024–present), mid-size SUV, BEV/REEV Avatr 06 (2025–present), mid-size sedan, BEV/REEV === Upcoming === G618 (expected mid-2026), flagship full-size SUV, NEV D706 (expected 2026), flagship MPV, NEV == CHN platform == Avatr claims to integrate the strengths of Changan Automobile, Huawei, and CATL to establish the "CHN" cooperation model. According to Avatr, the smart electric vehicle technology platform CHN utilizes a six-layer architecture: the mechanical layer, energy layer, electronic and electrical architecture layer, vehicle operating system layer, vehicle function application layer, and cloud big data layer. The company asserts that products developed on this platform feature high integration, scalability, performance, endurance, security, computing power, intelligence, and adaptability. 
The platform reportedly supports the development of models with a wheelbase of up to 3,100 mm, accommodates various vehicle types such as sedans, SUVs, MPVs, and crossovers, and is compatible with both rear-wheel drive and four-wheel drive configurations. == Sales == == See also == Automobile manufacturers and brands of China List of automobile manufacturers of China == References == == External links == Official website
https://en.wikipedia.org/wiki/Avatr_Technology
A chief technology officer (CTO) (also known as a chief technical officer or chief technologist) is an officer tasked with managing technical operations of an organization. They oversee and supervise research and development and serve as a technical advisor to a higher executive such as a chief executive officer. A CTO is very similar to a chief information officer (CIO). CTOs will make decisions for the overarching technology infrastructure that closely align with the organization's goals, while CIOs work alongside the organization's information technology ("IT") staff members to perform everyday operations. The attributes of the roles a CTO holds vary from one company to another, mainly depending on their organizational structure. == History == After World War II, large corporations established research laboratories at locations separate from their headquarters. The corporation's goals were to hire scientists and offer them facilities to conduct research on behalf of the company without the burdens of day-to-day office work. This is where the idea of a CTO focusing on the overarching technology infrastructures originates. At that time, the director of the laboratory was a corporate vice president who did not participate in the company's corporate decisions. Instead, the technical director was the individual responsible for attracting new scientists to do research and develop products. In the 1980s, the role of these research directors changed substantially. Since technology was becoming a fundamental part of the development of most products and services, companies needed an operational executive who could understand the product's technical side and provide advice on ways to improve and develop. This led to the creation of the position of Chief Technology Officer by large companies in the late 1980s with the growth of the information technology industry and computer (internet) companies. 
== Overview == A CTO "examines the short and long term needs of an organization, and utilizes capital to make investments designed to help the organization reach its objectives... [the CTO] is the highest technology executive position within a company and leads the technology or engineering department". The role became prominent with the ascent of the IT industry, but has since become prevalent in technology-based industries of all types – including computer-based technologies (such as game developer, e-commerce, and social networking service) and other/non-computer-focused technology (such as biotech/pharma, defense, and automotive). In non-technical organizations, where the CTO is a corporate officer position, the CTO typically reports directly to the chief information officer (CIO) and is primarily concerned with long-term and "big picture" issues (while still having deep technical knowledge of the relevant field). In technology-focused organizations, the CIO and CTO positions can be at the same level, with the CIO focused on the information technology and the CTO focused on the core company and other supporting technologies. Depending on company structure and hierarchy, there may also be positions such as R&D manager, director of R&D and vice president of engineering that the CTO interacts with or oversees. The CTO also needs a working familiarity with regulatory (e.g. U.S. Food and Drug Administration, Environmental Protection Agency, Consumer Product Safety Commission, as applicable) and intellectual property (IP) issues (e.g. patents, trade secrets, license contracts), and an ability to interface with legal counsel to incorporate these considerations into strategic planning and inter-company negotiations. 
In many older industries (whose existence may predate IT automation) such as manufacturing, shipping or banking, an executive role of the CTO would often arise out of the process of automating existing activities; in these cases, any CTO-like role would only emerge if and when efforts would be made to develop truly novel technologies (either for facilitating internal operations or for enhancing products/services being provided), perhaps through "intrapreneuring". == See also == Chief creative officer Chief executive officer Chief innovation officer (CINO or CTIO) Chief scientific officer Chief security officer Chief AI officer == References == == Further reading == Pratt, Mary K (22 January 2007). "The CTO: IT's Chameleon". Computerworld.com. Berray, Tom; Sampath, Raj (2002). "The Role of the CTO, four models for success" (PDF). Archived from the original (PDF) on 2017-08-30. Retrieved 2009-07-06. Medcof, John W.; Yousofpourfard, Haniyeh (2006). "The CTO and Organizational Power and Influence" (PDF). International Association for Management of Technology. Archived from the original (PDF) on 2016-03-04. Retrieved 2013-07-17. Noble, Jason (2018). "Day in the life of a CTO" . CTO Academy
https://en.wikipedia.org/wiki/Chief_technology_officer
Food technology is a branch of food science that addresses the production, preservation, quality control and research and development of food products. It may also be understood as the science of ensuring that a society is food secure and has access to safe food that meets quality standards. Early scientific research into food technology concentrated on food preservation. Nicolas Appert's development in 1810 of the canning process was a decisive event. The process was not called canning then, and Appert did not really know the principle on which his process worked, but canning has had a major impact on food preservation techniques. Louis Pasteur's research on the spoilage of wine and his description in 1864 of how to avoid spoilage was an early attempt to apply scientific knowledge to food handling. Besides research into wine spoilage, Pasteur researched the production of alcohol, vinegar, wines and beer, and the souring of milk. He developed pasteurization – the process of heating milk and milk products to destroy food spoilage and disease-producing organisms. Through his research into food technology, Pasteur became a pioneer of bacteriology and of modern preventive medicine. == Developments == Developments in food technology have contributed greatly to the food supply and have changed our world. Some of these developments are: Instantized milk powder – Instant milk powder has become the basis for a variety of new products that are rehydratable. This process increases the surface area of the powdered product by partially rehydrating spray-dried milk powder. Freeze-drying – The first application of freeze drying was most likely in the pharmaceutical industry; however, a successful large-scale industrial application of the process was the development of continuous freeze drying of coffee. 
High-temperature short time processing – These processes, for the most part, are characterized by rapid heating and cooling, holding for a short time at a relatively high temperature and filling aseptically into sterile containers. Decaffeination of coffee and tea – Decaffeinated coffee and tea was first developed on a commercial basis in Europe around 1900. The process is described in U.S. patent 897,763. Green coffee beans are treated with water, heat and solvents to remove the caffeine from the beans. Process optimization – Food technology now allows production of foods to be more efficient; oil-saving technologies are now available in different forms. Production methods and methodology have also become increasingly sophisticated. Aseptic packaging – the process of filling a commercially sterile product into a sterile container and hermetically sealing the containers so that re-infection is prevented. This results in a shelf-stable product at ambient conditions. Food irradiation – the process of exposing food and food packaging to ionizing radiation can effectively destroy organisms responsible for spoilage and foodborne illness and inhibit sprouting, extending shelf life. Commercial fruit ripening rooms using ethylene as a plant hormone. Food delivery – An order is typically made either through a restaurant or grocer's website or mobile app, or through a food ordering company. The ordered food is typically delivered in boxes or bags to the customer's doorsteps. == Categories == Technology has driven innovation in these categories of the food industry: Agricultural technology – or AgTech, it is the use of technology in agriculture, horticulture, and aquaculture with the aim of improving yield, efficiency, and profitability. Agricultural technology can be products, services or applications derived from agriculture that improve various input/output processes. 
Food science – technology in this sector focuses on the development of new functional ingredients and alternative proteins. Foodservice – technology has innovated the way establishments prepare, supply, and serve food outside the home. There is a tendency to create the conditions for the restaurant of the future with robotics and cloud kitchens. Consumer tech – technology enables what we call consumer electronics, equipping consumers with devices that facilitate the cooking process. Food delivery – as the food delivery market is growing, companies and startups are rapidly revolutionizing the communication process between consumers and food establishments, with platform-to-consumer delivery as the global lead. Supply chain – supply chain activities are increasingly moving from digitization to automation. == Emerging technologies == Innovation in the food sector may include, for example, new types of raw-material processing technology, packaging of products, and new food additives. Applying new solutions may reduce or prevent adverse changes caused by microorganisms, oxidation of food ingredients, and enzymatic and nonenzymatic reactions. Moreover, healthier and more nutritious food may be delivered, and food may taste better thanks to improvements in food composition, including organoleptic changes, and changes in the perception and pleasure of eating food. In the 21st century, emerging technologies such as cellular agriculture, particularly cultured meat, 3D food printing, use of insect protein, plant-based alternatives, vertical farming, food deliveries and blockchain technology are being developed to accelerate the transformation towards sustainable food systems. === Alternative protein sources === With the global population expected to reach 9.7 billion by 2050, there is an urgent need for alternative protein sources that are sustainable, nutritious, and environmentally friendly. 
Plant-based proteins are gaining popularity as they require fewer resources and produce fewer greenhouse gas emissions compared to animal-based proteins. Companies like Beyond Meat and Impossible Foods have developed plant-based meat alternatives that mimic the taste and texture of traditional meat products. === Food waste reduction === Approximately one-third of all food produced globally is wasted. Innovative food tech solutions are being developed to address this issue. For example, Apeel Sciences has developed an edible coating that extends the shelf life of fruits and vegetables, reducing spoilage and waste. == Consumer acceptance == Historically, consumers paid little attention to food technologies. Nowadays, the food production chain is long and complicated and food technologies are diverse. Consequently, consumers are uncertain about the determinants of food quality and find it difficult to understand them. Now, acceptance of food products very often depends on perceived benefits and risks associated with food. Popular views of food processing technologies matter. Innovative food processing technologies, in particular, are often perceived as risky by consumers. Acceptance of the different food technologies varies. While pasteurization is well recognized and accepted, high pressure treatment and even microwaves often are perceived as risky. Studies by the HighTech Europe project found that traditional technologies were well accepted in contrast to innovative technologies. Consumers form their attitude towards innovative food technologies through three main mechanisms: first, through knowledge or beliefs about risks and benefits correlated with the technology; second, through attitudes based on their own experience; and third, through application of higher order values and beliefs. 
A number of scholars consider the risk-benefit trade-off as one of the main determinants of consumer acceptance, although some researchers place more emphasis on the role of benefit perception (rather than risk) in consumer acceptance. Rogers (2010) defines five major criteria that explain differences in the acceptance of new technology by consumers: complexity, compatibility, relative advantage, trialability and observability. Acceptance of innovative technologies can be improved by providing non-emotional and concise information about these new technological processing methods. The HighTech Europe project also suggests that written information has a higher impact on consumers than audio-visual information. == Publications == Food and Bioprocess Technology Food Technology LWT - Food Science and Technology == See also == Agricultural technology Food biotechnology Food packaging Food grading Molecular gastronomy Optical sorting Standard components (food processing) List of food and drink awards § Food technology awards == General references == Hans-Jürgen Bässler and Frank Lehmann: Containment Technology: Progress in the Pharmaceutical and Food Processing Industry. Springer, Berlin 2013, ISBN 978-3642392917 == References == == External links == Media related to Food technology at Wikimedia Commons
https://en.wikipedia.org/wiki/Food_technology
Radio is the technology of communicating using radio waves. Radio waves are electromagnetic waves of frequency between 3 hertz (Hz) and 300 gigahertz (GHz). They are generated by an electronic device called a transmitter connected to an antenna which radiates the waves. They can be received by other antennas connected to a radio receiver; this is the fundamental principle of radio communication. In addition to communication, radio is used for radar, radio navigation, remote control, remote sensing, and other applications. In radio communication, used in radio and television broadcasting, cell phones, two-way radios, wireless networking, and satellite communication, among numerous other uses, radio waves are used to carry information across space from a transmitter to a receiver, by modulating the radio signal (impressing an information signal on the radio wave by varying some aspect of the wave) in the transmitter. In radar, used to locate and track objects like aircraft, ships, spacecraft and missiles, a beam of radio waves emitted by a radar transmitter reflects off the target object, and the reflected waves reveal the object's location to a receiver that is typically colocated with the transmitter. In radio navigation systems such as GPS and VOR, a mobile navigation instrument receives radio signals from multiple navigational radio beacons whose position is known, and by precisely measuring the arrival time of the radio waves the receiver can calculate its position on Earth. In wireless radio remote control devices like drones, garage door openers, and keyless entry systems, radio signals transmitted from a controller device control the actions of a remote device. The existence of radio waves was first proven by German physicist Heinrich Hertz on 11 November 1886. 
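Frequency and wavelength are related by λ = c/f, with c the speed of light; a minimal Python sketch checking the band edges quoted above (the function name is illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength(frequency_hz: float) -> float:
    """Wavelength in metres of an electromagnetic wave at the given frequency."""
    return C / frequency_hz

# The radio spectrum spans 3 Hz to 300 GHz, i.e. wavelengths of
# roughly 100,000 km down to about 1 mm.
print(wavelength(3.0))    # ≈ 1.0e8 m
print(wavelength(300e9))  # ≈ 1.0e-3 m
```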
In the mid-1890s, building on techniques physicists were using to study electromagnetic waves, Italian physicist Guglielmo Marconi developed the first apparatus for long-distance radio communication, sending a wireless Morse code message to a recipient over a kilometer away in 1895, and the first transatlantic signal on 12 December 1901. The first commercial radio broadcast was transmitted on 2 November 1920, when the live returns of the Harding-Cox presidential election were broadcast by Westinghouse Electric and Manufacturing Company in Pittsburgh, under the call sign KDKA. The emission of radio waves is regulated by law, coordinated by the International Telecommunication Union (ITU), which allocates frequency bands in the radio spectrum for various uses. == Etymology == The word radio is derived from the Latin word radius, meaning "spoke of a wheel, beam of light, ray." It was first applied to communications in 1881 when, at the suggestion of French scientist Ernest Mercadier, Alexander Graham Bell adopted radiophone (meaning "radiated sound") as an alternate name for his photophone optical transmission system. Following Hertz's discovery of the existence of radio waves in 1886, the term Hertzian waves was initially used for this radiation. The first practical radio communication systems, developed by Marconi in 1894–1895, transmitted telegraph signals by radio waves, so radio communication was first called wireless telegraphy. Up until about 1910 the term wireless telegraphy also included a variety of other experimental systems for transmitting telegraph signals without wires, including electrostatic induction, electromagnetic induction and aquatic and earth conduction, so there was a need for a more precise term referring exclusively to electromagnetic radiation. The French physicist Édouard Branly, who in 1890 developed the radio wave detecting coherer, called it in French a radio-conducteur. 
The radio- prefix was later used to form additional descriptive compound and hyphenated words, especially in Europe. For example, in early 1898 the British publication The Practical Engineer included a reference to the radiotelegraph and radiotelegraphy. The use of radio as a standalone word dates back to at least 30 December 1904, when instructions issued by the British Post Office for transmitting telegrams specified that "The word 'Radio'... is sent in the Service Instructions." This practice was universally adopted, and the word "radio" introduced internationally, by the 1906 Berlin Radiotelegraphic Convention, which included a Service Regulation specifying that "Radiotelegrams shall show in the preamble that the service is 'Radio'". The switch to radio in place of wireless took place slowly and unevenly in the English-speaking world. Lee de Forest helped popularize the new word in the United States—in early 1907, he founded the DeForest Radio Telephone Company, and his letter in the 22 June 1907 Electrical World about the need for legal restrictions warned that "Radio chaos will certainly be the result until such stringent regulation is enforced." The United States Navy would also play a role. Although its translation of the 1906 Berlin Convention used the terms wireless telegraph and wireless telegram, by 1912 it began to promote the use of radio instead. The term started to become preferred by the general public in the 1920s with the introduction of broadcasting. == History == Electromagnetic waves were predicted by James Clerk Maxwell in his 1873 theory of electromagnetism, now called Maxwell's equations, who proposed that a coupled oscillating electric field and magnetic field could travel through space as a wave, and proposed that light consisted of electromagnetic waves of short wavelength. 
On 11 November 1886, German physicist Heinrich Hertz, attempting to confirm Maxwell's theory, first observed radio waves he generated using a primitive spark-gap transmitter. Experiments by Hertz and physicists Jagadish Chandra Bose, Oliver Lodge, Lord Rayleigh, and Augusto Righi, among others, showed that radio waves, like light, demonstrated reflection, refraction, diffraction, polarization, and standing waves, and traveled at the same speed as light, confirming that both light and radio waves were electromagnetic waves, differing only in frequency. In 1895, Guglielmo Marconi developed the first radio communication system, using a spark-gap transmitter to send Morse code over long distances. By December 1901, he had transmitted across the Atlantic Ocean. Marconi and Karl Ferdinand Braun shared the 1909 Nobel Prize in Physics "for their contributions to the development of wireless telegraphy". During radio's first two decades, called the radiotelegraphy era, the primitive radio transmitters could only transmit pulses of radio waves, not the continuous waves which were needed for audio modulation, so radio was used for person-to-person commercial, diplomatic and military text messaging. Starting around 1908 industrial countries built worldwide networks of powerful transoceanic transmitters to exchange telegram traffic between continents and communicate with their colonies and naval fleets. During World War I, the development of continuous wave radio transmitters and of rectifying electrolytic and crystal radio receiver detectors enabled amplitude modulation (AM) radiotelephony to be achieved by Reginald Fessenden and others, allowing audio to be transmitted. On 2 November 1920, the first commercial radio broadcast was transmitted by Westinghouse Electric and Manufacturing Company in Pittsburgh, under the call sign KDKA, featuring live coverage of the Harding-Cox presidential election. == Technology == Radio waves are radiated by electric charges undergoing acceleration. 
They are generated artificially by time-varying electric currents, consisting of electrons flowing back and forth in a metal conductor called an antenna. As they travel farther from the transmitting antenna, radio waves spread out so their signal strength (intensity in watts per square meter) decreases (see Inverse-square law), so radio transmissions can only be received within a limited range of the transmitter, the distance depending on the transmitter power, the antenna radiation pattern, receiver sensitivity, background noise level, and presence of obstructions between transmitter and receiver. An omnidirectional antenna transmits or receives radio waves in all directions, while a directional antenna transmits radio waves in a beam in a particular direction, or receives waves from only one direction. Radio waves travel at the speed of light in vacuum and at slightly lower velocity in air. The other types of electromagnetic waves besides radio waves, infrared, visible light, ultraviolet, X-rays and gamma rays, can also carry information and be used for communication. The wide use of radio waves for telecommunication is mainly due to their desirable propagation properties stemming from their longer wavelength. Radio waves have the ability to pass through the atmosphere in any weather, foliage, and at longer wavelengths through most building materials. By diffraction, longer wavelengths can bend around obstructions, and unlike other electromagnetic waves they tend to be scattered rather than absorbed by objects larger than their wavelength. == Radio communication == In radio communication systems, information is carried across space using radio waves. At the sending end, the information to be sent is converted by some type of transducer to a time-varying electrical signal called the modulation signal. 
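The inverse-square spreading described above can be sketched for an idealized isotropic radiator in free space with no obstructions (names and values are illustrative):

```python
import math

def intensity(power_w: float, distance_m: float) -> float:
    """Intensity (W/m^2) at a given distance from an isotropic radiator:
    the transmitted power spreads over a sphere of area 4*pi*r^2."""
    return power_w / (4 * math.pi * distance_m ** 2)

# Doubling the distance quarters the received intensity.
ratio = intensity(100.0, 1_000.0) / intensity(100.0, 2_000.0)
print(ratio)  # 4.0
```

Real links deviate from this ideal because of antenna gain, ground reflections, and obstructions, which is why the article lists those factors alongside transmitter power.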
The modulation signal may be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal consisting of a sequence of bits representing binary data from a computer. The modulation signal is applied to a radio transmitter. In the transmitter, an electronic oscillator generates an alternating current oscillating at a radio frequency, called the carrier wave because it serves to generate the radio waves that carry the information through the air. The modulation signal is used to modulate the carrier, varying some aspect of the carrier wave, impressing the information in the modulation signal onto the carrier. Different radio systems use different modulation methods: Amplitude modulation (AM) – in an AM transmitter, the amplitude (strength) of the radio carrier wave is varied by the modulation signal; Frequency modulation (FM) – in an FM transmitter, the frequency of the radio carrier wave is varied by the modulation signal; Frequency-shift keying (FSK) – used in wireless digital devices to transmit digital signals, the frequency of the carrier wave is shifted between frequencies; Orthogonal frequency-division multiplexing (OFDM) – a family of digital modulation methods widely used in high-bandwidth systems such as Wi-Fi networks, cellphones, digital television broadcasting, and digital audio broadcasting (DAB) to transmit digital data using a minimum of radio spectrum bandwidth. It has higher spectral efficiency and more resistance to fading than AM or FM. In OFDM, multiple radio carrier waves closely spaced in frequency are transmitted within the radio channel, with each carrier modulated with bits from the incoming bitstream so multiple bits are being sent simultaneously, in parallel. At the receiver, the carriers are demodulated and the bits are combined in the proper order into one bitstream. Many other types of modulation are also used. 
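As an illustration of the AM case above, a minimal pure-Python sketch generates an amplitude-modulated test signal; the carrier and modulation frequencies are illustrative values, not from the article:

```python
import math

def am_sample(t: float, carrier_hz: float, mod_hz: float,
              mod_index: float = 0.5) -> float:
    """One sample of an AM signal: the carrier's amplitude is varied
    by the (here sinusoidal) modulation signal."""
    envelope = 1.0 + mod_index * math.sin(2 * math.pi * mod_hz * t)
    return envelope * math.sin(2 * math.pi * carrier_hz * t)

# A 1 kHz tone on a 50 kHz carrier, sampled at 1 MHz for 2 ms.
samples = [am_sample(n / 1e6, 50e3, 1e3) for n in range(2000)]
peak = max(abs(s) for s in samples)
print(round(peak, 2))  # 1.5: the envelope swings between 0.5 and 1.5
```

An FM transmitter would instead hold the amplitude constant and let the modulation signal shift the instantaneous carrier frequency.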
In some types, the carrier wave is suppressed, and only one or both modulation sidebands are transmitted. The modulated carrier is amplified in the transmitter and applied to a transmitting antenna which radiates the energy as radio waves. The radio waves carry the information to the receiver location. At the receiver, the radio wave induces a tiny oscillating voltage in the receiving antenna – a weaker replica of the current in the transmitting antenna. This voltage is applied to the radio receiver, which amplifies the weak radio signal so it is stronger, then demodulates it, extracting the original modulation signal from the modulated carrier wave. The modulation signal is converted by a transducer back to a human-usable form: an audio signal is converted to sound waves by a loudspeaker or earphones, a video signal is converted to images by a display, while a digital signal is applied to a computer or microprocessor, which interacts with human users. The radio waves from many transmitters pass through the air simultaneously without interfering with each other because each transmitter's radio waves oscillate at a different frequency, measured in hertz (Hz), kilohertz (kHz), megahertz (MHz) or gigahertz (GHz). The receiving antenna typically picks up the radio signals of many transmitters. The receiver uses tuned circuits to select the radio signal desired out of all the signals picked up by the antenna and reject the others. A tuned circuit acts like a resonator, similar to a tuning fork. It has a natural resonant frequency at which it oscillates. The resonant frequency of the receiver's tuned circuit is adjusted by the user to the frequency of the desired radio station; this is called tuning. The oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. Radio signals at other frequencies are blocked by the tuned circuit and not passed on. 
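The tuned circuit's natural frequency follows from its inductance L and capacitance C as f0 = 1/(2π√(LC)); a quick sketch with illustrative component values:

```python
import math

def resonant_frequency(l_henry: float, c_farad: float) -> float:
    """Natural resonant frequency of an ideal LC circuit: 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

# A 200 uH coil with a 320 pF variable capacitor resonates near 629 kHz,
# inside the AM broadcast band; varying C is what "tuning" does.
f0 = resonant_frequency(200e-6, 320e-12)
print(round(f0 / 1e3))  # ≈ 629 (kHz)
```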
=== Bandwidth === A modulated radio wave, carrying an information signal, occupies a range of frequencies. The information in a radio signal is usually concentrated in narrow frequency bands called sidebands (SB) just above and below the carrier frequency. The width in hertz of the frequency range that the radio signal occupies, the highest frequency minus the lowest frequency, is called its bandwidth (BW). For any given signal-to-noise ratio, a given bandwidth can carry the same amount of information regardless of where in the radio frequency spectrum it is located; bandwidth is a measure of information-carrying capacity. The bandwidth required by a radio transmission depends on the data rate of the information being sent, and the spectral efficiency of the modulation method used; how much data it can transmit in each unit of bandwidth. Different types of information signals carried by radio have different data rates. For example, a television signal has a greater data rate than an audio signal. The radio spectrum, the total range of radio frequencies that can be used for communication in a given area, is a limited resource. Each radio transmission occupies a portion of the total bandwidth available. Radio bandwidth is regarded as an economic good which has a monetary cost and is in increasing demand. In some parts of the radio spectrum, the right to use a frequency band or even a single radio channel is bought and sold for millions of dollars. So there is an incentive to employ technology to minimize the bandwidth used by radio services. A slow transition from analog to digital radio transmission technologies began in the late 1990s. Part of the reason for this is that digital modulation can often transmit more information (a greater data rate) in a given bandwidth than analog modulation, by using data compression algorithms, which reduce redundancy in the data to be sent, and more efficient modulation. 
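The bandwidth–capacity relationship described above is quantified by the Shannon–Hartley theorem, C = B·log2(1 + S/N); a minimal sketch:

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum error-free data rate (bit/s) of a
    channel with the given bandwidth and linear signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 10 kHz channel at 30 dB SNR (linear ratio 1000) tops out near
# 100 kbit/s, regardless of where in the spectrum it sits.
c = channel_capacity(10e3, 1000)
print(round(c))  # 99672 bit/s
```

The formula makes the article's point explicit: capacity depends only on bandwidth and signal-to-noise ratio, not on the channel's position in the spectrum.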
Other reasons for the transition are that digital modulation has greater noise immunity than analog, digital signal processing chips have more power and flexibility than analog circuits, and a wide variety of types of information can be transmitted using the same digital modulation. Because it is a fixed resource in demand by an increasing number of users, the radio spectrum has become increasingly congested in recent decades, and the need to use it more effectively is driving many additional radio innovations such as trunked radio systems, spread spectrum (ultra-wideband) transmission, frequency reuse, dynamic spectrum management, frequency pooling, and cognitive radio.

=== ITU frequency bands ===

The ITU arbitrarily divides the radio spectrum into 12 bands, each beginning at a wavelength which is a power of ten (10ⁿ) metres, with corresponding frequency of 3 times a power of ten, and each covering a decade of frequency or wavelength. Each of these bands has a traditional name; from lowest to highest frequency they are: extremely low frequency (ELF, 3–30 Hz), super low frequency (SLF, 30–300 Hz), ultra low frequency (ULF, 300–3,000 Hz), very low frequency (VLF, 3–30 kHz), low frequency (LF, 30–300 kHz), medium frequency (MF, 300–3,000 kHz), high frequency (HF, 3–30 MHz), very high frequency (VHF, 30–300 MHz), ultra high frequency (UHF, 300–3,000 MHz), super high frequency (SHF, 3–30 GHz), extremely high frequency (EHF, 30–300 GHz), and tremendously high frequency (THF, 300–3,000 GHz). It can be seen that the bandwidth, the range of frequencies, contained in each band is not equal but increases exponentially as the frequency increases; each band contains ten times the bandwidth of the preceding band. The term "tremendously low frequency" (TLF) has been used for frequencies of 1–3 Hz (wavelengths of 300,000–100,000 km), though the term has not been defined by the ITU.

== Regulation ==

The airwaves are a resource shared by many users. Two radio transmitters in the same area that attempt to transmit on the same frequency will interfere with each other, causing garbled reception, so neither transmission may be received clearly. Interference with radio transmissions can not only have a large economic cost, but it can also be life-threatening (for example, in the case of interference with emergency communications or air traffic control).
To prevent interference between different users, the emission of radio waves is strictly regulated by national laws, coordinated by an international body, the International Telecommunication Union (ITU), which allocates bands in the radio spectrum for different uses. Radio transmitters must be licensed by governments, under a variety of license classes depending on use, and are restricted to certain frequencies and power levels. In some classes, such as radio and television broadcasting stations, the transmitter is given a unique identifier consisting of a string of letters and numbers called a call sign, which must be used in all transmissions. In order to adjust, maintain, or internally repair radiotelephone transmitters, individuals must hold a government license, such as the general radiotelephone operator license in the US, obtained by taking a test demonstrating adequate technical and legal knowledge of safe radio operation. Exceptions to these rules allow the unlicensed operation by the public of low-power short-range transmitters in consumer products such as cell phones, cordless phones, wireless devices, walkie-talkies, citizens band radios, wireless microphones, garage door openers, and baby monitors. In the US, these fall under Part 15 of the Federal Communications Commission (FCC) regulations. Many of these devices use the ISM bands, a series of frequency bands throughout the radio spectrum reserved for unlicensed use. Although they can be operated without a license, like all radio equipment these devices generally must be type-approved before sale.

== Applications ==

Below are some of the most important uses of radio, organized by function.

=== Broadcasting ===

Broadcasting is the one-way transmission of information from a transmitter to receivers belonging to a public audience. Since radio waves become weaker with distance, a broadcasting station can only be received within a limited distance of its transmitter.
Systems that broadcast from satellites can generally be received over an entire country or continent. Older terrestrial radio and television are paid for by commercial advertising or governments. In subscription systems like satellite television and satellite radio the customer pays a monthly fee. In these systems, the radio signal is encrypted and can only be decrypted by the receiver, which is controlled by the company and can be deactivated if the customer does not pay. Broadcasting uses several parts of the radio spectrum, depending on the type of signals transmitted and the desired target audience. Longwave and medium wave signals can give reliable coverage of areas several hundred kilometers across, but have a more limited information-carrying capacity and so work best with audio signals (speech and music), and the sound quality can be degraded by radio noise from natural and artificial sources. The shortwave bands have a greater potential range but are more subject to interference by distant stations and varying atmospheric conditions that affect reception. In the very high frequency band, greater than 30 megahertz, the Earth's atmosphere has less of an effect on the range of signals, and line-of-sight propagation becomes the principal mode. These higher frequencies permit the great bandwidth required for television broadcasting. Since natural and artificial noise sources are less present at these frequencies, high-quality audio transmission is possible, using frequency modulation.

==== Audio: Radio broadcasting ====

Radio broadcasting means transmission of audio (sound) to radio receivers belonging to a public audience. Analog audio is the earliest form of radio broadcast. AM broadcasting began around 1920. FM broadcasting was introduced in the late 1930s with improved fidelity. A broadcast radio receiver is called a radio. Most radios can receive both AM and FM.
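The two analog schemes just named differ in which property of the carrier the audio varies: its amplitude (AM) or its frequency (FM). A toy numerical sketch, using scaled-down frequencies rather than real broadcast frequencies:

```python
import math

F_CARRIER = 1000.0  # carrier frequency in Hz (toy value, scaled down for illustration)
F_AUDIO = 50.0      # modulating audio tone in Hz
RATE = 100_000      # samples per second

def am_sample(t, m=0.5):
    # AM: the audio tone varies the carrier's amplitude (its envelope)
    return (1 + m * math.sin(2 * math.pi * F_AUDIO * t)) * math.cos(2 * math.pi * F_CARRIER * t)

def fm_sample(t, beta=5.0):
    # FM: the audio tone varies the carrier's instantaneous frequency;
    # the amplitude stays constant
    return math.cos(2 * math.pi * F_CARRIER * t + beta * math.sin(2 * math.pi * F_AUDIO * t))

am = [am_sample(i / RATE) for i in range(RATE // 10)]  # 0.1 s of signal
fm = [fm_sample(i / RATE) for i in range(RATE // 10)]
```

The AM waveform's peak amplitude swings above and below the unmodulated carrier level, while the FM waveform keeps a constant amplitude; this is one reason FM is less affected by amplitude noise such as static.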
AM (amplitude modulation) – in AM, the amplitude (strength) of the radio carrier wave is varied by the audio signal. AM broadcasting, the oldest broadcasting technology, is allowed in the AM broadcast bands, between 148 and 283 kHz in the low frequency (LF) band for longwave broadcasts and between 526 and 1706 kHz in the medium frequency (MF) band for medium-wave broadcasts. Because waves in these bands travel as ground waves following the terrain, AM radio stations can be received beyond the horizon at distances of hundreds of miles, but AM has lower fidelity than FM. Radiated power (ERP) of AM stations in the US is usually limited to a maximum of 10 kW, although a few (clear-channel stations) are allowed to transmit at 50 kW. AM stations broadcast in monaural audio; AM stereo broadcast standards exist in most countries, but the radio industry has failed to upgrade to them, due to lack of demand.

Shortwave broadcasting – AM broadcasting is also allowed in the shortwave bands by legacy radio stations. Since radio waves in these bands can travel intercontinental distances by reflecting off the ionosphere using skywave or "skip" propagation, shortwave is used by international stations broadcasting to other countries.

FM (frequency modulation) – in FM the frequency of the radio carrier signal is varied slightly by the audio signal. FM broadcasting is permitted in the FM broadcast bands between about 65 and 108 MHz in the very high frequency (VHF) range. Radio waves in this band travel by line-of-sight, so FM reception is limited by the visual horizon to about 30–40 miles (48–64 km) and can be blocked by hills. However, FM is less susceptible to interference from radio noise (RFI, sferics, static), and has higher fidelity, better frequency response, and less audio distortion than AM. In the US, radiated power (ERP) of FM stations varies from 6–100 kW.

Digital radio involves a variety of standards and technologies for broadcasting digital radio signals over the air.
Some systems, such as HD Radio and DRM, operate in the same wavebands as analog broadcasts, either as a replacement for analog stations or as a complementary service. Others, such as DAB/DAB+ and ISDB-Tsb, operate in wavebands traditionally used for television or satellite services.

Digital Audio Broadcasting (DAB) debuted in some countries in 1998. It transmits audio as a digital signal rather than an analog signal as AM and FM do. DAB has the potential to provide higher quality sound than FM (although many stations do not choose to transmit at such high quality), has greater immunity to radio noise and interference, makes better use of scarce radio spectrum bandwidth, and provides advanced user features such as electronic program guides. Its disadvantage is that it is incompatible with previous radios, so a new DAB receiver must be purchased. Several nations have set dates to switch off analog FM networks in favor of DAB/DAB+, notably Norway in 2017 and Switzerland in 2024. A single DAB station transmits a 1,500 kHz bandwidth signal that carries 9–12 channels of digital audio modulated by OFDM from which the listener can choose. Broadcasters can transmit a channel at a range of different bit rates, so different channels can have different audio quality. In different countries DAB stations broadcast in either Band III (174–240 MHz) or L band (1.452–1.492 GHz) in the UHF range, so, like FM, reception is limited by the visual horizon to about 40 miles (64 km).

HD Radio is an alternative digital radio standard widely implemented in North America. An in-band on-channel technology, HD Radio broadcasts a digital signal in a subcarrier of a station's analog FM or AM signal. Stations are able to multicast more than one audio signal in the subcarrier, supporting the transmission of multiple audio services at varying bitrates. The digital signal is transmitted using OFDM with the HDC (High-Definition Coding) proprietary audio compression format.
HDC is based on, but not compatible with, the MPEG-4 standard HE-AAC. It uses a modified discrete cosine transform (MDCT) audio data compression algorithm.

Digital Radio Mondiale (DRM) is a competing digital terrestrial radio standard developed mainly by broadcasters as a higher spectral efficiency replacement for legacy AM and FM broadcasting. Mondiale means "worldwide" in French and Italian; DRM was developed in 2001, is currently supported by 23 countries, and was adopted by some European and Eastern broadcasters beginning in 2003. The DRM30 mode uses the commercial broadcast bands below 30 MHz and is intended as a replacement for standard AM broadcast on the longwave, mediumwave, and shortwave bands. The DRM+ mode uses VHF frequencies centered around the FM broadcast band and is intended as a replacement for FM broadcasting. DRM is incompatible with existing radio receivers, so it requires listeners to purchase a new DRM receiver. The modulation used is a form of OFDM called COFDM, in which up to 4 carriers are transmitted on a channel formerly occupied by a single AM or FM signal, modulated by quadrature amplitude modulation (QAM). The DRM system is designed to be as compatible as possible with existing AM and FM radio transmitters, so that much of the equipment in existing radio stations can continue in use, augmented with DRM modulation equipment.

Satellite radio is a subscription radio service that broadcasts CD-quality digital audio direct to subscribers' receivers using a microwave downlink signal from a direct broadcast communication satellite in geostationary orbit 22,000 miles (35,000 km) above the Earth. It is mostly intended for radios in vehicles. Satellite radio uses the 2.3 GHz S band in North America; in other parts of the world it uses the 1.4 GHz L band allocated for DAB.

==== Audio/video: Television broadcasting ====

Television broadcasting is the transmission of moving images along with a synchronized audio (sound) channel by radio.
The sequence of still images is displayed on a screen on a television receiver (a "television" or TV), which includes a loudspeaker. Television (video) signals occupy a wider bandwidth than broadcast radio (audio) signals. Analog television, the original television technology, required 6 MHz, so the television frequency bands are divided into 6 MHz channels, now called "RF channels". The current television standard, introduced beginning in 2006, is a digital format called high-definition television (HDTV), which transmits pictures at higher resolution, typically 1080 pixels high by 1920 pixels wide, at a rate of 25 or 30 frames per second. Digital television (DTV) transmission systems, which replaced older analog television in a transition beginning in 2006, use image compression and high-efficiency digital modulation such as OFDM and 8VSB to transmit HDTV video within a smaller bandwidth than the old analog channels, saving scarce radio spectrum space. Therefore, each of the 6 MHz analog RF channels now carries up to 7 DTV channels – these are called "virtual channels". Digital television receivers have different behavior in the presence of poor reception or noise than analog television, called the "digital cliff" effect. Unlike analog television, in which increasingly poor reception causes the picture quality to gradually degrade, in digital television picture quality is not affected by poor reception until, at a certain point, the receiver stops working and the screen goes black.

Terrestrial television, over-the-air (OTA) television, or broadcast television, the oldest television technology, is the transmission of television signals from land-based television stations to television receivers (called televisions or TVs) in viewers' homes.
Terrestrial television broadcasting uses the bands 41–88 MHz (VHF low band or Band I, carrying RF channels 1–6), 174–240 MHz (VHF high band or Band III, carrying RF channels 7–13), and 470–614 MHz (UHF Band IV and Band V, carrying RF channels 14 and up). The exact frequency boundaries vary in different countries. Propagation is by line-of-sight, so reception is limited by the visual horizon. In the US, the effective radiated power (ERP) of television transmitters is regulated according to height above average terrain. Viewers closer to the television transmitter can use a simple "rabbit ears" dipole antenna on top of the TV, but viewers in fringe reception areas typically require an outdoor antenna mounted on the roof to get adequate reception.

Satellite television – a set-top box which receives subscription direct-broadcast satellite television and displays it on an ordinary television. A direct broadcast satellite in geostationary orbit 22,200 miles (35,700 km) above the Earth's equator transmits many channels (up to 900) modulated on a 12.2 to 12.7 GHz Ku band microwave downlink signal to a rooftop satellite dish antenna on the subscriber's residence. The microwave signal is converted to a lower intermediate frequency at the dish and conducted into the building by a coaxial cable to a set-top box connected to the subscriber's TV, where it is demodulated and displayed. The subscriber pays a monthly fee.

==== Time and frequency ====

Government standard frequency and time signal services operate time radio stations which continuously broadcast extremely accurate time signals produced by atomic clocks, as a reference to synchronize other clocks. Examples are BPC, DCF77, JJY, MSF, RTZ, TDF, WWV, and YVTO.
One use is in radio clocks and watches, which include an automated receiver that periodically (usually weekly) receives and decodes the time signal and resets the watch's internal quartz clock to the correct time, thus allowing a small watch or desk clock to have the same accuracy as an atomic clock. Government time stations are declining in number because GPS satellites and the Internet Network Time Protocol (NTP) provide equally accurate time standards.

=== Voice communication ===

==== Two-way voice communication ====

A two-way radio is an audio transceiver, a receiver and transmitter in the same device, used for bidirectional person-to-person voice communication with other users with similar radios. An older term for this mode of communication is radiotelephony. The radio link may be half-duplex, as in a walkie-talkie, using a single radio channel in which only one radio can transmit at a time, so different users take turns talking, pressing a "push to talk" button on their radio which switches off the receiver and switches on the transmitter. Or the radio link may be full-duplex, a bidirectional link using two radio channels so both people can talk at the same time, as in a cell phone.

Cell phone – a portable wireless telephone that is connected to the telephone network by radio signals exchanged with a local antenna at a cellular base station (cell tower). The service area covered by the provider is divided into small geographical areas called "cells", each served by a separate base station antenna and multichannel transceiver. All the cell phones in a cell communicate with this antenna on separate frequency channels, assigned from a common pool of frequencies. The purpose of cellular organization is to conserve radio bandwidth by frequency reuse. Low-power transmitters are used so the radio waves used in a cell do not travel far beyond the cell, allowing the same frequencies to be reused in geographically separated cells.
When a user carrying a cellphone crosses from one cell to another, the phone is automatically "handed off" seamlessly to the new antenna and assigned new frequencies. Cellphones have a highly automated full-duplex digital transceiver using OFDM modulation, with two digital radio channels, each carrying one direction of the bidirectional conversation, as well as a control channel that handles dialing calls and "handing off" the phone to another cell tower. Older 2G, 3G, and 4G networks use frequencies in the UHF and low microwave range, between 700 MHz and 3 GHz. The cell phone transmitter adjusts its power output to use the minimum power necessary to communicate with the cell tower: 0.6 W when near the tower, up to 3 W when farther away. Cell tower channel transmitter power is 50 W. Current-generation phones, called smartphones, have many functions besides making telephone calls, and therefore have several other radio transmitters and receivers that connect them with other networks: usually a Wi-Fi modem, a Bluetooth modem, and a GPS receiver.

5G cellular network – next-generation cellular networks which began deployment in 2019. Their major advantage is much higher data rates than previous cellular networks, up to 10 Gbps, 100 times faster than the previous cellular technology, 4G LTE. The higher data rates are achieved partly by using higher frequency radio waves, in the higher microwave band 3–6 GHz and the millimeter wave band around 28 and 39 GHz. Since these frequencies have a shorter range than previous cellphone bands, the cells will be smaller than the cells in previous cellular networks, which could be many miles across. Millimeter-wave cells will only be a few blocks long, and instead of a cell base station and antenna tower, they will have many small antennas attached to utility poles and buildings.
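The shorter range of the millimeter-wave bands can be illustrated with the free-space path loss formula, an idealization that ignores obstructions, rain fade, and antenna gains:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Loss over 1 km at a typical 4G band vs. a 5G millimeter-wave band.
loss_4g = free_space_path_loss_db(1000, 700e6)  # ~89 dB
loss_5g = free_space_path_loss_db(1000, 28e9)   # ~121 dB
print(f"extra loss at 28 GHz: {loss_5g - loss_4g:.1f} dB")
```

All else being equal, moving from 700 MHz to 28 GHz costs about 32 dB over the same distance, which is part of why millimeter-wave cells must be much smaller and denser.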
Satellite phone (satphone) – a portable wireless telephone similar to a cell phone, connected to the telephone network through a radio link to an orbiting communications satellite instead of through cell towers. Satphones are more expensive than cell phones, but their advantage is that, unlike a cell phone, which is limited to areas covered by cell towers, they can be used over most or all of the geographical area of the Earth. In order for the phone to communicate with a satellite using a small omnidirectional antenna, first-generation systems use satellites in low Earth orbit, about 400–700 miles (640–1,100 km) above the surface. With an orbital period of about 100 minutes, a satellite can only be in view of a phone for about 4–15 minutes, so the call is "handed off" to another satellite when one passes beyond the local horizon. Therefore, large numbers of satellites, about 40 to 70, are required to ensure that at least one satellite is in view continuously from each point on Earth. Other satphone systems use satellites in geostationary orbit, in which only a few satellites are needed, but these cannot be used at high latitudes, where the satellite sits too low on the horizon for a reliable link.

Cordless phone – a landline telephone in which the handset is portable and communicates with the rest of the phone by a short-range full-duplex radio link, instead of being attached by a cord. Both the handset and the base station have low-power radio transceivers that handle the short-range bidirectional radio link. As of 2022, cordless phones in most nations use the DECT transmission standard.

Land mobile radio system – short-range mobile or portable half-duplex radio transceivers operating in the VHF or UHF band that can be used without a license. They are often installed in vehicles, with the mobile units communicating with a dispatcher at a fixed base station.
Special systems with reserved frequencies are used by first responder services: police, fire, ambulance, and emergency services, and other government services. Other systems are made for use by commercial firms such as taxi and delivery services. VHF systems use channels in the range 30–50 MHz and 150–172 MHz. UHF systems use the 450–470 MHz band and in some areas the 470–512 MHz range. In general, VHF systems have a longer range than UHF but require longer antennas. AM or FM modulation is mainly used, but digital systems such as DMR are being introduced. The radiated power is typically limited to 4 watts. These systems have a fairly limited range, usually 3 to 20 miles (4.8 to 32 km) depending on terrain. Repeaters installed on tall buildings, hills, or mountain peaks are often used to increase the range when it is desired to cover a larger area than line-of-sight. Examples of land mobile systems are CB, FRS, GMRS, and MURS. Modern digital systems, called trunked radio systems, have a digital channel management system using a control channel that automatically assigns frequency channels to user groups.

Walkie-talkie – a battery-powered portable handheld half-duplex two-way radio, used in land mobile radio systems.

Airband – half-duplex radio system used by aircraft pilots to talk to other aircraft and ground-based air traffic controllers. This vital system is the main communication channel for air traffic control. For most communication in overland flights in air corridors, a VHF-AM system using channels between 108 and 137 MHz in the VHF band is used. This system has a typical transmission range of 200 miles (320 km) for aircraft flying at cruising altitude. For flights in more remote areas, such as transoceanic airline flights, aircraft use the HF band or channels on the Inmarsat or Iridium satphone satellites. Military aircraft also use a dedicated UHF-AM band from 225.0 to 399.95 MHz.
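The line-of-sight ranges quoted for these VHF and UHF services follow from the radio horizon. A common rule of thumb, assuming the usual 4/3 effective Earth radius to account for typical atmospheric refraction, is d ≈ 4.12√h:

```python
import math

def radio_horizon_km(antenna_height_m: float) -> float:
    """Approximate radio horizon distance using the 4/3-Earth-radius
    rule of thumb: d (km) ~= 4.12 * sqrt(h in metres)."""
    return 4.12 * math.sqrt(antenna_height_m)

# An aircraft at a ~10,000 m cruising altitude can reach a ground
# station roughly 412 km (~256 mi) away.
d_aircraft = radio_horizon_km(10000)

# A handheld radio held at about 1.5 m reaches only a few kilometres
# to another ground-level station, hence the value of hilltop repeaters.
d_handheld = radio_horizon_km(1.5)
```

This simple model explains why aircraft at altitude enjoy 200-mile VHF ranges while ground-level handhelds need repeaters on tall buildings or mountain peaks to cover a wide area.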
Marine radio – medium-range transceivers on ships, used for ship-to-ship, ship-to-air, and ship-to-shore communication with harbormasters. They use FM channels between 156 and 174 MHz in the VHF band with up to 25 watts power, giving them a range of about 60 miles (97 km). Some channels are half-duplex and some are full-duplex, to be compatible with the telephone network, to allow users to make telephone calls through a marine operator.

Amateur radio – long-range half-duplex two-way radio used by hobbyists for non-commercial purposes: recreational radio contacts with other amateurs, volunteer emergency communication during disasters, contests, and experimentation. Radio amateurs must hold an amateur radio license and are given a unique callsign that must be used as an identifier in transmissions. Amateur radio is restricted to small frequency bands, the amateur radio bands, spaced throughout the radio spectrum starting at 136 kHz. Within these bands, amateurs are allowed the freedom to transmit on any frequency using a wide variety of voice modulation methods, along with other forms of communication such as slow-scan television (SSTV) and radioteletype (RTTY). Additionally, amateurs are among the only radio operators still using Morse code radiotelegraphy.

==== One-way voice communication ====

One-way, unidirectional radio transmission is called simplex.

Baby monitor – a crib-side appliance for parents of infants that transmits the baby's sounds to a receiver carried by the parent, so they can monitor the baby while they are in other parts of the house. The wavebands used vary by region, but analog baby monitors generally transmit with low power in the 16, 49.3–49.9 or 900 MHz wavebands, and digital systems in the 2.4 GHz waveband. Many baby monitors have duplex channels so the parent can talk to the baby, and cameras to show video of the baby.
Wireless microphone – a battery-powered microphone with a short-range transmitter that is handheld or worn on a person's body, which transmits its sound by radio to a nearby receiver unit connected to a sound system. Wireless microphones are used by public speakers, performers, and television personalities so they can move freely without trailing a microphone cord. Traditionally, analog models transmit in FM on unused portions of the television broadcast frequencies in the VHF and UHF bands. Some models transmit on two frequency channels for diversity reception to prevent nulls from interrupting transmission as the performer moves around. Some models use digital modulation to prevent unauthorized reception by scanner radio receivers; these operate in the 900 MHz, 2.4 GHz or 6 GHz ISM bands. European standards also support wireless multichannel audio systems (WMAS) that can better support the use of large numbers of wireless microphones at a single event or venue. As of 2021, U.S. regulators were considering adopting rules for WMAS.

=== Data communication ===

Wireless networking – automated radio links which transmit digital data between computers and other wireless devices using radio waves, linking the devices together transparently in a computer network. Computer networks can transmit any form of data: in addition to email and web pages, they also carry phone calls (VoIP), audio, and video content (called streaming media). Security is more of an issue for wireless networks than for wired networks, since anyone nearby with a wireless modem can access the signal and attempt to log in. The radio signals of wireless networks are encrypted using WPA.
Wireless LAN (wireless local area network or Wi-Fi) – based on the IEEE 802.11 standards, these are the most widely used computer networks, used to implement local area networks without cables, linking computers, laptops, cell phones, video game consoles, smart TVs and printers in a home or office together, and to a wireless router connecting them to the Internet with a wire or cable connection. Wireless routers in public places like libraries, hotels and coffee shops create wireless access points (hotspots) to allow the public to access the Internet with portable devices like smartphones, tablets or laptops. Each device exchanges data using a wireless modem (wireless network interface controller), an automated microwave transmitter and receiver with an omnidirectional antenna that works in the background, exchanging data packets with the router. Wi-Fi uses channels in the 2.4 GHz and 5 GHz ISM bands with OFDM (orthogonal frequency-division multiplexing) modulation to transmit data at high rates. The transmitters in Wi-Fi modems are limited to a radiated power of 200 mW to 1 watt, depending on country. They have a maximum indoor range of about 150 ft (50 m) on 2.4 GHz and 50 ft (20 m) on 5 GHz.

Wireless WAN (wireless wide area network, WWAN) – a variety of technologies that provide wireless internet access over a wider area than Wi-Fi networks do – from an office building to a campus to a neighborhood, or to an entire city. The most common technologies used are: cellular modems, that exchange computer data by radio with cell towers; satellite internet access; and lower frequencies in the UHF band, which have a longer range than Wi-Fi frequencies. Since WWAN networks are much more expensive and complicated to administer than Wi-Fi networks, their use so far has generally been limited to private networks operated by large corporations.
Bluetooth – a very short-range wireless interface on a portable wireless device used as a substitute for a wire or cable connection, mainly to exchange files between portable devices and to connect cellphones and music players with wireless headphones. In the most widely used mode, transmission power is limited to 1 milliwatt, giving it a very short range of up to 10 m (30 feet). The system uses frequency-hopping spread spectrum transmission, in which successive data packets are transmitted in a pseudorandom order on one of 79 1 MHz Bluetooth channels between 2.4 and 2.4835 GHz in the ISM band. This allows Bluetooth networks to operate in the presence of noise, other wireless devices, and other Bluetooth networks using the same frequencies, since the chance of another device attempting to transmit on the same frequency at the same time as the Bluetooth modem is low. In the case of such a "collision", the Bluetooth modem just retransmits the data packet on another frequency.

Packet radio – a long-distance peer-to-peer wireless ad-hoc network in which data packets are exchanged between computer-controlled radio modems (transmitter/receivers) called nodes, which may be separated by miles and may be mobile. Each node only communicates with neighboring nodes, so packets of data are passed from node to node until they reach their destination using the X.25 network protocol. Packet radio systems are used to a limited degree by commercial telecommunications companies and by the amateur radio community.

Text messaging (texting) – a service on cell phones allowing a user to type a short alphanumeric message and send it to another phone number, with the text displayed on the recipient's phone screen. It is based on the Short Message Service (SMS), which transmits using spare bandwidth on the control radio channel used by cell phones to handle background functions like dialing and cell handoffs.
Due to technical limitations of the channel, text messages are limited to 160 alphanumeric characters.

Microwave relay – a long-distance, high-bandwidth, point-to-point digital data transmission link consisting of a microwave transmitter connected to a dish antenna that transmits a beam of microwaves to another dish antenna and receiver. Since the antennas must be in line-of-sight, distances are limited by the visual horizon to 30–40 miles (48–64 km). Microwave links are used for private business data, wide area computer networks (WANs), and by telephone companies to transmit long-distance phone calls and television signals between cities.

Telemetry – automated one-way (simplex) transmission of measurements and operation data from a remote process or device to a receiver for monitoring. Telemetry is used for in-flight monitoring of missiles, drones, satellites, and weather balloon radiosondes, sending scientific data back to Earth from interplanetary spacecraft, communicating with electronic biomedical sensors implanted in the human body, and well logging. Multiple channels of data are often transmitted using frequency-division multiplexing or time-division multiplexing. Telemetry is starting to be used in consumer applications such as:

Automated meter reading – electric power meters, water meters, and gas meters that, when triggered by an interrogation signal, transmit their readings by radio to a utility reader vehicle at the curb, to eliminate the need for an employee to go on the customer's property to manually read the meter.

Electronic toll collection – on toll roads, an alternative to manual collection of tolls at a toll booth, in which a transponder in a vehicle, when triggered by a roadside transmitter, transmits a signal to a roadside receiver to register the vehicle's use of the road, enabling the owner to be billed for the toll.
Radio Frequency Identification (RFID) – identification tags containing a tiny radio transponder (receiver and transmitter) which are attached to merchandise. When a tag receives an interrogation pulse of radio waves from a nearby reader unit, it transmits back an ID number, which can be used to inventory goods. Passive tags, the most common type, have a chip powered by the radio energy received from the reader, rectified by a diode, and can be as small as a grain of rice. They are incorporated in products, clothes, railroad cars, library books, airline baggage tags and are implanted under the skin in pets and livestock (microchip implant) and even people. Privacy concerns have been addressed with tags that use encrypted signals and authenticate the reader before responding. Passive tags use the 125–134 kHz, 13.56 MHz, 902–928 MHz, and 2.4 and 5.8 GHz bands and have a short range. Active tags, powered by a battery, are larger but can transmit a stronger signal, giving them a range of hundreds of meters. Submarine communication – When submerged, submarines are cut off from all ordinary radio communication with their military command authorities by the conductive seawater. However, radio waves of low enough frequencies, in the VLF (3 to 30 kHz) and ELF (below 3 kHz) bands, are able to penetrate seawater. Navies operate large shore transmitting stations with power output in the megawatt range to transmit encrypted messages to their submarines in the world's oceans. Due to the small bandwidth, these systems cannot transmit voice, only text messages at a slow data rate. The communication channel is one-way, since the long antennas needed to transmit VLF or ELF waves cannot fit on a submarine. VLF transmitters use miles-long wire antennas such as umbrella antennas. A few nations use ELF transmitters operating around 80 Hz, which can communicate with submarines at lower depths.
These use even larger antennas called ground dipoles, consisting of two ground (Earth) connections 23–60 km (14–37 miles) apart, linked by overhead transmission lines to a power plant transmitter. === Space communication === This is radio communication between a spacecraft and an Earth-based ground station, or another spacecraft. Communication with spacecraft involves the longest transmission distances of any radio links, up to billions of kilometers for interplanetary spacecraft. In order to receive the weak signals from distant spacecraft, satellite ground stations use large parabolic "dish" antennas up to 25 metres (82 ft) in diameter and extremely sensitive receivers. High frequencies in the microwave band are used, since microwaves pass through the ionosphere without refraction, and at microwave frequencies the high-gain antennas needed to focus the radio energy into a narrow beam pointed at the receiver are small and take up a minimum of space in a satellite. Portions of the UHF, L, C, S, Ku and Ka bands are allocated for space communication. A radio link that transmits data from the Earth's surface to a spacecraft is called an uplink, while a link that transmits data from the spacecraft to the ground is called a downlink. Communication satellite – an artificial satellite used as a telecommunications relay to transmit data between widely separated points on Earth. These are used because the microwaves used for telecommunications travel by line of sight and so cannot propagate around the curve of the Earth. As of 1 January 2021, there were 2,224 communications satellites in Earth orbit. Most are in geostationary orbit 22,200 miles (35,700 km) above the equator, so that the satellite appears stationary at the same point in the sky, so the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track it.
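The extreme weakness of deep-space signals follows from the inverse-square spreading of radio waves; a back-of-envelope sketch of the free-space path loss between ideal isotropic antennas (the 8.4 GHz frequency and Earth–Mars distance below are illustrative assumptions, not values from the text):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB between ideal isotropic antennas."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Illustrative deep-space link: X band (8.4 GHz) over a typical
# Earth-Mars distance of ~2.2e11 m gives roughly 278 dB of spreading
# loss, which is why huge dishes and very sensitive receivers are needed.
loss = free_space_path_loss_db(2.2e11, 8.4e9)
```

Note the characteristic behavior: every doubling of distance adds about 6 dB of loss, which is the inverse-square law expressed in decibels.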
In a satellite ground station, a microwave transmitter and a large satellite dish antenna transmit a microwave uplink beam to the satellite. The uplink signal carries many channels of telecommunications traffic, such as long-distance telephone calls, television programs, and internet signals, using a technique called frequency-division multiplexing (FDM). On the satellite, a transponder receives the signal, translates it to a different downlink frequency to avoid interfering with the uplink signal, and retransmits it down to another ground station, which may be widely separated from the first. There the downlink signal is demodulated and the telecommunications traffic it carries is sent to its local destinations through landlines. Communication satellites typically have several dozen transponders on different frequencies, which are leased by different users. Direct broadcast satellite – a geostationary communication satellite that transmits retail programming directly to receivers in subscribers' homes and vehicles on Earth, in satellite radio and TV systems. It uses a higher transmitter power than other communication satellites, to allow the signal to be received by consumers with a small unobtrusive antenna. For example, satellite television uses downlink frequencies from 12.2 to 12.7 GHz in the Ku band transmitted at 100 to 250 watts, which can be received by relatively small 43–80 cm (17–31 in) satellite dishes mounted on the outside of buildings. === Other applications === ==== Radar ==== Radar is a radiolocation method used to locate and track aircraft, spacecraft, missiles, ships, vehicles, and also to map weather patterns and terrain. A radar set consists of a transmitter and receiver. The transmitter emits a narrow beam of radio waves which is swept around the surrounding space. When the beam strikes a target object, radio waves are reflected back to the receiver. The direction of the beam reveals the object's location.
Since radio waves travel at a constant speed close to the speed of light, by measuring the brief time delay between the outgoing pulse and the received "echo", the range to the target can be calculated. The targets are often displayed graphically on a map display called a radar screen. Doppler radar can measure a moving object's velocity, by measuring the change in frequency of the return radio waves due to the Doppler effect. Radar sets mainly use high frequencies in the microwave bands, because these frequencies create strong reflections from objects the size of vehicles and can be focused into narrow beams with compact antennas. Parabolic (dish) antennas are widely used. In most radars the transmitting antenna also serves as the receiving antenna; this is called a monostatic radar. A radar which uses separate transmitting and receiving antennas is called a bistatic radar. Airport surveillance radar – In aviation, radar is the main tool of air traffic control. A rotating dish antenna sweeps a vertical fan-shaped beam of microwaves around the airspace and the radar set shows the location of aircraft as "blips" of light on a display called a radar screen. Airport radar operates at 2.7 – 2.9 GHz in the microwave S band. In large airports the radar image is displayed on multiple screens in an operations room called the TRACON (Terminal Radar Approach Control), where air traffic controllers direct the aircraft by radio to maintain safe aircraft separation. Secondary surveillance radar – Aircraft carry radar transponders, transceivers which when triggered by the incoming radar signal transmit a return microwave signal. This causes the aircraft to show up more strongly on the radar screen. The radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar. 
Since radar cannot measure an aircraft's altitude with any accuracy, the transponder also transmits back the aircraft's altitude measured by its altimeter, and an ID number identifying the aircraft, which is displayed on the radar screen. Electronic countermeasures (ECM) – Military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it with false information, to prevent enemies from locating local forces. They often consist of powerful microwave transmitters that can mimic enemy radar signals to create false target indications on the enemy radar screens. Marine radar – an S or X band radar on ships used to detect nearby ships and obstructions like bridges. A rotating antenna sweeps a vertical fan-shaped beam of microwaves around the water surface surrounding the craft out to the horizon. Weather radar – a Doppler radar which maps precipitation intensity from the strength of the echoes returned by raindrops, and wind speed from the Doppler shift of those echoes. Phased-array radar – a radar set that uses a phased array, a computer-controlled antenna that can steer the radar beam quickly to point in different directions without moving the antenna. Phased-array radars were developed by the military to track fast-moving missiles and aircraft. They are widely used in military equipment and are now spreading to civilian applications. Synthetic aperture radar (SAR) – a specialized airborne radar set that produces a high-resolution map of ground terrain. The radar is mounted on an aircraft or spacecraft and the radar antenna radiates a beam of radio waves sideways at right angles to the direction of motion, toward the ground. In processing the return radar signal, the motion of the vehicle is used to simulate a large antenna, giving the radar a higher resolution.
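The pulse-delay and Doppler relations described earlier reduce to two one-line formulas; a minimal sketch (the delay and frequency values are illustrative assumptions):

```python
C = 299_792_458.0  # speed of light, m/s

def echo_range_m(round_trip_s: float) -> float:
    """Target range from the pulse round-trip time.

    The pulse travels out and back, so the one-way range is half
    the total path: range = c * t / 2.
    """
    return C * round_trip_s / 2

def radial_speed_ms(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Target radial speed from the Doppler shift of the echo.

    The factor of 2 appears because the target's motion shifts the
    frequency on both the outbound and the return leg.
    """
    return doppler_shift_hz * C / (2 * carrier_hz)

r = echo_range_m(1e-3)           # a 1 ms echo delay -> about 150 km
v = radial_speed_ms(1000, 10e9)  # 1 kHz shift at 10 GHz -> about 15 m/s
```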
Ground-penetrating radar – a specialized radar instrument that is rolled along the ground surface in a cart and transmits a beam of radio waves into the ground, producing an image of subsurface objects. Frequencies from 100 MHz to a few GHz are used. Since radio waves cannot penetrate very far into the earth, the depth of GPR is limited to about 50 feet. Collision avoidance system – a short range radar or LIDAR system on an automobile or vehicle that detects if the vehicle is about to collide with an object and applies the brakes to prevent the collision. Radar fuze – a detonator for an aerial bomb which uses a radar altimeter to measure the height of the bomb above the ground as it falls and detonates it at a certain altitude. ==== Radiolocation ==== Radiolocation is a generic term covering a variety of techniques that use radio waves to find the location of objects, or for navigation. Global Navigation Satellite System (GNSS) or satnav system – A system of satellites which allows geographical location on Earth (latitude, longitude, and altitude/elevation) to be determined to high precision (within a few metres) by small portable navigation instruments, by timing the arrival of radio signals from the satellites. These are the most widely used navigation systems today. The main satellite navigation systems are the US Global Positioning System (GPS), Russia's GLONASS, China's BeiDou Navigation Satellite System (BDS) and the European Union's Galileo. Global Positioning System (GPS) – The most widely used satellite navigation system, maintained by the US Air Force, which uses a constellation of 31 satellites in medium Earth orbit. The orbits of the satellites are distributed so at any time at least four satellites are above the horizon over each point on Earth. Each satellite has an onboard atomic clock and transmits a continuous radio signal containing a precise time signal as well as its current position. Two frequencies are used, 1.2276 and 1.57542 GHz.
Since the velocity of radio waves is virtually constant, the delay of the radio signal from a satellite is proportional to the distance of the receiver from the satellite. By receiving the signals from at least four satellites a GPS receiver can calculate its position on Earth by comparing the arrival time of the radio signals. Since each satellite's position is known precisely at any given time, from the delay the position of the receiver can be calculated by a microprocessor in the receiver. The position can be displayed as latitude and longitude, or as a marker on an electronic map. GPS receivers are incorporated in almost all cellphones and in vehicles such as automobiles, aircraft, and ships, and are used to guide drones, missiles, cruise missiles, and even artillery shells to their target, and handheld GPS receivers are produced for hikers and the military. Radio beacon – a fixed location terrestrial radio transmitter which transmits a continuous radio signal used by aircraft and ships for navigation. The locations of beacons are plotted on navigational maps used by aircraft and ships. VHF omnidirectional range (VOR) – a worldwide aircraft radio navigation system consisting of fixed ground radio beacons transmitting between 108.00 and 117.95 MHz in the very high frequency (VHF) band. An automated navigational instrument on the aircraft displays a bearing to a nearby VOR transmitter. A VOR beacon transmits two signals simultaneously on different frequencies. A directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. When the directional beam is facing north, an omnidirectional antenna transmits a pulse. By measuring the difference in phase of these two signals, an aircraft can determine its bearing (or "radial") from the station accurately. By taking a bearing on two VOR beacons an aircraft can determine its position (called a "fix") to an accuracy of about 90 metres (300 ft). 
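The VOR phase-comparison principle described above reduces to a single subtraction; a minimal sketch (the function name and phase inputs are hypothetical, chosen for illustration):

```python
def vor_radial_deg(reference_phase_deg: float, variable_phase_deg: float) -> float:
    """Bearing (radial) from a VOR station.

    The rotating directional signal lags the omnidirectional reference
    pulse by an angle equal to the receiver's bearing from the station,
    so the radial is simply the phase difference, wrapped to 0-360 degrees.
    """
    return (variable_phase_deg - reference_phase_deg) % 360

vor_radial_deg(350, 10)  # wrap-around across north is handled by the modulo
```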
Most VOR beacons also have a distance measuring capability, called distance measuring equipment (DME); these are called VOR/DMEs. The aircraft transmits a radio signal to the VOR/DME beacon and a transponder transmits a return signal. From the propagation delay between the transmitted and received signal, the aircraft can calculate its distance from the beacon. This allows an aircraft to determine its location "fix" from only one VOR beacon. Since line-of-sight VHF frequencies are used, VOR beacons have a range of about 200 miles for aircraft at cruising altitude. TACAN is a similar military radio beacon system which transmits in 962–1213 MHz, and a combined VOR and TACAN beacon is called a VORTAC. The number of VOR beacons is declining as aviation switches to the RNAV system that relies on Global Positioning System satellite navigation. Instrument Landing System (ILS) – A short-range radio navigation aid at airports which guides aircraft landing in low visibility conditions. It consists of multiple antennas at the end of each runway that radiate two beams of radio waves along the approach to the runway: the localizer (108 to 111.95 MHz frequency), which provides horizontal guidance, a heading line to keep the aircraft centered on the runway, and the glideslope (329.15 to 335 MHz) for vertical guidance, to keep the aircraft descending at the proper rate for a smooth touchdown at the correct point on the runway. Each aircraft has a receiver instrument and antenna which receives the beams, with an indicator to tell the pilot whether the aircraft is on the correct horizontal and vertical approach. The ILS beams are receivable for at least 15 miles, and have a radiated power of 25 watts. ILS systems at airports are being replaced by systems that use satellite navigation. Non-directional beacon (NDB) – Legacy fixed radio beacons used before the VOR system that transmit a simple signal in all directions for aircraft or ships to use for radio direction finding.
Aircraft use automatic direction finder (ADF) receivers which use a directional antenna to determine the bearing to the beacon. By taking bearings on two beacons they can determine their position. NDBs use frequencies between 190 and 1750 kHz in the LF and MF bands which propagate beyond the horizon as ground waves or skywaves much farther than VOR beacons. They transmit a callsign consisting of one to three Morse code letters as an identifier. Emergency locator beacon – a portable battery powered radio transmitter used in emergencies to locate airplanes, vessels, and persons in distress and in need of immediate rescue. Various types of emergency locator beacons are carried by aircraft, ships, vehicles, hikers and cross-country skiers. In the event of an emergency, such as the aircraft crashing, the ship sinking, or a hiker becoming lost, the transmitter is deployed and begins to transmit a continuous radio signal, which is used by search and rescue teams to quickly find the emergency and render aid. The latest generation Emergency Position-Indicating Radio Beacons (EPIRBs) contain a GPS receiver, and broadcast to rescue teams their exact location within 20 meters. Cospas-Sarsat – an international humanitarian consortium of governmental and private agencies which acts as a dispatcher for search and rescue operations. It operates a network of about 47 satellites carrying radio receivers, which detect distress signals from emergency locator beacons anywhere on Earth transmitting on the international Cospas distress frequency of 406 MHz. The satellites calculate the geographic location of the beacon within 2 km by measuring the Doppler frequency shift of the radio waves due to the relative motion of the transmitter and the satellite, and quickly transmit the information to the appropriate local first responder organizations, which perform the search and rescue.
Radio direction finding (RDF) – this is a general technique, used since the early 1900s, of using specialized radio receivers with directional antennas (RDF receivers) to determine the exact bearing of a radio signal and thus the location of the transmitter. The location of a terrestrial transmitter can be determined by simple triangulation from bearings taken by two RDF stations separated geographically, as the point where the two bearing lines cross; this is called a "fix". Military forces use RDF to locate enemy forces by their tactical radio transmissions, counterintelligence services use it to locate clandestine transmitters used by espionage agents, and governments use it to locate unlicensed transmitters or interference sources. Older RDF receivers used rotatable loop antennas; the antenna is rotated until the radio signal strength is weakest, indicating the transmitter is in one of the antenna's two nulls. The nulls are used since they are sharper than the antenna's lobes (maxima). More modern receivers use phased array antennas which have a much greater angular resolution. Animal migration tracking – a widely used technique in wildlife biology, conservation biology, and wildlife management in which small battery-powered radio transmitters are attached to wild animals so their movements can be tracked with a directional RDF receiver. Sometimes the transmitter is implanted in the animal. The VHF band is typically used since antennas in this band are fairly compact. The receiver has a directional antenna (typically a small Yagi) which is rotated until the received signal is strongest; at this point the antenna is pointing in the direction of the animal. Sophisticated systems used in recent years use satellites to track the animal, or geolocation tags with GPS receivers which record and transmit a log of the animal's location.
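The two-bearing "fix" used in RDF is plane triangulation; a minimal flat-earth sketch (coordinates in arbitrary map units, bearings measured clockwise from north — the function name and inputs are illustrative):

```python
import math

def fix_from_bearings(p1, brg1_deg, p2, brg2_deg):
    """Intersect two bearing lines to get a position fix.

    p1, p2 are (x, y) station positions; bearings are degrees
    clockwise from north. A flat-earth approximation is assumed,
    which is reasonable for short baselines.
    """
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    # Solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule.
    det = d1[0] * -d2[1] - d1[1] * -d2[0]
    if abs(det) < 1e-12:
        raise ValueError("bearing lines are parallel; no unique fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * -d2[1] - ry * -d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Station at the origin sees the transmitter due east (090); a second
# station at (10, 10) sees it due south (180): the fix is at (10, 0).
fix = fix_from_bearings((0, 0), 90, (10, 10), 180)
```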
==== Remote control ==== Radio remote control is the use of electronic control signals sent by radio waves from a transmitter to control the actions of a device at a remote location. Remote control systems may also include telemetry channels in the other direction, used to transmit real-time information on the state of the device back to the control station. Uncrewed spacecraft are an example of remote-controlled machines, controlled by commands transmitted by satellite ground stations. Most handheld remote controls used to control consumer electronics products like televisions or DVD players actually operate by infrared light rather than radio waves, so are not examples of radio remote control. A security concern with remote control systems is spoofing, in which an unauthorized person transmits an imitation of the control signal to take control of the device. Examples of radio remote control: Unmanned aerial vehicle (UAV, drone) – A drone is an aircraft without an onboard pilot, flown by remote control by a pilot in another location, usually in a piloting station on the ground. They are used by the military for reconnaissance and ground attack, and more recently by the civilian world for news reporting and aerial photography. The pilot uses aircraft controls like a joystick or steering wheel, which create control signals which are transmitted to the drone by radio to control the flight surfaces and engine. A telemetry system transmits back a video image from a camera in the drone to allow the pilot to see where the aircraft is going, and data from a GPS receiver giving the real-time position of the aircraft. UAVs have sophisticated onboard automatic pilot systems that maintain stable flight and only require manual control to change directions. Keyless entry system – a short-range handheld battery powered key fob transmitter, included with most modern cars, which can lock and unlock the doors of a vehicle from outside, eliminating the need to use a key. 
When a button is pressed, the transmitter sends a coded radio signal to a receiver in the vehicle, operating the locks. The fob must be close to the vehicle, typically within 5 to 20 meters. North America and Japan use a frequency of 315 MHz, while Europe uses 433.92 and 868 MHz. Some models can also remotely start the engine, to warm up the car. A security concern with all keyless entry systems is a replay attack, in which a thief uses a special receiver ("code grabber") to record the radio signal during opening, which can later be replayed to open the door. To prevent this, keyless systems use a rolling code system in which a pseudorandom number generator in the remote control generates a different random key each time it is used. To prevent thieves from simulating the pseudorandom generator to calculate the next key, the radio signal is also encrypted. Garage door opener – a short-range handheld transmitter which can open or close a building's electrically operated garage door from outside, so the owner can open the door upon arrival, and close it after departure. When a button is pressed the control transmits a coded FSK radio signal to a receiver in the opener, raising or lowering the door. Modern openers use 310, 315 or 390 MHz. To prevent a thief using a replay attack, modern openers use a rolling code system. Radio-controlled models – a popular hobby is playing with radio-controlled model boats, cars, airplanes, and helicopters (quadcopters) which are controlled by radio signals from a handheld console with a joystick. Most recent transmitters use the 2.4 GHz ISM band with multiple control channels modulated with PWM, PCM or FSK. Wireless doorbell – A residential doorbell that uses wireless technology to eliminate the need to run wires through the building walls. It consists of a doorbell button beside the door containing a small battery powered transmitter. 
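The rolling-code scheme used by keyless entry and garage door openers can be sketched as follows. Real systems use dedicated ciphers (such as KeeLoq), so the HMAC construction, names, and window size here are purely illustrative assumptions; the point is that each press yields a fresh one-time code, so a replayed transmission is rejected:

```python
import hashlib
import hmac

def rolling_code(secret: bytes, counter: int) -> str:
    """Derive the one-time code for a given press counter.

    Fob and receiver share a secret key and a synchronized counter;
    the code changes on every press, defeating simple replay attacks.
    """
    msg = counter.to_bytes(8, "big")
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:8]

def receiver_accepts(secret: bytes, last_counter: int, received: str, window: int = 16):
    """Check a received code against a small look-ahead window.

    The window tolerates presses made out of the receiver's range;
    returns the new synchronized counter on success, else None.
    """
    for c in range(last_counter + 1, last_counter + 1 + window):
        if hmac.compare_digest(rolling_code(secret, c), received):
            return c
    return None
```

Note that a code at or below the last accepted counter never matches, which is exactly the replay-attack protection the text describes.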
When the doorbell is pressed, it sends a signal to a receiver inside the house with a speaker that sounds chimes to indicate someone is at the door. They usually use the 2.4 GHz ISM band. The frequency channel used can usually be changed by the owner in case another nearby doorbell is using the same channel. ==== Scientific research ==== Radio astronomy is the scientific study of radio waves emitted by astronomical objects. Radio astronomers use radio telescopes, large radio antennas and receivers, to receive and study the radio waves from astronomical radio sources. Since astronomical radio sources are so far away, the radio waves from them are extremely weak, requiring extremely sensitive receivers, and radio telescopes are the most sensitive radio receivers in existence. They use large parabolic (dish) antennas up to 500 meters (1,600 ft) in diameter to collect enough radio wave energy to study. The RF front end electronics of the receiver are often cooled by liquid nitrogen to reduce thermal noise. Multiple antennas are often linked together in arrays which function as a single antenna, to increase collecting power. In Very Long Baseline Interferometry (VLBI) radio telescopes on different continents are linked, which can achieve the resolution of an antenna thousands of miles in diameter. Remote sensing – in radio, remote sensing is the reception of electromagnetic waves radiated by natural objects or the atmosphere for scientific research. All warm objects emit microwaves and the spectrum emitted can be used to determine temperature. Microwave radiometers are used in meteorology and earth sciences to determine temperature of the atmosphere and earth surface, as well as chemical reactions in the atmosphere. ==== Jamming ==== Radio jamming is the deliberate radiation of radio signals designed to interfere with the reception of other radio signals. Jamming devices are called "signal suppressors" or "interference generators" or just jammers.
During wartime, militaries use jamming to interfere with enemies' tactical radio communication. Since radio waves can pass beyond national borders, some totalitarian countries which practice censorship use jamming to prevent their citizens from listening to broadcasts from radio stations in other countries. Jamming is usually accomplished by a powerful transmitter which generates noise on the same frequency as the target transmitter. US Federal law prohibits the nonmilitary operation or sale of any type of jamming devices, including ones that interfere with GPS, cellular, Wi-Fi and police radars. == See also == Electromagnetic radiation and health Internet radio List of radios – List of specific models of radios Outline of radio Radio quiet zone == References == == General references == Basic Radio Principles and Technology – Elsevier Science The Electronics of Radio – Cambridge University Press Radio Systems Engineering – Cambridge University Press Radio-Electronic Transmission Fundamentals – SciTech Publishing Analog Electronics, Analog Circuitry Explained – Elsevier Science == External links == "Radio". Merriam-Webster.com Dictionary. Merriam-Webster.
https://en.wikipedia.org/wiki/Radio
Creative Technology Ltd., or Creative Labs Pte Ltd., is a Singaporean multinational electronics company mainly dealing with audio technologies and products such as speakers, headphones, sound cards and other digital media. Founded by Sim Wong Hoo, Creative was highly influential in the advancement of PC audio in the 1990s following the introduction of its Sound Blaster card and technologies; the company continues to develop Sound Blaster products including embedding them within partnered mainboard manufacturers and laptops. The company also has overseas offices in Shanghai, Tokyo, Dublin and the Silicon Valley. Creative Technology has been listed on the Singapore Exchange (SGX) since 1994. == History == === 1981–1996 === Creative Technology was founded in 1981 by childhood friends and Ngee Ann Polytechnic schoolmates Sim Wong Hoo and Ng Kai Wa. Originally a computer repair shop in Pearl's Centre in Chinatown, the company eventually developed an add-on memory board for the Apple II computer. Later, Creative spent $500,000 developing the Cubic CT, an IBM-compatible PC adapted for the Chinese language and featuring multimedia features like enhanced color graphics and a built-in audio board capable of producing speech and melodies. With lack of demand for multilingual computers and few multimedia software applications available, the Cubic was a commercial failure. Shifting focus from language to music, Creative developed the Creative Music System, a PC add-on card. Sim established Creative Labs, Inc. in the United States' Silicon Valley and convinced software developers to support the sound card, renamed Game Blaster and marketed by RadioShack's Tandy division. The success of this audio interface led to the development of the standalone Sound Blaster sound card, introduced at the 1989 COMDEX show just as the multimedia PC market, fueled by Intel's 386 CPU and Microsoft Windows 3.0, took off. 
The success of Sound Blaster helped grow Creative's revenue from US$5.4 million in 1989 to US$658 million in 1994. In 1993, the year after Creative's 1992 initial public offering, former Ashton-Tate CEO Ed Esber joined Creative Labs as CEO to assemble a management team to support the company's rapid growth. Esber brought in a team of US executives, including Rich Buchanan (graphics), Gail Pomerantz (marketing), and Rich Sorkin (sound products, and later communications, OEM and business development). This group played key roles in reversing a brutal market share decline caused by intense competition from Media Vision at the high end and Aztech at the low end. Sorkin, in particular, dramatically strengthened the company's brand position through crisp licensing and an aggressive defense of Creative's intellectual property positions while working to shorten product development cycles. At the same time, Esber and the original founders of the company had differences of opinion on the strategy and positioning of the company. Esber exited in 1995, followed quickly by Buchanan and Pomerantz. Following Esber's departure, Sorkin was promoted to General Manager of Audio and Communication Products and later Executive Vice-president of Business Development and Corporate Investments, before leaving Creative in 1996 to run Elon Musk's first startup and Internet pioneer Zip2. By 1996, Creative's revenues had peaked at US$1.6 billion. With pioneering investments in VOIP and media streaming, Creative was well-positioned to take advantage of the Internet era, but ventured into the CD-ROM market and was eventually forced to write off nearly US$100 million in inventory when the market collapsed due to a flood of cheaper alternatives. === 1997–2011 === The firm had maintained a strong foothold in the ISA PC audio market until 14 July 1997 when Aureal Semiconductor entered the soundcard market with their very competitive PCI AU8820 Vortex 3D sound technology.
The firm at the time was developing its own in-house PCI audio cards but was finding little success adopting the PCI standard. In January 1998, in order to quickly obtain working PCI audio technology, the firm acquired Ensoniq for US$77 million. On 5 March 1998, the firm sued Aureal with patent infringement claims over a MIDI caching technology held by E-mu Systems. Aureal filed a counterclaim stating the firm was intentionally interfering with its business prospects, had defamed it, commercially disparaged it, engaged in unfair competition with intent to slow down Aureal's sales, and acted fraudulently. The suit had come only days after Aureal gained a fair market with the AU8820 Vortex1. In August 1998, the firm released the Sound Blaster Live!, its first sound card developed for the PCI bus, in order to compete with the upcoming Aureal AU8830 Vortex2 sound chip. Aureal at this time was making fliers comparing its new AU8830 chips to the now-shipping Sound Blaster Live!. The specifications within these fliers comparing the AU8830 to the Sound Blaster Live! EMU10K1 chip sparked another flurry of lawsuits against Aureal, this time claiming Aureal had falsely advertised the Sound Blaster Live!'s capabilities. In December 1999, after numerous lawsuits, Aureal won a favourable ruling but went bankrupt as a result of legal costs and their investors pulling out. Their assets were acquired by Creative through the bankruptcy court in September 2000 for US$32 million. The firm had in effect removed their only major direct competitor in the 3D gaming audio market, excluding their later acquisition of Sensaura. In April 1999, the firm launched the NOMAD line of digital audio players that would later introduce the MuVo and ZEN series of portable media players. In November 2004, the firm announced a $100 million marketing campaign to promote their digital audio products, including the ZEN range of MP3 players. The firm applied for U.S.
patent 6,928,433 on 5 January 2001 and was awarded the patent on 9 August 2005. The Zen patent was awarded to the firm for the invention of a user interface for portable media players. This opened the way for potential legal action against Apple's iPod and other competing players. The firm took legal action against Apple in May 2006. In August 2006, Creative and Apple entered into a broad settlement, with Apple paying Creative $100 million for a licence to use the Zen patent. The firm then joined the "Made for iPod" program. On 22 March 2005, The Inquirer reported that Creative Labs had agreed to settle a class action lawsuit over the way its Audigy and Extigy soundcards were marketed. The firm offered customers who had purchased the cards up to a $62.50 reduction on the cost of their next purchase of its products, while the lawyers who filed the dispute against Creative received a payment of approximately $470,000. In 2007, Creative voluntarily delisted itself from NASDAQ, where it had traded under the symbol CREAF. Its stock now trades solely on the Singapore Exchange (SGX-ST). In early 2008, Creative Labs' technical support centre in Stillwater, Oklahoma, US laid off several technical support staff, deepening ongoing concerns about Creative's financial situation. Later that year, the company faced a public-relations backlash when it demanded that a user named "Daniel_K" cease distributing modified versions of its drivers for Windows Vista which restored functionality that had been available in the drivers for Windows XP. The company deleted his account from its online forums but reinstated it a week later. In January 2009, the firm generated Internet buzz with a mysterious website promising a "stem cell-like" processor which would give a 100-fold increase in supercomputing power over current technology, as well as advances in consumer 3D graphics.
At CES 2009, it was revealed to be the ZMS-05 processor from ZiiLABS, a subsidiary formed by combining 3DLabs and Creative's Personal Digital Entertainment division. === 2012–present === In November 2012, the firm announced that it had entered into an agreement with Intel Corporation for Intel to license technology and patents from ZiiLABS Inc. Ltd, a wholly owned subsidiary of Creative, and to acquire engineering resources and assets related to its UK branch as part of a $50 million deal. ZiiLABS (still wholly owned by Creative) retains all ownership of its StemCell media processor technologies and patents, and continues to supply and support its ZMS series of chips to its customers. From 2014 to 2017, Creative's revenue from audio products contracted at an average of 15% annually, due to increased competition in the audio space. At the Consumer Electronics Show (CES) in Las Vegas in January 2018, its Super X-Fi dongle won the Best of CES 2018 Award from AVS Forum. The product was launched after more than $100 million in investment and garnered positive analyst reports. This new technology renewed interest in the company and likely helped raise its share price from S$1.25 to S$8.75 within a two-week period. The company still produces Chinese-language and bilingual software for the Singapore market, but nearly half of the company's income is generated in the United States and South America; the European Union represents 32% of revenues, with Asia making up the remainder. On January 4, 2023, founder Sim Wong Hoo died at age 67, with Song Siow Hui, president of the Creative Labs Business Unit, appointed as interim CEO. On 16 May 2025, it was announced that Freddy Sim, brother of Sim Wong Hoo, had been appointed as the new CEO, with Dr Tan Jok Tin remaining executive chairman. == Products == === Sound Blaster === Creative's Sound Blaster sound card was among the first dedicated audio processing cards to be made widely available to the general consumer.
As the first to bundle what is now considered a full sound card system (digital audio, an on-board music synthesizer, a MIDI interface and a joystick port), the Sound Blaster rose to become a de facto standard for sound cards in PCs for many years. Creative Technology also created its own audio file format, Creative Voice, which uses the file extension .voc. In 1987 Creative Technology released the Creative Music System (C/MS), a 12-voice sound card for the IBM PC architecture. When the C/MS struggled to acquire market share, Sim traveled from Singapore to Silicon Valley and negotiated a deal with RadioShack's Tandy division to market the product as the Game Blaster. While the Game Blaster did not overcome AdLib's sound card market dominance, Creative used the platform to create the first Sound Blaster, which retained the C/MS hardware and added the Yamaha YM3812 chip found on the AdLib card, as well as a component for playing and recording digital samples. Creative aggressively marketed the "stereo" aspect of the Sound Blaster (only the C/MS chips were capable of stereo, not the complete product), going so far as to call the sound-producing microcontroller a "DSP" in the hope of associating the product with a digital signal processor (the DSP could encode/decode ADPCM in real time, but otherwise had no other DSP-like qualities). Monaural Sound Blaster cards were introduced in 1989, and Sound Blaster Pro stereo cards followed in 1992. The 16-bit Sound Blaster AWE32 added wavetable MIDI, and the AWE64 offered 32 and 64 voices. Sound Blaster achieved dominant control of the PC audio market by 1992, the same year that its main competitor, Ad Lib, Inc., went bankrupt. In the mid-1990s, following the launch of the Sound Blaster 16 and related products, Creative Technology's audio revenue grew from US$40 million to nearly US$1 billion annually.
The sixth generation of Sound Blaster sound cards introduced SBX Pro Studio, a feature that restores the highs and lows of compressed audio files, enhancing detail and clarity. SBX Pro Studio also offers user settings for controlling bass and virtual surround. === Creative X-Fi Sonic Carrier === The Creative X-Fi Sonic Carrier, launched in January 2016, consists of a long main unit and a subwoofer that houses 17 drivers in an 11.2.4 speaker configuration. It incorporates Dolby Atmos surround processing, and also features Creative's EAX 15.2 Dimensional Audio to extract, enhance and upscale sound from legacy material. The audio and video engines of the X-Fi Sonic Carrier are powered by 7 processors with a total of 14 cores. It supports both local and streaming video content at up to 4K 60 fps, as well as 15.2 channels of high-resolution audio playback. It also comes with 3 distinct wireless technologies that allow multi-room Wi-Fi, Bluetooth, and a zero-latency speaker-to-speaker link to up to 4 subwoofer units. === Other products === Headphones Gaming headsets Portable Bluetooth speakers Creative GigaWorks ProGamer G500 speakers === Discontinued products === CD and DVD players, drives, and controller cards Graphics cards Prodikeys, a computer keyboard/musical keyboard combination Optical mice and keyboards Vado HD Creative Zen and Creative MuVo portable media players == See also == AdLib Aureal Semiconductor Ensoniq Environmental audio extensions Sensaura Yamaha === Divisions and brands === Cambridge SoundWorks Creative MuVo Creative NOMAD Creative ZEN E-mu Systems/Ensoniq Sound Blaster Sensaura SoundFont ZiiLABS, formerly 3Dlabs == References ==
https://en.wikipedia.org/wiki/Creative_Technology
Accenture plc is a global multinational professional services company, originating in the United States and headquartered in Dublin, Ireland, that specializes in information technology (IT) services and management consulting. It was founded in 1989. A Fortune Global 500 company, it reported revenues of $64.9 billion in 2024. == History == === Formation and early years === Accenture began as the business and technology consulting division of the accounting firm Arthur Andersen in the early 1950s. The division conducted a feasibility study for General Electric to install a computer at Appliance Park in Louisville, Kentucky, which led to GE's installation of a UNIVAC I computer and printer, believed to be the first commercial use of a computer in the United States. === Split from Arthur Andersen === In 1989, Arthur Andersen and Andersen Consulting became separate units of Andersen Worldwide Société Coopérative (AWSC). Throughout the 1990s, tensions grew between the two units. Andersen Consulting was paying Arthur Andersen up to 15% of its profits each year (a provision of the 1989 split was that the more profitable unit, whether AA or AC, pay the other the 15 percent), while at the same time Arthur Andersen was competing with Andersen Consulting through its own newly established business consulting service line, Arthur Andersen Business Consulting. This dispute came to a head in 1998, when Andersen Consulting put the 15% transfer payment for that year and future years into escrow and issued a claim for breach of contract against AWSC and Arthur Andersen. In 2000, as a result of arbitration, Andersen Consulting broke all contractual ties with AWSC and Arthur Andersen. As part of the arbitration settlement, Andersen Consulting paid $1.2 billion to Arthur Andersen. On 1 January 2001, Andersen Consulting adopted the name "Accenture". The word "Accenture" was derived from "accent on the future".
The name "Accenture" was submitted by Kim Petersen, a Danish employee in the company's Oslo, Norway office. Petersen hoped that the name would not be offensive in any country in which Accenture operates, because the word itself was meaningless. === Incorporation and public listing === Accenture was incorporated in Bermuda in 2001. On 19 July 2001, Accenture's initial public offering (IPO) was priced at $14.50 per share, and the shares began trading on the New York Stock Exchange. Because of the split from Andersen, Accenture avoided prosecution on June 16, 2002, when the U.S. Securities and Exchange Commission prosecuted Arthur Andersen for obstructing justice and accounting fraud. === Reincorporation in Ireland === On 26 May 2009, Accenture announced that its board of directors unanimously approved changing the company's place of incorporation from Bermuda to Ireland. == Services and operations == Accenture's business is organized into five segments: Strategy and Consulting; Technology; Operations; Accenture Song (formerly Interactive); and Industry X. The company provides services to clients in various industries, including communications, media and technology, financial services, health and public service, consumer products, and resources. == Corporate affairs == === Leadership === William D. Green became CEO in September 2004. Green was succeeded by Pierre Nanterme in January 2011. In January 2019, Nanterme stepped down from his position, citing health reasons. Chief Financial Officer David Rowland was named interim CEO. Julie Sweet was appointed CEO in September 2019. === Employees === As of 2024, Accenture reported having approximately 774,000 employees. === Finances === The financial results were as follows: == Controversies == === Incorporation in a tax haven === In October 2002, the Congressional General Accounting Office (GAO) identified Accenture as one of four publicly traded federal contractors that were incorporated in a tax haven.
The other three, unlike Accenture, were incorporated in the United States before they re-incorporated in a tax haven, thereby lowering their US taxes. Critics, such as former CNN journalist Lou Dobbs, characterized Accenture's decision to incorporate in Bermuda as a US tax-avoidance ploy, because they viewed Accenture as having been a US-based company. The GAO itself did not characterize Accenture as having been a US-based company; it stated that "prior to incorporating in Bermuda, Accenture was operating as a series of related partnerships and corporations under the control of its partners through the mechanism of contracts with a Swiss coordinating entity." === UK NHS technology project === Accenture engaged in an IT overhaul project for the British National Health Service (NHS) in 2003, making headlines when it withdrew from the contract in 2006 over disputes related to delays and cost overruns. The government of the United Kingdom ultimately abandoned the project five years later for the same reasons. === Tax avoidance === In 2012, it was revealed that Accenture was paying only 3.5% tax in Ireland, as opposed to the average rate of 24% it would pay if instead based in the UK. === US immigration === In June 2018, Accenture was contracted to recruit 7,500 U.S. Customs and Border Protection officers. Under the $297 million contract, Accenture had been charging the US government nearly $40,000 per hire, more than the annual salary of the average officer. According to a report published by the DHS Office of Inspector General in December 2018, Accenture had been paid $13.6 million through the first ten months of the contract but had hired only two agents against a contract goal of 7,500 hires over five years. The report was issued as a 'management alert', indicating an issue requiring immediate attention, and stated that "Accenture has already taken longer to deploy and delivered less capability than promised". The contract was terminated in 2019.
=== Working conditions === In February 2019, contractors at Accenture's Austin, Texas, location who performed content moderation tasks for Facebook wrote an open letter to Facebook describing poor working conditions and a "Big Brother environment" that included restricted work breaks and strict non-disclosure agreements. A counselor in the Austin office stated that the content moderators could develop post-traumatic stress disorder as a result of the work, which included evaluating videos and images containing graphic violence, hate speech, animal abuse, and child abuse. Accenture issued a statement saying the company offers opportunities for moderators to advance, increase their wages, and provide input "to help shape their experience." In February 2025, Vice News spoke to a former Accenture employee under condition of anonymity. His project on the WhatsApp team for Meta required him to sift through images and decide whether or not they depicted child sexual abuse, which he coped with "through a lot of substance abuse". The former employee claimed to have witnessed multiple missed opportunities to protect children, and alleged that one colleague had previously been arrested for possessing child abuse materials. In a statement, Accenture said it is "committed to helping companies keep their platforms safe through services such as content, advertising and compliance reviews". === Tax practices === In February 2019, Accenture paid $200 million to Swiss authorities over tax claims related to transfer pricing arrangements. === Data breach === In August 2021, Accenture confirmed a data breach due to a ransomware attack, which reportedly led to the theft of six terabytes of data. === Employment practices === In March 2023, Accenture announced plans to eliminate 19,000 of its 738,000 jobs over 18 months, citing reduced revenue forecasts.
In February 2025, Accenture made significant changes to its diversity and inclusion policies, including discontinuing global employee representation goals and specific demographic-focused career development programs. The company also paused participation in external diversity benchmarking surveys and reevaluated its external partnerships. According to media analysis, this was done to comply with President Trump's executive order and to avoid losing billions of dollars of work with US federal agencies. == See also == List of acquisitions by Accenture == References == == External links == Media related to Accenture at Wikimedia Commons Official website Business data for Accenture plc:
https://en.wikipedia.org/wiki/Accenture
Avid Technology, Inc. is a global technology company headquartered in Burlington, Massachusetts, founded in August 1987 by Bill Warner. It develops software, SaaS, and hardware products used in media and entertainment. == History == Avid was founded by Bill Warner, a former marketing manager at Apollo Computer. A prototype of its first non-linear editing system, the Avid/1 Media Composer, was shown at the National Association of Broadcasters (NAB) convention in April 1988. The Avid/1 was based on an Apple Macintosh II computer, with special hardware and software of Avid's design installed. The Avid/1 was called "the biggest shake-up in editing since Méliès played with time and sequences in the early 1900s". By the early 1990s, Avid products began to replace such tools as the Moviola, Steenbeck, and KEM flatbed editors, allowing editors to handle their film creations with greater ease. The first feature film edited using the Avid was Let's Kill All the Lawyers in 1992, directed by Ron Senkowski. The film was edited at a 30 fps NTSC rate, then used Avid MediaMatch to generate a negative cutlist from the EDL. The first feature film edited natively at 24 fps with what was to become the Avid Film Composer was Emerson Park. The first studio film to be edited at 24 fps was Lost in Yonkers, directed by Martha Coolidge. By 1994 only three feature films had used the new digital editing system; by 1995 dozens had switched to Avid, signaling the beginning of the end of cutting celluloid. For the 1996 film The English Patient (which also won best picture), Walter Murch accepted the Academy Award for editing, which he had done on the Avid. This was the first editing Oscar awarded to a digitally edited film (although the final print was still created with traditional negative cutting). In 1994 Avid introduced Open Media Framework (OMF) as an open standard file format for sharing media and related metadata. Over the years, Avid has released numerous freeware versions of Media Composer.
Initially these included Avid Free DV, a free edition of Media Composer with limited functionality; Avid Xpress DV, a consumer edition of Media Composer; and then Avid Xpress Pro, a prosumer edition of Media Composer. These editions were discontinued in 2008 as the flagship Media Composer was lowered in price. Later, Avid released Media Composer | First, which included a large portion of Media Composer's functionality but limited its exporting workflows to publishing finished videos directly to web services like YouTube. On March 29, 1999, Avid Technology, Inc. adjusted the amount originally allocated to IPR&D and restated its third-quarter 1998 consolidated financial statements accordingly, in light of the SEC's views. In February 2018, Avid appointed Jeff Rosica as CEO after terminating Louis Hernandez Jr., who was accused of workplace misconduct. In November 2023, Avid Technology was acquired by an affiliate of STG for $1.4 billion, a transaction that delisted Avid from the public stock exchange and took the company private. In April 2024, Avid appointed Wellford Dillard as CEO, succeeding Jeff Rosica. == Products == Media Composer Pro Tools Sibelius == Awards == 1993: The National Academy of Television Arts & Sciences awarded Avid Technology and all of the company's initial employees a technical Emmy Award for Outstanding Engineering Development for the Avid Media Composer video editing system. 1999: At the 71st Academy Awards, Avid Technology Inc. was awarded an Oscar for the concept, system design and engineering of the Avid Film Composer for motion picture editing, which was accepted by founder Bill Warner. == Acquisitions == == See also == List of music software List of video editing software List of scorewriters == References == == External links == Official website
https://en.wikipedia.org/wiki/Avid_Technology
An information technology audit, or information systems audit, is an examination of the management controls within an information technology (IT) infrastructure and business applications. The evaluation of the evidence obtained determines if the information systems are safeguarding assets, maintaining data integrity, and operating effectively to achieve the organization's goals or objectives. These reviews may be performed in conjunction with a financial statement audit, internal audit, or other form of attestation engagement. IT audits are also known as automated data processing audits (ADP audits) and computer audits. They were formerly called electronic data processing audits (EDP audits). == Purpose == An IT audit is different from a financial statement audit. While a financial audit's purpose is to evaluate whether the financial statements present fairly, in all material respects, an entity's financial position, results of operations, and cash flows in conformity with standard accounting practices, the purpose of an IT audit is to evaluate the system's internal control design and effectiveness. This includes, but is not limited to, efficiency and security protocols, development processes, and IT governance or oversight. Installing controls is necessary but not sufficient to provide adequate security. People responsible for security must consider whether the controls are installed as intended, whether they are effective, and whether any breach in security has occurred and, if so, what actions can be taken to prevent future breaches. These inquiries must be answered by independent and unbiased observers, who perform the task of information systems auditing. In an Information Systems (IS) environment, an audit is an examination of information systems, their inputs, outputs, and processing. As technology continues to advance and become more prevalent in our lives and in businesses, IT threats and disruptions increase along with it.
These impact every industry and come in different forms such as data breaches, external threats, and operational issues. These risks and the need for high levels of assurance increase the need for IT audits to check businesses' IT system performance and to lower the probability and impact of technology threats and disruptions. The primary function of an IT audit is to evaluate the systems that are in place to guard an organization's information. Specifically, information technology audits are used to evaluate the organization's ability to protect its information assets and to properly dispense information to authorized parties. The IT audit aims to evaluate the following: Will the organization's computer systems be available for the business at all times when required? (known as availability) Will the information in the systems be disclosed only to authorized users? (known as security and confidentiality) Will the information provided by the system always be accurate, reliable, and timely? (measures the integrity) In this way, the audit hopes to assess the risk to the company's valuable asset (its information) and establish methods of minimizing those risks. More specifically, organizations should look into three major requirements, confidentiality, integrity, and availability, to label their needs for security and trust in their IT systems. Confidentiality: The purpose is to keep private information restricted from unauthorized users. Integrity: The purpose is to guarantee that information is changed only in an authorized manner. Availability: The purpose is to ensure that information and systems are available to authorized users whenever required. These three requirements should be emphasized in every industry and every organization with an IT environment, but the requirements and the controls that support them will vary. == Classification of IT audits == Various authorities have created differing taxonomies to distinguish the various types of IT audits.
Goodman & Lawless state that there are three specific systematic approaches to carrying out an IT audit: Technological innovation process audit. This audit constructs a risk profile for existing and new projects. It assesses the length and depth of the company's experience in its chosen technologies, as well as its presence in relevant markets, the organization of each project, and the structure of the portion of the industry that deals with the project or product. Innovative comparison audit. This audit is an analysis of the innovative abilities of the company being audited, in comparison to its competitors. This requires examination of the company's research and development facilities, as well as its track record in actually producing new products. Technological position audit. This audit reviews the technologies that the business currently has and those that it needs to add. Technologies are characterized as being either "base", "key", "pacing" or "emerging". Others describe the spectrum of IT audits with five categories of audits: Systems and Applications: An audit to verify that systems and applications are appropriate, are efficient, and are adequately controlled to ensure valid, reliable, timely, and secure input, processing, and output at all levels of a system's activity. System and process assurance audits form a subtype, focusing on business process-centric business IT systems; such audits have the objective of assisting financial auditors. Information Processing Facilities: An audit to verify that the processing facility is controlled to ensure timely, accurate, and efficient processing of applications under normal and potentially disruptive conditions. Systems Development: An audit to verify that the systems under development meet the objectives of the organization, and to ensure that the systems are developed in accordance with generally accepted standards for systems development.
Management of IT and Enterprise Architecture: An audit to verify that IT management has developed an organizational structure and procedures to ensure a controlled and efficient environment for information processing. Client/Server, Telecommunications, Intranets, and Extranets: An audit to verify that telecommunications controls are in place on the client (the computer receiving services), the server, and the network connecting the clients and servers. Some lump all IT audits into one of only two types: "general control review" audits or "application control review" audits. A number of IT audit professionals from the information assurance realm consider there to be three fundamental types of controls regardless of the type of audit to be performed, especially in the IT realm. Many frameworks and standards try to break controls into different disciplines or arenas, terming them "security controls", "access controls", or "IA controls" in an effort to define the types of controls involved. At a more fundamental level, these controls can be shown to consist of three types: protective/preventative controls, detective controls and reactive/corrective controls. In an IS environment, there are two types of auditors and audits: internal and external. IS auditing is usually a part of accounting internal auditing, and is frequently performed by corporate internal auditors. An external auditor reviews the findings of the internal audit as well as the inputs, processing and outputs of information systems. The external audit of information systems is frequently a part of the overall external audit performed by a Certified Public Accountant (CPA) firm, and is primarily conducted by certified information systems auditors, such as holders of the CISA certification from ISACA (the Information Systems Audit and Control Association, USA), the Information System Auditor (ISA) certification from the ICAI (Institute of Chartered Accountants of India), and other certifications from reputable IS audit organizations. IS auditing considers all the potential hazards and controls in information systems. It focuses on issues like operations, data integrity, software applications, security, privacy, budgets and expenditures, cost control, and productivity. Guidelines are available to assist auditors in their jobs, such as those from the Information Systems Audit and Control Association. == History of IT auditing == The concept of IT auditing was formed in the mid-1960s. Since that time, IT auditing has gone through numerous changes, largely due to advances in technology and the incorporation of technology into business. Currently, many companies are IT-dependent, relying on information technology to operate their business, e.g. telecommunications or banking companies. For other types of business, IT plays a large part in the company, including applying workflow instead of paper request forms, using application controls instead of manual controls (which are more reliable), or implementing an ERP application to facilitate the organization through a single application. Accordingly, the importance of IT auditing has constantly increased. One of the most important roles of the IT audit is to audit critical systems in order to support the financial audit or to support specific regulations, e.g. SOX. == Emerging issues == There are also new audits being imposed by various standards boards which are required to be performed, depending upon the audited organization, which will affect IT and ensure that IT departments are performing certain functions and controls appropriately to be considered compliant. Examples of such audits are SSAE 16, ISAE 3402, and ISO 27001:2013. === Web presence audits === The extension of the corporate IT presence beyond the corporate firewall (e.g.
the adoption of social media by the enterprise along with the proliferation of cloud-based tools like social media management systems) has elevated the importance of incorporating web presence audits into the IT/IS audit. The purposes of these audits include ensuring the company is taking the necessary steps to: rein in use of unauthorized tools (e.g. "shadow IT") minimize damage to reputation maintain regulatory compliance prevent information leakage mitigate third-party risk minimize governance risk The use of departmental or user-developed tools has been a controversial topic in the past. However, with the widespread availability of data analytics tools, dashboards, and statistical packages, users no longer need to stand in line waiting for IT resources to fulfill seemingly endless requests for reports. The task of IT is to work with business groups to make authorized access and reporting as straightforward as possible. To use a simple example, users should not have to do their own data matching so that pure relational tables are linked in a meaningful way. IT needs to make non-normalized, data warehouse-type files available to users so that their analysis work is simplified. For example, some organizations will refresh a warehouse periodically and create easy-to-use "flat" tables which can be easily uploaded into a package such as Tableau and used to create dashboards. === Enterprise communications audits === The rise of VOIP networks, and issues like BYOD and the increasing capabilities of modern enterprise telephony systems, cause increased risk of critical telephony infrastructure being misconfigured, leaving the enterprise open to the possibility of communications fraud or reduced system stability. Banks, financial institutions, and contact centers typically set up policies to be enforced across their communications systems. The task of auditing that the communications systems are in compliance with the policy falls on specialized telecom auditors.
These audits ensure that the company's communication systems: adhere to stated policy follow policies designed to minimize the risk of hacking or phreaking maintain regulatory compliance prevent or minimize toll fraud mitigate third-party risk minimize governance risk Enterprise communications audits are also called voice audits, but the term is increasingly deprecated as communications infrastructure becomes data-oriented and data-dependent. The term "telephony audit" is also deprecated because modern communications infrastructure, especially when dealing with customers, is omni-channel, with interaction taking place across multiple channels, not just over the telephone. One of the key issues that plagues enterprise communication audits is the lack of industry-defined or government-approved standards. IT audits are built on the basis of adherence to standards and policies published by organizations such as NIST and PCI, but the absence of such standards for enterprise communications audits means that these audits have to be based on an organization's internal standards and policies, rather than industry standards. As a result, enterprise communications audits are still done manually, with random sampling checks. Policy audit automation tools for enterprise communications have only recently become available. === Ethical Dilemmas in IT Audits === The use of artificial intelligence (AI) in IT audits is growing rapidly: a 2015 World Economic Forum report projected that 30% of all corporate audits would be conducted using AI by 2025. AI in IT audits raises many ethical issues. The use of artificial intelligence causes unintended biases in results. An issue that AI faces in completing IT audits for corporations is that unintended biases can occur as the AI filters through data. AI does not have a human element or the ability to understand the different situations in which certain data is expected or not expected.
AI only understands the data it has seen before and is therefore unable to evolve given each unique situation. This causes unintended biases, and therefore unintended consequences, if the AI systems are given too much trust and are not carefully monitored by the human eye. As a result, ethical, legal, and economic issues arise.
Technology replacing the role of humans
Big 4 firms have invested significant amounts of money in emerging technologies in the IT audit space. AI is now being used in assurance practices, performing tasks such as “auditing and accounting procedures such as review of general ledgers, tax compliance, preparing work-papers, data analytics, expense compliance, fraud detection, and decision-making.” This essentially replaces the need for auditors and relegates those who work in assurance to roles as “overseers” of the technology. However, firms still need auditors to perform analysis on the AI results of the IT audit. Auditors who do not understand the algorithms being utilized in the audit can allow mistakes to be made by these imperfect programs. Thus, auditors with extensive tech backgrounds and degrees in technology are highly coveted by firms utilizing AI to perform audits.
== Effect of IT Audit on Companies and Financial Audits ==
Globalization, in combination with the growth of information technology systems, has caused companies to shift to an increasingly digitized working environment. Advantages provided by these systems include a reduction in working time, the ability to test large amounts of data, reduced audit risk, and more flexible and complete analytical information. With the time saved, auditors are able to implement additional audit tests, leading to a great improvement in the audit process overall.
The use of computer-assisted audit techniques (CAATs) has allowed companies to examine larger samples of data and to review all transactions more thoroughly, allowing the auditor to test and better understand any issues within the data. The use of IT systems in audits has transformed the way auditors accomplish important audit functions such as the management of databases, risk assurance and controls, and even governance and compliance. In addition, IT audit systems improve operational efficiency and aid in decision making that would otherwise be left to manual calculation. IT systems help to reduce human error in audits; while they do not fully eliminate it, they have proven helpful in audits done by the Big 4 and small firms alike. These systems have greatly reduced the margin of error on audits and provide better insight into the data being analyzed. As a result of the increased use of IT systems in audits, authoritative bodies such as the American Institute of Certified Public Accountants (AICPA) and the Information Systems Audit and Control Association (ISACA) have established guidance on how to properly use IT systems to perform audits. Auditors must now adhere to the established guidelines when utilizing IT systems in audits.
== Benefits of Utilizing IT Systems on Financial Audits ==
The use of IT systems and AI techniques on financial audits is starting to show substantial benefits for leading accounting firms. In a study done by one of the Big 4 accounting firms, the use of IT systems and AI techniques is expected to generate an increase of $6.6 trillion in revenue as a result of increased productivity. As a result, leading auditing firms are making enormous investments, with the goal of increasing productivity and therefore revenue, through the development or outsourcing of IT systems and AI techniques to assist in financial audits.
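One classic CAAT-style test of the kind described above is a leading-digit (Benford's law) screen over transaction amounts. The sketch below is illustrative only: the amounts are synthetic, and the 0.15 deviation cutoff is an arbitrary choice made for the example rather than an audit standard.

```python
import math
from collections import Counter

def leading_digit(n):
    # First significant digit of a positive whole-unit amount.
    return int(str(abs(n))[0])

def benford_deviation(amounts):
    # Compare observed leading-digit frequencies with those predicted by
    # Benford's law; large gaps flag populations worth deeper testing.
    counts = Counter(leading_digit(a) for a in amounts)
    total = len(amounts)
    deviations = {}
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)   # Benford expected frequency
        observed = counts.get(d, 0) / total
        deviations[d] = observed - expected
    return deviations

# Synthetic transaction amounts, for illustration only.
amounts = [123, 187, 1045, 29, 3120, 14, 162, 118, 905, 1310]
dev = benford_deviation(amounts)
flagged = [d for d, gap in dev.items() if abs(gap) > 0.15]
print(flagged)  # digit 1 is over-represented in this synthetic sample
```

A real engagement would run this over the full transaction population and apply a statistical test (such as chi-squared) rather than a fixed cutoff.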
PwC, one of the biggest auditing firms in the world, has identified three different types of IT systems and AI techniques that firms can develop and implement to achieve increased revenue and productivity. The first type of system plays a supplemental role in the human auditor's decision-making. This allows the human auditor to retain autonomy over decisions and to use the technology to support and enhance their ability to perform accurate work, ultimately saving the firm in productivity costs. Next, PwC states that systems with problem-solving abilities are imperative to producing the most accurate results. PwC recognizes the increased margin for error due to unintended biases, and thus the need to create systems that are able to adapt to different scenarios. This type of system requires decision-making to be shared between the human auditor and the IT system to produce the maximum output, allowing the system to take over the computing work that could not be done by a human auditor alone. Finally, PwC recognizes that there are scenarios where technology needs to have the autonomy of decision-making and act independently. This allows human auditors to focus on more important tasks while the technology takes care of time-consuming tasks that do not require human attention. The utilization of IT systems and AI techniques on financial audits extends beyond the goal of maximized productivity and increased revenue. Firms that utilize these systems to assist in the completion of audits are able to identify pieces of data that may constitute fraud with higher efficiency and accuracy. For example, drones have been approved by all of the Big 4 firms to assist in obtaining more accurate inventory counts, while voice and facial recognition are aiding firms in fraud cases.
== See also == Electronic data processing === Computer forensics === Computer forensics Data analysis === Operations === Helpdesk and incident reporting auditing Change management auditing Disaster recovery and business continuity auditing ISAE 3402 === Miscellaneous === XBRL assurance === Irregularities and illegal acts === AICPA Standard: SAS 99 Consideration of Fraud in a Financial Statement Audit Computer fraud case studies == References == == External links == A career as Information Systems Auditor Archived 2007-07-12 at the Wayback Machine, by Avinash Kadam (Network Magazine) Federal Financial Institutions Examination Council (FFIEC) The need for CAAT Technology Open Security Architecture- Controls and patterns to secure IT systems American Institute of Certified Public Accountants (AICPA) IT Services Library (ITIL)
https://en.wikipedia.org/wiki/Information_technology_audit
Military technology is the application of technology for use in warfare. It comprises the kinds of technology that are distinctly military in nature and not civilian in application, usually because they lack useful or legal civilian applications, or are dangerous to use without appropriate military training. The line is porous; military inventions have been brought into civilian use throughout history, with sometimes minor modification if any, and civilian innovations have similarly been put to military use. Military technology is usually researched and developed by scientists and engineers specifically for use in battle by the armed forces. Many new technologies came as a result of the military funding of science. On the other hand, the theories, strategies, concepts and doctrines of warfare are studied under the academic discipline of military science. Armament engineering is the design, development, testing and lifecycle management of military weapons and systems. It draws on the knowledge of several traditional engineering disciplines, including mechanical engineering, electrical engineering, mechatronics, electro-optics, aerospace engineering, materials engineering, and chemical engineering. == History == This section is divided into the broad cultural developments that affected military technology. === Ancient technology === The first use of stone tools may have begun during the Paleolithic Period. The earliest stone tools are from the site of Lomekwi, Turkana, dating from 3.3 million years ago. Stone tools diversified through the Pleistocene Period, which ended ~12,000 years ago. The earliest evidence of warfare between two groups is recorded at the site of Nataruk in Turkana, Kenya, where human skeletons with major traumatic injuries to the head, neck, ribs, knees and hands, including an embedded obsidian bladelet on a skull, are evidence of inter-group conflict between groups of nomadic hunter-gatherers 10,000 years ago. 
Humans entered the Bronze Age as they learned to smelt copper into an alloy with tin to make weapons. In Asia, where copper-tin ores are rare, this development was delayed until trading in bronze began in the third millennium BCE. In the Middle East and Southern European regions, the Bronze Age follows the Neolithic period, but in other parts of the world, the Copper Age is a transition from the Neolithic to the Bronze Age. Although the Iron Age generally follows the Bronze Age, in some areas the Iron Age intrudes directly on the Neolithic from outside the region, with the exception of Sub-Saharan Africa, where it was developed independently. The first large-scale use of iron weapons began in Asia Minor around the 14th century BCE and in Central Europe around the 11th century BCE, followed by the Middle East (about 1000 BCE) and India and China. The Assyrians are credited with the introduction of horse cavalry in warfare and the extensive use of iron weapons by 1100 BCE. Assyrians were also the first to use iron-tipped arrows.
=== Post-classical technology ===
The Wujing Zongyao (Essentials of the Military Arts), written by Zeng Gongliang, Ding Du, and others at the order of Emperor Renzong around 1043 during the Song dynasty, illustrates the era's focus on advancing intellectual issues and military technology due to the significance of warfare between the Song and the Liao, Jin, and Yuan to their north. The book covers topics of military strategy, training, and the production and employment of advanced weaponry. Advances in military technology aided the Song dynasty in its defense against hostile neighbors to the north.
The flamethrower found its origins in Byzantine-era Greece, employing Greek fire (a chemically complex, highly flammable petrol fluid) in a device with a siphon hose by the 7th century.: 77  The earliest reference to Greek Fire in China was made in 917, written by Wu Renchen in his Spring and Autumn Annals of the Ten Kingdoms.: 80  In 919, the siphon projector-pump was used to spread the 'fierce fire oil' that could not be doused with water, as recorded by Lin Yu in his Wuyue Beishi, hence the first credible Chinese reference to the flamethrower employing the chemical solution of Greek fire (see also Pen Huo Qi).: 81  Lin Yu mentioned also that the 'fierce fire oil' derived ultimately from one of China's maritime contacts in the 'southern seas', Arabia Dashiguo.: 82  In the Battle of Langshan Jiang in 919, the naval fleet of the Wenmu King from Wuyue defeated a Huainan army from the Wu state; Wenmu's success was facilitated by the use of 'fire oil' ('huoyou') to burn their fleet, signifying the first Chinese use of gunpowder in a battle.: 81–83  The Chinese applied the use of double-piston bellows to pump petrol out of a single cylinder (with an upstroke and downstroke), lit at the end by a slow-burning gunpowder match to fire a continuous stream of flame.: 82  This device was featured in description and illustration of the Wujing Zongyao military manuscript of 1044.: 82  In the suppression of the Southern Tang state by 976, early Song naval forces confronted them on the Yangtze River in 975. Southern Tang forces attempted to use flamethrowers against the Song navy, but were accidentally consumed by their own fire when violent winds swept in their direction.: 89  Although the destructive effects of gunpowder were described in the earlier Tang dynasty by a Daoist alchemist, the earliest developments of the gun barrel and the projectile-fire cannon were found in late Song China. 
The first art depiction of the Chinese 'fire lance' (a combination of a temporary-fire flamethrower and gun) was from a Buddhist mural painting of Dunhuang, dated circa 950. These 'fire-lances' were in widespread use by the early 12th century, featuring hollowed bamboo poles as tubes to fire sand particles (to blind and choke), lead pellets, bits of sharp metal and pottery shards, and finally large gunpowder-propelled arrows and rocket weaponry.: 220–221  Eventually, perishable bamboo was replaced with hollow tubes of cast iron, and so too did the terminology of this new weapon change, from 'fire-spear' huo qiang to 'fire-tube' huo tong.: 221  This ancestor to the gun was complemented by the ancestor to the cannon, what the Chinese referred to since the 13th century as the 'multiple bullets magazine erupter' bai zu lian zhu pao, a tube of bronze or cast iron that was filled with about 100 lead balls.: 263–264  The earliest known depiction of a gun is a sculpture from a cave in Sichuan, dating to 1128, that portrays a figure carrying a vase-shaped bombard, firing flames and a cannonball. However, the oldest extant archaeological discovery of a metal-barrel handgun is from the Chinese Heilongjiang excavation, dated to 1288.: 293  The Chinese also discovered the explosive potential of packing hollowed cannonball shells with gunpowder. Written later by Jiao Yu in his Huolongjing (mid-14th century), this manuscript recorded an earlier Song-era cast-iron cannon known as the 'flying-cloud thunderclap eruptor' (fei yun pi-li pao). As noted before, the change in terminology for these new weapons during the Song period was gradual. The early Song cannons were at first termed the same way as the Chinese trebuchet catapult.
A later Ming dynasty scholar known as Mao Yuanyi would explain this use of terminology and the true origins of the cannon in his text of the Wubei Zhi, written in 1628. The 14th-century Huolongjing was also one of the first Chinese texts to carefully describe the use of explosive land mines, which had been used by the late Song Chinese against the Mongols in 1277, and employed by the Yuan dynasty afterwards. The innovation of the detonated land mine was credited to one Luo Qianxia in the campaign of defense against the Mongol invasion by Kublai Khan.: 192  Later Chinese texts revealed that the Chinese land mine employed either a rip cord or a motion booby trap of a pin releasing falling weights that rotated a steel flint wheel, which in turn created sparks that ignited the train of fuses for the land mines.: 199  Furthermore, the Song employed the earliest known gunpowder-propelled rockets in warfare during the late 13th century,: 477  its earliest form being the archaic Fire Arrow. When the Northern Song capital of Kaifeng fell to the Jurchens in 1126, it was written by Xia Shaozeng that 20,000 fire arrows were handed over to the Jurchens in their conquest. An even earlier Chinese text, the Wujing Zongyao ("Collection of the Most Important Military Techniques"), written in 1044 by the Song scholars Zeng Gongliang and Yang Weide, described the use of three-spring or triple-bow arcuballistae that fired arrow bolts holding gunpowder packets near the head of the arrow.: 154  Going back even farther, the Wu Li Xiao Shi (1630, second edition 1664) of Fang Yizhi stated that fire arrows were presented to Emperor Taizu of Song (r. 960–976) in 960.
=== Modern technology ===
==== Armies ====
The Islamic gunpowder empires introduced numerous developed firearms, cannon and small arms. During the period of proto-industrialization, newly invented weapons were seen to be used in Mughal India.
Rapid development in military technology had a dramatic impact on armies and navies in the industrialized world in 1740–1914. For land warfare, cavalry faded in importance, while infantry was transformed by the use of highly accurate, more rapidly loading rifles and the use of smokeless powder. Machine guns were developed in the 1860s in Europe. Rocket artillery and the Mysorean rockets were pioneered by the Indian Muslim ruler Tipu Sultan, and the French introduced much more accurate rapid-fire field artillery. Logistics and communications support for land warfare dramatically improved with the use of railways and telegraphs. Industrialization provided a base of factories that could be converted to produce munitions, as well as uniforms, tents, wagons and essential supplies. Medical facilities were enlarged and reorganized based on improved hospitals and the creation of modern nursing, typified by Florence Nightingale in Britain during the Crimean War of 1854–56.
==== Naval ====
Naval warfare was transformed by many innovations, most notably the coal-based steam engine, highly accurate long-range naval guns, heavy steel armour for battleships, mines, and the introduction of the torpedo, followed by the torpedo boat and the destroyer. Coal after 1900 was eventually displaced by more efficient oil, but meanwhile navies with an international scope had to depend on a network of coaling stations to refuel. The British Empire provided them in abundance, as did the French Empire to a lesser extent. War colleges developed, as military theory became a specialty; cadets and senior commanders were taught the theories of Jomini, Clausewitz and Mahan, and engaged in tabletop war games. Around 1900, entirely new innovations such as submarines and airplanes appeared, and were quickly adapted to warfare by 1914. The British HMS Dreadnought (1906) incorporated so much of the latest technology in weapons, propulsion and armour that it at a stroke made all other battleships obsolescent.
==== Organization and finance ====
New financial tools were developed to fund the rapidly increasing costs of warfare, such as popular bond sales and income taxes, and the funding of permanent research centers. Many 19th-century innovations were largely invented and promoted by lone individuals with small teams of assistants, such as David Bushnell and the submarine, John Ericsson and the battleship, Hiram Maxim and the machine gun, and Alfred Nobel and high explosives. By 1900 the military began to realize that they needed to rely much more heavily on large-scale research centers, which needed government funding. They brought in leaders of organized innovation such as Thomas Edison in the U.S. and chemist Fritz Haber of the Kaiser Wilhelm Institute in Germany.
== Postmodern technology ==
The postmodern stage of military technology emerged in the 1940s, gaining recognition thanks to the high priority given during the Second World War to scientific and engineering research and development regarding nuclear weapons, radar, jet engines, proximity fuses, advanced submarines, aircraft carriers, and other weapons. This high priority continues into the 21st century. It involves the military application of advanced scientific research regarding nuclear weapons, jet engines, ballistic and guided missiles, radar, biological warfare, and the use of electronics, computers and software.
=== Space ===
During the Cold War, the world's two great superpowers – the Soviet Union and the United States of America – spent large proportions of their GDP on developing military technologies. The drive to place objects in orbit stimulated space research and started the Space Race. In 1957, the USSR launched the first artificial satellite, Sputnik 1. By the end of the 1960s, both countries regularly deployed satellites. Spy satellites were used by militaries to take accurate pictures of their rivals' military installations.
As time passed the resolution and accuracy of orbital reconnaissance alarmed both sides of the Iron Curtain. Both the United States and the Soviet Union began to develop anti-satellite weapons to blind or destroy each other's satellites. Laser weapons, kamikaze style satellites, as well as orbital cannons were researched with varying levels of success. Spy satellites were, and continue to be, used to monitor the dismantling of military assets in accordance with arms control treaties signed between the two superpowers. To use spy satellites in such a manner is often referred to in treaties as "national technical means of verification". The superpowers developed ballistic missiles to enable them to use nuclear weaponry across great distances. As rocket science developed, the range of missiles increased and intercontinental ballistic missiles (ICBM) were created, which could strike virtually any target on Earth in a timeframe measured in minutes rather than hours or days. To cover large distances ballistic missiles are usually launched into sub-orbital spaceflight. As soon as intercontinental missiles were developed, military planners began programmes and strategies to counter their effectiveness. === Mobilization === A significant portion of military technology is about transportation, allowing troops and weaponry to be moved from their origins to the front. Land transport has historically been mainly by foot, land vehicles have usually been used as well, from chariots to tanks. When conducting a battle over a body of water, ships are used. There are historically two main categories of ships: those for transporting troops, and those for attacking other ships. Soon after the invention of aeroplanes, military aviation became a significant component of warfare, though usually as a supplementary role. The two main types of military aircraft are bombers, which attack land- or sea-based targets, and fighters, which attack other aircraft. 
Military vehicles are land combat or transportation vehicles, excluding rail-based, which are designed for or in significant use by military forces. List of military vehicles List of armoured fighting vehicles List of tanks Military aircraft includes any use of aircraft by a country's military, including such areas as transport, training, disaster relief, border patrol, search and rescue, surveillance, surveying, peacekeeping, and (very rarely) aerial warfare. List of aircraft List of aircraft weapons Warships are watercraft for combat and transportation in and on seas and oceans. Submarines Complex masting and sail systems found on warships during the Age of Sail List of historical ship and boat types List of aircraft carriers List of submarine classes === Defence === Fortifications are military constructions and buildings designed for defence in warfare. They range in size and age from the Great Wall of China to a Sangar. List of fortifications List of forts === Sensors and communication === Sensors and communication systems are used to detect enemies, coordinate movements of armed forces and guide weaponry. Early systems included flag signaling, telegraph and heliographs. Laser guidance Missile guidance Norden Bombsight Proximity fuse Radar Satellite guidance in guidance weapons == Future technology == The Defense Advanced Research Projects Agency is an agency of the United States Department of Defense responsible for the development of new technologies for use by the military. DARPA leads the development of military technology in the United States and today, has dozens of ongoing projects; everything from humanoid robots to bullets that can change path before reaching their target. China has a similar agency. === Emerging territory === Current militaries continue to invest in new technologies for the future. Such technologies include cognitive radar, 5G cellular networks, microchips, semiconductors, and large scale analytic engines. 
Additionally, many militaries seek to improve current laser technology. For example, the Israel Defense Forces utilize laser technology to disable small enemy machinery, but seek to move to larger-scale capabilities in the coming years. Militaries across the world continue to perform research on autonomous technologies which allow for increased troop mobility or the replacement of live soldiers. Autonomous vehicles and robots are expected to play a role in future conflicts; this has the potential to decrease loss of life in future warfare. Observers of transhumanism note high rates of technological terms in military literature, but low rates for explicitly transhuman-related terms. Today's hybrid style of warfare also calls for investments in information technologies. Increased reliance on computer systems has incentivized nations to push for increased efforts at managing large-scale networks and having access to large-scale data. New strategies of cyber and hybrid warfare include network attacks, media analysis, and grass-roots media campaigns on channels such as blog posts.
==== Cyberspace ====
In 2011, the US Defense Department declared cyberspace a new domain of warfare; since then DARPA has begun a research project known as "Project X" with the goal of creating new technologies that will enable the government to better understand and map the cyber territory, ultimately giving the Department of Defense the ability to plan and manage large-scale cyber missions across dynamic network environments.
== See also ==
List of military inventions List of emerging military technologies Bellifortis, late medieval treatise on military technology. Materiel
== References ==
== Further reading ==
Andrade, Tonio. The Gunpowder Age: China, military innovation, and the rise of the West in world history (Princeton UP, 2016). Black, Jeremy. Tools of War (2007), covers 50 major inventions. Boot, Max.
War made new: technology, warfare, and the course of history, 1500 to today (Penguin, 2006). Chisholm, Hugh, ed. (1911). "Arms and Armour" . Encyclopædia Britannica. Vol. 2 (11th ed.). Cambridge University Press. pp. 582–590. Cockburn, Andrew, 'The A-10 saved my ass' (review of Andrew F. Krepinevich Jr., The Origins of Victory: How Disruptive Military Innovation Determines the Fates of Great Powers, Yale, May 2023, 549 pp., ISBN 978 0 300 23409 1), London Review of Books, vol. 46, no.46 (21 March 2024), pp. 39–41. The reviewer gives many examples of the military superiority of granting low-level commanders decision-making initiative, over the most expensive and technologically-advanced weaponry. "Money is lavished on advanced weapons systems whose effectiveness is questionable, and which are vastly expensive to maintain.... At any one time, 40 per cent of the US navy's attack submarines are out of commission for repairs.... Krepinevich... prefers to dwell on the urgent necessity of developing increasingly fantastical programmes: hypersonics, genetic engineering, quantum computing and of course AI.... All the wonders of precision targeting and comprehensive surveillance notwithstanding, the Houthi blockade of the Red Sea is as effectively disruptive as ever." (p. 41.) Dupuy, Trevor N. The evolution of weapons and warfare (1984), 350pp, cover 2000 BC to late 20th century. Ellis, John. The Social History of the Machine Gun (1986). Gabriel, Richard A., and Karen S. Metz. From Sumer to Rome: The Military capabilities of ancient armies (ABC-CLIO, 1991). Hacker, Barton (2005). "The Machines of War: Western Military Technology 1850–2000". History & Technology. 21 (3): 255–300. doi:10.1080/07341510500198669. S2CID 144113139. Levy, Jack S (1984). "The offensive/defensive balance of military technology: A theoretical and historical analysis". International Studies Quarterly. 28 (2): 219–238. doi:10.2307/2600696. JSTOR 2600696. McNeill, William H. 
The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000 (1984). Parker, Geoffrey. The Military Revolution: Military Innovation and the Rise of the West (1988). Steele, Brett D. and Tamara Dorland. Heirs of Archimedes: Science & the Art of War through the Age of Enlightenment (2005) 397 pp.
https://en.wikipedia.org/wiki/Military_technology
Financial technology (abbreviated as fintech) refers to the application of innovative technologies to products and services in the financial industry. This broad term encompasses a wide array of technological advancements in financial services, including mobile banking, online lending platforms, digital payment systems, robo-advisors, and blockchain-based applications such as cryptocurrencies. Financial technology companies include both startups and established technology and financial firms that aim to improve, complement, or replace traditional financial services.
== Evolution ==
The evolution of financial technology spans over a century, marked by significant technological innovations that have revolutionized the financial industry. While the application of technology to finance has deep historical roots, the term "financial technology" emerged in the late 20th century and gained prominence in the 1990s. The earliest documented use of the term dates back to 1967, appearing in an article in The Boston Globe titled "Fin-Tech New Source of Seed Money." This piece reported on a startup investment company established by former executives of Computer Control Company, aimed at providing venture capital and industry expertise to startups in the financial technology industry. However, the term did not gain popularity until the early 1990s, when Citicorp Chairman John Reed used it to describe the Financial Services Technology Consortium. This project, initiated by Citicorp, was designed to promote technological cooperation in the financial sector, marking a pivotal moment in the industry's collaborative approach to innovation. The financial technology ecosystem includes various types of companies. While startups developing new financial technologies or services are often associated with financial technology, the sector also encompasses established technology companies expanding into financial services and traditional financial institutions adopting new technologies.
This diverse landscape has led to innovations across multiple financial sectors, including banking, insurance, investment, and payment systems. Financial technology applications span a wide range of financial services. These include digital banking, mobile payments and digital wallets, peer-to-peer lending platforms, robo-advisors and algorithmic trading, insurtech, blockchain and cryptocurrency, regulatory technology, and crowdfunding platforms. == History == === Foundations === The late 19th century laid the groundwork for early financial technology with the development of the telegraph and transatlantic cable systems. These innovations transformed the transmission of financial information across borders, enabling faster and more efficient communication between financial institutions. A significant milestone in electronic money movement came with the establishment of the Fedwire Funds Service by the Federal Reserve Banks in 1918. This early electronic funds transfer system used telegraph lines to facilitate secure transfers between member banks, marking one of the first instances of electronic money movement. The 1950s ushered in a new era of consumer financial services. Diners Club International introduced the first universal credit card in 1950, a pivotal moment that would reshape consumer spending and credit. This innovation paved the way for the launch of American Express cards in 1958 and the BankAmericard (later Visa) in 1959, further expanding the credit card industry. === Digital revolution === The 1960s and 1970s marked the beginning of the shift from analog to digital finance, with several groundbreaking developments shaping the future of financial technology. In 1967, Barclays introduced the world's first ATM in London, revolutionizing access to cash and basic banking services. Inspired by vending machines, the ATM marked a significant step towards self-service banking. 
Financial technology infrastructure continued to evolve with the establishment of the Inter-bank Computer Bureau in the UK in 1968. This development laid the groundwork for the country's first automated clearing house system, eventually evolving into BACS (Bankers' Automated Clearing Services) to facilitate electronic funds transfers between banks. The world of securities trading was transformed in 1971 with the establishment of NASDAQ, the world's first digital stock exchange. NASDAQ's electronic quotation system represented a significant leap forward from the traditional open outcry system used in stock exchanges. Two years later, the founding of the SWIFT (Society for Worldwide Interbank Financial Telecommunication) standardized and secured communication between financial institutions globally. SWIFT's messaging system became the global standard for international money and security transfers. The introduction of electronic fund transfer systems, such as the ACH (Automated Clearing House) in the United States, facilitated faster and more efficient money transfers. The ACH network allowed for direct deposits, payroll payments, and electronic bill payments, significantly reducing the need for paper checks. === Rise of digital financial services === The 1980s and 1990s witnessed significant developments in financial technology, with the rise of digital financial services and the early stages of online banking. A major breakthrough came when Michael Bloomberg founded Innovative Market Systems (later Bloomberg L.P.) and introduced the Bloomberg Terminal. This innovation revolutionized how financial professionals accessed and analyzed market data, providing real-time financial market data, analytics, and news to financial institutions worldwide. Online banking emerged in the early 1980s, with the Bank of Scotland offering the first UK online banking service called Homelink. 
This service allowed customers to view statements, transfer money, and pay bills using their televisions and telephones. The late 1980s saw the development of EDI (Electronic Data Interchange) standards, allowing businesses to exchange financial documents electronically and streamlining B2B (business-to-business) transactions. A significant milestone in consumer digital banking came in 1994 when Stanford Federal Credit Union launched the first Internet banking website. This service initially allowed members to check account balances online, with bill pay functionality added in 1997. However, it was not until 1999 that the first state-chartered, FDIC-insured institution operating primarily online was established. First Internet Bank, founded by David Becker, marked a new era in online-only banking. === Dot-com era === The late 1990s and early 2000s marked a significant turning point in the evolution of financial technology, as numerous innovations emerged during the dot-com boom. One notable development was the rise of online trading platforms, with E-Trade, founded in 1982, leading the charge. In 1992, E-Trade became one of the first financial services companies to offer online trading to consumers, revolutionizing the way individuals interacted with the stock market. Another pivotal moment was the founding of PayPal in 1998. PayPal's success in creating a secure and user-friendly online payment system demonstrated the viability of digital payment solutions and paved the way for numerous subsequent financial technology startups. The early 2000s also saw the emergence of innovative business models in the financial services industry. WebBank, established in 1997, began offering a "rent-a-charter" model in 2005, providing the necessary banking infrastructure and regulatory compliance for financial technology startups to offer banking services without obtaining their own charters. 
This model would later prove crucial in enabling the growth of numerous financial technology companies. === Post-financial crisis === The 2008 global financial crisis served as a catalyst for the rapid growth of the financial technology industry, as declining trust in traditional financial institutions created opportunities for innovative, technology-driven solutions. The early days of the post-crisis era saw the emergence of digital currencies, with e-Gold serving as a precursor to the development of Bitcoin. While e-Gold, which allowed users to create accounts denominated in grams of gold and enable instant transfers, ultimately faced legal challenges and closure, it laid the foundation for future digital currencies. The invention of Bitcoin in 2008 by an anonymous creator using the pseudonym Satoshi Nakamoto marked a turning point in the evolution of digital currencies and decentralized finance. Bitcoin's innovative use of blockchain technology sparked a wave of development in the field of cryptocurrencies, opening up new possibilities for secure, transparent, and decentralized financial systems. As the financial technology landscape continued to evolve, new payment processing companies entered the market, offering developer-friendly APIs that dramatically simplified online payment integration. By lowering the barriers to entry for e-commerce and online financial services, these companies played a crucial role in enabling the growth of new financial technology startups and driving innovation in the sector. The partner banking model, which emerged in the early 2000s, gained significant traction in the post-crisis era. This model expanded beyond its initial "rent-a-charter" concept, evolving into more comprehensive partnerships between traditional banks and financial technology companies. 
These collaborations allowed for rapid innovation and market entry, as financial technology companies leveraged the regulatory compliance and infrastructure of established banks while bringing their own technological expertise and customer-centric approaches. This further accelerated the growth of the financial technology sector, enabling the proliferation of digital-first financial services. The maturation of this model paved the way for the rise of neobanks, which challenged traditional banking paradigms by offering fully digital experiences, redefining customer expectations in the banking sector. The increasing adoption of smartphones drove the development of mobile-first financial technology solutions. Square's introduction of a mobile card reader in 2009 enabled small businesses to accept credit card payments using smartphones, democratizing access to payment processing and highlighting the transformative potential of mobile technology in the financial services industry. The evolution of mobile payment systems continued with the launch of Google Wallet in 2011 and Apple Pay in 2014, which further popularized mobile payments and demonstrated the growing consumer demand for convenient, secure, and user-friendly payment solutions. This period also saw the rise of peer-to-peer (P2P) payment applications. These platforms revolutionized how individuals transfer money, enabling quick and easy transactions between users. By allowing fast, direct transfers through mobile devices, P2P payment apps significantly reduced the friction in personal financial transactions, making it simpler for people to split bills, share costs, or send money to friends and family. 
=== Accelerated growth of digital finance === The global COVID-19 pandemic, which began in early 2020, had a profound impact on the financial technology industry, accelerating the adoption of digital financial services and highlighting the importance of technology in ensuring the resilience and accessibility of financial systems. As lockdowns and social distancing measures forced businesses and consumers to rely more heavily on digital channels, financial technology solutions experienced a surge in demand. Mobile-first financial technology applications saw unprecedented growth during this period. Many trading platforms reported significant increases in new user accounts, with some seeing millions of new funded accounts added in the early months of the pandemic. Similarly, payment and money transfer apps experienced substantial user growth, with some platforms more than doubling their monthly active users over a three-year period, indicating a massive shift towards digital financial services. The events of 2020 also exposed the limitations of traditional financial institutions in meeting the needs of consumers and businesses in times of crisis. Financial technology companies, with their agile and technology-driven business models, were better positioned to respond to the challenges posed by the rapidly changing environment, offering innovative solutions for remote banking, contactless payments, and digital lending. During this period, venture capital valuations for financial technology companies soared, driven by low interest rates and a booming stock market. The surge in financial technology investments was marked by significant capital inflows, leading to higher valuations and more frequent exits via IPOs and SPACs. Several prominent financial technology companies achieved record-breaking valuations, further underscoring the sector's growth and investor confidence. 
The shift towards digital financial services during this period also accelerated the adoption of blockchain technology and cryptocurrencies. As central banks around the world explored the possibility of issuing digital currencies, the interest in decentralized finance and non-fungible tokens grew, opening up new avenues for innovation in the financial technology sector. The financial technology landscape in Africa is on the rise, with active companies reaching 1,263 in 2024, a significant increase from 1,049 in 2022 and 450 in 2020. Nigeria leads the financial technology sector, accounting for 28% of all financial technology companies on the continent. == Industry landscape == The financial technology industry includes a diverse range of financial services and technologies, categorized into several key areas. Many companies operate across multiple areas or create new niches that blur these distinctions. == Revenue models == Financial technology companies utilize various revenue models, often combining multiple approaches to diversify income streams. Transaction fees form a primary source of income for many financial technology businesses, particularly payment processors and cryptocurrency exchanges. These companies typically charge a percentage of each processed transaction. Some companies have expanded this model to include premium fees for services like instant payouts, catering to merchants who require immediate access to funds. Interchange fees represent another significant revenue stream, particularly for firms offering payment cards. Subscription and freemium models allow companies to offer basic services at no cost while charging for advanced features or premium tiers. This approach is common among digital banks and financial management platforms. In the business-to-business (B2B) sector, usage-based pricing is prevalent, especially for API services. 
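The fee-based models described above reduce to simple arithmetic. A minimal sketch follows; the percentage and fixed-charge rates are hypothetical, chosen only for illustration, and do not describe any real processor or card network.

```python
# Illustrative fintech revenue arithmetic. All rates are hypothetical.

def transaction_fee(amount: float, pct: float = 0.029, fixed: float = 0.30) -> float:
    """Fee for one processed payment: a percentage of the amount plus a fixed charge."""
    return round(amount * pct + fixed, 2)

def interchange_revenue(card_spend: float, interchange_pct: float = 0.015) -> float:
    """Interchange earned by a card issuer on total card spending."""
    return round(card_spend * interchange_pct, 2)

print(transaction_fee(100.00))      # fee on a $100 payment -> 3.2
print(interchange_revenue(10_000))  # interchange on $10,000 of spend -> 150.0
```

A processor combining models, as many companies do, would simply sum these streams, perhaps adding a premium fee for services such as instant payouts.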
Financial technology infrastructure providers often charge based on the volume of API calls or transactions processed, enabling other businesses to access specialized financial services without developing them internally. Interest-based revenue is crucial for many financial technology companies, particularly in the banking and lending sectors. Digital banks and investment platforms typically earn interest on customer deposits and cash balances. Lending platforms often combine interest revenue with loan sales, selling portions of their loan portfolios to other institutions or investors. Data-driven revenue models, while potentially lucrative, have faced increasing scrutiny and regulation. Some firms engage in data monetization, selling aggregated or anonymized user data to third parties. However, this practice has raised privacy concerns and regulatory challenges. A less controversial approach involves leveraging user data for targeted advertising and lead generation, earning revenue through product recommendations and referral fees while providing free services to users. Some revenue models, such as payment for order flow (PFOF) used by certain brokerage firms, occupy a regulatory gray area. While PFOF allows for commission-free trades, potentially benefiting retail investors, it has faced scrutiny due to concerns about conflicts of interest and best execution practices. == Controversies == As financial technology companies seek to disrupt traditional financial services, some have been criticized for prioritizing growth over compliance, security, and consumer protection. In a notable controversy, cryptocurrency exchange FTX collapsed in November 2022, facing accusations of deceptive practices, improper handling of client assets, and insufficient risk controls. Sam Bankman-Fried, FTX's founder and CEO, was later convicted of wire fraud, conspiracy, and money laundering. 
== See also == Artificial intelligence in finance Financial technology in Australia Smart contract Trade finance technology == References and notes == == Further reading == Teigland, R.; Siri, S.; Larsson, A.; Puertas, A. M.; Bogusz, C. I., eds. (2018). The Rise and Development of FinTech (Open Access): Accounts of Disruption from Sweden and Beyond. Routledge. ISBN 978-0815378501. Treu, Johannes (2022). "The Fintech Sensation – What is it about?" (PDF). Journal of International Business and Management. 5 (1): 1–19. doi:10.37227/JIBM-2021-11-2094. ISSN 2616-5163. Retrieved February 23, 2023. == External links == Media related to Financial technology at Wikimedia Commons
https://en.wikipedia.org/wiki/Financial_technology
The Missile Technology Control Regime (MTCR) is a multilateral export control regime. It is an informal political understanding among 35 member states that seek to limit the proliferation of missiles and missile technology. The regime was formed in 1987 by the G-7 industrialized countries. The MTCR seeks to limit the risks of proliferation of weapons of mass destruction (WMD) by controlling exports of goods and technologies that could make a contribution to delivery systems (other than manned aircraft) for such weapons. In this context, the MTCR places particular focus on rockets and unmanned aerial vehicles capable of delivering a payload of at least 500 kilograms (1,100 lb) to a range of at least 300 kilometres (190 mi) and on equipment, software, and technology for such systems. The MTCR is not a treaty and does not impose any legally binding obligations on partners (members). Rather, it is an informal political understanding among states that seek to limit the proliferation of missiles and missile technology. == Guidelines and the Equipment, Software and Technology Annex == The Regime’s documents include the MTCR Guidelines and the Equipment, Software and Technology Annex. The Guidelines define the purpose of the MTCR and provide the overall structure and rules to guide the member countries and those adhering unilaterally to the Guidelines. The Equipment, Software and Technology Annex is designed to assist in implementing export controls on MTCR Annex items. The Annex is divided into “Category I” and “Category II” items. It includes a broad range of equipment and technology, both military and dual-use, that are relevant to missile development, production, and operation. Partner countries exercise restraint in the consideration of all transfers of items contained in the Annex. All such transfers are considered on a case by case basis. Greatest restraint is applied to what are known as Category I items. 
These items include complete rocket systems (including ballistic missiles, space launch vehicles and sounding rockets) and unmanned air vehicle systems (including cruise missile systems, target and reconnaissance drones) with capabilities exceeding a 300 km/500 kg range/payload threshold; production facilities for such systems; and major sub-systems including rocket stages, re-entry vehicles, rocket engines, guidance systems and warhead mechanisms. The remainder of the annex is regarded as Category II, which includes complete rocket systems (including ballistic missile systems, space launch vehicles and sounding rockets) and unmanned air vehicles (including cruise missile systems, target drones, and reconnaissance drones) not covered in Category I, capable of a maximum range equal to or greater than 300 km. Also included are a wide range of equipment, material, and technologies, most of which have uses other than for missiles capable of delivering WMD. While still agreeing to exercise restraint, partners have greater flexibility in the treatment of Category II transfer applications. The MTCR Guidelines specifically state that the Regime is “not designed to impede national space programs or international cooperation in such programs as long as such programs could not contribute to delivery systems for weapons of mass destruction.” MTCR partners are nonetheless careful with space launch vehicle (SLV) equipment and technology transfers, since the technology used in an SLV is virtually identical to that used in a ballistic missile, which poses genuine potential for missile proliferation. == History == The Missile Technology Control Regime (MTCR) was established in April 1987 by the G7 countries: Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States. It was created to curb the spread of unmanned delivery systems for nuclear weapons, specifically systems which can carry a payload of 500 kilograms (1,100 lb) for 300 kilometres (190 mi). 
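The Category I threshold described above is a plain range/payload test, which can be sketched as a simple check. This is only an illustration of the 300 km/500 kg criterion; actual Annex classification also covers sub-systems, production facilities, and Category II items that a range/payload check alone cannot decide.

```python
# Sketch of the MTCR Category I range/payload threshold: complete systems
# capable of delivering at least a 500 kg payload to at least 300 km.
# Real Annex classification involves far more than this single test.

def exceeds_category_i_threshold(range_km: float, payload_kg: float) -> bool:
    return range_km >= 300 and payload_kg >= 500

print(exceeds_category_i_threshold(300, 500))  # True: at the threshold
print(exceeds_category_i_threshold(280, 500))  # False: range too short
```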
The MTCR applies to exports to members and non-members. An aide-mémoire attached to the agreement says that it does not supersede prior agreements, which NATO members say allows the supply of Category 1 systems between NATO members. An example is the export by the United States of Trident missiles to the United Kingdom for nuclear-weapons delivery. At the annual meeting in Oslo from 29 June to 2 July 1992, chaired by Sten Lundbo, it was agreed to expand the MTCR's scope to include nonproliferation of unmanned aerial vehicles (UAVs) for weapons of mass destruction. Prohibited materials are divided into two categories, which are outlined in the MTCR Equipment, Software, and Technology Annex. Thirty-five nations are members, with India joining on 27 June 2016. According to the Arms Control Association, the MTCR has been successful in helping to slow (or stop) several ballistic missile programs: "Argentina, Egypt, and Iraq abandoned their joint Condor II ballistic missile program. Brazil and South Africa also shelved or eliminated missile or space launch vehicle programs. Some former Warsaw Pact countries, such as Poland and the Czech Republic, destroyed their ballistic missiles, in part, to better their chances of joining MTCR." In October 1994, the MTCR member states established a "no undercut" policy: if one member denies the sale of technology to another country, all members must do likewise. China originally viewed the MTCR as a discriminatory measure by Western governments, which sold sophisticated military aircraft while restricting sales of competing ballistic missiles. It verbally agreed that it would adhere to the MTCR in November 1991, and included the assurance in a letter from its foreign minister in February 1992. China reiterated its pledge in the October 1994 US-China joint statement. In their October 1997 joint statement, the United States and China said that they agreed "to build on the 1994 Joint Statement on Missile Nonproliferation." 
The Missiles and Missile-related Items and Technologies Export Control List, a formal regulation, was issued in August 2002. The following year, the MTCR chair invited China to participate. China requested to join the MTCR in 2004, but membership was not offered because of concerns about the country's export-control standards. Israel, Romania and Slovakia have agreed to follow MTCR export rules, although they are not yet members. The regime has its limitations; member countries have been known to clandestinely violate the rules. Some of these countries, with varying degrees of foreign assistance, have deployed medium-range ballistic missiles which can travel more than 1,000 kilometres (620 mi) and are researching missiles with greater ranges; Israel and China have deployed strategic nuclear SLCMs, ICBMs and satellite-launch systems. Countries which are not MTCR members buy and sell on the global arms market; North Korea is currently viewed as the primary source of ballistic-missile proliferation in the world, and China has supplied ballistic missiles and technology to Pakistan. China supplied DF-3A IRBMs to Saudi Arabia in 1988 before it informally agreed to follow MTCR guidelines. Israel cannot export its Shavit space-launch system due to its non-member MTCR status, although the Clinton administration allowed an import waiver for US companies to buy the Shavit in 1994. Over 20 countries have ballistic missile systems. The International Code of Conduct against Ballistic Missile Proliferation (ICOC), also known as the Hague Code of Conduct, was established in 2002. The code, which calls for restraint and care in the proliferation of ballistic missile systems capable of delivering weapons of mass destruction, has 119 members. Its mission is similar to that of the MTCR, an export control group. India applied for membership in June 2015 with support from Russia, France and the United States, and became a member on 27 June 2016. Pakistan is not a member of the MTCR. 
Although it has expressed a desire to join the group, it has not submitted an application. The Pakistani government has pledged to adhere to MTCR guidelines, and analysts believe that the country is doing so. In 2020, the U.S. government announced that it would reinterpret its implementation of the MTCR to expedite sales of unmanned aerial vehicles (UAVs) to other countries. The revised U.S. policy will reinterpret how the MTCR applies to drones which travel at speeds under 800 kilometres per hour (500 mph), such as the Predator and Reaper drones (made by General Atomics) and the Global Hawk drone (made by Northrop Grumman). == Members == The MTCR has 35 members. Non-members pledging to adhere to MTCR include: == References == == External links == Missile Technology Control Regime website Sarah Chankin-Gould & Ivan Oelrich, "Double-edged Shield" (subscription required), Bulletin of the Atomic Scientists, May/June 2005.
https://en.wikipedia.org/wiki/Missile_Technology_Control_Regime
Technology Connections is an American YouTube channel covering the history and mechanics of consumer electronics, home appliances, and other pieces of technology, created by Alec Watson of Chicago, Illinois. Subjects of focus include transportation, HVAC, refrigeration, photography, and home audio and video, among others. The channel, which has received praise for Watson's humor and the depth and insight of his research, has amassed a large following on YouTube. == Channel == Watson registered the Technology Connections channel on YouTube in November 2014, with his first video, exploring Alexander Graham Bell's role in the history of sound reproduction, uploaded in September 2015. In the years since, Watson has released videos on Technology Connections covering other aspects of consumer audiovisual technology—home audio and video in particular—releasing a five-part documentary miniseries on the Compact Disc audio format by Sony and Philips in 2018; and the Capacitance Electronic Disc home video format by RCA between 2019 and 2020. As well as these subjects, Watson has also explored the mechanics and history of various telephony products, aspects of television broadcasting, videocassette recorders, home appliances, electrical wiring, and more. Watson often interjects his explanations with humorous and satirical asides, as well as critiques of some of the technologies he discusses. In February 2020, Watson's Technology Connections channel was briefly and erroneously demonetized for supposed violations of YouTube's Partner Program policies. The monetization was restored after the demonetization caused an uproar on social media. Reclaim the Net attributed it to a fault in Google's internal artificial intelligence. In March 2024, Watson collaborated with Gavin Free of The Slow Mo Guys to film an episode of Technology Connections detailing the mechanics of Kodak and Sylvania's jointly developed Magicube, a multiple-use, disposable consumer flash bulb. 
Watson employed Free's Phantom high-speed camera to capture and study detailed close-ups of the Magicube igniting its explosive contents to create the flash. Because of the way the Phantom camera works, Free was forced to film several shots at an extreme aspect ratio to capture images at 200,000 frames per second. === Recognition === Technology Connections has received praise from various publications for the depth and insight of Watson's research, as well as the wittiness of his scripts and breadth of his subject matter. Mark Frauenfelder, the co-owner of Boing Boing, called Watson's channel "a fantastic resource for learning about the inner workings of everyday items ... break[ing] down complex concepts into easy-to-understand explanations, providing viewers with a greater appreciation for the technology that surrounds them". Lifehacker's Michelle Ehrhardt wrote that Watson's "documentary style approach is comprehensive yet approachable, and while topics often have some bearing on what you have in your house right now, the channel has also done LGR Oddware-style breakdowns on odd trends or gadgets that aren't really around anymore". Ehrhardt called Watson "a sort of guru for home appliances", "explain[ing] the history and methodology behind common devices like air conditioners, dishwashers, and power outlets in a genuinely fun way that might also teach you a few tricks and tips that will make your life better". Adam Juniper, writing in Digital Camera World, called Watson and Free's video on the Magicube "a brilliant job of placing the different single-use flash technologies in context—historically and economically—showing how they work and then going above and beyond in explaining exactly how they work". Watson's video on the automatic Sunbeam Radiant toaster went viral in 2019, with Sean Hollister of The Verge praising it as "[possibly] the smartest thing you watch today". 
Hollister similarly praised Watson's video detailing the mechanics of the popcorn button present on most consumer microwaves. The channel has also received praise from academics. The media studies scholar Marek Jancovic called Watson's video on the famous ringer of the Western Electric Model 500 telephone—in which Watson deduces that modern feature films still use a sample of the ring derived from a sound effect LP record pressed off-center and severely warped—an example of what Jancovic calls "media epigraphy". Jancovic wrote that Watson's findings represent "impressive deductions [w]orthy of a detective novel". Dan MacIsaac, a professor of physics at SUNY Buffalo State, has praised Watson's explainers on home wiring, calling some of the concepts discussed illuminating, particularly on the details of plug design, electrical outlet orientation, North American home wiring, and the dangers of certain extension cords. MacIsaac recommended some Technology Connections videos as supplementary material for his introductory electromagnetism course. In 2023, Watson published a video on the lack of use of brake lights in some electric vehicles during regenerative braking. He demonstrated that his 2022 Hyundai Ioniq 5 could decelerate sharply to a complete stop without actuating the brake lights. The video went viral, amassing over two million views in a week, prompting a detailed report of these flaws in Consumer Reports, which in turn prompted a response from Hyundai Motor Group promising to address the issue. == Personal life == Watson is a resident of the Chicago metropolitan area and originally graduated with a degree in hotel management. He is an enthusiast of electric cars, a topic covered repeatedly on his channel, with his first electric vehicle being a Chevrolet Volt purchased in 2015 to commute to his first day job. In 2022, he upgraded to a Hyundai Ioniq 5. == References == == External links == Technology Connections's channel on YouTube
https://en.wikipedia.org/wiki/Technology_Connections
Mobile technology is the technology used for cellular communication. Mobile technology has evolved rapidly over the past few years. Since the start of this millennium, a standard mobile device has gone from being no more than a simple two-way pager to being a mobile phone, GPS navigation device, an embedded web browser and instant messaging client, and a handheld gaming console. Many experts believe that the future of computer technology rests in mobile computing with wireless networking. Mobile computing by way of tablet computers is becoming more popular. Tablets are available on the 3G and 4G networks. == Mobile communication convergence == Nikola Tesla laid the theoretical foundation for wireless communication in 1890. Guglielmo Marconi, known as the father of radio, first transmitted wireless signals two miles away in 1894. Mobile technology has brought great change to human society. The use of mobile technology in government departments can be traced back to World War I. In recent years, the integration of mobile communication technology and information technology has made mobile technology a focus of industry attention. With the integration of mobile communication and mobile computing, mobile technology has gradually matured. The mobile interaction it enables provides online connection and communication for ubiquitous computing, makes anytime, anywhere liaison and information exchange possible, creates new opportunities and challenges for mobile work, and promotes further changes in social and organizational forms. The integration of information technology and communication technology is bringing great changes to social life. Mobile technology and the Internet have become the main driving forces for the development of information and communication technologies. 
Through high-coverage mobile communication networks, high-speed wireless networks, and various types of mobile information terminals, mobile technologies have opened up a vast space for mobile interaction and have become a popular way of living and working. Given the attractiveness of mobile interaction and the rapid development of new technologies, the future scale and impact of mobile information terminals and wireless networks will be no less than that of computers and networks. The development of mobile government and mobile commerce has provided new opportunities for improving city management, raising the level and efficiency of public services, and building a more responsive, efficient, transparent, and accountable government. It also helps to bridge the digital divide and provide citizens with universal, agile service. The integration and development of information and communication technology have spurred the formation of an information society and a knowledge society, and have fostered a user-centered mode of mass, joint, and open innovation ("Innovation 2.0") that is gradually gaining the attention of the scientific community and society at large. == Mobile communication industry == 0G: An early cellular mobile phone technology that emerged in the 1970s. Although briefcase-type mobile phones had appeared, they still generally needed to be installed in a car or truck. PTT: Push to Talk MTS: Mobile Telephone System IMTS: Improved Mobile Telephone Service AMTS: Advanced Mobile Telephone System 0.5G: A group of technologies improving on basic 0G technical characteristics. 
Autotel / PALM: Autotel or PALM (Public Automated Land Mobile) ARP: Autoradiopuhelin, Car Radio Phone 1G: Refers to the first generation of wireless telephone technology, namely cellular portable wireless telephones. These analog cellular portable radiotelephone standards were introduced in the 1980s. NMT: Nordic Mobile Telephone AMPS: Advanced Mobile Phone System TACS: Total Access Communication System, the European version of AMPS JTACS: Japan Total Access Communication System 2G: Second-generation wireless telephone based on digital technology. 2G networks are only for voice communications, except that some standards can also use SMS messages as a form of data transmission. GSM: Global System for Mobile Communications iDEN: Integrated Digital Enhanced Network D-AMPS: Digital Advanced Mobile Phone System based on TDMA cdmaOne: Code Division Multiple Access defined by IS-95 PDC: Personal Digital Cellular TDMA: Time Division Multiple Access 2.5G: A set of transition technologies between 2G and 3G wireless technologies. In addition to voice, it involves digital communication technologies that support E-mail and simple Web browsing. GPRS: General Packet Radio Service WiDEN: Wideband Integrated Dispatch Enhanced Network 2.75G: Refers to technologies that, although they do not meet 3G requirements, play a 3G-like role in the market. CDMA2000 1xRTT: CDMA2000 is a TIA standard (IS-2000) evolved from cdmaOne. Compared with 3G, CDMA2000 supporting 1xRTT has lower requirements. EDGE: Enhanced Data rates for GSM Evolution 3G: Representing the third generation of wireless communication technology, it supports broadband voice, data, and multimedia communication technologies in wireless networks. W-CDMA: Wideband Code Division Multiple Access UMTS: Universal Mobile Telecommunications System FOMA: Freedom of Mobile Multimedia Access CDMA2000 1xEV: More advanced than CDMA2000, it supports 1xEV technology and can meet 3G requirements. 
TD-SCDMA: Time Division-Synchronous Code Division Multiple Access
3.5G: Technologies that go beyond comprehensive 3G wireless and mobile technologies.
HSDPA: High-Speed Downlink Packet Access
3.75G: Further extensions beyond 3.5G.
HSUPA: High-Speed Uplink Packet Access
4G: High-speed mobile wireless communications technology, designed to enable new data services and interactive TV services in mobile networks.
5G: Aims to improve upon 4G, offering lower response times (lower latency) and higher data transfer speeds.

== Mobile phone generations ==

In the early 1980s, 1G was introduced for voice-only communication via "brick phones". In 1991, 2G introduced Short Message Service (SMS) and later Multimedia Messaging Service (MMS) capabilities, allowing picture messages to be sent and received between phones. In 1998, 3G was introduced to provide faster data-transmission speeds to support video calling and internet access. 4G was released in 2008 to support more demanding services such as gaming, HD mobile TV, video conferencing, and 3D TV. 5G technology was initially released in 2019, but is still only available in certain areas.

=== 4G networking ===

4G is the current mainstream cellular service offered to cell phone users, with performance roughly 10 times faster than 3G. One of the defining features of 4G mobile networks is the dominance of high-speed packet transmission, or burst traffic, in the channels. If the same codes used in 2G-3G networks are applied to 4G networks, detecting very short bursts becomes a serious problem because of their poor partial correlation properties.

=== 5G networking ===

5G's performance goals are high data rates, reduced latency, energy savings, reduced costs, increased system capacity, and large-scale device connectivity.
5G is still a fairly new type of networking and is still being rolled out across nations. Moving forward, 5G is expected to set the standard for cellular service around the globe. Carriers such as AT&T, Verizon, and T-Mobile are among the most prominent cellular companies rolling out 5G services across the US. 5G deployment began at the start of 2020 and has been growing ever since. According to the GSM Association, by 2025 approximately 1.7 billion subscribers will have a 5G service subscription. 5G wireless signals are transmitted through large numbers of small cell stations located in places such as light poles or building roofs. In the past, 4G networking relied on large cell towers to transmit signals over long distances. With the introduction of 5G networking, small cell stations are essential because the millimeter-wave (mmWave) spectrum, the band used for the fastest 5G services, only travels over short distances. If the distances between cell stations were longer, signals could suffer interference from inclement weather or from objects such as houses, buildings, and trees. There are three main kinds of 5G: low-band, mid-band, and high-band. Low-band frequencies operate below 2 GHz, mid-band frequencies operate between 2 and 10 GHz, and high-band frequencies operate between 20 and 100 GHz. Verizon has reported speeds of over 3 Gbit/s on its high-band 5G service, which it brands "Ultra Wideband". The main advantage of 5G networks is that the data transmission rate is much higher than in previous cellular networks, up to 10 Gbit/s, which is faster than current wired Internet access and up to 100 times faster than 4G LTE. Another advantage is lower network latency (faster response time): less than 1 millisecond, compared with 30-70 milliseconds for 4G.
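The band tiers described above can be sketched as a small classification function. This is a minimal illustration; the function name and the handling of frequencies outside the stated ranges are my own assumptions, not from any standard.

```python
def classify_5g_band(freq_ghz: float) -> str:
    """Classify a carrier frequency (in GHz) into the 5G band tiers
    described above: low-band below 2 GHz, mid-band 2-10 GHz,
    high-band (mmWave) 20-100 GHz."""
    if freq_ghz < 2:
        return "low-band"
    if freq_ghz <= 10:
        return "mid-band"
    if 20 <= freq_ghz <= 100:
        return "high-band"
    return "unclassified"  # falls outside the tiers described in the text

print(classify_5g_band(0.7))   # low-band (e.g. a 700 MHz carrier)
print(classify_5g_band(3.5))   # mid-band
print(classify_5g_band(28))    # high-band (mmWave)
```

Note that the text leaves a gap between 10 and 20 GHz; the sketch simply reports such frequencies as unclassified rather than guessing a tier.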
To meet the high data volumes of high-definition video, virtual reality, and similar applications, the peak rate needs to reach the Gbit/s level. The air interface delay needs to be around 1 ms to support real-time applications such as autonomous driving and telemedicine. Large network capacity, providing connections for on the order of 100 billion devices, is needed for IoT communication. Spectrum efficiency should be 10 times higher than LTE, and with continuous wide-area coverage and high mobility the user-experienced rate should reach 100 Mbit/s; traffic density and the number of connections are greatly increased. Since 5G is a relatively new type of service, only recently released or upcoming phones support it. These include the iPhone 12 and 13; select Samsung devices such as the S21 series, Note series, Flip/Fold series, and A series; the Google Pixel 4a and 5; and a few more devices from other manufacturers. Samsung's Galaxy S20 series, released in March 2020, was among the first widely available 5G smartphone lineups. Following the S20 series, Apple integrated 5G compatibility into the iPhone 12 lineup, released in fall 2020. These 5G phones gave consumers access to speeds rapid enough for high-demand streaming and gaming. Another type of cellular device being utilized is the 5G hotspot. For people who have a device that is only Wi-Fi-capable, these 5G hotspots provide strong performance when home Wi-Fi is unavailable. Private 5G networks are also growing rapidly among businesses. 5G can help businesses keep up with the growing networking demands of newer technologies such as AI, machine learning, and AR, as well as regular operations.
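As a back-of-the-envelope illustration of the rate difference quoted above (roughly 100 Mbit/s for 4G LTE versus up to 10 Gbit/s peak for 5G), the following sketch computes download times; the 2 GB file size and the helper name are illustrative assumptions, not figures from the text.

```python
def download_seconds(file_size_gb: float, rate_gbit_s: float) -> float:
    """Seconds to transfer file_size_gb gigabytes at rate_gbit_s gigabits/s."""
    return (file_size_gb * 8) / rate_gbit_s  # gigabytes -> gigabits, divide by rate

movie_gb = 2.0  # a ~2 GB HD movie (illustrative)
print(download_seconds(movie_gb, 0.1))   # 4G at 100 Mbit/s -> 160.0 seconds
print(download_seconds(movie_gb, 10.0))  # 5G peak at 10 Gbit/s -> 1.6 seconds
```

The hundredfold rate difference translates directly into a hundredfold difference in transfer time, which is what makes high-demand streaming and gaming practical on 5G.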
As stated by Verizon, a private 5G network allows large enterprise and public-sector customers to bring a custom-tailored 5G experience to indoor or outdoor facilities where high-speed, high-capacity, low-latency connectivity is crucial. Access to such high-performing networks opens the door to many opportunities for different companies. Being able to connect vast numbers of devices to a reliable and powerful network will be crucial for companies and their technologies moving forward.

== Operating systems ==

The operating system (OS) is the program that manages all applications in a computer and is often considered the most important software. Applications make requests to the OS through an application programming interface (API), and users interact with the OS through a command line or graphical user interface, typically with a keyboard and mouse or by touch. A computer without an operating system serves no purpose, as it cannot run tasks effectively: since the OS manages the computer's hardware and software resources, without it the computer cannot coordinate communication between applications and the hardware connected to it. When someone purchases a computer, the operating system usually comes preloaded. The most common operating systems are Microsoft Windows, Apple macOS, Linux, Android, and Apple's iOS. A majority of modern operating systems use a GUI, or graphical user interface, which allows the user to perform tasks such as clicking on icons, buttons, and menus with a mouse, and displays graphics and text clearly. In 1985 Microsoft created the Windows operating system, now the most popular operating system worldwide; in most computers, Windows comes preloaded. As of October 2021, the most recent version of Windows is Windows 11; earlier versions include Windows 7, 8, and 10.
According to Medium, "Windows achieved its popularity by targeting everyday average users, who are not mainly concerned by the optimal robustness and security of their machines, but are more focused on the usability, familiarity, and availability of productivity tools." Another popular operating system is Apple's macOS. macOS and Microsoft Windows compete head-to-head, as both are widely used. Apple also offers a mobile operating system called iOS, used exclusively on iPhones, which are among the most popular phones on the market. These devices are regularly updated with new features. According to The Verge, "Many users appreciate the unique user interface with touch gestures and the ease of use that iOS offers." Mobile and desktop operating systems differ substantially, since they are developed for different users. Desktop operating systems are far more complex than mobile ones, in part because they manage more data, and the two have different user interfaces; because desktop operating systems have been around longer, they are also more established. Another significant difference is that mobile phones do not offer a desktop environment like most computers. What sets mobile device interfaces apart from computers is that they are simpler to use. Many mobile operating systems are available for smartphones, including Android, BlackBerry OS, webOS, iOS, Symbian, Windows Mobile Professional (touch screen), Windows Mobile Standard (non-touch screen), and Bada. The most popular are Apple's iOS and, the newer of the two, Android. Android, a mobile OS developed by Google, is the first completely open-source mobile OS, meaning it is free for any handset maker or mobile network to use. Since 2008, customizable OSs have allowed users to download apps such as games, GPS tools, utilities, and other software.
Users can also create their own apps and publish them, e.g. to Apple's App Store. The Palm Pre, running webOS, has Internet functionality and can support Internet-based programming languages such as Cascading Style Sheets (CSS), HTML, and JavaScript. The Research In Motion (RIM) BlackBerry is a smartphone with a multimedia player and third-party software installation. Windows Mobile Professional smartphones (Pocket PC or Windows Mobile PDA) are like personal digital assistants (PDAs) and have touchscreen capabilities, whereas Windows Mobile Standard devices lack a touch screen and instead use a trackball, touchpad, or rockers.

== Channel hogging and file sharing ==

File sharing will take a hit: a typical web surfer loads a new web page every minute or so, and at 100 kbit/s a page loads quickly, but because of changes to the management of wireless networks, users will be unable to do huge file transfers, since service providers want to reduce channel use. AT&T claimed that it would ban any of its users caught using peer-to-peer (P2P) file-sharing applications on its 3G network. It then became apparent that this would also keep users from using their iTunes programs; such users would be forced to find a Wi-Fi hotspot to download files. The limits of wireless networking will not be cured by 4G, as there are too many fundamental differences between wireless networking and other means of Internet access. If wireless vendors do not acknowledge these differences and bandwidth limits, future wireless customers will find themselves disappointed and the market may suffer setbacks.

== Mobile Internet technology ==

Mobile Internet emerged from the development of the PC Internet in the form of handheld, portable devices. The combination of mobile communication and the Internet has given users easier access to going online via mobile technologies such as smartphones, tablets, and laptops, the most popular among them.
It is a general term for activities in which the technology, platforms, business models, and applications of the Internet are combined with mobile communications technology.

=== Medical applications ===

The medical industry has started to incorporate emerging technologies such as online medical treatment, online appointments, telemedicine cooperation, and online payment into its practices. An increasing number of hospitals and clinics have implemented electronic health record (EHR) systems to manage patients' data in place of traditional paper records. Electronic health records are patients' records and information stored digitally, accessible online by authorized personnel only. From the patient's perspective, word-of-mouth evaluations of hospitals and physicians are clear at a glance on the Internet: after seeing a doctor, patients can immediately rate the doctor for everyone to see. A patient's medical big data can be stored with the electronic medical record for life. In a future Internet of Things world, all of this information could be networked: what you ate, what you did, and the calories you consumed each day could all be uploaded to the cloud, letting a doctor determine a condition more accurately based on your regular habits. In many cases, patients could choose not to seek treatment in a hospital at all and, relying on big data, resolve matters remotely. The continuous evolution of technology allows medical services and treatments to become more effective and personalized. With advancements in 3D medical technology, efficient, customizable healthcare such as tailored medicines and surgeries is becoming increasingly achievable.
Technology has been pioneering change across the world, and experts are determined to find the optimal applications of technology in the medical field to make customizable healthcare affordable, cost-efficient, and practical. Experts have begun to apply 3D technology to surgical procedures: surgeons and surgeons-in-training use 3D-printed physical simulations, built from patient data, to plan and navigate cranial surgeries.

=== M-commerce ===

Mobile e-commerce can provide users with the services, applications, information, and entertainment they need anytime, anywhere. Purchasing and using goods and services have become more convenient with the introduction of the mobile terminal, and websites have adopted various forms of mobile payment. Mobile payment platforms not only support various bank cards for online payment but also support operations from terminals such as mobile phones and telephones, meeting online consumers' demand for personalization and diversification. Due to the COVID-19 pandemic, the usage of m-commerce has skyrocketed at popular retailers such as Amazon, 7-Eleven, and other large chains. Shopping online has made many more stores accessible and convenient for customers, as long as the applications are designed to be straightforward and simple. Poor UI/UX design is a big factor in deterring customers from completing their purchases or navigating through online stores. Customers highly value their time and therefore seek practices that reduce the time spent in stores, which applies to online applications and websites as well. Many physical stores have also adopted contactless and digital payments to reduce the usual amount of face-to-face interaction, taking advantage of the convenience digital technology provides. Amazon Go is a newly implemented, highly technical store that allows customers to shop while skipping the checkout process.
Using enhanced sensing technology, Amazon Go calculates the total cost of the items selected and put into the customer's "virtual" basket. As long as customers have a form of payment linked to their Amazon account, they can leave the store without going through a checkout, because payment happens automatically upon exit. As more and more customers rely on virtual online transactions, the need for security and Internet access will become increasingly important.

=== Augmented reality (AR) ===

Augmented reality, also known as "mixed reality", uses computer technology to apply virtual information to the real world: the real environment and virtual objects are superimposed on the same screen or space in real time. Augmented reality provides information that, in general, differs from what humans can perceive directly; it displays real-world information and virtual information at the same time, with the two kinds of information complementing each other. According to data gathered by PSFK Research, customers highly value their time: 72% of customers want faster, more efficient checkout with the help of technology, and 61% want technology that helps them find their items faster. Retailers and businesses have implemented augmented reality to manage their inventory efficiently and to support more flexible schedules due to remote work. Visualizing items such as clothes, make-up, and shoes gives users a better, curated shopping experience, which can streamline checkout and reduce the time spent in the store.

== Impacts on the modern family ==

Increasing mobile technology use has changed how the modern family interacts with one another through technology.
With the rise of mobile devices, families are becoming increasingly "on-the-move" and spend less time in physical contact with one another. This trend does not mean that families no longer interact with each other; rather, interaction has evolved into a more digitized form. A study has shown that the modern family actually learns better with mobile media, and children are more willing to cooperate with their parents via a digital medium than through a more direct approach. For example, family members can share information from articles or online videos via mobile devices and thus stay connected during a busy day. Family members can also use video-chat platforms to stay in touch even when they are not physically together. This can be taken a step further with applications that offer photo sharing between family members and provide life updates through statuses and pictures; examples include Google Photos, Facebook, Instagram, and Twitter. There are also finance-management and e-book applications that provide collaboration features for family members. This matters because, even when a family is in the same household, lifestyle-related tasks are easier to manage when they are at the tips of one's fingers. As the world has become more digitalized, mobile technology has played its part in keeping up with the times. This is also evident in the many mobile applications created to increase communication between those who live in the same household and those who are far away. It is no surprise that reliance on mobile technology has increased, but being able to navigate this fast-paced change positively is what is necessary in this day and age. The future indicates that the world will only increase its dependency on technology, and as mobile companies offer upgraded devices, the appeal of staying mobile will only grow.
Forbes speaks to this, collecting predictions from nine tech experts on what the future looks like for smartphones. Forbes states, "The members of Forbes Technology Council have their finger on the pulse of upcoming technology advances, including those in the smartphone market." Forbes outlines that there will be more diverse interfaces that feel more natural and easy to use, along with increased interaction with voice assistants, making users more comfortable with assistants such as Alexa, Cortana, and other artificial intelligence. Mobile technology appears set to become ever more integrated into family members' day-to-day communication. This trend is not without controversy, however. Many parents of elementary-school-age children express concern, and sometimes disapproval, of heavy mobile technology use. Parents may feel that excessive use of such technologies distracts children from "unplugged" bonding experiences, and many express safety concerns about children using mobile media. While parents may have many concerns, they are not necessarily anti-technology; in fact, many approve of mobile technology use if their children can learn something from the session, for example through art or music tutorials on YouTube. Rikuya Hosokawa and Toshiki Katsura address this in their article "Association between mobile technology use and child adjustment in early elementary school age", in which they argue that the positive or negative effects of mobile technology depend entirely on its context and use.
The authors cite studies showing that even where increased screen time yields positive development of cognitive and academic skills, the negative effects on a child's social and psychological development can be substantially greater, ranging from reduced face-to-face interaction to disrupted sleep and behavior. In family life, this technology has had positive and negative effects in roughly equal measure. While some view these devices as having eased communication among people and families, some researchers have found otherwise. On the positive side, these devices have strengthened family units: families compensate for daily stress through text messages, phone calls, and e-mails, and Internet-enabled phones have assisted connection through social sites where family members can discuss their issues even when far apart (Alamenciak, 2012). In America, for instance, parents have adjusted to modern technology, increasing their connection with children who may be working in different states. Cell phones bring families together by increasing the quality of communication among family members living far apart. Families use cell phones to get in touch with their children via e-mail and the web (George, 2008), contacting them to learn how they are doing and to entertain them in the process. Moreover, cell phone communication brings families closer, strengthening relationships between family members. Family heads can promote values and set good examples for their children, encouraging openness and communication in case problems arise, as well as security, since family members get the opportunity to know each other well. Cell phones have also enhanced accountability, whether at work or at home.
People keep in touch with their co-workers and employees as well as their family members (Good Connection, Bad Example: Cell Phones and The Family, 2007).

== Future of smartphones ==

The next generation of smartphones will be context-aware, taking advantage of the growing availability of embedded physical sensors and data-exchange abilities. One key feature is that phones will keep track of users' personal data and adapt to anticipate the information they will need. All-new applications will arrive with the new phones, one of which is an "X-ray" device that reveals information about any location at which the phone is pointed. Companies are developing software to take advantage of more accurate location-sensing data; this has been described as making the phone a virtual mouse able to click on the real world. For example, pointing the phone's camera at a building with the live feed open would overlay text on the image of the building and save its location for future use. The future of smartphones is ever-growing, as smartphone technology is fairly new, existing only for the last two decades, with the first model released to the market in 1994 by IBM. Smartphones are now ubiquitous tools that many rely on for leisure, business, entertainment, productivity, and much more. There are currently 237 brands of smartphones with thousands of models combined, and these numbers are growing. Companies release smartphones for each use case and for different price segments. Over the past decade smartphone prices have been rising, giving a boom to the low-end and mid-range price segments, and price ceilings can be expected to rise gradually in the coming years. Smartphones are also becoming powerful computational tools in the medical industry, used both inside and outside clinics. OmniTouch is a device via which apps can be viewed and used on a hand, arm, wall, desk, or any other everyday surface.
The device uses a sensed touch interface, enabling the user to access all functions with the touch of a finger. It was developed at Carnegie Mellon University and uses a projector and camera worn on the user's shoulder, with no controls other than the user's fingers.

=== Supercomputing ===

Throughout the last decade, smartphone SoCs (systems on a chip) have rapidly gained speed, catching up to desktop-class CPUs and GPUs. Modern smartphones can perform tasks comparable to computers, with speed and efficiency; that efficiency is what drives the mobile-first society in which smartphones are ubiquitous. The computational speed of smartphone chips, measured in FLOPS, has been compared to the processing power of a rat's neocortical column. With this rapid development, smartphone SoCs may soon be powerful enough to replace computer chips in much of the consumer market, as they are cheaper and very efficient while being as powerful, if not more so.

=== 6G connectivity ===

On-the-go connectivity is more important than ever, as smartphones take over more and more tasks that once required sitting in front of a computer. 6G connectivity is expected to enable futuristic applications that are not yet possible, such as holography, virtual reality, and autonomous driving. With ten times the speed of 5G, 6G could blend virtual reality into the real world for an immersive experience. 6G has applications in almost every industry: Internet-connected devices are ubiquitous, and hyper-connectivity like 6G could provide near-latency-free communication for robust automation.

=== Smartphone metamorphosis ===

Smartphone companies try to blend form and function for optimal customer value, and some have come out with radical designs that upend the norms of phone design, such as the Samsung Galaxy Fold, a foldable phone with a bendable screen.
It was essentially a prototype when it debuted, but after three iterations, and with other companies adopting the design, it has matured. The new design brought a hike in retail price, but as competition grows, prices should follow the market. Flexible-screen technology opens up new design possibilities. Screen size has also played a big role in the smartphone industry, allowing companies to pack more technology into the body and catering to the high demand for big-screen smartphones. The first popular touchscreen smartphone, the original iPhone that Apple introduced in 2007, had a screen size of approximately 3.5 inches; that has almost doubled, to 6.7 inches, in Apple's current lineup, while other companies have even crossed 7 inches.

=== Borderless technology ===

Borderless phones lack bezels, allowing the screen to be larger. Fitting a larger screen into a limited phone size can improve one-handed operability, aesthetics, and a sense of advanced technology. However, the technical problems borderless designs face (light leakage on the screen, accidental touches on the edges, and more fragile exposed screens) have been obstacles to the popularization of this technology.

=== Transparent phone ===

A transparent phone is a mobile phone that uses switchable glass to achieve a see-through effect, making its body appear transparent. Transparent mobile phones use special switchable glass technology: once the electrically controlled glass is activated by a current through a transparent wire, its molecules rearrange to form text, icons, and other images.

=== Chip phone ===

The idea is that a cell phone could be made directly at the chip level and implanted in the body, serving as a brain-assisting tool to help improve work efficiency and sensory experience.
== Mobile technology classification ==

Mobile technology, driven by the convergence of mobile communication technology and mobile computing technology, mainly includes four types of technologies:
radio-based: two-way radio communication (professional or public mobile radio) or broadcast
cellular phone service-based: SMS (Short Message Service), WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), UMTS (3G, third-generation mobile communication networks)
mobile device-based: laptops, tablets, PDAs (personal digital assistants), pagers, Bluetooth technology, RFID (radio-frequency identification), and GPS (Global Positioning System)
network-based: Wi-Fi, or the WAPI wireless LAN standard developed in China

== References ==
Electrothermal-chemical (ETC) technology is an attempt to increase the accuracy and muzzle energy of future tank, artillery, and close-in weapon system guns by improving the predictability and rate of expansion of propellants inside the barrel. An electrothermal-chemical gun uses a plasma cartridge to ignite and control the ammunition's propellant, using electrical energy to trigger the process. ETC increases the performance of conventional solid propellants, reduces the effect of temperature on propellant expansion, and allows more advanced, higher-density propellants to be used. The technology has been under development since the mid-1980s and in 1993 was actively being researched in the United States by the Army Research Laboratory, Sandia National Laboratories, and defense industry contractors, including FMC Corporation, General Dynamics Land Systems, Olin Ordnance, and Soreq Nuclear Research Center. It is possible that electrothermal-chemical gun propulsion will be an integral part of the US Army's future combat system and those of other countries such as Germany and the United Kingdom. Electrothermal-chemical technology is part of a broad research and development program that encompasses all electric gun technology, such as railguns and coilguns. == Background == The constant battle between armour and armour-piercing rounds has led to continuous development of the main battle tank design. The evolution of American anti-tank weapons can be traced back to requirements to combat Soviet tanks. In the late 1980s, it was thought that the protection level of the Future Soviet Tank (FST) could exceed 700 mm of rolled homogeneous armour equivalence at its maximum thickness, which would make it effectively immune to the contemporary M829 armour-piercing fin-stabilized discarding-sabot round.
In the 1980s the most immediate method available to NATO to counter Soviet advances in armour technology was the adoption of a 140 mm main gun, but this required a redesigned turret that could incorporate the larger breech and ammunition, and it also required some form of automatic loader. Although the 140 mm gun was considered a viable interim solution, it was decided after the fall of the Soviet Union that the increase in muzzle energy it provided was not worth the increase in weight. Resources were therefore spent on research into other programs that could provide the needed muzzle energy. One of the most successful alternative technologies remains electrothermal-chemical ignition. Most proposed advances in gun technology are based on the assumption that the solid propellant, as a stand-alone propulsion system, is no longer capable of delivering the required muzzle energy. This requirement has been underscored by the appearance of the Russian T-90 main battle tank. The elongation of current gun tubes, such as the new German 120 mm L/55 introduced by Rheinmetall, is considered only an interim solution, as it does not offer the required increase in muzzle velocity. Even advanced kinetic energy ammunition such as the United States' M829A3 is considered only an interim solution against future threats. To that extent the solid propellant is considered to have reached the end of its usefulness, although it will remain the principal propulsion method for at least the next decade until newer technologies mature. ETC technology offers a medium-risk upgrade and is developed to the point that further improvements are so minor that it can be considered mature. The lightweight American 120 mm XM291 came close to achieving 17 MJ of muzzle energy, which is at the lower end of the muzzle energy spectrum for a 140 mm gun.
However, the success of the XM291 does not imply the success of ETC technology, as there are key parts of the propulsion system that are not yet understood or fully developed, such as the plasma ignition process. Nevertheless, there is substantial existing evidence that ETC technology is viable and worth the money to continue development. Furthermore, it can be integrated into current gun systems. == Operational principle == An electrothermal-chemical gun uses a plasma cartridge to ignite and control the ammunition's propellant, using electrical energy as a catalyst to begin the process. Originally researched by Dr. Jon Parmentola for the U.S. Army, it has grown into a very plausible successor to the standard solid propellant tank gun. Since the beginning of research, the United States has funded the XM291 gun project with US$4,000,000, basic research with US$300,000, and applied research with US$600,000. Since then it has been proven to work, although efficiency to the level required has not yet been accomplished. ETC increases the performance of conventional solid propellants, reduces the effect of temperature on propellant expansion and allows more advanced, higher-density propellants to be used. It will also reduce the pressure placed on the barrel in comparison to alternative technologies that offer the same muzzle energy, because it spreads the propellant's gas much more smoothly during ignition. Currently, there are two principal methods of plasma initiation: the flashboard large area emitter (FLARE) and the triple coaxial plasma igniter (TCPI). === Flashboard large area emitter === Flashboards run in several parallel strings to provide a large area of plasma or ultraviolet radiation and use the breakdown and vaporization of gaps of diamonds to produce the required plasma. These parallel strings are mounted in tubes and oriented to have their gaps azimuthal to the tube's axis. The emitter discharges by using high-pressure air to move air out of the way.
FLARE initiators can ignite propellants through the release of plasma, or even through the use of ultraviolet heat radiation. The absorption length of a solid propellant is sufficient for it to be ignited by radiation from a plasma source. However, FLARE has most likely not reached optimal design requirements, and further understanding of FLARE and how it works is necessary to ensure the evolution of the technology. Given that FLARE provided the XM291 gun project with sufficient radiative heat to ignite the propellant and achieve a muzzle energy of 17 MJ, a fully developed FLARE plasma igniter promises considerably more. Current areas of study include how plasma will affect the propellant through radiation, through the direct delivery of mechanical energy and heat, and by driving gas flow. Despite these open questions, FLARE is seen as the most plausible igniter for future application on ETC guns. === Triple coaxial plasma igniter === A coaxial igniter consists of a fully insulated conductor covered by four strips of aluminium foil. All of this is further insulated in a tube, about 1.6 cm in diameter, that is perforated with small holes. The idea is to pass an electrical current through the conductor, exploding the foil into vapour and then breaking it down into plasma. The plasma then escapes through the perforations in the insulating tube and initiates the surrounding propellant. A TCPI igniter is fitted in an individual propellant case for each round of ammunition. However, TCPI is no longer considered a viable method of propellant ignition because it may damage the projectile's fins and does not deliver energy as efficiently as a FLARE igniter. == Feasibility == The XM291 is the best existing example of a working electrothermal-chemical gun. It was an alternative technology to the heavier-caliber 140 mm gun, using a dual-caliber approach.
It uses a breech that is large enough to accept 140 mm ammunition and can be mounted with both a 120 mm barrel and a 135 mm or 140 mm barrel. The XM291 also mounts a larger gun tube and a larger ignition chamber than the existing M256 L/44 main gun. Through the application of electrothermal-chemical technology the XM291 has been able to achieve muzzle energy outputs equating to those of a low-level 140 mm gun, while achieving muzzle velocities greater than those of the larger 140 mm gun. Although the XM291's performance does not prove that ETC technology is viable, it does offer an example that it is possible. ETC requires much less energy input from outside sources, such as a battery, than a railgun or a coilgun would. Tests have shown that on ETC guns the energy output by the propellant is higher than the energy input from outside sources. In comparison, a railgun currently cannot deliver more muzzle energy than the electrical energy put into it. Even at 50% efficiency, a railgun launching a projectile with a kinetic energy of 20 MJ would require an energy input into the rails of 40 MJ, and 50% efficiency has not yet been achieved. To put this into perspective, a railgun launching at 9 MJ of energy would need roughly 32 MJ worth of energy from capacitors. Current advances in energy storage allow for energy densities as high as 2.5 MJ/dm3, which means that a battery delivering 32 MJ of energy would require a volume of 12.8 dm3 per shot; this is not a viable volume for use in a modern main battle tank, especially one designed to be lighter than existing models. There has even been discussion of eliminating the need for an outside electrical source in ETC ignition by initiating the plasma cartridge through a small explosive force. Furthermore, ETC technology is not only applicable to solid propellants. To increase muzzle velocity even further, electrothermal-chemical ignition can work with liquid propellants, although this would require further research into plasma ignition.
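The railgun comparison above rests on simple arithmetic, which can be made explicit. The following sketch is illustrative only: the function names are invented for clarity, and the figures (50% efficiency, 2.5 MJ/dm3 storage density, the 9 MJ and 32 MJ shot) are the ones quoted in the text.

```python
# Illustrative check of the railgun energy-budget figures quoted above.
# Function names are invented; the numbers come from the text.

def required_input_energy(muzzle_energy_mj, efficiency):
    """Electrical energy that must be fed into the rails (MJ)."""
    return muzzle_energy_mj / efficiency

def storage_volume_dm3(input_energy_mj, density_mj_per_dm3=2.5):
    """Capacitor volume needed to store the input energy (dm^3)."""
    return input_energy_mj / density_mj_per_dm3

# A 20 MJ shot at the (not yet achieved) 50% efficiency needs 40 MJ in:
print(required_input_energy(20, 0.50))      # 40.0

# The quoted 9 MJ shot drawing 32 MJ implies roughly 28% efficiency:
print(9 / 32)                               # 0.28125

# Storing 32 MJ at 2.5 MJ/dm^3 takes 12.8 dm^3 of capacitors per shot:
print(storage_volume_dm3(32))               # 12.8
```

The contrast with ETC follows from the text: since most of an ETC gun's muzzle energy still comes from the chemical propellant, the electrical input only has to trigger ignition, so no comparably large storage volume is needed.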
ETC technology is also compatible with existing projects to reduce the amount of recoil delivered to the vehicle while firing. Understandably, the recoil of a gun firing a projectile at 17 MJ or more will increase directly with the increase in muzzle energy, in accordance with Newton's third law of motion, and successful implementation of recoil reduction mechanisms will be vital to the installation of an ETC-powered gun in an existing vehicle design. For example, OTO Melara's new lightweight 120 mm L/45 gun has achieved a recoil force of 25 t by using a longer recoil mechanism (550 mm) and a pepperpot muzzle brake. Reduction in recoil can also be achieved through mass attenuation of the thermal sleeve. Because ETC technology can be applied to existing gun designs, future gun upgrades no longer require redesigning the turret to accommodate a larger breech or a larger-caliber gun barrel. Several countries have already determined that ETC technology is viable for the future and have funded indigenous projects considerably. These include the United States, Germany and the United Kingdom, amongst others. The United States' XM360, which was planned to equip the Future Combat Systems Mounted Combat System light tank and may be the M1 Abrams' next gun upgrade, is reportedly based on the XM291 and may include ETC technology, or portions of it. Tests of this gun have been performed using "precision ignition" technology, which may refer to ETC ignition. == Notes == == Bibliography == == External links == Electromagnetic Launch Symposium http://www.powerlabs.org/electrothermal.htm
https://en.wikipedia.org/wiki/Electrothermal-chemical_technology
Low technology (low tech; adjective forms: low-technology, low-tech, lo-tech) is simple technology, as opposed to high technology. In addition, low tech is related to the concept of mid-tech, that is, a balance between low-tech and high-tech, which combines the efficiency and versatility of high tech with low tech's potential for autonomy and resilience. == History == === Historical origin === Primitive technologies such as bushcraft and tools that use wood, stone, wool, etc. can be seen as low-tech, as can pre–Industrial Revolution machines such as windmills or sailboats. === In the 1970s === The economic boom after the Vietnam War resulted in doubts about progress, technology and growth at the beginning of the 1970s, notably through the report The Limits to Growth (1972). Many have sought to define what soft technologies are, leading to a "low-tech movement". Such technologies have been described as "intermediate" (E. F. Schumacher), "liberating" (M. Bookchin), or even democratic. Thus, a philosophy advocating the widespread use of soft technologies was developed in the United States, and many studies were carried out in those years, in particular by researchers such as Langdon Winner. === 2000s and later === "Low-tech" has been employed more and more in scientific writing, in particular in analyses of the work of some authors of the 1970s: see for example Hirsch-Kreinsen, the book "High tech, low tech, no tech", or Gordon. More recently, the perspective of resource scarcity, especially of minerals, has led to increasingly severe criticism of high tech and of technology in general. In 2014, the French engineer Philippe Bihouix published "L'âge des low tech" (The Age of Low Tech), in which he presents how a European nation like France, with few mineral and energy resources, could become a "low-tech" nation (instead of a "start-up" nation) to better meet its sustainable development goals.
He cites various examples of low-tech initiatives and describes the low-tech philosophy and principles. === Recently === Numerous new definitions have come to supplement or qualify the term "low-tech", intended to be more precise because they are restricted to a particular characteristic:
retro-tech: more oriented toward old but clever inventions (not necessarily useful, durable and accessible); parallels can nevertheless be drawn with low-tech, because these innovations are often decentralized and simpler technologies, being manufactured by individuals.
wild-tech: beyond the high-tech/low-tech opposition, it intends to provide "tools to better think these ways of manufacturing which escape any classification"; the unclassifiable techs. Can also be linked to "tech rebel", a movement whose goal is to hack and re-appropriate any type of technology.
small-tech: opposed to "Big Tech", which includes the GAFAM. It thus refers to digital questions, "in the perspective of maintaining a high level of technological complexity but on the basis of the notions of commons, collaborative work and the principles of democracy and social justice".
(s)lowtech, or slow-tech: uses the play on words (s)low/slow. Aims at "exploring the drawbacks of technology and its effects on human health and development". Also designates a movement aimed at reducing addiction to technology, especially among the young. Its closest link with the definition of low-tech is that it is restricted to technologies (of all kinds) that promote a slow lifestyle.
easy-tech: technology that is easy to implement and use, and accessible to all; at the heart of the commonly accepted definition of low-tech.
no-tech: promotes a lifestyle avoiding the use of technology where possible. It joins some technocritical writings on the negative and time-consuming aspects of most "modern" technologies. See for example No Tech Magazine.
Lo-Tek (or LoTek): name introduced by Julia Watson for her book "The Power of Lo—TEK: A Global Exploration of Nature-Based Technology". The author brings together multigenerational knowledge and practices to "counter the idea that aboriginal innovation is primitive and exists isolated from technology". TEK is the acronym for "Traditional Ecological Knowledge". == Many definitions == === Binary definition === According to the Cambridge International Dictionary of English, the concept of low-tech is simply defined as a technique that is not recent, or that uses old materials. Companies that are considered low-tech have a simple operation. The less sophisticated an object, the more low-tech it is. This definition does not take into account the ecological or social aspect, as it is based on a simplistic reading of the low-tech philosophy. Low-tech would then be seen as a "step backwards", and not as a possible innovation. Also, with this definition, the "high-tech" of a certain era (e.g. the telegraph) becomes the "low-tech" of the one after (e.g. once compared with the telephone). === Technocritics === Low-tech is sometimes described as an "anti-high-tech" movement, as a deliberate renunciation of complicated and expensive technology. This kind of protest movement criticizes any disproportionate technology: a comparison with the neo-Luddite or technocritical movements that have appeared since the Industrial Revolution is then possible. This critical part of the low-tech movement can be called "no-tech". === Recently: a wider and more balanced approach === A second, more nuanced definition of low-tech has emerged. This definition takes into account the philosophical, environmental and social aspects. Low-tech is no longer restricted to old techniques, but also extends to new, future-oriented techniques that are more ecological and intended to recreate social bonds. A low-tech innovation is then possible.
Contrary to the first definition, this one is much more optimistic and has a positive connotation. It opposes the planned obsolescence of (often "high-tech") objects and questions the consumer society, as well as the materialist principles underlying it. With this definition, the concept of low-tech implies that anyone could make objects using their intelligence, and share their know-how to popularize their creations. A low-tech must therefore be accessible to all, and could therefore help reduce inequalities. Furthermore, some restrict the definition of low-tech to meeting basic needs (eating, drinking, housing, heating ...), which disqualifies many technologies, but this restriction is not always accepted. Finally, considering that the definition of low-tech is relative, some prefer to use "lower tech", to emphasize greater frugality compared to high-tech, without claiming to be perfectly "low". == Examples == === From traditional practices (primary and secondary sectors) === Note: almost all of the entries in this section should be prefixed by the word traditional.
weaving produced on non-automated looms, and basketry.
hand wood-working, joinery, coopering, and carpentry.
the trade of the shipwright.
the trade of the wheelwright.
the trade of the wainwright: making wagons. (The Latin word for a two-wheeled wagon is carpentum, the maker of which was a carpenter.) (Wright is the agent form of the word wrought, which itself is the original past passive participle of the word work, now superseded by the weak forms worker and worked respectively.)
blacksmithing and the various related smithing and metal-crafts.
folk music played on acoustic instruments.
mathematics (particularly, pure mathematics).
organic farming and animal husbandry (i.e. agriculture as practiced by all American farmers prior to World War II).
milling, in the sense of operating hand-constructed equipment with the intent either to grind grain or to reduce timber to lumber as practiced in a sawmill.
fulling, felting, drop-spindle spinning, hand knitting, crochet, and similar textile preparation.
the production of charcoal by the collier, for use in home heating, foundry operations, smelting, the various smithing trades, and for brushing one's teeth as in Colonial America.
glass-blowing.
various subskills of food preservation: smoking, salting, pickling, and drying. Note: home canning is a counterexample of a low technology, since some of the supplies needed to pursue this skill rely on a global trade network and an existing manufacturing infrastructure.
the production of various alcoholic beverages: wine (poorly preserved fruit juice), beer (a way to preserve the calories of grain products from decay), and whiskey (an improved, distilled form of beer).
flint-knapping.
masonry as used in castles, cathedrals, and root cellars.
=== Domestic or consumer === A (non-exhaustive) list of low-tech in a Westerner's everyday life:
Getting around by bike, and repairing it with second-hand materials
Using a cargo bike to carry loads (rather than a gasoline vehicle)
Drying clothes on a clothesline or on a drying rack
Washing clothes by hand, or in a human-powered washing machine
Cooling one's home with a fan or an air expander (rather than electrical appliances such as air conditioners)
Using a bell as a doorbell
A cellar, "desert fridge", or icebox (rather than a fridge or freezer)
Long-distance travel by sailing boat (rather than by plane)
A wicker bag or a tote bag (rather than a plastic bag) to carry things
A Swedish lighter (rather than a disposable lighter or matches)
A hand drill, instead of an electric one
Lighting with sunlight or candles
Hemp textiles
Watering plants with drip irrigation
Paper sheets for note-taking
Cleaning with a broom (rather than a vacuum cleaner)
Finding one's way with map and compass (rather than by GPS)
== Philosophy == Among the thinkers opposed to modern technologies are Jacques Ellul (The Technological Society, 1954; The Technological Bluff, 1988), Lewis Mumford and E. F. Schumacher. In the second volume of his book The Myth of the Machine (1970), Lewis Mumford develops the notion of "biotechnology" to designate "bioviable" techniques that would be considered ecologically responsible, i.e. which establish a homeostatic relationship between resources and needs. In his famous Small Is Beautiful (1973), Schumacher uses the concept of "intermediate technology", which corresponds fairly precisely to what "low tech" means. He also created the Intermediate Technology Development Group. == Legal status of low-technology == By federal law in the United States, only those articles produced with little or no use of machinery or tools with complex mechanisms may be stamped with the designation "hand-wrought" or "hand-made". Lengthy court battles are currently underway over the precise definition of the terms "organic" and "natural" as applied to foodstuffs. == Groups associated with low-technology ==
The Arts and Crafts Movement, popularized by Gustav Stickley in America around 1900.
The Bauhaus movement of Germany around the same time.
The Do-It-Yourself phenomenon arising in America following World War II.
The back-to-the-land movement beginning in America during the 1960s.
Hippies.
Luddites, whose activities date to the very beginning of the Industrial Revolution.
Living history and open-air museums around the world, which strive to recreate bygone societies.
Simple living adherents, such as the Amish and, to a lesser extent, some sects of the Mennonites, who specifically refuse some newer technologies to avoid undesirable effects on themselves or their societies.
Survivalists are often proponents, since low technology is inherently more robust than its high-technology counterpart.
== See also ==
Obsolescence
Do it yourself
Anti-consumerism
Degrowth
Simple living
Embodied energy
Intermediate technology – sometimes used to mean technology between low and high technology
Pre-industrial society
== Sources ==
Falk, William W.; Lyson, Thomas A. (1988). High Tech, Low Tech, No Tech: Recent Industrial and Occupational Change in the South. SUNY Press. ISBN 978-0-88706-729-7.
De Decker, Kris (2012). Low-tech Magazine (vols. 1 and 2). Low-tech Magazine. ISBN 978-1-79471-152-5.
Watson, Julia (2020). Lo—TEK: Design by Radical Indigenism. Taschen. ISBN 978-3-8365-7818-9.
Ginn, Peter (2019). Slow Tech: The Perfect Antidote to Today's Digital World. Haynes UK. ISBN 978-1-78521-616-9.
== References == General: Merriam-Webster dictionary == External links == Low-Tech Magazine – Doubts on progress and technology Low-tech Lab (English version)
https://en.wikipedia.org/wiki/Low_technology
Science, technology, engineering, and mathematics (STEM) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by the National Science Foundation (NSF), the Department of Labor's O*Net online database for job seekers, and the Department of Homeland Security. In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (humanities, arts, and social sciences), rebranded in 2020 as SHAPE (social sciences, humanities and the arts for people and the economy). Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. == Terminology == === History === In the early 1990s the acronym STEM was used by a variety of educators. Beverly Schwartz developed a STEM mentoring program in the Capital District of New York State, and was using the acronym as early as November 1991. Charles E. Vela was the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE) and started a summer program for talented under-represented students in the Washington, D.C. area called the STEM Institute.
Based on the program's recognized success and his expertise in STEM education, Charles Vela was asked to serve on numerous NSF and congressional panels in science, mathematics, and engineering education. The NSF had previously referred to the fields as SMET, and it was through this manner that the NSF was first introduced to the acronym STEM. One of the first NSF projects to use the acronym was STEMTEC, the Science, Technology, Engineering, and Math Teacher Education Collaborative at the University of Massachusetts Amherst, which was founded in 1998. In 2001, at the urging of Dr. Peter Faletra, the Director of Workforce Development for Teachers and Scientists at the Office of Science, the acronym was adopted by Rita Colwell and other science administrators in the National Science Foundation (NSF). The Office of Science was also an early adopter of the STEM acronym. === Other variations ===
eSTEM (environmental STEM)
GEMS (girls in engineering, math, and science); used for programs to encourage women to enter these fields
MINT (mathematics, informatics, natural sciences, and technology)
SHTEAM (science, humanities, technology, engineering, arts, and mathematics)
SMET (science, mathematics, engineering, and technology); the previous name
STEAM (science, technology, engineering, arts, and mathematics)
STEAM (science, technology, engineering, agriculture, and mathematics); adds agriculture
STEAM (science, technology, engineering, and applied mathematics); has more focus on applied mathematics
STEEM (science, technology, engineering, economics, and mathematics); adds economics as a field
STEMIE (science, technology, engineering, mathematics, invention, and entrepreneurship); adds inventing and entrepreneurship as a means to apply STEM to real-world problem-solving and markets
STEMM (science, technology, engineering, mathematics, and medicine)
STM (scientific, technical, and mathematics; or science, technology, and medicine)
STREAM (science, technology, robotics, engineering, arts, and mathematics); adds robotics and arts as fields
STREAM (science, technology, reading, engineering, arts, and mathematics); adds reading and arts
STREAM (science, technology, recreation, engineering, arts, and mathematics); adds recreation and arts
== Geographic distribution == By the mid-2000s, China had surpassed the United States in the number of PhDs awarded and is expected to produce 77,000 PhDs in 2025, compared to 40,000 in the US. == By country == === Australia === The Australian Curriculum, Assessment, and Reporting Authority's 2015 report, entitled National STEM School Education Strategy, stated that "A renewed national focus on STEM in school education is critical to ensuring that all young Australians are equipped with the necessary STEM skills and knowledge that they need to succeed." Its goals were to: "Ensure all students finish school with strong foundational knowledge in STEM and related skills" and "Ensure that students are inspired to take on more challenging STEM subjects". Events and programs meant to help develop STEM in Australian schools include the Victorian Model Solar Vehicle Challenge, the Maths Challenge (Australian Mathematics Trust), Go Girl Go Global and the Australian Informatics Olympiad. === Canada === Canada ranks 12th out of 16 peer countries in the percentage of its graduates who studied in STEM programs, with 21.2%, a number higher than in the United States but lower than in France, Germany, and Austria. The peer country with the greatest proportion of STEM graduates, Finland, has over 30% of its university graduates coming from science, mathematics, computer science, and engineering programs. SHAD is an annual Canadian summer enrichment program for high-achieving high school students in July.
The program focuses on academic learning, particularly in STEAM fields. Scouts Canada has taken similar measures to its American counterpart to promote STEM fields to youth. Its STEM program began in 2015. In 2011 Canadian entrepreneur and philanthropist Seymour Schulich established the Schulich Leader Scholarships, $100 million in $60,000 scholarships for students beginning their university education in a STEM program at 20 institutions across Canada. Each year 40 Canadian students would be selected to receive the award, two at each institution, with the goal of attracting gifted youth into the STEM fields. The program also supplies STEM scholarships to five participating universities in Israel. === China === To promote STEM in China, the Chinese government issued a guideline in 2016 on national innovation-driven development strategy, "instructing that by 2020, China should become an innovative country; by 2030, it should be at the forefront of innovative countries; and by 2050, it should become a technology innovation power." "[I]n May 2018, the launching ceremony and press conference for the 2029 Action Plan for China's STEM Education was held in Beijing, China. This plan aims to allow as many students to benefit from STEM education as possible and equip all students with scientific thinking and the ability to innovate." "In response to encouraging policies by the government, schools in both public and private sectors around the country have begun to carry out STEM education programs." "However, to effectively implement STEM curricula, full-time teachers specializing in STEM education and relevant content to be taught are needed." Currently, "China lacks qualified STEM teachers and a training system is yet to be established." Several Chinese cities have made programming a mandatory subject for elementary and middle school students. This is the case in the city of Chongqing.
However, most students from small and medium-sized cities have not been exposed to the concept of STEM until they enter college. === Europe === Several European projects have promoted STEM education and careers in Europe. For instance, Scientix is a European cooperation of STEM teachers, education scientists, and policymakers. The SciChallenge project used a social media contest and student-generated content to increase the motivation of pre-university students for STEM education and careers. The Erasmus programme project AutoSTEM used automata to introduce STEM subjects to very young children. ==== Finland ==== The LUMA Center is the leading advocate for STEM-oriented education. Its aim is to promote the instruction and research of natural sciences, mathematics, computer science, and technology across all educational levels in the country. In the native tongue, luma stands for "luonnontieteellis-matemaattinen" (lit. adj. "scientific-mathematical"). The abbreviation is more or less a direct translation of STEM, with engineering fields included by association. However, unlike STEM, the term is also a portmanteau of lu and ma. To address the decline in interest in learning the areas of science, the Finnish National Board of Education launched the LUMA scientific education development program. The project's main goal was to raise the level of Finnish education and to enhance students' competencies, improve educational practices, and foster interest in science. The initiative led to the establishment of 13 LUMA centers at universities across Finland, supervised by the LUMA Center. ==== France ==== The name of STEM in France is industrial engineering sciences (sciences industrielles or sciences de l'ingénieur). The STEM organization in France is the association UPSTI. === Hong Kong === STEM education was not promoted in local schools in Hong Kong until recent years.
In November 2015, the Education Bureau of Hong Kong released a document titled Promotion of STEM Education, which proposes strategies and recommendations for promoting STEM education. === India === India is next only to China in STEM graduates, with a graduate-to-population ratio of 1 to 52. The total number of fresh STEM graduates was 2.6 million in 2016. STEM graduates have been contributing to the Indian economy with well-paid salaries locally and abroad for the past two decades. The turnaround of the Indian economy, with comfortable foreign exchange reserves, is mainly attributed to the skills of its STEM graduates. In India, women make up an impressive 43% of STEM graduates, the highest percentage worldwide. However, they hold only 14% of STEM-related jobs. Additionally, among the 280,000 scientists and engineers working in research and development institutes in the country, women represent a mere 14%. In India, OMOTEC provides an innovative curriculum based on STEM, and its students develop products to solve new-age problems. Two students also won the Microsoft Imagine Cup for developing a non-invasive method to screen for skin cancer using artificial intelligence. === Nigeria === In Nigeria, the Association of Professional Women Engineers of Nigeria (APWEN) has involved girls between the ages of 12 and 19 in science-based courses so that they pursue science-based courses in higher institutions of learning. The National Science Foundation (NSF) in Nigeria has made conscious efforts to encourage girls to innovate, invent, and build through the "invent it, build it" program sponsored by NNPC. === Pakistan === STEM subjects are taught in Pakistan as part of electives taken in the 9th and 10th grades, culminating in Matriculation exams. These electives are pure sciences (Physics, Chemistry, Biology), mathematics (Physics, Chemistry, Maths), and computer science (Physics, Chemistry, Computer Science).
STEM subjects are also offered as electives taken in the 11th and 12th grades, more commonly referred to as first and second year, culminating in Intermediate exams. These electives are FSc pre-medical (Physics, Chemistry, Biology), FSc pre-engineering (Physics, Chemistry, Maths), and ICS (Physics/Statistics, Computer Science, Maths). These electives are intended to aid students in pursuing STEM-related careers in the future by preparing them for the study of these courses at university. A STEM education project has been approved by the government to establish STEM labs in public schools. The Ministry of Information Technology and Telecommunication has collaborated with Google to launch Pakistan's first grassroots-level Coding Skills Development Program, based on Google's CS First Program, a global initiative aimed at developing coding skills in children. The program aims to develop applied coding skills using gamification techniques for children between the ages of 9 and 14. The KPITB's Early Age Programming initiative, established in the province of Khyber Pakhtunkhwa, has been successfully introduced in 225 elementary and secondary schools. Many private organizations are working in Pakistan to introduce STEM education in schools. === Philippines === In the Philippines, STEM is a two-year program and strand that is used for Senior High School (Grades 11 and 12), assigned by the Department of Education or DepEd. The STEM strand is under the Academic Track, which also includes other strands like ABM, HUMSS, and GAS. The purpose of the STEM strand is to educate students in the field of science, technology, engineering, and mathematics, in an interdisciplinary and applied approach, and to give students advanced knowledge and application in the field. After completing the program, the students will earn a Diploma in Science, Technology, Engineering, and Mathematics.
Some colleges and universities require students applying for STEM degrees (such as medicine, engineering, and computer studies) to be graduates of the STEM strand; those who are not must complete a bridging program. === Qatar === In Qatar, AL-Bairaq is an outreach program for high-school students with a curriculum that focuses on STEM, run by the Center for Advanced Materials (CAM) at Qatar University. Each year around 946 students, from about 40 high schools, participate in AL-Bairaq competitions. AL-Bairaq makes use of project-based learning, encourages students to solve authentic problems, and requires them to work with each other as a team to build real solutions. Research has so far shown positive results for the program. === Singapore === STEM is part of the Applied Learning Programme (ALP) that the Singapore Ministry of Education (MOE) has been promoting since 2013, and currently all secondary schools have such a program. It is expected that by 2023, all primary schools in Singapore will have an ALP. There are no tests or exams for ALPs. The emphasis is for students to learn through experimentation – they try, fail, try, learn from it, and try again. The MOE actively supports schools with ALPs to further enhance and strengthen their capabilities and programs that nurture innovation and creativity. The Singapore Science Centre established a STEM unit in January 2014, dedicated to igniting students' passion for STEM. To further enrich students' learning experiences, its Industrial Partnership Programme (IPP) creates opportunities for students to get early exposure to real-world STEM industries and careers. Curriculum specialists and STEM educators from the Science Centre work hand-in-hand with teachers to co-develop STEM lessons, provide training to teachers, and co-teach such lessons to provide students with early exposure and develop their interest in STEM. 
=== Thailand === In 2017, Thai Education Minister Teerakiat Jareonsettasin said after the 49th Southeast Asia Ministers of Education Organisation (SEAMEO) Council Conference in Jakarta that the meeting approved the establishment of two new SEAMEO regional centers in Thailand. One would be the STEM Education Centre, while the other would be a Sufficient Economy Learning Centre. Teerakiat said that the Thai government had already allocated Bt250 million over five years for the new STEM center. The center will be the regional institution responsible for STEM education promotion. It will not only set up policies to improve STEM education, but will also be the center for information and experience sharing among the member countries and education experts. According to him, "This is the first SEAMEO regional center for STEM education, as the existing science education center in Malaysia only focuses on the academic perspective. Our STEM education center will also prioritize the implementation and adaptation of science and technology." The Institute for the Promotion of Teaching Science and Technology has initiated a STEM Education Network. Its goals are to promote integrated learning activities, improve student creativity and application of knowledge, and establish a network of organizations and personnel for the promotion of STEM education in the country. === Turkey === The Turkish STEM Education Task Force (or FeTeMM—Fen Bilimleri, Teknoloji, Mühendislik ve Matematik) is a coalition of academics and teachers who work to increase the quality of education in STEM fields rather than focusing on increasing the number of STEM graduates. === United States === In the United States, the acronym began to be used in education and immigration debates in initiatives to address the perceived lack of qualified candidates for high-tech jobs. It also addresses the concern that the subjects are often taught in isolation, instead of as an integrated curriculum. 
Maintaining a citizenry that is well-versed in the STEM fields is a key portion of the public education agenda of the United States. The acronym has been widely used in the immigration debate regarding access to United States work visas for immigrants who are skilled in these fields. It has also become commonplace in education discussions as a reference to the shortage of skilled workers and inadequate education in these areas. The term tends not to refer to the non-professional and less visible sectors of the fields, such as electronics assembly line work. ==== National Science Foundation ==== Many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field. The NSF uses a broad definition of STEM subjects that includes subjects in the fields of chemistry, computer and information technology science, engineering, geoscience, life sciences, mathematical sciences, physics and astronomy, social sciences (anthropology, economics, psychology, and sociology), and STEM education and learning research. The NSF is the only American federal agency whose mission includes support for all fields of fundamental science and engineering, except for medical sciences. Its disciplinary program areas include scholarships, grants, and fellowships in fields such as biological sciences, computer and information science and engineering, education and human resources, engineering, environmental research and education, geoscience, international science and engineering, mathematical and physical sciences, social, behavioral and economic sciences, cyberinfrastructure, and polar programs. ==== Immigration policy ==== Although many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field, the United States Department of Homeland Security (DHS) has its own functional definition used for immigration policy. 
In 2012, DHS's Immigration and Customs Enforcement (ICE) announced an expanded list of STEM-designated degree programs that qualify eligible graduates on student visas for an optional practical training (OPT) extension. Under the OPT program, international students who graduate from colleges and universities in the United States can stay in the country and receive up to twelve months of training through work experience. Students who graduate from a designated STEM degree program can stay for an additional seventeen months on an OPT STEM extension. As of 2023, the U.S. faces a shortage of high-skilled workers in STEM, and foreign talent must navigate difficult hurdles to immigrate. Meanwhile, some other countries, such as Australia, Canada, and the United Kingdom, have introduced programs to attract talent at the expense of the United States. In the case of China, the United States risks losing its edge over a strategic rival. ==== Education ==== By cultivating an interest in the natural and social sciences in preschool or immediately following school entry, the chances of STEM success in high school can be greatly improved. STEM supports broadening the study of engineering within each of the other subjects and beginning engineering at younger grades, even elementary school. It also brings STEM education to all students rather than only to those in gifted programs. In his 2012 budget, President Barack Obama renamed and broadened the "Mathematics and Science Partnership (MSP)" to award block grants to states for improving teacher education in those subjects. In the 2015 run of the Program for International Student Assessment (PISA), American students came out 35th in mathematics, 24th in reading, and 25th in science, out of 109 countries. The United States also ranked 29th in the percentage of 24-year-olds with science or mathematics degrees. STEM education often uses new technologies such as 3D printers to encourage interest in STEM fields. 
STEM education can also leverage the combination of new technologies, such as photovoltaics and environmental sensors, with old technologies such as composting systems and irrigation within land lab environments. In 2006 the United States National Academies expressed their concern about the declining state of STEM education in the United States. Its Committee on Science, Engineering, and Public Policy developed a list of 10 actions. Their top three recommendations were to: increase America's talent pool by improving K–12 science and mathematics education; strengthen the skills of teachers through additional training in science, mathematics, and technology; and enlarge the pipeline of students prepared to enter college and graduate with STEM degrees. The National Aeronautics and Space Administration has also implemented programs and curricula to advance STEM education in order to replenish the pool of scientists, engineers, and mathematicians who will lead space exploration in the 21st century. Individual states, such as California, have run pilot after-school STEM programs to learn what the most promising practices are and how to implement them to increase the chance of student success. Another state to invest in STEM education is Florida, where Florida Polytechnic University, Florida's first public university dedicated to science, technology, engineering, and mathematics (STEM), was established. In-school STEM programs have been established in many districts throughout the U.S., including in New Jersey, Arizona, Virginia, North Carolina, Texas, and Ohio. Continuing STEM education has expanded to the post-secondary level through master's programs such as the University of Maryland's STEM Program, as well as programs at the University of Cincinnati. 
==== Racial gap in STEM fields ==== In the United States, the National Science Foundation found that the average science score on the 2011 National Assessment of Educational Progress was lower for black and Hispanic students than for white, Asian, and Pacific Islander students. In 2011, eleven percent of the U.S. workforce was black, while only six percent of STEM workers were black. Though STEM in the U.S. has typically been dominated by white males, there have been considerable efforts to create initiatives to make STEM a more racially and gender-diverse field. Some evidence suggests that all students, including black and Hispanic students, have a better chance of earning a STEM degree if they attend a college or university at which their entering academic credentials are at least as high as the average student's. ==== Gender gaps in STEM ==== Although women make up 47% of the workforce in the U.S., they hold only 24% of STEM jobs. Research suggests that exposing girls to female inventors at a young age has the potential to reduce the gender gap in technical STEM fields by half. Campaigns from organizations like the National Inventors Hall of Fame aimed to achieve a 50/50 gender balance in their youth STEM programs by 2020. The gender gap in Zimbabwe's STEM fields is also significant, with women holding only 28.79% of STEM degrees compared to men's 71.21%. ==== Intersectionality in STEM ==== STEM fields have been recognized as areas where underrepresentation and exclusion of marginalized groups are prevalent. STEM poses unique challenges related to intersectionality due to rigid norms and stereotypes, both in higher education and in professional settings. These norms often prioritize objectivity and meritocracy while overlooking structural inequities, creating environments where individuals with intersecting marginalized identities face compounded barriers. 
For instance, individuals from traditionally underrepresented groups may experience a phenomenon known as a "chilly climate", which refers to incidents of sexism, isolation, and pressure to prove themselves to peers and high-level academics. Minority populations in STEM often experience loneliness due to a lack of belonging and social isolation. ==== American Competitiveness Initiative ==== In the State of the Union Address on January 31, 2006, President George W. Bush announced the American Competitiveness Initiative. Bush proposed the initiative to address shortfalls in federal government support of educational development and progress at all academic levels in the STEM fields. In detail, the initiative called for significant increases in federal funding for advanced R&D programs (including a doubling of federal funding support for advanced research in the physical sciences through the DOE) and an increase in U.S. higher education graduates within STEM disciplines. The NASA Means Business competition, sponsored by the Texas Space Grant Consortium, furthers that goal. College students compete to develop promotional plans to encourage students in middle and high school to study STEM subjects and to inspire professors in STEM fields to involve their students in outreach activities that support STEM education. The National Science Foundation has numerous programs in STEM education, including some for K–12 students such as the ITEST Program that supports The Global Challenge Award ITEST Program. STEM programs have been implemented in some Arizona schools. They teach higher cognitive skills and enable students to inquire and use techniques employed by professionals in the STEM fields. Project Lead The Way (PLTW) is a provider of STEM education curricular programs to middle and high schools in the United States. 
Programs include a high school engineering curriculum called Pathway To Engineering, a high school biomedical sciences program, and a middle school engineering and technology program called Gateway To Technology. PLTW programs have been endorsed by President Barack Obama and United States Secretary of Education Arne Duncan, as well as various state, national, and business leaders. ==== STEM Education Coalition ==== The Science, Technology, Engineering, and Mathematics (STEM) Education Coalition works to support STEM programs for teachers and students at the U.S. Department of Education, the National Science Foundation, and other agencies that offer STEM-related programs. Activity of the STEM Coalition seems to have slowed since September 2008. ==== Scouting ==== In 2012, the Boy Scouts of America began handing out awards, titled NOVA and SUPERNOVA, for completing specific requirements appropriate to the scouts' program level in each of the four main STEM areas. The Girl Scouts of the USA has similarly incorporated STEM into its program through the introduction of merit badges such as "Naturalist" and "Digital Art". SAE is an international organization and provider specializing in education, award, and scholarship programs for STEM subjects, from pre-K to college degrees. It also promotes scientific and technological innovation. ==== Department of Defense programs ==== eCybermission is a free, web-based science, mathematics, and technology competition for students in grades six through nine, sponsored by the U.S. Army. Each webinar is focused on a different step of the scientific method and is presented by an experienced eCybermission CyberGuide. CyberGuides are military and civilian volunteers with a strong background in STEM and STEM education, who can provide insight into science, technology, engineering, and mathematics to students and team advisers. 
STARBASE is an educational program, sponsored by the Office of the Assistant Secretary of Defense for Reserve Affairs. Students interact with military personnel to explore careers and make connections with the "real world". The program provides students with 20–25 hours of experience at the National Guard, Navy, Marines, Air Force Reserve, and Air Force bases across the nation. SeaPerch is an underwater robotics program that trains teachers to teach their students how to build an underwater remotely operated vehicle (ROV) in an in-school or out-of-school setting. Students build the ROV from a kit composed of low-cost, easily accessible parts, following a curriculum that teaches basic engineering and science concepts with a marine engineering theme. ==== NASA ==== NASAStem is a program of the U.S. space agency NASA to increase diversity within its ranks, including age, disability, and gender as well as race/ethnicity. ==== Legislation ==== The America COMPETES Act (P.L. 110–69) became law on August 9, 2007. It is intended to increase the nation's investment in science and engineering research and in STEM education from kindergarten to graduate school and postdoctoral education. The act authorizes funding increases for the National Science Foundation, National Institute of Standards and Technology laboratories, and the Department of Energy (DOE) Office of Science over FY2008–FY2010. Robert Gabrys, Director of Education at NASA's Goddard Space Flight Center, articulated success as increased student achievement, early expression of student interest in STEM subjects, and student preparedness to enter the workforce. ==== Jobs ==== In November 2012 the White House announcement before the congressional vote on the STEM Jobs Act put President Obama in opposition to many of the Silicon Valley firms and executives who bankrolled his re-election campaign. 
The Department of Labor identified 14 sectors that are "projected to add substantial numbers of new jobs to the economy or affect the growth of other industries or are being transformed by technology and innovation requiring new sets of skills for workers." The identified sectors were as follows: advanced manufacturing, automotive, construction, financial services, geospatial technology, homeland security, information technology, transportation, aerospace, biotechnology, energy, healthcare, hospitality, and retail. The Department of Commerce notes that STEM careers are some of the best-paying and have the greatest potential for job growth in the early 21st century. The report also notes that STEM workers play a key role in the sustained growth and stability of the U.S. economy, and that training in STEM fields generally results in higher wages, whether or not the workers remain in a STEM field. In 2015, there were around 9.0 million STEM jobs in the United States, representing 6.1% of American employment. STEM jobs were increasing by around 9 percent per year. The Brookings Institution found that the demand for competent technology graduates will surpass the number of capable applicants by at least one million individuals. According to the Pew Research Center, a typical STEM worker earns two-thirds more than those employed in other fields. ==== Recent progress ==== According to the 2014 US census, "74 percent of those who have a bachelor's degree in science, technology, engineering and math — commonly referred to as STEM — are not employed in STEM occupations." In September 2017, several large American technology firms collectively pledged to donate $300 million for computer science education in the U.S. Pew findings revealed in 2018 that Americans identified several issues that hound STEM education, including unconcerned parents, disinterested students, obsolete curriculum materials, and too much focus on state parameters. 
Fifty-seven percent of survey respondents pointed to students' lack of concentration in learning as one of STEM education's main problems. The recent National Assessment of Educational Progress (NAEP) report card published technology and engineering literacy scores, which determine whether students can apply technology and engineering proficiency to real-life scenarios. The report showed a gap of 28 points between low-income students and their high-income counterparts. The same report also indicated a 38-point difference between white and black students. The Smithsonian Science Education Center (SSEC) announced the release of a five-year strategic plan by the Committee on STEM Education of the National Science and Technology Council on December 4, 2018. The plan is entitled "Charting a Course for Success: America's Strategy for STEM Education." Its objective is to propose a federal strategy, anchored on a vision for the future, so that all Americans have permanent access to premium-quality education in science, technology, engineering, and mathematics, allowing the United States to emerge as a world leader in STEM mastery, employment, and innovation. The goals of this plan are building foundations for STEM literacy; increasing diversity, equity, and inclusion in STEM; and preparing the STEM workforce for the future. The 2019 fiscal budget proposal of the White House supported the funding plan in President Donald Trump's Memorandum on STEM Education, which allocated around $200 million in grant funding for STEM education every year. The budget also supports STEM through a $20 million grant program for career and technical education. ==== Events and programs to help develop STEM in US schools ==== FIRST Tech Challenge VEX Robotics Competitions FIRST Robotics Competition === Vietnam === In Vietnam, beginning in 2012, many private education organizations have offered STEM education initiatives. 
In 2015, the Ministry of Science and Technology and Liên minh STEM organized the first National STEM Day, followed by many similar events across the country. In 2015, the Ministry of Education and Training included STEM as an area that needed to be encouraged in the national school year program. In May 2017, the Prime Minister signed Directive No. 16 stating: "Dramatically change the policies, contents, education and vocational training methods to create a human resource capable of receiving new production technology trends, with a focus on promoting training in science, technology, engineering and mathematics (STEM), foreign languages, information technology in general education" and asking the "Ministry of Education and Training (to): Promote the deployment of science, technology, engineering and mathematics (STEM) education in general education program; Pilot organize in some high schools from 2017 to 2018." == Women == Women constitute 47% of the U.S. workforce but perform only 24% of STEM-related jobs. In the UK, women perform 13% of STEM-related jobs (2014). In the U.S., women with STEM degrees are more likely than their male counterparts to work in education or healthcare rather than in STEM fields. The gender ratio depends on the field of study. For example, in the European Union in 2012, women made up 47.3% of graduates overall, 51% in the social sciences, business, and law, 42% in science, mathematics, and computing, 28% in engineering, manufacturing, and construction, and 59% of PhD graduates in health and welfare. A study from 2019 showed that part of the success of women in STEM depends on the way women in STEM are viewed. 
A study of grant evaluations found almost no difference between projects led by men and those led by women when proposals were evaluated primarily on the project itself; but when proposals were evaluated primarily on the project leader, projects headed by women were awarded grants four percent less often. Improving the experiences of women in STEM is a major component of increasing the number of women in STEM. One part of this includes the need for role models and mentors who are women in STEM. Along with this, having good resources for information and networking opportunities can improve women's ability to flourish in STEM fields. Adding to the complexity, global studies indicate that biology may play a significant role in the gender gaps in STEM fields, because the propensity for women to pursue college degrees in STEM fields declines consistently as countries become wealthier and more egalitarian. As women become freer to choose their careers, they are more prone to choose careers that relate to people rather than objects. == LGBTQ+ == People identifying as LGBTQ+ have faced discrimination in STEM fields throughout history. Few have been openly queer in STEM; two well-known examples are Alan Turing, the father of computer science, and Sara Josephine Baker, an American physician and public-health leader. Despite recent changes in attitudes towards LGBTQ+ people, discrimination still permeates STEM fields. A recent study has shown that sexual minority students were less likely to have completed a bachelor's degree in a STEM field, having opted to switch their major. Those who remained in a STEM field were, however, more likely to participate in undergraduate research programs. According to the study, sexual minorities showed higher overall retention rates within STEM-related fields compared to heterosexual women. 
Another study concluded that queer people are more likely to experience exclusion, harassment, and other negative impacts in a STEM career, while also having fewer opportunities and resources available to them. Multiple programs and institutions are working towards increasing the inclusion and acceptance of LGBTQ+ people in STEM. In the US, the National Organization of Gay and Lesbian Scientists and Technical Professionals (NOGLSTP) has organized people to address homophobia since the 1980s and now promotes activism and support for queer scientists. Other programs, including 500 Queer Scientists and Pride in STEM, function as visibility campaigns for LGBTQ+ people in STEM worldwide. == Criticism == The focus on increasing participation in STEM fields has attracted criticism. In the 2014 article "The Myth of the Science and Engineering Shortage" in The Atlantic, demographer Michael S. Teitelbaum criticized the efforts of the U.S. government to increase the number of STEM graduates, saying that, among studies on the subject, "No one has been able to find any evidence indicating current widespread labor market shortages or hiring difficulties in science and engineering occupations that require bachelor's degrees or higher", and that "Most studies report that real wages in many—but not all—science and engineering occupations have been flat or slow-growing, and unemployment as high or higher than in many comparably-skilled occupations." Teitelbaum also wrote that the then-current national fixation on increasing STEM participation paralleled previous U.S. government efforts since World War II to increase the number of scientists and engineers, all of which he stated ultimately ended in "mass layoffs, hiring freezes, and funding cuts", including one driven by the Space Race of the late 1950s and 1960s, which he wrote led to "a bust of serious magnitude in the 1970s." IEEE Spectrum contributing editor Robert N. 
Charette echoed these sentiments in the 2013 article "The STEM Crisis Is a Myth", also noting that there was a "mismatch between earning a STEM degree and having a STEM job" in the United States, with only around 1⁄4 of STEM graduates working in STEM fields, while less than half of workers in STEM fields have a STEM degree. Economics writer Ben Casselman, in a 2014 study of post-graduation earnings in the United States for FiveThirtyEight, wrote that, based on the data, science should not be grouped with the other three STEM categories, because, while the other three generally result in high-paying jobs, "many sciences, particularly the life sciences, pay below the overall median for recent college graduates." A 2017 article from the University of Leicester concluded that "maintaining accounts of a ‘crisis’ in the supply of STEM workers has usually been in the interests of industry, the education sector and government, as well as the lobby groups that represent them. Concerns about a shortage have meant the allocation of significant additional resources to the sector whose representatives have, in turn, become powerful voices in advocating for further funds and further investment." A 2022 report from Rutgers University stated: "In the United States, the STEM crisis theme is a perennial policy favorite, appearing every few years as an urgent concern in the nation’s competition with whatever other nation is ascendant, or as the cause of whatever problem is ailing the domestic economy. And the solution is always the same: increase the supply of STEM workers through expanding STEM education. Time and again, serious and empirically grounded studies find little evidence of any systemic failures or an inability of market responses to address whatever supply is required to meet workforce needs." 
A study of the UK job market, published in 2022, found problems similar to those reported earlier for the USA: "It is not clear that having a degree in the sciences, rather than in other subjects, provides any sort of advantage in terms of short- or long-term employability... While only a minority of STEM graduates ever work in highly-skilled STEM jobs, we identified three particular characteristics of the STEM labour market that may present challenges for employers: STEM employment appears to be predicated on early entry to the sector; a large proportion of STEM graduates are likely to never work in the sector; and there may be more movement out of HS STEM positions by older workers than in other sectors..." == See also == == References == == Further reading == David Beede; et al. (September 2011). "Education Supports Racial and Ethnic Equality in STEM" (PDF). U.S. Department of Commerce. Retrieved 2012-12-21. David Beede; et al. (August 2011). "Women in STEM: An Opportunity and An Imperative" (PDF). U.S. Department of Commerce. Retrieved 2012-12-21. Kaye Husbands Fealing, Aubrey Incorvaia, and Richard Utz, "Humanizing Science and Engineering for the Twenty-First Century." Issues in Science and Technology, Fall issue, 2022: 54–57. David Langdon; et al. (July 2011). "STEM: Good Jobs Now and For the Future" (PDF). U.S. Department of Commerce. Retrieved 2012-12-21. Arden Bement (May 24, 2005). "Statement To House & Senate Appropriators In Support Of STEM Education And NSF Education" (PDF). STEM Coalition. Archived from the original (PDF) on November 20, 2012. Retrieved 2012-12-21. Carla C. Johnson, et al., eds. (2020). Handbook of Research on STEM Education (Routledge, 2020). Mary Kirk (2009). Gender and Information Technology: Moving Beyond Access to Co-Create Global Partnership. IGI Global Snippet. ISBN 978-1-59904-786-7. Shirley M. Malcom; Daryl E. Chubin; Jolene K. Jesse (2004). 
Standing Our Ground: A Guidebook for STEM Educators in the Post-Michigan Era. American Association for the Advancement of Science. ISBN 0871686996. UNESCO publication on girls' education in STEM – Cracking the Code: Girls' and Women's Education in Science, Technology, Engineering and Mathematics (STEM). http://unesdoc.unesco.org/images/0025/002534/253479E.pdf Wing Lau, Chief Engineer at the Department of Physics, Oxford University (Oct 12, 2017). "STEM Re-vitalisation, not trivialisation". OpenSchool. Retrieved 2017-10-12. == External links == Media related to STEM at Wikimedia Commons
https://en.wikipedia.org/wiki/Science,_technology,_engineering,_and_mathematics
The National Institute of Standards and Technology (NIST) is an agency of the United States Department of Commerce whose mission is to promote American innovation and industrial competitiveness. NIST's activities are organized into physical science laboratory programs that include nanoscale science and technology, engineering, information technology, neutron research, material measurement, and physical measurement. From 1901 to 1988, the agency was named the National Bureau of Standards. == History == === Background === The Articles of Confederation, ratified by the colonies in 1781, provided: The United States in Congress assembled shall also have the sole and exclusive right and power of regulating the alloy and value of coin struck by their own authority, or by that of the respective states—fixing the standards of weights and measures throughout the United States. Article 1, Section 8, of the Constitution of the United States, ratified in 1789, granted these powers to the new Congress: "The Congress shall have power ... To coin money, regulate the value thereof, and of foreign coin, and fix the standard of weights and measures". In January 1790, President George Washington, in his first annual message to Congress, said, "Uniformity in the currency, weights, and measures of the United States is an object of great importance, and will, I am persuaded, be duly attended to." On October 25, 1791, Washington again appealed to Congress: A uniformity of the weights and measures of the country is among the important objects submitted to you by the Constitution and if it can be derived from a standard at once invariable and universal, must be no less honorable to the public council than conducive to the public convenience. In 1821, President John Quincy Adams declared, "Weights and measures may be ranked among the necessities of life to every individual of human society." 
Nevertheless, it was not until 1838 that the United States government adopted a uniform set of standards. From 1830 until 1901, the role of overseeing weights and measures was carried out by the Office of Standard Weights and Measures, which was part of the Survey of the Coast—renamed the United States Coast Survey in 1836 and the United States Coast and Geodetic Survey in 1878—in the United States Department of the Treasury. === Bureau of Standards (1901–1988) === In 1901, in response to a bill proposed by Congressman James H. Southard (R, Ohio), the Bureau of Standards was founded with the mandate to provide standard weights and measures, and to serve as the national physical laboratory for the United States. Southard had previously sponsored a bill for metric conversion of the United States. President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000. The Bureau took custody of the copies of the kilogram and meter bars that were the standards for US measures, and set up a program to provide metrology services for United States scientific and commercial users. A laboratory site was constructed in Washington, DC, and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures, the Bureau developed instruments for electrical units and for measurement of light. In 1905 a meeting was called that would be the first "National Conference on Weights and Measures". Initially conceived as purely a metrology agency, the Bureau of Standards was directed by Herbert Hoover to set up divisions to develop commercial standards for materials and products. Some of these standards were for products intended for government use, but product standards also affected private-sector consumption. Quality standards were developed for products including some types of clothing, automobile brake systems and headlamps, antifreeze, and electrical safety. 
During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off. Between the wars, Harry Diamond of the Bureau developed a blind approach radio aircraft landing system. During World War II, military research and development was carried out, including development of radio propagation forecast methods, the proximity fuze and the standardized airframe used originally for Project Pigeon, and shortly afterwards the autonomously radar-guided Bat anti-ship guided bomb and the Kingfisher family of torpedo-carrying missiles. In 1948, financed by the United States Air Force, the Bureau began design and construction of SEAC, the Standards Eastern Automatic Computer. The computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time the Standards Western Automatic Computer was built at the Los Angeles office of the NBS by Harry Huskey and used for research there. A mobile version, DYSEAC, was built for the Signal Corps in 1954. === National Institute of Standards and Technology (from 1988) === Due to a changing mission, the "National Bureau of Standards" became the "National Institute of Standards and Technology" in 1988. Following the September 11, 2001 attacks, under the National Construction Safety Team Act (NCST), NIST conducted the official investigation into the collapse of the World Trade Center buildings. Following the 2021 Surfside condominium building collapse, NIST sent engineers to the site to investigate the cause of the collapse. In 2019, NIST launched a program named NIST on a Chip to decrease the size of instruments from lab machines to chip size. Applications include aircraft testing, communication with satellites for navigation purposes, and temperature and pressure measurement. In 2023, the Biden administration began plans to create a U.S.
AI Safety Institute within NIST to coordinate AI safety matters. According to The Washington Post, NIST is considered "notoriously underfunded and understaffed", which could present an obstacle to these efforts. == Constitution == NIST, known between 1901 and 1988 as the National Bureau of Standards (NBS), is a measurement standards laboratory, also known as the National Metrological Institute (NMI), which is a non-regulatory agency of the United States Department of Commerce. The institute's official mission is to: Promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life. NIST had an operating budget for fiscal year 2007 (October 1, 2006 – September 30, 2007) of about $843.3 million. NIST's 2009 budget was $992 million, and it also received $610 million as part of the American Recovery and Reinvestment Act. NIST employs about 2,900 scientists, engineers, technicians, and support and administrative personnel. About 1,800 NIST associates (guest researchers and engineers from American companies and foreign countries) complement the staff. In addition, NIST partners with 1,400 manufacturing specialists and staff at nearly 350 affiliated centers around the country. NIST publishes the Handbook 44 that provides the "Specifications, tolerances, and other technical requirements for weighing and measuring devices". === Metric system === The Congress of 1866 made the use of the metric system in commerce a legally protected activity through the passage of the Metric Act of 1866. On May 20, 1875, 17 out of 20 countries signed a document known as the Metric Convention or the Treaty of the Meter, which established the International Bureau of Weights and Measures under the control of an international committee elected by the General Conference on Weights and Measures.
== Organization == NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado, which was dedicated by President Eisenhower in 1954. NIST's activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six. NIST Laboratories include: Communications Technology Laboratory (CTL) Engineering Laboratory (EL) Information Technology Laboratory (ITL) Center for Neutron Research (NCNR) Material Measurement Laboratory (MML) Physical Measurement Laboratory (PML) Extramural programs include: Hollings Manufacturing Extension Partnership (MEP), a nationwide network of centers to assist small and mid-sized manufacturers to create and retain jobs, improve efficiencies, and minimize waste through process improvements and to increase market penetration with innovation and growth strategies; Technology Innovation Program (TIP), a grant program where NIST and industry partners cost share the early-stage development of innovative but high-risk technologies; Baldrige Performance Excellence Program, which administers the Malcolm Baldrige National Quality Award, the nation's highest award for performance and business excellence. NIST's Boulder laboratories are best known for NIST‑F1, a cesium fountain atomic clock that serves as the source of the nation's official time. From its measurement of the natural resonance frequency of cesium—which defines the second—NIST broadcasts time signals via longwave radio station WWVB near Fort Collins, Colorado, and shortwave radio stations WWV and WWVH, located near Fort Collins and Kekaha, Hawaii, respectively. NIST also operates a neutron science user facility: the NIST Center for Neutron Research (NCNR). The NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields (materials science, fuel cells, biotechnology, etc.).
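The role of the cesium resonance described above (its frequency defines the SI second) can be illustrated with a short back-of-the-envelope calculation. This is a sketch for illustration only; the one-cycle-per-second error is a hypothetical figure chosen to show the scale involved:

```python
# The SI second is defined as 9,192,631,770 periods of the radiation
# from the hyperfine transition of the cesium-133 ground state.
CESIUM_HZ = 9_192_631_770
SECONDS_PER_DAY = 86_400

# Cesium cycles counted in one day:
cycles_per_day = CESIUM_HZ * SECONDS_PER_DAY

# A hypothetical clock whose frequency is off by one cycle per second
# (a fractional error of ~1.1e-10) would drift by this much each day:
drift_per_day_s = SECONDS_PER_DAY / CESIUM_HZ

print(cycles_per_day)   # 794243384928000
print(drift_per_day_s)  # about 9.4e-6 seconds (~9.4 microseconds)
```

Even that tiny fractional error accumulates to microseconds per day, which is why primary frequency standards such as NIST‑F1 aim for far smaller uncertainties.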
The SURF III Synchrotron Ultraviolet Radiation Facility is a source of synchrotron radiation, in continuous operation since 1961. SURF III now serves as the US national standard for source-based radiometry throughout the generalized optical spectrum. All NASA-borne, extreme-ultraviolet observation instruments have been calibrated at SURF since the 1970s, and SURF is used for the measurement and characterization of systems for extreme ultraviolet lithography. The Center for Nanoscale Science and Technology (CNST) performs research in nanotechnology, both through internal research efforts and by running a user-accessible cleanroom nanomanufacturing facility. This "NanoFab" is equipped with tools for lithographic patterning and imaging (e.g., electron microscopes and atomic force microscopes). === Committees === NIST has seven standing committees: Technical Guidelines Development Committee (TGDC) Advisory Committee on Earthquake Hazards Reduction (ACEHR) National Construction Safety Team Advisory Committee (NCST Advisory Committee) Information Security and Privacy Advisory Board (ISPAB) Visiting Committee on Advanced Technology (VCAT) Board of Overseers for the Malcolm Baldrige National Quality Award (MBNQA Board of Overseers) Manufacturing Extension Partnership National Advisory Board (MEPNAB) == Projects == === Measurements and standards === As part of its mission, NIST supplies industry, academia, government, and other users with over 1,300 Standard Reference Materials (SRMs). These artifacts are certified as having specific characteristics or component content, used as calibration standards for measuring equipment and procedures, quality control benchmarks for industrial processes, and experimental control samples. === Handbook 44 === NIST publishes the Handbook 44 each year after the annual meeting of the National Conference on Weights and Measures (NCWM). 
Each edition is developed through cooperation of the Committee on Specifications and Tolerances of the NCWM and the Weights and Measures Division (WMD) of NIST. The purpose of the book is a partial fulfillment of the statutory responsibility for "cooperation with the states in securing uniformity of weights and measures laws and methods of inspection". NIST has been publishing various forms of what is now the Handbook 44 since 1918 and began publication under the current name in 1949. The 2010 edition conforms to the concept of the primary use of the SI (metric) measurements recommended by the Omnibus Foreign Trade and Competitiveness Act of 1988. === Homeland security === NIST is developing government-wide identity document standards for federal employees and contractors to prevent unauthorized persons from gaining access to government buildings and computer systems. === World Trade Center collapse investigation === In 2002, the National Construction Safety Team Act mandated NIST to conduct an investigation into the collapse of the World Trade Center buildings 1 and 2 and the 47-story 7 World Trade Center. The "World Trade Center Collapse Investigation", directed by lead investigator Shyam Sunder, covered three aspects, including a technical building and fire safety investigation to study the factors contributing to the probable cause of the collapses of the WTC Towers (WTC 1 and 2) and WTC 7. NIST also established a research and development program to provide the technical basis for improved building and fire codes, standards, and practices, and a dissemination and technical assistance program to engage leaders of the construction and building community in implementing proposed changes to practices, standards, and codes. NIST also is providing practical guidance and tools to better prepare facility owners, contractors, architects, engineers, emergency responders, and regulatory authorities to respond to future disasters. 
The investigation portion of the response plan was completed with the release of the final report on 7 World Trade Center on November 20, 2008. The final report on the WTC Towers—including 30 recommendations for improving building and occupant safety—was released on October 26, 2005. === Election technology === NIST works in conjunction with the Technical Guidelines Development Committee of the Election Assistance Commission to develop the Voluntary Voting System Guidelines for voting machines and other election technology. === Cybersecurity Framework === In February 2014 NIST published the NIST Cybersecurity Framework that serves as voluntary guidance for organizations to manage and reduce cybersecurity risk. It was later amended and Version 1.1 was published in April 2018. Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, made the Framework mandatory for U.S. federal government agencies. An extension to the NIST Cybersecurity Framework is the Cybersecurity Maturity Model Certification (CMMC), which was introduced in 2019 (though the origin of CMMC began with Executive Order 13556). It emphasizes the importance of implementing Zero-trust architecture (ZTA), which focuses on protecting resources over the network perimeter. ZTA utilizes zero trust principles which include "never trust, always verify", "assume breach" and "least privileged access" to safeguard users, assets, and resources. Since ZTA holds no implicit trust in users within the network perimeter, authentication and authorization are performed at every stage of a digital transaction. This reduces the risk of unauthorized access to resources. NIST released a draft of the CSF 2.0 for public comment through November 4, 2023. NIST decided to update the framework to make it more applicable to small and medium size enterprises that use the framework, as well as to accommodate the constantly changing nature of cybersecurity.
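The zero-trust principles quoted above ("never trust, always verify", least privilege, per-transaction checks) can be sketched as a toy per-request authorization routine. This is a minimal illustration only; the roles, policy table, and field names are invented, not drawn from any NIST specification:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token_valid: bool       # authentication is re-checked on every request
    device_compliant: bool  # device posture is evaluated per transaction
    resource: str

# Least-privilege policy: which resources each role may access (illustrative).
POLICY = {"analyst": {"reports"}, "admin": {"reports", "config"}}

def authorize(req: Request, role: str) -> bool:
    """Zero trust: no implicit trust from network location; every
    request is authenticated and authorized independently."""
    if not req.token_valid or not req.device_compliant:
        return False  # "never trust, always verify"
    return req.resource in POLICY.get(role, set())  # least privilege

print(authorize(Request("ana", True, True, "reports"), "analyst"))  # True
print(authorize(Request("ana", True, True, "config"), "analyst"))   # False
```

Because no request is implicitly trusted, the token and device checks run on every call rather than once at a network boundary.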
In August 2024, NIST released a final set of encryption tools designed to withstand the attack of a quantum computer. These post-quantum encryption standards secure a wide range of electronic information, from confidential email messages to e-commerce transactions that propel the modern economy. == People == Four scientific researchers at NIST have been awarded Nobel Prizes for work in physics: William Daniel Phillips in 1997, Eric Allin Cornell in 2001, John Lewis Hall in 2005 and David Jeffrey Wineland in 2012, which is the largest number for any US government laboratory. All four were recognized for their work related to laser cooling of atoms, which is directly related to the development and advancement of the atomic clock. In 2011, Dan Shechtman was awarded the Nobel Prize in chemistry for his work on quasicrystals in the Metallurgy Division from 1982 to 1984. In addition, John Werner Cahn was awarded the 2011 Kyoto Prize for Materials Science, and the National Medal of Science has been awarded to NIST researchers Cahn (1998) and Wineland (2007). Other notable people who have worked at NBS or NIST include: == Directors == Since 1989, the director of NIST has been a Presidential appointee and is confirmed by the United States Senate, and since that year the average tenure of NIST directors has fallen from 11 years to 2 years in duration. Since the 2011 reorganization of NIST, the director also holds the title of Under Secretary of Commerce for Standards and Technology. Seventeen individuals have officially held the position (in addition to seven acting directors who have served on a temporary basis). == Patents == NIST holds patents on behalf of the Federal government of the United States, with at least one of them being custodial to protect public domain use, such as one for a Chip-scale atomic clock, developed by a NIST team as part of a DARPA competition. 
== Controversy regarding NIST standard SP 800-90 == In September 2013, both The Guardian and The New York Times reported that NIST allowed the National Security Agency (NSA) to insert into NIST standard SP 800-90 a pseudorandom number generator called Dual EC DRBG that contained a kleptographic backdoor, which the NSA could use to covertly predict the future outputs of the generator and thereby surreptitiously decrypt data. Both papers report that the NSA worked covertly to get its own version of SP 800-90 approved for worldwide use in 2006. The whistle-blowing document states that "eventually, NSA became the sole editor". The reports confirm suspicions and technical grounds publicly raised by cryptographers in 2007 that the EC-DRBG could contain a kleptographic backdoor (perhaps placed in the standard by NSA). NIST responded to the allegations, stating that "NIST works to publish the strongest cryptographic standards possible" and that it uses "a transparent, public process to rigorously vet our recommended standards". The agency stated that "there has been some confusion about the standards development process and the role of different organizations in it...The National Security Agency (NSA) participates in the NIST cryptography process because of its recognized expertise. NIST is also required by statute to consult with the NSA." Recognizing the concerns expressed, the agency reopened the public comment period for the SP800-90 publications, promising that "if vulnerabilities are found in these or any other NIST standards, we will work with the cryptographic community to address them as quickly as possible". Due to public concern of this cryptovirology attack, NIST rescinded the EC-DRBG algorithm from the NIST SP 800-90 standard. == Publications == The Journal of Research of the National Institute of Standards and Technology was the flagship scientific journal at NIST. It was published from 1904 to 2022.
First published in 1972, the Journal of Physical and Chemical Reference Data is a joint venture of the American Institute of Physics and the National Institute of Standards and Technology. In addition to these journals, NIST (and the National Bureau of Standards before it) has a robust technical reports publishing arm. NIST technical reports are published in several dozen series, which cover a wide range of topics, from computer technology to construction to aspects of standardization including weights, measures and reference data. In addition to technical reports, NIST scientists publish many journal and conference papers each year; a database of these, along with more recent technical reports, can be found on the NIST website. == See also == Dimensional metrology Forensic metrology Quantum metrology Smart Metrology Time metrology == References == == External links == Main NIST website Archived August 5, 2010, at the Wayback Machine NIST in the Federal Register NIST Publications Portal The Official US Time Archived April 2, 2019, at the Wayback Machine NIST Standard Reference Data Archived July 12, 2017, at the Wayback Machine NIST Standard Reference Materials Archived July 12, 2017, at the Wayback Machine NIST Center for Nanoscale Science and Technology (CNST) Archived August 19, 2016, at the Wayback Machine Manufacturing Extension Partnership NIST on a chip Archived December 13, 2020, at the Wayback Machine SI Redefinition Archived February 6, 2022, at the Wayback Machine Scientific and Technical Research and Services Archived October 3, 2022, at the Wayback Machine account on USAspending.gov Historic technical reports from the National Bureau of Standards digitized by the Technical Report Archive & Image Library are available hosted by TRAIL Archived May 25, 2017, at Archive-It and the University of North Texas libraries.
Smithsonian Institution Press, 1978, Smithsonian Studies in History and Technology, Number 40: United States Standards of Weights and Measures, Their Creation and Creators, by Arthur H. Frazier Archived October 16, 2019, at the Wayback Machine
https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
Emerging technologies are technologies whose development, practical applications, or both are still largely unrealized. These technologies are generally new but also include old technologies finding new applications. Emerging technologies are often perceived as capable of changing the status quo. Emerging technologies are characterized by radical novelty (in application even if not in origins), relatively fast growth, coherence, prominent impact, and uncertainty and ambiguity. In other words, an emerging technology can be defined as "a radically novel and relatively fast growing technology characterised by a certain degree of coherence persisting over time and with the potential to exert a considerable impact on the socio-economic domain(s) which is observed in terms of the composition of actors, institutions and patterns of interactions among those, along with the associated knowledge production processes. Its most prominent impact, however, lies in the future and so in the emergence phase is still somewhat uncertain and ambiguous." Emerging technologies include a variety of technologies such as educational technology, information technology, nanotechnology, biotechnology, robotics, and artificial intelligence. New technological fields may result from the technological convergence of different systems evolving towards similar goals. Convergence brings previously separate technologies such as voice (and telephony features), data (and productivity applications) and video together so that they share resources and interact with each other, creating new efficiencies. Emerging technologies are those technical innovations which represent progressive developments within a field for competitive advantage; converging technologies represent previously distinct fields which are in some way moving towards stronger inter-connection and similar goals. 
However, the opinion on the degree of the impact, status and economic viability of several emerging and converging technologies varies. == History of emerging technologies == In the history of technology, emerging technologies are contemporary advances and innovation in various fields of technology. Over centuries innovative methods and new technologies have been developed and opened up. Some of these technologies are due to theoretical research, and others from commercial research and development. Technological growth includes incremental developments and disruptive technologies. An example of the former was the gradual roll-out of DVD (digital video disc) as a development intended to follow on from the previous optical technology compact disc. By contrast, disruptive technologies are those where a new method replaces the previous technology and makes it redundant, for example, the replacement of horse-drawn carriages by automobiles and other vehicles. == Emerging technology debates == Many writers, including computer scientist Bill Joy, have identified clusters of technologies that they consider critical to humanity's future. Joy warns that the technology could be used by elites for good or evil. They could use it as "good shepherds" for the rest of humanity or decide everyone else is superfluous and push for the mass extinction of those made unnecessary by technology. Advocates of the benefits of technological change typically see emerging and converging technologies as offering hope for the betterment of the human condition. Cyberphilosophers Alexander Bard and Jan Söderqvist argue in The Futurica Trilogy that while Man himself is basically constant throughout human history (genes change very slowly), all relevant change is rather a direct or indirect result of technological innovation (memes change very fast) since new ideas always emanate from technology use and not the other way around. 
Man should consequently be regarded as history's main constant and technology as its main variable. However, critics of the risks of technological change, and even some advocates such as transhumanist philosopher Nick Bostrom, warn that some of these technologies could pose dangers, perhaps even contribute to the extinction of humanity itself; i.e., some of them could involve existential risks. Much ethical debate centers on issues of distributive justice in allocating access to beneficial forms of technology. Some thinkers, including environmental ethicist Bill McKibben, oppose the continuing development of advanced technology partly out of fear that its benefits will be distributed unequally in ways that could worsen the plight of the poor. By contrast, inventor Ray Kurzweil is among techno-utopians who believe that emerging and converging technologies could and will eliminate poverty and abolish suffering. Some analysts such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, argue that as information technology advances, robots and other forms of automation will ultimately result in significant unemployment as machines and software begin to match and exceed the capability of workers to perform most routine jobs. As robotics and artificial intelligence develop further, even many skilled jobs may be threatened. Technologies such as machine learning may ultimately allow computers to do many knowledge-based jobs that require significant education. This may result in substantial unemployment at all skill levels, stagnant or falling wages for most workers, and increased concentration of income and wealth as the owners of capital capture an ever-larger fraction of the economy. This in turn could lead to depressed consumer spending and economic growth as the bulk of the population lacks sufficient discretionary income to purchase the products and services produced by the economy. 
== Examples of emerging technologies == === Artificial intelligence === Artificial intelligence (AI) is the intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with animal-like intelligence. Major AI researchers and textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the study of making intelligent machines". The central functions (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long-term goals. Currently, popular approaches include deep learning, statistical methods, computational intelligence and traditional symbolic AI. There is an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others. === 3D printing === 3D printing, also known as additive manufacturing, has been posited by Jeremy Rifkin and others as part of the third industrial revolution. Combined with Internet technology, 3D printing would allow for digital blueprints of virtually any material product to be sent instantly to another person to be produced on the spot, making purchasing a product online almost instantaneous. Although this technology is still too crude to produce most products, it is rapidly developing and created a controversy in 2013 around the issue of 3D printed firearms. === Gene therapy === Gene therapy was first successfully demonstrated in late 1990/early 1991 for adenosine deaminase deficiency, though the treatment was somatic – that is, did not affect the patient's germ line and thus was not heritable.
This led the way to treatments for other genetic diseases and increased interest in germ line gene therapy – therapy affecting the gametes and descendants of patients. Between September 1990 and January 2014, there were around 2,000 gene therapy trials conducted or approved. === Cancer vaccines === A cancer vaccine is a vaccine that treats existing cancer or prevents the development of cancer in certain high-risk individuals. Vaccines that treat existing cancer are known as therapeutic cancer vaccines. There are currently no vaccines able to prevent cancer in general. On April 14, 2009, The Dendreon Corporation announced that their Phase III clinical trial of Provenge, a cancer vaccine designed to treat prostate cancer, had demonstrated an increase in survival. It received U.S. Food and Drug Administration (FDA) approval for use in the treatment of advanced prostate cancer patients on April 29, 2010. The approval of Provenge has stimulated interest in this type of therapy. === Cultured meat === Cultured meat, also called in vitro meat, clean meat, cruelty-free meat, shmeat, and test-tube meat, is an animal-flesh product that has never been part of a living animal with the exception of the fetal calf serum taken from a slaughtered cow. In the 21st century, several research projects have worked on in vitro meat in the laboratory. The first in vitro beefburger, created by a Dutch team, was eaten at a demonstration for the press in London in August 2013. There remain difficulties to be overcome before in vitro meat becomes commercially available. Cultured meat is prohibitively expensive, but it is expected that the cost could be reduced to compete with that of conventionally obtained meat as technology improves. In vitro meat is also an ethical issue. Some argue that it is less objectionable than traditionally obtained meat because it does not involve killing and reduces the risk of animal cruelty, while others disagree with eating meat that has not developed naturally.
=== Nanotechnology === Nanotechnology (sometimes shortened to nanotech) is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest widespread description of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold. === Robotics === Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing. These technologies deal with automated machines that can take the place of humans in dangerous environments, factories, warehouses, or kitchens; or resemble humans in appearance, behavior, and/or cognition. A good example of a robot that resembles humans is Sophia, a social humanoid robot developed by Hong Kong-based company Hanson Robotics which was activated on April 19, 2015. Many of today's robots are inspired by nature contributing to the field of bio-inspired robotics. === Stem-cell therapy === Stem cell therapy is an intervention strategy that introduces new adult stem cells into damaged tissue in order to treat disease or injury. Many medical researchers believe that stem cell treatments have the potential to change the face of human disease and alleviate suffering. 
The ability of stem cells to self-renew and give rise to subsequent generations with variable degrees of differentiation capacities offers significant potential for generation of tissues that can potentially replace diseased and damaged areas in the body, with minimal risk of rejection and side effects. Chimeric antigen receptor (CAR)-modified T cells have risen to prominence among other immunotherapies for cancer treatment, being implemented against B-cell malignancies. Despite the promising outcomes of this innovative technology, CAR-T cells are not exempt from limitations that have yet to be overcome in order to provide reliable and more efficient treatments against other types of cancer. === Distributed ledger technology === Distributed ledger or blockchain technology provides a transparent and immutable list of transactions. A wide range of uses has been proposed for where an open, decentralised database is required, ranging from supply chains to cryptocurrencies. Smart contracts are self-executing transactions which occur when pre-defined conditions are met. The aim is to provide security that is superior to traditional contract law, and to reduce transaction costs and delays. The original idea was conceived by Nick Szabo in 1994, but remained unrealised until the development of blockchains. === Augmented reality === This type of technology, in which digital graphics are overlaid on live footage, has been around since the 20th century, but the arrival of more powerful computing hardware and open-source software has enabled applications that were previously impractical. Examples include apps such as Pokémon Go, Snapchat and Instagram filters, and other apps that overlay fictional elements on real objects.
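The distributed-ledger idea described above, a transparent and tamper-evident list of transactions, can be sketched as a minimal hash chain in which each block commits to the hash of its predecessor. This is an illustration of the chaining principle only (no consensus, signatures, or networking), and all names are invented:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, tx: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"tx": tx, "prev": prev})  # each block commits to its parent

def verify(chain: list) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append(ledger, "alice pays bob 5")
append(ledger, "bob pays carol 2")
assert verify(ledger)

ledger[0]["tx"] = "alice pays bob 500"  # tamper with history...
assert not verify(ledger)               # ...and the chain no longer verifies
```

Changing any earlier transaction changes that block's hash, so every later "prev" pointer stops matching and verification fails, which is the sense in which the ledger is immutable.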
=== Multi-use rockets === Reusable rockets, in contrast to single-use rockets that are disposed of after launch, are able to propulsively land safely in a pre-specified place where they are recovered to be used again in later launches. Early prototypes include the McDonnell Douglas DC-X tested in the 1990s, but the company SpaceX was the first to use propulsive reusability on the first stage of an operational orbital launch vehicle, the Falcon 9, in the 2010s. SpaceX is also developing a fully reusable rocket known as Starship. Other entities developing reusable rockets include Blue Origin and Rocket Lab. == Development of emerging technologies == As innovation drives economic growth, and large economic rewards come from new inventions, a great deal of resources (funding and effort) goes into the development of emerging technologies. Some of the sources of these resources are described below. === Research and development === Research and development is directed towards the advancement of technology in general, and therefore includes development of emerging technologies. See also List of countries by research and development spending. Applied research is a form of systematic inquiry involving the practical application of science. It accesses and uses some part of the research community's (academia's) accumulated theories, knowledge, methods, and techniques, for a specific, often state-, business-, or client-driven purpose. Science policy is the area of public policy which is concerned with the policies that affect the conduct of the science and research enterprise, including the funding of science, often in pursuance of other national policy goals such as technological innovation to promote commercial product development, weapons development, health care and environmental monitoring.
=== Patents === Patents provide inventors with a limited period of time (a minimum of 20 years, with the exact duration depending on jurisdiction) of exclusive rights in the making, selling, use, leasing or otherwise of their novel technological inventions. Artificial intelligence, robotic inventions, new materials, or blockchain platforms may be patentable, the patent protecting the technological know-how used to create these inventions. In 2019, the World Intellectual Property Organization (WIPO) reported that AI was the most prolific emerging technology in terms of number of patent applications and granted patents, while the Internet of things was estimated to be the largest in terms of market size. It was followed, again in market size, by big data technologies, robotics, AI, 3D printing and the fifth generation of mobile services (5G). Since AI emerged in the 1950s, 340,000 AI-related patent applications have been filed by innovators and 1.6 million scientific papers have been published by researchers, with the majority of all AI-related patent filings published since 2013. Companies represent 26 out of the top 30 AI patent applicants, with universities or public research organizations accounting for the remaining four. === DARPA === DARPA (Defense Advanced Research Projects Agency) is an agency of the U.S. Department of Defense responsible for the development of emerging technologies for use by the military. DARPA was created in 1958 as the Advanced Research Projects Agency (ARPA) by President Dwight D. Eisenhower. Its purpose was to formulate and execute research and development projects to expand the frontiers of technology and science, with the aim to reach beyond immediate military requirements. Projects funded by DARPA have provided significant technologies that influenced many non-military fields, such as the Internet and Global Positioning System technology.
=== Technology competitions and awards === There are awards that provide incentive to push the limits of technology (generally synonymous with emerging technologies). Note that while some of these awards reward achievement after-the-fact via analysis of the merits of technological breakthroughs, others provide incentive via competitions for awards offered for goals yet to be achieved. The Orteig Prize was a $25,000 award offered in 1919 by French hotelier Raymond Orteig for the first nonstop flight between New York City and Paris. In 1927, underdog Charles Lindbergh won the prize in a modified single-engine Ryan aircraft called the Spirit of St. Louis. In total, nine teams spent $400,000 in pursuit of the Orteig Prize. The XPRIZE series of awards, public competitions designed and managed by the non-profit organization called the X Prize Foundation, are intended to encourage technological development that could benefit mankind. The most high-profile XPRIZE to date was the $10,000,000 Ansari XPRIZE relating to spacecraft development, which was awarded in 2004 for the development of SpaceShipOne. The Turing Award is an annual prize given by the Association for Computing Machinery (ACM) to "an individual selected for contributions of a technical nature made to the computing community." It is stipulated that the contributions should be of lasting and major technical importance to the computer field. The Turing Award is generally recognized as the highest distinction in computer science, and in 2014 grew to $1,000,000. The Millennium Technology Prize is awarded once every two years by Technology Academy Finland, an independent fund established by Finnish industry and the Finnish state in partnership. The first recipient was Tim Berners-Lee, inventor of the World Wide Web. In 2003, David Gobel seed-funded the Methuselah Mouse Prize (Mprize) to encourage the development of new life extension therapies in mice, which are genetically similar to humans. 
So far, three Mouse Prizes have been awarded: one for breaking longevity records to Dr. Andrzej Bartke of Southern Illinois University; one for late-onset rejuvenation strategies to Dr. Stephen Spindler of the University of California; and one to Dr. Z. Dave Sharp for his work with the pharmaceutical rapamycin. == Role of science fiction == Science fiction has often affected innovation and new technology by presenting creative, intriguing possibilities for technological advancement. For example, many rocketry pioneers were inspired by science fiction. The documentary How William Shatner Changed the World describes a number of examples of imagined technologies that became real. == Bleeding edge == The term bleeding edge has been used to refer to some new technologies, formed as an allusion to the similar terms "leading edge" and "cutting edge". It tends to imply even greater advancement, albeit at an increased risk because of the unreliability of the software or hardware. The first documented example of this term being used dates to early 1983, when an unnamed banking executive was quoted as having used it in reference to Storage Technology Corporation. == See also == List of emerging technologies Bioconservatism Bioethics Biopolitics Current research in evolutionary biology Foresight (futures studies) Futures studies Future of Humanity Institute Institute for Ethics and Emerging Technologies Institute on Biotechnology and the Human Future Technological change Differential technological development Accelerating change Moore's law Innovation Technological revolution Technological innovation system Technological utopianism Techno-progressivism Transhumanism Technological singularity == Notes == == References == == Further reading == General Giersch, H. (1982). Emerging technologies: Consequences for economic growth, structural change, and employment: symposium 1981. Tübingen: Mohr. Jones-Garmil, K. (1997).
The wired museum: Emerging technology and changing paradigms. Washington, DC: American Association of Museums. Kaldis, Byron (2010). "Converging Technologies". Sage Encyclopedia of Nanotechnology and Society. Thousand Oaks, CA: Sage. Rotolo, D.; Hicks, D.; Martin, B. R. (2015). "What is an emerging technology?". Research Policy. 44 (10): 1827–1843. arXiv:1503.00673. doi:10.1016/j.respol.2015.06.006. S2CID 15234961. Law and policy Branscomb, L. M. (1993). Empowering technology: Implementing a U.S. strategy. Cambridge, Mass: MIT Press. Raysman, R., & Raysman, R. (2002). Emerging technologies and the law: Forms and analysis. Commercial law intellectual property series. New York, N.Y.: Law Journal Press. Information and learning Hung, D., & Khine, M. S. (2006). Engaged learning with emerging technologies. Dordrecht: Springer. Kendall, K. E. (1999). Emerging information technologies: Improving decisions, cooperation, and infrastructure. Thousand Oaks, Calif: Sage Publications. Illustrated Weinersmith, Kelly; Weinersmith, Zach (2017). Soonish: Ten Emerging Technologies That'll Improve and/or Ruin Everything. Penguin Press. ISBN 978-0399563829. Other Cavin, R. K., & Liu, W. (1996). Emerging technologies: Designing low power digital systems. [New York]: Institute of Electrical and Electronics Engineers.
https://en.wikipedia.org/wiki/Emerging_technologies
Microchip Technology Incorporated is a publicly listed American semiconductor corporation that manufactures microcontroller, mixed-signal, analog, and Flash-IP integrated circuits. Its corporate headquarters is located in Chandler, Arizona. Its wafer fabs are located in Gresham, Oregon, and Colorado Springs, Colorado. The company's assembly/test facilities are in Chachoengsao, Thailand, and Calamba and Cabuyao, Philippines. Microchip Technology offers support and resources to educators, researchers and students in an effort to increase awareness and knowledge of embedded applications. == History == === Origins === Microchip Technology was founded in 1987 when General Instrument spun off its microelectronics division as a wholly owned subsidiary. The newly formed company was a supplier of programmable non-volatile memory, microcontrollers, digital signal processors, card chip on board, and consumer integrated circuits. An initial public offering (IPO) later in the year was canceled because of the October 1987 stock market crash. Microchip Technology became an independent company in 1989 when it was acquired by a group of venture capitalists led by Sequoia Capital. In the same year, Microchip Technology announced the release of small, inexpensive 8-bit reduced instruction set computing (RISC) microcontrollers for $2.40 apiece, whereas most RISC microcontrollers were 32-bit devices selling for hundreds of dollars. === 1990-2024 === In 1990, 60% of Microchip Technology's sales were from the disc drive industry and the product portfolio relied heavily on commodity EEPROM products. The company was losing US$2.5 million per quarter, had less than 6 months of cash in reserve, had exhausted lines of credit, and was failing to control expenses. Early in the year, the venture capital investors accepted an offer to sell Microchip Technology to Winbond Electronics Corporation of Taiwan for $15M. 
Winbond Electronics backed out of the deal after the Taiwanese stock market decreased in May 1990. Vice President of Operations Steve Sanghi was named president and chief operating officer of Microchip Technology in 1990. After several quarters of losses, Sanghi oversaw Microchip Technology's transition from selling commodity-based products to specialized chips, such as the RISC technology. Microchip Technology conducted an IPO in 1993, which Fortune magazine cited as the best-performing IPO of the year, with a stock appreciation of 500% and over $1bn in market capitalization. At the end of 2015, Microchip Technology posted its 100th consecutive quarter of profitability. In March 2021, Sanghi was replaced as CEO by Ganesh Moorthy. === 2024-present === In March 2024, Microchip furloughed production staff, and non-manufacturing employees were required to take a pay cut for two weeks. This was done again in June. In late November, Moorthy retired as CEO and Steve Sanghi was appointed interim CEO. In early December of that year, Sanghi announced the closure of Fab 2 in Tempe, Arizona, and also announced that Microchip would suspend its application for CHIPS and Science Act funding. On February 10, 2025, Microchip announced that it would again furlough employees intermittently throughout the rest of the year. === Acquisitions === In 1995, Microchip acquired KeeLoq technology from Nanoteq of South Africa for $10M in cash. Microchip Technology used the purchase to create the Secure Data Products Group. On May 24, 2000, Microchip acquired a wafer fab in Puyallup, Washington that was formerly owned by Matsushita Electric Industrial Company. On October 19, 2007, due to the Great Recession, the facility, known as Fab 3, was sold for $30M following an unsolicited offer. On October 27, 2000, Microchip purchased TelCom Semiconductor of Mountain View, California for $300M. In 2002, Microchip acquired a wafer fab in Gresham, Oregon from Fujitsu for $183.5M.
This fab became, and still is, Microchip's largest and is known as Fab 4. On October 15, 2008, Microchip acquired Hampshire Company, a company that sold large-format universal touch screen controller electronics and related software. On February 20, 2009, Microchip acquired Australia-based HI-TECH Software. On January 11, 2010, Microchip acquired Thomas H. Lee's Sunnyvale, California-based ZeroG Wireless for an undisclosed amount after a year-long partnership. The deal allowed Microchip to provide a Wi-Fi product for their PIC microcontrollers. In April 2010, Microchip completed the acquisition of Silicon Storage Technology (SST) for about $292M. Microchip and Cerberus Capital Management both made offers for the company. The next month, Microchip sold several SST flash memory assets back to Bing Yeh, co-founder of SST, for use in another of his companies. In 2012, Microchip acquired German-based Ident Technology AG, California-based Roving Networks, and Standard Microsystems Corporation. On June 3, 2013, Microchip acquired Novocell Semiconductor, Inc. through its Silicon Storage Technology (SST) subsidiary. In 2014, Microchip acquired Supertex, Inc. and Belgian-based EqcoLogic on February 10, and Taiwan-based ISSC Technologies on May 22. On August 3, 2015, Microchip acquired IC manufacturer Micrel for about $839M. In January 2016, Microchip purchased San Jose, California-based Atmel for $3.56bn. JPMorgan Chase advised Microchip while Qatalyst Partners advised Atmel. In May 2018, Microchip acquired Microsemi Corporation. In October 2020, Microchip acquired New Zealand-based Tekron International Limited for an undisclosed amount. In April 2024, Microchip acquired both South Korea-based VSI Co. Ltd. and Neuronix AI Labs.
== Products == Microchip offers 8-, 16-, and 32-bit microcontrollers, including PIC and AVR microcontrollers; microprocessors; analog power management and conversion products; CAN and LIN serial communication interface devices; high-voltage MEMS and piezoelectric drivers; ultrasound multiplexers; digital signal controllers; embedded controllers; and memory products (including serial EEPROM, serial SRAM, serial flash, serial NvSRAM, serial EERAM, parallel EEPROM, parallel one-time programmable flash, parallel flash and CryptoMemory devices). Microchip also offers custom programming, an AI coding assistant, hardware and software development tools, and reference designs. Available reference designs include complete systems, subsystems or functions which are purpose-built and include design files, software and support. Other offerings include crypto element devices that provide authentication, data integrity, and confidentiality in a variety of applications, such as disposables, accessories and nodes; timing, communication and real-time clock and calendar products; USB products; power management integrated circuits (PMICs); and networking products, including Ethernet interface and wireless products. === Product milestones === In April 2009, Microchip Technology announced the nanoWatt XLP microcontrollers, claiming the world's lowest sleep current. Microchip Technology had sold more than 6 billion microcontrollers as of 2009. As of 2011, Microchip Technology ships over a billion processors every year. In September 2011, Microchip Technology shipped the 10 billionth PIC microcontroller. == Wafer Fabs == == See also == ATtiny microcontroller comparison chart AVR microcontrollers KeeLoq MiWi MPLAB devices MPLAB PIC microcontrollers PICkit UNI/O == References == == External links == Official website Business data for Microchip Technology
https://en.wikipedia.org/wiki/Microchip_Technology
Deception technology (also deception and disruption technology) is a category of cyber security defense mechanisms that provide early warning of potential cyber security attacks and alert organizations of unauthorized activity. Deception technology products can detect, analyze, and defend against zero-day and advanced attacks, often in real time. They are automated, accurate, and provide insight into malicious activity within internal networks which may be unseen by other types of cyber defense. Deception technology seeks to deceive an attacker, detect them, and then defeat them. Deception technology considers the point of view of human attackers and their methods for exploiting and navigating networks to identify and exfiltrate data. It integrates with existing technologies to provide new visibility into internal networks and to share high-probability alerts and threat intelligence with the existing infrastructure. == Technology: High level view == Deception technology automates the creation of traps (decoys) and lures, which are strategically integrated among existing IT resources. These decoys provide an additional layer of protection to thwart attackers who have breached the network. Traps can be IT assets that utilize genuine licensed operating system software or emulate various devices, such as medical devices, automated teller machines (ATMs), retail point-of-sale systems, switches, routers, and more. Lures, on the other hand, typically consist of real information technology resources, such as files of different types, that are placed on actual IT assets. Due to advancements in the area of cybersecurity, deception technology programs are increasingly proactive in approach and produce fewer false-positive alerts. The goal is to accurately discover the intention of the attacker and their tactics, techniques and procedures. This information enables an effective response from the deception technology platform.
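The alerting logic behind traps and lures described above is deliberately simple: decoys are assets no legitimate user should ever touch, so any interaction with one is treated as a high-confidence alert. A minimal sketch in Python (the decoy identifiers and event fields are invented for illustration; real deception platforms integrate with network sensors and SIEM tooling):

```python
# Toy model of deception-based alerting: decoys are assets that no
# legitimate user has any reason to touch, so ANY interaction is
# treated as malicious.
DECOYS = {"10.0.5.20:22", "10.0.5.21:445", "fileserver/payroll_backup.xlsx"}

def classify(access_event):
    """Return a critical alert for any touch of a decoy asset;
    traffic to real assets is left to other defense-in-depth layers."""
    target = access_event["target"]
    if target in DECOYS:
        return {
            "severity": "critical",
            "source": access_event["source"],
            "target": target,
            "reason": "interaction with decoy asset",
        }
    return None  # not this layer's concern
```

The point of the sketch is the binary nature of the decision: no heuristics or scoring, just membership in the decoy set.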
Upon penetrating the network, attackers seek to establish a backdoor and then use it to identify and exfiltrate data and intellectual property. They begin moving laterally through the internal VLANs and almost immediately will "encounter" one of the traps. Interacting with one of these "decoys" will trigger an alert. These alerts are of very high probability and almost always correspond to an ongoing attack. The deception is designed to lure the attacker in – the attacker may consider the decoy a worthy asset and continue by injecting malware. Deception technology generally allows for automated static and dynamic analysis of this injected malware and provides these reports through automation to the security operations personnel. Deception technology may also identify, through indicators of compromise (IOC), suspect end-points that are part of the compromise cycle. Automation also allows for an automated memory analysis of the suspect endpoints, followed by automatic isolation of those endpoints. == Specialized applications == Internet of things (IoT) devices are not usually scanned by legacy defense in depth and remain prime targets for attackers within the network. Deception technology can identify attackers moving laterally through the network among these devices. Integrated turnkey devices that utilize embedded operating systems, but do not allow these operating systems to be scanned or closely protected by embedded end-point or intrusion detection software, are also well protected by a deception technology deployment in the same network. Examples include process control systems (SCADA) used in many manufacturing applications on a global basis. Deception technology has been associated with the discovery of Zombie Zero, an attack vector. Deception technology identified this attacker utilizing malware embedded in barcode readers which were manufactured overseas. Medical devices are particularly vulnerable to cyber-attacks within healthcare networks.
As FDA-certified devices, they are in closed systems and not accessible to standard cyber defense software. Deception technology can surround and protect these devices and identify attackers using backdoor placement and data exfiltration. Recently documented cyber attacks on medical devices include x-ray machines, CT scanners, MRI scanners, blood gas analyzers, PACS systems and many more. Networks utilizing these devices can be protected by deception technology. This attack vector, called medical device hijack or medjack, is estimated to have penetrated many hospitals worldwide. Specialized deception technology products are now capable of addressing the rise in ransomware by deceiving ransomware into engaging in an attack on a decoy resource, while isolating the infection points and alerting the cyber defense software team. == History == Honeypots were perhaps the first very simple form of deception. A honeypot appeared simply as an unprotected information technology resource and presented itself in an attractive way to a prospective attacker already within the network. However, most early honeypots exhibited challenges with functionality, integrity and overall efficacy in meeting these goals. A key difficulty was the lack of automation that would enable broad-scale deployment; a deployment strategy aiming to cover an enterprise in which up to tens of thousands of VLANs need to be protected would not be economically efficient using manual processes and manual configuration. The gap between legacy honeypots and modern deception technology has diminished over time and will continue to do so. Modern honeypots constitute the low end of the deception technology space today. == Differentiation from competitive/cooperative technologies == Traditional cyber defense technologies such as firewalls and endpoint security seek primarily to defend a perimeter, but they cannot do so with 100% certainty.
Heuristics may find an attacker within the network, but often generate so many alerts that critical ones are missed. In a large enterprise, the alert volume may reach millions of alerts per day. Security operations personnel cannot process most of this activity easily, yet it only takes one successful penetration to compromise an entire network. This means cyber-attackers can penetrate these networks and move unimpeded for months, stealing data and intellectual property. Deception technology produces alerts that are the end product of a binary process. Probability is essentially reduced to two values: 0% and 100%. Any party that seeks to identify, ping, enter, or view a trap, or that utilizes a lure, is immediately identified as malicious by this behavior, because anyone touching these traps or lures should not be doing so. This certainty is an advantage over the many extraneous alerts generated by heuristic and probability-based systems. Best practice shows that deception technology is not a stand-alone strategy. Deception technology is an additional compatible layer to the existing defense-in-depth cyber defense. Integrations with partner technologies make it most useful. The goal is to add protection against the most advanced and sophisticated human attackers that will successfully penetrate the perimeter. == See also == Cybercrime Network security Proactive cyber defense == References == == Further reading == Lance Spitzner (2002). Honeypots: Tracking Hackers. Addison-Wesley. ISBN 0-321-10895-7. Sean Bodmer; Max Kilger; Gregory Carpenter; Jade Jones (2012). Reverse Deception: Organized Cyber Threat Counter-Exploitation. McGraw-Hill Education. ISBN 978-0071772495.
https://en.wikipedia.org/wiki/Deception_technology
Wearable technology is any technology that is designed to be used while worn. Common types of wearable technology include smartwatches, fitness trackers, and smartglasses. Wearable electronic devices are often close to or on the surface of the skin, where they detect, analyze, and transmit information such as vital signs and/or ambient data, and in some cases allow immediate biofeedback to the wearer. Wearable devices collect vast amounts of data from users making use of different behavioral and physiological sensors, which monitor their health status and activity levels. Wrist-worn devices include smartwatches with a touchscreen display, while wristbands are mainly used for fitness tracking and do not contain a touchscreen display. Wearable devices such as activity trackers are an example of the Internet of things, since "things" such as electronics, software, sensors, and connectivity are effectors that enable objects to exchange data (including data quality) through the internet with a manufacturer, operator, and/or other connected devices, without requiring human intervention. Wearable technology offers a wide range of possible uses, from communication and entertainment to improving health and fitness; however, there are worries about privacy and security, because wearable devices have the ability to collect personal data. Wearable technology has a variety of use cases, and that variety is growing as the technology is developed and the market expands. It can be used to encourage individuals to be more active and improve their lifestyle choices. Healthy behavior is encouraged by tracking activity levels and providing useful feedback to enable goal setting. This data can be shared with interested stakeholders such as healthcare providers. Wearables are popular in consumer electronics, most commonly in the form factors of smartwatches, smart rings, and implants.
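The immediate-biofeedback loop mentioned above can be sketched as a function a wearable might run over its recent heart-rate samples (a hypothetical illustration; the smoothing window and thresholds are made-up values, not clinical guidance):

```python
def heart_rate_feedback(samples_bpm, low=50, high=120):
    """Smooth raw heart-rate samples with a simple moving average and
    return immediate feedback, as a wearable's biofeedback loop might.
    The thresholds are illustrative only, not clinical guidance."""
    window = samples_bpm[-5:]            # last few sensor readings
    avg = sum(window) / len(window)      # crude noise reduction
    if avg < low:
        return "low"
    if avg > high:
        return "elevated"
    return "normal"
```

Averaging over a short window is the simplest way to keep a single noisy sensor reading from triggering spurious feedback; real devices use more elaborate filtering.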
Apart from commercial uses, wearable technology is being incorporated into navigation systems, advanced textiles (e-textiles), and healthcare. As wearable technology is proposed for use in critical applications, it is, like other technology, vetted for its reliability and security properties. == History == In the 1500s, German inventor Peter Henlein (1485–1542) created small watches that were worn as necklaces. A century later, pocket watches grew in popularity as waistcoats became fashionable for men. Wristwatches were created in the late 1600s but were worn mostly by women as bracelets. Pedometers were developed around the same time as pocket watches. The concept of a pedometer was described by Leonardo da Vinci around 1500, and the Germanic National Museum in Nuremberg has a pedometer in its collection from 1590. In the late 1800s, the first wearable hearing aids were introduced. In 1904, aviator Alberto Santos-Dumont pioneered the modern use of the wristwatch. In 1949, American biophysicist Norman Holter invented the very first health monitoring device. His invention, the Holter monitor, was groundbreaking as one of the first wearable devices capable of tracking vital health data outside of a clinical setting. In the 1970s, calculator watches became available, reaching the peak of their popularity in the 1980s. From the early 2000s, wearable cameras were being used as part of a growing sousveillance movement. Expectations, operations, usage and concerns about wearable technology were discussed at the first International Conference on Wearable Computing. In 2008, Ilya Fridman incorporated a hidden Bluetooth microphone into a pair of earrings. Big tech companies such as Apple, Samsung, and Fitbit have expanded on this idea by interfacing with smartphones and personal computer software to collect a wide variety of data. Wearable devices include dedicated health monitors, fitness bands, and smartwatches. In 2010, Fitbit released its first step counter.
Wearable technology which tracks information such as walking and heart rate is part of the quantified self movement. In 2013, McLear, also known as NFC Ring, released a "smart ring". The smart ring could make bitcoin payments, unlock other devices, and transfer personally identifying information, and also had other features. In 2013, one of the first widely available smartwatches was the Samsung Galaxy Gear. Apple followed in 2015 with the Apple Watch. In recent years, the adoption of healthcare information technologies has followed a more incremental approach within artificial intelligence (AI) and advanced data analytics to enhance diagnosis, real-time disease surveillance, and population health management. Predictive health monitoring now exists that models the daily habits of its users for the purpose of modifying health risk factors and improving the population's overall wellbeing. === Prototypes === From 1991 to 1997, Rosalind Picard and her students, Steve Mann and Jennifer Healey, at the MIT Media Lab designed, built, and demonstrated data collection and decision making from "Smart Clothes" that monitored continuous physiological data from the wearer. These "smart clothes", "smart underwear", "smart shoes", and smart jewellery collected data that related to affective state and contained or controlled physiological sensors and environmental sensors like cameras and other devices. At the same time, also at the MIT Media Lab, Thad Starner and Alex "Sandy" Pentland developed augmented reality. In 1997, their smartglass prototype was featured on 60 Minutes and enabled rapid web search and instant messaging. Though the prototype's glasses were nearly as streamlined as modern smartglasses, the processor was a computer worn in a backpack – the most lightweight solution available at the time. In 2009, Sony Ericsson teamed up with the London College of Fashion for a contest to design digital clothing.
The winner was a cocktail dress with Bluetooth technology, making it light up when a call is received. Zach "Hoeken" Smith of MakerBot fame made keyboard pants during a "Fashion Hacking" workshop at a New York City creative collective. The Tyndall National Institute in Ireland developed a "remote non-intrusive patient monitoring" platform which was used to evaluate the quality of the data generated by the patient sensors and how end users may adapt to the technology. More recently, London-based fashion company CuteCircuit created costumes for singer Katy Perry featuring LED lighting so that the outfits would change color both during stage shows and appearances on the red carpet, such as the dress Katy Perry wore in 2010 at the MET Gala in NYC. In 2012, CuteCircuit created the world's first dress to feature Tweets, as worn by singer Nicole Scherzinger. In 2010, McLear, also known as NFC Ring, developed prototypes of its "smart ring" devices, before a Kickstarter fundraising campaign in 2013. In 2014, graduate students from the Tisch School of the Arts in New York designed a hoodie that sent pre-programmed text messages triggered by gesture movements. Around the same time, prototypes for digital eyewear with a heads-up display (HUD) began to appear. The US military employs headgear with displays for soldiers, using a technology called holographic optics. In 2010, Google started developing prototypes of its optical head-mounted display Google Glass, which went into customer beta in March 2013. == Usage == In the consumer space, sales of smart wristbands (aka activity trackers such as the Jawbone UP and Fitbit Flex) started accelerating in 2013. One in five American adults have a wearable device, according to the 2014 PriceWaterhouseCoopers Wearable Future Report. As of 2009, the decreasing cost of processing power and other components was facilitating widespread adoption and availability.
In professional sports, wearable technology has applications in monitoring and real-time feedback for athletes. Examples of wearable technology in sport include accelerometers, pedometers, and GPS units, which can be used to measure an athlete's energy expenditure and movement pattern. In cybersecurity and financial technology, secure wearable devices have captured part of the physical security key market. McLear, also known as NFC Ring, and VivoKey developed products with one-time-pass secure access control. In health informatics, wearable devices have enabled better capturing of human health statistics for data-driven analysis. This has facilitated data-driven machine learning algorithms that analyse the health condition of users; for applications in health, see below. In business, wearable technology helps managers easily supervise employees by knowing their locations and what they are currently doing. Employees working in a warehouse also have increased safety when working around chemicals or lifting something. Smart helmets are employee safety wearables that have vibration sensors that can alert employees of possible danger in their environment. == Wearable technology and health == Wearable technology is often used to monitor a user's health. Given that such a device is in close contact with the user, it can easily collect data. This started as early as 1980, when the first wireless ECG was invented. In recent decades, there has been substantial growth in research on, for example, textile-based, tattoo, patch, and contact-lens sensors, as well as circulation of the notion of the "quantified self", transhumanism-related ideas, and growth of life extension research.
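The accelerometer-based movement tracking mentioned above can be illustrated with the simplest form of step counting: treating each upward crossing of an acceleration-magnitude threshold as one step (a sketch under assumed units and a made-up threshold; production pedometers use adaptive filtering and gait models):

```python
import math

def count_steps(accel_xyz, threshold=11.0):
    """Count steps as upward crossings of an acceleration-magnitude
    threshold — the simplest form of pedometer peak detection.
    accel_xyz: list of (x, y, z) samples in m/s^2; the threshold is a
    tuning constant chosen for illustration (gravity alone is ~9.8)."""
    steps = 0
    above = False
    for x, y, z in accel_xyz:
        mag = math.sqrt(x * x + y * y + z * z)
        if mag > threshold and not above:
            steps += 1        # rising edge of a peak = one step
            above = True
        elif mag <= threshold:
            above = False
    return steps
```

Because gravity alone contributes about 9.8 m/s², the threshold is set a little above that, so only the jolt of a footfall registers rather than the device simply sitting still.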
Wearables can be used to collect data on a user's health, including:
- Heart rate
- Sleep patterns
- Stress levels
- Fertile periods
- Energy score
- Blood oxygen
- Body composition and water levels
- ECG
- Calories burned
- Steps walked
- Blood pressure
- Release of certain biochemicals
- Time spent exercising
- Seizures
- Physical strain
These functions are often bundled together in a single unit, like an activity tracker or a smartwatch such as the Apple Watch Series 2 or Samsung Galaxy Gear Sport. Devices like these are used for physical training and monitoring overall physical health, as well as alerting to serious medical conditions such as seizures (e.g. Empatica Embrace2). === Medical uses === While virtual reality (VR) was originally developed for gaming, it can also be used for rehabilitation. Virtual reality headsets are given to patients, who are instructed to complete a series of tasks in a game format. This has significant benefits compared to traditional therapies. For one, it is more controllable; the operator can change the environment to anything desired, including settings that may help patients conquer a fear, as in the case of PTSD. Another benefit is the price: on average, traditional therapies cost several hundred dollars per hour, whereas VR headsets cost only several hundred dollars and can be used whenever desired. In patients with neurological disorders like Parkinson's disease, game-format therapy lets multiple skills be exercised at once, simultaneously stimulating several different parts of the brain. VR's usage in physical therapy is still limited, as there is insufficient research. Some research has pointed to the occurrence of motion sickness while performing intensive tasks, which can be detrimental to the patient's progress.
Detractors also point out that total dependence on VR can lead to self-isolation and to becoming overly dependent on technology, preventing patients from interacting with their friends and family. There are concerns about privacy and safety, as the VR software needs patient data and information to be effective, and this information could be compromised during a data breach, as in the case of 23andMe. The lack of trained medical experts, coupled with the long learning curve involved in the recovery program, may result in patients not realizing their mistakes and recovery taking longer than expected. Cost and accessibility are another issue: while VR headsets are significantly cheaper than traditional physical therapy, many add-ons could raise the price, making the approach inaccessible to many, and base models may be less effective than higher-end models, which may lead to a digital divide. Overall, VR healthcare solutions are not meant to compete with traditional therapies, as research shows that physical therapy is more effective when the two are combined. Research into VR rehabilitation continues to expand, with new work on haptics that would allow users to feel their environment and to incorporate their hands and feet into their recovery plan. More sophisticated VR systems are also being developed that allow users to use their entire body in their recovery, with sensors that allow medical professionals to collect data on muscle engagement and tension using electrical impedance tomography, a form of noninvasive imaging for viewing muscle usage. Another concern is the lack of major funding from large companies and governments in this field. Many of these VR sets are off-the-shelf items not purpose-built for medical use, and external add-ons are usually 3D-printed or made from spare parts from other electronics.
This lack of support means that patients who want to try this method have to be technically savvy, which is unlikely, as many ailments only appear later in life. Additionally, certain parts of VR, like haptic feedback and tracking, are still not advanced enough to be used reliably in a medical setting. Another issue is the number of VR devices available for purchase. While this does increase the options available, the differences between VR systems could impact patient recovery. The vast number of VR devices also makes it difficult for medical professionals to give and interpret information, as they might not have had practice with the specific model, which could lead to faulty advice being given out. === Applications === Currently, other applications within healthcare are being explored, such as:
- Monitoring of glucose, alcohol, and lactate or blood oxygen; breath monitoring; heartbeat, heart rate and its variability; electromyography (EMG), electrocardiogram (ECG) and electroencephalogram (EEG); body temperature; pressure (e.g. in shoes); sweat rate or sweat loss; and levels of uric acid and ions – e.g. for preventing fatigue or injuries or for optimizing training patterns, including via "human-integrated electronics"
- Forecasting changes in mood, stress, and health
- Measuring blood alcohol content
- Measuring athletic performance
- Monitoring how sick the user is
- Detecting early signs of infection
- Long-term monitoring of patients with heart and circulatory problems, recording an electrocardiogram while being self-moistening
- Health risk assessment applications, including measures of frailty and risks of age-dependent diseases
- Automatic documentation of care activities
- Days-long continuous imaging of diverse organs via a wearable bioadhesive stretchable high-resolution ultrasound imaging patch or, e.g., a wearable continuous heart ultrasound imager
(potential novel diagnostic and monitoring tools)
- Sleep tracking
- Cortisol monitoring for measuring stress
- Measuring relaxation or alertness, e.g. to adjust their modulation or to measure the efficacy of modulation techniques
==== Proposed applications ==== Proposed applications, including applications without functional wearable prototypes, include:
- Tracking physiological changes such as stress levels and heartbeat of "experiencers" or "contactees" of the UFO-sighting, anomalous physiological effects and alien abduction/contact/sighting phenomena, including "experiencer group research"
- Pathogen detection and detection of hazardous substances
- Improving sleep via sleeping caps
=== Applications to COVID-19 === Various wearable technologies have been developed to help with the diagnosis of COVID-19. Oxygen levels, antibody detection, blood pressure, heart rate, and more are monitored by small sensors within these devices. ==== Wearable Devices to Detect Symptoms of COVID-19 ====
- Smart lenses
- On-teeth sensors
- Face masks
- Smart textiles
- Electronic epidermal tattoos
- Micro needle patches
- Wristbands
- Smart rings
- Smartwatches
==== Smartwatches ==== Wearable technology such as Apple Watches and Fitbits has been used to potentially diagnose symptoms of COVID-19. Monitors within the devices have been designed to detect heart rate, blood pressure, oxygen level, etc. The diagnostic capabilities of wearable devices propose an easier way to detect abnormalities within the human body. However, estimation and prediction techniques of wearable technology for COVID-19 have several flaws due to the inability to differentiate between COVID-19 and other illnesses: elevations in blood pressure or heart rate, as well as fluctuations in oxygen level, can be attributed to other sicknesses ranging from the common cold to respiratory diseases.
The inability to differentiate these illnesses has caused "unnecessary stress in patients, raising concern on the implementation of wearables for health." Remote monitoring devices and Internet-of-Things (IoT) systems are also being progressively deployed for managing chronic illnesses through remote patient care and shared decision-making. However, more policy and implementation efforts remain vital to fully harness digital health potentials while ensuring equitable access. ==== Smart Masks ==== In addition to wearable devices such as watches, professionals have designed face masks with built-in sensors for individuals to use during the COVID-19 pandemic. The built-in sensors were designed to detect characteristics of exhaled breath such as "patterns and rates of respiration, biomarkers of inflammation and the potential detection of airborne pathogens." Smart masks "contain a sensor that monitors the presence of a SARS-CoV-2 protease in the breath." Contained in the mask is a blister pack which, when broken, causes a chemical reaction to occur. As a result of the chemical reaction, the sensor will turn blue if the virus is detected in an individual's breath. Issues occur, however, with the amount of protease needed to warrant a correct result from the sensor. An individual's breath only contains protease once cells die; the protease then makes its way out of the body in fluids such as saliva, and through breathing. If too little protease is present, the mask may not be able to detect it, causing a false result. ==== Smart Lenses ==== Smart lenses have been developed to record intraocular pressure. The lens conforms to the eyeball and contains sensors that monitor glucose levels, eye movement, and certain biomarkers for particular diseases. Built into the lenses are microelectronics and processing units that are responsible for data collection.
With the innovation of technology, smart lenses have the potential to "incorporate displays that superimpose information onto what the wearer sees." ==== Smart Textiles ==== Smart textiles have been developed to monitor skin temperature and metabolites. These textiles contain sensors which are composed of three basic parts: "containing substrate, active elements, and electrode/interconnect." Although smart textiles can provide a way for individuals to diagnose abnormalities in their body, there are a multitude of challenges associated with their usage. Economic burdens to patients and hospitals, as well as the high cost of purchase and upkeep, hinder the adoption of smart textiles. The development of these sensors also faces many challenges, such as "the selection of suitable substrates, biocompatible materials, and manufacturing techniques, as well as the instantaneous monitoring of different analysts[sic], the washability, and uninterrupted signal display circuits." ==== Smart Rings ==== Smart rings have been developed to monitor blood pressure. ==== Micro Needle Patches ==== Micro needle patches have been developed to monitor metabolites, inflammation markers, drugs, etc. They are also advantageous for various reasons: "improved immunogenicity, dose-sparing effects, low manufacturing costs...ease of use...and greater acceptability compared to traditional hypodermic injections." The implementation of micro needle patches is expected to expedite the vaccination process, making it more applicable, efficient, and cost-effective. === Contemporary use === Living a healthy life does not depend solely on eating healthily, sleeping well, or exercising a few times a week. Rather, it is deeply connected to a variety of physiological and biochemical processes in the body in relation to physical activity and lifestyle.
In the past several years, the emergence of technological devices better known as "wearable technology" has improved the ability to measure physical activity and has enabled both ordinary users and specialists such as cardiologists to analyze parameters related to quality of life. Wearable technology consists of devices that people can wear at all times, throughout the day and night. They help measure values such as heartbeat and rhythm, quality of sleep, and total steps in a day, and may help recognize certain diseases such as heart disease, diabetes, and cancer. They may suggest ways to improve one's health and ward off certain impending diseases. These devices give daily feedback on what to improve and which areas people are doing well in, which motivates users to keep up an improved lifestyle. Over time, wearable technology has had an immense impact on the health and physical activity market; according to Pevnick et al. (2018), "The consumer-directed wearable technology market is rapidly growing and expected to exceed $34B by 2020." This shows how the wearable technology sector is gaining acceptance among people who want to improve their health and quality of life. Wearable technology comes in many forms, from watches, pads placed on the heart, and devices worn around the arms, to devices that can measure data simply through touching the device's receptors. In many cases, wearable technology is connected to an app that relays the information immediately, ready to be analyzed and discussed with a cardiologist. In addition, the American Journal of Preventive Medicine states, "wearables may be a low-cost, feasible, and accessible way for promoting PA." Essentially, this suggests that wearable technology can be beneficial to everyone and is not cost-prohibitive.
Also, consistently seeing wearable technology actually being used and worn by other people promotes the idea of physical activity and pushes more individuals to take part. Wearable technology also helps with monitoring chronic disease development and physical activity in context. For example, according to the American Journal of Preventive Medicine, "Wearables can be used across different chronic disease trajectory phases (e.g., pre- versus post-surgery) and linked to medical record data to obtain granular data on how activity frequency, intensity, and duration changes over the disease course and with different treatments." Wearable technology can be beneficial in tracking and helping analyze how one is performing over time, and how performance changes with different diets, workout routines, or sleep patterns. Not only can wearable technology help measure results pre- and post-surgery, it can also help measure progress as someone rehabilitates from a chronic disease such as cancer or heart disease. Wearable technology has the potential to create new and improved ways of looking at health and interpreting the science behind it. It can propel medicine to higher levels and has already made a significant impact on how patients are diagnosed, treated, and rehabilitated over time. However, extensive research is still needed on how to properly integrate wearable technology into health care and how best to utilize it. In addition, despite the benefits of wearable technology, much research still has to be completed before wearable technology can be transitioned toward very sick, high-risk patients.
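One simple way such longitudinal tracking can surface problems is to compare each new reading against the wearer's own recent baseline. The sketch below flags heart-rate readings that deviate sharply from a sliding-window average; the window size and z-score threshold are illustrative assumptions, not values taken from any commercial device.

```python
import statistics

def flag_anomalies(heart_rates, window=20, z_thresh=3.0):
    """Flag heart-rate readings that deviate sharply from the wearer's
    recent baseline, using a z-score over a sliding window.

    Returns the indices of readings flagged as anomalous.
    """
    flagged = []
    for i in range(window, len(heart_rates)):
        baseline = heart_rates[i - window:i]
        mean = statistics.mean(baseline)
        std = statistics.pstdev(baseline)
        # Skip perfectly flat baselines, where a z-score is undefined.
        if std > 0 and abs(heart_rates[i] - mean) / std > z_thresh:
            flagged.append(i)
    return flagged
```

Because the baseline is personal rather than population-wide, this kind of check naturally adapts to individual differences, which is the limitation one-size-fits-all algorithms run into.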
=== Sense-making of the data === While wearables can collect data in aggregate form, most of them are limited in their ability to analyze or draw conclusions from this data; thus, most are used primarily for general health information. End-user perception of how their data is used plays a big role in how such datasets can be fully utilized. Exceptions include seizure-alerting wearables, which continuously analyze the wearer's data and decide whether to call for help; the data collected can then provide doctors with objective evidence that they may find useful in diagnoses. Wearables can account for individual differences, although most just collect data and apply one-size-fits-all algorithms. Software on the wearables may analyze the data directly or send the data to nearby devices, such as a smartphone, which processes, displays or uses the data for analysis. For analysis and real-time sense-making, machine learning algorithms can also be used. Collected data are wirelessly analyzed using statistics and presented with visualization techniques that show changes over time. This information can then be shared via the internet with healthcare providers to make informed decisions about the user's healthcare. === Use in surveillance === Today, there is growing interest in using wearables not only for individual self-tracking, but also within corporate health and wellness programs. Given that wearables create a massive data trail which employers could repurpose for objectives other than health, more and more research has begun to study privacy- and security-related issues of wearables, including in relation to the surveillance of workers. Asha Peta Thompson founded Intelligent Textiles, which creates woven power banks and circuitry that can be used in e-uniforms for infantry. Currently, data is not owned by the users themselves, but rather by the company that produces the wearable device.
The user only has access to an aggregated summary of their data, while the raw data can be sold to third parties. These issues raise serious concerns for individuals making use of wearable devices. == By form factor == Wearable technology can exist in multiple form factors. Popular smartwatches include the Samsung Galaxy Watch and the Apple Watch. A popular smart ring is the McLear Ring. A popular implant is the Dangerous Things NExT RFID + NFC Chip Implant, although it is implanted rather than worn. === Head-worn === Glasses (including but not limited to smartglasses) are head-worn wearable technology. ==== Headgear ==== Headcaps, for example to measure EEG, are head-worn. A study indicates EEG headgear could be used for neuroenhancement, concluding that a "visual flicker paradigm to entrain individuals at their own brain rhythm (i.e. peak alpha frequency)" results in substantially faster perceptual visual learning, maintained the day following training. There is research into various forms of neurostimulation, with various approaches including the use of wearable technology. Another application may be supporting the induction of lucid dreams, albeit "better-controlled validation studies are necessary to prove the effectiveness". === Epidermal electronics (skin-attached) === Epidermal electronics is an emerging field of wearable technology, so termed for properties and behaviors comparable to those of the epidermis, the outermost layer of the skin. These wearables are mounted directly onto the skin to continuously monitor physiological and metabolic processes, both dermal and subdermal. These devices are typically battery-powered, with wireless capability achieved through Bluetooth or NFC, making them convenient and portable. Currently, epidermal electronics are being developed in the fields of fitness and medical monitoring. Current usage of epidermal technology is limited by existing fabrication processes.
Its current application relies on various sophisticated fabrication techniques, such as lithography or direct printing on a carrier substrate before attachment to the body. Printing epidermal electronics directly on the skin has so far been explored in only a single study. The significance of epidermal electronics lies in their mechanical properties, which resemble those of skin. The skin can be modeled as a bilayer composed of an epidermis with Young's modulus (E) of 2–80 kPa and thickness of 0.3–3 mm, and a dermis with E of 140–600 kPa and thickness of 0.05–1.5 mm. Together this bilayer responds plastically to tensile strains ≥ 30%, below which the skin's surface stretches and wrinkles without deforming. Properties of epidermal electronics mirror those of skin, allowing them to perform in this same way. Like skin, epidermal electronics are ultrathin (h < 100 μm), low-modulus (E ≈ 70 kPa), and lightweight (<10 mg/cm2), enabling them to conform to the skin without applying strain. Conformal contact and proper adhesion enable the device to bend and stretch without delaminating, deforming or failing, thereby eliminating the challenges of conventional, bulky wearables, including measurement artifacts, hysteresis, and motion-induced irritation to the skin. With this inherent ability to take the shape of skin, epidermal electronics can accurately acquire data without altering the natural motion or behavior of skin. The thin, soft, flexible design of epidermal electronics resembles that of temporary tattoos laminated on the skin. Essentially, these devices are "mechanically invisible" to the wearer. Epidermal electronic devices may adhere to the skin via van der Waals forces or elastomeric substrates. With only van der Waals forces, an epidermal device has the same thermal mass per unit area (150 mJ/cm2K) as skin when the device's thickness is <500 nm.
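The design targets quoted above (ultrathin, low-modulus, lightweight) can be gathered into a small illustrative check. The threshold values come directly from the figures in this section; treating them as hard pass/fail limits is a simplification, since real conformality also depends on adhesion and substrate mechanics.

```python
def is_skin_conformal(youngs_modulus_kpa, thickness_um, areal_mass_mg_cm2):
    """Check a candidate epidermal device against the design targets
    quoted in the text: low-modulus (~70 kPa), ultrathin (< 100 um),
    and lightweight (< 10 mg/cm^2). Purely illustrative; real
    conformality also depends on adhesion and substrate mechanics.
    """
    # The epidermis modulus spans roughly 2-80 kPa; a device at or
    # below ~80 kPa deforms with the skin rather than constraining it.
    low_modulus = youngs_modulus_kpa <= 80
    ultrathin = thickness_um < 100
    lightweight = areal_mass_mg_cm2 < 10
    return low_modulus and ultrathin and lightweight
```

For example, a 70 kPa, 50 μm, 5 mg/cm2 device meets all three targets, while a stiff 5 MPa film of the same thickness does not.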
Along with van der Waals forces, the low values of E and thickness are effective in maximizing adhesion, because they prevent deformation-induced detachment due to tension or compression. Introducing an elastomeric substrate can improve adhesion but raises the thermal mass per unit area slightly. Several materials have been studied to produce these skin-like properties, including photolithography-patterned serpentine gold nanofilm and patterned doping of silicon nanomembranes. === Foot-worn === Smart shoes are an example of wearable technology that incorporates smart features into footwear. Smart shoes often work with smartphone applications to support tasks that cannot be done with standard footwear, such as vibrating the smartphone to tell users when and where to turn to reach their destination via Google Maps, or self-lacing. Self-lacing sneaker technology, similar to the Nike Mag in Back to the Future Part II, is another use of the smart shoe. In 2019, German footwear company Puma was recognized in the "100 Best Inventions of 2019" by Time for its Fi laceless shoe, which uses micro-motors to adjust the fit from an iPhone. Nike also introduced a smart shoe in 2019, known as the Adapt BB. The shoe featured buttons on the side to loosen or tighten the fit with a custom motor and gear, which could also be controlled by a smartphone. == Modern technologies == On April 16, 2013, Google invited "Glass Explorers" who had pre-ordered its wearable glasses at the 2012 Google I/O conference to pick up their devices. This day marked the official launch of Google Glass, a device intended to deliver rich text and notifications via a heads-up display worn as eyeglasses. The device also had a 5 MP camera and recorded video at 720p. Its various functions were activated via voice command, such as "OK Glass". The company also launched the Google Glass companion app, MyGlass.
The first third-party Google Glass app came from The New York Times, which was able to read out articles and news summaries. However, in early 2015, Google stopped selling the beta "explorer edition" of Glass to the public, after criticism of its design and the $1,500 price tag. While optical head-mounted display technology remains a niche, two popular types of wearable devices have taken off: smartwatches and activity trackers. In 2012, ABI Research forecast that sales of smartwatches would hit $1.2 million in 2013, helped by the high penetration of smartphones in many world markets, the wide availability and low cost of MEMS sensors, energy-efficient connectivity technologies such as Bluetooth 4.0, and a flourishing app ecosystem. Crowdfunding-backed start-up Pebble reinvented the smartwatch in 2013, with a Kickstarter campaign that raised more than $10m in funding. At the end of 2014, Pebble announced it had sold a million devices. In early 2015, Pebble went back to its crowdfunding roots to raise a further $20m for its next-generation smartwatch, Pebble Time, which started shipping in May 2015. Crowdfunding-backed start-up McLear invented the smart ring in 2013, with a Kickstarter campaign that raised more than $300k in funding. McLear was a first mover in wearable technology, introducing payments, bitcoin payments, advanced secure access control, quantified-self data collection, biometric data tracking, and monitoring systems for the elderly. In March 2014, Motorola unveiled the Moto 360 smartwatch powered by Android Wear, a modified version of the mobile operating system Android designed specifically for smartwatches and other wearables. Finally, following more than a year of speculation, Apple announced its own smartwatch, the Apple Watch, in September 2014.
Wearable technology was a popular topic at the trade show Consumer Electronics Show in 2014, with the event dubbed "The Wearables, Appliances, Cars and Bendable TVs Show" by industry commentators. Among numerous wearable products showcased were smartwatches, activity trackers, smart jewelry, head-mounted optical displays and earbuds. Nevertheless, wearable technologies are still suffering from limited battery capacity. Another field of application of wearable technology is monitoring systems for assisted living and eldercare. Wearable sensors have a huge potential in generating big data, with a great applicability to biomedicine and ambient assisted living. For this reason, researchers are moving their focus from data collection to the development of intelligent algorithms able to glean valuable information from the collected data, using data mining techniques such as statistical classification and neural networks. Wearable technology can also collect biometric data such as heart rate (ECG and HRV), brainwave (EEG), and muscle bio-signals (EMG) from the human body to provide valuable information in the field of health care and wellness. Another increasingly popular wearable technology involves virtual reality. VR headsets have been made by a range of manufacturers for computers, consoles, and mobile devices. Recently Google released their headset, the Google Daydream. In addition to commercial applications, wearable technology is being researched and developed for a multitude of uses. The Massachusetts Institute of Technology is one of the many research institutions developing and testing technologies in this field. For example, research is being done to improve haptic technology for its integration into next-generation wearables. Another project focuses on using wearable technology to assist the visually impaired in navigating their surroundings. As wearable technology continues to grow, it has begun to expand into other fields. 
The integration of wearables into healthcare has been a focus of research and development for various institutions. Wearables continue to evolve, moving beyond devices and exploring new frontiers such as smart fabrics. Applications involve using a fabric to perform a function, such as integrating a QR code into the textile, or performance apparel that increases airflow during exercise. == Entertainment == Wearables have expanded into the entertainment space by creating new ways to experience digital media. Virtual reality headsets and augmented reality glasses have come to exemplify wearables in entertainment. The influence of these devices was seen mostly in the gaming industry in their early days, but they are now also used in the fields of medicine and education. Virtual reality headsets such as the Oculus Rift, HTC Vive, and Google Daydream View aim to create a more immersive media experience by either simulating a first-person experience or displaying the media in the user's full field of vision. Television, films, video games, and educational simulators have been developed for these devices, for use by working professionals and consumers. At a 2014 expo, Ed Tang of Avegant presented his "Smart Headphones". These headphones use a virtual retinal display to enhance the experience of the Oculus Rift. Some augmented reality devices fall under the category of wearables. Augmented reality glasses are currently in development by several corporations. Snap Inc.'s Spectacles are sunglasses that record video from the user's point of view and pair with a phone to post videos on Snapchat. Microsoft has also delved into this business, releasing its augmented reality glasses, HoloLens, in 2017. The device explores the use of digital holography, or holograms, to give the user a first-hand experience of augmented reality. These wearable headsets are used in many different fields, including the military.
Wearable technology has also expanded from small pieces of technology on the wrist to apparel all over the body. The company ShiftWear makes a shoe that uses a smartphone application to periodically change the design displayed on the shoe. The shoe is designed using normal fabric but utilizes a display along the midsection and back that shows a design of the wearer's choice. The application was available by 2016, and a prototype of the shoes was created in 2017. Another example can be seen in Atari's headphone speakers: Atari and Audiowear are developing a face cap with built-in speakers. The cap will feature speakers built into the underside of the brim and will have Bluetooth capabilities. In 2018, Jabra released earbuds that cancel the noise around the user and can toggle a setting called "hearthrough." This setting takes the sound around the user through the microphone and sends it to the user, giving an augmented sound during a commute so users can hear their surroundings while listening to their favorite music. Many other devices can be considered entertainment wearables; they need only be devices worn by the user to experience media. === Gaming === The gaming industry has always incorporated new technology. The first technology used for electronic gaming was a controller for Pong. The way users game has continuously evolved through each decade. Currently, the two most common forms of gaming are using a controller for video game consoles or a mouse and keyboard for PC games. In 2012, virtual reality headsets were reintroduced to the public. VR headsets were first conceptualized in the 1950s and officially created in the 1960s. The creation of the first virtual reality headset can be credited to cinematographer Morton Heilig, who created a device known as the Sensorama in 1962. The Sensorama was a videogame-like device so heavy that it needed to be held up by a suspension device.
There have been numerous wearable technologies within the gaming industry, from gloves to foot boards. The gaming space has offbeat inventions. In 2016, Sony debuted its first portable, connectable virtual reality headset, codenamed Project Morpheus. The device was rebranded for PlayStation in 2018. In early 2019, Microsoft debuted the HoloLens 2, which goes beyond virtual reality into mixed reality. Its main focus is use by working professionals to help with difficult tasks. These headsets are used by educators, scientists, engineers, military personnel, surgeons, and many more. Headsets such as the HoloLens 2 allow the user to see a projected image at multiple angles and interact with the image. This helps give users a hands-on experience which they otherwise would not be able to get. == Military == Wearable technology within the military ranges from educational purposes and training exercises to sustainability technology. The technology used for educational purposes within the military mainly consists of wearables that track a soldier's vitals. Tracking a soldier's heart rate, blood pressure, emotional status, etc. helps the research and development team best support the soldiers. The chemist Matt Coppock has started to enhance soldier lethality by collecting different biorecognition receptors; doing so will eliminate emerging environmental threats to soldiers. With the emergence of virtual reality, it is only natural to start creating simulations using VR, which better prepare users for whatever situation they are training for. In the military there are combat simulations that soldiers train on. The military uses VR to train its soldiers because it is the most interactive and immersive experience a user can have without being put in a real situation. Recent simulations include a soldier wearing a shock belt during a combat simulation.
Each time they are shot, the belt releases a certain amount of electricity directly to the user's skin. This is meant to simulate a gunshot wound in the most humane way possible. There are many sustainability technologies that military personnel wear in the field. One of these is a boot insert. This insert gauges how soldiers are carrying the weight of their equipment and how daily terrain factors affect their mission planning. These sensors not only help the military plan the best timeline but also help keep the soldiers in the best physical and mental health. == Fashion == Fashionable wearables are "designed garments and accessories that combine aesthetics and style with functional technology." Garments are the interface to the exterior, mediated through digital technology, and allow endless possibilities for the dynamic customization of apparel. All clothes have social, psychological and physical functions; with the use of technology, however, these functions can be amplified. Some wearables are called e-textiles: combinations of textiles (fabric) and electronic components that create wearable technology within clothing. They are also known as smart textiles and digital textiles. Wearables are made from a functionality perspective or from an aesthetic perspective. When made from a functionality perspective, designers and engineers create wearables to provide convenience to the user. Clothing and accessories are used as tools to provide assistance to the user. Designers and engineers are working together to incorporate technology into the manufacturing of garments in order to provide functionalities that can simplify the lives of users. For example, through smartwatches people have the ability to communicate on the go and track their health. Moreover, smart fabrics interact directly with the user, sensing the wearer's movements. This helps to address concerns such as privacy, communication and well-being. 
Years ago, fashionable wearables were functional but not very aesthetic. As of 2018, wearables are quickly growing to meet fashion standards through the production of garments that are stylish and comfortable. Furthermore, when wearables are made from an aesthetic perspective, designers experiment in their work by using technology and collaborating with engineers. These designers explore the different techniques and methods available for incorporating electronics into their designs. They are not constrained by one set of materials or colors, as these can change in response to the sensors embedded in the apparel. They can decide how their designs adapt and respond to the user. In 1967, the French fashion designer Pierre Cardin, known for his futuristic designs, created a collection of garments entitled "robe electronique" that featured a geometric embroidered pattern with LEDs (light-emitting diodes). Pierre Cardin's unique designs were featured in an episode of the animated show The Jetsons, in which one of the main characters demonstrates how her luminous "Pierre Martian" dress works by plugging it into the mains. An exhibition about the work of Pierre Cardin was recently on display at the Brooklyn Museum in New York. In 1968, the Museum of Contemporary Craft in New York City held an exhibition named Body Covering which presented the infusion of technological wearables with fashion. Some of the projects presented included clothing that changed temperature and party dresses that lit up and produced noises, among others. The designers from this exhibition creatively embedded electronics into clothes and accessories to create these projects. As of 2018, fashion designers continue to explore this method in the manufacturing of their designs by pushing the limits of fashion and technology. 
=== House of Holland and NFC Ring === McLear, also known as NFC Ring, in partnership with Henry Holland's fashion label House of Holland and Visa Europe Collab, showcased an event entitled "Cashless on the Catwalk" at the Collins Music Hall in Islington. Celebrities walking through the event could, for the first time in history, make purchases from a wearable device by tapping McLear's NFC Rings on a purchase terminal. === CuteCircuit === CuteCircuit pioneered the concept of interactive and app-controlled fashion with the creation in 2008 of the Galaxy Dress (part of the permanent collection of the Museum of Science and Industry in Chicago, US) and in 2012 of the tshirtOS (now infinitshirt). CuteCircuit fashion designs can interact and change colour, providing the wearer a new way of communicating and expressing their personality and style. CuteCircuit's designs have been worn on the red carpet by celebrities such as Katy Perry and Nicole Scherzinger, and are part of the permanent collection of the Museum of Fine Arts in Boston. === Project Jacquard === Project Jacquard, a Google project led by Ivan Poupyrev, has been combining clothing with technology. Google collaborated with Levi Strauss to create a jacket that has touch-sensitive areas that can control a smartphone. The cuff-links are removable and charge in a USB port. === Intel and Chromat === Intel partnered with the brand Chromat to create a sports bra that responds to changes in the body of the user, as well as a 3D-printed carbon fiber dress that changes color based on the user's adrenaline levels. Intel also partnered with Google and TAG Heuer to make a smart watch. === Iris van Herpen === Smart fabrics and 3D printing have been incorporated into high fashion by the designer Iris van Herpen. Van Herpen was the first designer to incorporate the 3D printing technology of rapid prototyping into the fashion industry. The Belgian company Materialise NV collaborates with her in the printing of her designs. 
=== Manufacturing process of e-textiles === There are several methods by which companies manufacture e-textiles, from fiber to garment, and insert electronics into the process. One method being developed prints stretchable circuits directly into a fabric using conductive ink, which relies on metal fragments in the ink to become electrically conductive. Another method uses conductive thread or yarn: a non-conductive fiber (such as polyester, PET) is coated with a conductive material such as gold or silver to produce coated yarns for an e-textile. E-textiles are also commonly fabricated with traditional textile methods. == UI/UX Design == In approaching user experience (UX) for wearables, data collected from the sensors is transferred wirelessly via a linked cloud database. These data can be analyzed using statistics and presented through user interface (UI) graphics that clearly visualize the user's habits over time. When working on such a tiny canvas with limited space, essential information with short interactions and a simple UX flow is the driving factor of efficient wearable design. Key factors to consider include core functionality, responsiveness, visual design, navigation, and animation. A wearable's core functionality includes simple actions such as reading messages or controlling a fitness app. Most designs are kept simple, so that they fit devices with varying screen sizes, resolutions, and processing power. Responsiveness is also crucial, as sluggish interactions, such as a user needing to twist and turn their wrist to get a gesture to work as intended, can be highly frustrating in the long run. Furthermore, visual design and navigation are core factors in creating a strong UI hierarchy in such a small space. Paired smartly with graphics, shapes, and colours, wordiness can be minimized through quick interactions with users. 
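As an illustrative sketch of the statistics step described above, the following snippet smooths a stream of sensor readings before they are drawn in a UI. The sample values and the `rolling_average` helper are hypothetical, not tied to any particular device or vendor API:

```python
from statistics import mean

# Hypothetical heart-rate samples (beats per minute) synced from a wearable;
# in practice these would arrive via the vendor's cloud database.
samples = [62, 65, 71, 90, 88, 76, 64, 60]

def rolling_average(values, window=3):
    """Smooth sensor noise with a simple trailing moving average."""
    return [round(mean(values[max(0, i - window + 1):i + 1]), 1)
            for i in range(len(values))]

smoothed = rolling_average(samples)   # trend line the UI would plot
peak = max(samples)                   # single headline number for a small screen
```

The smoothing keeps the UI uncluttered: a small watch face shows the trend and one headline statistic rather than every raw reading.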
Miller argues that “animations can make smartwatch UX fun, but shouldn’t be a priority”. Too many animations can cause information bloat or decrease the battery life of the wearable. UX design in smartwatches has written its own set of rules, with UX designers constantly innovating unique ways to deliver an efficient and seamless experience. The UI and UX design of health monitoring wearables are crucial in ensuring that users can interact with their devices efficiently and securely. Since most wearable devices have small screens, their UI must be intuitive, providing clear and simple navigation. However, privacy settings and data-sharing controls are often buried within complex menus, making it difficult for users to manage their data preferences. Many users are unaware of the extent to which their personal health data is collected and shared, due to poorly designed consent mechanisms. A survey from the University of Fort Hare found that 52% of participants were not familiar with security policies, 47% had no concern about who had access to their private data, only 35% were largely aware of the information stored or transmitted on their devices, and only a quarter of participants backed up sensitive data routinely and tested recovery periodically. The findings of this study also suggested that half of the respondents did not understand that there was a need to protect their health information. There seemed to be a lack of general awareness surrounding health and data privacy. Terms of service agreements are often long and difficult to understand, leading users to agree to data collection without fully comprehending the implications. A well-designed UI and UX should prioritize transparency, providing clear and accessible privacy settings, easy-to-understand consent processes, and secure authentication methods. However, formal assessment or peer review of mobile applications remains largely untested in the context of wearable devices. 
Enhancing privacy controls through better design can help users take ownership of their data and minimize risks associated with unauthorized access. == Issues and concerns == The FDA's draft guidance for low-risk devices advises that personal health wearables are general wellness products if they only collect data on weight management, physical fitness, relaxation or stress management, mental acuity, self-esteem, sleep management, or sexual function. This was due to the privacy risks surrounding the devices. As these devices were increasingly used and improved, they would soon be able to tell whether a person is showing certain health issues and suggest a course of action. With adoption rising, the FDA drafted this guidance to decrease the risk to a patient in case an app does not function properly. The ethics are also debated: although such devices help track health and promote independence, gathering that information still entails an invasion of privacy. The huge amounts of data that have to be transferred could raise issues for both users and companies if a third party gets access to the data. Google Glass, when used by surgeons to track a patient's vital signs, had privacy issues relating to third-party use of non-consented information. Consent is also an issue with wearable technology, because it makes it possible to record a person without asking their permission. Compared to smartphones, wearable devices pose several new reliability challenges to device manufacturers and software developers. 
Limited display area, limited computing power, limited volatile and non-volatile memory, non-conventional shapes of the devices, abundance of sensor data, complex communication patterns of the apps, and limited battery size: all these factors can contribute to salient software bugs and failure modes, such as resource starvation or device hangs. Moreover, since many wearable devices are used for health purposes (either monitoring or treatment), their accuracy and robustness issues can give rise to safety concerns. Some tools have been developed to evaluate the reliability and the security properties of these wearable devices. The early results point to a weak spot of wearable software whereby overloading the devices, such as through high UI activity, can cause failures. Privacy and security risks remain significant concerns in the use of health monitoring wearables. As these devices collect and transmit sensitive health data, they become vulnerable to cyberattacks and unauthorized data access. Several case studies highlight these risks, exposing how user data can be exploited or misused. For example, period-tracking apps such as Flo have faced criticism for sharing user data with third-party companies for targeted advertising. Shipp illustrates the prevalence of app developers who use third-party libraries and services to monetize their apps or integrate other platforms. She states that the goal of third-party code is often to collect information about user interactions with apps. Opal Pandya, a 25-year-old Philadelphian, reported receiving Instagram ads for products to alleviate menstrual symptoms shortly after logging her cycle on the Flo app, revealing how her private health data was shared across multiple platforms. Similarly, the Apple Watch, which tracks ovulation through temperature monitoring, raises concerns about data privacy, especially regarding the potential misuse of reproductive health information. 
In regions where abortion is illegal, such data could even be used against women in legal cases, posing serious ethical concerns. Another alarming example is the Strava fitness tracking app, which inadvertently exposed the locations of U.S. military personnel in conflict zones such as Syria and Iraq. Strava's "heat map" feature revealed the presence of military bases and allowed access to sensitive information such as users' names, movement patterns, and even heart rates. These three cases demonstrate the urgent need for stronger privacy protections and more transparent data practices in the design of health monitoring wearables. == See also == == References == == External links == "Wear your heart on your sleeve" - physics.org "The Future of Wearable Technology" - video by Off Book
https://en.wikipedia.org/wiki/Wearable_technology
A technology company (or tech company) is a company that focuses primarily on the manufacturing, support, research and development of technology-intensive products and services, most commonly based on computing, telecommunications and consumer electronics; these include businesses relating to digital electronics, software, optics, new energy, and Internet-related services such as cloud storage and e-commerce services. Big Tech refers to the five largest technology companies in the United States, symbolized by the metonym "Silicon Valley", where they are based. == Details == According to Fortune, as of 2020, the ten largest technology companies by revenue are: Apple Inc., Samsung, Foxconn, Alphabet Inc., Microsoft, Huawei, Dell Technologies, Hitachi, IBM, and Sony. Amazon has higher revenue than Apple, but is classified by Fortune in the retail sector. The most profitable listed in 2020 were Apple Inc., Microsoft, Alphabet Inc., Intel, Meta Platforms, Samsung, and Tencent. Apple Inc., Alphabet Inc. (owner of Google), Meta Platforms (owner of Facebook), Microsoft, and Amazon.com, Inc. are often referred to as the Big Five multinational technology companies based in the United States. These five technology companies dominate major functions, e-commerce channels, and information across the entire Internet ecosystem. As of 2017, the Big Five had a combined valuation of over $3.3 trillion and made up more than 40 percent of the value of the Nasdaq-100 index. Many large tech companies have a reputation for innovation, spending large sums of money annually on research and development. According to PwC's 2017 Global Innovation 1000 ranking, tech companies made up nine of the 20 most innovative companies in the world, with the top R&D spender (as measured by expenditure) being Amazon, followed by Alphabet Inc., and then Intel. 
As a result of numerous influential tech companies and tech startups opening offices in proximity to one another, a number of technology districts have developed in various areas across the globe. These include Silicon Valley in the San Francisco Bay Area, Silicon Wadi in Israel, Silicon Docks in Dublin, Silicon Hills in Austin, Tech City in London, Digital Media City in Seoul, Zhongguancun in Beijing, Cyberjaya in Malaysia, and Cyberabad in Hyderabad, India. == See also == List of largest technology companies by revenue Big Tech, a grouping of the largest IT companies in the world Deep tech Dot-com company Outline of technology == References ==
https://en.wikipedia.org/wiki/Technology_company
Technology Centers, in Oklahoma, are Career and Technical schools which provide career and technology education for high school students in the U.S. state of Oklahoma. The students generally spend part of each day in their respective schools pursuing academic subjects in addition to attending classes in their affiliated vo-tech center. Technology centers are managed by the Oklahoma Department of Career and Technology Education in Stillwater, Oklahoma. == List of centers == Autry Technology Center Caddo-Kiowa Technology Center Canadian Valley Technology Center Chickasha Campus El Reno Campus Central Technology Center Sapulpa Campus Drumright Campus Chisholm Trail Technology Center Eastern Oklahoma County Technology Center Francis Tuttle Technology Center Portland Campus Reno Campus Rockwell Campus Danforth Campus Gordon Cooper Technology Center Great Plains Technology Center Tillman-Kiowa Campus Lawton Campus Green Country Technology Center High Plains Technology Center Indian Capital Technology Center Bill Willis Campus Muskogee Campus Sallisaw Campus Stilwell Campus Kiamichi Technology Center Atoka Campus Durant Campus Hugo Campus Idabel Campus McAlester Campus Poteau Campus Spiro Campus Stigler Campus Talihina Campus Meridian Technology Center Main Campus (Stillwater, OK) South Campus (Guthrie, OK) Metro Technology Centers Aviation Career Center Downtown Business Campus South Bryant Campus Springlake Campus Mid-America Technology Center Mid-Del Technology Center Moore Norman Technology Center Franklin Road Campus South Penn Campus Northeast Technology Center East Campus North Campus South Campus Northwest Technology Center Alva Campus Fairview Campus Pioneer Technology Center Pontotoc Technology Center Red River Technology Center Southern Oklahoma Technology Center Southwest Technology Center Tri County Technology Center Tulsa Technology Center Broken Arrow Campus Career Services Center Lemley Campus Peoria Campus Riverside Campus Training Center Wes Watkins 
Technology Center Western Technology Center Burns Flat Campus Hobart Sayre Campus Weatherford Campus == See also == List of school districts in Oklahoma List of private schools in Oklahoma List of colleges and universities in Oklahoma == External links == Oklahoma Department of Career and Technology Education
https://en.wikipedia.org/wiki/List_of_CareerTech_centers_in_Oklahoma
Renaissance technology was the set of European artifacts and inventions which spread through the Renaissance period, roughly the 14th century through the 16th century. The era is marked by profound technical advancements such as the printing press, linear perspective in drawing, patent law, double shell domes and bastion fortresses. Sketchbooks from artisans of the period (Taccola and Leonardo da Vinci, for example) give a deep insight into the mechanical technology then known and applied. Renaissance science spawned the Scientific Revolution; science and technology began a cycle of mutual advancement. == Renaissance technology == Some important Renaissance technologies, including both innovations and improvements on existing techniques: mining and metallurgy blast furnace enabled iron to be produced in significant quantities finery forge enabled pig iron (from the blast furnace) to be refined into bar iron (wrought iron) slitting mill mechanized the production of iron rods for nailmaking smeltmill increased the output of lead over previous methods (bole hill) === Late 14th century === Technologies of this period included the arquebus and the musket. === 15th century === The technologies that developed in Europe during the second half of the 15th century were commonly associated by authorities of the time with a key theme in Renaissance thought: the rivalry of the Moderns and the Ancients. Three inventions in particular, the printing press, firearms, and the nautical compass, were indeed seen as evidence that the Moderns could not only compete with the Ancients, but had surpassed them, for these three inventions allowed modern people to communicate, exercise power, and finally travel at distances unimaginable in earlier times. Crank and connecting rod The crank and connecting rod mechanism, which converts circular into reciprocating motion, is of utmost importance for the mechanization of work processes; it is first attested for Roman water-powered sawmills. 
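The conversion of circular into reciprocating motion follows from the slider-crank geometry: a crank of radius r and a connecting rod of length l move a slider along the line through the crank axis. A brief sketch with illustrative dimensions (the crank radius and rod length here are made up, not taken from any historical machine):

```python
import math

def slider_position(theta, crank_radius=0.1, rod_length=0.4):
    """Slider (piston) distance from the crank axis at crank angle theta.

    x = r*cos(theta) + sqrt(l**2 - (r*sin(theta))**2)
    """
    r, l = crank_radius, rod_length
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

# One full crank revolution moves the slider back and forth exactly once:
top = slider_position(0.0)          # farthest point, r + l
bottom = slider_position(math.pi)   # nearest point, l - r
stroke = top - bottom               # total travel, equal to 2 * r
```

The stroke of 2r is why a small crank can drive a saw blade or pump piston continuously from a single rotating shaft, which is the mechanization step the Renaissance engineers refined.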
During the Renaissance, its use is greatly diversified and mechanically refined; now connecting-rods are also applied to double compound cranks, while the flywheel is employed to get these cranks over the 'dead-spot'. Early evidence of such machines appears, among other things, in the works of the 15th-century engineers Anonymous of the Hussite Wars and Taccola. From then on, cranks and connecting rods become an integral part of machine design and are applied in ever more elaborate ways: Agostino Ramelli's The Diverse and Artifactitious Machines of 1588 depicts eighteen different applications, a number which rises in the 17th-century Theatrum Machinarum Novum by Georg Andreas Böckler to forty-five. Printing press The introduction of the mechanical movable type printing press by the German goldsmith Johannes Gutenberg (1398–1468) is widely regarded as the single most important event of the second millennium, and is one of the defining moments of the Renaissance. The Printing Revolution which it sparks throughout Europe works as a modern "agent of change" in the transformation of medieval society. The mechanical device consists of a screw press modified for printing purposes which can produce 3,600 pages per workday, allowing the mass production of printed books on a proto-industrial scale. By the start of the 16th century, printing presses are operating in over 200 cities in a dozen European countries, producing more than twenty million volumes. By 1600, their output had risen tenfold to an estimated 150 to 200 million copies, while Gutenberg book printing spread from Europe further afield. The relatively free flow of information transcends borders and induced a sharp rise in Renaissance literacy, learning and education; the circulation of (revolutionary) ideas among the rising middle classes, but also the peasants, threatens the traditional power monopoly of the ruling nobility and is a key factor in the rapid spread of the Protestant Reformation. 
The dawn of the Gutenberg Galaxy, the era of mass communication, is instrumental in fostering the gradual democratization of knowledge, which sees for the first time modern media phenomena such as the press or bestsellers emerging. The prized incunables, which are testimony to the aesthetic taste and technical proficiency of Renaissance book printers, are one lasting legacy of the 15th century. Parachute The earliest known parachute design appears in an anonymous manuscript from 1470s Renaissance Italy; it depicts a free-hanging man clutching a crossbar frame attached to a conical canopy. As a safety measure, four straps run from the ends of the rods to a waist belt. Around 1485, a more advanced parachute was sketched by the polymath Leonardo da Vinci in his Codex Atlanticus (fol. 381v), which he scaled in a more favorable proportion to the weight of the jumper. Leonardo's canopy was held open by a square wooden frame, altering the shape of the parachute from conical to pyramidal. The Venetian inventor Fausto Veranzio (1551–1617) modified da Vinci's parachute sketch by keeping the square frame but replacing the canopy with a bulging sail-like piece of cloth, which he realized decelerates the fall more effectively. Claims that Veranzio successfully tested his parachute design in 1617 by jumping from a tower in Venice cannot be substantiated, since he was around 65 years old at the time. Mariner's astrolabe The earliest recorded uses of the astrolabe for navigational purposes are by the Portuguese explorers Diogo de Azambuja (1481), Bartholomew Diaz (1487/88) and Vasco da Gama (1497–98) during their sea voyages around Africa. Dry dock While dry docks were already known in Hellenistic shipbuilding, these facilities were reintroduced in 1495/96, when Henry VII of England ordered one to be built at the Portsmouth navy base. 
=== 16th century === Floating dock The earliest known description of a floating dock comes from a small Italian book printed in Venice in 1560, titled Descrittione dell'artifitiosa machina. In the booklet, an unknown author asks for the privilege of using a new method for the salvaging of a grounded ship and then proceeds to describe and illustrate his approach. The included woodcut shows a ship flanked by two large floating trestles, forming a roof above the vessel. The ship is pulled in an upright position by a number of ropes attached to the superstructure. Lifting tower A lifting tower was used to great effect by Domenico Fontana to relocate the monolithic Vatican obelisk in Rome. Its weight of 361 t was far greater than any of the blocks the Romans are known to have lifted by cranes. Mining, machinery and chemistry A standard reference for the state of mechanical arts during the Renaissance is given in the mining engineering treatise De re metallica (1556), which also contains sections on geology, mining and chemistry. De re metallica was the standard chemistry reference for the next 180 years. === Early 17th century === Newspaper The newspaper is an application of the printing press from which the press derives its name. The 16th century sees a rising demand for up-to-date information which can not be covered effectively by the circulating hand-written newssheets. For "gaining time" from the slow copying process, Johann Carolus of Strassburg is the first to publish his German-language Relation by using a printing press (1605). In rapid succession, further German newspapers are established in Wolfenbüttel (Avisa Relation oder Zeitung), Basel, Frankfurt and Berlin. From 1618 onwards, enterprising Dutch printers take up the practice and begin to provide the English and French market with translated news. 
By the mid-17th century it is estimated that the political newspapers which enjoyed the widest popularity reached up to 250,000 readers in the Holy Roman Empire, around one quarter of the literate population. Air-gun In 1607 Bartolomeo Crescentio described an air gun equipped with a powerful spiral spring, a device so complex that it must have had predecessors. In 1610 Mersenne spoke in detail of "sclopeti pneumatici constructio", and four years later Wilkins wrote enthusiastically of "that late ingenious invention the wind-gun" as being "almost equall to our powder-guns". In the 1650s Otto von Guericke, famed for his experiments with vacua and pressures, built the Magdeburger Windbüchse, one of the technical wonders of its time. == Tools, devices, work processes == === 15th century === Cranked Archimedes' screw In his Bellifortis (1405), the German engineer Konrad Kyeser equips the Archimedes' screw with a crank mechanism, which soon replaces the ancient practice of working the pipe by treading. Cranked reel In the textile industry, cranked reels for winding skeins of yarn were introduced in the early 15th century. Brace The earliest carpenter's brace equipped with a U-shaped grip, that is, with a compound crank, appears between 1420 and 1430 in Flanders. Cranked well-hoist The earliest evidence for the fitting of a well-hoist with cranks is found in a miniature of c. 1425 in the German Hausbuch of the Mendel Foundation. Paddle wheel boat powered by crank and connecting rod mechanism While paddle wheel boats powered by manually turned crankshafts were already conceived of by earlier writers such as Guido da Vigevano and the Anonymous Author of the Hussite Wars, the Italian Roberto Valturio much improves on the design in 1463 by devising a boat with five sets of parallel cranks, all joined to a single power source by one connecting rod; the idea is also taken up by his compatriot Francesco di Giorgio. 
Rotary grindstone with treadle Evidence for rotary grindstones operated by a crank handle goes back to the Carolingian Utrecht Psalter. Around 1480, the crank mechanism is further mechanized by adding a treadle. Geared hand-mill The geared hand-mill, operated either with one or two cranks, appears in the 15th century. === 16th century === Grenade musket Two 16th-century German grenade muskets working with a wheellock mechanism are on display in the Bayerisches Nationalmuseum, Munich. == Technical drawings of artist-engineers == The revived scientific spirit of the age can perhaps be best exemplified by the voluminous corpus of technical drawings which the artist-engineers left behind, reflecting the wide variety of interests the Renaissance homo universalis pursued. The establishment of the laws of linear perspective by Brunelleschi gave his successors, such as Taccola, Francesco di Giorgio Martini and Leonardo da Vinci, a powerful instrument to depict mechanical devices for the first time in a realistic manner. The extant sketch books give modern historians of science invaluable insights into the standards of technology of the time. Renaissance engineers showed a strong proclivity to experimental study, drawing a variety of technical devices, many of which appeared for the first time in history on paper. However, these designs were not always intended to be put into practice, and often practical limitations impeded the application of the revolutionary designs. For example, da Vinci's ideas on the conical parachute or the winged flying machine were only applied much later. While earlier scholars showed a tendency to attribute inventions based on their first pictorial appearance to individual Renaissance engineers, modern scholarship is more prone to view the devices as products of a technical evolution which often went back to the Middle Ages. 
== See also == Chariot clock History of science in the Renaissance Renaissance magic == Notes == == Footnotes == == References == == External links ==
https://en.wikipedia.org/wiki/Renaissance_technology
Nuclear technology is technology that involves the nuclear reactions of atomic nuclei. Among the notable nuclear technologies are nuclear reactors, nuclear medicine and nuclear weapons. It is also used, among other things, in smoke detectors and gun sights. == History and scientific background == === Discovery === The vast majority of common, natural phenomena on Earth only involve gravity and electromagnetism, and not nuclear reactions. This is because atomic nuclei are generally kept apart because they contain positive electrical charges and therefore repel each other. In 1896, Henri Becquerel was investigating phosphorescence in uranium salts when he discovered a new phenomenon which came to be called radioactivity. He, Pierre Curie and Marie Curie began investigating the phenomenon. In the process, they isolated the element radium, which is highly radioactive. They discovered that radioactive materials produce intense, penetrating rays of three distinct sorts, which they labeled alpha, beta, and gamma after the first three Greek letters. Some of these kinds of radiation could pass through ordinary matter, and all of them could be harmful in large amounts. All of the early researchers received various radiation burns, much like sunburn, and thought little of it. The new phenomenon of radioactivity was seized upon by the manufacturers of quack medicine (as had the discoveries of electricity and magnetism, earlier), and a number of patent medicines and treatments involving radioactivity were put forward. Gradually it was realized that the radiation produced by radioactive decay was ionizing radiation, and that even quantities too small to burn could pose a severe long-term hazard. Many of the scientists working on radioactivity died of cancer as a result of their exposure. Radioactive patent medicines mostly disappeared, but other applications of radioactive materials persisted, such as the use of radium salts to produce glowing dials on meters. 
As the atom came to be better understood, the nature of radioactivity became clearer. Some larger atomic nuclei are unstable, and so decay (release matter or energy) after a random interval. The three forms of radiation that Becquerel and the Curies discovered are also more fully understood. Alpha decay is when a nucleus releases an alpha particle, which is two protons and two neutrons, equivalent to a helium nucleus. Beta decay is the release of a beta particle, a high-energy electron. Gamma decay releases gamma rays, which unlike alpha and beta radiation are not matter but electromagnetic radiation of very high frequency, and therefore energy. This type of radiation is the most dangerous and most difficult to block. All three types of radiation occur naturally in certain elements. It has also become clear that the ultimate source of most terrestrial energy is nuclear, either through radiation from the Sun caused by stellar thermonuclear reactions or by radioactive decay of uranium within the Earth, the principal source of geothermal energy. === Nuclear fission === In natural nuclear radiation, the byproducts are very small compared to the nuclei from which they originate. Nuclear fission is the process of splitting a nucleus into roughly equal parts, releasing energy and neutrons in the process. If these neutrons are captured by other unstable nuclei, those nuclei can fission as well, leading to a chain reaction. The average number of neutrons released per fissioning nucleus that go on to fission another nucleus is referred to as k. Values of k larger than 1 mean that the fission reaction releases more neutrons than it absorbs, and the reaction is therefore referred to as a self-sustaining chain reaction. A mass of fissile material large enough (and in a suitable configuration) to induce a self-sustaining chain reaction is called a critical mass.
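The role of the multiplication factor k can be illustrated with a minimal calculation. This is an idealized sketch (constant k, discrete generations, arbitrary numbers chosen only to show the trend), not a reactor physics model:

```python
def neutron_population(k: float, generations: int, n0: float = 1.0) -> float:
    """Neutron population after a number of fission generations.

    Each generation multiplies the population by k, so the population
    follows n0 * k**generations: dying out for k < 1, steady for k == 1,
    and growing exponentially for k > 1.
    """
    return n0 * k**generations

# k < 1: subcritical (chain dies out); k = 1: critical (steady state);
# k > 1: supercritical (runaway growth)
for k in (0.9, 1.0, 1.1):
    print(f"k = {k}: population after 100 generations = {neutron_population(k, 100):.3g}")
```

Even a k only slightly above 1 produces enormous growth over many generations, which is why a controlled reactor must hold k extremely close to 1.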
When a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time. If there are enough immediate decays to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion. Discovered on the eve of World War II, this insight led multiple countries to begin programs investigating the possibility of constructing an atomic bomb, a weapon which utilized fission reactions to generate far more energy than could be created with chemical explosives. The Manhattan Project, run by the United States with the help of the United Kingdom and Canada, developed multiple fission weapons which were used against Japan in 1945 at Hiroshima and Nagasaki. During the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate electricity. In 1951, a nuclear fission power plant produced electricity for the first time, at Experimental Breeder Reactor No. 1 (EBR-1) in Arco, Idaho, ushering in the "Atomic Age" of more intensive human energy use. However, if the mass is critical only when the delayed neutrons are included, then the reaction can be controlled, for example by the introduction or removal of neutron absorbers. This is what allows nuclear reactors to be built. Fast neutrons are not easily captured by nuclei; they must be slowed (slow neutrons), generally by collision with the nuclei of a neutron moderator, before they can be easily captured. Today, this type of fission is commonly used to generate electricity.
This process of fusion occurs in stars, which derive their energy from hydrogen and helium. They form, through stellar nucleosynthesis, the light elements (lithium to calcium) as well as some of the heavy elements (beyond iron and nickel, via the S-process). The remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis, the R-process. Of course, these natural processes of astrophysics are not examples of nuclear "technology". Because of the very strong repulsion of nuclei, fusion is difficult to achieve in a controlled fashion. Hydrogen bombs, formally known as thermonuclear weapons, obtain their enormous destructive power from fusion, but their energy cannot be controlled. Controlled fusion is achieved in particle accelerators; this is how many synthetic elements are produced. A fusor can also produce controlled fusion and is a useful neutron source. However, both of these devices operate at a net energy loss. Controlled, viable fusion power has proven elusive, despite the occasional hoax. Technical and theoretical difficulties have hindered the development of working civilian fusion technology, though research continues to this day around the world. Nuclear fusion was initially pursued only in theoretical stages during World War II, when scientists on the Manhattan Project (led by Edward Teller) investigated it as a method to build a bomb. The project abandoned fusion after concluding that it would require a fission reaction to detonate. It took until 1952 for the first full hydrogen bomb to be detonated, so-called because it used reactions between deuterium and tritium. Fusion reactions are much more energetic per unit mass of fuel than fission reactions, but starting the fusion chain reaction is much more difficult. == Nuclear weapons == A nuclear weapon is an explosive device that derives its destructive force from nuclear reactions, either fission or a combination of fission and fusion. 
Both reactions release vast quantities of energy from relatively small amounts of matter. Even small nuclear devices can devastate a city by blast, fire and radiation. Nuclear weapons are considered weapons of mass destruction, and their use and control has been a major aspect of international policy since their debut. The design of a nuclear weapon is more complicated than it might seem. Such a weapon must hold one or more subcritical fissile masses stable for deployment, then induce criticality (create a critical mass) for detonation. It is also quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. The procurement of a nuclear fuel is also more difficult than it might seem, since sufficiently unstable substances for this process do not currently occur naturally on Earth in suitable amounts. One isotope of uranium, namely uranium-235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium-238. The latter accounts for more than 99% of the weight of natural uranium. Therefore, some method of isotope separation that exploits the small mass difference between the two isotopes (a difference of three neutrons) must be performed to enrich (isolate) uranium-235. Alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. Terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. Ultimately, the Manhattan Project manufactured nuclear weapons based on each of these elements. They detonated the first nuclear weapon in a test code-named "Trinity", near Alamogordo, New Mexico, on July 16, 1945. The test was conducted to ensure that the implosion method of detonation would work, which it did. A uranium bomb, Little Boy, was dropped on the Japanese city of Hiroshima on August 6, 1945, followed three days later by the plutonium-based Fat Man on Nagasaki.
In the wake of unprecedented devastation and casualties from a single weapon, the Japanese government soon surrendered, ending World War II. Since these bombings, no nuclear weapons have been deployed offensively. Nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. Just over four years later, on August 29, 1949, the Soviet Union detonated its first fission weapon. The United Kingdom followed on October 2, 1952; France, on February 13, 1960; and China, on October 16, 1964. A radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. Such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. A radiological weapon has never been deployed. While considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. There have been over 2,000 nuclear tests conducted since 1945. In 1963, all nuclear and many non-nuclear states signed the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. The treaty permitted underground nuclear testing. France continued atmospheric testing until 1974, while China continued up until 1980. The United States conducted its last underground test in 1992, the Soviet Union in 1990, and the United Kingdom in 1991; France and China continued testing until 1996. After signing the Comprehensive Test Ban Treaty in 1996 (which as of 2011 had not entered into force), all of these states pledged to discontinue all nuclear testing. Non-signatories India and Pakistan last tested nuclear weapons in 1998. Nuclear weapons are the most destructive weapons known, the archetypal weapons of mass destruction.
Throughout the Cold War, the opposing powers had huge nuclear arsenals, sufficient to kill hundreds of millions of people. Generations of people grew up under the shadow of nuclear devastation, portrayed in films such as Dr. Strangelove and The Atomic Cafe. However, the tremendous energy release in the detonation of a nuclear weapon also suggested the possibility of a new energy source. == Civilian uses == === Nuclear power === Nuclear power is a type of nuclear technology involving the controlled use of nuclear fission to release energy for work including propulsion, heat, and the generation of electricity. Nuclear energy is produced by a controlled nuclear chain reaction which creates heat, which in turn is used to boil water, produce steam, and drive a steam turbine. The turbine is used to generate electricity and/or to do mechanical work. Nuclear power provided approximately 15.7% of the world's electricity as of 2004 and is used to propel aircraft carriers, icebreakers and submarines (so far, economics and fears in some ports have prevented the use of nuclear power in transport ships). All nuclear power plants use fission. No man-made fusion reaction has resulted in a viable source of electricity. === Medical applications === The medical applications of nuclear technology are divided into diagnostics and radiation treatment. Imaging - The largest use of ionizing radiation in medicine is in medical radiography to make images of the inside of the human body using x-rays. This is the largest artificial source of radiation exposure for humans. Medical and dental x-ray imagers use cobalt-60 or other x-ray sources. A number of radiopharmaceuticals are used, sometimes attached to organic molecules, to act as radioactive tracers or contrast agents in the human body. Positron-emitting nuclides are used for high resolution, short time span imaging in applications known as positron emission tomography.
Radiation is also used to treat diseases in radiation therapy. === Industrial applications === Since some ionizing radiation can penetrate matter, it is used for a variety of measuring methods. X-rays and gamma rays are used in industrial radiography to make images of the inside of solid products, as a means of nondestructive testing and inspection. The piece to be radiographed is placed between the source and a photographic film in a cassette. After a certain exposure time, the film is developed and it shows any internal defects of the material. Gauges - Gauges use the exponential absorption law of gamma rays. Level indicators: source and detector are placed at opposite sides of a container, indicating the presence or absence of material in the horizontal radiation path. Beta or gamma sources are used, depending on the thickness and the density of the material to be measured. The method is used for containers of liquids or of grainy substances. Thickness gauges: if the material is of constant density, the signal measured by the radiation detector depends on the thickness of the material. This is useful for continuous production of materials such as paper, rubber, etc. Electrostatic control - To avoid the build-up of static electricity in the production of paper, plastics, synthetic textiles, etc., a ribbon-shaped source of the alpha emitter 241Am can be placed close to the material at the end of the production line. The source ionizes the air to remove electric charges on the material. Radioactive tracers - Since radioactive isotopes behave, chemically, mostly like the inactive element, the behavior of a certain chemical substance can be followed by tracing the radioactivity. Examples: adding a gamma tracer to a gas or liquid in a closed system makes it possible to find a hole in a tube; adding a tracer to the surface of a component of a motor makes it possible to measure wear by measuring the activity of the lubricating oil.
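The exponential absorption law behind such thickness gauges can be sketched in a few lines. The attenuation coefficient used here is an arbitrary illustrative value, not data for any real material:

```python
import math

def transmitted_intensity(i0: float, mu: float, x: float) -> float:
    """Detector signal behind a sheet of thickness x: I = I0 * exp(-mu * x)."""
    return i0 * math.exp(-mu * x)

def thickness_from_signal(i0: float, mu: float, i: float) -> float:
    """Invert the absorption law to recover thickness: x = ln(I0 / I) / mu."""
    return math.log(i0 / i) / mu

mu = 0.5  # assumed attenuation coefficient in 1/mm (hypothetical material)
signal = transmitted_intensity(1000.0, mu, 2.0)  # counts measured behind a 2 mm sheet
print(f"recovered thickness: {thickness_from_signal(1000.0, mu, signal):.2f} mm")
```

A real gauge works the same way in reverse: with the source strength and attenuation coefficient calibrated, the measured count rate is converted continuously into a thickness reading.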
Oil and Gas Exploration - Nuclear well logging is used to help predict the commercial viability of new or existing wells. The technology involves the use of a neutron or gamma-ray source and a radiation detector which are lowered into boreholes to determine the properties of the surrounding rock such as porosity and lithology.[1] Road Construction - Nuclear moisture/density gauges are used to determine the density of soils, asphalt, and concrete. Typically a cesium-137 source is used. === Commercial applications === Radioluminescence tritium illumination: tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. Some runway markers and building exit signs use the same technology, to remain illuminated during blackouts. Betavoltaics. Smoke detector: an ionization smoke detector includes a tiny mass of radioactive americium-241, which is a source of alpha radiation. Two ionisation chambers are placed next to each other. Both contain a small source of 241Am that gives rise to a small constant current. One is closed and serves for comparison; the other is open to ambient air and has a gridded electrode. When smoke enters the open chamber, the current is disrupted as the smoke particles attach to the charged ions and restore them to a neutral electrical state. This reduces the current in the open chamber. When the current drops below a certain threshold, the alarm is triggered. === Food processing and agriculture === In biology and agriculture, radiation is used to induce mutations to produce new or improved species, such as in atomic gardening. Another use in insect control is the sterile insect technique, where male insects are sterilized by radiation and released, so they have no offspring, to reduce the population. In industrial and food applications, radiation is used for sterilization of tools and equipment. An advantage is that the object may be sealed in plastic before sterilization.
An emerging use in food production is the sterilization of food using food irradiation. Food irradiation is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in the food. The radiation sources used include radioisotope gamma ray sources, X-ray generators and electron accelerators. Further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re-hydration. Irradiation is a more general term for the deliberate exposure of materials to radiation to achieve a technical goal (in this context 'ionizing radiation' is implied). As such it is also used on non-food items, such as medical hardware, plastics, tubes for gas pipelines, hoses for floor heating, shrink foils for food packaging, automobile parts, wires and cables (insulation), tires, and even gemstones. Compared to the amount of food irradiated, the volume of those everyday applications is huge, yet it goes unnoticed by the consumer. The genuine effect of processing food by ionizing radiation relates to damage to the DNA, the basic genetic information for life. Microorganisms, including those that cause spoilage, can no longer proliferate or continue their malignant or pathogenic activities; insects do not survive or become incapable of procreation; plants cannot continue the natural ripening or aging process. All these effects are beneficial to the consumer and the food industry alike. The amount of energy imparted for effective food irradiation is low compared to cooking the same food; even at a typical dose of 10 kGy most food, which is (with regard to warming) physically equivalent to water, would warm by only about 2.5 °C (4.5 °F).
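That rough 2.5 °C figure follows from a one-line calculation: an absorbed dose in gray is energy per mass (J/kg), so for a water-like food the temperature rise is simply the dose divided by water's specific heat capacity. A minimal back-of-the-envelope sketch:

```python
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K), specific heat capacity of liquid water

def temperature_rise(dose_gray: float, specific_heat: float = SPECIFIC_HEAT_WATER) -> float:
    """Temperature rise in kelvin from an absorbed dose (Gy = J/kg) in a water-like material."""
    return dose_gray / specific_heat

# A typical 10 kGy food-irradiation dose deposits 10 kJ per kg of food.
print(f"10 kGy warms water-equivalent food by about {temperature_rise(10_000):.1f} K")
```

This yields roughly 2.4 K, consistent with the approximately 2.5 °C quoted above and far below the heating involved in cooking.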
What distinguishes processing food by ionizing radiation is that the energy deposited per atomic transition is very high: it can cleave molecules and induce ionization (hence the name), which cannot be achieved by mere heating. This is the reason for new beneficial effects, but at the same time for new concerns. The treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. However, the use of the term "cold pasteurization" to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. Detractors of food irradiation have concerns about the health hazards of induced radioactivity. A report for the industry advocacy group American Council on Science and Health entitled "Irradiated Foods" states: "The types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. Food undergoing irradiation does not become any more radioactive than luggage passing through an airport X-ray scanner or teeth that have been X-rayed." Food irradiation is currently permitted by over 40 countries, and volumes are estimated to exceed 500,000 metric tons (490,000 long tons; 550,000 short tons) annually worldwide. Food irradiation is essentially a non-nuclear technology; it relies on ionizing radiation, which may be generated by electron accelerators (with conversion into bremsstrahlung), but which may also be gamma rays from nuclear decay. There is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. Food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc.
== Accidents == Nuclear accidents, because of the powerful forces involved, are often very dangerous. Historically, the first incidents involved fatal radiation exposure. Marie Curie died from aplastic anemia which resulted from her high levels of exposure. Two scientists, an American and a Canadian respectively, Harry Daghlian and Louis Slotin, died after mishandling the same plutonium mass. Unlike conventional weapons, intense light, heat, and explosive force are not the only deadly components of a nuclear weapon: approximately half of those who died at Hiroshima and Nagasaki died two to five years afterward from radiation exposure. Civilian nuclear and radiological accidents primarily involve nuclear power plants. Most common are nuclear leaks that expose workers to hazardous material. A nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. The most significant meltdowns occurred at Three Mile Island in Pennsylvania and Chernobyl in Soviet Ukraine. The earthquake and tsunami on March 11, 2011 caused serious damage to three nuclear reactors and a spent fuel storage pond at the Fukushima Daiichi nuclear power plant in Japan. Military reactors that experienced similar accidents were Windscale in the United Kingdom and SL-1 in the United States. Military accidents usually involve the loss or unexpected detonation of nuclear weapons. The Castle Bravo test in 1954 produced a larger yield than expected, which contaminated nearby islands and a Japanese fishing boat (with one fatality), and raised concerns about contaminated fish in Japan. From the 1950s through the 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. The last twenty years have seen a marked decline in such accidents.
== Examples of environmental benefits == Proponents of nuclear energy note that nuclear-generated electricity avoids an estimated 470 million metric tons of carbon dioxide emissions annually that would otherwise come from fossil fuels. Additionally, the comparatively small amount of waste that nuclear energy does create is safely disposed of by large-scale nuclear energy production facilities, or is repurposed/recycled for other energy uses. Proponents of nuclear energy also draw attention to the opportunity cost of utilizing other forms of electricity. For example, the Environmental Protection Agency estimates that coal kills 30,000 people a year as a result of its environmental impact, while 60 people died in the Chernobyl disaster. A real-world example of impact provided by proponents of nuclear energy is the 650,000-ton increase in carbon emissions in the two months following the closure of the Vermont Yankee nuclear plant. == See also == Atomic age Lists of nuclear disasters and radioactive incidents Nuclear power debate Outline of nuclear technology Radiology == References == == External links == Nuclear Energy Institute – Beneficial Uses of Radiation Nuclear Technology National Isotope Development Center – U.S. Government source of isotopes for basic and applied nuclear science and nuclear technology – production, research, development, distribution, and information
https://en.wikipedia.org/wiki/Nuclear_technology
The Rockwell RPRV-870 HiMAT (Highly Maneuverable Aircraft Technology) is an experimental remotely piloted aircraft that was produced for a NASA program to develop technologies for future fighter aircraft. Among the technologies explored were close-coupled canards, fully digital flight control (including propulsion), composite materials (graphite and fiberglass), remote piloting, synthetic vision systems, winglets, and others. Two aircraft were produced by Rockwell International. Their first flights took place in 1979, and testing was completed in 1983. == Design and development == The HiMATs were remotely piloted, as the design team decided that it would be cheaper and safer not to risk a pilot's life during the experiments. This also meant that no ejection seat had to be fitted. The aircraft was flown by a pilot in a remote cockpit on the ground: control signals were up-linked from the remote cockpit's flight controls to the aircraft, and aircraft telemetry was downlinked to the remote cockpit displays. The remote cockpit could be configured with either nose camera video or with a 3D synthetic vision display called a "visual display". The aircraft were launched from a B-52 Stratofortress at altitude. There was also a TF-104G Starfighter chase plane with a set of backup controls which could take control of the HiMAT in the event that the remote pilot on the ground lost control. Advances in digital flight control gained during the project contributed to the Grumman X-29 experimental aircraft, and its composite construction techniques are now used widely on both commercial and military aircraft. The aircraft's initial concept included a wedge-shaped exhaust nozzle with 2D thrust vectoring. == Aircraft on display == The two HiMAT aircraft are now on display, one at the National Air and Space Museum and the other at the Armstrong Flight Research Center.
== Specifications == Data from Boeing.com. General characteristics Crew: None Length: 22 ft 6 in (6.86 m) Wingspan: 15 ft 7 in (4.75 m) Height: 4 ft 4 in (1.31 m) Empty weight: 3,370 lb (1,529 kg) Gross weight: 4,030 lb (1,828 kg) Powerplant: 1 × General Electric J85-GE-21 turbojet Performance Maximum speed: 1,218 mph (1,960 km/h, 1,058 kn) Maximum speed: Mach 1.6 == Gallery == == See also == List of experimental aircraft Grumman X-29 Rockwell-MBB X-31 McDonnell Douglas X-36 NASA X-38 == References == == Further reading == Kempel, Robert W.; Earls, Michael R. (1988). Flight Control Systems Development and Flight Test Experience with the HiMAT Research Vehicles. NASA. OCLC 22037291. Technical paper 2822; Accession number N89-15929. Duke, Eugene L.; Jones, Frank P.; Roncoli, Ralph B. (1986). Development and Flight Test of an Experimental Maneuver Autopilot for a Highly Maneuverable Aircraft. NASA. OCLC 21916352. Technical report 2618; Accession number N88-21153. == External links == HiMAT Research Vehicle at Boeing.com
https://en.wikipedia.org/wiki/Rockwell_HiMAT
Marvell Technology, Inc. is an American company, headquartered in Santa Clara, California, which develops and produces semiconductors and related technology. Founded in 1995, the company had more than 6,500 employees as of 2024, with over 10,000 patents worldwide, and an annual revenue of $5.5 billion for fiscal 2024. == History == Marvell was founded in 1995 by Dr. Sehat Sutardja, his wife Weili Dai, and his brother Pantas Sutardja. Their first product was a CMOS-based read channel for disk drives. Seagate Technology became their first customer. The company's initial public offering of 6 million shares on NASDAQ under the ticker symbol MRVL on June 27, 2000 (near the end of the dot-com bubble) raised $90 million and was priced at $15 per share. In April 2016, CEO Sehat Sutardja and President Weili Dai were ousted from their posts after activist investor Starboard Value fund took a roughly seven percent stake in the company. In July 2016, Marvell appointed Matt Murphy as its new president and CEO. On July 6, 2018, Marvell completed its acquisition of Cavium, Inc. On the same day, it announced the appointment of Syed Ali (co-founder of Cavium, Inc., and previously the company's president and CEO), Brad Buss (director of Cavium, Inc.) and Edward Frank (director of Cavium, Inc.) to the Marvell Board of Directors. In September 2019, Marvell completed the acquisition of Aquantia. In April 2021, Marvell completed the acquisition of Inphi Corporation. As part of the acquisition, Marvell reorganized so that the combined company is domiciled in Wilmington, Delaware. In September 2023, Marvell signed an expansion deal in Pune, India. == Acquisitions == Through the years, Marvell acquired smaller companies to enter new markets. == Products == === Compute === ==== Data Processing Unit ==== Marvell's OCTEON and ARMADA DPUs integrate a CPU, network interfaces and programmable data acceleration engines on a specialized electronic circuit.
==== Custom ==== Marvell also offered Customer Specific Standard Product (CSSP), where customer accelerators and interfaces could be integrated directly into Marvell's Octeon processors. Following Marvell's 2019 acquisition of Avera Semiconductor (formerly the custom ASIC division of GlobalFoundries and prior to that of IBM), Marvell offers custom ASICs tailored to clients' specific design goals. It also provides ASIC development services to the aerospace and defense industries through its independent subsidiary Marvell Government Solutions (MGS). In a joint venture with TSMC, Marvell introduced a 3nm product. ===== Infrastructure Processors ===== On November 12, 2019, Marvell announced that their ThunderX2 SoCs have been deployed on Microsoft Azure. On March 2, 2020, Marvell announced OCTEON Fusion and OCTEON TX2 5G infrastructure processors, as well as deals to provide processors for 5G infrastructure for Huawei, Nokia, Ericsson, ZTE, and Samsung. On March 16, 2020, Marvell announced ThunderX3 and their plan for ThunderX4 in 2022. On August 28, 2020, Marvell announced a plan to refocus their ThunderX server teams on their custom silicon business. ==== Security Solutions ==== Marvell's security-related products include their LiquidSecurity HSM Adapters and NITROX Cryptographic Offload Engines. === Networking and Storage === Marvell's networking products include their FastLinQ Ethernet network adapters and controllers, Ethernet switch chips for both enterprises (Prestera) and datacenters (Teralynx), Ethernet PHYs, and automotive Ethernet. Marvell's storage products include SSD controllers, HDD controllers, HDD preamplifiers, storage accelerators, and QLogic Fibre Channel adapters and controllers. On May 27, 2021, Marvell announced its first NVM Express SSD controllers to support PCI Express 5.0. === Other products === Marvell supplied the Wi-Fi chip for the original (first-generation) Apple iPhone. Marvell Mobile Hotspot (MMH) is an in-car Wi-Fi connectivity system.
The 2010 Audi A8 was the first automobile on the market to feature a factory-installed MMH. Google's Chromecast products are powered by Marvell SoCs: the Marvell ARMADA 1500 Mini SoC (88DE3005) in the first-generation Chromecast, and the ARMADA 1500 Mini Plus SoC (88DE3006) in the second-generation Chromecast and the Chromecast Audio. Synaptics acquired Marvell Multimedia Solutions on 12 June 2017, and the ARMADA 1500 SoCs are now produced under different names. == Controversy == === Stock options === In 2006, the US Securities and Exchange Commission (SEC) started an inquiry into the company's stock option grant practices. An investigation determined that "grant dates were chosen with the benefit of hindsight" to make the options more valuable. The press estimated that the founders and other executives had made $760 million in gains from the options, which were awarded by the founding couple, Sehat Sutardja and Weili Dai. The SEC asked to interview the company's general counsel Matthew Gloss, but Marvell claimed attorney–client privilege. Gloss was fired just before the investigation results were announced in May 2007. Abraham David Sofaer was hired to review the investigation after Gloss alleged it was not independent. In announcing the results of its own inquiry, the SEC did not give Marvell the credit granted to other companies in the options scandal for cooperating with the SEC's investigation or for cleaning up; at the time of the announcement, the co-acting regional director of the SEC's San Francisco office stated that Marvell's lack of cooperation and remediation did not merit leniency. In announcing its results, the SEC found that Gloss was not a participant in Dai and Sutardja's backdating scheme. 
Marvell restated its financial results and stated that Dai would no longer serve as executive vice president, chief operating officer, and director, but would continue with the company in a non-management position. The company agreed to pay a $10 million fine in 2008, but did not fire Dai or replace Sutardja as chairman, as the investigating committee had recommended. === Patent infringement === In December 2012, a Pittsburgh jury ruled that Marvell had infringed two patents (co-inventors Alek Kavčić and Jose Moura) by incorporating hard disk technology developed and owned by Carnegie Mellon University without a license. The technology, relating to improving hard disk data read accuracy at high speeds, was reported to have been used in 2.3 billion chips sold by Marvell between 2003 and 2012. The jury awarded damages of $1.17 billion, the third largest ever in a patent case at the time. The jury also found that the breach had been "willful", giving the judge discretion to award up to three times the original damage amount. In December 2012, the company lost its mistrial bid in this dispute. Post-trial hearings were scheduled for May 2013, and Marvell was reported to be considering an appeal in the interim. In August 2013, US District Judge Nora Barry Fischer upheld the award. On February 17, 2016, Marvell agreed to a settlement in which it would pay Carnegie Mellon University $750,000,000. == See also == Marvell Software Solutions Israel == References == == External links == Official website Business data for Marvell Technology, Inc.:
https://en.wikipedia.org/wiki/Marvell_Technology
Technische Universität Berlin (TU Berlin; also known as Berlin Institute of Technology and Technical University of Berlin, although officially the name should not be translated) is a public research university located in Berlin, Germany. It was the first German university to adopt the name "Technische Universität" (university of technology). The university's alumni and staff include several US National Academies members, two National Medal of Science laureates, the creator of the first fully functional programmable (electromechanical) computer, Konrad Zuse, and ten Nobel Prize laureates. TU Berlin is a member of TU9, an incorporated society of the largest and most notable German institutes of technology, and of the Top International Managers in Engineering network, which allows for student exchanges between leading engineering schools. It belongs to the Conference of European Schools for Advanced Engineering Education and Research. The TU Berlin is home to two innovation centers designated by the European Institute of Innovation and Technology. The university is labeled as "The Entrepreneurial University" ("Die Gründerhochschule") by the Federal Ministry for Economic Affairs and Energy. The university is notable for having been the first to offer a degree in Industrial Engineering and Management (Wirtschaftsingenieurwesen). The university designed the degree in response to requests by industrialists for graduates with the technical and management training to run a company. First offered in winter term 1926/27, it is one of the oldest programmes of its kind. TU Berlin has one of the highest proportions of international students in Germany, almost 27% in 2019. In addition, TU Berlin is part of the Berlin University Alliance and has been conferred the title of "University of Excellence" under, and receives funding from, the German Universities Excellence Initiative. 
== History == On 1 April 1879, the Königlich Technische Hochschule zu Berlin (en: "Royal Technical Academy of Berlin") came into being through a merger of the Königliche Gewerbeakademie zu Berlin (en: "Royal Trade Academy", founded in 1827) and the Königliche Bauakademie zu Berlin (en: "Royal Building Academy", founded in 1799), two predecessor institutions of the Prussian State. In 1899, the Königlich Technische Hochschule zu Berlin became the first polytechnic in Germany to award doctorates as a standard degree for its graduates, in addition to diplomas, thanks to professor Alois Riedler and Adolf Slaby, chairman of the Association of German Engineers (VDI) and the Association for Electrical, Electronic and Information Technologies (VDE). In 1916 the long-standing Königliche Bergakademie zu Berlin, the Prussian mining academy created by the geologist Carl Abraham Gerhard in 1770 at the behest of King Frederick the Great, was incorporated into the Königlich Technische Hochschule as the "Department of Mining". The mining academy had, however, spent several decades under the auspices of the Frederick William University (now Humboldt University of Berlin) before being spun out again in 1860. After Charlottenburg's absorption into Greater Berlin in 1920 and Germany becoming the Weimar Republic, the Königlich Technische Hochschule zu Berlin was renamed "Technische Hochschule zu Berlin" ("TH Berlin"). In 1927, the Department of Geodesy of the Agricultural College of Berlin was incorporated into the TH Berlin. During the 1930s, the redevelopment and expansion of the campus along the "East-West axis" were part of the Nazi plans for Welthauptstadt Germania, including a new faculty of defense technology under General Karl Becker, built as part of a greater academic town (Hochschulstadt) in the adjacent Grunewald forest to the west. 
The shell construction remained unfinished after the outbreak of World War II and after Becker's suicide in 1940, it is today covered by the large-scale Teufelsberg rubble hill. The north section of the main building of the university was destroyed during a bombing raid in November 1943. Due to the street fighting at the end of the Second World War, the operations at the TH Berlin were suspended as of 20 April 1945. Planning for the re-opening of the school began on 2 June 1945, once the acting rectorship led by Gustav Ludwig Hertz and Max Volmer was appointed. As both Hertz and Volmer remained in exile in the Soviet Union for some time to come, the college was not re-inaugurated until 9 April 1946, now bearing the name "Technische Universität Berlin". Since 2009 the TU Berlin has housed two Knowledge and Innovation Communities (KIC) designated by the European Institute of Innovation and Technology. == Name == The official policy of the university is that only the German name, Technische Universität Berlin (TU Berlin), should be used abroad in order to promote corporate identity and that its name is not to be translated into English. == Campus == The TU Berlin covers 604,000 square metres (6.5 million square feet), distributed over various locations in Berlin. The main campus is located in the borough of Charlottenburg-Wilmersdorf. The seven schools of the university have some 33,933 students enrolled in 90 subjects (October 2015). From 2012 to 2022, TU Berlin operated a satellite campus in Egypt, the El Gouna campus, to act as a scientific and academic field office. The nonprofit public–private partnership (PPP) aimed to offer services provided by Technische Universität Berlin at the campus in El Gouna on the Red Sea. The university also has a franchise of its Global Production Engineering course – called Global Production Engineering and Management at the Vietnamese-German University in Ho Chi Minh City. 
== Organization == Since 2002, the TU Berlin has consisted of the following faculties and institutes: Faculty I – Humanities and Educational Sciences (Geistes- und Bildungswissenschaften) Institute of History and Philosophy of Science, Technology, and Literature Institute for Art History and Historical Urbanism Institute of Education Institute of Language and Communication Institute of Vocational Education and Work Studies Center for Research on Antisemitism (ZfA) Center for Interdisciplinary Women's and Gender Studies (ZIFG) Center for Cultural Studies on Science and Technology in China (CCST) Faculty II – Mathematics and Natural Sciences (Mathematik und Naturwissenschaften) Center for Astronomy and Astrophysics Institute of Chemistry Institute of Solid-State Physics Institute of Mathematics Institute of Optics and Atomic Physics Institute of Theoretical Physics Faculty III – Process Sciences (Prozesswissenschaften) Institute of Biotechnology Institute of Energy Technology Institute of Food Technology and Food Chemistry Institute of Chemical and Process Engineering Institute of Environmental Technology Institute of Material Sciences and Technology Faculty IV – Electrical Engineering and Computer Science (Elektrotechnik und Informatik) Institute of Energy and Automation Technology Institute of High-Frequency and Semiconductor System Technologies Institute of Telecommunication Systems Institute of Computer Engineering and Microelectronics Institute of Software Engineering and Theoretical Computer Science Institute of Commercial Information Technology and Quantitative Methods Faculty V – Mechanical Engineering and Transport Systems (Verkehrs- und Maschinensysteme) Institute of Fluid Mechanics and Technical Acoustics Institute of Psychology and Ergonomics (Arbeitswissenschaft) Institute of Land and Sea Transport Systems Institute of Aeronautics and Astronautics Institute of Engineering Design, and Micro and Medical Technology Institute of Machine Tools and Factory 
Management Institute of Mechanics Faculty VI – Planning Building Environment (Planen Bauen Umwelt) Institute of Architecture Institute of Civil Engineering Institute of Applied Geosciences Institute of Geodesy and Geoinformation Science Institute of Landscape Architecture and Environmental Planning Institute of Ecology Institute of Sociology Institute of Urban and Regional Planning Faculty VII – Economics and Management (Wirtschaft und Management) Institute for Technology and Management (ITM) Institute of Business Administration (IBWL) Institute of Economics and Law (IVWR) School of Education (SETUB) Central Institute El Gouna (Zentralinstitut El Gouna) == Faculty and staff == As of 2015, 8,455 people worked at the university: 338 professors, 2,598 postgraduate researchers, and 2,131 staff in administration, the workshops, the library, and the central facilities. In addition, there were 2,651 student assistants and 126 trainees. International student mobility is available through the ERASMUS programme or through the Top Industrial Managers for Europe (TIME) network. == Library == The new common main library of Technische Universität Berlin and of the Berlin University of the Arts was opened in 2004 and holds about 2.9 million volumes (2007). The library building was sponsored partially (an estimated 10% of the building costs) by Volkswagen and is officially named the "University Library of the TU Berlin and UdK (in the Volkswagen building)". Some of the former 17 libraries of Technische Universität Berlin and of the nearby University of the Arts were merged into the new library, but several departments still retain libraries of their own. 
In particular, the school of 'Economics and Management' maintains a library with 340,000 volumes in the university's main building (Die Bibliothek – Wirtschaft & Management/"The Library" – Economics and Management) and the 'Department of Mathematics' maintains a library with 60,000 volumes in the Mathematics building (Mathematische Fachbibliothek/"Mathematics Library"). == Notable alumni and professors == (Including those of the Academies mentioned in the History section) Bruno Ahrends (1878–1948), architect Steffen Ahrends (1907–1992), architect Zora Arkus-Duntov (1909–1996), Russian and American engineer and racing car driver Stancho Belkovski (1891–1962), Bulgarian architect, head of the Higher Technical School in Sofia and its department of public buildings August Borsig (1804–1854), businessman Carl Bosch (1874–1940), chemist, Nobel Prize winner 1931 Franz Breisig (1868–1934), mathematician, inventor of the calibration wire and father of the term quadripole network in electrical engineering Wilhelm Cauer (1900–1945), mathematician, essential contributions to the design of filters Henri Marie Coandă (1886–1972), Romanian aircraft designer; discovered the Coandă effect Lotte Cohn (1893–1983), German-Israeli architect Jan Czochralski (1885–1953), Polish chemist Carl Dahlhaus (1928–1989), musicologist Kurt Daluege (1897–1946), SS official, chief of the Ordnungspolizei (Order Police) of Nazi Germany from 1936 to 1943, hanged as a war criminal Walter Dornberger (1895–1980), Major-General, developer of the Air Force-NASA X-20 Dyna-Soar project Ottmar Edenhofer (born 1961), economist Krafft Arnold Ehricke (1917–1984), rocket-propulsion engineer, worked for NASA, chief designer of the Centaur Gerhard Ertl (born 10 October 1936 in Stuttgart), physicist and surface chemist, Hon. Prof. 
and Nobel Prize winner 2007 Ladislaus Farkas (1904–1948), Austro-Hungarian/Israeli chemist Gottfried Feder (1883–1941), economist and key member of the National Socialist Party Wigbert Fehse (born 1937), German engineer and researcher in the area of automatic space navigation, guidance, control and docking/berthing Ursula Franklin (1921–2016), Canadian physicist (archaeometry) and theorist on the political and social effects of technology, Pearson Medal of Peace winner 2001 Dennis Gabor (1900–1971), Hungarian-British physicist (holography), Nobel Prize winner 1971 Hans Geiger (1882–1945), physicist, co-inventor of the detector component of the Geiger counter Elsa Gidoni (1901–1978), German-American architect and interior designer Thomas Gil (born 1954), Professor of Practical Philosophy Fritz Gosslau (1898–1965), German engineer, known for his work on the V-1 flying bomb Fritz Haber (1868–1934), chemist who received the Nobel Prize in Chemistry in 1918 Gustav Ludwig Hertz (1887–1975), physicist, Nobel Prize winner 1925 Ernst Herzfeld (1879–1948), archaeologist and Iranologist Franz Hillinger (1895–1973), architect of the Neues Bauen (New Objectivity) movement in Berlin and in Turkey Fritz Houtermans (1903–1966), Dutch-Austrian-German atomic and nuclear physicist Hugo Junkers (1859–1935), founder of Junkers & Co, a major German aircraft manufacturer Anatol Kagan (1913–2009), Russian-born Australian architect Helmut Kallmeyer (1910–2006), chemist and Action T4 perpetrator Walter Kaufmann (1871–1947), physicist, well known for his first experimental proof of the velocity dependence of mass Diébédo Francis Kéré (born 1965), Burkinabè architect Nicolas Kitsikis (1887–1978), Greek civil engineer, rector of the Athens Polytechnic School, senator and member of the Greek Parliament, doctor honoris causa of the Technische Universität Berlin. 
Heinz-Hermann Koelle (born 1925), former director of the Army Ballistic Missile Agency, member of the launch crew on Explorer I, and later directed NASA's Marshall Space Flight Center's involvement in Project Apollo Abdul Qadeer Khan (born 1936), Pakistani nuclear physicist and metallurgical engineer, who founded the uranium enrichment program for Pakistan's atomic bomb project Arthur Korn (1870–1945), physicist, mathematician, and inventor of the fax machine Franz Kruckenberg (1882–1965), designer of the first aerodynamic high-speed train (1931) Karl Küpfmüller (1897–1977), electrical engineer, essential contributions to system theory Konrad Kwiet (born 1941), historian and scholar of the Holocaust Edward Lasker (1885–1981), German-American chess player Wassili Luckhardt (1889–1972), architect Georg Hans Madelung (1889–1972), academic and aeronautical engineer Herbert Franz Mataré (1912–2011), physicist and transistor pioneer Alexander Meissner (1883–1958), Austrian electrical engineer Otto Metzger, German-British engineer Joachim Milberg (born 1943), former CEO of BMW AG Erwin Wilhelm Müller (1911–1977), physicist (field emission microscope, field ion microscope, atom probe) Klaus-Robert Müller (born 1964), computer scientist and physicist, a leading researcher in machine learning Hans-Georg Münzberg (1916–2000), engineer, airplane turbines Gustav Niemann (1899–1982), mechanical engineer Ida Noddack (1896–1978), nominated three times for the Nobel Prize in Chemistry. 
Egon Orowan (1902–1989), Hungarian-British physicist, metallurgist, and academic Jakob Karol Parnas (1884–1949), Polish-Soviet biochemist, Embden-Meyerhof-Parnas pathway Wolfgang Paul (1913–1993), physicist, Nobel Prize winner 1989 Hans Reissner (1874–1967), aeronautical engineer whose avocation was mathematical physics Franz Reuleaux (1829–1905), mechanical engineer, often called the father of kinematics Klaus Riedel (1907–1944), German rocket pioneer, worked on the V-2 missile programme at Peenemünde Alois Riedler (1850–1936), Austrian inventor of the Leavitt-Riedler Pumping Engine; proponent of practically oriented engineering education Hermann Rietschel (1847–1914), inventor of modern HVAC (heating, ventilation, and air conditioning) Arthur Rudolph (1906–1996), worked for the U.S. Army and NASA, developer of the Pershing missile and the Saturn V Moon rocket Ernst Ruska (1906–1988), physicist (electron microscope), Nobel Prize winner 1986 Karl Friedrich Schinkel (1781–1841), architect (at the predecessor Berlin Building Academy) Bernhard Schölkopf (born 1968), computer scientist Fritz Sennheiser (1912–2010), founder of Sennheiser Adolf Slaby (1849–1913), German wireless pioneer Albert Speer (1905–1981), architect, politician, Minister for Armaments during the Third Reich, sentenced to 20 years in prison at the Nuremberg trials Ernst Steinitz (1871–1928), mathematician Edmund Stinnes (1896–1980), German-American industrialist, professor, and heir Ivan Stranski (1897–1979), Bulgarian chemist, considered the father of crystal growth research Zdenko Strižić (1902–1990), Croatian architect Ernst Stuhlinger (1913–2008), German-American member of the Army Ballistic Missile Agency, director of the space science lab at NASA's Marshall Space Flight Center Kurt Tank (1893–1983), head of the design department of Focke-Wulf, designed the Fw 190 Willibald Trinks (1874–1966), head of the Department of Mechanical Engineering of the Carnegie Institute of Technology Hermann W. 
Vogel (1834–1898), photochemist Wernher von Braun (1912–1977), German-American head of Nazi Germany's V-2 rocket program, saved from prosecution at the Nuremberg Trials by Operation Paperclip, first director of the United States National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center, called the father of the U.S. space program Elisabeth von Knobelsdorff (1877–1959), engineer and architect Chaim Weizmann, first President of Israel Wilhelm Heinrich Westphal (1882–1978), physicist Eugene Wigner (1902–1995), Hungarian-American physicist, discovered the Wigner–Ville distribution, Nobel Prize winner 1963 Ludwig Wittgenstein (1889–1951), Austrian philosopher Martin C. Wittig (born 1964), former CEO of the management consultancy Roland Berger Strategy Consultants Constantin Zablovschi (1882–1967), pioneer radio engineer in Romania Elisa Leonida Zamfirescu (1887–1973), chemist, graduated 1912, female engineering pioneer Günter M. Ziegler (born 1963), Gottfried Wilhelm Leibniz Prize (2001) Konrad Zuse (1910–1995), computer pioneer == Rankings == According to the QS World University Rankings 2025, TU Berlin was ranked 147th globally, making it the 8th best university in the country. In the Times Higher Education World University Rankings for 2023, the institution was ranked 136th globally and within the 12–13th range nationally. The Academic Ranking of World Universities for 2023 positions TU Berlin within the 201–300 range globally and the 10–19 range within Germany. Measured by the number of top managers in the German economy, TU Berlin ranked 11th in 2019. According to the 2018 research report of the German Research Foundation (DFG), TU Berlin ranked 24th overall among German universities across all scientific disciplines, 9th overall in natural sciences and engineering, 14th in computer science, and 5th in electrical engineering. 
In a competitive selection process, the DFG selects the best research projects from researchers at universities and research institutes and finances them; the ranking is thus regarded as an indicator of research quality. In the 2017 Times Higher Education World University Rankings, the TU Berlin ranked 40th in the field of Engineering & Technology (3rd in Germany) and 36th in the Computer Science discipline (4th in Germany), placing it among the top 100 universities worldwide in both fields. As of 2016, TU Berlin was ranked 35th in the field of Engineering & Technology by the British QS World University Rankings. It was one of Germany's highest-ranked universities in Statistics and Operations Research and in Mathematics according to QS. == See also == Universities and research institutions in Berlin European Institute of Innovation and Technology Free University of Berlin Humboldt University of Berlin Berlin University of the Arts == References == == External links == Official website TU Berlin: International partner universities Website of the Student's Council and Government TU Berlin: Campus Map
https://en.wikipedia.org/wiki/Technische_Universität_Berlin
This is a list of atheists in science and technology. A statement by a living person that he or she does not believe in God is not a sufficient criterion for inclusion in this list. Persons in this list are people (living or not) who both have publicly identified themselves as atheists and whose atheism is relevant to their notable activities or public life. == A == Scott Aaronson (1981–): American theoretical computer scientist and professor at the University of Texas at Austin. His primary area of research is quantum computing and computational complexity theory. Ernst Abbe (1840–1905): German physicist, optometrist, entrepreneur, and social reformer. Together with Otto Schott and Carl Zeiss, he laid the foundation of modern optics. Abbe developed numerous optical instruments. He was a co-owner of Carl Zeiss AG, a German manufacturer of research microscopes, astronomical telescopes, planetariums and other optical systems. Fay Ajzenberg-Selove (1926–2012): American nuclear physicist who was known for her experimental work in nuclear spectroscopy of light elements, and for her annual reviews of the energy levels of light atomic nuclei. She was a recipient of the 2007 National Medal of Science. Jean le Rond d'Alembert (1717–1783): French mathematician, mechanician, physicist, philosopher, and music theorist. He was also co-editor with Denis Diderot of the Encyclopédie. Zhores Alferov (1930–2019): Belarusian, Soviet, and Russian physicist who contributed substantially to the creation of modern heterostructure physics and electronics. He is an inventor of the heterotransistor and co-winner (with Herbert Kroemer and Jack Kilby) of the 2000 Nobel Prize in Physics. Hannes Alfvén (1908–1995): Swedish electrical engineer and plasma physicist. He received the 1970 Nobel Prize in Physics for his work on magnetohydrodynamics (MHD). He is best known for describing the class of MHD waves now known as Alfvén waves. 
Jim Al-Khalili OBE (1962–): Iraqi-born British quantum physicist, author and science communicator. He is professor of Theoretical Physics and Chair in the Public Engagement in Science at the University of Surrey. Philip W. Anderson (1923–2020): American physicist. He was one of the recipients of the Nobel Prize in Physics in 1977. Anderson made contributions to the theories of localization, antiferromagnetism and high-temperature superconductivity. Jacob Appelbaum (1983–): American computer security researcher and hacker. He is a core member of the Tor project. François Arago (1786–1853): French mathematician, physicist, astronomer and politician. Svante Arrhenius (1859–1927): Swedish scientist and the first Swedish Nobel Prize winner. Abhay Ashtekar (1949–): Indian theoretical physicist. As the creator of Ashtekar variables, he is one of the founders of loop quantum gravity and its subfield loop quantum cosmology. Larned B. Asprey (1919–2005): American chemist noted for his work on actinide, lanthanide, rare earth, and fluorine chemistry, and for his contributions to nuclear chemistry on the Manhattan Project and later at the Los Alamos National Laboratory. Peter Atkins (1940–): English quantum chemist and professor of chemistry at Lincoln College, Oxford, in England. Scott Atran (1952–): American-French cultural anthropologist who is Emeritus Director of Research in Anthropology at the Centre national de la recherche scientifique in Paris, Research Professor at the University of Michigan, and cofounder of ARTIS International and of the Centre for the Resolution of Intractable Conflict at Oxford University. Julius Axelrod (1912–2004): American Nobel Prize–winning biochemist, noted for his work on the release and reuptake of catecholamine neurotransmitters and major contributions to the understanding of the pineal gland and how it is regulated during the sleep-wake cycle. 
== B == Sir Edward Battersby Bailey FRS (1881–1965): British geologist, director of the British Geological Survey. Gregory Bateson (1904–1980): English anthropologist, social scientist, linguist, visual anthropologist, semiotician and cyberneticist whose work intersected that of many other fields. Sir Patrick Bateson FRS (1938–2017): English biologist and science writer, Emeritus Professor of ethology at the University of Cambridge and president of the Zoological Society of London. William Bateson (1861–1926): English geneticist, a Fellow of St. John's College, Cambridge, where he eventually became Master. He was the first person to use the term "genetics" to describe the study of heredity and biological inheritance, and the chief populariser of the ideas of Gregor Mendel following their rediscovery. George Beadle (1903–1989): American scientist in the field of genetics, and Nobel Prize in Physiology or Medicine laureate who, with Edward Tatum, discovered the role of genes in regulating biochemical events within cells in 1958. John Stewart Bell FRS (1928–1990): Irish physicist. Best known for his discovery of Bell's theorem. Richard E. Bellman (1920–1984): American applied mathematician, best known for his invention of dynamic programming in 1953, along with important contributions in other fields of mathematics. Charles H. Bennett (1943–): American physicist, information theorist and IBM Fellow at IBM Research. He is best known for his work in quantum cryptography and quantum teleportation, and is one of the founding fathers of modern quantum information theory. John Desmond Bernal (1901–1971): British biophysicist. Best known for pioneering X-ray crystallography in molecular biology. Tim Berners-Lee (1955–): English computer scientist, best known as the inventor of the World Wide Web. Marcellin Berthelot (1827–1907): French chemist and politician noted for the Thomsen-Berthelot principle of thermochemistry. 
He synthesized many organic compounds from inorganic substances and disproved the theory of vitalism. Claude Louis Berthollet (1748–1822): French chemist. Hans Bethe (1906–2005): German-American nuclear physicist, and Nobel laureate in physics for his work on the theory of stellar nucleosynthesis. A versatile theoretical physicist, Bethe also made important contributions to quantum electrodynamics, nuclear physics, solid-state physics and astrophysics. During World War II, he was head of the Theoretical Division at the secret Los Alamos laboratory which developed the first atomic bombs. There he played a key role in calculating the critical mass of the weapons, and did theoretical work on the implosion method used in both the Trinity test and the "Fat Man" weapon dropped on Nagasaki, Japan. Norman Bethune (1890–1939): Canadian physician and medical innovator. Patrick Blackett OM, CH, FRS (1897–1974): Nobel Prize-winning English experimental physicist known for his work on cloud chambers, cosmic rays, and paleomagnetism. Colin Blakemore (1944–2022): British neurobiologist, specialising in vision and the development of the brain, who is Professor of Neuroscience and Philosophy in the School of Advanced Study, University of London and Emeritus Professor of Neuroscience at the University of Oxford. Christian Bohr (1855–1911): Danish physician; father of physicist and Nobel laureate Niels Bohr, and of mathematician Harald Bohr; grandfather of physicist and Nobel laureate Aage Bohr. Christian Bohr is known for having characterized respiratory dead space and described the Bohr effect. Niels Bohr (1885–1962): Danish physicist. Best known for his foundational contributions to understanding atomic structure and quantum mechanics, for which he received the Nobel Prize in Physics in 1922. 
Sir Hermann Bondi KCB, FRS (1919–2005): Anglo-Austrian mathematician and cosmologist, best known for co-developing the steady-state theory of the universe and important contributions to the theory of general relativity. Paul D. Boyer (1918–2018): American biochemist and Nobel Laureate in Chemistry in 1997. Sydney Brenner (1927–2019): South African molecular biologist and a 2002 Nobel Prize in Physiology or Medicine laureate, shared with Bob Horvitz and John Sulston. Brenner made significant contributions to work on the genetic code, and other areas of molecular biology, while working in the Medical Research Council (MRC) Laboratory of Molecular Biology in Cambridge, England. Calvin Bridges (1889–1938): American geneticist, known especially for his work on fruit fly genetics. Percy Williams Bridgman (1882–1961): American physicist who won the 1946 Nobel Prize in Physics for his work on the physics of high pressures. Louis de Broglie (1892–1987): French physicist who made groundbreaking contributions to quantum theory and won the Nobel Prize for Physics in 1929. Ruth Mack Brunswick (1897–1946): American psychologist, a close confidant of and collaborator with Sigmund Freud. Mario Bunge (1919–2020): Argentine-Canadian philosopher and physicist. His philosophical writings combined scientific realism, systemism, materialism, emergentism, and other principles. Sir Frank Macfarlane Burnet FRS FAA FRSNZ (1899–1985): Australian virologist best known for his contributions to immunology. He won the Nobel Prize in 1960 for predicting acquired immune tolerance and was best known for developing the theory of clonal selection. Geoffrey Burnstock (1929–2020): Australian neurobiologist and President of the Autonomic Neuroscience Centre of the UCL Medical School. He is best known for discovering purinergic signaling in the 1970s and coining the term, and he played a key role in the discovery of ATP as a neurotransmitter. 
== C == Robert Cailliau (1947–): Belgian informatics engineer and computer scientist who, together with Sir Tim Berners-Lee, developed the World Wide Web. Sir Paul Callaghan (1947–2012): New Zealand physicist who, as the founding director of the MacDiarmid Institute for Advanced Materials and Nanotechnology at Victoria University of Wellington, held the position of Alan MacDiarmid Professor of Physical Sciences and was President of the International Society of Magnetic Resonance. Sean B. Carroll (1960–): American evolutionary developmental biologist, author, educator and executive producer. He is the Allan Wilson Professor of Molecular Biology and Genetics at the University of Wisconsin–Madison. Sean M. Carroll (1966–): American cosmologist and theoretical physicist specializing in dark energy and general relativity. Raymond Cattell (1905–1998): British and American psychologist, known for his psychometric research into intrapersonal psychological structure and his exploration of many areas within empirical psychology. Cattell authored, co-authored, or edited almost 60 scholarly books, more than 500 research articles, and over 30 standardized psychometric tests, questionnaires, and rating scales and was among the most productive, but controversial psychologists of the 20th century. James Chadwick (1891–1974): English physicist. He won the 1935 Nobel Prize in Physics for his discovery of the neutron. Subrahmanyan Chandrasekhar (1910–1995): Indian-American astrophysicist known for his theoretical work on the structure and evolution of stars. He was awarded the Nobel Prize in Physics in 1983. Georges Charpak (1924–2010): French physicist who was awarded the Nobel Prize in Physics in 1992. Boris Chertok (1912–2011): Prominent Soviet and Russian rocket designer, responsible for control systems of a number of ballistic missiles and spacecraft. 
He was the author of a four-volume book Rockets and People, the definitive source of information about the history of the Soviet space program. William Kingdon Clifford FRS (1845–1879): English mathematician and philosopher, co-introducer of geometric algebra, the first to suggest that gravitation might be a manifestation of an underlying geometry, and coiner of the expression "mind-stuff". Samuel T. Cohen (1921–2010): American physicist who invented the W70 warhead and is generally credited as the father of the neutron bomb. John Horton Conway (1937–2020): British mathematician active in the theory of finite groups, knot theory, number theory, combinatorial game theory and coding theory. He is best known for the invention of the cellular automaton called Conway's Game of Life. Sir John Cornforth FRS, FAA (1917–2013): Australian–British chemist who won the Nobel Prize in Chemistry in 1975 for his work on the stereochemistry of enzyme-catalysed reactions. Jan Baudouin de Courtenay (1845–1929): Polish linguist and Slavist, best known for his theory of the phoneme and phonetic alternations. Jerry Coyne (1949–): American evolutionary biologist and professor, known for his books on evolution and commentary on the intelligent design debate. Francis Crick (1916–2004): English molecular biologist, physicist, and neuroscientist; noted for being one of the co-discoverers of the structure of the DNA molecule in 1953. He was awarded the Nobel Prize in Physiology or Medicine in 1962. George Washington Crile (1864–1943): American surgeon. Crile is now formally recognized as the first surgeon to have succeeded in a direct blood transfusion. Pierre Curie (1859–1906): French physicist, a pioneer in crystallography, magnetism, piezoelectricity and radioactivity, and Nobel laureate. 
In 1903 he received the Nobel Prize in Physics with his wife, Marie Curie, and Henri Becquerel, "in recognition of the extraordinary services they have rendered by their joint researches on the radiation phenomena discovered by Professor Henri Becquerel". == D == Sir Howard Dalton FRS (1944–2008): British microbiologist, Chief Scientific Advisor to the UK's Department for Environment, Food and Rural Affairs from March 2002 to September 2007. Richard Dawkins (1941–): English evolutionary biologist, creator of the concept of the meme; outspoken atheist and populariser of science, author of The God Delusion and founder of the Richard Dawkins Foundation for Reason and Science. Christian de Duve (1917–2013): Belgian cytologist and biochemist. He made serendipitous discoveries of two cell organelles, the peroxisome and lysosome, for which he shared the 1974 Nobel Prize in Physiology or Medicine with Albert Claude and George E. Palade ("for their discoveries concerning the structural and functional organization of the cell"). In addition to discovering and naming the peroxisome and lysosome, on a single occasion in 1963 he coined the scientific terms "autophagy", "endocytosis", and "exocytosis". Wander Johannes de Haas (1878–1960): Dutch physicist and mathematician who is best known for the Shubnikov–de Haas effect, the de Haas–van Alphen effect and the Einstein–de Haas effect. Augustus De Morgan (1806–1871): British mathematician and logician. He formulated De Morgan's laws and introduced the term mathematical induction, making its idea rigorous. Arnaud Denjoy (1884–1974): French mathematician, noted for his contributions to harmonic analysis and differential equations. David Deutsch (1953–): Israeli-British quantum physicist at the University of Oxford. He pioneered the field of quantum computation by being the first person to formulate a description for a quantum Turing machine, as well as specifying an algorithm designed to run on a quantum computer. William G. 
Dever (1933–): American archaeologist, specialising in the history of Israel and the Near East in biblical times. Jared Diamond (1937–): American geographer, historian, and author best known for his popular science books. Paul Dirac (1902–1984): British theoretical physicist and one of the founders of quantum mechanics, who predicted the existence of antimatter and won the Nobel Prize in Physics in 1933. Carl Djerassi (1923–2015): Austrian-born Bulgarian-American chemist, novelist, and playwright best known for his contribution to the development of oral contraceptive pills. He also developed Pyribenzamine (tripelennamine), his first patent and one of the first commercial antihistamines. Emil du Bois-Reymond (1818–1896): German physician and physiologist, the discoverer of the nerve action potential, and the father of experimental electrophysiology. Eugene Dynkin (1924–2014): Soviet and American mathematician. He made contributions to the fields of probability and algebra, especially semisimple Lie groups, Lie algebras, and Markov processes. The Dynkin diagram, the Dynkin system, and Dynkin's lemma are named after him. == E == Paul Ehrenfest (1880–1933): Austrian and Dutch theoretical physicist, who made major contributions to the field of statistical mechanics and its relations with quantum mechanics, including the theory of phase transition and the Ehrenfest theorem. Albert Ellis (1913–2007): American psychologist who in 1955 developed Rational Emotive Behavior Therapy. Paul Erdős (1913–1996): Hungarian mathematician. He published more papers than any other mathematician in history, working with hundreds of collaborators. He worked on problems in combinatorics, graph theory, number theory, classical analysis, approximation theory, set theory, and probability theory. Daniel Everett (1951–): American linguistic anthropologist and author best known for his study of the Amazon Basin's Pirahã people and their language.
Hugh Everett III (1930–1982): American physicist who first proposed the many-worlds interpretation (MWI) of quantum physics, which he termed his "relative state" formulation. Hans Eysenck (1916–1997): German psychologist and author who is best remembered for his work on intelligence and personality, though he worked in a wide range of areas. He was the founding editor of the journal Personality and Individual Differences, and authored about 80 books and more than 1600 journal articles. == F == Gustav Fechner (1801–1887): German experimental psychologist. An early pioneer in experimental psychology and founder of psychophysics. Leon Festinger (1919–1989): American social psychologist famous for his theory of cognitive dissonance. Richard Feynman (1918–1988): American theoretical physicist, best known for his work in renormalizing quantum electrodynamics (QED) and his path integral formulation of quantum mechanics. He won the Nobel Prize in Physics in 1965. Irving Finkel (1951–): British philologist, Assyriologist, and the Assistant Keeper of Ancient Mesopotamian script, languages and cultures in the Department of the Middle East in the British Museum, where he specialises in cuneiform inscriptions on tablets of clay from ancient Mesopotamia. Sir Raymond Firth CNZM, FBA (1901–2002): New Zealand ethnologist, considered to have singlehandedly created a form of British economic anthropology. Helen Fisher (1945–2024): American biological anthropologist and member of the Center for Human Evolutionary Studies at Rutgers University. James Franck (1882–1964): German physicist. Won the Nobel Prize in Physics in 1925. Carlos Frenk (1951–): Mexican-British cosmologist and the Ogden Professor of Fundamental Physics at Durham University, whose main interests lie in the field of cosmology, studying galaxy formation and computer simulations of cosmic structure formation. Sigmund Freud (1856–1939): Austrian neurologist known as the father of psychoanalysis.
Jerome Isaac Friedman (1930–): American physicist who won the 1990 Nobel Prize in Physics along with Henry Kendall and Richard Taylor, for work showing an internal structure for protons later known to be quarks. Christer Fuglesang (1957–): Swedish astronaut and physicist. == G == George Gamow (1904–1968): Russian-born theoretical physicist and cosmologist. An early advocate and developer of Lemaître's Big Bang theory. Joseph Louis Gay-Lussac (1772–1850): French chemist and physicist. He is known mostly for two laws related to gases. Ivar Giaever (1929–): Norwegian-American physicist who shared the Nobel Prize in Physics in 1973 with Leo Esaki and Brian Josephson "for their discoveries regarding tunnelling phenomena in solids". Giaever is an institute professor emeritus at the Rensselaer Polytechnic Institute, a professor-at-large at the University of Oslo, and the president of Applied Biophysics. Sheldon Glashow (1932–): American theoretical physicist. He shared the 1979 Nobel Prize in Physics with Steven Weinberg and Abdus Salam for his contribution to the electroweak unification theory. Camillo Golgi (1843–1926): Italian physician, biologist, pathologist, scientist, and Nobel laureate. Several structures and phenomena in anatomy and physiology are named for him, including the Golgi apparatus, the Golgi tendon organ and the Golgi tendon reflex. He is recognized as the greatest neuroscientist and biologist of his time. Herb Grosch (1918–2010): Canadian-American computer scientist, perhaps best known for Grosch's law, which he formulated in 1950. David Gross (1941–): American theoretical physicist and string theorist who was awarded a Nobel Prize in Physics for his co-discovery of asymptotic freedom. == H == Jacques Hadamard (1865–1963): French mathematician who made major contributions in number theory, complex function theory, differential geometry and partial differential equations. 
Jonathan Haidt (c.1964–): Associate professor of psychology at the University of Virginia, focusing on the psychological bases of morality across different cultures, and author of The Happiness Hypothesis. J. B. S. Haldane (1892–1964): British polymath well known for his work in physiology, genetics and evolutionary biology. He was also a mathematician who made innovative contributions to statistics and to biometry education in India. Haldane was the first to construct human gene maps for haemophilia and colour blindness on the X chromosome, and one of the first to propose the idea of abiogenesis. Alan Hale (1958–): American professional astronomer who co-discovered Comet Hale–Bopp; he specializes in the study of sun-like stars and the search for extra-solar planetary systems, and has side interests in the fields of comets and near-Earth asteroids. Sir James Hall (1761–1832): Scottish geologist and chemist, President of the Royal Society of Edinburgh and a leading figure in the Scottish Enlightenment. G. Stanley Hall (1846–1924): Pioneering American psychologist and educator. His interests focused on childhood development and evolutionary theory. Hall was the first president of the American Psychological Association and the first president of Clark University. Beverly Halstead (1933–1991): British paleontologist and populariser of science. Gerhard Armauer Hansen (1841–1912): Norwegian physician, remembered for his identification of the bacterium Mycobacterium leprae in 1873 as the causative agent of leprosy. G. H. Hardy (1877–1947): Prominent English mathematician, known for his achievements in number theory and mathematical analysis. Herbert A. Hauptman (1917–2011): American mathematician. Along with Jerome Karle, he won the Nobel Prize in Chemistry in 1985. Stephen Hawking (1942–2018): British theoretical physicist, cosmologist, author, and Director of Research at the Centre for Theoretical Cosmology within the University of Cambridge.
Ewald Hering (1834–1918): German physiologist who did much research into color vision, binocular perception and eye movements. He proposed opponent color theory in 1892. Peter Higgs (1929–2024): British theoretical physicist, recipient of the Dirac Medal and Prize, known for his prediction of the existence of a new particle, the Higgs boson, nicknamed the "God particle". He won the Nobel Prize in Physics in 2013. Roald Hoffmann (1937–): American theoretical chemist who won the 1981 Nobel Prize in Chemistry. Lancelot Hogben (1895–1975): English experimental zoologist and medical statistician, now best known for his popularising books on science, mathematics and language. Brigid Hogan FRS (1943–): British developmental biologist noted for her contributions to stem cell research and transgenic technology and techniques. She is the George Barth Geller Professor of Research in Molecular Biology and Chair of the Department of Cell Biology at Duke University, as well as the director of the Duke Stem Cell Program. Fred Hollows (1929–1993): New Zealand and Australian ophthalmologist. He became known for his work in restoring eyesight for countless thousands of people in Australia and many other countries. Fred Hoyle (1915–2001): English astronomer noted primarily for his contribution to the theory of stellar nucleosynthesis and his often controversial stance on other cosmological and scientific matters—in particular his rejection of the "Big Bang" theory, a term originally coined by him on BBC radio. Nicholas Humphrey (1943–): English neuropsychologist, working on consciousness and belief in the supernatural from a Darwinian perspective, and primatological research into the Machiavellian intelligence hypothesis. 
Sir Julian Huxley FRS (1887–1975): English evolutionary biologist, a leading figure in the mid-twentieth century evolutionary synthesis, Secretary of the Zoological Society of London (1935–1942), the first Director of UNESCO, and a founding member of the World Wildlife Fund. == I == Saiful Islam (1963–): British materials chemist, a Professor of Materials Chemistry at the University of Bath and a recipient of the Royal Society Wolfson Research Merit award. == J == John Hughlings Jackson FRS (1835–1911): English neurologist. He is best known for his research on epilepsy. Jackson was one of the founders of the important Brain journal, which was dedicated to the interaction between experimental and clinical neurology (still being published today). François Jacob (1920–2013): French biologist who, together with Jacques Monod, originated the idea that control of enzyme levels in all cells occurs through feedback on transcription. He shared the 1965 Nobel Prize in Physiology or Medicine with Jacques Monod and André Lwoff. Donald Johanson (1943–): American paleoanthropologist, who is known for discovering – with Yves Coppens and Maurice Taieb – the fossil of a female hominin australopithecine known as "Lucy" in the Afar Triangle region of Hadar, Ethiopia. Frédéric Joliot-Curie (1900–1958): French physicist and Nobel Laureate in Chemistry in 1935. Irène Joliot-Curie (1897–1956): French scientist, the daughter of Marie Curie and Pierre Curie. She, along with her husband, Frédéric Joliot-Curie, was awarded the Nobel Prize for Chemistry in 1935. Steve Jones (1944–): Welsh geneticist, professor of genetics and head of the biology department at University College London, and television presenter and a prize-winning author on biology, especially evolution; one of the best known contemporary popular writers on evolution. == K == Daniel Kahneman (1934–2024): Israeli psychologist and behavioral economist notable for his work on the psychology of judgment and decision-making.
Paul Kammerer (1880–1926): Austrian biologist who studied and advocated the now abandoned Lamarckian theory of inheritance – the notion that organisms may pass to their offspring characteristics they have acquired in their lifetime. Samuel Karlin (1924–2007): American mathematician. He did extensive work in mathematical population genetics. Grete Kellenberger-Gujer (1919–2011): Swiss molecular biologist known for her discoveries on genetic recombination and restriction modification system of DNA. She was a pioneer in the genetic analysis of bacteriophages and contributed to the early development of molecular biology. Alfred Kinsey (1894–1956): American biologist, sexologist and professor of entomology and zoology. Melanie Klein (1882–1960): Austrian-born British psychoanalyst who devised novel therapeutic techniques for children that influenced child psychology and contemporary psychoanalysis. She was a leading innovator in theorizing object relations theory. Alfred Dillwyn Knox (1884–1943): British classics scholar and papyrologist at King's College, Cambridge, and a cryptologist. As a member of the World War I Room 40 codebreaking unit, he helped decrypt the Zimmermann Telegram, which brought the United States into the war. At the end of World War I, he joined the Government Code and Cypher School (GCCS) and on 25 July 1939, as Chief Cryptographer, participated in the Polish-French-British Warsaw meeting that disclosed Polish achievements, since December 1932, in the continuous breaking of German Enigma ciphers, thus kick-starting the British World War II Ultra operations at Bletchley Park. Damodar Kosambi (1907–1966): Indian mathematician, statistician, historian and polymath who contributed to genetics by introducing Kosambi's map function. Lawrence Krauss (1954–): American theoretical physicist, professor of physics at Arizona State University and popularizer of science. 
Krauss speaks regularly at atheist conferences such as Beyond Belief and Atheist Alliance International. Harold Kroto (1939–2016): 1996 Nobel Laureate in Chemistry. Ray Kurzweil (1948–): American inventor, futurist, and author. He is the author of several books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. == L == Jacques Lacan (1901–1981): French psychoanalyst and psychiatrist who made prominent contributions to psychoanalysis and philosophy, and has been called "the most controversial psycho-analyst since Freud". Joseph Louis Lagrange (1736–1813): Italian mathematician and astronomer who made significant contributions to the fields of analysis, number theory, and both classical and celestial mechanics. Jérôme Lalande (1732–1807): French astronomer and writer. Lev Landau (1908–1968): Russian physicist. He received the 1962 Nobel Prize in Physics for his development of a mathematical theory of superfluidity. Alexander Langmuir (1910–1993): American epidemiologist, renowned for creating the Epidemic Intelligence Service. Paul Lauterbur (1929–2007): American chemist who shared the Nobel Prize in Physiology or Medicine in 2003 with Peter Mansfield for his work which made the development of magnetic resonance imaging (MRI) possible. Richard Leakey (1944–2022): Kenyan paleoanthropologist, conservationist, and politician. Félix Le Dantec (1869–1917): French biologist and philosopher of science, noted for his work on bacteria. Leon M. Lederman (1922–2018): American physicist who, along with Melvin Schwartz and Jack Steinberger, received the Nobel Prize for Physics in 1988 for their joint research on neutrinos. Jean-Marie Lehn (1939–): French chemist. He received the 1987 Nobel Prize in Chemistry, together with Donald Cram and Charles Pedersen.
Sir John Leslie (1766–1832): Scottish mathematician and physicist best remembered for his research into heat; he was the first person to artificially produce ice, and gave the first modern account of capillary action. Nikolai Lobachevsky (1792–1856): Russian mathematician, known for his works on hyperbolic geometry. Jacques Loeb (1859–1924): German-born American physiologist and biologist. H. Christopher Longuet-Higgins FRS (1923–2004): English theoretical chemist and a cognitive scientist. == M == Paul MacCready (1925–2007): American aeronautical engineer. He was the founder of AeroVironment and the designer of the human-powered aircraft that won the Kremer prize. Ernst Mach (1838–1916): Austrian physicist and philosopher, known for his contributions to physics such as the Mach number and the study of shock waves. Prasanta Chandra Mahalanobis FRS (1893–1972): Indian scientist and applied statistician. He is best remembered for the Mahalanobis distance, a statistical measure, and for being one of the members of the first Planning Commission of independent India. He made pioneering studies in anthropometry in India and founded the Indian Statistical Institute. Paolo Mantegazza (1831–1910): Italian neurologist, physiologist and anthropologist, noted for his experimental investigation of the effects of coca leaves on the human psyche. Andrey Markov (1856–1922): Russian mathematician, best known for his work on stochastic processes. Phil Mason (1972–): British chemist at the Institute of Organic Chemistry and Biochemistry of the Czech Academy of Sciences, who is known for his online activities and YouTube career. Abraham Maslow (1908–1970): American psychologist who created Maslow's hierarchy of needs. He was a professor of psychology at Brandeis University, Brooklyn College, New School for Social Research and Columbia University. Hiram Stevens Maxim (1840–1916): American-born British inventor.
He invented the Maxim gun, the first portable, fully automatic machine gun, and other devices, including an elaborate mousetrap. Ernst Mayr (1904–2005): Renowned taxonomist, tropical explorer, ornithologist, historian of science, and naturalist. He was one of the 20th century's leading evolutionary biologists. John McCarthy (1927–2011): American computer scientist and cognitive scientist who received the Turing Award in 1971 for his major contributions to the field of artificial intelligence (AI). He coined the term "artificial intelligence" in his 1955 proposal for the 1956 Dartmouth Conference and was the inventor of the Lisp programming language. Sir Peter Medawar (1915–1987): Nobel Prize-winning British scientist best known for his work on how the immune system rejects or accepts tissue transplants. Simon van der Meer (1925–2011): Dutch particle accelerator physicist who shared the Nobel Prize in Physics in 1984 with Carlo Rubbia for contributions to the CERN project which led to the discovery of the W and Z particles, two of the most fundamental constituents of matter. Élie Metchnikoff (1845–1916): Russian biologist, zoologist and protozoologist, best known for his research into the immune system. Metchnikoff received the Nobel Prize in Medicine in 1908, shared with Paul Ehrlich. Marvin Minsky (1927–2016): American cognitive scientist and computer scientist in the field of artificial intelligence (AI) at MIT. Peter D. Mitchell (1920–1992): British biochemist who won the 1978 Nobel Prize in Chemistry. His mother was an atheist and he himself became an atheist at the age of 15. Jacob Moleschott (1822–1893): Dutch physiologist and writer on dietetics. Gaspard Monge (1746–1818): French mathematician. Monge is the inventor of descriptive geometry. Jacques Monod (1910–1976): French biologist who won the Nobel Prize in Physiology or Medicine in 1965 for discoveries concerning genetic control of enzyme and virus synthesis.
Rita Levi-Montalcini (1909–2012): Italian neurologist who, together with colleague Stanley Cohen, received the 1986 Nobel Prize in Physiology or Medicine for their discovery of nerve growth factor (NGF). Joseph-Michel Montgolfier (1740–1810): French chemist and paper-manufacturer. In 1783, he made the first ascent in a balloon (inflated with warm air). Thomas Hunt Morgan (1866–1945): American evolutionary biologist, geneticist and embryologist. He won the Nobel Prize in Physiology or Medicine in 1933 for discoveries concerning the role the chromosome plays in heredity. Desmond Morris (1928–): English zoologist and ethologist, famous for describing human behaviour from a zoological perspective in his books The Naked Ape and The Human Zoo. David Morrison (1940–): American astronomer and senior scientist at the Solar System Exploration Research Virtual Institute, at NASA Ames Research Center, whose research interests include planetary science, astrobiology, and near-Earth objects. Luboš Motl (1973–): Theoretical physicist and string theorist. He said he is a Christian atheist. Hermann Joseph Muller (1890–1967): American geneticist and educator, best known for his work on the physiological and genetic effects of radiation (X-ray mutagenesis). He won the Nobel Prize in Physiology or Medicine in 1946. PZ Myers (1957–): American evolutionary developmental biologist at the University of Minnesota and a blogger via his blog, Pharyngula. == N == John Forbes Nash, Jr. (1928–2015): American mathematician who made fundamental contributions to game theory, differential geometry, and partial differential equations. He shared the 1994 Nobel Memorial Prize in Economic Sciences with game theorists Reinhard Selten and John Harsanyi. Yuval Ne'eman (1925–2006): Israeli theoretical physicist, military scientist, and politician.
One of his greatest achievements in physics was his 1961 discovery of the classification of hadrons through the SU(3) flavour symmetry, now named the Eightfold Way, which was also proposed independently by Murray Gell-Mann. Ted Nelson (1937–): American pioneer of information technology, philosopher, and sociologist who coined the terms hypertext and hypermedia in 1963 and published them in 1965. Alfred Nobel (1833–1896): Swedish chemist, engineer, inventor, businessman, and philanthropist who is known for inventing dynamite and holding 355 patents. He bequeathed his fortune to establish the Nobel Prizes. Paul Nurse (1949–): English geneticist, President of the Royal Society and Chief Executive and Director of the Francis Crick Institute. He was awarded the 2001 Nobel Prize in Physiology or Medicine along with Leland Hartwell and Tim Hunt for their discoveries of protein molecules that control the division (duplication) of cells in the cell cycle. == O == Mark Oliphant (1901–2000): Australian physicist and humanitarian. He played a fundamental role in the first experimental demonstration of nuclear fusion and also the development of the atomic bomb. Alexander Oparin (1894–1980): Soviet biochemist. Frank Oppenheimer (1912–1985): American particle physicist, professor of physics at the University of Colorado, and the founder of the Exploratorium in San Francisco. A younger brother of renowned physicist J. Robert Oppenheimer, Frank Oppenheimer conducted research on aspects of nuclear physics during the time of the Manhattan Project, and made contributions to uranium enrichment. J. Robert Oppenheimer (1904–1967): American theoretical physicist and professor of physics at the University of California, Berkeley; along with Enrico Fermi, he is often called the "father of the atomic bomb" for his role in the Manhattan Project.
Oppenheimer's achievements in physics include the Born–Oppenheimer approximation for molecular wavefunctions, work on the theory of electrons and positrons, the Oppenheimer–Phillips process in nuclear fusion, and the first prediction of quantum tunneling. With his students he made important contributions to the modern theory of neutron stars and black holes, as well as to quantum mechanics, quantum field theory, and the interactions of cosmic rays. Wilhelm Ostwald (1853–1932): Baltic German chemist. He received the Nobel Prize in Chemistry in 1909 for his work on catalysis, chemical equilibria and reaction velocities. He, along with Jacobus Henricus van 't Hoff and Svante Arrhenius, is usually credited with being one of the modern founders of the field of physical chemistry. == P == Linus Pauling (1901–1994): American chemist, Nobel Laureate in Chemistry (1954) and Peace (1962). John Allen Paulos (1945–): Professor of mathematics at Temple University in Philadelphia and writer, author of Irreligion: A Mathematician Explains Why the Arguments for God Just Don't Add Up (2007). Ivan Pavlov (1849–1936): Nobel Prize–winning Russian physiologist, psychologist, and physician, widely known for first describing the phenomenon of classical conditioning. Ruby Payne-Scott (1912–1981): Australian pioneer in radiophysics and radio astronomy, and the first female radio astronomer. Judea Pearl (1936–): Israeli American computer scientist and philosopher, best known for championing the probabilistic approach to artificial intelligence and the development of Bayesian networks. He won the Turing Award in 2011. Karl Pearson FRS (1857–1936): Influential English mathematician and biostatistician. He has been credited with establishing the discipline of mathematical statistics. He founded the world's first university statistics department at University College London in 1911, and contributed significantly to the field of biometrics, meteorology, theories of social Darwinism and eugenics.
Sir Roger Penrose (1931–): English mathematical physicist and Emeritus Rouse Ball Professor of Mathematics at the Mathematical Institute, University of Oxford and Emeritus Fellow of Wadham College. He is renowned for his work in mathematical physics, in particular his contributions to general relativity and cosmology. He is also a recreational mathematician and philosopher. Francis Perrin (1901–1992): French physicist, co-establisher of the possibility of nuclear chain reactions and nuclear energy production. Jean Baptiste Perrin (1870–1942): Nobel Prize–winning French physicist. Max Perutz (1914–2002): Austrian-born British molecular biologist, who shared the 1962 Nobel Prize for Chemistry with John Kendrew, for their studies of the structures of hemoglobin and globular proteins. Robert Phelps (1926–2013): American mathematician who was known for his contributions to analysis, particularly to functional analysis and measure theory. He was a professor of mathematics at the University of Washington from 1962 until his death. Steven Pinker (1954–): Canadian-American psychologist, psycholinguist, and popular science author. Norman Pirie FRS (1907–1997): British biochemist and virologist co-discoverer in 1936 of viral crystallization, an important milestone in understanding DNA and RNA. Henri Poincaré (1854–1912): French mathematician, theoretical physicist, engineer, and philosopher of science. He is often described as a polymath, and in mathematics as The Last Universalist, since he excelled in all fields of the discipline as it existed during his lifetime. Carolyn Porco (1953–): American planetary scientist, known for her work in the exploration of the outer Solar System, beginning with her imaging work on the Voyager missions to Jupiter, Saturn, Uranus and Neptune in the 1980s. She led the imaging science team on the Cassini mission to Saturn. 
Donald Prothero (1954–): American geologist, paleontologist, and author who specializes in mammalian paleontology and magnetostratigraphy. He is the author or editor of more than 30 books and over 250 scientific papers, including five geology textbooks. == R == Isidor Isaac Rabi (1898–1988): American physicist who won the Nobel Prize in Physics in 1944 for his discovery of nuclear magnetic resonance, and was also one of the first scientists in the US to work on the cavity magnetron, which is used in microwave radar and microwave ovens. Frank P. Ramsey (1903–1930): British mathematician who also made significant contributions in philosophy and economics. Lisa Randall (1962–): American theoretical physicist working in particle physics and cosmology, and the Frank B. Baird, Jr. Professor of Science on the physics faculty of Harvard University. Marcus J. Ranum (1962–): American computer and network security researcher and industry leader. He is credited with a number of innovations in firewalls. Grote Reber (1911–2002): American astronomer and a pioneer of radio astronomy. Martin Rees, Baron Rees of Ludlow (1942–): British cosmologist and astrophysicist. Wilhelm Reich (1897–1957): Austrian psychiatrist and psychoanalyst, known as one of the most radical figures in the history of psychiatry. Charles Francis Richter (1900–1985): American seismologist and physicist who is most famous as the creator of the Richter magnitude scale, which, until the development of the moment magnitude scale in 1979, quantified the size of earthquakes. Alice Roberts (1973–): English evolutionary biologist, biological anthropologist, and science communicator at the University of Birmingham. Mark Roberts (1961–): English archaeologist specializing in the study of the Palaeolithic, best known for his discovery and subsequent excavations at the Lower Palaeolithic site of Boxgrove Quarry in southern England. Richard J. Roberts (1943–): British biochemist and molecular biologist. 
He won the Nobel Prize in Physiology or Medicine in 1993 for the discovery of introns in eukaryotic DNA and the mechanism of gene-splicing. Carl Rogers (1902–1987): American psychologist and among the founders of the humanistic approach to psychology. Rogers is widely considered to be one of the founding fathers of psychotherapy research and was honored for his pioneering research with the Award for Distinguished Scientific Contributions by the American Psychological Association in 1956. Marshall Rosenbluth (1927–2003): American physicist, nicknamed "the Pope of Plasma Physics". He created the Metropolis algorithm in statistical mechanics, derived the Rosenbluth formula in high-energy physics, and laid the foundations for instability theory in plasma physics. Bertrand Russell (1872–1970): British philosopher, logician, mathematician, historian, writer, social critic and political activist. He is considered one of the founders of analytic philosophy along with his predecessor Gottlob Frege, colleague G. E. Moore, and his protégé Ludwig Wittgenstein. He is widely held to be one of the 20th century's premier logicians. With A. N. Whitehead he wrote Principia Mathematica, an attempt to create a logical basis for mathematics. His philosophical essay "On Denoting" has been considered a "paradigm of philosophy". His work has had a considerable influence on logic, mathematics, set theory, linguistics, artificial intelligence, cognitive science, computer science (see type theory and type system), and philosophy, especially the philosophy of language, epistemology, and metaphysics. Adam Rutherford (1975–): British geneticist, author, and broadcaster. He was an audio-visual content editor for the journal Nature for a decade, is a frequent contributor to the newspaper The Guardian, hosts the BBC Radio 4 programme Inside Science, has produced several science documentaries and has published books related to genetics and the origin of life. 
== S == Oliver Sacks (1933–2015): United States-based British neurologist, who has written popular books about his patients, the most famous of which is Awakenings. Carl Sagan (1934–1996): American astronomer and astrochemist, a highly successful popularizer of astronomy, astrophysics, and other natural sciences, a pioneer of exobiology, and a promoter of SETI. Although Sagan has been identified as an atheist according to some definitions, he rejected the label, stating "An atheist has to know a lot more than I know." He was an agnostic who, while maintaining that the idea of a creator of the universe was difficult to disprove, nevertheless disbelieved in God's existence, pending sufficient evidence. Meghnad Saha (1893–1956): Indian astrophysicist noted for his development in 1920 of the thermal ionization equation, which has remained fundamental in all work on stellar atmospheres. This equation has been widely applied to the interpretation of stellar spectra, which are characteristic of the chemical composition of the light source. The Saha equation links the composition and appearance of the spectrum with the temperature of the light source and can thus be used to determine either the temperature of the star or the relative abundance of the chemical elements investigated. Andrei Sakharov (1921–1989): Soviet nuclear physicist, dissident and human rights activist. He gained renown as the designer of the Soviet Union's Third Idea, a code name for Soviet development of thermonuclear weapons. Sakharov was an advocate of civil liberties and civil reforms in the Soviet Union. He was awarded the Nobel Peace Prize in 1975. The Sakharov Prize, which is awarded annually by the European Parliament for people and organizations dedicated to human rights and freedoms, is named in his honor. Robert Sapolsky (1957–): American neuroendocrinologist and professor of biology, neurology, and neurobiology at Stanford University. 
Mahendralal Sarkar (1833–1904): Indian physician and academic. Marcus du Sautoy (1965–): mathematician and holder of the Charles Simonyi Chair for the Public Understanding of Science. Hans Joachim Schellnhuber (1950–): German atmospheric physicist, climatologist and founding director of the Potsdam Institute for Climate Impact Research (PIK) and ex-chair of the German Advisory Council on Global Change (WBGU). Erwin Schrödinger (1887–1961): Austrian-Irish physicist and theoretical biologist. A pioneer of quantum mechanics and winner of the 1933 Nobel Prize for Physics. Laurent Schwartz (1915–2002): French mathematician, awarded the Fields Medal for his work on distributions. Dennis W. Sciama (1926–1999): British physicist who played a major role in developing British physics after the Second World War. His most significant work was in general relativity, with and without quantum theory, and black holes. He helped revitalize the classical relativistic alternative to general relativity known as Einstein-Cartan gravity. He is considered one of the fathers of modern cosmology. Nadrian Seeman (1945–2021): American nanotechnologist and crystallographer known for inventing the field of DNA nanotechnology. Celâl Şengör (1955–): Turkish geologist, currently on the faculty at Istanbul Technical University. Claude Shannon (1916–2001): American electrical engineer and mathematician, has been called "the father of information theory", and was the founder of practical digital circuit design theory. William Shockley (1910–1989): American physicist and inventor. Along with John Bardeen and Walter Houser Brattain, Shockley co-invented the transistor, for which all three were awarded the 1956 Nobel Prize in Physics. William James Sidis (1898–1944): American mathematician, cosmologist, inventor, linguist, historian and child prodigy. Boris Sidis (1867–1923): Russian American psychologist, physician, psychiatrist, and philosopher of education. 
Sidis founded the New York State Psychopathic Institute and the Journal of Abnormal Psychology. He was the father of child prodigy William James Sidis. Ethan Siegel (1978–): American theoretical astrophysicist and science writer, whose area of research focuses on quantum mechanics and the Big Bang theory. Herbert A. Simon (1916–2001): American Nobel laureate, was a political scientist, economist, sociologist, psychologist, computer scientist, and Richard King Mellon Professor—most notably at Carnegie Mellon University—whose research ranged across the fields of cognitive psychology, cognitive science, computer science, public administration, economics, management, philosophy of science, sociology, and political science, unified by studies of decision-making. Michael Smith (1932–2000): British-born Canadian biochemist and Nobel Laureate in Chemistry in 1993. John Maynard Smith (1920–2004): British theoretical evolutionary biologist and geneticist. Maynard Smith was instrumental in the application of game theory to evolution and theorised on other problems such as the evolution of sex and signalling theory. Oliver Smithies (1925–2017): British-born American Nobel Prize–winning geneticist and physical biochemist. He is known for introducing starch as a medium for gel electrophoresis in 1955 and for the discovery, simultaneously with Mario Capecchi and Martin Evans, of the technique of homologous recombination of transgenic DNA with genomic DNA, a much more reliable method of altering animal genomes than previously used, and the technique behind gene targeting and knockout mice. George Smoot (1945–): American astrophysicist and cosmologist who won the Nobel Prize in Physics in 2006 for his work on the Cosmic Background Explorer with John C. Mather that led to the measurement "of the black body form and anisotropy of the cosmic microwave background radiation". 
Alan Sokal (1955–): American professor of physics at New York University and professor of mathematics at University College London. To the general public he is best known for his criticism of postmodernism, resulting in the Sokal affair in 1996. Dan Sperber (1942–): French social and cognitive scientist, whose most influential work has been in the fields of cognitive anthropology and linguistic pragmatics. Robert Spitzer (1932–2015): American psychiatrist, Professor of Psychiatry at Columbia University, a major architect of the modern classification of mental disorders. Jack Steinberger (1921–2020): German-American-Swiss physicist and Nobel Laureate in 1988, co-discoverer of the muon neutrino. Hugo Steinhaus (1887–1972): Polish mathematician and educator. Victor J. Stenger (1935–2014): American physicist, emeritus professor of physics and astronomy at the University of Hawaii and adjunct professor of philosophy at the University of Colorado. Author of the book God: The Failed Hypothesis. Eleazar Sukenik (1889–1953): Israeli archaeologist and professor at the Hebrew University of Jerusalem, who undertook excavations in Jerusalem and recognised the importance of the Dead Sea Scrolls to Israel. John Sulston (1942–2018): British biologist. He is a joint winner of the 2002 Nobel Prize in Physiology or Medicine. Leonard Susskind (1940–): American theoretical physicist; a founding father of superstring theory and professor of theoretical physics at Stanford University. Dick Swaab (1944–): Dutch physician and neurobiologist (brain researcher). He is a professor of neurobiology at the University of Amsterdam and was until 2005 Director of the Netherlands Institute for Brain Research (Nederlands Instituut voor Hersenonderzoek) of the Royal Netherlands Academy of Arts and Sciences (Koninklijke Nederlandse Akademie van Wetenschappen). He is known for his book We Are Our Brains (2010). 
== T == Igor Tamm (1895–1971): Soviet physicist who received the 1958 Nobel Prize in Physics, jointly with Pavel Alekseyevich Cherenkov and Ilya Frank, for their 1934 discovery of Cherenkov radiation. Arthur Tansley (1871–1955): English botanist who was a pioneer in the science of ecology. Alfred Tarski (1901–1983): Polish logician, mathematician and philosopher, a prolific author best known for his work on model theory, metamathematics, and algebraic logic. Kip Thorne (1940–): American theoretical physicist and winner of the 2017 Nobel Prize in Physics, known for his contributions in gravitational physics and astrophysics and also for the popular-science book, Black Holes and Time Warps: Einstein's Outrageous Legacy. Nikolaas Tinbergen (1907–1988): Dutch ethologist and ornithologist who shared the 1973 Nobel Prize in Physiology or Medicine with Karl von Frisch and Konrad Lorenz for their discoveries concerning organization and elicitation of individual and social behaviour patterns in animals. Linus Torvalds (1969–): Finnish software engineer, creator of the Linux kernel. Alan Turing (1912–1954): English mathematician, computer scientist, and theoretical biologist who provided a formalization of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Matthew Turner (died ca. 1789): chemist, surgeon, teacher and radical theologian, author of the first published work of avowed atheism in Britain (1782). == U == Harold Urey (1893–1981): American physical chemist whose pioneering work on isotopes earned him the Nobel Prize in Chemistry in 1934. He played a significant role in the development of the atom bomb, but may be most prominent for his contribution to the study of the development of organic life from non-living matter. == V == Nikolai Vavilov (1887–1943): Russian and Soviet botanist and geneticist best known for having identified the centres of origin of cultivated plants. 
He devoted his life to the study and improvement of wheat, corn, and other cereal crops that sustain the global population. J. Craig Venter (1946–): American biologist and entrepreneur, one of the first researchers to sequence the human genome, and in 2010 the first to create a cell with a synthetic genome. Vladimir Vernadsky (1863–1945): Russian and Soviet mineralogist and geochemist who is considered one of the founders of geochemistry, biogeochemistry, and radiogeology. His ideas of the noosphere were an important contribution to Russian cosmism. Carl Vogt (1817–1895): German scientist, philosopher and politician who emigrated to Switzerland. Vogt published a number of notable works on zoology, geology and physiology. == W == W. Grey Walter (1910–1977): American-born British neurophysiologist famous for his work on brain waves, and a roboticist. James D. Watson (1928–): Molecular biologist, physiologist, zoologist, geneticist, Nobel laureate, and co-discoverer of the structure of DNA. John B. Watson (1878–1958): American psychologist who established the psychological school of behaviorism. Steven Weinberg (1933–2021): American theoretical physicist. He won the Nobel Prize in Physics in 1979 for the unification of electromagnetism and the weak force into the electroweak force. Victor Weisskopf (1908–2002): Austrian-American theoretical physicist, co-founder and board member of the Union of Concerned Scientists. Frank Whittle (1907–1996): English aerospace engineer, inventor, aviator and Royal Air Force officer. He is credited with independently inventing the turbojet engine (some years earlier than Germany's Dr. Hans von Ohain) and is regarded by many as the father of jet propulsion. Eugene Wigner (1902–1995): Hungarian-American theoretical physicist, engineer and mathematician. 
He received half of the Nobel Prize in Physics in 1963 "for his contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles". Arnold Wolfendale (1927–2020): British astronomer who served as Astronomer Royal from 1991 to 1995, and was Emeritus Professor in the Department of Physics at Durham University. Lewis Wolpert CBE FRS FRSL (1929–2021): British developmental biologist, author, and broadcaster. Steve Wozniak (1950–): co-founder of Apple Computer and inventor of the Apple I and Apple II. Elizur Wright (1804–1885): American mathematician and abolitionist, sometimes described as the "father of life insurance" for his pioneering work on actuarial tables. == Z == Oscar Zariski (1899–1986): American mathematician and one of the most influential algebraic geometers of the 20th century. Yakov Borisovich Zel'dovich (1914–1987): Soviet physicist born in Belarus. He played an important role in the development of Soviet nuclear and thermonuclear weapons, and made important contributions to the fields of adsorption and catalysis, shock waves, nuclear physics, particle physics, astrophysics, physical cosmology, and general relativity. Emile Zuckerkandl (1922–2013): Austrian-born biologist considered one of the founders of the field of molecular evolution, who co-introduced the concept of the "molecular clock", which enabled the neutral theory of molecular evolution. Konrad Zuse (1910–1995): German civil engineer, inventor and computer pioneer. His greatest achievement was the world's first programmable computer; the functional program-controlled Turing-complete Z3 became operational in May 1941. He is regarded as one of the inventors of the modern computer. Fritz Zwicky (1898–1974): Swiss astronomer and astrophysicist. 
== See also == List of nonreligious Nobel laureates Lists about skepticism == Notes and references == == External links == Twentieth Century Atheists on University of Cambridge's investigating atheism website
https://en.wikipedia.org/wiki/List_of_atheists_in_science_and_technology
Nothing Technology Limited (stylised in all caps) is a British consumer electronics manufacturer based in London. It was founded by Carl Pei, the co-founder of the Chinese smartphone maker OnePlus. On 25 February 2021, the company announced Teenage Engineering as a founding partner, mainly responsible for the brand's design aesthetic and its products. Investors in the company include iPod creator Tony Fadell, YouTube personality Casey Neistat, and GV (formerly Google Ventures). Nothing's first product, "Ear (1)", was launched on 27 July 2021. In 2024, Nothing doubled its annual revenue to more than $500 million and crossed $1 billion in lifetime sales. == History == On 16 October 2020, Carl Pei, who co-founded OnePlus alongside Pete Lau, announced his resignation so that he could start a new venture. Pei later raised up to $7 million from multiple investors to fund his venture, including iPod creator Tony Fadell, Twitch co-founder Kevin Lin, Reddit CEO Steve Huffman, and YouTuber Casey Neistat. Pei announced the company, Nothing, on 27 January 2021. On 15 February 2021, Nothing acquired the Essential Products trademarks and brand nearly a year after that company shut down operations. On 25 February 2021, the company announced its first founding partner, Teenage Engineering, responsible for the design aesthetic of the brand and its products. Nothing announced its first product on 27 July 2021, named the "ear (1)", a pair of wireless earbuds. On 13 October 2021, the company raised up to $50 million and also announced a partnership with Qualcomm. On 9 March 2022, the same day that Nothing secured Series B financing, the company announced that it would hold a press conference on 23 March. During that event, the company announced its first smartphone, the "phone (1)". On 10 December 2022, Nothing opened its first physical store in London's Soho district. 
In February 2023, during the Mobile World Congress (MWC) in Barcelona, Nothing announced that its next generation of phones would be powered by the Snapdragon 8+ Gen 1. The announcement highlighted the increased power and device price of the next smartphone release. On 22 March 2023, Nothing announced the release of its second-generation "ear (2)" wireless earbuds, which promised support for high-resolution wireless audio, improved battery life, and adaptive active noise cancellation. These were released to mostly positive reviews. Wired praised the detailed sound profile, distinctive design, and voice-assistant interaction, but criticized the treble reproduction, physical controls, and "so-so" noise cancellation. The Verge wrote: "At $149, the Ear 2 earbuds represent a well-rounded pair of midrange earbuds. They've got all the style of the Ear 1s but without their rough edges." A budget sub-brand named "CMF by Nothing" was announced in August 2023 (CMF stands for "Color, Material and Finish"). On 17 November 2023, Nothing released a messaging app that promised end-to-end encryption but was storing texts publicly in plaintext. Nothing took the app down within 24 hours. In April 2024, Nothing released its third-generation "Ear" and first-generation "Ear (a)" wireless earbuds. The Nothing Ear promises a refined audio experience compared to the previous Ear 2. PCMag wrote: "The Nothing Ear earphones offer better sound quality, longer battery life, and more Bluetooth codecs than their predecessors while maintaining an elegant, transparent design." In July 2024, CMF by Nothing announced the "CMF Phone 1", a budget smartphone with wide customizability. It was revealed that the CMF Phone 1 is manufactured in India. The phone itself is powered by the MediaTek Dimensity 7300 with 6 or 8 GB of RAM and 128 or 256 GB of storage, which can be expanded with a microSD card.
== Products == === Smartphones === ==== Phone (1) ==== On 23 March 2022, Nothing announced its first smartphone, named the "Phone (1)". The phone runs on the Android operating system and its user interface is named NothingOS. It went on sale on 21 July 2022. In June 2022, Nothing opened an invite-only pre-order for the "Phone (1)", which reached up to 100,000 registrations on the waiting list. The device, which was unveiled on 12 July in London, features a Qualcomm Snapdragon 778G+ chipset and a transparent design. ==== Phone (2) ==== On 11 July 2023, Nothing announced its second smartphone, named the "Phone (2)". It was released on 21 July 2023. It runs Android 13 with the NothingOS 2.0 skin on top, on a Qualcomm Snapdragon 8+ Gen 1 chipset. ==== Phone (2a) ==== Nothing announced its budget smartphone, the "Phone (2a)", on 5 March 2024. It came with the MediaTek Dimensity 7200 Pro chipset running Android 14 with the NothingOS 2.5 user interface, a 6.7-inch 120-Hz OLED display, and a 5000 mAh battery. The Phone (2a) surpassed 100,000 units sold on its first day after release. The Nothing Phone (2a) Plus, released in 2024, is an upgraded version of the standard Phone (2a) with enhanced features aimed at improving performance, display quality, and photography capabilities. Key upgrades include a MediaTek Dimensity 7350 Pro processor for smoother multitasking and better gaming performance, with higher benchmark scores than its predecessor. ==== Phone (3a) and Phone (3a) Pro ==== Nothing Phone (3a) and Nothing Phone (3a) Pro were announced on 4 March 2025. Both phones have 6.77-inch 120 Hz AMOLED displays, come in 128 GB and 256 GB storage variants, and share the same 5000 mAh battery and Snapdragon 7s Gen 3 (4 nm) chip. The main difference is the camera: the Pro version has a telephoto lens with 3x optical zoom, versus 2x zoom on the 3a. 
=== Audio products === ==== Ear 1 ==== The Nothing Ear 1, stylized as the "ear (1)", is Nothing's first product. Announced on 27 July 2021, the Ear 1 is a set of wireless earbuds. The earbuds connect over Bluetooth and offer up to 34 hours of battery life with the charging case and up to 5.7 hours on their own with ANC off, or 24 hours with the case and up to four hours on their own with ANC on. The earbuds went on sale on 17 August 2021, at $99/£99/€99. A Black version was also announced on 6 December 2021, and went on sale on 13 December. Nothing also announced on that day that the Ear 1 earbuds were now carbon neutral. On 18 October 2022, Nothing's CEO Carl Pei announced on X that the Ear 1's price would be increased to $149 starting on 26 October 2022 due to an increase in costs. ==== Ear (stick) ==== Ear (stick) is a pair of earbuds Nothing released on 4 November 2022. It is the second part of the Ear family and a lower-tier version of Ear (1), and does not include noise cancellation, transparency mode, or wireless charging. Ear (stick) launched at a price of $99. ==== Ear (2) ==== In March 2023, Nothing announced the release of its second-generation earbuds, the Nothing Ear (2). These new earphones support the LHDC 5.0 low-latency HD audio codec and come equipped with 11.6 mm speakers, similar to their predecessor. The earbuds were launched on 22 March 2023. ==== Ear ==== Nothing Ear is a 2024 refresh of the Nothing Ear (2), with improvements particularly in battery life and sound quality. ==== Ear (a) ==== The Nothing Ear (a) earbuds, introduced as a more budget-friendly option, retain many of the core features of the premium Nothing Ear model, such as active noise cancellation (ANC) and clear sound quality. They use traditional polymer drivers, unlike the pricier ceramic drivers in the main Ear line, but still deliver solid audio performance across a wide range of genres. 
The Ear (a) also has a longer battery life, achieving about 5.5 hours with ANC on compared to 5.2 hours in the higher-end model. Despite a less powerful chipset, the Ear (a) provides a reliable user experience, offering customizable touch controls, in-ear detection, and similar levels of noise cancellation (up to 50 dB). ==== Ear (open) ==== === Applications === ==== Nothing Chats ==== Nothing Chats was an instant messaging application released by Nothing Technology Limited. The application, in beta on release, was available on Google Play for less than a day in November 2023 before it was pulled by Nothing Technology Limited due to widespread coverage of the application's bugs, insecure networking, and unreliability. On 14 November 2023, Nothing announced a new messaging application which would become available on 17 November. The announcement stated that Nothing Chats was developed by a company called Sunbird and would have limited compatibility with Apple's iMessage. Prior to its launch, Android Authority and Ars Technica expressed skepticism regarding the company's claims of end-to-end encryption, reliable message delivery, and general application security, citing prior experiences with Sunbird. Ars Technica explicitly advised readers against giving their Apple username and password to a company which might not "understand and/or respect the security version of Pandora's box they are opening". On 17 November, shortly after launch, a third-party developer discovered that the app was using a version of a rival open-source project called BlueBubbles, but Sunbird had failed to procure a TLS certificate, so the application was sending users' service credentials via insecure HTTP. The vulnerability could allow a third party to intercept users' credentials (one at a time) and use them to impersonate the users to read and send messages. 
On 18 November, a different user reported that the app was sending all media attachments, including user images, to the error-logging service Sentry, and all data to Firebase, with the data being stored unencrypted in both places. At the time, the Firebase database contained over 630,000 media files. 9to5Google confirmed that anyone could intercept the application's Firebase credentials (from their own device or any other device), log into Firebase, and see all other users' past and real-time messages. Another party developed a script for downloading this data automatically and published the code to GitHub. Within 24 hours, Nothing pulled the Nothing Chats application from Google Play. === Drinks === Beer (5.1%) is a beer created by Nothing Technology. The beer was initially announced on 1 April 2023, and was made available in the UK in October 2023. The drink is brewed by Free time Beer Co., which is based in Wales. == References == == External links == Official website
https://en.wikipedia.org/wiki/Nothing_(company)
The Technology Administration (TA) was an agency in the United States Department of Commerce that worked with United States industries to promote economic competitiveness. The TA used the web domain technology.gov. The TA was most recently led by former Under Secretary of Commerce for Technology Robert Cresanti. The TA oversaw three agencies: National Institute of Standards and Technology (NIST) National Technical Information Service (NTIS) Office of Technology Policy (OTP) == History == The Technology Administration was created by the Stevenson-Wydler Technology Innovation Act of 1980, 15 U.S.C. 3704. The TA was abolished by the America COMPETES Act of 2007. NIST and NTIS continue on as agencies. The Office of Technology Policy was abolished. == Office of Technology Policy == The Office of Technology Policy (OTP) was an office of the Technology Administration. The office worked with industry to promote competitiveness and advocated integrated policies for maximizing the impact of technology on economic growth. The OTP's stated goals included the creation of high-wage jobs and improvements in the United States' quality of life. == See also == Title 15 of the Code of Federal Regulations == References ==
https://en.wikipedia.org/wiki/Technology_Administration
Dreame Technology (Chinese: 追觅科技; referred to simply as Dreame), with the full name Dreame Technology Co., Ltd., also known as Dreametech, is a Chinese household appliance manufacturer founded by Yu Hao in 2017. Its main products include cordless vacuums, scrubbers, hair dryers, robotic lawn mowers, and robot vacuum cleaners and mops. The company specializes in the production of vacuum cleaners. In addition, it owns and operates an app called Dreamehome. Outside of China, Dreame products are available in overseas markets such as Malaysia, Australia, and the US. After its establishment, the company was backed by Xiaomi, Yunfeng Capital, and Shunwei Capital. In October 2021, it raised $563 million in a Series C funding round. == History == The company originated as a campus organization called "Skyworks". In 2017, Dreame was officially formed. In December 2018, the firm launched its first product. In 2020, Dreame developed a 150,000-rpm digital motor. In August, it secured an investment from IDG Capital. In December, its Suzhou smart factory started operations. In October 2021, Dreame reached a partnership with Borussia Dortmund. The company introduced its first robot vacuum-mop in January 2022. In September 2023, it exhibited at the IFA. == References ==
https://en.wikipedia.org/wiki/Dreame_Technology
Palantir Technologies Inc. is an American publicly traded company that specializes in software platforms for big data analytics. Headquartered in Denver, Colorado, it was founded by Peter Thiel, Stephen Cohen, Joe Lonsdale, and Alex Karp in 2003. The company has four main projects: Palantir Gotham, Palantir Foundry, Palantir Apollo, and Palantir AIP. Palantir Gotham is an intelligence and defense tool used by militaries and counter-terrorism analysts. Its customers have included the United States Intelligence Community (USIC) and United States Department of Defense. Its software as a service (SaaS) is one of five offerings authorized for Mission Critical National Security Systems (IL5) by the U.S. Department of Defense. Palantir Foundry has been used for data integration and analysis by corporate clients such as Morgan Stanley, Merck KGaA, Airbus, Wejo, Lilium, PG&E and Fiat Chrysler Automobiles. Palantir Apollo is a platform to facilitate continuous integration/continuous delivery (CI/CD) across all environments. Palantir's original clients were federal agencies of the USIC. It has since expanded its customer base to serve international customers as well as state and local governments, and also private companies. == History == === 2003–2008: Founding and early years === Though the company is usually listed as having been founded in 2004, SEC filings state that Palantir was officially incorporated in May 2003 by Peter Thiel (co-founder of PayPal), who named the start-up after the "seeing stone" in Tolkien's legendarium. Thiel saw Palantir as a "mission-oriented company" which could apply software similar to PayPal's fraud recognition systems to "reduce terrorism while preserving civil liberties." In 2004, Thiel bankrolled the creation of a prototype by PayPal engineer Nathan Gettings and Stanford University students Joe Lonsdale and Stephen Cohen. That same year, Thiel hired Alex Karp, a former colleague of his from Stanford Law School, as chief executive officer. 
Headquartered in Palo Alto, California, the company initially struggled to find investors. According to Karp, Sequoia Capital chairman Michael Moritz doodled through an entire meeting, and a Kleiner Perkins executive lectured the founders on the inevitable failure of their company. The only early investments were $2 million from the U.S. Central Intelligence Agency's venture capital arm In-Q-Tel, and $30 million from Thiel himself and his venture capital firm, Founders Fund. Palantir developed its technology over three years through pilots facilitated by In-Q-Tel, drawing on computer scientists and analysts from intelligence agencies. The company argued that computers alone, using artificial intelligence, could not defeat an adaptive adversary. Instead, Palantir proposed having human analysts explore data from many sources, an approach it called intelligence augmentation. === 2010–2012: Expansion === In April 2010, Palantir announced a partnership with Thomson Reuters to sell the Palantir Metropolis product as "QA Studio" (a quantitative analysis tool). On June 18, 2010, Vice President Joe Biden and Office of Management and Budget Director Peter Orszag held a press conference at the White House announcing the success of the Recovery Accountability and Transparency Board (RATB) in fighting fraud in the stimulus. Biden credited the success to the Palantir software deployed by the federal government, and announced that the capability would be deployed at other government agencies, starting with Medicare and Medicaid. Revenues were estimated at $250 million in 2011. === 2013–2016: Additional funding === A document leaked to TechCrunch revealed that Palantir's clients as of 2013 included at least twelve groups within the U.S.
government, including the CIA, the DHS, the NSA, the FBI, the CDC, the Marine Corps, the Air Force, the Special Operations Command, the United States Military Academy, the Joint Improvised-Threat Defeat Organization and Allies, the Recovery Accountability and Transparency Board, and the National Center for Missing and Exploited Children. However, at the time, the United States Army continued to use its own data analysis tool. Also, according to TechCrunch, Palantir's software linked the databases of U.S. spy agencies such as the CIA and FBI for the first time; those databases had previously been siloed. In September 2013, Palantir disclosed over $196 million in funding, according to a U.S. Securities and Exchange Commission filing. It was estimated that the company would likely close almost $1 billion in contracts in 2014. CEO Alex Karp announced in 2013 that the company would not pursue an IPO, as going public would make "running a company like ours very difficult." In December 2013, the company began a round of financing, raising around $450 million from private funders. This raised the company's valuation to $9 billion, according to Forbes, which described Palantir as "among Silicon Valley's most valuable private technology companies." In December 2014, Forbes reported that Palantir was looking to raise $400 million in an additional round of financing, after the company had filed paperwork with the Securities and Exchange Commission the month before. The report was based on research by VC Experts. If completed, Forbes stated, Palantir's funding could reach a total of $1.2 billion. As of December 2014, the company's diverse private funders included Ken Langone and Stanley Druckenmiller, In-Q-Tel of the CIA, Tiger Global Management, and Founders Fund, a venture firm operated by Peter Thiel, the chairman of Palantir. The company was valued at $15 billion in November 2014.
In June 2015, BuzzFeed reported the company was raising up to $500 million in new capital at a valuation of $20 billion. By December 2015, it had raised a further $880 million, while the company was still valued at $20 billion. In February 2016, Palantir bought Kimono Labs, a startup which makes it easy to collect information from public-facing websites. In August 2016, Palantir acquired data visualization startup Silk. === 2020 === Palantir was one of four large technology firms to start working with the NHS on supporting COVID-19 efforts through the provision of software from Palantir Foundry, and by April 2020, several countries had used Palantir's technology to track and contain the contagion. Palantir also developed Tiberius, a software for vaccine allocation used in the United States. In August 2020, Palantir Technologies relocated its headquarters to Denver, Colorado. In December 2020, Palantir was awarded a $44.4 million contract by the U.S. Food and Drug Administration, boosting its shares by about 21%. === Valuation === The company was valued at $9 billion in early 2014, with Forbes stating that the valuation made Palantir "among Silicon Valley's most valuable private technology companies". In January 2015, the company was valued at $15 billion after an undisclosed round of funding of $50 million in November 2014. This valuation rose to $20 billion in late 2015 as the company closed an $880 million round of funding. In 2018, Morgan Stanley valued the company at $6 billion. On October 18, 2018, The Wall Street Journal reported that Palantir was considering an IPO in the first half of 2019 that could value the company at $41 billion. In July 2020, it was revealed the company had filed for an IPO. It ultimately went public on the New York Stock Exchange through a direct public offering on September 30, 2020, under the ticker symbol "PLTR". On September 6, 2024, S&P Global announced that the company would be added to the S&P 500 index.
Palantir's share price rose 14% the next trading day. On November 14, 2024, Palantir Technologies Inc. announced the transfer of its stock listing from the New York Stock Exchange (NYSE) to the Nasdaq Global Select Market, effective November 26, 2024. The company's Class A Common Stock continues to trade under the ticker symbol "PLTR". === Investments === According to investment bank RBC Capital Markets, the company has invested over $400 million into nearly two dozen special-purpose acquisition company (SPAC) targets, while bringing those companies on as customers. == Products == === Palantir Gotham === Released in 2008, Palantir Gotham is Palantir's defense and intelligence offering. It is an evolution of Palantir's longstanding work in the United States Intelligence Community, and is used by intelligence and defense agencies. Among other things, the software supports alerts, geospatial analysis, and prediction. Foreign customers include the Ukrainian military. Palantir Gotham has also been used as a predictive policing system, which has elicited some controversy over racial bias in its AI analytics. === Palantir Foundry === Palantir Foundry is a software platform offered for use in commercial and civil government sectors. It was popularized for use in the health sector through the National Covid Cohort Collaborative, a secure enclave of electronic health records from across the United States that produced hundreds of scientific manuscripts and won the NIH/FASEB Dataworks Grand Prize. Foundry was also used by NHS England in dealing with the COVID-19 pandemic in England, to analyze the operation of the vaccination program. A campaign was started against the company in June 2021 by Foxglove, a tech-justice nonprofit, because "Their background has generally been in contracts where people are harmed, not healed." Clive Lewis MP, supporting the campaign, said Palantir had an "appalling track record."
As of 2022, Foundry was also used for the administration of the UK Homes for Ukraine program, giving caseworkers employed by local authorities access to data held by the Department for Levelling Up, Housing and Communities, some of which is supplied by the UK Home Office. In November 2023, NHS England awarded Palantir a seven-year contract, worth £330 million, for a federated data platform to access data from different systems through a single system; the award was criticized by the British Medical Association, Doctors' Association UK and cybersecurity professionals. In 2024, medical professionals picketed outside NHS England's headquarters demanding cancellation of the deal. === Palantir Apollo === Palantir Apollo is a continuous delivery system that manages and deploys Palantir Gotham and Foundry. Apollo orchestrates updates to configurations and software in the Foundry and Gotham platforms using a micro-service architecture. === Other === The company has been involved in a number of business and consumer products, designing them in part or in whole. For example, in 2014, it premiered Insightics, which according to The Wall Street Journal "extracts customer spending and demographic information from merchants' credit-card records." It was created in tandem with credit processing company First Data. ==== Artificial Intelligence Platform (AIP) ==== In April 2023, the company launched its Artificial Intelligence Platform (AIP), which integrates large language models into privately operated networks. The company demonstrated its use in war, where a military operator could deploy operations and receive responses via an AI chatbot. Citing potential risks of generative artificial intelligence, CEO Karp said that the product would not let the AI independently carry out targeting operations, but would require human oversight. Commercial companies have also used AIP across many domains. Applications include infrastructure planning, network analysis, and resource allocation.
AIP lets users create LLM-based "agents" through a graphical interface. Agents can interact with a digital representation of a company's business known as an ontology, which lets the models access an organization's documents and other external resources. Users can define output schemas and test cases to validate AI-generated responses. AIP comes with a library of templates that can be extended by clients. Palantir also offers five-day boot camps to onboard prospective customers, and hosts an annual AIPCon conference featuring demos from existing customers. ==== TITAN ==== Palantir's TITAN (Tactical Intelligence Targeting Access Node) is a truck-mounted system advertised as a mobile ground station for AI applications. After being prototyped with IRAD funds, the project is now developed in partnership with Anduril Industries, Northrop Grumman, and other contractors. The company claims that TITAN can improve customers' ability to conduct long-range precision strikes. Palantir is under contract to deliver 10 units to the U.S. Army. ==== MetaConstellation ==== MetaConstellation is a satellite network that supports the deployment of AI models. Users can request information about specific locations, prompting the service to dispatch the necessary resources. MetaConstellation has been used by customers including the United States Northern Command. ==== Skykit ==== Skykit is a portable toolbox that supports intelligence operations in adverse environments. Palantir offers "Skykit Backpack" and "Skykit Maritime", designed to be transported by individuals and boats respectively. Contents include battery packs, a ruggedized laptop with company software, and a quadcopter supporting computer vision applications. Skykit can also connect to the MetaConstellation satellite network. In 2023, various sources reported that the Ukrainian military had begun receiving Skykit units.
==== Palantir Metropolis ==== Palantir Metropolis (formerly known as Palantir Finance) was software for data integration, information management and quantitative analytics. The software connected to commercial, proprietary and public data sets and discovered trends, relationships and anomalies, including through predictive analytics. Aided by 120 "forward-deployed engineers" from Palantir during 2009, Peter Cavicchia III of JPMorgan used Metropolis to monitor employee communications and alert the insider threat team when an employee showed any signs of potential disgruntlement; the insider alert team would then further scrutinize the employee and possibly conduct physical surveillance after hours with bank security personnel. The Metropolis team used emails, download activity, browser histories, GPS locations from JPMorgan-owned smartphones, and transcripts of digitally recorded phone conversations, searching, aggregating, sorting, and analyzing this information for specific keywords, phrases, and patterns of behavior. In 2013, Cavicchia may have shared this information with Frank Bisignano, who had become the CEO of First Data Corporation. Palantir Metropolis was succeeded by Palantir Foundry. == Customers == === Corporate use === Founded as a defense contractor, Palantir has since expanded to the private sector, and these activities now provide a large fraction of the company's revenue. Palantir reported 55% year-over-year growth in the U.S. commercial market in Q2 2024, and the company serves foreign commercial customers as well. Example applications include telecommunications and infrastructure planning. Palantir Metropolis was used by hedge funds, banks, and financial services firms. Palantir Foundry clients include Merck KGaA, Airbus and Ferrari. Palantir partner Information Warfare Monitor used Palantir software to uncover both the GhostNet and the Shadow Network. === U.S.
civil entities === Palantir's software was used by the Recovery Accountability and Transparency Board to detect and investigate fraud and abuse in the American Recovery and Reinvestment Act. Specifically, the Recovery Operations Center (ROC) used Palantir to integrate transactional data with open-source and private data sets describing the entities receiving stimulus funds. Other clients as of 2019 included Polaris Project, the Centers for Disease Control and Prevention, the National Center for Missing and Exploited Children, the National Institutes of Health, Team Rubicon, and the United Nations World Food Programme. In October 2020, Palantir began helping the federal government set up a system to track the manufacture, distribution and administration of COVID-19 vaccines across the country. === U.S. military, intelligence, and police === Palantir Gotham is used by counter-terrorism analysts at offices in the United States Intelligence Community and United States Department of Defense, and was used by cyber analysts at Information Warfare Monitor (responsible for the GhostNet and Shadow Network investigations) and by fraud investigators at the Recovery Accountability and Transparency Board, a former US federal agency which operated from 2009 to 2015. Other clients as of 2013 included DHS, NSA, FBI, the Marine Corps, the Air Force, Special Operations Command, West Point, and the Joint IED Defeat Organization and Allies. However, at the time the United States Army continued to use its own data analysis tool. Also, according to TechCrunch, "The U.S. spy agencies also employed Palantir to connect databases across departments. Before this, most of the databases used by the CIA and FBI were siloed, forcing users to search each database individually. Now everything is linked together using Palantir." U.S.
military intelligence used the Palantir product to improve its ability to predict the locations of improvised explosive devices in its war in Afghanistan. A small number of practitioners reported it to be more useful than the United States Army's program of record, the Distributed Common Ground System (DCGS-A). California Congressman Duncan D. Hunter complained of United States Department of Defense obstacles to its wider use in 2012. Palantir has also been reported to be working with various U.S. police departments, for example accepting a contract in 2013 to help the Northern California Regional Intelligence Center build a controversial license plate database for California. In 2012, the New Orleans Police Department partnered with Palantir to create a predictive policing program. In 2014, US Immigration and Customs Enforcement (ICE) awarded Palantir a $41 million contract to build and maintain a new intelligence system called Investigative Case Management (ICM) to track personal and criminal records of legal and illegal immigrants. The application was originally conceived by ICE's office of Homeland Security Investigations (HSI), and gives its users access to intelligence platforms maintained by other federal and private law enforcement entities. The system reached its "final operation capacity" under the Trump administration in September 2017. Palantir took over the Pentagon's Project Maven contract in 2019 after Google decided not to continue developing AI for unmanned drones used for bombings and intelligence. In 2024, Palantir emerged as a "Trump trade", expected to profit from federal spending on national security and immigration enforcement. === British National Health Service (NHS) === The firm has contracts relating to patient data from the British National Health Service.
In 2020, it was awarded an emergency non-competitive contract to mine COVID-19 patient data and consolidate government databases to help ministers and officials respond to the pandemic. The contract was valued at more than £23.5 million and was extended for two more years. The awarding of the contract without competition was heavily criticised, prompting the NHS to pledge an open and transparent procurement process for any future data contract. The firm was encouraged by Liam Fox "to expand their software business" in Britain. It was said to be "critical to the success of the vaccination and PPE programmes", but its involvement in the NHS was controversial among civil liberties groups. Conservative MP David Davis called for a judicial review into the sharing of patient data with Palantir. The procurement of a £480m Federated Data Platform by NHS England, launched in January 2023, has been described as a 'must win' for Palantir. The procurement has been described as a "farce" by civil liberties campaigners, who allege that Palantir has a competitive advantage because it "already has its feet under the table in NHS England" and benefits from a short procurement window. In April 2023 it was revealed that a consortium of UK companies had been unsuccessful in its bid for the contract. That month, Conservative MP David Davis publicly expressed his concern over the procurement process, stating that it could become a "battle royale". Davis is one of a dozen MPs pressing the government over privacy concerns with the use of data. Labour peer and former Health Minister Philip Hunt voiced his concern about Palantir's use of data, stating, "The current NHS and current government doesn't have a good track record of getting the details right, and the procurement shows no sign of going better." It was also reported in April 2023 that eleven NHS trusts had paused or suspended use of the Palantir Foundry software.
A spokesperson for the Department of Health and Social Care stated that this was due to "operational issues". In January 2023, Palantir's founder, Peter Thiel, called Britain's affection for the NHS "Stockholm Syndrome" during a speech to the Oxford Union, going on to say that the NHS "makes people sick". A Palantir spokesman clarified that Thiel was "speaking as a private individual" and his comments "do not in any way reflect the views of Palantir". In March 2023 it was revealed that NHS hospitals had been 'ordered' to share patient data with Palantir, prompting renewed criticism from civil liberties groups, citing accusations of supporting genocide, the company's privacy and security practices, and its strategy of "buying [its] way in". Campaign groups including the Doctors' Association UK, National Pensioners' Convention, and Just Treatment subsequently threatened legal action over NHS England's procurement of the FDP contract, citing concerns over the use of patient data. NHS England's former artificial intelligence chief, Indra Joshi, was recruited by Palantir in 2022. The company said it was planning to increase its team in the UK by 250. Palantir's UK head, Louis Mosley, grandson of the late British Union of Fascists leader Oswald Mosley, was quoted internally as saying that Palantir's strategy for entry into the British health industry was to "buy our way in" by acquiring smaller rival companies with existing relationships with the NHS in order to "take a lot of ground and take down a lot of political resistance." In November 2023, NHS England awarded Palantir a £330 million contract to create and manage the Federated Data Platform. In April 2024, medical professionals picketed at the entrance of NHS England's headquarters, demanding an end to the contract over Palantir's work with the Israel Defense Forces. === Europe === The Danish POL-INTEL predictive policing project has been operational since 2017 and is based on the Gotham system.
According to the AP, the Danish system "uses a mapping system to build a so-called heat map identifying areas with higher crime rates." The Gotham system has also been used by German state police in Hesse and by Europol. The Norwegian Customs uses Palantir Gotham to screen passengers and vehicles for control. Known inputs are prefiled freight documents, passenger lists, the national currency exchange database (which tracks all cross-border currency exchanges), the Norwegian Welfare Administration's employer and employee registry, the Norwegian shareholder registry, and 30 public databases from InfoTorg. InfoTorg provides access to more than 30 databases, including the Norwegian national citizen registry, the European Business Register, the Norwegian DMV vehicle registry, and various credit databases. These databases are supplemented by the Norwegian Customs Department's own intelligence reports, including the results of previous controls. The system is also augmented by data from public sources such as social media. ==== Ukraine ==== Karp claims to have been the first CEO of a large U.S. company to visit Ukraine after the 2022 Russian invasion. Palantir's technology has since been used close to the front lines, where it is used to shorten the "kill chain" in the Russo-Ukrainian War. According to a December 2022 report by The Times, Palantir's AI has allowed Ukraine to increase the accuracy, speed, and deadliness of its artillery strikes. Ukraine's prosecutor general's office also plans to use Palantir's software to help document alleged Russian war crimes. === Israel === The London office of Palantir was the target of demonstrations by pro-Palestine protesters in December 2023 after it was awarded a large contract to manage NHS data. The protesters accused Palantir of being "complicit" in war crimes during the 2023 Israel-Hamas war because it provides the Israel Defense Forces (IDF) with intelligence and surveillance services, including a form of predictive policing.
In January 2024, Palantir agreed to a strategic partnership with the IDF under which it will provide the IDF with services to assist its "war-related missions". Karp has been emphatic in his public support for Israel and has frequently criticized what he calls the inaction of other tech leaders. His position has prompted several employees to leave Palantir. In 2024, Irish politician and former Palantir employee Eoin Hayes was suspended by his party, the Social Democrats, after he was found to have misled the party about when he disposed of his shares in the company. Hayes had worked for Palantir between 2015 and 2017 but denied having any role relating to military contracts. The Social Democrats have been among the most vocal critics of the Israeli invasion of the Gaza Strip, and Hayes has been accused by a rival politician of "profiting from genocide". === Other === Palantir Gotham was used by cyber analysts at Information Warfare Monitor, a Canadian public-private venture which operated from 2003 to 2012. Palantir was used by the International Atomic Energy Agency (IAEA) to verify whether Iran was in compliance with the 2015 nuclear agreement. == Partnerships and contracts == === International Business Machines === On February 8, 2021, Palantir and IBM announced a new partnership that would use IBM's hybrid cloud data platform alongside Palantir's operations platform for building applications. The product, Palantir for IBM Cloud Pak for Data, was intended to simplify the process of building and deploying AI-integrated applications with IBM Watson, helping businesses and other users interpret and use large datasets without needing a strong technical background. Palantir for IBM Cloud Pak for Data was expected to be available for general use in March 2021. === Amazon (AWS) === On March 5, 2021, Palantir announced a partnership with Amazon Web Services (AWS). Palantir's ERP Suite was optimized to run on AWS, and the suite was used by BP.
=== Microsoft === On August 8, 2024, Palantir and Microsoft announced a partnership under which Palantir will deploy its suite of products on Microsoft Azure Government clouds. Palantir stock jumped more than 10% that day. === Babylon Health === Palantir took a stake in Babylon Health in June 2021. Ali Parsa told the Financial Times that "nobody" had brought some of the tech that Palantir owns "into the realm of biology and health care". == Controversies == === Algorithm development === i2 Inc. sued Palantir in federal court, alleging fraud, conspiracy, and copyright infringement over Palantir's algorithm. Shyam Sankar, Palantir's director of business development, had used a private investigation company as a cutout for obtaining i2's code. i2 settled out of court for $10 million in 2011. === WikiLeaks proposals (2010) === In 2010, Hunton & Williams LLP allegedly asked Berico Technologies, Palantir, and HBGary Federal to draft a response plan to "the WikiLeaks Threat." In early 2011, Anonymous publicly released HBGary-internal documents, including the plan. The plan proposed that Palantir software would "serve as the foundation for all the data collection, integration, analysis, and production efforts." It also included slides, allegedly authored by HBGary CEO Aaron Barr, which suggested "[spreading] disinformation" and "disrupting" Glenn Greenwald's support for WikiLeaks. Palantir CEO Alex Karp ended all ties to HBGary and issued a statement apologizing to "progressive organizations ... and Greenwald ... for any involvement that we may have had in these matters." Palantir placed an employee on leave pending a review by a third-party law firm; the employee was later reinstated. === Racial discrimination lawsuit (2016) === On September 26, 2016, the Office of Federal Contract Compliance Programs of the U.S. Department of Labor filed a lawsuit against Palantir alleging that the company discriminated against Asian job applicants on the basis of their race.
According to the lawsuit, the company "routinely eliminated" Asian applicants during the hiring process, even when they were "as qualified as white applicants" for the same jobs. Palantir settled the suit in April 2017 for $1.7 million while not admitting wrongdoing. === British Parliament inquiry (2018) === During questioning in front of the Digital, Culture, Media and Sport Select Committee, Christopher Wylie, the former research director of Cambridge Analytica, said that several meetings had taken place between Palantir and Cambridge Analytica, and that Alexander Nix, the chief executive of SCL, had facilitated their use of Aleksandr Kogan's data, which had been obtained from his app "thisisyourdigitallife" by mining personal surveys. Kogan later established Global Science Research to share the data with Cambridge Analytica and others. Wylie confirmed that employees from both Cambridge Analytica and Palantir used Kogan's Global Science Research data and harvested Facebook data together in the same offices. === ICE partnership (since 2014) === Palantir has come under criticism due to its partnership developing software for U.S. Immigration and Customs Enforcement. Palantir has responded that its software is not used to facilitate deportations. In a statement provided to The New York Times, the firm implied that because its contract was with HSI, a division of ICE focused on investigating criminal activities, it played no role in deportations. However, documents obtained by The Intercept show that this is not the case. According to these documents, Palantir's ICM software is considered "mission critical" to ICE. Other groups critical of Palantir include the Brennan Center for Justice, the National Immigration Project, the Immigrant Defense Project, the Tech Workers Coalition, and Mijente. One internal ICE report acquired by Mijente revealed that Palantir's software was critical to an operation to arrest the parents of children residing in the country illegally.
On September 28, 2020, Amnesty International released a report criticizing Palantir's failure to conduct human rights due diligence around its contracts with ICE, arguing that the company risked contributing to human rights violations against asylum-seekers and migrants. In 2025, Palantir was reported to be working closely with US Immigration and Customs Enforcement to enable mass deportation in support of the Trump administration. === "HHS Protect Now" and privacy concerns === The COVID-19 pandemic prompted tech companies to respond to governments' growing demand for citizen data in order to conduct contact tracing and to analyze patient data. Consequently, data collection companies such as Palantir were contracted to take part in pandemic data collection. Palantir's participation in "HHS Protect Now", a program launched by the United States Department of Health and Human Services to track the spread of the coronavirus, has attracted criticism from American lawmakers. Palantir's participation in COVID-19 response projects re-ignited debates over its controversial involvement in tracking illegal immigrants, especially its alleged effects on digital inequality and potential restrictions on online freedoms. Critics allege that confidential data acquired by HHS could be exploited by other federal agencies in unregulated and potentially harmful ways. Alternative proposals request greater transparency in the process to determine whether any of the aggregated data would be shared with US Immigration and Customs Enforcement to single out illegal immigrants. === Project Maven (since 2018) === After protests from its employees, Google chose not to renew its contract with the Pentagon to work on Project Maven, a secret artificial intelligence program aimed at the unmanned operation of aerial vehicles. Palantir then took over the project.
Critics warned that the technology could lead to autonomous weapons that decide who to strike without human input. == Corporate affairs == === Leadership === Jamie Fly, former Radio Free Europe president and CEO, serves as senior counselor to the CEO. Matthew Turpin, former director for China at the White House National Security Council and senior advisor for China to the Secretary of Commerce during the first Trump administration, serves as senior advisor. ==== Board of directors ==== As of December 2024, the board of directors of Palantir includes: Alex Karp, CEO of Palantir Alexander Moore, co-founder and former CEO of NodePrime Alexandra Schiff, former reporter for The Wall Street Journal Stephen Cohen, co-founder and president of Palantir Peter Thiel, co-founder of PayPal, Palantir and Founders Fund Lauren Friedman Stat, former Fractional Chief Administration Officer at Friendly Force Eric Woersching, former general partner at Initialized Capital === Finances === For the fiscal year 2023, Palantir reported earnings of US$210 million, with an annual revenue of US$2.2 billion, an increase of 16.8% over the previous fiscal cycle. == See also == Government by algorithm == References == == External links == Official website Business data for Palantir Technologies Inc.:
https://en.wikipedia.org/wiki/Palantir_Technologies
DXC Technology Company is an American multinational information technology (IT) services and consulting company headquartered in Ashburn, Virginia. == History == DXC Technology was founded on April 3, 2017, through a merger between Hewlett Packard Enterprise’s Enterprise Services business unit and Computer Sciences Corporation. The company provided business-to-business IT services. It began trading on the New York Stock Exchange under the symbol DXC. At the time of its creation, DXC Technology had revenues of $25 billion, with 6,000 enterprise and public sector clients across 70 countries, managed by around 170,000 staff. In July 2017, the company started a three-year plan to reduce the number of offices in India from 50 to 26 and to reduce headcount by 5.9% (around 10,000 employees). In 2018, DXC split off its US public sector segment to create a new company, Perspecta Inc. In June 2019, the company restructured its workforce in India, where it had about 43,000 employees and one of its largest delivery engines for application outsourcing and software development, to meet its new revenue profile. Mike Salvino, a former Accenture group chief executive, was named president and CEO of DXC Technology in September 2019. In February 2021, French technology services and consulting firm Atos ended talks for a potential acquisition of DXC. Atos had proposed an acquisition valued at US$10 billion including debt. As of November 2021, DXC had around 130,000 employees in over 70 countries in its global innovation and delivery centres; the largest among them is India, followed by the Philippines, Central Europe, and Vietnam. In May 2022, Salvino was appointed chairman of DXC's board, taking over from Ian Read following his retirement in July 2022. In October 2023, DXC was removed from the S&P 500 Index and moved to the S&P SmallCap 600 Index. In December 2023, it was announced that Salvino would no longer be CEO of DXC Technology.
Raul Fernandez, who was on the board of directors, was appointed president and chief executive officer of DXC Technology on 1 February 2024. As of November 2024, DXC employs over 125,000 people in over 70 countries, of whom over 43,000 are employed at 12 sites across 7 major cities in India. === Acquisitions === In July 2017, DXC purchased enterprise software company Tribridge and its affiliate company Concerto Cloud Services for $152 million. In 2018, it announced additional acquisitions, including Molina Medicaid Solutions (previously part of Molina Healthcare), Argodesign and two ServiceNow partners, BusinessNow and TESM. In January 2019, DXC Technology acquired Luxoft. The deal closed in June 2019. == Programs and sponsorships == === Dandelion Program === Piloted in Adelaide, South Australia, in 2014, the DXC Dandelion Program has grown to over 100 employees in Australia, working with more than 240 organizations in 71 countries to secure sustainable employment for individuals with autism. In June 2021, DXC piloted the Dandelion Program in the UK. === Sports === The company sponsored Team Penske with 2016 Series Champion and 2019 Indianapolis 500 winner Simon Pagenaud, and in 2018 became title sponsor of the IndyCar Series race DXC Technology 600. DXC is also a partner of the Australian Rugby Union team Brumbies. In 2022, the company became the new sleeve sponsor for English football club Manchester United. In May 2023, the company signed a multi-year partnership with Scuderia Ferrari starting from the 2023 Miami Grand Prix onwards. == See also == List of IT consulting firms == References == == External links == Official website Business data for DXC Technology:
https://en.wikipedia.org/wiki/DXC_Technology
Maharashtra Knowledge Corporation Limited (MKCL) is a public limited company promoted by the Department of Higher and Technical Education, Government of Maharashtra, India, and incorporated under the Companies Act. On 5 January 2018 the Department of Higher and Technical Education (H & TE), Government of Maharashtra (GOM) issued a Government Resolution under which the General Administration Department (GAD) replaced the H & TE Department as the Representative Department of GOM for matters concerning MKCL. == Operations == The company has its registered office and operations and development center at ICC Pune. It has offices across India and abroad through subsidiaries such as MKCL Arabia. The present Managing Director (MD) of MKCL is Mr. Sameer Pande. Over 5,000 Authorized Learning Centers are registered in Maharashtra. MKCL's endeavour in the field of IT education is marked by courses like MS-CIT (Maharashtra State Certificate in Information Technology), MS-ACIT and many other vocational courses affiliated to YCMOU under the brand KLiC (Knowledge Lit Careers), as well as MKCL ERA (eLearning Revolution for All). === In India === MKCL has established and is establishing joint venture companies with various state governments by investing MKCL’s funds towards 30% of the initial equity. The Odisha Knowledge Corporation Limited (OKCL) and the Haryana Knowledge Corporation Limited (HKCL) are such collaborative endeavours. === Abroad === MKCL has also created joint ventures abroad through its subsidiary, MKCL International FZE, Sharjah, UAE. MKCL Arabia Ltd. (in Saudi Arabia, along with its branch in Egypt) and MKCL Lanka Ltd. (in Sri Lanka) are the existing joint ventures. == Courses == MS-CIT MS-CIT (Maharashtra State Certificate in Information Technology) is a widely recognized IT literacy course launched by the Maharashtra Knowledge Corporation Limited (MKCL) in 2001.
Designed to enhance digital skills, the program has enrolled over 1.5 crore learners, offering eLearning modules, hands-on practice, and certified guidance through a vast network of over 5,000 Authorized Learning Centers across Maharashtra. The course is tailored for diverse groups, including students, housewives, and senior citizens, with a special focus on bridging the digital divide in rural areas. It has also empowered visually impaired individuals to pursue technology-driven careers. MS-CIT remains a cornerstone of digital literacy in Maharashtra, fostering inclusivity and preparing learners for the digital age. KLiC Courses KLiC (Knowledge Lit Careers), by Maharashtra Knowledge Corporation Limited (MKCL), offers career-oriented certificate courses to enhance employability skills. With over 32 courses across sectors like Programming, Digital Arts, and Management, it equips learners for service sector jobs. Courses range from 60 to 120 hours, with certifications from Yashwantrao Chavan Maharashtra Open University (YCMOU) for longer programs. Learners can access training online or through Authorized Learning Centers, with hands-on experience and placement assistance. KLiC bridges academic learning and professional careers, empowering individuals to "Work Locally, Earn Globally." == References == == External links == Official website
https://en.wikipedia.org/wiki/Maharashtra_Knowledge_Corporation
The Indian Institutes of Technology (IIT) are a network of engineering and technology institutions in India. Established in 1950, they are under the purview of the Ministry of Education of the Indian Government and are governed by the Institutes of Technology Act, 1961. The Act refers to them as Institutes of National Importance and lays down their powers, duties, and framework for governance as the country's premier institutions in the field of technology. Twenty-three IITs currently fall under the ambit of this act. Each IIT operates autonomously and is linked to the others through a common council called the IIT Council, which oversees their administration. The Minister of Education of India is the ex officio chairperson of the IIT Council. According to data obtained through Right to Information (RTI) applications, approximately 38% of Indian Institute of Technology (IIT) graduates from the class of 2024 have not secured job placements. This is the highest percentage in the past three years, with a steady increase from 19% in 2021 and 21% in 2022. == List of all Indian Institutes of Technology == == History == In the late 1940s, a 22-member committee, headed by Nalini Ranjan Sarkar, recommended the establishment of these institutions in various parts of India, along the lines of the Massachusetts Institute of Technology (MIT), with affiliated secondary institutions. The first Indian Institute of Technology was founded in May 1950 at the site of the Hijli Detention Camp in Kharagpur, West Bengal. The name "Indian Institute of Technology" was adopted before the formal inauguration of the institute on 18 August 1951 by Maulana Abul Kalam Azad. On 15 September 1956, the Parliament of India passed the Indian Institute of Technology (Kharagpur) Act, declaring it an Institute of National Importance.
Jawaharlal Nehru, first Prime Minister of India, in the first convocation address of IIT Kharagpur in 1956, said: Here in the place of that Hijli Detention Camp stands the fine monument of India, representing India's urges, India's future in the making. This picture seems to me symbolic of the changes coming to India. On the recommendations of the Sarkar Committee, four campuses were established at Bombay (1958), Madras (1959), Kanpur (1959), and Delhi (1961). The campuses were deliberately scattered throughout India to prevent regional imbalance. The Indian Institutes of Technology Act was amended to reflect the addition of new IITs. In the tenth meeting of the IIT Council in 1972, it was also proposed to convert the then IT-BHU into an IIT, and a committee was appointed by the IIT Council for the purpose, but for political reasons the desired conversion could not be achieved at the time. IT-BHU had been taking admissions through the Indian Institute of Technology Joint Entrance Examination (IIT-JEE) for undergraduate courses and the Graduate Aptitude Test in Engineering (GATE) for postgraduate courses since 1972. Finally, in 2012 the Institute of Technology, Banaras Hindu University was made a member of the IITs and renamed IIT (BHU) Varanasi. Student agitations in the state of Assam made Prime Minister Rajiv Gandhi promise the creation of a new IIT in Assam. This led to the establishment of a sixth institution at Guwahati under the Assam Accord in 1994. In 2001, the University of Roorkee was converted into IIT Roorkee. Over the past few years, there have been several developments toward establishing new IITs. On 1 October 2003, Prime Minister Atal Bihari Vajpayee announced plans to create more IITs "by upgrading existing academic institutions that have the necessary promise and potential".
Subsequent developments led to the formation of the S K Joshi Committee, in November 2003, to guide the selection of the five institutions which would be converted into IITs. Based on the initial recommendations of the Sarkar Committee, it was decided that new IITs should be spread throughout the country. When the government expressed its willingness to correct this regional imbalance, 16 states demanded IITs. Since the S K Joshi Committee prescribed strict guidelines for institutions aspiring to be IITs, only seven colleges were selected for final consideration. Plans to open IITs outside India have also been reported, although there has not been much progress in this regard. Eventually, under the 11th Five-Year Plan, eight states were identified for the establishment of new IITs. From 2008 to 2009, eight new IITs were set up in Gandhinagar, Jodhpur, Hyderabad, Indore, Patna, Bhubaneswar, Ropar, and Mandi. Between 2015 and 2016, six new IITs in Tirupati, Palakkad, Dharwad, Bhilai, Goa, and Jammu, approved through a 2016 bill amendment, were founded, along with the conversion of the Indian School of Mines Dhanbad into IIT Dhanbad. The entire allocation by the central government in the 2017–18 budget for all Indian Institutes of Technology (IITs) was slightly over ₹70 billion (US$830 million). However, the aggregate money spent by Indian students on tertiary education in the United States was about six times more than what the central government spends on all IITs. In June 2023, education officials of India and Tanzania announced that the first foreign IIT campus would be established on the Tanzanian autonomous territory of Zanzibar, as a satellite campus of IIT Madras. The campus was scheduled to begin offering classes in October 2023. == Organisational Structure == The President of India is the ex officio Visitor, and has residual powers.
Directly under the President is the IIT Council, comprising the minister-in-charge of technical education in the Union Government, the Chairmen of all IITs, the Directors of all IITs, the Chairman of the University Grants Commission, the Director General of CSIR, the Chairman of IISc, the Director of IISc, three members of Parliament, the Joint Council Secretary of the Ministry of Education, and three appointees each of the Union Government, AICTE, and the Visitor. Under the IIT Council is the Board of Governors of each IIT. Under the Board of Governors is the Director, who is the chief academic and executive officer of the IIT. Under the Director, in the organisational structure, comes the Deputy Director. Under the Director and the Deputy Director come the Deans, Heads of Departments, Registrar, President of the Students' Council, and Chairman of the Hall Management Committee. The Registrar is the chief administrative officer of the IIT and oversees day-to-day operations. Below the Heads of Department (HOD) are the faculty members (Professors, Associate Professors, and Assistant Professors). The Wardens come under the Chairman of the Hall Management Committee. === The Institutes of Technology Act === The Institutes of Technology Act (parliamentary legislation) gives legal status, including degree-granting powers, to the Indian Institutes of Technology (IITs). It was notified in the gazette as Act Number 59 of 1961 on 20 December 1961 and came into effect on 1 April 1962. The Act also declares these institutes Institutes of National Importance. == Academics == The IITs receive comparatively higher grants than other engineering colleges in India. While the total government funding to most other engineering colleges is around ₹ 100–200 million ($2–4 million) per year, the amount varies between ₹ 900–1300 million ($19–27 million) per year for each IIT. Other sources of funds include student fees, research funding from industry, and contributions from alumni.
The faculty-to-student ratio in the IITs is between 1:6 and 1:8. The Standing Committee of the IIT Council (SCIC) prescribes the lower limit for the faculty-to-student ratio as 1:9, applied department-wise. The IITs subsidize undergraduate student fees by approximately 80% and provide scholarships to all Master of Technology students and Research Scholars (PhD) to encourage students to pursue higher studies, per the recommendations of the Thacker Committee (1959–1961). The cost borne by undergraduate students is around ₹180,000 per year. Students from the OBC, ST and SC categories, female students, as well as physically challenged students, are also entitled to scholarships. The various IITs function autonomously, and their special status as Institutes of National Importance facilitates the smooth running of the IITs, virtually free from both regional and student politics. Such autonomy means that IITs can create their own curricula and adapt rapidly to changes in educational requirements, free from bureaucratic hurdles. The government has no direct control over internal policy decisions of IITs (like faculty recruitment and curricula) but has representation on the IIT Council. The medium of instruction in all IITs is English. The electronic libraries allow students to access online journals and periodicals. The IITs and IISc Bengaluru have taken an initiative, along with the Ministry of Education, to provide free online videos of actual lectures of different disciplines under the National Programme on Technology Enhanced Learning. This initiative was undertaken to make quality education accessible to all students. The academic policies of each IIT are decided by its Senate. This comprises all professors of the IIT and student representatives. Unlike many Western universities that have an elected senate, the IITs have an academic senate. It controls and approves the curriculum, courses, examinations and results, and appoints committees to look into specific academic matters.
The teaching, training and research activities of the institute are periodically reviewed by the senate to maintain educational standards. The Director of an IIT is the ex-officio Chairman of the Senate. All the IITs follow the credits system of performance evaluation, with proportional weighting of courses based on their importance. The total marks (usually out of 100) form the basis of grades, with a grade value (out of 10) assigned to a range of marks. Sometimes, relative grading is done considering the overall performance of the whole class. For each semester, the students are graded on a scale of 0 to 10 based on their performance, by taking an average of the grade points from all the courses, weighted by their respective credit points. Each semester evaluation is done independently, and then the weighted average over all semesters is used to calculate the Cumulative Grade Point Average (known as CGPA or CPI—Cumulative Performance Index). === Undergraduate education degrees === The Bachelor of Technology (BTech) degree is the most common undergraduate degree in the IITs in terms of student enrollment, although the Bachelor of Science (BS) degree and dual degrees integrating a Master of Science or Master of Arts are also offered. The BTech course is based on a 4-year program with eight semesters, while the Dual Degree and Integrated courses are 5-year programs with ten semesters. In all IITs, the first year of BTech and Dual Degree courses is marked by a common course structure for all students, though in some IITs a single department introduction-related course is also included. The common courses include the basics from most of the departments, like Computers, Electronics, Mechanics, Chemistry, Electrical and Physics.
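As a rough illustration of the credit-weighted grading scheme described above, the semester GPA (SGPA) and cumulative GPA (CGPA) calculation can be sketched as follows. The grade points and credit values here are assumed, purely for demonstration; actual scales and rounding rules vary by institute.

```python
def gpa(courses):
    """Credit-weighted average of (grade_point, credits) pairs on the 0-10 scale."""
    total_credits = sum(credits for _, credits in courses)
    return sum(grade * credits for grade, credits in courses) / total_credits

# Assumed example data: (grade point out of 10, course credits)
semester1 = [(9, 4), (8, 3), (10, 2)]
semester2 = [(7, 4), (9, 3)]

sgpa1 = gpa(semester1)                 # SGPA for semester 1
sgpa2 = gpa(semester2)                 # SGPA for semester 2
cgpa = gpa(semester1 + semester2)      # CGPA: weighted over all semesters' credits

print(round(sgpa1, 2), round(sgpa2, 2), round(cgpa, 2))
```

Note that the CGPA is the credit-weighted average over all courses taken so far, which is equivalent to averaging the per-semester SGPAs weighted by each semester's total credits.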
At the end of the first year (the end of the first semester at IIT Madras, IIT Hyderabad, IIT Bhilai, IIT Palakkad, and IIT Roorkee), an option to change departments is given to meritorious students based on their performance in the first two semesters. Few such changes ultimately take place as the criteria for them are usually strict, limited to the most meritorious students. From the second year onward, the students study subjects exclusively from their respective departments. In addition to these, the students have to take compulsory advanced courses from other departments to broaden their education. Separate compulsory courses from humanities and social sciences departments, and sometimes management courses are also enforced. In the last year of their studies, most of the students are placed into industries and organisations via the placement process of the respective IIT, though some students opt out of this either when going for higher studies or when they take up jobs by applying to the companies directly. === Postgraduate education === ==== Master's degrees and postgraduate diplomas ==== The IITs offer several postgraduate programs including Master of Technology (MTech), Master of Business Administration (MBA), and Master of Science (MSc). Some IITs offer specialised graduate programmes such as Master of Design (M.Des.), the Post Graduate Diploma in Information Technology (PGDIT), Masters in Medical Science and Technology (MMST), Masters in City Planning (MCP), Master of Arts (MA), Postgraduate Diploma in intellectual property Law (PGDIPL), and the Postgraduate Diploma in Maritime Operation & Management (PGDMOM). Some of the IITs offer an M.S. (by research) program; the MTech and M.S. are similar to the US universities' non-thesis (course-based) and thesis (research-based) masters programs respectively. 
Admissions to master's programs in engineering are made using scores of the Graduate Aptitude Test in Engineering (GATE), while those to master's programs in science are made using scores of the Joint Admission Test for M.Sc. (JAM). Several IITs have schools of management offering master's degrees in management or business administration. In April 2015, IIT Bombay launched the first U.S.-India joint EMBA program alongside Washington University in St. Louis. ==== Bachelors-Masters dual degrees ==== The IITs also offer an unconventional integrated BTech and MTech educational program called the "Dual Degree". It integrates undergraduate and postgraduate studies in selected areas of specialisation. It is completed in five years, as against the six years of a conventional BTech (four years) followed by an MTech (two years). Integrated Master of Science programs are also offered at a few IITs, integrating undergraduate and postgraduate studies in science streams into a single degree program, as against the conventional university system. These programs were started to allow graduates to complete postgraduate studies at an IIT rather than having to go to another institute. === Doctoral === The IITs also offer the Doctor of Philosophy degree (PhD) as part of their doctoral education programme. In it, the candidates are given a topic of academic interest by the institute or have to work on a consultancy project given by industry. The duration of the program is usually unspecified and depends on the specific discipline. PhD candidates have to submit a dissertation as well as provide an oral defence of their thesis. Teaching Assistantships (TA) and Research Assistantships (RA) are often provided. The IITs, along with NITs and IISc, account for nearly 80% of all engineering PhDs in India. IITs now allow admission to PhD programs without the mandatory GATE score.
== Culture and student life == All the IITs provide on-campus residential facilities to students, research scholars and faculty. The students live in hostels (sometimes referred to as halls) throughout their stay in the IIT. Students in all IITs must choose among the National Cadet Corps (NCC), National Service Scheme (NSS) and National Sports Organisation (NSO) in their first year. All the IITs have sports grounds for basketball, cricket, football (soccer), hockey, volleyball, lawn tennis, badminton and athletics, and swimming pools for aquatic events. Usually, the hostels also have their own sports grounds. Moreover, an Inter IIT Sports Meet is organised annually, where participants from all 23 IITs contest for the General Championship Trophy in 13 different sports; along with the Inter IIT Cultural Meet and Tech Meet, these generally take place on various dates in December every year. === Technical and cultural festivals === All IITs organize annual technical festivals, typically lasting three or four days. The technical festivals are Shaastra (IIT Madras), Advitiya (IIT Ropar), Kshitij (IIT Kharagpur), Techfest (IIT Bombay), Technex (IIT-BHU Varanasi), Cognizance (IIT Roorkee), Concetto (IIT-ISM Dhanbad), Tirutsava (IIT Tirupati), Nvision (IIT Hyderabad), Meraz (IIT Bhilai), Amalthea (IIT Gandhinagar), Techkriti (IIT Kanpur), Tryst (IIT Delhi), Techniche (IIT Guwahati), Wissenaire (IIT Bhubaneswar), Technunctus (IIT Jammu), Xpecto (IIT Mandi), Fluxus (IIT Indore), Celesta (IIT Patna), IGNUS (IIT Jodhpur) and Petrichor (IIT Palakkad). Most of them are organized in January or March. Techfest (IIT Bombay) is also one of the most popular and largest technical festivals in Asia in terms of participants and prize money involved. It has been granted patronage from the United Nations Educational, Scientific and Cultural Organisation (UNESCO) for providing a platform for students to showcase their talent in science and technology.
Shaastra holds the distinction of being the first student-managed event in the world to implement a formal Quality Management System, earning ISO 9001:2000 certification. Kshitij, which is branded as a techno-management festival due to its emphasis on both technology and management, is the largest of these festivals by sponsorship money. Annual cultural festivals are also organized by the IITs and last three to four days. These include Thomso (IIT Roorkee), Kashiyatra (IIT BHU Varanasi), Alcheringa (IIT Guwahati), Exodia (IIT Mandi), Saarang and Paradox (annual fests of the IIT Madras BTech and BS degrees respectively), Spring Fest (IIT Kharagpur, also known as SF), Rendezvous (IIT Delhi), Meraz (IIT Bhilai), Tirutsava (IIT Tirupati), Srijan (earlier known as Saturnalia, IIT Dhanbad), Tarang (culfest, previously known as Rave), Anwesha (IIT Patna), SPANDAN (IIT Jodhpur), Renao (IIT Jammu), Petrichor (IIT Palakkad), Blithchron (IIT Gandhinagar), ELAN (IIT Hyderabad), Alma Fiesta (IIT Bhubaneswar), Mood Indigo (IIT Bombay, also known as Mood-I), Antaragni (IIT Kanpur) and Zeitgeist (IIT Ropar). == Academic rankings == IITs have generally ranked above all other engineering colleges in India. According to Outlook India's Top Engineering Colleges of 2017, the top four engineering colleges in India were IITs. In the 2019 QS World University Rankings, IIT Bombay ranked highest at 162, followed by IIT Delhi (172), IIT Madras (264), IIT Kanpur (283), IIT Kharagpur (295), IIT Roorkee (381) and IIT Guwahati (472). In the 2022 NIRF rankings published by the Ministry of Education, India, IIT Madras was ranked 1st for the seventh consecutive year in the Engineering category and for the fourth consecutive year in the Overall category. == Reservation Policy and Discrimination == IITs practice affirmative action and offer reservation to the "backward and weaker sections" of society, including SC/ST/OBC-NCL/EWS/PWD/Girl candidates.
About 50% of seats are reserved for candidates holding backward-caste certificates, and a further 10% of seats are reserved for candidates from the general (unreserved) category who fulfill the economically weaker section criteria. Furthermore, students from reserved categories pay significantly lower fees compared to students from the unreserved category. Despite the implementation of reservation policies, the provision of economic assistance, and the enforcement of the Scheduled Caste and Scheduled Tribe (Prevention of Atrocities) Act, 1989, IITs have faced allegations of caste-based discrimination. Instances of suicides among students from reserved categories are often cited to illustrate this ongoing issue. However, suicide rates appear to be consistent among students from both reserved and non-reserved categories. == Criticism == The IITs have faced criticism from within and outside academia. Major concerns include allegations that they encourage brain drain and that their stringent entrance examinations encourage coaching colleges and place heavy pressure on students. Recently, some prominent IITians have also questioned the quality of teaching and research in the IITs. With the tripling of the number of IITs in recent decades, the newly created institutes have struggled to establish themselves compared to their peers. A 2021 report by the Comptroller and Auditor General of India criticized the newer IITs for not meeting targets for research, faculty and student recruitment, and student retention, as well as for being beset with infrastructure delays. In the recent past, the number of student suicides has attracted significant attention. === Brain drain === Among the criticisms of the IIT system by the media and academia, a common notion is that it encourages brain drain. Until liberalisation started in the early 1990s, India experienced large-scale emigration of IIT graduates to developed countries, especially to the United States.
Since 1953, nearly twenty-five thousand IIT graduates have settled in the US. Because the US has benefited from education subsidised by Indian taxpayers' money, critics argue that subsidising education in the IITs serves little national purpose. Others support the emigration of graduates, arguing that the capital sent home by IIT graduates has been a major source of the expansion of foreign exchange reserves for India, which, until the 1990s, had a substantial trade deficit. A 2023 study by the National Bureau of Economic Research found that among the top 1,000 JEE scorers, 36% migrated abroad, while for the top 100 scorers, the rate was 62%, primarily to the U.S. and for graduate school. This trend has been somewhat reversed (dubbed the reverse brain drain) as hundreds of IIT graduates who pursued further studies in the US began returning to India in the 1990s. The extent of intellectual loss receded substantially over the 1990s and 2000s, with the percentage of students going abroad dropping from as high as 70% at one time to around 30% in 2005. This is largely attributed to the liberalization of the Indian economy and the opening of previously closed markets. Government initiatives are encouraging IIT students into entrepreneurship programs and are increasing foreign investment. Emerging scientific and manufacturing industries, and the outsourcing of technical jobs from North America and Western Europe, have created opportunities for aspiring graduates in India. Additionally, IIT alumni are giving back generously to their parent institutions. === Entrance competition === The highly competitive examination in the form of the JEE-Advanced has led to the establishment of a large number of coaching institutes throughout the country that provide intensive, specific preparation for the JEE-Advanced for substantial fees. It is argued that this favours students from specific regions and richer backgrounds.
Some coaching institutes say that they have individually coached nearly 800 successful candidates year after year. According to some estimates, nearly 95% of all students who clear the JEE-Advanced had joined coaching classes. Indeed, this was the case regarding preparation for IIT entrance exams even decades ago. In a January 2010 lecture at the Indian Institute of Science, the 2009 Nobel laureate in Chemistry, Venkatraman Ramakrishnan, revealed that he failed to get a seat at any of the Indian engineering and medical colleges. He also said that his parents, being old-fashioned, did not believe in coaching classes to prepare for the IIT entrance exam and considered them to be "nonsense". In a documentary aired by CBS, Vinod Khosla, co-founder of Sun Microsystems, states, "The IITs probably are the hardest schools in the world to get into, to the best of my knowledge". The documentary further concludes, "Put Harvard, MIT, and Princeton together, and you begin to get an idea of the status of IIT in India", to depict the competition for, as well as the demand for, these elite institutes. Not all children are of a similar aptitude level, and they may be skilled in different paradigms and fields. This has led to criticism of the way the examinations are conducted and of the pressure placed on students in Indian society. The IIT-JEE (now JEE-Advanced) format was restructured in 2006 following these complaints. After the change to the objective pattern of questioning, even students who had initially considered themselves unfit for the subjective pattern of the IIT-JEE decided to take the examination. Though the restructuring was meant to reduce the dependence of students on coaching classes, it led to an increase in students registering for coaching classes. Some people (mostly IIT graduates) have criticized the changed pattern of the JEE-Advanced.
They reason that while the JEE-Advanced has traditionally been used to test students' understanding of fundamentals and their ability to apply them to solve tough unseen problems, the current pattern does not stress the application part as much and might lead to a reduced quality of students. The JEE-Advanced is conducted only in English and Hindi, making it harder for students with regional languages as their main language. In September 2011, the Gujarat High Court acted on a Public Interest Litigation by the Gujarati Sahitya Parishad for conducting the exams in Gujarati. A second petition was made in October by Navsari's Sayaji Vaibhav Sarvajanik Pustakalaya Trust. Another petition was made at the Madras High Court for conducting the exam in Tamil. In the petition, it was claimed that not conducting the exam in the regional languages violates Article 14 of the Constitution of India. The IIT Council recommended major changes to the entrance examination structure, effective from 2017 onwards. == See also == Indian Institutes of Management (IIMs) Indian Institutes of Information Technology (IIITs) National Institutes of Technology (NITs) National Institute of Design (NID) Government Funded Technical Institutes (GFTIs) Institutes of National Importance (INIs) == References == == Further reading == == External links == Official website IIT Council The Institutes of Technology Act, 1961 (PDF)
https://en.wikipedia.org/wiki/Indian_Institutes_of_Technology
Information technology (IT) is a set of related fields within information and communications technology (ICT), that encompass computer systems, software, programming languages, data and information processing, and storage. Information technology is an application of computer science and computer engineering. The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several products or services within an economy are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, and e-commerce. An information technology system (IT system) is generally an information system, a communications system, or, more specifically speaking, a computer system — including all hardware, software, and peripheral equipment — operated by a limited group of IT users, and an IT project usually refers to the commissioning and implementation of an IT system. IT systems play a vital role in facilitating efficient data management, enhancing communication networks, and supporting organizational processes across various industries. Successful IT projects require meticulous planning and ongoing maintenance to ensure optimal functionality and alignment with organizational objectives. Although humans have been storing, retrieving, manipulating, analysing and communicating information since the earliest writing systems were developed, the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)." 
Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs. == History == Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940), and electronic (1940 to present). Ideas of computer science were first discussed before the 1950s at the Massachusetts Institute of Technology (MIT) and Harvard University, where researchers debated and began thinking about computer circuits and numerical calculations. As time went on, the field of information technology and computer science became more complex and able to handle the processing of more data. Scholarly articles began to be published by different organizations. In the early era of computing, Alan Turing, J. Presper Eckert, and John Mauchly were considered some of the major pioneers of computer technology; most of their efforts were focused on designing the first digital computer. At the same time, topics such as artificial intelligence began to be raised, as Turing questioned what the technology of the period could do. Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick. The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered the earliest known mechanical analog computer, and the earliest known geared mechanism. Comparable geared devices did not emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed. Electronic computers, using either relays or valves, began to appear in the early 1940s.
The electromechanical Zuse Z3, completed in 1941, was the world's first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. During the Second World War, Colossus, the first electronic digital computer, was developed to decrypt German messages. Although it was programmable, it was not general-purpose, being designed to perform only a single task. It also lacked the ability to store its program in memory; programming was carried out using plugs and switches to alter the internal wiring. The first recognizably modern electronic digital stored-program computer was the Manchester Baby, which ran its first program on 21 June 1948. The development of transistors in the late 1940s at Bell Laboratories allowed a new generation of computers to be designed with greatly reduced power consumption. The first commercially available stored-program computer, the Ferranti Mark I, contained 4050 valves and had a power consumption of 25 kilowatts. By comparison, the first transistorized computer, developed at the University of Manchester and operational by November 1953, consumed only 150 watts in its final version. Several other breakthroughs in semiconductor technology include the integrated circuit (IC) invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor in 1959, silicon dioxide surface passivation by Carl Frosch and Lincoln Derick in 1955, the first planar silicon dioxide transistors by Frosch and Derick in 1957, the MOSFET demonstration by a Bell Labs team, the planar process by Jean Hoerni in 1959, and the microprocessor invented by Ted Hoff, Federico Faggin, Masatoshi Shima, and Stanley Mazor at Intel in 1971. These important inventions led to the development of the personal computer (PC) in the 1970s, and the emergence of information and communications technology (ICT).
By 1984, according to the National Westminster Bank Quarterly Review, the term information technology had been redefined as "the convergence of telecommunications and computing technology (...generally known in Britain as information technology)." The term appeared again in 1990 in documents of the International Organization for Standardization (ISO). Innovations in technology had already revolutionized the world by the twenty-first century as people gained access to different online services. This changed the workforce drastically: thirty percent of U.S. workers were already in careers in this field, and 136.9 million people were personally connected to the Internet, equivalent to 51 million households. Along with the Internet, new types of technology were being introduced across the globe, improving efficiency and making tasks easier. As technology revolutionized society, millions of processes could be completed in seconds. Innovations in communication were crucial as people increasingly relied on computers to communicate via telephone lines and cable networks. The introduction of email was considered revolutionary, as "companies in one part of the world could communicate by e-mail with suppliers and buyers in another part of the world...". Beyond personal use, computers and technology also revolutionized the marketing industry, resulting in more buyers for companies' products. In 2002, Americans spent more than $28 billion on goods over the Internet alone, while e-commerce a decade later resulted in $289 billion in sales. As computers become more sophisticated by the day, people have grown ever more reliant on them during the twenty-first century.
== Data processing == === Storage === Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete. Electronic data storage, which is used in modern computers, dates from World War II, when a form of delay-line memory was developed to remove the clutter from radar signals, the first practical application of which was the mercury delay line. The first random-access digital storage device was the Williams tube, which was based on a standard cathode ray tube. However, the information stored in it and in delay-line memory was volatile in that it had to be continuously refreshed, and thus was lost once power was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the world's first commercially available general-purpose electronic computer. IBM introduced the first hard disk drive in 1956, as a component of their 305 RAMAC computer system. Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs. Until 2002 most information was stored on analog devices, but that year digital storage capacity exceeded analog for the first time. As of 2007, almost 94% of the data stored worldwide was held digitally: 52% on hard disks, 28% on optical devices, and 11% on digital magnetic tape. It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every 3 years. ==== Databases ==== Database management systems (DMS) emerged in the 1960s to address the problem of storing and retrieving large amounts of data accurately and quickly. An early such system was IBM's Information Management System (IMS), which is still widely deployed more than 50 years later.
IMS stores data hierarchically, but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows, and columns. In 1981, the first commercially available relational database management system (RDBMS) was released by Oracle. All DMS consist of components that allow the data they store to be accessed simultaneously by many users while maintaining its integrity. All databases have one point in common: the structure of the data they contain is defined and stored separately from the data itself, in a database schema. In recent years, the extensible markup language (XML) has become a popular format for data representation. Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their "robust implementation verified by years of both theoretical and practical effort." As an evolution of the Standard Generalized Markup Language (SGML), XML's text-based structure offers the advantage of being both machine- and human-readable. === Transmission === Data transmission has three aspects: transmission, propagation, and reception. It can be broadly categorized as broadcasting, in which information is transmitted unidirectionally downstream, or telecommunications, with bidirectional upstream and downstream channels. XML has been increasingly employed as a means of data interchange since the early 2000s, particularly for machine-oriented interactions such as those involved in web-oriented protocols such as SOAP, describing "data-in-transit rather than... data-at-rest".
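The separation of schema from data described in the database discussion above can be sketched with Python's built-in sqlite3 module. This is a minimal illustration only; the table and column names are hypothetical and not drawn from any system mentioned in the article:

```python
import sqlite3

# The schema (table, columns, types) is declared separately from any rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# Data is inserted and queried independently of the schema definition.
conn.executemany("INSERT INTO employee (name, dept) VALUES (?, ?)",
                 [("Ada", "Engineering"), ("Grace", "Research")])

# The schema itself is stored in the database's catalog and can be read like data.
schema = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'employee'").fetchone()[0]
rows = conn.execute("SELECT name FROM employee ORDER BY name").fetchall()
print(schema)
print(rows)   # [('Ada',), ('Grace',)]
```

Note how the schema lives in the database's own catalog (sqlite_master), separate from the rows it describes, which is exactly the property the paragraph attributes to all databases.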
=== Manipulation === Hilbert and Lopez identify the exponential pace of technological change (a kind of Moore's law): machines' application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world's general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world's storage capacity per capita required roughly 40 months (about 3 years) to double; and per capita broadcast information has doubled every 12.3 years. Massive amounts of data are stored worldwide every day, but unless they can be analyzed and presented effectively they essentially reside in what have been called data tombs: "data archives that are seldom visited". To address that issue, the field of data mining — "the process of discovering interesting patterns and knowledge from large amounts of data" — emerged in the late 1980s. == Services == === Email === Email comprises the technology and services for sending and receiving electronic messages (called "letters" or "electronic letters") over a distributed (including global) computer network. In the composition of its elements and its principle of operation, electronic mail practically replicates the system of regular (paper) mail, borrowing both terms (mail, letter, envelope, attachment, box, delivery, and others) and characteristic features — ease of use, message transmission delays, sufficient reliability, and at the same time no guarantee of delivery. The advantages of e-mail include: addresses of the form user_name@domain_name (for example, somebody@example.com) that are easily perceived and remembered; the ability to transfer both plain and formatted text, as well as arbitrary files; independence of servers (in the general case, they address each other directly); sufficiently high reliability of message delivery; and ease of use by humans and programs.
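The user_name@domain_name address form mentioned above can be illustrated with a minimal Python sketch. This is a deliberate simplification (the function name is hypothetical, and real address validation per RFC 5322 is far more involved):

```python
def split_address(address: str) -> tuple[str, str]:
    """Split an address of the simple form user_name@domain_name.

    Only checks for exactly one '@' with non-empty parts on each side;
    this is nowhere near full RFC 5322 validation.
    """
    local, sep, domain = address.partition("@")
    if not sep or not local or not domain or "@" in domain:
        raise ValueError(f"not a simple user@domain address: {address!r}")
    return local, domain

print(split_address("somebody@example.com"))  # ('somebody', 'example.com')
```

The example address is the same placeholder used in the text; any real mail system would accept a much wider range of address syntax.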
The disadvantages of e-mail include: the presence of such a phenomenon as spam (massive advertising and viral mailings); the theoretical impossibility of guaranteed delivery of a particular letter; possible delays in message delivery (up to several days); and limits on the size of one message and on the total size of messages in the mailbox (personal for users). === Search system === A search system is a software and hardware complex with a web interface that provides the ability to search for information on the Internet. A search engine usually means a site that hosts the interface (front end) of the system. The software part is the search engine proper — a set of programs that provides the functionality of the system and is usually a trade secret of the developer company. Most search engines look for information on World Wide Web sites, but there are also systems that can look for files on FTP servers, items in online stores, and information on Usenet newsgroups. Improving search is one of the priorities of the modern Internet (see the Deep Web article about the main problems in the work of search engines). == Commercial effects == Companies in the information technology field are often discussed as a group as the "tech sector" or the "tech industry." These titles can be misleading at times and should not be mistaken for "tech companies," which are generally large-scale, for-profit corporations that sell consumer technology and software. It is also worth noting that, from a business perspective, information technology departments are a "cost center" the majority of the time. A cost center is a department or staff which incurs expenses, or "costs," within a company rather than generating profits or revenue streams.
Modern businesses rely heavily on technology for their day-to-day operations, so the expenses delegated to cover technology that facilitates business in a more efficient manner are usually seen as "just the cost of doing business." IT departments are allocated funds by senior leadership and must attempt to achieve the desired deliverables while staying within that budget. Government and the private sector might have different funding mechanisms, but the principles are more or less the same. This is an often overlooked reason for the rapid interest in automation and artificial intelligence, but the constant pressure to do more with less is opening the door for automation to take control of at least some minor operations in large companies. Many companies now have IT departments for managing the computers, networks, and other technical areas of their businesses. Companies have also sought to integrate IT with business outcomes and decision-making through a BizOps or business operations department. In a business context, the Information Technology Association of America has defined information technology as "the study, design, development, application, implementation, support, or management of computer-based information systems". The responsibilities of those working in the field include network administration, software development and installation, and the planning and management of an organization's technology life cycle, by which hardware and software are maintained, upgraded, and replaced. === Information services === Information services is a term somewhat loosely applied to a variety of IT-related services offered by commercial companies, as well as data brokers. 
=== Ethics === The field of information ethics was established by mathematician Norbert Wiener in the 1940s. Some of the ethical issues associated with the use of information technology include: breaches of copyright by those downloading files stored without the permission of the copyright holders; employers monitoring their employees' emails and other Internet usage; unsolicited emails; hackers accessing online databases; and websites installing cookies or spyware to monitor a user's online activities, which may be used by data brokers. == IT projects == Research suggests that IT projects in business and public administration can easily become significant in scale. Work conducted by McKinsey in collaboration with the University of Oxford suggested that half of all large-scale IT projects (those with initial cost estimates of $15 million or more) failed to maintain costs within their initial budgets or to complete on time. == See also == Information and communications technology (ICT) IT infrastructure Outline of information technology Knowledge society == Notes == == References == === Citations === === Bibliography === == Further reading == Allen, T. and Scott Morton, M. S., eds. (1994). Information Technology and the Corporation of the 1990s. Oxford University Press. Gitta, Cosmas and South, David (2011). Southern Innovator Magazine Issue 1: Mobile Phones and Information Technology. United Nations Office for South-South Cooperation. ISSN 2222-9280. Gleick, James (2011). The Information: A History, a Theory, a Flood. New York: Pantheon Books. Price, Wilson T. (1981). Introduction to Computer Data Processing. Holt-Saunders International Editions. ISBN 978-4-8337-0012-2. Shelly, Gary, Cashman, Thomas, Vermaat, Misty, and Walker, Tim (1999). Discovering Computers 2000: Concepts for a Connected World. Cambridge, Massachusetts: Course Technology. Webster, Frank, and Robins, Kevin (1986). Information Technology — A Luddite Analysis. Norwood, NJ: Ablex.
== External links == Learning materials related to Information technology at Wikiversity Media related to Information technology at Wikimedia Commons Quotations related to Information technology at Wikiquote
https://en.wikipedia.org/wiki/Information_technology
RCC Institute of Technology (RCC) was founded as the Radio College of Canada in 1928, making it one of the oldest private technology institutions in Canada. It is also the only private educational institute in Ontario to be approved by the Ministry of Training, Colleges and Universities to grant bachelor's degrees. In 2018, Yorkville University acquired RCC Institute of Technology. It was amalgamated with Yorkville to become Yorkville University/Ontario. == History == Radio College of Canada (RCC) was founded in 1928 by J. C. Wilson, who had previously amassed considerable radio experience in England and the United States. At the same time he established RCC Publications, which continues to supply technical data to service technicians in Canada. In 1930, as reported by The Globe newspaper, Rogers-Majestic Corporation and Radio College of Canada established a plan for registering radio servicemen of the entire Dominion. Examining and qualifying those who wished to become registered became RCC's role. In 1937 the college was acquired by R. Christopher Dobson. Shortly thereafter, additional and more advanced training programs were added, including courses in commercial radio operation. During this period the demand for radio operators increased sharply with the growth in aviation; consequently large classes of radio operators were trained for the Federal Department of Transport. In the 1940s Canada's contribution to the World War II effort required immediate and large-scale planning to ensure an adequate and continuing supply of well-trained technicians and operators. Training for Canada and allied governments was performed for essential services such as government departments, Merchant Marines, and, of course, the important manufacturing industry. Radio College established additional facilities and developed specialized training programs for the purpose. 
The college trained several classes of women radio operators for the air stations established across the country by the Commonwealth air training scheme. The students, who came from all parts of Canada, were selected by aptitude tests developed by the college. Radio College also furnished room, board, nursing and general supervision. After the war the college did extensive rehabilitation training for Canadian and United States veterans, and later for civilians under government auspices. Many Merchant Marine graduates of RCC have since requested from the college proof of their graduation and marine placement, thereby entitling them to the federal pension granted to World War II members of the Merchant Marine. When television started in the 1950s, the college trained factory and service personnel. The college developed a new concept in electronics education, electronic engineering technology, a high-level program designed to train "technologists" who would be equipped to assist professional engineers in matters of applied technology, thereby releasing the engineer for matters requiring more engineering expertise, a concept that exists today in most post-secondary technical institutes. RCC had a school located in Montreal on St. Denis street in the 1950s. In 1957 the Association of Professional Engineers of the Province of Ontario (APEO), now called Professional Engineers Ontario (PEO), appointed a Certification Board — a group of professional engineers — which included Robert Poulter, P.Eng., then president of Radio College. The board established standards for the certificates of qualified technologists and technicians, and also for the accreditation of schools offering advanced courses at the engineering technologist level. Radio College of Canada and Ryerson Polytechnical Institute (today Toronto Metropolitan University) were the first schools to be awarded full accreditation.
The certification and accreditation programs continue to be carried out under the authority of the Canadian Council of Technicians and Technologists (CCTT) and the Ontario Association of Certified Engineering Technicians and Technologists (OACETT) by the Canadian Technology Accreditation Board. During the late 1960s and early 1970s, with the advent of digital electronics, RCC developed the curriculum to service the new digital, computer and microprocessor-based occupations in data communications, facsimile, mobile phone, and computer technology. In the early 1990s Hartley Nichol, president since 1985, assumed full responsibility for the college, and RCC moved to its present facility, a campus on Steeles Avenue West in Vaughan, Ontario, north of Toronto. On its 70th anniversary in 1998 the Radio College of Canada changed its name to RCC College of Technology. On June 24, 2004, the Ministry of Training, Colleges and Universities in Ontario, allowed RCC to grant bachelor's degrees after a successful audit by the Post-Secondary Education Quality Assessment Board (PEQAB). In 2008, RCC Institute of Technology acquired the International Academy of Design and Technology, a well-known private college, founded in 1983 as the International Academy of Merchandising and Design. The acquisition expanded RCC's offerings, facilitating a convergence between design and technology education. The Academy of Design became part of the family of RCC Institute of Technology schools and offered programs in interior design, graphic design & interactive media, video game design & development, fashion design and fashion merchandising & marketing. In 2010, RCC Institute of Technology reopened the Toronto Film School, adding programs in film production, scriptwriting for film and TV and acting for television, film and the theatre to its offerings. 
In 2011, RCC Institute of Technology created a faculty for its electronics and technology program offerings, the School of Engineering Technology & Computing, responsible for the delivery of electronics technology programs under the RCC umbrella. In all, RCC Institute of Technology housed three different schools – the Academy of Design, the School of Engineering Technology & Computing and the Toronto Film School. In 2018, Yorkville University acquired RCC Institute of Technology, renaming it Yorkville University/Ontario. == See also == List of Ontario Universities Ontario Student Assistance Program == References == == External links == Official website RCC Alumni on Facebook RCC Institute of Technology on Facebook
https://en.wikipedia.org/wiki/RCC_Institute_of_Technology
Civic technology, or civic tech, is the idea of using technology to enhance the relationship between people and government, with software for communications, decision-making, service delivery, and political process. It includes information and communications technology supporting government with software built by community-led teams of volunteers, nonprofits, consultants, and private companies, as well as embedded tech teams working within government. == Definition == Civic technology refers to the use of technology to enhance the relationship between citizens and their government. There are four different types of e-government services, and civic technology falls within the category of government-to-citizen (G2C). The other categories include government-to-business (G2B), government-to-government (G2G), and government-to-employees (G2E). A 2013 report from the Knight Foundation, an American non-profit, attempts to map different focuses within the civic technology space. It broadly categorizes civic technology projects into two categories: open government and community action. Citizens are also now given access to their representatives through social media. They are able to express their concerns directly to government officials through sites like Twitter and Facebook. There have even been past cases of online voting being offered as a polling option for local elections, which have seen vastly increased turnouts, such as an Arizona election in 2000 which saw a turnout double that of the previous election. It has been asserted, though, that civic technology in government provides a good management technique but falls short of providing fair democratic representation. Social media is also becoming a growing aspect of government, furthering communication between the government and its citizenry and increasing transparency within governmental sectors.
This innovation is facilitating a change towards a more progressive and open government, based on civic engagement and technology for the people. With social media as a communicating platform, the government can provide information to constituents and citizens on the legislative processes and on what is occurring in Congress, addressing citizens' concerns with government procedures. The definition of what constitutes civic technology is contested to a certain extent, especially with regards to companies engaged in the sharing economy, such as Uber, Lyft, and Airbnb. For example, Airbnb's ability to provide New York residents with housing during the aftermath of Superstorm Sandy could be considered a form of civic technology. However, Nathaniel Heller, managing director of the Results for Development Institute's Governance Program, contends that for-profit platforms definitively fall outside of the scope of civic technology: Heller has said that "while citizen-to-citizen sharing is indeed involved, the mission of these companies is focused on maximizing profit for their investors, not any sort of experiment in building social capital." From a goal perspective, civic technology can be understood as "the use of technology for the public good". Microsoft's Technology & Civic Engagement Team has attempted to produce a precise taxonomy of civic technology through a bottom-up approach. It inventoried the existing initiatives and classified them according to: their functions; the social processes they involved; their users and customers; the degree of change they sought; and the depth of the technology. Microsoft's Civic Graph is guiding the developing network of civic innovators, expanding "its visualizations of funding, data usage, collaboration and even influence". It is a new tool that opens up access to track the world of civic technology, improving the credibility and progress of this sector.
This graph will enable more opportunities for access by governmental institutions and corporations to discover these innovators and use them for progressing society towards the future of technology and civic engagement. To create an informed and insightful community, there needs to be a sense of civic engagement in this community, where there is the sharing of information through civic technology platforms and applications. "Community engagement applied to public-interest technology requires that members of a community participate." With communal participation in civic tech platforms, this enables more informed residents to convene in a more engaged, unified community that seeks to share information, politically and socially, for the benefit of its citizenry and their concerns. This work resulted in the Civic Tech Field Guide, a free, crowdsourced collection of civic technology tools and projects. Individuals from over 100 countries have contributed to the documentation of technology, resources, funding and general information concerning "tech for social good". Technology that is designed to benefit the citizenry places the governments under pressure "to change and innovate the way in which their bureaucracies relate to citizens". E-government initiatives have been established and supported in order to strengthen the democratic values of governmental institutions, which can include transparency in government, along with improving the efficiency of the legislative processes to make the government more accountable and reactive to citizens' concerns. These will further civic engagement within the political spectrum for the sake of greater direct representation and a more democratic political system. Civic hacking refers to problem-solving by programmers, designers, data scientists, communicators, organizers, entrepreneurs, and government employees. 
A civic hacker may work autonomously and independently from the government but may still coordinate or collaborate with them. For example, in 2008, civic hacker William Entriken created an open-source web application that publicly displayed a comparison of the actual arrival times of Philadelphia’s local SEPTA trains to their scheduled times. It also automatically sends messages to SEPTA to recommend updates to the train schedule. SEPTA’s response indicated interest in coordinating with this civic hacker directly to improve the application. Some projects are led by nonprofits, such as Code for America and mySociety, often involving paid staff and contributions from volunteers. As the field of civic technology advances, it seems that apps and handheld devices will become a key focus for development as more companies and municipalities reach out to developers to help with specific issues. Apps are being used in conjunction with handheld devices to simplify tasks such as communication, data tracking, and safety. The most cost-effective way for citizens to get help and information is through neighbors and others around them. By linking people through apps and websites that foster conversation and promote civil service, cities have found an inexpensive way to provide services to their residents. Civic technology represents "just a piece of the $25.5 billion that government spends on external information technology (IT)," indicating that this sector will likely grow, fostering more innovation in both public and private sectors and furthering civic engagement within these platforms. === Worldwide === A worldwide organization that supports civic tech is the Open Government Partnership (OGP). It "is a multilateral initiative that aims to secure concrete commitments from governments to promote transparency, empower citizens, fight corruption, and harness new technologies to strengthen governance". 
Created in 2011 by eight founding governments (Brazil, Indonesia, Mexico, Norway, the Philippines, South Africa, the United Kingdom and the United States), the OGP gathers every year for a summit. Participating countries are located mainly in the Americas, Europe and the Asia-Pacific region (Indonesia, Australia, South Korea). Only a few African countries are part of the OGP, though South Africa is one of the founding members. Technological progress is widespread across the world's nations, but countries differ considerably in how quickly they adopt new technologies. How well a country uses information depends on how committed it is to integrating technology into the lives of its citizens and businesses. Local and national governments spend tens of billions of dollars on information technology to make it function better for both the public and government. As more governments come to grips with these technologies, they pave the way toward more progressive and democratic political systems that address the concerns of their citizens and of future society. == Africa == === Burkina Faso === ==== Government-led initiatives ==== The government of Burkina Faso has a website portal offering citizens online information about the government structure, the constitution, and laws. === Kenya === ==== Citizen-led initiatives ==== Launched in Kenya in 2014, "MajiVoice" is a joint initiative of the Water Services Regulatory Board (WASREB), the water sector regulator in Kenya, and the World Bank's Water and Sanitation Program. As opposed to walk-in complaint centers, the initiative enables Kenyan citizens to report complaints about water services via multiple technological channels.
The platform allows communication between citizens and water service providers, with the aim of improving service delivery in impoverished areas and user satisfaction. Users can report water complaints through several channels: dialing a number to report a complaint, sending a text message (SMS) from their cell phone, or logging in to an online portal through a web browser on a phone or laptop. One evaluation highlights the citizen engagement achieved after its implementation: complaints rose from 400 to 4,000 a month, and resolution rates improved from 46 percent to 94 percent. === South Africa === ==== Government-led initiatives ==== The South African government has a website portal for citizens, www.gov.za, created by the Center for Public Service Innovation (CPSI) in partnership with the Department of Public Service and Administration and the State Information Technology Agency. The portal allows citizens to interact with their government and provide feedback, request forms online, and access laws and contact information for lawmakers. GovChat is the official citizen engagement platform of the South African government. Accessible via WhatsApp, Facebook Messenger, SMS and USSD, it offers citizens information about a wide array of government services. ==== Citizen-led initiatives ==== Grassroot is a technology platform that helps community organizers mobilize citizens; built for low-bandwidth, low-data settings, it enables smart messaging via text message. Research by the MIT Governance Lab suggests that Grassroot can have important effects on the leadership capacity of community leaders, effects most likely to be achieved through careful design, behavioral incentives, active coaching and iteration.
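Platforms like MajiVoice and Grassroot accept citizen input over several low-bandwidth channels (voice, SMS, web). A minimal sketch of how such reports might be normalized into one record format; the field names and helper functions are hypothetical, not taken from either project:

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    channel: str   # "voice", "sms" or "web"
    account: str   # customer account number or phone number
    text: str      # free-text description of the problem

def from_sms(sender: str, body: str) -> Complaint:
    # SMS bodies arrive as free text; the sender's number identifies the account.
    return Complaint(channel="sms", account=sender, text=body.strip())

def from_web(form: dict) -> Complaint:
    # Web portal submissions arrive as structured form fields.
    return Complaint(channel="web", account=form["account"], text=form["description"])

reports = [
    from_sms("+254700000001", " No water on Moi Avenue since Monday "),
    from_web({"account": "AC-1042", "description": "Burst pipe near the market"}),
]
```

Once normalized, reports from every channel can flow into the same ticketing and resolution-tracking pipeline, which is what makes the resolution-rate statistics above comparable across channels.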
=== Uganda === ==== Government-led initiatives ==== The Ugandan government has a website portal for citizens called the Parliament Portal, which gives citizens online access to laws, the constitution, and election-related news. ==== Citizen-led initiatives ==== U-Report, a mobile platform introduced by UNICEF Uganda in 2011, runs large-scale polls with Ugandan youth on a wide range of issues, from safety and access to education to inflation and early marriage. The goal of the initiative was to have Ugandan youth play a role in civic engagement around local issues. U-Report is still active (as of April 2018), with over 240,000 users across Uganda. Support for the initiative came primarily from the government, NGOs, youth organizations, faith-based organizations, and private companies. Users sign up for free by sending a text on their phone, and every week "U-Reporters" answer a question regarding a public issue. Poll results are published in media outlets such as newspapers and radio. UNICEF provides members of parliament (MPs) with a weekly review of these results, acting as a bridge between the government and Ugandan youth. == Asia == === Taiwan === Taiwan is highly ranked internationally for its technological innovations, including open data, digital inclusivity, and widespread internet participation. As of 2019, approximately 87% of Taiwan's citizens over 12 years old had internet access. Widespread internet use has facilitated online political participation by giving citizens a platform to express their political opinions. Through the internet, Taiwanese citizens can directly contact political figures and publicly voice their political beliefs. New innovations continue to be made in Taiwan that foster greater political participation.
The online platform "Join," for example, was created in 2015 to give Taiwanese citizens a way to discuss, review, and propose governmental policy. Overall, the development of the internet and the emergence of new technologies in Taiwan have been shown to increase political participation among its citizens. ==== Government-led initiatives ==== Taiwan's Digital Minister Audrey Tang has made strides to increase communication and collaboration between the government and the general public. Networks of Participation Officers have been established in each ministry to jointly create new governmental policies with citizens, the public sector, and other government departments through collaborative meetings. Taiwan has taken a collaborative approach to civic technology as a way to encourage increased participation from the public. New governmental policies in Taiwan have helped foster technological advancement, such as the Financial Technology Development and Innovative Experimentation Act, passed in 2017, which created a regulatory sandbox platform to support the development of FinTech in Taiwan. The sandbox was created to support industry creativity by enabling entrepreneurs and companies to experiment freely with new technologies without legal constraints for a year. ==== Citizen-led initiatives ==== The g0v movement was created in 2012 with the goal of engaging more citizens in public affairs. It is a grassroots, decentralized civic tech community of coders, designers, NGO workers, civil servants and citizens, designed to increase the transparency of government information. All of g0v's projects are open-source and created by citizens. The g0v community has participated in a variety of social movements, including the Sunflower Student Movement, where it provided a crowdsourcing platform, and the Hong Kong Umbrella Movement, where it provided live broadcasting and a logistics system.
vTaiwan ("v" for virtual) was created initially by members of g0v and later developed in collaboration with the Taiwanese government. vTaiwan is a digital space where participants can discuss controversial topics. It uses a conversation tool called pol.is that leverages machine learning to scale online discussion. Civic technology in Taiwan was a key component of the country's successful response to the COVID-19 pandemic. Partnering with the Taiwanese government, the civic tech community used open data to create maps, available to citizens, that visualized the availability of masks, making the distribution of PPE more efficient. Big data analytics and QR code scanning were also used in Taiwan's response to the pandemic, enabling the government to send out real-time alerts during clinical visits and track citizens' travel history and health symptoms. Taiwan's response to the COVID-19 pandemic is representative of the country's shift towards a 'techno-democratic statecraft' and positioned it as a new international leader in digital infrastructure. Taiwan's early handling of the epidemic earned it international praise, as the country had significantly fewer COVID-19 cases than its neighbors. === Japan === In Japan, the civic tech movement has been growing rapidly since around 2013. Japan's civic tech initiatives have been primarily citizen-led, but more recently the country has taken on government-led initiatives as well. ==== Citizen-led initiatives ==== The purpose of these civic tech initiatives is to educate the population to use technology as a democratization tool and to access public information. Although the rapid growth of the civic tech movement in Japan started around 2013, the movement first emerged in 2011 after the earthquake, tsunami and nuclear meltdowns in the Tōhoku region.
After the Fukushima disaster, the citizen-led initiative Safecast, which allows citizens to collect and distribute radiation data, was created. The mission of the citizen-led initiative Code for All is to make data more accessible to the public and to encourage the use of technology for the democratization of governance. Code for Japan is one of several chapters started by Code for All. Although Code for Japan is a citizen-led initiative, it also works closely with the government: Naoki Ota, a policy advisor at the Japanese Ministry of Internal Affairs, is a promoter of Code for Japan's civic tech projects. In light of the COVID-19 pandemic, Code for Japan also developed stopcovid19.metro.tokyo.lg.jp for the Tokyo Metropolitan Government, a site that informs the public about the number of coronavirus cases and reductions in metropolitan subway usage. A different citizen-led project, led by JP-Mirai, is working to release an app that allows migrant workers to file complaints and address issues such as visas and taxation. The app currently remains unnamed. ==== Government-led initiatives ==== While civic technology initiatives in Japan had mostly been citizen-led, the onset of the coronavirus pandemic encouraged the Japanese government to pursue digitization, as formerly in-person practices moved into the digital space. The government plans to digitize its functions by implementing more sophisticated systems in central and local government to better secure private and personal information, and by shifting from the primary use of the hanko, a seal used in lieu of a signature on printed documents, to digital verification and documents in order to increase efficiency. The Tokyo Metropolitan Government has also made strides in light of the pandemic.
Through Creative Commons licensing, which permits flexible redistribution of content, and the open-source development platform GitHub, the Tokyo Metropolitan Government has allowed other collaborators to contribute to the data and code of the project created by Code for Japan. === Pakistan === Pakistan's civic tech landscape is evolving rapidly, driven by both citizen-led and government-led initiatives. Civic technology in Pakistan is being used to address various socio-economic challenges, enhance governance, and improve public service delivery. The country is experiencing a growing trend of tech-driven solutions aimed at fostering transparency, accountability, and citizen engagement. Key areas of focus include open data initiatives, digital platforms for citizen services, and tools for civic participation. ==== Citizen-led initiatives ==== Code for Pakistan (CfP), founded in 2013, is a civic technology non-profit focused on bridging the gap between government and citizens by harnessing technology for civic and social good. CfP is an executive committee member of Code for All. CfP collaborates with government bodies to develop digital solutions to civic problems, and it provides ways for people in Pakistan to become more civically engaged. Notable projects include Civic Innovation Fellowship Programs with the governments of Khyber Pakhtunkhwa and Gilgit-Baltistan to create human-centered technology solutions for public services, as well as various open data initiatives that promote transparency and public participation, including the Khyber Pakhtunkhwa Open Data Portal, created in partnership with the Khyber Pakhtunkhwa government, and Pakistan's first Open Data Playbook. CfP regularly organizes civic hackathons that address civic issues in Pakistan with the help of community members. Shehri Pakistan is dedicated to promoting urban planning and civic awareness around environmental issues.
It runs projects that focus on environmental and heritage conservation through public engagement and advocacy. ==== Government-led initiatives ==== The Pakistan Citizen's Portal (PCP) is a mobile application launched by the Government of Pakistan to facilitate citizen feedback and resolve public grievances. It features a grievance redressal system that allows citizens to lodge complaints regarding various government services, and a performance monitoring system to track how government officials address those complaints. Code for Pakistan assisted the government in developing the application. The Punjab Information Technology Board (PITB) is an autonomous body set up by the Government of Punjab to promote IT in governance. Its key projects include e-Rozgaar, which provides digital skills training to youth for freelance work, and the School Information System, which digitizes school records and improves education management. The Khyber Pakhtunkhwa Information Technology Board (KPITB) is dedicated to developing the IT sector in Khyber Pakhtunkhwa. Its major projects include Durshal, a network of co-working spaces and innovation labs across KP that supports tech entrepreneurs, and Citizen Facilitation Centers, which provide one-stop digital services to citizens. Pakistan's civic tech ecosystem is characterized by a collaborative approach among citizens, tech communities, and government bodies. Ongoing efforts in the sector aim to empower citizens, improve governance, and address critical societal issues through innovative technological solutions. === Nepal === Civic technology in Nepal is growing, and has so far been used for mapping, tools for migrant workers, digital literacy, and open data.
==== Citizen-led initiatives ==== Kathmandu Living Labs (KLL), founded in 2013, is a civic technology company based in Nepal that actively trains residents of Nepal and other Asian countries to map their communities with OpenStreetMap (OSM). During the 2015 earthquake in Nepal (magnitude 7.3), organizations responsible for aid relief and reconstruction used OSM to navigate the disaster. In 2016, a new migration tool called Shuvayatra (Safe Journey) was launched for the migrant workers of Nepal. The Asia Foundation worked with the Non-Resident Nepali Association (NRNA) and the software firm Young Innovations to develop the mobile app, which provides Nepali migrant workers with financial, education and training resources, as well as reliable employment services. The technology was developed in response to the often exploitative promises made to prospective migrant workers abroad. Initially, Code for Nepal, a non-profit organization that began in the United States, provided digital literacy workshops for women in Kathmandu. Since then, the organization has evolved to launch open data and civic tech products, and to organize conferences and scholarships for young men and women. Another civic tech non-profit, Open Knowledge Nepal, has also been working to make data open and accessible to Nepali residents. == Oceania == === Australia === ==== Citizen-led initiatives ==== In Australia, a platform and proposed political party called MiVote has a mobile app for citizens to learn about policy and cast their vote for the policies they support. MiVote politicians elected to office would then vote in support of the majority position of the people using the app. Snap Send Solve is a mobile app for citizens to report issues to local councils and other authorities quickly and easily. In 2020, 430,000 reports were sent via the app. A January 2021 report in Melbourne's Herald Sun noted an increased number of reports of dumped rubbish.
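Report-to-council apps such as Snap Send Solve must route each GPS-tagged report to the authority responsible for that location. A minimal sketch of such routing, using hypothetical rectangular areas in place of the real boundary data these services rely on:

```python
from typing import Optional

# Each authority is responsible for a geographic area. Real apps use full
# boundary polygons; simple bounding boxes keep this sketch short.
# Hypothetical areas: (min_lat, min_lon, max_lat, max_lon).
COUNCILS = {
    "Council A": (-37.85, 144.90, -37.77, 145.00),
    "Council B": (-37.90, 144.93, -37.86, 145.01),
}

def route_report(lat: float, lon: float) -> Optional[str]:
    """Return the council whose area contains the report's GPS fix."""
    for council, (lat0, lon0, lat1, lon1) in COUNCILS.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return council
    return None  # outside all known council areas

# A report of dumped rubbish at a given GPS fix is routed automatically.
assert route_report(-37.81, 144.96) == "Council A"
```

In practice the lookup is a point-in-polygon test against official administrative boundaries, but the routing logic (match the report's coordinates to a responsible authority) is the same.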
== Europe == === Denmark === ==== Government-led initiatives ==== In 2002, MindLab, a public-sector service design and innovation group, was established by the Danish ministries of Business and Growth, Employment, and Children and Education. MindLab was one of the world's first public-sector design innovation labs, and its work inspired the proliferation of similar labs and user-centered design methodologies in many countries worldwide. The design methods used at MindLab typically follow an iterative approach of rapid prototyping and testing, applied not just to government projects but also to government organizational structure, using ethnographically inspired user research, creative ideation processes, and visualization and modeling of service prototypes. In Denmark, design within the public sector has been applied to a variety of projects, including rethinking Copenhagen's waste management, improving social interactions between convicts and guards in Danish prisons, and transforming services in Odense for mentally disabled adults. === Estonia === ==== Government-led initiatives ==== The process of digitalization in Estonia began in 2002, when local and central governments began building an infrastructure of autonomous yet interconnected data systems. That same year, Estonia launched a fully digitalized national ID system paired with digital signatures. The national ID system allows Estonians to pay taxes online, vote online, do online banking, access their health care records, and process 99% of Estonian public services online 24 hours a day, seven days a week. Estonia is well known internationally for its e-voting system. Internet voting (where citizens vote remotely with their own equipment) was piloted in Estonia in 2005 and has been in use since then. As of 2016, Estonia's internet voting system had been used in three local elections, two European Parliament elections, and three parliamentary elections.
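Estonia's digital ID pairs online authentication with digital signatures. As a toy illustration of the hash-then-sign principle behind any such scheme (textbook RSA with deliberately tiny parameters; the production system uses full-size keys held on a smart card, not this sketch):

```python
import hashlib

# Toy RSA key pair. Real keys use 2048-bit or larger moduli.
p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def _digest(message: bytes) -> int:
    # Hash the document; reduce mod n only because our toy modulus is tiny.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Signing: apply the private exponent to the document hash.
    return pow(_digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Verification: the public exponent must recover the same hash.
    return pow(signature, e, n) == _digest(message)

doc = b"Tax declaration 2024"   # hypothetical document
sig = sign(doc)
assert verify(doc, sig)
assert not verify(b"Tampered declaration", sig)
```

The point of the sketch is the asymmetry: anyone holding the public key can check a signature, but only the ID holder's private key can produce one, which is what lets signed tax filings or votes be both remote and verifiable.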
In 2007, Estonia faced a large, politically motivated cyberattack that damaged much of the country's digital infrastructure; as a result, it became home to the NATO Cooperative Cyber Defence Centre of Excellence. The National Security Response was updated and approved in 2010 in response to the cyberattacks, recognizing the growing threat of cybercrime in Estonia. In 2014, Estonia launched e-Residency, which allows users to create and manage a location-independent business online from anywhere in the world. That was followed by an immigration visa for digital nomads, a novel approach to immigration policy. ==== Citizen-led initiatives ==== Several citizen-designed e-democracy platforms have launched in Estonia. In 2013, the online platform People's Assembly (Rahvakogu) was launched to crowdsource ideas and proposals for amending Estonia's electoral laws, political party law, and other issues related to democracy. Citizen OS is another e-democracy platform, free and open source. The platform was created to enable Estonian citizens to engage in collaborative decision-making, encouraging users to initiate petitions and participate in meaningful discussion of issues in society. === France === The most dynamic French city for civic tech is Paris, with many initiatives based in the Sentier, a neighborhood known for being a tech hub. According to Le Monde, French civic tech is "already a reality" but lacks the investment to scale up. ==== Government-led initiatives ==== In France, public data are available on data.gouv.fr through the Etalab mission, under the authority of the Prime Minister. Government agencies are also leading large citizen consultations through the Conseil national du numérique (National Digital Council), for example on the law for a digital republic (Projet de loi pour une république numérique).
==== Citizen-led initiatives ==== The French citizen community for civic tech is gathered in the collective Démocratie ouverte (Open Democracy). The collective's main purpose is to enhance democracy by increasing citizen power, improving collective decision-making, and updating the political system. Démocratie ouverte gathers many projects focused on understanding politics, renewing institutions, participating in democracy, and public action. Several open-source, non-profit web platforms have been launched nationwide to support citizens' direct involvement: Communecter.org, Demodyne.org, and Democracy OS France (derived from the Argentinian initiative). LaPrimaire.org organizes open primaries to allow the French to choose the candidates they wish to run in public elections. === Iceland === The Icelandic constitutional reform of 2010–13 instituted a process for reviewing and redrafting the constitution after the 2008 financial crisis, using social media to gather feedback on twelve successive drafts. Beginning in October 2011, a Citizens Foundation platform called Betri Reykjavík has been used by citizens to inform each other and vote on issues. Each month the city council formally evaluates the top proposals before issuing an official response to each participant. As of 2017, the number of proposals approved by the city council had reached 769. The Pirate Party (Iceland) uses the crowdsourcing platform Píratar for members to create party policies. === Italy === ==== Citizen-led initiatives ==== A consortium of TOP-IX, FBK and RENA created the Italian civic tech school, whose first edition was held in May 2016 in Turin. The Five Star Movement, an Italian political party, has a tool called Rousseau that gives members a way to communicate with their representatives.
=== Spain === The Madrid City Council has a department of Citizen Participation that runs a platform called Decide Madrid, where registered users can discuss topics with others in the city, propose actions for the City Council, and submit ideas for how to spend a portion of the budget on projects chosen through participatory budgeting. Podemos, the Spanish political party, uses a subreddit called Plaza Podemos where anybody can propose and vote on ideas. === Sweden === The City of Stockholm has a make-a-suggestion service, on stockholm.se and as an app, that allows citizens to report ideas for improving the city along with a photo and GPS coordinates. Each suggestion is sent to the appropriate office, which can place a work order. During 2016, one hundred thousand requests were recorded. The e-service began in September 2013. The city government of Gothenburg has an online participatory voting system, open to every citizen, for proposing changes and solutions. When a proposal receives more than 200 votes, it is delivered to the relevant political committee. === United Kingdom === ==== Government-led initiatives ==== In 2007 and 2008, documents from the British government explored the concept of "user-driven public services" and scenarios of highly personalized public services. The documents proposed a new view of the roles of service providers and users in developing new, highly customized public services through user involvement. This view has been explored through an initiative in the UK. Under the influence of the European Union, the possibilities of service design for the public sector are being researched, taken up, and promoted in countries such as Belgium. Care Opinion was set up in 2005 to strengthen the voice of patients in the NHS.
The Behavioural Insights Team (BIT), also known as the Nudge Unit, was founded in 2010 as part of the British Cabinet Office to apply nudge theory to improve British government policy and services and to save money. In 2014, BIT became a decentralized, semi-privatized company, with the charity Nesta, BIT employees and the British government each owning a third of the new business. That same year, a nudge unit referred to as the 'US Nudge Unit' was added to the United States government under President Obama, working within the White House Office of Science and Technology Policy. ==== Citizen-led initiatives ==== FixMyStreet.com is a website and app developed by mySociety, a UK-based civic technology company that builds online democracy tools for British citizens. FixMyStreet allows citizens in the United Kingdom to report public infrastructure issues (such as potholes or broken streetlights) to the proper local authority. FixMyStreet inspired similar efforts in many countries, which followed suit in using civic technology to improve public infrastructure. The website was funded by the Department for Constitutional Affairs Innovation Fund and created by mySociety. Along with the website itself, mySociety released the FixMyStreet platform as a free and open-source software framework that allows users to create their own websites for reporting street problems. mySociety has many other tools, such as parliamentary monitoring platforms, that work in many countries for different types of governance. When such tools are integrated into government systems, citizens can not only understand the inner workings of their now-transparent government but also have the means to "exert influence over the people in power". Newspeak House is a community space and venue focused on building a community of civic and political technology practitioners in the United Kingdom.
Spacehive is a crowdfunding platform for civic improvement projects that allows citizens and local groups to propose project ideas, such as improving a local park or starting a street market. Projects are then funded by a mix of citizens, companies and government bodies. The platform is used by several councils, including the Mayor of London, to co-fund projects. Democracy Club is a community interest company founded in 2009 to provide British voters with easy access to candidate lists for upcoming elections. Democracy Club uses a network of volunteers to crowdsource information about candidates, which is then presented to voters via a postcode search on the website whocanivotefor.co.uk. Democracy Club also works with the Electoral Commission to provide data for a national polling station finder at wheredoivote.co.uk and on the commission's own website. === Ukraine === ==== Government-led initiatives ==== In Ukraine, a major civic tech movement started with open data reform in 2014. Public data are now available on data.gov.ua, the national open data portal. ==== Citizen-led initiatives ==== Widely used Ukrainian civic tech projects include the donor recruitment platform DonorUA; Open Data Bot, a service for monitoring Ukrainian company data and court registers; and the participatory budgeting platform "Громадський проект" ("Public Project"), which has over 3 million users. In 2017, to foster the growth of civic tech initiatives, the Ukrainian NGO SocialBoost launched the 1991 Civic Tech Center, a dedicated community space in the country's capital, Kyiv. The space opened following a $480,000 grant from Omidyar Network, the philanthropic investment firm established by eBay founder Pierre Omidyar. == North America == === Canada === ==== Government-led initiatives ==== The Canadian Digital Service (CDS) was launched in 2017 as part of an effort to bring better IT to the Canadian government.
The CDS was established within the Treasury Board of Canada, the agency that oversees spending within departments and the operations of the public service. Scott Brison, the president of the Treasury Board, launched CDS and was Canada's first Minister of Digital Government. ==== Citizen-led initiatives ==== As in other countries, the Canadian civic technology movement is home to several organizations. Code for Canada is a non-profit group loosely following the model of Code for America. Several cities and regions host civic technology groups with regular meetings (in order from west to east): Vancouver, Calgary, Edmonton, Waterloo Region, Toronto, Ottawa, Fredericton, Saint John, and Halifax. === United States === ==== Government-led initiatives ==== The Clinton, Bush, and Obama administrations pursued initiatives to increase government openness, whether through greater use of technology in political institutions or through more efficient means of civic engagement. The Obama administration pursued an Open Government Initiative based on principles of transparency and civic engagement. This strategy paved the way for increased governmental transparency in other nations, improving democracy for citizens' benefit and allowing greater participation in politics from a citizen's perspective. During his run for president, Obama was "tied directly to the extensive use of social media by the campaign". According to a study by the International Data Corporation (IDC), an estimated $6.4 billion would be spent on civic technology in 2015, out of the approximately $25.5 billion that governments in the United States would spend on external-facing technology projects. A Knight Foundation survey of the civic technology field found that the number of civic technology companies grew by roughly 23% annually between 2008 and 2013.
Departments like 18F and the United States Digital Service have also been highlighted as examples of government investment in civic technology. Inspired by an appetite to build government technology with new processes, new digital agencies started the Digital Services Coalition to build on that momentum. ==== Citizen-led initiatives ==== Civic technology is built by a variety of companies, organizations and volunteer groups. One prominent example is Code for America, a not-for-profit based in San Francisco working to bridge the gap between government and citizens. College students from Harvard University created the national non-profit Coding it Forward, which creates data science and technology internships for undergraduate and graduate students at United States federal agencies. Another example of a civic technology organization is Chi Hack Night, a weekly, volunteer-run event in Chicago for building, sharing and learning about civic technology. Civic Hall is a coworking and event space in New York City for people who want to contribute to civic-minded projects using technology. OpenGov creates software designed to help public agencies make data-driven decisions, improve budgeting and planning, and inform elected officials and citizens. OneBusAway, a mobile app that displays real-time transit information, exemplifies civic technology's use of open data. It is maintained by volunteers and has the civic utility of helping people navigate cities, following the idea that technology can be a tool through which government acts as a social equalizer. Princeton University professor Andrew Appel set out to prove how easy it was to hack into a voting machine. On February 3, 2007, he and a graduate student, Alex Halderman, purchased a voting machine, and Halderman picked the lock in seven seconds.
They removed the machine's four ROM chips and replaced them with modified versions of their own: firmware that could subtly alter the tally of votes without betraying any hint to the voter. The whole process took less than seven minutes. In September 2016, Appel wrote testimony for the House Subcommittee on Information Technology hearing on "Cybersecurity: Ensuring the Integrity of the Ballot Box", suggesting that Congress eliminate touchscreen voting machines after the 2016 election, and that it require all elections to be subject to sensible auditing, to ensure that the systems are functioning properly and to prove to the American people that their votes are counted as cast. === Mexico === ==== Government-led initiatives ==== Within the Mexican president's office, a national digital strategy coordinator works on Mexico's national digital strategy. The office has created the gob.mx portal, a website designed for Mexican citizens to engage with their government, as well as a system to share open government data. According to McKinsey & Company, in a 2018 survey Mexico had the worst-rated citizen experience (4.4 out of 10) for convenience and accessibility of government services among the countries surveyed (Canada, France, Germany, Mexico, the United Kingdom, and the United States). ==== Citizen-led initiatives ==== Arena Electoral was an online platform created by Fundación Ethos during the Mexican electoral process of 2012 to promote responsible voting. An online simulation took the four presidential candidates in that election cycle and presented each with policy issues drawn from the Mexican national agenda, to which they had to propose solutions. Once each candidate gave their solutions, the platform published them on its website and left it to Mexican citizens to vote for the best policy.
== Latin America == === Argentina === Partido de la Red (Net Party) is an Argentine political party that uses the DemocracyOS open-source software with the goal of electing representatives who vote according to what citizens decide online. Caminos de la Villa is a citizen action platform through which citizens can monitor the urbanization of the City of Buenos Aires. Users can view detailed information about the government's work in the neighborhoods, download documents along with photos of that work, and report issues with public services through the platform. === Bolivia === Observatorio de Justicia Fiscal desde las Mujeres (English: The Women's Fiscal Justice Observatory) is an organization that reviews the country's fiscal policies, using a system of the same name to process information about public spending with a gender focus, with the aim of making national expenditure more equitable. === Brazil === NOSSAS, a Brazilian organization that helps citizens and groups voice their struggles and make change, was founded in 2011. It has also built its own tech platform, BONDE, which other organizations can use to make their own websites and to extend their reach. Apart from BONDE, NOSSAS provides support and programs to those who want to become activists. === Chile === CitizenLab is a civic technology company that works with governments in many countries and localities to keep citizens better informed for democratic public decision-making. In 2019, it expanded to Chile and built teams to support governments with engagement, budgeting, planning, and more. === Colombia === Founded in 2016, Movilizatorio was created to encourage and promote citizen participation in democracy.
Movilizatorio works on many projects addressing political, social, behavioral, and cultural issues in the country. One of its projects brought a local community together after an elementary school had failed to start classes; shortly after the movement began, having gathered signatures and petitioned the Secretary of Education, the community saw classes start. === Panama === Fundación para el Desarrollo de la Libertad Ciudadana (English: Foundation for the Development of Citizen Freedom) is an organization founded in 1995 whose main goal is to improve democracy in Panama. It pursues this by promoting transparency in government to prevent corruption and by engaging with citizens to increase democratic participation. === Paraguay === TEDIC is an organization founded in 2012 that defends the digital rights of citizens. TEDIC researches cybersecurity, copyright, artificial intelligence, and more, and also promotes and develops its own software for people to use to make social change. It has worked on topics such as personal data, freedom of expression, gender, and digital inclusion. === Uruguay === A Tu Servicio, a civic tech platform founded in 2015, informs users and citizens about the country's public health services so that they can make informed decisions about medical providers. It features a tool for comparing two health care providers on data including wait times, prices, and numbers of users and workers. DATA Uruguay is an organization that works on issues surrounding data. It works with other organizations and communities to create tools with open data, and promotes open data and the transparency of public information. === Venezuela === Amid the COVID-19 pandemic in Venezuela, programmers made various apps with civic uses. One of these was Docti.App, an app that listed locations citizens could go to in emergencies.
It offered a filterable list for finding whatever users needed, including medicine and oxygen bottles. Another example is Javenda, a web application for finding nearby hospitals; its developer gathered data from health centers, added it to a map, and made it accessible for users to locate them. == Effects == === Effects on social behavior and civic engagement === Because of the conveniences provided by civic technology, there are benefits as well as growing concern about the effects it may have on social behavior and civic engagement. New technology allows for connectivity and new forms of communication, and changes how we interact with issues and contexts beyond one's intimate sphere. Civic technology affords transparency in government through open government data, and allows more people of diverse socioeconomic levels to build and engage with civic matters in a way that was not possible before. ==== Communication ==== The importance of face-to-face interaction has also been called into question with the increase in e-mail and social media and the decrease in traditional, in-person social interaction. Technology as a whole may be responsible for this change in social norms, but it also holds potential for reversing it through audio and video communication. More research needs to be conducted to determine whether these are appropriate substitutes for in-person interaction, or whether any substitute is even feasible. Preece & Shneiderman discuss the important social aspect of civic technology with their "reader-to-leader framework", which describes how readers become contributors, then collaborators, and finally leaders. This chain of communication allows the interests of the masses to be communicated to implementers. ==== Elections ==== Regarding elections and online polling, there is the potential for voters to make less informed decisions because of the ease of voting.
Although many more voters may turn out, they may do so only because it is easy, without consciously making a decision based on their own synthesized opinion. It has been suggested that if online voting becomes more common, so should constituent-led discussions of the issues or candidates being polled. Voting advice applications help voters find the candidates and parties closest to their preferences, and studies suggest that use of these applications tends to increase turnout and affect voters' choices. An experiment during assembly elections in the Indian state of Uttar Pradesh showed that sending villages voice calls and text messages informing them of candidates' criminal charges increased the vote share of clean candidates and decreased the vote share of violent criminal candidates. === Effects on socioeconomics === Advanced technologies come at higher costs, and increased reliance on civic technology may leave low-income families in the dark if they cannot afford the platforms it runs on, such as computers and tablets. This widens the gap between lower- and middle/high-socioeconomic-class families. Knowledge of how to use computers is equally important for accessing civic technology applications online, and is also generally lower in low-income households. According to a study by the National Center for Education Statistics, 14% of students between the ages of 3 and 18 do not have access to the internet. Those with lower socioeconomic status tend to cut their budgets by not installing internet in their homes. Public schools have taken the lead in ensuring proper technology access and education in the classroom to better prepare children for a high-tech world, but there is still a clear difference between online contributions from those with and without experience on the internet.
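The preference-matching at the heart of a voting advice application can be sketched as a simple nearest-neighbor computation: voter and candidates answer the same issue questions on a numeric scale, and the candidate with the smallest distance to the voter's answers is suggested. All issue names, candidate names, and scores below are hypothetical illustrations, not any real application's data or method.

```python
# Minimal sketch of voting-advice-application matching: Euclidean distance
# between a voter's issue positions and each candidate's stated positions.
import math

def closest_candidate(voter, candidates):
    """Return the candidate whose issue positions lie nearest the voter's.

    voter      -- dict mapping issue name to a position score
    candidates -- dict mapping candidate name to such a dict
    """
    def distance(positions):
        return math.sqrt(sum((voter[i] - positions[i]) ** 2 for i in voter))
    return min(candidates, key=lambda name: distance(candidates[name]))

voter = {"taxes": 2, "environment": 5, "healthcare": 4}
candidates = {
    "Candidate A": {"taxes": 1, "environment": 5, "healthcare": 5},
    "Candidate B": {"taxes": 5, "environment": 1, "healthcare": 2},
}
print(closest_candidate(voter, candidates))  # Candidate A
```

Real applications typically use more elaborate scoring (weighted issues, per-party profiles), but the distance-minimization idea is the same.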
== See also == Collaborative e-democracy Comparison of civic technology platforms Digital citizen E-government Government by algorithm Open government Service design == References ==
https://en.wikipedia.org/wiki/Civic_technology
Assistive technology (AT) is a term for assistive, adaptive, and rehabilitative devices for people with disabilities and the elderly. Disabled people often have difficulty performing activities of daily living (ADLs) independently, or even with assistance. ADLs are self-care activities that include toileting, mobility (ambulation), eating, bathing, dressing, grooming, and personal device care. Assistive technology can ameliorate the effects of disabilities that limit the ability to perform ADLs. Assistive technology promotes greater independence by enabling people to perform tasks they were formerly unable to accomplish, or had great difficulty accomplishing, by providing enhancements to, or changing methods of interacting with, the technology needed to accomplish such tasks. For example, wheelchairs provide independent mobility for those who cannot walk, while assistive eating devices can enable people who cannot feed themselves to do so. Due to assistive technology, disabled people have the opportunity for a more positive and easygoing lifestyle, with an increase in "social participation", "security and control", and a greater chance to "reduce institutional costs without significantly increasing household expenses." In schools, assistive technology can be critical in allowing students with disabilities to access the general education curriculum. Students who experience challenges writing or keyboarding, for example, can use voice recognition software instead. Assistive technologies assist people who are recovering from strokes and people who have sustained injuries that affect their daily tasks. A recent study from India led by Dr Edmond Fernandes et al.
from the Edward & Cynthia Institute of Public Health, published in the WHO SEARO journal, reported that geriatric care policies addressing functional difficulties among older people ought to be mainstreamed, and that resolving out-of-pocket spending on assistive technologies will require attention to government schemes for social protection. == Adaptive technology == Adaptive technology and assistive technology are different. Assistive technology is any object or system that helps people with disabilities, while adaptive technology covers items specifically designed for disabled people that would seldom be used by a non-disabled person. Consequently, adaptive technology is a subset of assistive technology. Adaptive technology often refers specifically to electronic and information technology access. == Occupational therapy and assistive technology == Occupational therapy (OT) utilizes everyday occupations as a therapeutic tool for enhancing or enabling participation in healthy occupations to promote health and well-being (AOTA, 2020). Occupations include activities of daily living (ADLs), instrumental activities of daily living (IADLs), health management, rest and sleep, education, work, play, leisure, and social participation (AOTA, 2020). "As occupational therapy professionals, we are uniquely trained to advocate for client-centered care that reduces barriers to participation in meaningful occupations and promotes overall well-being" (Clark, Iqbal & Myers, 2022). OT practitioners (OTPs) utilize assistive technologies (AT) to modify environments and promote access and fit, facilitating independence. For example, voice-activated smart home technology allows an individual to control devices such as light switches, the thermostat, the oven, blinds, and music from wherever they are.
OTPs evaluate a client's strengths and abilities and connect them with desired tasks. OTPs help empower the client to match specific goals to AT tools. The theoretical approaches or frameworks OTPs frequently use to guide a client's AT choices include: 1) the HAAT model by Cook, Polgar & Encarnaçāo (2015); 2) the Interdependence-Human Activity Assistive Technology model (I-HAAT) by Lee et al. (2020); 3) the SETT framework by Zabala (2005); and 4) the Unified Theory of Acceptance and Use of Technology (UTAUT 2) by Venkatesh, Thong & Xu (2012). OTPs may also seek advanced training through the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) to receive Assistive Technology Professional (ATP) and/or Seating and Mobility Specialist (SMS) certification. Additional training and certification may specialize in a focus area, such as the Certified Assistive Technology Instructional Specialist for Individuals with Visual Impairments (CATIS™) (ACVREP, 2024). == Mobility impairments == === Wheelchairs === Wheelchairs are manually or electrically propelled devices that include a seating system and are designed to substitute for the normal mobility that most people have. Wheelchairs and other mobility devices allow people to perform mobility-related activities of daily living, which include feeding, toileting, dressing, grooming, and bathing. They come in a number of variations and can be propelled either by hand or by motors, with the occupant using electrical controls to manage motors and seating control actuators through a joystick, sip-and-puff control, head switches, or other input devices. Often there are handles behind the seat for someone else to push, or input devices for caregivers. Wheelchairs are used by people for whom walking is difficult or impossible due to illness, injury, or disability.
Ambulatory wheelchair users may also use other devices, such as walkers. Newer advancements in wheelchair design enable wheelchairs to climb stairs, go off-road, or propel themselves using Segway technology or add-ons such as handbikes or power assists. === Transfer devices === Patient transfer devices generally allow patients with impaired mobility to be moved by caregivers between beds, wheelchairs, commodes, toilets, chairs, stretchers, shower benches, automobiles, swimming pools, and other patient support systems (i.e., radiology, surgical, or examining tables). The most common devices are transfer benches; stretcher or convertible chairs (for lateral, supine transfer); sit-to-stand lifts (for moving patients from one seated position to another, e.g., from wheelchair to commode); air-bearing inflatable mattresses (for supine transfer, e.g., from a gurney to an operating room table); gait belts (or transfer belts); and slider boards (or transfer boards), usually used for transfer from a bed to a wheelchair or operating table. Highly dependent patients who cannot assist their caregiver in moving them often require a patient lift (a floor- or ceiling-suspended sling lift), which, though invented in 1955 and in common use since the early 1960s, is still considered the state-of-the-art transfer device by OSHA and the American Nursing Association. === Walkers === A walker or walking frame is a tool for disabled people who need additional support to maintain balance or stability while walking. It consists of a frame that is about waist high, approximately twelve inches deep, and slightly wider than the user. Walkers are also available in other sizes, such as for children or for heavy people. Modern walkers are height-adjustable. The front two legs of the walker may or may not have wheels attached, depending on the strength and abilities of the person using it.
It is also common to see caster wheels or glides on the back legs of a walker that has wheels on the front. A walker with three or four wheels is often referred to as a rollator. === Treadmills === Bodyweight-supported treadmill training (BWSTT) is used to enhance the walking ability of people with neurological injury. These are therapist-assisted devices used in the clinical setting, but their use is limited by the personnel and labor demands placed on physical therapists. The BWSTT device, and many others like it, assists physical therapists by providing task-specific practice of walking for people recovering from neurological injury. === Prosthesis === A prosthesis, prosthetic, or prosthetic limb is a device that replaces a missing body part. It is part of the field of biomechatronics, the science of using mechanical devices with human muscular, musculoskeletal, and nervous systems to assist or enhance motor control lost by trauma, disease, or defect. Prostheses are typically used to replace parts lost by injury (traumatic) or missing from birth (congenital), or to supplement defective body parts. Inside the body, artificial heart valves are in common use, while artificial hearts and lungs see less common use but are under active technological development. Other medical devices and aids that can be considered prosthetics include hearing aids, artificial eyes, palatal obturators, gastric bands, and dentures. Prostheses are specifically not orthoses, although in certain circumstances a prosthesis may end up providing some or all of the same functional benefits as an orthosis. Prostheses are technically the complete finished item. For instance, a C-Leg knee alone is not a prosthesis, but only a prosthetic component. The complete prosthesis would consist of the attachment system to the residual limb – usually a "socket" – and all the attachment hardware components all the way down to and including the terminal device.
Despite the technical difference, the terms are often used interchangeably. The terms "prosthetic" and "orthotic" are adjectives used to describe devices such as a prosthetic knee. The terms "prosthetics" and "orthotics" are used to describe the respective allied health fields. An occupational therapist's role in prosthetics includes therapy, training, and evaluation. Prosthetic training includes orientation to prosthetic components and terminology, donning and doffing, wearing schedules, and care of the residual limb and the prosthesis. === Exoskeletons === A powered exoskeleton is a wearable mobile machine powered by a system of electric motors, pneumatics, levers, hydraulics, or a combination of technologies that allow limb movement with increased strength and endurance. Its design aims to provide back support, sense the user's motion, and send a signal to motors which manage the gears. The exoskeleton supports the shoulder, waist, and thigh, and assists movement for lifting and holding heavy items while lowering back stress. === Adaptive seating and positioning === People with balance and motor function challenges often need specialized equipment to sit or stand safely and securely. This equipment is frequently specialized for specific settings, such as a classroom or nursing home. Positioning is often important in seating arrangements to ensure that the user's body pressure is distributed equally without inhibiting desired movement. Positioning devices have been developed to help people stand and bear weight on their legs without risk of a fall. These standers are generally grouped into two categories based on the position of the occupant. Prone standers distribute the body weight to the front of the individual and usually have a tray in front of them, making them good for users who are actively trying to carry out a task.
Supine standers distribute the body weight to the back and are good for cases where the user has more limited mobility or is recovering from injury. === For children === Children with severe disabilities can develop learned helplessness, which makes them lose interest in their environment. Robotic arms provide an alternative means of engaging in joint play activities, allowing children to manipulate real objects in the context of play. Children with disabilities face challenges in accessing play and social interactions. Play is essential for the physical, emotional, and social well-being of all children. The use of assistive technology has been recommended to facilitate the communication, mobility, and independence of children with disabilities. Augmentative and alternative communication (AAC) devices have been shown to facilitate the growth and development of language as well as increase rates of symbolic play in children with cognitive disabilities. AAC devices can be no-tech (sign language and body language), low-tech (picture boards, paper and pencils), or high-tech (tablets and speech-generating devices). The choice of AAC device is very important and should be determined on a case-by-case basis by speech therapists and assistive technology professionals. The early introduction of powered mobility has been shown to positively impact the play and psychosocial skills of children who are unable to move independently. Powered cars, such as those of the Go Baby Go program, have emerged as a cost-effective means of facilitating the inclusion of children with mobility impairments in school. == Visual impairments == Many people with serious visual impairments live independently, using a wide range of tools and techniques. Examples of assistive technology for visual impairment include screen readers, screen magnifiers, Braille embossers, desktop video magnifiers, and voice recorders.
=== Screen readers === Screen readers are used to help visually impaired people easily access electronic information. These software programs run on a computer and convey the displayed information through voice (text-to-speech) or braille (refreshable braille displays), in some cases combined with magnification for low-vision users. A variety of platforms and applications are available at a range of costs with differing feature sets. Some examples of screen readers are Apple VoiceOver, CheckMeister browser, Google TalkBack, and Microsoft Narrator. Screen readers may rely on text-to-speech tools, which require documents in electronic form. Hard-copy documents scanned into a computer, however, produce images that text-to-speech software cannot recognize, so optical character recognition (OCR) technology is often used together with text-to-speech software. === Braille and braille technology === Braille is a system of raised dots formed into units called braille cells. A full braille cell is made up of six dots, in two parallel columns of three, and different combinations of dots represent letters, numbers, punctuation marks, or words. People read the code of raised dots with their fingers. Assistive technology using braille is called braille technology. === Braille translator === A braille translator is a computer program that can translate inkprint into braille or braille into inkprint. A braille translator can be an app on a computer or be built into a website, a smartphone, or a braille device. === Braille embosser === A braille embosser is, simply put, a printer for braille. Instead of a standard printer adding ink onto a page, the braille embosser imprints the raised dots of braille onto a page.
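The six-dot cell described above maps directly onto the Unicode braille patterns block (U+2800–U+28FF), where dot k of the cell corresponds to bit 1 << (k - 1) of the code point offset. A minimal Grade 1 (uncontracted) letter translator can be sketched along those lines; this is an illustrative sketch, not any production braille translator:

```python
# Minimal Grade 1 (uncontracted) braille sketch: lowercase letters to
# Unicode braille cells. Dot k of the six-dot cell is bit 1 << (k - 1).
_DOTS = {  # dot numbers for the first decade, a-j
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
    "j": (2, 4, 5),
}
# k-t repeat a-j with dot 3 added; u, v, x, y, z add dots 3 and 6.
for base, letter in zip("abcdefghij", "klmnopqrst"):
    _DOTS[letter] = _DOTS[base] + (3,)
for base, letter in zip("klmno", "uvxyz"):
    _DOTS[letter] = _DOTS[base] + (6,)
_DOTS["w"] = (2, 4, 5, 6)  # w postdates the French original and breaks the pattern

def to_braille(text):
    """Translate lowercase letters and spaces to Unicode braille cells."""
    def cell(ch):
        if ch == " ":
            return "\u2800"  # blank cell
        return chr(0x2800 + sum(1 << (d - 1) for d in _DOTS[ch]))
    return "".join(cell(ch) for ch in text)

print(to_braille("braille"))  # ⠃⠗⠁⠊⠇⠇⠑
```

Real braille translation is considerably harder than this letter-by-letter mapping: contracted (Grade 2) braille uses hundreds of context-dependent contractions, plus indicators for capitals and numbers.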
Some braille embossers combine both braille and ink so the documents can be read with either sight or touch. === Refreshable braille display === A refreshable braille display or braille terminal is an electro-mechanical device for displaying braille characters, usually by means of round-tipped pins raised through holes in a flat surface. Computer users who cannot use a computer monitor use it to read a braille output version of the displayed text. === Desktop video magnifier === Desktop video magnifiers are electronic devices that use a camera and a display screen to perform digital magnification of printed materials. They enlarge printed pages for those with low vision. A camera connects to a monitor that displays real-time images, and the user can control settings such as magnification, focus, contrast, underlining, highlighting, and other screen preferences. They come in a variety of sizes and styles; some are small and portable with handheld cameras, while others are much larger and mounted on a fixed stand. === Screen magnification software === A screen magnifier is software that interfaces with a computer's graphical output to present enlarged screen content. It allows users to enlarge the texts and graphics on their computer screens for easier viewing. Similar to desktop video magnifiers, this technology assists people with low vision. After the user loads the software into their computer's memory, it serves as a kind of "computer magnifying glass". Wherever the computer cursor moves, it enlarges the area around it. This allows greater computer accessibility for a wide range of visual abilities. === Large-print and tactile keyboards === A large-print keyboard has large letters printed on the keys. On the keyboard shown, the round buttons at the top control software which can magnify the screen (zoom in), change the background color of the screen, or make the mouse cursor on the screen larger. 
The "bump dots" on the keys, installed in this case by the organization using the keyboards, help the user find the right keys in a tactile way. === Navigation assistance === The literature on assistive technology for navigation has expanded in the IEEE Xplore database since 2000, with over 7,500 engineering articles written on assistive technologies and visual impairment in the past 25 years, and over 1,300 articles on solving the problem of navigation for people who are blind or visually impaired. In addition, over 600 articles on augmented reality and visual impairment have appeared in the engineering literature since 2000. Most of these articles were published within the past five years, and the number of articles in this area is increasing every year. GPS, accelerometers, gyroscopes, and cameras can pinpoint the exact location of the user, provide information on what is in the immediate vicinity, and assist in getting to a destination. === Wearable technology === Wearable technology consists of smart electronic devices that can be worn on the body as an implant or an accessory. New technologies are exploring how the visually impaired can receive visual information through wearable devices; examples include the OrCam device, eSight, and BrainPort. == Personal emergency response systems == Personal emergency response systems (PERS), or telecare (the UK term), are a particular sort of assistive technology that uses electronic sensors connected to an alarm system to help caregivers manage risk and help vulnerable people stay independent at home longer. Examples include systems put in place for seniors, such as fall detectors, thermometers (for hypothermia risk), and flooding and unlit-gas sensors (for people with mild dementia). Notably, these alerts can be customized to the particular person's risks. When an alert is triggered, a message is sent to a caregiver or contact center who can respond appropriately.
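The sensor-to-alert flow just described amounts to checking each reading against a rule customized to the person's risks and forwarding anything that trips a rule. A minimal sketch follows; the sensor names and thresholds are illustrative assumptions, not taken from any real PERS product:

```python
# Sketch of a PERS-style alert check: each sensor reading is tested against
# a per-sensor rule, and triggered rules produce messages for the caregiver
# or contact center. Sensor names and thresholds are hypothetical.
RULES = {
    "fall_detector": lambda v: v is True,      # any detected fall
    "room_temperature_c": lambda v: v < 16.0,  # hypothermia risk
    "flood_sensor": lambda v: v is True,       # water detected
}

def check_readings(readings):
    """Return alert messages for every reading that trips its rule."""
    alerts = []
    for sensor, value in readings.items():
        rule = RULES.get(sensor)
        if rule is not None and rule(value):
            alerts.append(f"ALERT: {sensor} reported {value}")
    return alerts

print(check_readings({"room_temperature_c": 14.5, "fall_detector": False}))
# ['ALERT: room_temperature_c reported 14.5']
```

Customizing alerts to an individual, as the text notes, then amounts to editing that rule table rather than changing the monitoring loop.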
== Accessibility software == In human–computer interaction, computer accessibility (also known as accessible computing) refers to the accessibility of a computer system to all people, regardless of disability or severity of impairment; examples include web accessibility guidelines. Another approach is for the user to present a token, such as a smart card, to the computer terminal; the token carries configuration information to adjust the computer's speed, text size, etc. to the user's particular needs. This is useful where users want to access public computer-based terminals in libraries, ATMs, information kiosks, etc. The concept is encompassed by CEN EN 1332-4, Identification Card Systems – Man-Machine Interface. The development of this standard has been supported in Europe by SNAPI, and it has been incorporated into the Lasseo specifications, but with limited success due to the lack of interest from public computer terminal suppliers. == Hearing impairments == People in the deaf and hard of hearing community have a more difficult time receiving auditory information than hearing individuals, and often rely on visual and tactile mediums for receiving and communicating information. The use of assistive technology and devices provides this community with various solutions to auditory communication needs by providing higher sound (for those who are hard of hearing), tactile feedback, visual cues, and improved technology access. Individuals who are deaf or hard of hearing use a variety of assistive technologies that provide different access to information in numerous environments. Most devices either provide amplified sound or alternate ways to access information through vision and/or vibration. These technologies can be grouped into three general categories: hearing technology, alerting devices, and communication support.
=== Hearing aids === A hearing aid or deaf aid is an electro-acoustic device designed to amplify sound for the wearer, usually with the aim of making speech more intelligible and correcting impaired hearing as measured by audiometry. This type of assistive technology helps people with hearing loss participate more fully in their hearing communities by allowing them to hear more clearly. Hearing aids amplify sound waves through use of a microphone, amplifier, and speaker. There is a wide variety of hearing aids available, including digital, in-the-ear, in-the-canal, behind-the-ear, and on-the-body aids. === Assistive listening devices === Assistive listening devices include FM, infrared, and loop assistive listening devices. This type of technology allows people with hearing difficulties to focus on a speaker or subject by removing extra background noise and distractions, making places like auditoriums, classrooms, and meetings much easier to participate in. An assistive listening device usually uses a microphone to capture an audio source near its origin and broadcast it wirelessly over an FM (frequency modulation) transmission, IR (infrared) transmission, IL (induction loop) transmission, or other transmission method. The listener uses an FM/IR/IL receiver to tune into the signal and listen at their preferred volume. === Amplified telephone equipment === This type of assistive technology allows users to amplify the volume and clarity of their phone calls so that they can easily partake in this medium of communication. There are also options to adjust the frequency and tone of a call to suit individual hearing needs. Additionally, there is a wide variety of amplified telephones to choose from, with different degrees of amplification: for example, a phone with 26 to 40 decibels of amplification is generally sufficient for mild hearing loss, while a phone with 71 to 90 decibels is better for more severe hearing loss.
== Augmentative and alternative communication == Augmentative and alternative communication (AAC) is an umbrella term that encompasses methods of communication for those with impairments or restrictions on the production or comprehension of spoken or written language. AAC systems are extremely diverse and depend on the capabilities of the user. They may be as basic as pictures on a board that are used to request food, drink, or other care; or they can be advanced speech generating devices, based on speech synthesis, that are capable of storing hundreds of phrases and words. == Cognitive impairments == Assistive Technology for Cognition (ATC) is the use of technology (usually high tech) to augment and assist cognitive processes such as attention, memory, self-regulation, navigation, emotion recognition and management, planning, and sequencing activity. Systematic reviews of the field have found that the number of ATC devices is growing rapidly but has largely focused on memory and planning, that there is emerging evidence for efficacy, and that considerable scope exists to develop new ATC. Examples of ATC include: NeuroPage, which prompts users about meetings; Wakamaru, which provides companionship, reminds users to take medicine, and calls for help if something is wrong; and telephone reassurance systems. === Memory aids === Memory aids are any type of assistive technology that helps a user learn and remember certain information. Many memory aids are used for cognitive impairments such as reading, writing, or organizational difficulties. For example, a Smartpen records handwritten notes by creating both a digital copy and an audio recording of the text. Users simply tap certain parts of their notes, and the pen saves the content and reads it back to them. From there, the user can also download their notes onto a computer for increased accessibility. Digital voice recorders are also used to record "in the moment" information for fast and easy recall at a later time. 
A 2017 Cochrane Review highlighted the current lack of high-quality evidence to determine whether assistive technology effectively supports people with dementia in managing memory issues. Thus, it is not yet clear whether assistive technology is beneficial for memory problems. === Educational software === Educational software is software that assists people with reading, learning, comprehension, and organizational difficulties. Any accommodation software such as text readers, notetakers, text enlargers, organization tools, word prediction, and talking word processors falls under the category of educational software. == Eating impairments == Adaptive eating devices include items commonly used by the general population, such as spoons, forks, and plates. However, they become assistive technology when they are modified to accommodate the needs of people who have difficulty using standard cutlery due to a disabling condition. Common modifications include increasing the size of the utensil handle to make it easier to grasp. Plates and bowls may have a guard on the edge that stops food from being pushed off the dish when it is being scooped. More sophisticated equipment for eating includes manual and powered feeding devices. These devices support those who have little or no hand and arm function and enable them to eat independently. == In sports == Assistive technology in sports is a growing area of technology design. Assistive technology is the array of new devices created to enable sports enthusiasts who have disabilities to play. Assistive technology may be used in adaptive sports, where an existing sport is modified to enable players with a disability to participate; or it may be used to invent completely new sports designed exclusively with athletes with disabilities in mind. An increasing number of people with disabilities are participating in sports, leading to the development of new assistive technology. 
Assistive technology devices can be simple, or "low-technology", or they may use highly advanced technology. "Low-tech" devices can include velcro gloves and adaptive bands and tubes. "High-tech" devices can include all-terrain wheelchairs and adaptive bicycles. Accordingly, assistive technology can be found in sports ranging from local community recreation to the elite Paralympic Games. More complex assistive technology devices have been developed over time, and as a result, sports for people with disabilities "have changed from being a clinical therapeutic tool to an increasingly competition-oriented activity". == In education == In the United States there are two major pieces of legislation that govern the use of assistive technology within the school system. The first is Section 504 of the Rehabilitation Act of 1973, and the second is the Individuals with Disabilities Education Act (IDEA), which was first enacted in 1975 under the name The Education for All Handicapped Children Act. In 2004, during the reauthorization period for IDEA, the National Instructional Material Access Center (NIMAC) was created, providing a repository of accessible text, including publishers' textbooks, to students with a qualifying disability. Files are provided in XML format and serve as a starting platform for braille readers, screen readers, and other digital text software. IDEA defines assistive technology as follows: "any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of a child with a disability. (B) Exception.--The term does not include a medical device that is surgically implanted, or the replacement of such device." Assistive technology listed in a student's IEP is not only recommended, it is required (Koch, 2017). 
These devices help students both with and without disabilities access the curriculum in a way they were previously unable to (Koch, 2017). Occupational therapists play an important role in educating students, parents, and teachers about the assistive technology they may interact with. Assistive technology in this area is broken down into low, mid, and high tech categories. Low tech encompasses equipment that is often low cost and does not include batteries or require charging. Examples include adapted paper and pencil grips for writing, or masks and color overlays for reading. Mid tech supports used in the school setting include handheld spelling dictionaries and portable word processors used for keyboarding written work. High tech supports involve the use of tablet devices and computers with accompanying software. Software supports for writing include auditory feedback while keyboarding, word prediction for spelling, and speech-to-text. Supports for reading include text-to-speech (TTS) software and font modification via access to digital text. Limited supports are available for math instruction; they mostly consist of grid-based software that allows younger students to keyboard equations, plus auditory feedback for more complex equations using MathML and DAISY. == Computer accessibility == One of the largest problems affecting disabled people is discomfort with prostheses. An experiment performed in Massachusetts used 20 people with various sensors attached to their arms. The subjects tried different arm exercises, and the sensors recorded their movements. All of the data helped engineers develop new engineering concepts for prosthetics. Assistive technology may also attempt to improve the ergonomics of the devices themselves, such as Dvorak and other alternative keyboard layouts, which offer more ergonomic arrangements of the keys. 
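Word prediction for spelling, mentioned above among the high tech writing supports, can be illustrated with a minimal prefix-matching sketch (the corpus, ranking, and function names here are invented for this example; real products use much larger frequency models trained on representative text):

```python
from collections import Counter


def build_predictor(corpus: str):
    """Return a predict(prefix, k) function suggesting up to k words
    that start with the typed prefix, ranked by corpus frequency."""
    counts = Counter(corpus.lower().split())

    def predict(prefix: str, k: int = 3):
        prefix = prefix.lower()
        matches = [w for w in counts if w.startswith(prefix)]
        # Most frequent first; alphabetical order breaks ties.
        matches.sort(key=lambda w: (-counts[w], w))
        return matches[:k]

    return predict


predict = build_predictor("the cat sat on the mat the cat ran and the dog sat")
print(predict("ca"))  # ['cat']
print(predict("th"))  # ['the']
```

Even this toy version shows the core mechanism: as the student types, each keystroke narrows the candidate list, so frequently used words can be selected with far fewer keystrokes than spelling them out.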
Assistive technology devices have been created to enable disabled people to use modern touch-screen mobile computers such as the iPad, iPhone, and iPod Touch. The Pererro is a plug-and-play adapter for iOS devices which uses the built-in Apple VoiceOver feature in combination with a basic switch. This brings touch-screen technology to those who were previously unable to use it. With the release of iOS 7, Apple introduced the ability to navigate apps using switch control. Switch access can be activated through an external Bluetooth-connected switch, a single touch of the screen, or right and left head turns detected by the device's camera. Additional accessibility features include AssistiveTouch, which allows a user to access multi-touch gestures through pre-programmed onscreen buttons. For users with physical disabilities, a large variety of switches are available and customizable to the user's needs, varying in size, shape, or the amount of pressure required for activation. A switch may be placed near any area of the body which has consistent and reliable mobility and is less subject to fatigue. Common sites include the hands, head, and feet. Eye-gaze and head-mouse systems can also be used as alternatives to mouse navigation. A user may use single or multiple switch sites; the process often involves scanning through items on a screen and activating the switch once the desired object is highlighted. == Home automation == The form of home automation called assistive domotics focuses on making it possible for elderly and disabled people to live independently. Home automation is becoming a viable option for the elderly and disabled who would prefer to stay in their own homes rather than move to a healthcare facility. This field uses much of the same technology and equipment as home automation for security, entertainment, and energy conservation, but tailors it towards elderly and disabled users. 
For example, automated prompts and reminders use motion sensors and pre-recorded audio messages; an automated prompt in the kitchen may remind the resident to turn off the oven, and one by the front door may remind the resident to lock the door. == Assistive technology and innovation == Innovation in assistive technology happens either through improvements to existing devices or the creation of new products. In WIPO's 2021 Technology Trends report, assistive products are grouped into either conventional or emerging technologies. Conventional assistive technology tracks innovation within well-established assistive products, whereas emerging assistive technology refers to more advanced products. These advanced assistive products are distinguished from conventional ones by the use of one or more enabling technologies (for instance, artificial intelligence, the Internet of Things, advanced sensors, new materials, additive manufacturing, advanced robotics, and augmented and virtual reality) or by the inclusion of implantable products or components. Such emerging assistive products are either more sophisticated or more functional versions of conventional assistive products, or completely novel assistive devices. For instance, conventional self-care assistive technology typically includes adaptive clothing, adaptive eating devices, incontinence products, and assistive products for manicure, pedicure, hair and facial care, dental care, or sexual activities. In comparison, emerging self-care assistive technologies include health and emotion monitoring, smart diapers, smart medication dispensing and management, and feeding-assistant robots. Although the distinction between conventional and emerging technologies is not always clear-cut, emerging assistive technology tends to be "smarter": it uses AI, is more connected and interactive, and includes body-integrated solutions or components. 
To a great extent this "conventional" versus "emerging" classification is based on the WHO's Priority Assistive Products List (APL) and the ISO 9999 standard for assistive products for persons with disabilities, the APL delineating the absolute minimum that countries should be offering to their citizens and ISO 9999 defining those products which are already well established in the market. This "well-established status" is reflected in patent filings between 2013 and 2017: patent registrations for assistive technologies identified as conventional are nearly eight times more numerous than those for emerging assistive technologies. However, patent filings related to more recent emerging assistive technologies are growing almost three times as fast as those pertaining to conventional ones. Patent filings in both conventional and emerging assistive technology are highly concentrated on mobility, hearing, and vision. Investment in emerging assistive technology also focuses on the environment. In the conventional sector, mobility represents 54% of all patent filings, an indication of increased interest in advanced mobility assistive product categories such as advanced prosthetics, walking aids, wheelchairs, and exoskeletons. In the past, the top patent offices for filing, and therefore the perceived target markets, in assistive technology have been the U.S. and Japan. Patenting activity has, however, been declining in these two jurisdictions. At the same time, there has been a surge in patent filings in China and an increase in filings in the Republic of Korea. This pattern is observed for both conventional and emerging assistive technology, with China's annual filings surpassing those of the U.S. in 2008 for conventional and 2014 for emerging assistive technology. Patent filings related to conventional assistive technology have also declined in Europe, especially in Germany, France, the Netherlands, and Norway. 
Patenting activity indicates the amount of interest in, and investment made in, an invention's applicability and commercialization potential. There is typically a lag between filing a patent application and commercialization, with a product passing through various readiness levels: research concept, proof of concept, minimum viable product, and finally commercial product. According to the 2021 WIPO report, the emerging technologies closest to fully commercial products included: myoelectric control of advanced prosthetics and wheelchair control (mobility); environment-controlling hearing aids (hearing); multifocal intraocular lenses, artificial retinas, and virtual and augmented reality wearables (vision); smart assistants and navigation aids (communication); smart home appliances (environment); and medication management and smart diapers (self-care). The technology readiness level and the related patenting activity can also be explained through factors that contribute to a product's entry to market: the expected impact on a person's participation in different aspects of life, the ease of adoption (need for training, fitting, additional equipment for interoperability, and so on), societal acceptance and potential ethical concerns, and the need for regulatory approval. The last is mainly the case for assistive technology that qualifies as medical technology. Among these aspects, acceptability and ethical considerations are particularly relevant to technologies that are extremely invasive (such as cortical or auditory brainstem implants); that replace the human caregiver and human interaction; that collect and use data on cloud-based services or interconnected devices (e.g., companion robots, smart nursing and health-monitoring technologies), raising privacy issues and requiring connectivity; or that raise safety concerns, such as autonomous wheelchairs. 
Beyond the patent landscape, industrial designs have an added importance for the field of assistive technology. Assistive technology is often not adopted, or else abandoned entirely, because of issues to do with design (lack of appeal) or comfort (poor ergonomics). Design often plays a role after the patenting activity, as a product needs to be re-designed for mass production. == Impacts == Overall, assistive technology aims to allow disabled people to "participate more fully in all aspects of life (home, school, and community)" and increases their opportunities for "education, social interactions, and potential for meaningful employment". It creates greater independence and control for disabled individuals. For example, in one study of 1,342 infants, toddlers and preschoolers, all with some kind of developmental, physical, sensory, or cognitive disability, the use of assistive technology created improvements in child development. These included improvements in "cognitive, social, communication, literacy, motor, adaptive, and increases in engagement in learning activities". Additionally, it has been found to lighten caregiver load. Both family and professional caregivers benefit from assistive technology. Through its use, the time that a family member or friend would need to care for a patient significantly decreases. However, studies show that care time for a professional caregiver increases when assistive technology is used. Nonetheless, their workload is significantly easier, as the assistive technology frees them from having to perform certain tasks. There are several platforms that use machine learning to identify the appropriate assistive device to suggest to patients, making assistive devices more accessible. == History == In 1988 the National Institute on Disability and Rehabilitation Research (NIDRR) awarded Gallaudet University a grant for the project "Robotic finger spelling hand for communication and access to text by deaf-blind persons". 
Researchers at the university developed and tested a robotic hand. Although it was never commercialized, the concept is relevant for current and future research. Since this grant, many others have been awarded. NIDRR-funded research appears to be moving from the fabrication of robotic arms that disabled persons can use to perform daily activities, toward developing robotics that assist with therapy in the hopes of achieving long-term performance gains. If robotics development succeeds, these mass-marketed products could assist tomorrow's longer-living elderly individuals enough to postpone nursing home stays. "Jim Osborn, executive director of the Quality of Life Technology Center, told a 2007 gathering of long-term care providers that if such advances could delay all nursing home admissions by a month, societal savings could be $1 billion monthly". A shortage of both paid personal assistants and available family members makes artificial assistance a necessity. == rATA tool by the World Health Organization == The rapid assistive technology assessment (rATA) is a tool developed by the World Health Organization to undertake household surveys that measure access to assistive technology and inform government policies around the world. == See also == == References == == Bibliography == American Speech-Language-Hearing Association. (2005). "Roles and Responsibilities of Speech-Language Pathologists With Respect to Augmentative and Alternative Communication: Position Statement". Archived from the original on February 13, 2009. Retrieved January 23, 2009. DeCoste, Denise C. (1997). "Chapter 10: Introduction to Augmentative and Alternative Communication Systems". In Glennen, Sharon; DeCoste, Denise C. (eds.). Handbook Of Augmentative And Alternative Communication. San Diego, CA: Singular Publishing Group. ISBN 978-1-56593-684-3. Schlosser, R. W.; Wendt, O. (2008). 
"Effects of augmentative and alternative communication intervention on speech production in children with autism: a systematic review". American Journal of Speech-Language Pathology. 17 (3): 212–230. doi:10.1044/1058-0360(2008/021). PMID 18663107. Beukelman, David R.; Mirenda, Pat (2005). Augmentative & alternative communication: supporting children & adults with complex communication needs (3rd ed.). Paul H. Brookes Publishing Company. ISBN 978-1-55766-684-0. Galvão Filho, T. (2009). Tecnologia Assistiva para uma Escola Inclusiva: apropriação, demandas e perspectivas (Doutorado em Educação) (in Portuguese). Salvador, Brazil: Faculdade de Educação, Universidade Federal da Bahia. Gillam, Ronald Bradley; Marquardt, Thomas P.; Martin, Frederick N. (2000). Communication sciences and disorders: from science to clinical practice. Jones & Bartlett Learning. ISBN 978-0-7693-0040-5. Mirenda, P. (2003). "Toward Functional Augmentative and Alternative Communication for Students With Autism: Manual Signs, Graphic Symbols, and Voice Output Communication Aids" (PDF). Language, Speech, and Hearing Services in Schools. 34 (3): 203–216. doi:10.1044/0161-1461(2003/017). PMID 27764322. S2CID 11595254. Archived (PDF) from the original on October 9, 2022. Mathy; Yorkston, K.; Guttman (2000). "Augmentative Communication for Individuals with Amyotrophic Lateral Sclerosis". In Beukelman, D.; Yorkston, K.; Reichle, J. (eds.). Augmentative and Alternative Communication Disorders for Adults with Acquired Neurologic Disorders. Baltimore: P. H. Brookes Pub. ISBN 978-1-55766-473-0. Jans, Deborah; Clark, Sue (1998). "Chapter 6: High Technology Aids to Communication". In Wilson, Allan (ed.). Augmentative Communication in Practice: An Introduction. University of Edinburgh. ISBN 978-1-898042-15-0. Archived from the original on July 8, 2015. Retrieved September 30, 2011. Parette, H. P.; Brotherson, M. J; Huer, M. B. (2000). 
"Giving families a voice in augmentative and alternative communication decision-making". Education and Training in Mental Retardation and Developmental Disabilities. 35: 177–190. Assistive Technology in Education: A Teacher's Guide, Amy Foxwell, 15 February 2022. == External links == WHO fact sheet on assistive technology
https://en.wikipedia.org/wiki/Assistive_technology
The Massachusetts Institute of Technology (MIT) is a private research university in Cambridge, Massachusetts, United States. Established in 1861, MIT has played a significant role in the development of many areas of modern technology and science. In response to the increasing industrialization of the United States, William Barton Rogers organized a school in Boston to create "useful knowledge." Initially funded by a federal land grant, the institute adopted a polytechnic model that stressed laboratory instruction in applied science and engineering. MIT moved from Boston to Cambridge in 1916 and grew rapidly through collaboration with private industry, military branches, and new federal basic research agencies, the formation of which was influenced by MIT faculty like Vannevar Bush. In the late twentieth century, MIT became a leading center for research in computer science, digital technology, artificial intelligence and big science initiatives like the Human Genome Project. Engineering remains its largest school, though MIT has also built programs in basic science, social sciences, business management, and humanities. The institute has an urban campus that extends more than a mile (1.6 km) along the Charles River. The campus is known for academic buildings interconnected by corridors and many significant modernist buildings. MIT's off-campus operations include the MIT Lincoln Laboratory and the Haystack Observatory, as well as affiliated laboratories such as the Broad and Whitehead Institutes. Campus life is often noted for demanding workloads, a hands-on approach to research and coursework, and elaborate practical jokes known as "hacks". As of October 2024, 105 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. 
In addition, 58 National Medal of Science recipients, 29 National Medals of Technology and Innovation recipients, 50 MacArthur Fellows, 83 Marshall Scholars, 41 astronauts, 16 Chief Scientists of the US Air Force, and 8 foreign heads of state have been affiliated with MIT. The institute also has a strong entrepreneurial culture and MIT alumni have founded or co-founded many notable companies. == History == === Foundation and vision === [...] a school of industrial science aiding the advancement, development and practical application of science in connection with arts, agriculture, manufactures, and commerce [...] In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a "Conservatory of Art and Science", but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by John Albion Andrew, the governor of Massachusetts, on April 10, 1861. Rogers, a geologist who had recently arrived in Boston from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but an institution combining elements of both professional and liberal education, proposing that: The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws. The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories. 
=== Early developments === Two days after MIT was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT's first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as part of the Morrill Land-Grant Colleges Act to fund institutions "to promote the liberal and practical education of the industrial classes" and was a land-grant school. In 1863 under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed as the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay. MIT was informally called "Boston Tech". The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker. Programs in electrical, chemical, marine, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand. The curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages which diverted the attention of the MIT leadership. During these "Boston Tech" years, MIT faculty and alumni rebuffed Harvard University president (and former MIT faculty) Charles W. Eliot's repeated attempts to merge MIT with Harvard College's Lawrence Scientific School. There would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually, the MIT Corporation approved a formal agreement to merge with Harvard and move to Allston, over the vehement objections of MIT faculty, students, and alumni. 
The merger plan collapsed in 1905 when the Massachusetts Supreme Judicial Court ruled that MIT could not sell its Back Bay land. In 1912, MIT acquired its current campus by purchasing a one-mile (1.6 km) tract of filled lands along the Cambridge side of the Charles River. The neoclassical "New Technology" campus was designed by William W. Bosworth and had been funded largely by anonymous donations from a mysterious "Mr. Smith", starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman, an inventor of film production methods and founder of Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million ($304.2 million in 2024 dollars) in cash and Kodak stock to MIT. In 1916, with the first academic buildings complete, the MIT administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur built for the occasion. Needing funds to match Eastman's gift and cover retreating state support, President Richard MacLaurin launched an industry funding model known as the "Technology Plan" in 1920. As MIT grew under the Tech Plan, it built new postgraduate programs that stressed laboratory work on industry problems, including a new program in electrical engineering. Gerard Swope, MIT's chairman and head of General Electric, believed talented engineers needed scientific research training. In 1930, he recruited Karl Taylor Compton to helm MIT's transformation as a "technological" research university and to build more autonomy from private industry. === Curricular reforms === ... a special type of educational institution which can be defined as a university polarized around science, engineering, and the arts. We might call it a university limited in its objectives but unlimited in the breadth and the thoroughness with which it pursues these objectives. 
In the 1930s, President Karl Taylor Compton and Vice-President (effectively Provost) Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios. The Compton reforms "renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering". Unlike Ivy League schools, MIT catered more to middle-class families, and depended more on tuition than on endowments or grants for its funding. Still, as late as 1949, the Lewis Committee lamented in its report on the state of education at MIT that "the Institute is widely conceived as basically a vocational school", a "partly unjustified" perception the committee sought to change. The report comprehensively reviewed the undergraduate curriculum, recommended offering a broader education, and warned against letting engineering and government-sponsored research detract from the sciences and humanities. The School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management were formed in 1950 to compete with the powerful Schools of Science and Engineering. Previously marginalized faculties in the areas of economics, management, political science, and linguistics emerged into cohesive and assertive departments by attracting respected professors and launching competitive graduate programs. Humanities and social science programs continued to develop under the successive terms of the more humanistically oriented presidents Howard W. Johnson and Jerome Wiesner between 1966 and 1980. === Defense research === MIT's involvement in military research projects surged during World War II. In 1941, Vannevar Bush was appointed head of the federal Office of Scientific Research and Development and directed funding to only a select group of universities, including MIT. 
Engineers and scientists from across the country gathered at MIT's Radiation Laboratory, established in 1940 to assist the British military in developing microwave radar. The work done there significantly affected both the war and subsequent research in the area. Other defense projects included gyroscope-based and other complex control systems for gunsight, bombsight, and inertial navigation under Charles Stark Draper's Instrumentation Laboratory; the development of a digital computer for flight simulations under Project Whirlwind; and high-speed and high-altitude photography under Harold Edgerton. By the end of the war, MIT had become the nation's largest wartime R&D contractor (attracting some criticism of Bush), employing nearly 4,000 in the Radiation Laboratory alone and receiving in excess of $100 million ($1.2 billion in 2015 dollars) before 1946. Work on defense projects continued even after the war ended. Post-war government-sponsored research at MIT included SAGE and guidance systems for ballistic missiles and Project Apollo. These activities affected MIT profoundly. A 1949 report noted the lack of "any great slackening in the pace of life at the Institute" to match the return to peacetime, remembering the "academic tranquility of the prewar years", though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities. The faculty doubled and the graduate student body quintupled during the presidential terms of Karl Taylor Compton (1930–1948), James Rhyne Killian (1948–1957), and chancellor Julius Adams Stratton (1952–1957), whose institution-building strategies shaped the expanding university. By the 1950s, MIT no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons, philanthropic foundations and the federal government. 
In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and MIT's defense research. In this period, MIT's various departments were researching helicopters, smart bombs, and counterinsurgency techniques for the war in Vietnam as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969, during a meeting of faculty members and students seeking to shift the emphasis from military research toward environmental and social problems. MIT ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to "greater strength and unity" after these times of turmoil. However, six MIT students were sentenced to prison terms at this time, and some former student leaders, such as Michael Albert and George Katsiaficas, remain indignant about MIT's role in military research and its suppression of these protests. (Richard Leacock's film, November Actions, records some of these tumultuous events.) In the 1980s, there was more controversy at MIT over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research. More recently, MIT's research for the military has included work on robots, drones, and 'battle suits'. === Recent history === MIT has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture.
Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman's GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the OpenCourseWare project has made course materials for over 2,000 MIT classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005. MIT was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs. Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new "backlot" buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School's eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption. 
In 2001, inspired by the open source and open access movements, MIT launched OpenCourseWare to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, MIT announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its "MITx" program, for a modest fee. The "edX" online platform supporting MITx was initially developed in partnership with Harvard and its analogous "Harvardx" initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009 the MIT faculty adopted an open-access policy to make its scholarship publicly accessible online. MIT has its own police force. Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier's memorial service was attended by more than 10,000 people, in a ceremony hosted by the MIT community with thousands of police officers from the New England region and Canada. On November 25, 2013, MIT announced the creation of the Collier Medal, to be awarded annually to "an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of the MIT community and in all aspects of his life". 
The announcement further stated that "Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness". In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion. The Laser Interferometer Gravitational-Wave Observatory (LIGO) was designed and constructed by a team of scientists from California Institute of Technology, MIT, and industrial contractors, and funded by the National Science Foundation. It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and MIT physicist Rainer Weiss won the Nobel Prize in physics in 2017. Weiss, who is also an MIT graduate, designed the laser interferometric technique, which served as the essential blueprint for the LIGO. In April 2024, MIT students joined other campuses across the United States in protests and setting up encampments against the Gaza war. 
Students likened their actions to the historic protests against the American invasion of Vietnam and MIT's investments in South African apartheid; they called for ending ties to the Israeli Ministry of Defense. == Campus == MIT's 166-acre (67.2 ha) campus in the city of Cambridge spans approximately a mile along the north side of the Charles River basin. The campus is divided roughly in half by Massachusetts Avenue, with most dormitories and student life facilities to the west and most academic buildings to the east. The bridge closest to MIT is the Harvard Bridge, which is known for being marked off in a non-standard unit of length – the smoot. The Kendall/MIT MBTA Red Line station is located on the northeastern edge of the campus, in Kendall Square. The Cambridge neighborhoods surrounding MIT are a mixture of high tech companies occupying both modern office and rehabilitated industrial buildings, as well as socio-economically diverse residential neighborhoods. In early 2016, MIT presented a development plan for Kendall Square to the City of Cambridge, adding high-rise educational, retail, residential, startup incubator, and office space around the MBTA station. The MIT Museum has moved immediately adjacent to a Kendall Square subway entrance, joining the List Visual Arts Center on the eastern end of the campus. Each building at MIT has a number (possibly preceded by a W, N, E, or NW) designation, and most have a name as well. Typically, academic and office buildings are referred to primarily by number while residence halls are referred to by name. The organization of building numbers roughly corresponds to the order in which the buildings were built and their location relative (north, west, and east) to the original center cluster of Maclaurin buildings. Many of the buildings are connected above ground as well as through an extensive network of tunnels, providing protection from the Cambridge weather as well as a venue for roof and tunnel hacking.
The campus' primary energy source is natural gas. In connection with capital campaigns to expand the campus, the Institute has also extensively renovated existing buildings to improve their energy efficiency. MIT has also taken steps to reduce its environmental impact by running alternative fuel campus shuttles, subsidizing public transportation passes, constructing solar power offsets, and building a cogeneration plant to power campus electricity, heating, and cooling requirements. === Research facilities === MIT's on-campus nuclear reactor is one of the most powerful university-based nuclear reactors in the United States. The prominence of the reactor's containment building in a densely populated area has been controversial, but MIT maintains that it is well-secured. MIT Nano, also known as Building 12, is an interdisciplinary facility for nanoscale research. Its 100,000 sq ft (9,300 m2) cleanroom and research space, visible through expansive glass facades, is the largest research facility of its kind in the nation. With a cost of US$400 million, it is also one of the costliest buildings on campus. The facility also provides state-of-the-art nanoimaging capabilities with vibration damped imaging and metrology suites sitting atop a 5×10^6 lb (2,300,000 kg) slab of concrete underground. Other notable campus facilities include a pressurized wind tunnel for aerodynamic testing, a towing tank for testing ship and ocean structure designs, and previously Alcator C-Mod, which was the largest fusion device operated by any university. MIT's campus-wide wireless network was completed in the fall of 2005 and consists of nearly 3,000 access points covering 9.4×10^6 sq ft (870,000 m2) of campus. === Architecture === MIT's School of Architecture, founded in 1865 and now called the School of Architecture and Planning, was the first formal architecture program in the United States, and it has a history of commissioning progressive buildings.
The first buildings constructed on the Cambridge campus, completed in 1916, are sometimes called the "Maclaurin buildings" after Institute president Richard Maclaurin who oversaw their construction. Designed by William Welles Bosworth, these imposing buildings were built of reinforced concrete, a first for a non-industrial – much less university – building in the US. Bosworth's design was influenced by the City Beautiful Movement of the early 1900s and features the Pantheon-esque Great Dome housing the Barker Engineering Library. The Great Dome overlooks Killian Court, where graduation ceremonies are held each year. The friezes of the limestone-clad buildings around Killian Court are engraved with the names of important scientists and philosophers. The spacious Building 7 atrium at 77 Massachusetts Avenue is regarded as the entrance to the Infinite Corridor and the rest of the campus. Alvar Aalto's Baker House (1947), Eero Saarinen's MIT Chapel and Kresge Auditorium (1955), and I.M. Pei's Green, Dreyfus, Landau, and Wiesner buildings represent high forms of post-war modernist architecture. More recent buildings like Frank Gehry's Stata Center (2004), Steven Holl's Simmons Hall (2002), Charles Correa's Building 46 (2005), and Fumihiko Maki's Media Lab Extension (2009) stand out among the Boston area's classical architecture and serve as examples of contemporary campus "starchitecture". These buildings have not always been well received; in 2010, The Princeton Review included MIT in a list of twenty schools whose campuses are "tiny, unsightly, or both". === Housing === Undergraduates are guaranteed four-year housing in one of MIT's 11 undergraduate dormitories. Those living on campus can receive support and mentoring from live-in graduate student tutors, resident advisors, and faculty housemasters. 
Because housing assignments are made based on the preferences of the students themselves, diverse social atmospheres can be sustained in different living groups; for example, according to the Yale Daily News staff's The Insider's Guide to the Colleges, 2010, "The split between East Campus and West Campus is a significant characteristic of MIT. East Campus has gained a reputation as a thriving counterculture." MIT also has 5 dormitories for single graduate students and 2 apartment buildings on campus for married student families. MIT has an active Greek and co-op housing system, including thirty-six fraternities, sororities, and independent living groups (FSILGs). As of 2015, 98% of all undergraduates lived in MIT-affiliated housing; 54% of the men participated in fraternities and 20% of the women were involved in sororities. Most FSILGs are located across the river in Back Bay near where MIT was founded, and there is also a cluster of fraternities on MIT's West Campus that face the Charles River Basin. After the 1997 alcohol-related death of Scott Krueger, a new pledge at the Phi Gamma Delta fraternity, MIT required all freshmen to live in the dormitory system starting in 2002. Because FSILGs had previously housed as many as 300 freshmen off-campus, the new policy could not be implemented until Simmons Hall opened in that year. In 2013–2014, MIT abruptly closed and then demolished undergrad dorm Bexley Hall, citing extensive water damage that made repairs infeasible. In 2017, MIT shut down Senior House after a century of service as an undergrad dorm. That year, MIT administrators released data showing just 60% of Senior House residents had graduated in four years. Campus-wide, the four-year graduation rate is 84% (the cumulative graduation rate is significantly higher). 
=== Off-campus real estate === MIT has substantial commercial real estate holdings in Cambridge on which it pays property taxes, plus an additional voluntary payment in lieu of taxes (PILOT) on academic buildings which are legally tax-exempt. As of 2017, it is the largest taxpayer in the city, contributing approximately 14% of the city's annual revenues. Holdings include Technology Square, parts of Kendall Square, University Park, and many properties in Cambridgeport and Area 4 neighboring the main campus. The land is held for investment purposes and potential long-term expansion. == Organization and administration == MIT is a state-chartered nonprofit corporation governed by a privately appointed board known as the MIT Corporation. The Corporation has 60–80 members at any time, some with fixed terms, some with life appointments, and eight who serve ex officio. The Corporation approves the budget, new programs, degrees, and faculty appointments, and elects a president to manage the university and preside over the Institute's faculty. The current president is Sally Kornbluth, a cell biologist and former provost at Duke University, who became MIT's eighteenth president in January 2023. MIT has five schools (Science, Engineering, Architecture and Planning, Management, and Humanities, Arts, and Social Sciences) and one college (Schwarzman College of Computing), but no schools of law or medicine. Faculty committees have control over many areas of MIT's curriculum, research, student life, and administrative affairs. The chair of each of MIT's academic departments reports to the dean of that department's school, who in turn reports to the Provost under the President. Academic departments are also evaluated by "Visiting Committees", specialized bodies of Corporation members and outside experts who review the performance, activities, and needs of each department.
MIT's endowment, real estate, and other financial assets are managed by the MIT Investment Management Company (MITIMCo), a subsidiary of the MIT Corporation created in 2004. A minor revenue source for much of the Institute's history, the endowment's role in MIT operations has grown due to strong investment returns since the 1990s, making it one of the largest U.S. university endowments. Among its holdings are a majority of shares in the audio equipment manufacturer Bose Corporation, as well as a commercial real estate portfolio in Kendall Square. == Academics == MIT is a large, highly residential, research university with a majority of enrollments in graduate and professional programs. The university has been accredited by the New England Association of Schools and Colleges since 1929. MIT operates on a 4–1–4 academic calendar with the fall semester beginning after Labor Day and ending in mid-December, a 4-week "Independent Activities Period" in the month of January, and the spring semester commencing in early February and ending in late May. MIT students refer to both their majors and classes using numbers or acronyms alone. Departments and their corresponding majors are numbered in the approximate order of their foundation; for example, Civil and Environmental Engineering is Course 1, while Linguistics and Philosophy is Course 24. Students majoring in Electrical Engineering and Computer Science (EECS), the most popular department, collectively identify themselves as "Course 6". MIT students use a combination of the department's course number and the number assigned to the class to identify their subjects; for instance, the introductory calculus-based classical mechanics course is simply "8.01" (pronounced eight-oh-one) at MIT. === Undergraduate program === The four-year, full-time undergraduate program maintains a balance between professional majors and those in the arts and sciences. In 2010, it was dubbed "most selective" by U.S.
News, admitting few transfer students and 4.1% of its applicants in the 2020–2021 admissions cycle. It is need-blind for both domestic and international applicants. MIT offers 44 undergraduate degrees across its five schools. In the 2017–2018 academic year, 1,045 Bachelor of Science degrees (abbreviated "SB") were granted, the only type of undergraduate degree MIT now awards. In the 2011 fall term, among students who had designated a major, the School of Engineering was the most popular division, enrolling 63% of students in its 19 degree programs, followed by the School of Science (29%), School of Humanities, Arts, & Social Sciences (3.7%), Sloan School of Management (3.3%), and School of Architecture and Planning (2%). The largest undergraduate degree programs were in Electrical Engineering and Computer Science (Course 6–2), Computer Science and Engineering (Course 6–3), Mechanical Engineering (Course 2), Physics (Course 8), and Mathematics (Course 18). All undergraduates are required to complete a core curriculum called the General Institute Requirements (GIRs). The Science Requirement, generally completed during freshman year as prerequisites for classes in science and engineering majors, comprises two semesters of physics, two semesters of calculus, one semester of chemistry, and one semester of biology. There is a Laboratory Requirement, usually satisfied by an appropriate class in a course major. The Humanities, Arts, and Social Sciences (HASS) Requirement consists of eight semesters of classes in the humanities, arts, and social sciences, including at least one semester from each division as well as the courses required for a designated concentration in a HASS division. Under the Communication Requirement, two of the HASS classes, plus two of the classes taken in the designated major must be "communication-intensive", including "substantial instruction and practice in oral presentation". 
Finally, all students are required to complete a swimming test; non-varsity athletes must also take four quarters of physical education classes. Most classes rely on a combination of lectures, recitations led by associate professors or graduate students, weekly problem sets ("p-sets"), and periodic quizzes or tests. While the pace and difficulty of MIT coursework have been compared to "drinking from a fire hose", the freshman retention rate at MIT is similar to that of other research universities. The "pass/no-record" grading system relieves some pressure for first-year undergraduates. For each class taken in the fall term, freshman transcripts either report only that the class was passed or have no record of it at all. In the spring term, passing grades (A, B, C) appear on the transcript while non-passing grades are again not recorded. (Grading had previously been "pass/no record" all freshman year, but was amended for the Class of 2006 to prevent students from gaming the system by completing required major classes in their freshman year.) Also, freshmen may choose to join alternative learning communities, such as Experimental Study Group, Concourse, or Terrascope. MIT's curriculum encourages students to apply scientific knowledge in practical domains, an idea summarized in the Institute motto of mens et manus, or "mind and hand." Courses emphasize the use of engineering knowledge in arenas like product design competitions and control design. In 1969, Margaret MacVicar founded the Undergraduate Research Opportunities Program (UROP) to enable undergraduates to collaborate directly with faculty members and researchers. Students join or initiate research projects ("UROPs") for academic credit, pay, or on a volunteer basis through postings on the UROP website or by contacting faculty members directly. A substantial majority of undergraduates participate.
Students often become published, file patent applications, and/or launch start-up companies based upon their experience in UROPs. The program has been widely emulated at other U.S. universities. In 1970, the then-Dean of Institute Relations, Benson R. Snyder, published The Hidden Curriculum, arguing that education at MIT was often slighted in favor of following a set of unwritten expectations and that graduating with good grades was more often the product of figuring out the system rather than a solid education. The successful student, according to Snyder, was the one who was able to discern which of the formal requirements were to be ignored in favor of which unstated norms. For example, organized student groups had compiled "course bibles"—collections of problem-set and examination questions and answers for later students to use as references. This sort of gamesmanship, Snyder argued, hindered development of a creative intellect and contributed to student discontent and unrest. === Graduate program === MIT's graduate program is closely integrated with the undergraduate program, and many courses are taken by qualified students at both levels. MIT offers a comprehensive doctoral program with degrees in the humanities, social sciences, and STEM fields as well as professional degrees, including the Master of Business Administration (MBA). The Institute offers graduate programs leading to academic degrees such as the Master of Science (abbreviated SM at MIT), various Engineer's Degrees, Doctor of Philosophy (PhD), and Doctor of Science (DSc), and interdisciplinary graduate programs such as the MD-PhD (with Harvard Medical School) and a joint program in oceanography with Woods Hole Oceanographic Institution. Admission to graduate programs is decentralized; applicants apply directly to the department or degree program. More than 90% of doctoral students are supported by fellowships, research assistantships (RAs), or teaching assistantships (TAs).
=== Rankings === MIT places among the top five in many overall rankings of universities and in rankings based on students' revealed preferences. For several years, U.S. News & World Report, the QS World University Rankings, and the Academic Ranking of World Universities have ranked MIT's School of Engineering first, as did the 1995 National Research Council report. In the same lists, MIT's strongest showings apart from engineering are in computer science, the natural sciences, business, architecture, economics, linguistics, mathematics, and, to a lesser extent, political science and philosophy. Times Higher Education has recognized MIT as one of the world's "six super brands" in its World Reputation Rankings, along with Berkeley, Cambridge, Harvard, Oxford, and Stanford. In 2019, it was ranked #3 among universities worldwide by SCImago Institutions Rankings. In 2017, the Times Higher Education World University Rankings also rated MIT the #2 university for arts and humanities. MIT was ranked #7 in 2015 and #6 in 2017 in the Nature Index Annual Tables, which measure the largest contributors to papers published in 82 leading journals. Georgetown University researchers ranked MIT #3 in the US for 20-year return on investment. === Collaborations === The university historically pioneered research and training collaborations between academia, industry, and government. In 1946, President Compton, Harvard Business School professor Georges Doriot, and Massachusetts Investors Trust chairman Merrill Griswold founded the American Research and Development Corporation, the first American venture-capital firm. In 1948, Compton established the MIT Industrial Liaison Program.
Throughout the late 1980s and early 1990s, American politicians and business leaders accused MIT and other universities of contributing to a declining economy by transferring taxpayer-funded research and technology to international – especially Japanese – firms that were competing with struggling American businesses. On the other hand, MIT's extensive collaboration with the federal government on research projects has led to several MIT leaders serving as presidential scientific advisers since 1940. MIT established a Washington Office in 1991 to continue effective lobbying for research funding and national science policy. The US Justice Department began an investigation in 1989, and in 1991 filed an antitrust suit against MIT, the eight Ivy League colleges, and eleven other institutions for allegedly engaging in price-fixing during their annual "Overlap Meetings", which were held to prevent bidding wars over promising prospective students from consuming funds for need-based scholarships. While the Ivy League institutions settled, MIT contested the charges, arguing that the practice was not anti-competitive because it ensured the availability of aid for the greatest number of students. MIT ultimately prevailed when the Justice Department dropped the case in 1994. MIT's proximity to Harvard University ("the other school up the river") has led to a substantial number of research collaborations such as the Harvard-MIT Division of Health Sciences and Technology and the Broad Institute. In addition, students at the two schools can cross-register for credits toward their own school's degrees without any additional fees. A cross-registration program between MIT and Wellesley College has also existed since 1969, and in 2002 the Cambridge–MIT Institute launched an undergraduate exchange program between MIT and the University of Cambridge. MIT also has a long-term partnership with Imperial College London, for both student exchanges and research collaboration. 
More modest cross-registration programs have been established with Boston University, Brandeis University, Tufts University, Massachusetts College of Art, and the School of the Museum of Fine Arts, Boston. MIT maintains substantial research and faculty ties with independent research organizations in the Boston area, such as the Charles Stark Draper Laboratory, the Whitehead Institute for Biomedical Research, and the Woods Hole Oceanographic Institution. Ongoing international research and educational collaborations include the Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute), the Singapore-MIT Alliance, MIT-Politecnico di Milano, the MIT-Zaragoza International Logistics Program, and projects in other countries through the MIT International Science and Technology Initiatives (MISTI) program. The mass-market magazine Technology Review is published by MIT through a subsidiary company, as is a special edition that also serves as an alumni magazine. The MIT Press is a major university press, publishing over 200 books and 30 journals annually, emphasizing science and technology as well as arts, architecture, new media, current events, and social issues. The MIT Microphotonics Center and PhotonDelta created the global roadmap for integrated photonics, the Integrated Photonics Systems Roadmap – International (IPSR-I), whose first edition was published in 2020. The roadmap is an amalgamation of two previously independent roadmaps: the IPSR roadmap of the MIT Microphotonics Center and AIM Photonics in the United States, and the WTMF (World Technology Mapping Forum) of PhotonDelta in Europe. In 2022, Open Philanthropy donated $13,277,348 to MIT to study potential risks from AI. === Libraries, collections, and museums === The MIT library system consists of five subject libraries: Barker (Engineering), Dewey (Economics), Hayden (Humanities and Science), Lewis (Music), and Rotch (Arts and Architecture). There are also various specialized libraries and archives.
The libraries contain more than 2.9 million printed volumes, 2.4 million microforms, 49,000 print or electronic journal subscriptions, and 670 reference databases. The past decade has seen a trend of increased focus on digital over print resources in the libraries. Notable collections include the Lewis Music Library with an emphasis on 20th and 21st-century music and electronic music, the List Visual Arts Center's rotating exhibitions of contemporary art, and the Compton Gallery's cross-disciplinary exhibitions. MIT allocates a percentage of the budget for all new construction and renovation to commission and support its extensive public art and outdoor sculpture collection. The MIT Museum was founded in 1971 and collects, preserves, and exhibits artifacts significant to the culture and history of MIT. The museum now engages in significant educational outreach programs for the general public, including the annual Cambridge Science Festival, the first celebration of this kind in the United States. Since 2005, its official mission has been, "to engage the wider community with MIT's science, technology and other areas of scholarship in ways that will best serve the nation and the world in the 21st century". === Research === MIT was elected to the Association of American Universities in 1934 and is classified among "R1: Doctoral Universities – Very high research activity"; research expenditures totaled $952 million in 2017. The federal government was the largest source of sponsored research, with the Department of Health and Human Services granting $255.9 million, Department of Defense $97.5 million, Department of Energy $65.8 million, National Science Foundation $61.4 million, and NASA $27.4 million. MIT employs approximately 1300 researchers in addition to faculty. In 2011, MIT faculty and researchers disclosed 632 inventions, were issued 153 patents, earned $85.4 million in cash income, and received $69.6 million in royalties. 
Through programs like the Deshpande Center, MIT faculty leverage their research and discoveries into multi-million-dollar commercial ventures. In electronics, magnetic-core memory, radar, single-electron transistors, and inertial guidance controls were invented or substantially developed by MIT researchers. Harold Eugene Edgerton was a pioneer in high-speed photography and sonar. Claude E. Shannon developed much of modern information theory and discovered the application of Boolean logic to digital circuit design theory. In the domain of computer science, MIT faculty and researchers made fundamental contributions to cybernetics, artificial intelligence, computer languages, machine learning, robotics, and cryptography. At least nine Turing Award laureates and seven recipients of the Draper Prize in engineering have been or are currently associated with MIT. Current and previous physics faculty have won eight Nobel Prizes, four ICTP Dirac Medals, and three Wolf Prizes, predominantly for their contributions to subatomic and quantum theory. Members of the chemistry department have been awarded three Nobel Prizes and one Wolf Prize for the discovery of novel syntheses and methods. MIT biologists have been awarded six Nobel Prizes for their contributions to genetics, immunology, oncology, and molecular biology. Professor Eric Lander was one of the principal leaders of the Human Genome Project. Positronium atoms, synthetic penicillin, synthetic self-replicating molecules, and the genetic bases for amyotrophic lateral sclerosis (also known as ALS or Lou Gehrig's disease) and Huntington's disease were first discovered at MIT. Jerome Lettvin transformed the study of cognitive science with his paper "What the frog's eye tells the frog's brain". Researchers developed a system to convert MRI scans into 3D-printed physical models.
In the domain of humanities, arts, and social sciences, as of October 2019, MIT economists have been awarded seven Nobel Prizes and nine John Bates Clark Medals. Linguists Noam Chomsky and Morris Halle authored seminal texts on generative grammar and phonology. The MIT Media Lab, founded in 1985 within the School of Architecture and Planning and known for its unconventional research, has been home to influential researchers such as constructivist educator and Logo creator Seymour Papert. Spanning many of the above fields, MacArthur Fellowships (the so-called "Genius Grants") have been awarded to 50 people associated with MIT. Five Pulitzer Prize–winning writers currently work at or have retired from MIT. Four current or former faculty are members of the American Academy of Arts and Letters. Allegations of research misconduct or improprieties have received substantial press coverage. Professor David Baltimore, a Nobel laureate, became embroiled in a misconduct investigation starting in 1986 that led to Congressional hearings in 1991. Since 2000, Professor Ted Postol has accused the MIT administration of attempting to whitewash potential research misconduct at the Lincoln Lab facility involving a ballistic missile defense test, though a final investigation into the matter has not been completed. Associate Professor Luk Van Parijs was dismissed in 2005 following allegations of scientific misconduct and was found guilty of the same by the United States Office of Research Integrity in 2009. In 2019, Clarivate Analytics named 54 members of MIT's faculty to its list of "Highly Cited Researchers". That number places MIT eighth among the world's universities. == Discoveries and innovation == === Natural sciences === Oncogene – Robert Weinberg discovered the genetic basis of human cancer. Reverse transcription – David Baltimore independently isolated two RNA tumor viruses, R-MLV and RSV, in 1970 at MIT.
Thermal death time – developed by Samuel Cate Prescott and William Lyman Underwood between 1895 and 1898 for the canning of food; the work later found applications in medical devices, pharmaceuticals, and cosmetics. Electroweak interaction – Steven Weinberg proposed the electroweak unification theory, which gave rise to the modern formulation of the Standard Model, in 1967 at MIT. === Computer and applied sciences === Akamai Technologies – Daniel Lewin and Tom Leighton developed a faster content delivery network, now one of the world's largest distributed computing platforms, responsible for serving between 15 and 30 percent of all web traffic. Cryptography – MIT researchers Ron Rivest, Adi Shamir, and Leonard Adleman developed one of the first practical public-key cryptosystems, the RSA cryptosystem, and started a company, RSA Security. Digital circuits – Claude Shannon, while a master's degree student at MIT, developed the digital circuit design theory that paved the way for modern computers. Electronic ink – developed by Joseph Jacobson at the MIT Media Lab. Emacs (text editor) – development began during the 1970s at the MIT AI Lab. Flight recorder (black box) – Charles Stark Draper developed the black box at MIT's Instrumentation Laboratory. That lab later made the Apollo Moon landings possible through the Apollo Guidance Computer it designed for NASA. GNU Project – Richard Stallman formally founded the free software movement in 1983 by launching the GNU Project at MIT. Julia (programming language) – development was started in 2009 by Jeff Bezanson, Stefan Karpinski, Viral B. Shah, and Alan Edelman, all at MIT at the time, and continued with the contribution of a dedicated MIT Julia Lab. Lisp (programming language) – John McCarthy invented Lisp at MIT in 1958.
Lithium-ion battery efficiencies – Yet-Ming Chiang and his group at MIT showed a substantial improvement in the performance of lithium batteries by boosting the material's conductivity by doping it with aluminium, niobium, and zirconium. Macsyma – one of the oldest general-purpose computer algebra systems; the GPL-licensed version Maxima remains in wide use. MIT OpenCourseWare – the OpenCourseWare movement started in 1999 when the University of Tübingen in Germany published videos of lectures online for its timms initiative (Tübinger Internet Multimedia Server). The OCW movement only took off, however, with the launch of MIT OpenCourseWare and the Open Learning Initiative at Carnegie Mellon University in October 2002. The movement was soon reinforced by the launch of similar projects at Yale, Utah State University, the University of Michigan, and the University of California, Berkeley. Perdix micro-drone – autonomous drone that uses artificial intelligence to swarm with many other Perdix drones. Project MAC – a DARPA-funded project that produced groundbreaking research in operating systems, artificial intelligence, and the theory of computation. Radar – developed at MIT's Radiation Laboratory during World War II. Sketchpad – invented by Ivan Sutherland at MIT and presented in his PhD thesis; it pioneered human–computer interaction (HCI) and is considered the ancestor of modern computer-aided design (CAD) programs as well as a major breakthrough in the development of computer graphics in general. VisiCalc – the first spreadsheet computer program for personal computers, originally released for the Apple II by VisiCorp. MIT alumni Dan Bricklin and Bob Frankston rented time-sharing at night on an MIT mainframe computer (at a cost of $1 per hour) to develop it.
World Wide Web Consortium (W3C) – founded in 1994 by Tim Berners-Lee, the W3C is the main international standards organization for the World Wide Web. X Window System – pioneering architecture-independent system for graphical user interfaces that has been widely used for Unix and Linux systems. === Companies and entrepreneurship === MIT alumni and faculty have founded numerous companies, some of which are shown below: Analog Devices, 1965, co-founders Ray Stata (SB, SM) and Matthew Lorber (SB) BlackRock, 1988, co-founder Bennett Golub (SB, SM, PhD) Bose Corporation, 1964, founder Amar Bose (SB, PhD) Boston Dynamics, 1992, founder Marc Raibert (PhD) BuzzFeed, 2006, co-founder Jonah Peretti (SM) Dropbox, 2007, founders Drew Houston (SB) and Arash Ferdowsi (drop-out) Hewlett-Packard, 1939, co-founder William R. Hewlett (SM) HuffPost, 2005, co-founder Jonah Peretti (SM) Intel, 1968, co-founder Robert Noyce (PhD) Khan Academy, 2008, founder Salman Khan (SB, SM) Koch Industries, 1940, founder Fred C. Koch (SB), sons William (SB, PhD), David (SB) Qualcomm, 1985, co-founders Irwin M. Jacobs (SM, PhD) and Andrew Viterbi (SB, SM) Raytheon, 1922, co-founder Vannevar Bush (DEng, Professor) Renaissance Technologies, 1982, founder James Simons (SB) Scale AI, 2016, founder Alexandr Wang (drop-out) Texas Instruments, 1930, founder Cecil Howard Green (SB, SM) TSMC, 1987, founder Morris Chang (SB, SM) VMware, 1998, co-founder Diane Greene (SM) == Traditions and student activities == The faculty and student body place a high value on meritocracy and on technical proficiency. MIT has never awarded an honorary degree, nor does it award athletic scholarships, ad eundem degrees, or Latin honors upon graduation. However, MIT has twice awarded honorary professorships: to Winston Churchill in 1949 and Salman Rushdie in 1993. Many upperclass students and alumni wear a large, heavy, distinctive class ring known as the "Brass Rat".
Originally created in 1929, the ring is officially named the "Standard Technology Ring". The undergraduate ring design (a separate graduate student version exists as well) varies slightly from year to year to reflect the unique character of the MIT experience for that class, but always features a three-piece design, with the MIT seal and the class year each appearing on a separate face, flanking a large rectangular bezel bearing an image of a beaver. The initialism IHTFP, representing the informal school motto "I Hate This Fucking Place" and jocularly euphemized as "I Have Truly Found Paradise", "Institute Has The Finest Professors", "Institute of Hacks, TomFoolery and Pranks", "It's Hard to Fondle Penguins", and other variations, has occasionally been featured on the ring given its historical prominence in student culture. === Caltech rivalry === MIT also shares a well-known rivalry with the California Institute of Technology (Caltech), stemming from both institutions' reputations as two of the highest-ranked and most highly recognized science and engineering schools in the world. The rivalry is unusual among college rivalries for its focus on academics and pranks instead of sports, and for the geographic distance between the two schools (their campuses are separated by about 2,580 miles and are on opposite coasts of the United States). In 2005, Caltech students pranked MIT's Campus Preview Weekend by distributing t-shirts that read "MIT" on the front, and "...because not everyone can go to Caltech" on the back. Additionally, the word Massachusetts in the "Massachusetts Institute of Technology" engraving on the exterior of the Lobby 7 dome was covered with a banner so that it read "That Other Institute of Technology". In 2006, MIT retaliated by posing as contractors and stealing the 1.7-ton, 130-year-old Fleming cannon, a Caltech landmark. The cannon was relocated to Cambridge, where it was displayed in front of the Green Building during the 2006 Campus Preview Weekend.
In September 2010, MIT students unsuccessfully tried to place a life-sized model of the TARDIS time machine from the Doctor Who (1963–present) television series on top of Baxter Hall at Caltech. A few months later, Caltech students collaborated to help MIT students place the TARDIS on top of their originally planned destination. The rivalry has continued, most recently in 2014, when a group of Caltech students gave out mugs sporting the MIT logo on the front and the words "The Institute of Technology" on the back. When heated, the mugs turned orange and read, "Caltech, The Hotter Institute of Technology". === Activities === MIT has over 500 recognized student activity groups, including a campus radio station, The Tech student newspaper, an annual entrepreneurship competition, a crime club, and weekly screenings of popular films by the Lecture Series Committee. Less traditional activities include the "world's largest open-shelf collection of science fiction" in English, a model railroad club, and a vibrant folk dance scene. Students, faculty, and staff are involved in over 50 educational outreach and public service programs through the MIT Museum, Edgerton Center, and MIT Public Service Center. Fraternities and sororities provide a base of activities in addition to housing. Approximately 1,000 undergrads, 48% of men and 30% of women, participate in one of several dozen Greek Life men's, women's and co-ed chapters on the campus. The Independent Activities Period is a four-week-long "term" offering hundreds of optional classes, lectures, demonstrations, and other activities throughout the month of January between the Fall and Spring semesters. Some of the most popular recurring IAP activities are Autonomous Robot Design (course 6.270), Robocraft Programming (6.370), and MasLab competitions, the annual "mystery hunt", and Charm School. More than 250 students pursue externships annually at companies in the US and abroad. 
Many MIT students also engage in "hacking", which encompasses both the physical exploration of areas that are generally off-limits (such as rooftops and steam tunnels) and elaborate practical jokes. Examples of high-profile hacks have included the abduction of Caltech's cannon, reconstructing a Wright Flyer atop the Great Dome, and adorning the John Harvard statue with the Master Chief's Mjölnir helmet. === Athletics === MIT sponsors 31 varsity sports and has one of the three broadest NCAA Division III athletic programs. MIT participates in the NCAA's Division III and the New England Women's and Men's Athletic Conference. It also participates in the NCAA's Division I Patriot League for women's crew, and in the Collegiate Water Polo Association (CWPA) for men's water polo. Men's crew competes outside the NCAA in the Eastern Association of Rowing Colleges (EARC). MIT's intercollegiate sports teams, called the Engineers, have won 22 team national championships and 42 individual national championships. MIT is the all-time Division III leader in producing Academic All-Americas (302) and ranks second across all NCAA divisions, behind only the University of Nebraska. MIT athletes have won 13 Elite 90 awards, ranking first among NCAA Division III programs and third among all divisions. In April 2009, budget cuts led to MIT eliminating eight of its 41 sports, including the mixed men's and women's teams in alpine skiing and pistol; separate teams for men and women in ice hockey and gymnastics; and men's programs in golf and wrestling. == People == === Students === MIT enrolled 4,602 undergraduates and 6,972 graduate students in 2018–2019. Undergraduate and graduate students came from all 50 US states as well as from 115 foreign countries. MIT received 33,240 applications for admission to the undergraduate Class of 2025: it admitted 1,365 (4.1 percent).
In 2019, 29,114 applications were received for graduate and advanced degree programs across all departments; 3,670 were admitted (12.6 percent) and 2,312 enrolled (63 percent). In August 2024, after the U.S. Supreme Court overruled race-based affirmative action in Students for Fair Admissions v. Harvard (2023), the university reported that for the class of 2028, Black and Latino student enrollment decreased from previous averages to 5 and 11 percent, respectively, while Asian American enrollment increased to 47 percent. Undergraduate tuition and fees for 2019–2020 were $53,790 for nine months; 59% of students were awarded a need-based MIT scholarship. Graduate tuition and fees for 2019–2020 were also $53,790 for nine months, and summer tuition was $17,800. Financial support for graduate students is provided in large part by individual departments and includes fellowships, traineeships, teaching and research assistantships, and loans. The annual increase in expenses has led to a student tradition (dating back to the 1960s) of tongue-in-cheek "tuition riots". MIT has been nominally co-educational since admitting Ellen Swallow Richards in 1870. Richards also became the first female member of MIT's faculty, specializing in sanitary chemistry. Female students remained a small minority prior to the completion of the first wing of a women's dormitory, McCormick Hall, in 1963. Between 1993 and 2009, the proportion of women rose from 34 percent to 45 percent of undergraduates and from 20 percent to 31 percent of graduate students. As of 2009, women outnumbered men in Biology, Brain & Cognitive Sciences, Architecture, Urban Planning, and Biological Engineering. === Faculty and staff === As of 2025, MIT had 1,090 faculty members. Faculty are responsible for lecturing classes, for advising both graduate and undergraduate students, and for sitting on academic committees, as well as for conducting original research.
Between 1964 and 2009 a total of seventeen faculty and staff members affiliated with MIT won Nobel Prizes (thirteen of them in the latter 25 years). As of October 2020, 37 MIT faculty members, past or present, have won Nobel Prizes, the majority in Economics or Physics. As of October 2013, current faculty and teaching staff included 67 Guggenheim Fellows, 6 Fulbright Scholars, and 22 MacArthur Fellows. Faculty members who have made extraordinary contributions to their research field as well as the MIT community are granted appointments as Institute Professors for the remainder of their tenures. Susan Hockfield, a molecular neurobiologist, served as MIT's president from 2004 to 2012. She was the first woman to hold the post. MIT faculty members have often been recruited to lead other colleges and universities. Founding faculty-member Charles W. Eliot became president of Harvard University in 1869, a post he would hold for 40 years, during which he wielded considerable influence both on American higher education and on secondary education. MIT alumnus and faculty member George Ellery Hale played a central role in the development of the California Institute of Technology (Caltech), and other faculty members have been key founders of Franklin W. Olin College of Engineering in nearby Needham, Massachusetts. As of 2014 former provost Robert A. Brown served as president of Boston University; former provost Mark Wrighton is chancellor of Washington University in St. Louis; former associate provost Alice Gast is president of Lehigh University; and former professor Suh Nam-pyo is president of KAIST. Former dean of the School of Science Robert J. 
Birgeneau was the chancellor of the University of California, Berkeley (2004–2013); former professor John Maeda was president of the Rhode Island School of Design (RISD, 2008–2013); former professor David Baltimore was president of Caltech (1997–2006); and MIT alumnus and former assistant professor Hans Mark served as chancellor of the University of Texas system (1984–1992). In addition, faculty members have been recruited to lead governmental agencies; for example, former professor Marcia McNutt is president of the National Academy of Sciences, urban studies professor Xavier de Souza Briggs served as the associate director of the White House Office of Management and Budget, and biology professor Eric Lander was a co-chair of the President's Council of Advisors on Science and Technology. In 2013, faculty member Ernest Moniz was nominated by President Obama and later confirmed as United States Secretary of Energy. Former professor Hans Mark served as Secretary of the Air Force from 1979 to 1981. Alumna and Institute Professor Sheila Widnall served as Secretary of the Air Force between 1993 and 1997, making her the first female Secretary of the Air Force and the first woman to lead an entire branch of the US military in the Department of Defense. A 1999 report, met by promises of change from President Charles Vest, found that senior female faculty in the School of Science were often marginalized and received less "salary, space, awards, resources, and response to outside offers" for equal professional accomplishments. As of 2017, MIT was the second-largest employer in the city of Cambridge. Based on feedback from employees, MIT was ranked No. 7 as a place to work among US colleges and universities as of March 2013. Surveys cited a "smart", "creative", "friendly" environment, noting that the work-life balance tilts towards a "strong work ethic" but complaining about "low pay" compared to an industry position.
=== Notable alumni === Many of MIT's over 120,000 alumni have achieved considerable success in scientific research, public service, education, and business. As of October 2020, 41 MIT alumni have won Nobel Prizes, 48 have been selected as Rhodes Scholars, 61 have been selected as Marshall Scholars, and 3 have been selected as Mitchell Scholars. Alumni in United States politics and public service include former Chairman of the Federal Reserve Ben Bernanke, former MA-1 Representative John Olver, former CA-13 Representative Pete Stark, KY-4 Representative Thomas Massie, California Senator Alex Padilla, former National Economic Council chairman Lawrence H. Summers, and former Council of Economic Advisers chair Christina Romer. MIT alumni in international politics include Foreign Affairs Minister of Iran Ali Akbar Salehi, Education Minister of Nepal Sumana Shrestha, President of Colombia Virgilio Barco Vargas, former President of the European Central Bank Mario Draghi, former Governor of the Reserve Bank of India Raghuram Rajan, former British Foreign Minister David Miliband, former Greek Prime Minister Lucas Papademos, former UN Secretary General Kofi Annan, former Iraqi Deputy Prime Minister Ahmed Chalabi, former Minister of Education and Culture of the Republic of Indonesia Yahya Muhaimin, and former Jordanian Minister of Education, Higher Education and Scientific Research and former Jordanian Minister of Energy and Mineral Resources Khaled Toukan. Alumni in sports have included Olympic fencing champion Johan Harmenberg. MIT alumni founded or co-founded many notable companies, such as Intel, McDonnell Douglas, Texas Instruments, 3Com, Qualcomm, Bose, Raytheon, Apotex, Koch Industries, Rockwell International, Genentech, Dropbox, and Campbell Soup.
According to the British newspaper The Guardian, "a survey of living MIT alumni found that they have formed 25,800 companies, employing more than three million people including about a quarter of the workforce of Silicon Valley. Those firms collectively generate global revenues of about $1.9 trillion (£1.2 trillion) a year". If the companies founded by MIT alumni were a country, they would have the 11th-highest GDP of any country in the world. MIT alumni have founded or co-founded many successful nonprofit organizations, such as Khan Academy. MIT alumni have led prominent institutions of higher education, including the University of California system, Harvard University, the New York Institute of Technology, Johns Hopkins University, Carnegie Mellon University, Tufts University, Rochester Institute of Technology, Rhode Island School of Design (RISD), UC Berkeley College of Environmental Design, the New Jersey Institute of Technology, Northeastern University, Tel Aviv University, Lahore University of Management Sciences, Rensselaer Polytechnic Institute, Tecnológico de Monterrey, Purdue University, Virginia Polytechnic Institute, Korea Advanced Institute of Science and Technology, and Quaid-e-Azam University. Berklee College of Music, the largest independent college of contemporary music in the world, was founded and led by MIT alumnus Lawrence Berk for more than three decades. More than one-third of the United States' crewed spaceflights have included MIT-educated astronauts, a contribution exceeding that of any university except the United States service academies. Of the 12 people who have set foot on the Moon as of 2019, four graduated from MIT (among them Apollo 11 Lunar Module Pilot Buzz Aldrin). Alumnus and former faculty member Qian Xuesen led the Chinese nuclear-weapons program and became instrumental in the Chinese rocket program. MIT alumni played a significant role in the creation of the Atomic Energy Commission and the Department of Energy.
Carroll Wilson (a student and professor at MIT) served as the first General Manager of the Atomic Energy Commission. John Deutch served as Under Secretary of Energy for President Carter; William F. Martin served as Deputy Secretary of Energy for Ronald Reagan; and Ernest Moniz served as Secretary of Energy for President Obama. Indeed, modern post-World War II history has been influenced by MIT and its alumni in the fields of nuclear energy and high-energy physics. Noted alumni in non-scientific fields include children's book author Hugh Lofting, sculptor Daniel Chester French, guitarist Tom Scholz of the band Boston, the British BBC and ITN correspondent and political advisor David Walter, The New York Times columnist and Nobel Prize-winning economist Paul Krugman, The Bell Curve author Charles Murray, United States Supreme Court building architect Cass Gilbert, and Pritzker Prize-winning architects I. M. Pei and Gordon Bunshaft. == See also == Massachusetts Institute of Technology School of Engineering Whitehead Institute Eli and Edythe L. Broad Institute of MIT and Harvard Koch Institute for Integrative Cancer Research The Coop, campus bookstore == Notes == == References == === Sources === Also see the bibliography maintained by MIT's Institute Archives & Special Collections and Written Works in MIT in popular culture. == External links == Official website Athletics website Texts on Wikisource: "Massachusetts Institute of Technology". Collier's New Encyclopedia. 1921. "Massachusetts Institute of Technology, The". Encyclopedia Americana. 1920. "Massachusetts Institute of Technology". The New Student's Reference Work. 1914. "Massachusetts Institute of Technology". New International Encyclopedia. 1905. Swain, George Fillmore (July 1900). "Technical Education at the Massachusetts Institute of Technology". Popular Science Monthly. Vol. 57.
https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology
The Georgia Institute of Technology (commonly referred to as Georgia Tech, GT, and simply Tech or the Institute) is a public research university and institute of technology in Atlanta, Georgia, United States. Established in 1885, it has the largest student enrollment of the University System of Georgia institutions and satellite campuses in Savannah, Georgia and Metz, France. The school was founded as the Georgia School of Technology as part of Reconstruction efforts to build an industrial economy in the Southern United States after the Civil War. Initially, it offered only a degree in mechanical engineering. By 1901, its curriculum had expanded to include electrical, civil, and chemical engineering. In 1948, the school changed its name to reflect its evolution from a trade school to a technical institute and research university. Georgia Tech is organized into seven colleges with about 31 departments and academic units. It emphasizes the academic fields of science and technology. Georgia Tech's $5.3 billion economic impact for fiscal year 2023 led all public institutions in the state. Georgia Tech fields eight men's and seven women's sports teams; these compete in NCAA Division I athletics and have won five national championships. The university is a member of the Atlantic Coast Conference. == History == === Establishment === The idea of a technology school in Georgia was introduced in 1865 during the Reconstruction period. Two former Confederate officers, Major John Fletcher Hanson (an industrialist) and Nathaniel Edwin Harris (a politician and eventually Governor of Georgia), who had become prominent citizens in the town of Macon, Georgia, after the Civil War, believed that the South needed to improve its technology to compete with the North's industrialization. Because the American South of that era was mainly populated by agricultural workers and few technical developments were occurring, they proposed to establish a technology school. 
In 1882, the Georgia State Legislature authorized a committee, led by Harris, to visit the Northeast to learn how technology schools worked. They were impressed by the polytechnic educational models developed at the Massachusetts Institute of Technology and the Worcester County Free Institute of Industrial Science (now Worcester Polytechnic Institute). The committee recommended adapting the Worcester model, which stressed a combination of "theory and practice", the "practice" component including student employment and production of consumer items to generate revenue for the school. On October 13, 1885, Georgia Governor Henry D. McDaniel signed the bill to create and fund the new school. In 1887, Atlanta pioneer Richard Peters donated to the state 4 acres (1.6 ha) of the site of a failed garden suburb called Peters Park. The site was bounded on the south by North Avenue, and on the west by Cherry Street. He then sold five adjoining acres of land to the state for US$10,000 (equivalent to $350,000 in 2024). This land was near Atlanta's northern city limits at the time of its founding, although the city has since expanded several miles beyond it. A historical marker on the large hill in Central Campus says that the site occupied by the school's first buildings once held fortifications to protect Atlanta during the Atlanta Campaign of the American Civil War. The surrender of the city took place in 1864 on what is today the southwestern boundary of the Georgia Tech campus. === Early years === The Georgia School of Technology opened in the fall of 1888 with two buildings. One building (now Tech Tower, an administrative headquarters) had classrooms to teach students; the second building featured a shop with a foundry, forge, boiler room, and engine room. It was designed for students to work and produce goods to sell and fund the school.
The two buildings were equal in size to show the importance of teaching both the mind and the hands, though, at the time, there was some disagreement as to whether the machine shop should have been used to turn a profit. On October 20, 1905, U.S. President Theodore Roosevelt visited Georgia Tech. On the steps of Tech Tower, Roosevelt delivered a speech about the importance of technological education. He then shook hands with every student. Georgia Tech's Evening School of Commerce began holding classes in 1912. The evening school admitted its first female student in 1917, although the state legislature did not officially authorize attendance by women until 1920. Annie T. Wise became the first female graduate in 1919 and was Georgia Tech's first female faculty member the following year. In 1931, the Board of Regents transferred control of the Evening School of Commerce to the University of Georgia (UGA) and moved the civil and electrical engineering courses at UGA to Tech. Tech replaced the commerce school with what later became the College of Business. The commerce school would later split from UGA and eventually become Georgia State University. In 1934, the Engineering Experiment Station (later known as the Georgia Tech Research Institute) was founded by W. Harry Vaughan with an initial budget of $5,000 (equivalent to $117,525 in 2024) and 13 part-time faculty. In the mid-to-late 1940s, President Blake Van Leer focused on making Georgia Tech the "MIT of the South", lobbying government and business for funds for new facilities. The Research Building was expanded, and a $300,000 (equivalent to $4,000,000 in 2024) Westinghouse A-C network calculator was given to Georgia Tech by Georgia Power in 1947. A new $2,000,000 library was completed, new Textile and Architecture buildings were finished, and what was then the most modern gymnasium in the world was built.
=== Modern history === Founded as the Georgia School of Technology, Georgia Tech assumed its present name in 1948 to reflect a growing focus on advanced technological and scientific research. Under President Blake Ragsdale Van Leer's tenure, Tech went through significant changes: it expanded its campus with new facilities, added new engineering courses, and became the largest engineering institute in the South and the third largest in the US. Van Leer also admitted the first female students to regular classes in 1952 and began steps toward integration. He stood up to Georgia governor Marvin Griffin's demand to bar Bobby Grier from participating in the 1956 Sugar Bowl game between Georgia Tech and Grier's University of Pittsburgh. After Van Leer's death, his wife Ella Lillian Wall Van Leer bought a house on campus and opened it to female students to support their success. She also set up the first sorority on campus along with a Society of Women Engineers chapter. In 1968, women could enroll in all programs at Tech; Industrial Management was the last program to open to women. The first women's dorm, Fulmer Hall, opened in 1969. Rena Faye Smith, appointed in 1969 by Dr. Ray Young as a research assistant in X-ray diffraction in the School of Physics, became the school's first female faculty member (research). She went on to earn a Ph.D. at Georgia State University and taught physics and instructional technology at Black Hills State University from 1997 to 2005 as Rena Faye Norby. She served as a Fulbright Scholar in Russia from 2004 to 2005. Women constituted 30.3% of the undergraduates and 25.3% of the graduate students enrolled in Spring 2009. In 1959, a meeting of 2,741 students voted by an overwhelming majority to endorse integration of qualified applicants, regardless of race. In September 1961, nine months after the University of Georgia's violent integration, Ralph A. Long Jr., Lawrence Williams, and Ford C.
Greene enrolled at Tech, becoming its first African American students. Ronald Yancey enrolled the next year and in 1965 became the university's first African American graduate. Georgia Tech became the first university in the Deep South to desegregate without a court order. In the 1967–68 academic year, 28 students out of 7,526 were black. In 1968, William Peace became the first black instructor and Marle Carter became the first black member of the homecoming court. In 1964, Dr. Calvin Huey became the first black player to play at Grant Field when he took the field for Navy. The first black person to play for Georgia Tech was Eddie McAshan in 1970. There was likewise little student reaction at Georgia Tech to the Vietnam War and United States involvement in the Cambodian Civil War. The student council defeated a resolution supporting the Vietnam Moratorium, and the extent of the Tech community's response to the Kent State shooting was limited to a student-organized memorial service, though the institute was ordered closed for two days, along with all other University System of Georgia schools. In 1988, President John Patrick Crecine pushed through a restructuring of the university. The institute at that point had three colleges: the College of Engineering, the College of Management, and the catch-all COSALS, the College of Sciences and Liberal Arts. Crecine reorganized the latter two into the College of Computing, the College of Sciences, and the Ivan Allen College of Management, Policy, and International Affairs. Crecine never asked for input regarding the changes and, consequently, many faculty members disliked his top-down management style; despite this, the changes passed by a slim margin. Crecine was also instrumental in securing the 1996 Summer Olympics for Atlanta. A large amount of construction occurred, creating most of what is now considered "West Campus" for Tech to serve as the Olympic Village, and significantly gentrifying Midtown Atlanta.
The Undergraduate Living Center, Fourth Street Apartments, Sixth Street Apartments, Eighth Street Apartments, Hemphill Apartments (now named Crecine Apartments), and Center Street Apartments housed athletes and journalists. The Georgia Tech Aquatic Center was built for swimming events, and the Alexander Memorial Coliseum was renovated. The institute also erected the Kessler Campanile and fountain to serve as a landmark and symbol of the university on television broadcasts. In 1994, G. Wayne Clough became the first Georgia Tech alumnus to serve as the president of the institution; he was in office during the 1996 Summer Olympics. In 1998, he separated the Ivan Allen College of Management, Policy, and International Affairs into the Ivan Allen College of Liberal Arts and returned the College of Management to "College" status (Crecine, the previous president, had demoted Management from "College" to "School" status as part of a controversial 1990 reorganization plan). His tenure focused on a dramatic expansion of the institute, a revamped Undergraduate Research Opportunities Program, and the creation of an International Plan. On March 15, 2008, he was appointed secretary of the Smithsonian Institution, effective July 1, 2008. Dr. Gary Schuster, Tech's provost and executive vice president for Academic Affairs, was named interim president, effective July 1, 2008. On April 1, 2009, G. P. "Bud" Peterson, previously the chancellor of the University of Colorado at Boulder, became the 11th president of Georgia Tech. On April 20, 2010, Georgia Tech was invited to join the Association of American Universities, the first new member institution in nine years. In 2014, Georgia Tech launched the first "massive open online degree" in computer science by partnering with Udacity and AT&T; a complete degree through that program costs students $7,000.
It eventually expanded this program with its online master's in analytics in January 2017, as well as providing the option for advanced credits with a MicroMasters in collaboration with edX. On January 7, 2019, President G. P. "Bud" Peterson announced his intention to retire. Angel Cabrera, former president of George Mason University and a Georgia Tech alumnus, was named his successor on June 13, 2019. Cabrera took office on September 3, 2019. == Campus sections == The Georgia Tech campus is located in Midtown, an area slightly north of downtown Atlanta. Although a number of skyscrapers—most visibly the headquarters of The Coca-Cola Company and Bank of America—are visible from all points on campus, the campus itself has few buildings over four stories and a great deal of greenery. This gives it a distinctly suburban atmosphere quite different from other Atlanta campuses such as that of Georgia State University. The campus is served by two stations on the MARTA rail system, Midtown and North Avenue. The campus is organized into four main parts: West Campus, East Campus, Central Campus, and Technology Square. West Campus and East Campus are both occupied primarily by student living complexes, while Central Campus is reserved primarily for teaching and research buildings. === West Campus === West Campus is occupied primarily by apartments and coed undergraduate dormitories. Apartments include Crecine, Center Street, 6th Street, Maulding, Graduate Living Center (GLC), and Eighth Street Apartments, while dorms include Freeman, Montag, Fitten, Folk, Caldwell, Armstrong, Hefner, Fulmer, and Woodruff Suites. The Campus Recreation Center (formerly the Student Athletic Complex); a volleyball court; a large, low natural green area known as the Burger Bowl; and a flat artificial green area known as the CRC (formerly SAC) Fields are all located on the western side of the campus.
In 2017, West Village, a multipurpose facility featuring dining options, meeting space, School of Music classrooms, and offices, opened on West Campus. The Robert C. Williams Museum of Papermaking is located on West Campus. West Campus was formerly home to Under the Couch, which relocated to the Student Center in the fall of 2010. Also within walking distance of West Campus are several late-night eateries. West Campus was home to a convenience store, West Side Market, which closed following the opening of West Village in the fall of 2017. Due to limited space, all auto travel proceeds via a network of one-way streets which connects West Campus to Ferst Drive, the main road of the campus. Woodruff Dining Hall, or "Woody's", was the West Campus dining hall before closing after the opening of West Village. It connected the Woodruff North and Woodruff South undergraduate dorms. === East Campus === East Campus houses all of the fraternities and sororities as well as most of the undergraduate freshman dormitories. East Campus abuts the Downtown Connector, granting residents quick access to Midtown and its businesses (for example, The Varsity) via a number of bridges over the highway. Georgia Tech football's home, Bobby Dodd Stadium, is located on East Campus, as is Georgia Tech basketball's home, McCamish Pavilion (formerly Alexander Memorial Coliseum). Brittain Dining Hall and North Ave Dining Hall are the main dining halls for East Campus. Brittain Dining Hall is modeled after a medieval church, complete with carved columns and stained glass windows showing symbolic figures. The main road leading from East Campus to Central Campus is a steep ascending incline commonly known as "Freshman Hill" (in reference to the large number of freshman dorms near its foot). On March 8, 2007, the former Georgia State University Village apartments were transferred to Georgia Tech.
Renamed North Avenue Apartments by the institute, they began housing students in the fall semester of 2007. === Central Campus === Central Campus is home to the majority of the academic, research, and administrative buildings. The Central Campus includes, among others: the Howey Physics Building; the Boggs Chemistry Building; the College of Computing Building; the Klaus Advanced Computing Building; the College of Design Building; the Skiles Classroom Building, which houses the School of Mathematics and the School of Literature, Media and Culture; the D. M. Smith Building, which houses the School of Public Policy; the Krone Engineered Biosystems Building; and the Ford Environmental Science & Technology Building. In 2005, the School of Modern Languages returned to the Swann Building, a 100-year-old former dormitory that now houses some of the most technology-equipped classrooms on campus. Tech's administrative buildings, such as Tech Tower and the Bursar's Office, are also located on Central Campus, in the recently renovated Georgia Tech Historic District. The campus library, the John Lewis Student Center (formerly the Fred B. Wenn Building), and the Student Services Building ("Flag Building") are also located on Central Campus. The Student Center provides a variety of recreational and social functions for students, including a computer lab, a game room ("Tech Rec"), the Student Post Office, a music venue, a movie theater, the Food Court, and meeting rooms for various clubs and organizations. Adjacent to the eastern entrance of the Student Center is the Kessler Campanile (referred to by students as "The Shaft"). The former Hightower Textile Engineering building was demolished in 2002 to create Yellow Jacket Park. More greenspace now occupies the area around the Kessler Campanile for a more aesthetically pleasing look, in accordance with the official Campus Master Plan. In August 2011, the G.
Wayne Clough Undergraduate Learning Commons opened next to the library and occupies part of the Yellow Jacket Park area. === Technology Square === Technology Square, also known as "Tech Square", is located across the Downtown Connector and embedded in the city east of East Campus. Opened in August 2003 at a cost of $179 million, the district was built over run-down neighborhoods and has sparked a revitalization of the entire Midtown area. Connected by the recently renovated Fifth Street Bridge, it is a pedestrian-friendly area comprising Georgia Tech facilities and retail locations. One complex contains the College of Business Building, holding classrooms and office space for the Scheller College of Business, as well as the Georgia Tech Hotel and Conference Center and the Georgia Tech Global Learning Center. Another part of Tech Square, the privately owned Centergy One complex, contains the Technology Square Research Building (TSRB), holding faculty and graduate student offices for the College of Computing and the School of Electrical and Computer Engineering, as well as the GVU Center, a multidisciplinary technology research center. The Advanced Technology Development Center (ATDC) is a science and business incubator, run by the Georgia Institute of Technology, and is also headquartered in Technology Square's Centergy One complex. Other Georgia Tech-affiliated buildings in the area host the Center for Quality Growth and Regional Development, the Georgia Tech Enterprise Innovation Institute, the Advanced Technology Development Center, VentureLab, the Georgia Electronics Design Center, and the new CODA mixed-use development. Technology Square also hosts a variety of restaurants and businesses, including the headquarters of notable consulting companies like Accenture, the official Institute bookstore (a Barnes & Noble), and a Georgia Tech-themed Waffle House.
=== Science Square === Science Square is a Georgia Tech mixed-use development dedicated to life sciences and biomedical research. It is located southwest of Georgia Tech's main campus, serving as a link between the institute and Atlanta's rapidly evolving Westside community. Opened in April 2024, the district spans 18 acres and features over 1.8 million square feet of laboratory and office space, 500 residential units, and 25,000 square feet of retail area. Slated to be connected to the main campus by a pedestrian bridge, Science Square is the starting point for a multi-phase project designed to lure industry research partners closer to the campus. One of its central components is Science Square Labs, a 13-story tower designed to accommodate wet and dry laboratories for academia, industry, and startups. === Satellite campuses === In 1999, Georgia Tech began offering local degree programs to engineering students in Southeast Georgia, and in 2003 established a physical campus in Savannah, Georgia. Until 2013, Georgia Tech Savannah offered undergraduate and graduate programs in engineering in conjunction with Georgia Southern University, South Georgia College, Armstrong Atlantic State University, and Savannah State University. The university further collaborated with the National University of Singapore to set up The Logistics Institute–Asia Pacific in Singapore. The Savannah campus now serves as the institute's hub for professional and continuing education and is home to the regional offices of the Georgia Tech Enterprise Innovation Institute, the Savannah Advanced Technology Development Center, and the Georgia Logistics Innovation Center. Georgia Tech also operates a campus in Metz, in northeastern France, known as Georgia Tech Europe (GTE). Opened in October 1990, it offers master's-level courses in Electrical and Computer Engineering, Computer Science, and Mechanical Engineering and Ph.D. coursework in Electrical and Computer Engineering and Mechanical Engineering.
Georgia Tech Europe was the defendant in a lawsuit pertaining to the language used in advertisements, which was a violation of the Toubon Law. Georgia Tech and Tianjin University cooperatively operated a campus in Shenzhen, Guangdong, China — the Georgia Tech Shenzhen Institute, Tianjin University. Launched in 2014, the institute offered undergraduate and graduate programs in electrical and computer engineering, analytics, computer science, environmental engineering, and industrial design. Admission and degree requirements at the institute were the same as those in Atlanta. In September 2024, Georgia Tech announced that it was ending its partnership with Tianjin University following U.S. congressional scrutiny of potential ties to the People's Liberation Army. The College of Design (formerly College of Architecture) maintains a small permanent presence in Paris in affiliation with the École d'architecture de Paris-La Villette, and the College of Computing has a similar program with the Barcelona School of Informatics at the Polytechnic University of Catalonia in Barcelona, Spain. There are additional programs in Athlone, Ireland; Shanghai, China; and Singapore. Georgia Tech was to have set up two campuses for research and graduate education in the cities of Visakhapatnam and Hyderabad in India by 2010, but the plans appeared to have been put on hold as of 2011. === Campus services === Georgia Tech Cable Network, or GTCN, is the college's branded cable source. Most non-original programming is obtained from Dish Network. GTCN currently has 100 standard-definition channels and 23 high-definition channels. The Office of Information Technology, or OIT, manages most of the institute's computing resources (and some related services such as campus telephones). With the exception of a few computer labs maintained by individual colleges, OIT is responsible for most of the computing facilities on campus.
Student, faculty, and staff e-mail accounts are among its services. Georgia Tech's ResNet provides free technical support to all students and guests living in Georgia Tech's on-campus housing (excluding fraternities and sororities). ResNet is responsible for network, telephone, and television service, and most support is provided by part-time student employees. == Organization and administration == Georgia Tech's undergraduate and graduate programs are divided into seven colleges. Georgia Tech has sought to expand its undergraduate and graduate offerings in less technical fields, primarily those under the Ivan Allen College of Liberal Arts, which saw a 20% increase in admissions in 2008. Even in the Ivan Allen College, however, the Institute does not offer Bachelor of Arts or Master of Arts degrees, only Bachelor of Science and Master of Science degrees. Georgia Tech's honors program is highly selective and designed to cater to the most intellectually curious undergraduates from all of its colleges. === Funding === The Georgia Institute of Technology is a public institution that receives funds from the State of Georgia, tuition, fees, research grants, and alumni contributions. In 2014, the institute's revenue amounted to about $1.422 billion. Fifteen percent came from state appropriations and grants, while 20% originated from tuition and fees. Grants and contracts accounted for 55% of all revenue. Expenditures were about $1.36 billion. Forty-eight percent went to research and 19% went to instruction. The Georgia Tech Foundation runs the university's endowment and was incorporated in 1932. It includes several wholly owned subsidiaries that own land on campus or in Midtown and lease the land back to the Georgia Board of Regents and other companies and organizations. Assets totaled $1.882 billion and liabilities totaled $0.478 billion in 2014. As of 2007, Georgia Tech had the most generous alumni donor base, percentage-wise, of any public university ranked in the top 50.
In 2015, the university received a $30 million grant from Atlanta philanthropist Diana Blank to build the "most environmentally-sound building ever constructed in the Southeast." == Academics == === Undergraduate admissions === The 2022 annual ranking of U.S. News & World Report categorizes Georgia Institute of Technology as "most selective." For the Class of 2029 (enrolled fall 2025), Georgia Tech received 66,895 applications from first-time, first-year students and accepted 8,640 (12.74%). For the Class of 2028, nearly 4,000 of those accepted enrolled, a yield rate (the percentage of accepted students who choose to attend the university) of 45.8%. Among the 77% of the incoming freshman class who submitted SAT scores, the middle 50 percent Composite score was 1440; among the 35% of enrolled freshmen in 2023 who submitted ACT scores, the middle 50 percent Composite score started at 32. Georgia Tech's freshman retention rate is 98%, with 92% going on to graduate within six years. In the 2020–2021 academic year, 95 freshman students were National Merit Scholars, the highest number in Georgia. The institute is need-blind for domestic applicants. In 2017, Georgia Tech announced that valedictorians and salutatorians from Georgia's accredited public and private high schools with 50 or more graduates will be the only students offered automatic undergraduate admission via its Georgia Tech Scholars Program. === Rankings === In 2021, U.S. News & World Report named Georgia Tech 3rd worldwide for both its Bachelor's in Analytics and Master of Science in Business Analytics degree programs. Also in the 2021 Times Higher Education subject rankings, Georgia Tech ranked 12th for engineering and 13th for computer science in the world. Tech's undergraduate engineering program was ranked 4th in the United States and its graduate engineering program ranked 4th by U.S. News & World Report for 2025.
Tech's graduate engineering program rankings are aerospace (2nd), biomedical/bioengineering (2nd), chemical (3rd), civil (1st), computer (4th), electrical (4th), environmental (3rd), industrial (1st), materials (3rd), mechanical (2nd), and nuclear (9th). Tech's undergraduate computer science program ranked tied for 7th, and its graduate computer science program also ranked tied for 7th. Other graduate computer science program rankings are artificial intelligence (5th), theory (9th), systems (4th), and programming languages (14th). Also for 2021, U.S. News & World Report ranked Tech 13th in the United States for most innovative university. == Research == === Facilities and classification === Georgia Tech is classified among "R1: Doctoral Universities – Very high research activity". The National Science Foundation ranked Georgia Tech 20th among American universities for research and development expenditures in 2021, with $1.11 billion. Much of this research is funded by large corporations or governmental organizations. Research is organizationally under the Executive Vice President for Research, Stephen E. Cross, who reports directly to the institute president. Nine "interdisciplinary research institutes" report to him, with all research centers, laboratories, and interdisciplinary research activities at Georgia Tech reporting through one of those institutes. The oldest of those research institutes is a nonprofit research organization referred to as the Georgia Tech Research Institute (GTRI). GTRI provides sponsored research in a variety of technical specialties, including radar, electro-optics, and materials engineering. Around 40% (by award value) of Georgia Tech's research, especially government-funded classified work, is conducted through this counterpart organization. GTRI employs around 3,000 people and had $941 million in revenue in fiscal year 2023. The other institutes include: the Parker H.
Petit Institute for Bioengineering & Bioscience, the Georgia Tech Institute for Electronics and Nanotechnology, the Georgia Tech Strategic Energy Institute, the Brook Byers Institute for Sustainable Systems, the Georgia Tech Manufacturing Institute, the Institute of Paper Science and Technology, the Institute for Materials, and the Institute for People and Technology. === Entrepreneurship === Many startup companies are produced through research conducted at Georgia Tech, with the Advanced Technology Development Center and VentureLab ready to assist Georgia Tech's researchers and entrepreneurs in organization and commercialization. The Georgia Tech Research Corporation serves as Georgia Tech's contract and technology licensing agency. Georgia Tech is ranked fourth for startup companies, eighth in patents, and eleventh in technology transfer by the Milken Institute. Georgia Tech and GTRI devote 1,900,000 square feet (180,000 m2) of space to research purposes, including the new $90 million Marcus Nanotechnology Building, one of the largest nanotechnology research facilities in the Southeastern United States, with over 30,000 square feet (2,800 m2) of clean room space. Georgia Tech encourages undergraduates to participate in research alongside graduate students and faculty. The Undergraduate Research Opportunities Program awards scholarships each semester to undergraduates who pursue research activities. These scholarships, called the President's Undergraduate Research Awards, take the form of student salaries or help cover travel expenses when students present their work at professional meetings. Additionally, undergraduates may participate in research and write a thesis to earn a "Research Option" credit on their transcripts. An undergraduate research journal, The Tower, was established in 2007 to provide undergraduates with a venue for disseminating their research and a chance to become familiar with the academic publishing process.
Recent developments include a proposed graphene antenna. Georgia Tech and Emory University have a strong research partnership and jointly administer the Emory-Georgia Tech Predictive Health Institute. They also, along with Peking University, administer the Wallace H. Coulter Department of Biomedical Engineering. In 2015, Georgia Tech and Emory were awarded an $8.3 million grant by the National Institutes of Health (NIH) to establish a National Exposure Assessment Laboratory. In July 2015, Georgia Tech, Emory, and Children's Healthcare of Atlanta were awarded a four-year, $1.8 million grant by the Cystic Fibrosis Foundation in order to expand the Atlanta Cystic Fibrosis Research and Development Program. In 2015, the two universities received a five-year, $2.9 million grant from the National Science Foundation (NSF) to create new bachelor's, master's, and doctoral degree programs and concentrations in healthcare robotics, the first program of its kind in the Southeastern United States. The Georgia Tech Panama Logistics Innovation & Research Center is an initiative between the H. Milton Stewart School of Industrial and Systems Engineering, Panama's National Secretariat of Science and Technology, and the government of Panama that aims to enhance Panama's logistics capabilities and performance through a number of research and education initiatives. The center is creating models of country-level logistics capabilities that will support the decision-making process for future investments and trade opportunities in the growing region and has established dual degree programs between Georgia Tech and the University of Panama and other Panamanian universities. A similar center in Singapore, The Centre for Next Generation Logistics, was established in 2015 and is a collaboration between Georgia Tech and the National University of Singapore.
The center will work closely with government agencies and industry to perform research in logistics and supply chain systems for translation into innovations and commercialization to achieve transformative economic and societal impact. === Industry connections === Georgia Tech maintains close ties to the industrial world. Many of these connections are made through Georgia Tech's cooperative education and internship programs. Georgia Tech's Division of Professional Practice (DoPP), established in 1912 as the Georgia Institute of Technology Cooperative Division, operates the largest and fourth-oldest cooperative education program in the United States, and is accredited by the Accreditation Council for Cooperative Education. The Graduate Cooperative Education Program, established in 1983, is the largest such program in the United States. It allows graduate students pursuing master's degrees or doctorates in any field to spend a maximum of two consecutive semesters working full- or part-time with employers. The Undergraduate Professional Internship Program enables undergraduate students—typically juniors or seniors—to complete a one- or two-semester internship with employers. The Work Abroad Program hosts a variety of cooperative education and internship experiences for upperclassmen and graduate students seeking international employment and cross-cultural experiences. While all four programs are voluntary, they consistently attract high numbers of students—more than 3,000 at last count. Around 1,000 businesses and organizations hire these students, who collectively earn $20 million per year. Georgia Tech's cooperative education and internship programs have been externally recognized for their strengths. The Undergraduate Cooperative Education Program was recognized by U.S. News & World Report as one of the top 10 "Programs that Really Work" for five consecutive years. U.S.
News & World Report additionally ranked Georgia Tech's internship and cooperative education programs among 14 "Academic Programs to Look For" in 2006 and 2007. On June 4, 2007, the University of Cincinnati inducted Georgia Tech into its Cooperative Education Hall of Honor. == Student life == Georgia Tech students benefit from many Institute-sponsored or related events on campus, as well as a wide selection of cultural options in the surrounding district of Midtown Atlanta, "Atlanta's Heart of the Arts". Home Park, a neighborhood that borders the north end of campus, is a popular living area for Tech students and recent graduates. === Student demographics === As of fall 2023, the student body consists of more than 47,000 undergraduate and graduate students, with graduate students making up 60% of the student body. The student body at Georgia Tech is approximately 60% male and 40% female. Around 50–55% of all Georgia Tech students are residents of the state of Georgia, around 20% come from outside the U.S., and 25–30% are residents of other U.S. states or territories. The top states of origin for all non-Georgia U.S. students are Florida, Texas, California, North Carolina, Virginia, New Jersey, and Maryland. Students at Tech represent all 50 states and 114 countries. The top three countries of origin for all international students are China, India, and South Korea. === Housing === Georgia Tech Housing is subject to a clear geographic division of campus into eastern and western areas that contain the vast majority of housing. East Campus is largely populated by freshmen and is served by Brittain Dining Hall and North Avenue Dining Hall. West Campus houses some freshmen, transfer, and returning students (upperclassmen), and is served by West Village. Graduate students typically live off-campus (for example, in Home Park) or on-campus in the Graduate Living Center or 10th and Home. 
Just off campus, students can choose from several restaurants, including a half-dozen in Technology Square alone. The institute's administration has implemented programs in an effort to reduce the levels of stress and anxiety felt by Tech students. The Familiarization and Adaptation to the Surroundings and Environs of Tech (FASET) Orientation and the Freshman Experience (a freshman-only dorm life program to "encourage friendships and a feeling of social involvement") seek to help acclimate new students to their surroundings and foster a greater sense of community. As a result, the institute's retention rates improved. In the fall of 2007, the North Avenue Apartments were opened to Tech students. Originally built for the 1996 Olympics and belonging to Georgia State University, the buildings were given to Georgia Tech and have been used to accommodate Tech's expanding population. Georgia Tech freshman students were the first to inhabit the dormitories in the Winter and Spring 1996 quarters, while much of East Campus was under renovation for the Olympics. The North Avenue Apartments (commonly known as "North Ave") are also noted as the first Georgia Tech buildings to rise above the top of Tech Tower. Open to second-year undergraduate students and above, the buildings are located on East Campus, across North Avenue and near Bobby Dodd Stadium, putting more upperclassmen on East Campus. In 2008, the North Avenue Apartments East and North buildings underwent extensive renovation to the façade: during their construction, the bricks had not all been properly secured and thus posed a safety hazard to pedestrians and vehicles on the Downtown Connector below. Two programs on campus also have houses on East Campus: the International House (commonly referred to as the I-House) and Women, Science, and Technology. The I-House is housed in 4th Street East and Hayes. Women, Science, and Technology is housed in Goldin and Stein.
The I-House hosts an International Coffee Hour every Monday night from 6 to 7 pm while classes are in session, hosting both residents and their guests for discussions. Single graduate students may live in the Graduate Living Center (GLC) or at 10th and Home. 10th and Home is the designated family housing unit of Georgia Tech. Residents are zoned to Atlanta Public Schools: Centennial Place Elementary, Inman Middle School, and Midtown High School. === Student clubs and activities === Several extracurricular activities are available to students, including over 500 student organizations overseen by the Center for Student Engagement. The Student Government Association (SGA), Georgia Tech's student government, has separate executive, legislative, and judicial branches for undergraduate and graduate students. One of the SGA's primary duties is the disbursement of funds to student organizations in need of financial assistance. These funds are derived from the Student Activity Fee that all Georgia Tech students must pay, currently $123 per semester. The ANAK Society, a secret society and honor society established at Georgia Tech in 1908, claims responsibility for founding many of Georgia Tech's earliest traditions and oldest student organizations, including the SGA. === Arts === Georgia Tech's Music Department was established as part of the school's General College in 1963 under the leadership of Ben Logan Sisk. In 1976, the Music Department was assigned to the College of Sciences & Liberal Studies, and in 1991 it was relocated to its current home in the College of Design. In 2009, it was reorganized into the School of Music. The Georgia Tech Glee Club, founded in 1906, is one of the oldest student organizations on campus, and still operates today as part of the School of Music. The Glee Club was among the first collegiate choral groups to release a recording of their songs. 
The group has toured extensively and appeared on The Ed Sullivan Show twice, providing worldwide exposure to "Ramblin' Wreck from Georgia Tech". Today, the modern Glee Club performs dozens of times each semester for many different events, including official Georgia Tech ceremonies, banquets, and sporting events. It consists of 40 to 60 members and requires no audition or previous choral experience. The Georgia Tech Yellow Jacket Marching Band, also in the School of Music, represents Georgia Tech at athletic events and provides Tech students with a musical outlet. It was founded in 1908 by 14 students and Robert "Biddy" Bidez. The marching band consistently fields over 300 members. Members of the marching band travel to every football game. The School of Music is also home to a number of ensembles, such as the 80-to-90-member Symphony Orchestra, Jazz Ensemble, Concert Band, and Percussion and MIDI Ensembles. Students also can opt to form their own small Chamber Ensembles, either for course credit or independently. The contemporary Sonic Generator group, backed by the GVU and in collaboration with the Center for Music Technology, performs a diverse lineup of music featuring new technologies and recent composers. Georgia Tech also has a music scene that is made up of groups that operate independently from the Music Department. These groups include four student-led a cappella groups: Nothin' but Treble, Sympathetic Vibrations, Taal Tadka, and Infinite Harmony. Musician's Network, another student-led group, operates Under the Couch, a live music venue and recording facility that was formerly located beneath the Couch Building on West Campus and is now located in the Student Center. Many music, theatre, dance, and opera performances are held in the Ferst Center for the Arts. DramaTech is the campus' student-run theater. The theater has been entertaining Georgia Tech and the surrounding community since 1947. They are also home to Let's Try This! 
(the campus improv troupe) and VarietyTech (a song and dance troupe). Momocon is an annual anime/gaming/comics convention held on campus in March hosted by Anime O-Tekku, the Georgia Tech anime club. The convention has free admission and was held in the Student Center, Instructional Center, and surrounding outdoor areas until 2010. Beginning in 2011, the convention moved its venue to locations in Technology Square. === Student media === WREK is Georgia Tech's student-run radio station. Broadcasting at 91.1 MHz on the FM band, the station is known as "Wrek Radio". The studio is on the second floor of the Student Center Commons. Broadcasting with 100 kW ERP, WREK is among the nation's most powerful college radio stations. In April 2007, a debate was held regarding the future of the radio station; the prospective purchasers were GPB and NPR. WREK maintained its independence after dismissing the notion with the approval of the Radio Communications Board of Georgia Tech. The Georgia Tech Amateur Radio Club, founded in 1912, is among the oldest collegiate amateur radio clubs in the nation. The club provided emergency radio communications during several disasters, including numerous hurricanes and the 1985 Mexico earthquake. The Technique, also known as the "'Nique", is Tech's official student newspaper. It is distributed weekly during the Fall and Spring semesters (on Fridays), and biweekly during the Summer semester (with certain exceptions). It was established on November 17, 1911. Blueprint is Tech's yearbook, established in 1908. Other student publications include Erato, Tech's literary magazine; The Tower, Tech's undergraduate research journal; T-Book, the student handbook detailing Tech traditions; and (intermittently) The North Avenue Review, Tech's "free-speech magazine". The offices of all student publications are located in the Student Services Building. 
=== Greek life === Greek life at Georgia Tech includes over 50 active chapters of social fraternities and sororities. All of the groups are chapters of national organizations, including members of the North American Interfraternity Conference, National Panhellenic Conference, and National Pan-Hellenic Council. The first fraternity to establish a chapter at Georgia Tech was Alpha Tau Omega in 1888, before the school held its first classes. The first sorority to establish a chapter was Alpha Xi Delta in 1954. In 2019, 28% of undergraduate men and 33% of undergraduate women were active in Tech's Greek system. There are two sororities and three fraternities that make up the Multicultural Panhellenic Council. Nine sororities make up the Collegiate Panhellenic Council (CPC). == Athletics == Georgia Tech teams are variously known as the Yellow Jackets, the Ramblin' Wreck, and the Engineers, but the official nickname is Yellow Jackets. They compete at the National Collegiate Athletic Association (NCAA) Division I level (Football Bowl Subdivision (FBS) sub-level for football) as the Georgia Tech Yellow Jackets, competing primarily in the Atlantic Coast Conference (ACC) for all sports since the 1979–80 season (a year after they officially joined the conference before beginning conference play), and in the Coastal Division in any sports split into a divisional format since the 2005–06 season. The Yellow Jackets previously competed as a charter member of the Metro Conference from 1975–76 to 1977–78, as a charter member of the Southeastern Conference (SEC) from 1932–33 to 1963–64, as a charter member of the Southern Conference (SoCon) from 1921–22 to 1931–32, and as a charter member of the Southern Intercollegiate Athletic Association (SIAA) from 1895–96 to 1920–21. They also competed as an Independent from 1964–65 to 1974–75 and in the 1978–79 season. 
Men's sports include baseball, basketball, cross country, football, golf, swimming & diving, cheerleading, tennis, and track & field, while women's sports include basketball, cross country, softball, swimming and diving, tennis, track & field, cheerleading, and volleyball. The cheerleading squad has, in the past, competed only in the National Cheerleaders & Dance Association (NCA & NDA) College Nationals, where Buzz and the Goldrush dance team have competed as well. However, in the 2022 season, Goldrush competed at the Universal Cheerleaders & Dance Association (UCA & UDA) College Nationals for the first time, and in 2023 the cheer team will compete there for the first time as well. The Institute mascots are Buzz and the Ramblin' Wreck. The institute's traditional football rival is the University of Georgia; the rivalry is considered one of the fiercest in college football. The rivalry is commonly referred to as Clean, Old-Fashioned Hate, which is also the title of a book about the subject. There is also a long-standing rivalry with Clemson. Tech has eighteen varsity sports: football, women's and men's basketball, baseball, softball, volleyball, golf, men's and women's tennis, men's and women's swimming and diving, men's and women's track and field, men's and women's cross country, and coed cheerleading. Four Georgia Tech football teams were selected as national champions in news polls: 1917, 1928, 1952, and 1990. In May 2007, the women's tennis team won the NCAA National Championship with a 4–2 victory over UCLA, the first national title granted by the NCAA to Tech. === Fight songs === Tech's fight song "I'm a Ramblin' Wreck from Georgia Tech" is known worldwide. First published in the 1908 Blue Print, it was adapted from an old drinking song ("Son of a Gambolier") and embellished with trumpet flourishes by Frank Roman. 
Then-Vice President Richard Nixon and Soviet Premier Nikita Khrushchev sang the song together when they met in Moscow in 1958 to reduce the tension between them. As the story goes, Nixon did not know any Russian songs, but Khrushchev knew that one American song as it had been sung on The Ed Sullivan Show. "I'm a Ramblin' Wreck" has had many other notable moments in its history. It is reportedly the first school song to have been played in space. Gregory Peck sang the song while strumming a ukulele in the movie The Man in the Gray Flannel Suit. John Wayne whistled it in The High and the Mighty. Tim Holt's character sings a few bars of it in the movie His Kind of Woman. There are numerous stories of commanding officers in Higgins boats crossing the English Channel on the morning of D-Day leading their men in the song to calm their nerves. It is played after every Georgia Tech score in a football game. Another popular fight song is "Up With the White and Gold", which is usually played by the band preceding "Ramblin' Wreck". First published in 1919, "Up with the White and Gold" was also written by Frank Roman. The song's title refers to Georgia Tech's school colors and its lyrics contain the phrase, "Down with the Red and Black", an explicit reference to the school colors of the University of Georgia and the then-budding Georgia Tech–UGA rivalry. === Club sports === Georgia Tech participates in many non-NCAA sanctioned club sports, including archery, airsoft, boxing, crew, cricket, cycling (winning three consecutive Dirty South Collegiate Cycling Conference mountain bike championships), disc golf, equestrian, fencing, field hockey, gymnastics, ice hockey, kayaking, lacrosse, paintball, roller hockey, soccer, rugby union, sailing, skydiving, swimming, table tennis, taekwondo, triathlon, ultimate, water polo, water ski, and wrestling. 
Many club sports take place at the Georgia Tech Aquatic Center, where swimming, diving, water polo, and the swimming portion of the modern pentathlon competitions for the 1996 Summer Olympics were held. In 2018, the first annual College Club Swimming national championship meet was held at the McAuley Aquatic Center, and the hosts, the Georgia Tech Swim Club, were crowned the first-ever club swimming and diving national champions. == Traditions == Georgia Tech has a number of legends and traditions, some of which have persisted for decades. The most notable of these is the popular but rare tradition of stealing the 'T' from Tech Tower. Tech Tower, Tech's historic primary administrative building, has the letters "TECH" hanging atop it on each of its four sides. There have been several attempts by students to orchestrate complex plans to steal the huge symbolic letter T, and on occasion they have carried this act out successfully. === School colors === Georgia Tech students hold a heated, long-running rivalry with the University of Georgia, known as Clean, Old-Fashioned Hate. The first known hostilities between the two institutions trace back to 1891. The University of Georgia's literary magazine proclaimed UGA's colors to be "old gold, black, and crimson". Charles H. Herty, then President of the University of Georgia, felt that old gold was too similar to yellow and that it "symbolized cowardice". After the 1893 football game against Tech, Herty removed old gold as an official color. Tech had first used old gold for its uniforms, as a proverbial slap in the face to UGA, in its first unofficial football game against Auburn in 1891. Georgia Tech's school colors would henceforth be old gold and white. In April 2018, Georgia Tech went through a comprehensive brand redefinition, solidifying Tech Gold and White as the primary school colors, with Navy Blue serving as the contrasting secondary color. 
The decision to move forward with gold, white and blue is rooted in history, as the first mention of official Georgia Tech class colors came in the Atlanta Constitution in 1891 (white, blue and gold) and the first GT class ring in 1894 also featured gold, white and blue. === Mascots === Costumed in plush to look like a yellow jacket, the official mascot of Georgia Tech is Buzz. Buzz enters the football games at the sound of swarming yellow jackets and proceeds to do a flip on the fifty-yard line GT logo. He then bull rushes the goal post and has been known to knock it out of alignment before football games. Buzz is also notorious for crowd surfing and general light-hearted trickery amongst Tech and rival fans. The Ramblin' Wreck was the first official mascot of Georgia Tech. It is a 1930 Ford Model A Sports Coupe. The Wreck has led the football team onto the field every home game since 1961. The Wreck features a gold and white paint job, two gold flags emblazoned with the words "To Hell With Georgia" and "Give 'Em Hell Tech", and a white soft top. The Wreck is maintained by the Ramblin' Reck Club, a selective student leadership organization on campus. === Spirit organizations === The Ramblin' Reck Club is charged with upholding all school traditions and creating new traditions such as the SWARM. The SWARM is a 900-member spirit group seated along the north end zone or on the court at basketball games. This is the group that typically features body painting, organized chants, and general fanaticism. The marching band that performs at halftime and after big plays during the football season is clad in all white and sits next to SWARM at football games providing a dichotomy of white and gold in the North End Zone. The band is also the primary student organization on campus that upholds the tradition of RAT caps, wherein band freshman wear the traditional yellow cap at all band events. 
=== Fight songs and chants === The band plays the fight songs Ramblin' Wreck from Georgia Tech and Up With the White and Gold after every football score and between every basketball period. At the end of a rendition of either fight song, there is a series of drum beats followed by the cheer "Go Jackets" three times (each time followed by a second cheer of "bust their ass"), then a different drum beat and the cheer "Fight, Win, Drink, Get Naked!" The official cheer only includes "Fight, Win", but most of those present other than the band and cheerleaders will yell the extended version. It is also tradition for the band to play "When You Say Budweiser" after the third quarter of football and during the second-to-last official timeout of every basketball game. During the "Budweiser Song", all of the fans in the stadium alternate bending their knees and standing up straight. Other notable band songs are Michael Jackson's Thriller, played at half-time at the Thrillerdome, and Ludacris' Move Bitch, played after large gains in football. Another popular chant is called the Good Word, and it begins with the question, "What's the Good Word?" The response from all Tech faithful is, "To Hell With Georgia." The same question is asked three times, and then the follow-up is asked, "How 'bout them dogs?" And everyone yells, "Piss on 'em." == Notable people == There are many notable graduates, non-graduate former students, and current students of Georgia Tech. Georgia Tech alumni are known as Yellow Jackets. According to the Georgia Tech Alumni Association: [the status of "alumni"] is open to all graduates of Georgia Tech, all former students of Georgia Tech who regularly matriculated and left Georgia Tech in good standing, active and retired members of the faculty and administration staff, and those who have rendered some special and conspicuous service to Georgia Tech or to [the alumni association]. 
The first class of 95 students entered Georgia Tech in 1888, and the first two graduates received their degrees in 1890. Since then, the institute has greatly expanded, with an enrollment of 14,558 undergraduates and 6,913 postgraduate students as of fall 2013. Jimmy Carter, the 39th President of the United States (1977 to 1981) and Nobel Peace Prize winner, briefly attended Georgia Tech in the early 1940s before matriculating at and graduating from the United States Naval Academy. Juan Carlos Varela, a 1985 industrial engineering graduate, was elected president of Panama in May 2014. Another Georgia Tech graduate and Nobel Prize winner, Kary Mullis, received the Nobel Prize in Chemistry in 1993. A large number of businesspeople (including but not limited to prominent CEOs and directors) began their careers at Georgia Tech. Some of the most successful of these are Charles "Garry" Betty (CEO of EarthLink), David Dorman (CEO of AT&T Corporation), Mike Duke (CEO of Wal-Mart), David C. Garrett Jr. (CEO of Delta Air Lines), and James D. Robinson III (CEO of American Express and later director of The Coca-Cola Company). Tech graduates have been deeply influential in politics, military service, and activism. Atlanta mayor Ivan Allen Jr. and former United States Senator Sam Nunn both made significant changes from within their elected offices. Former Georgia Tech President G. Wayne Clough was also a Tech graduate, the first Tech alumnus to serve in that position. Many notable military commanders are alumni: James A. Winnefeld Jr. served as the ninth Vice Chairman of the Joint Chiefs of Staff, Philip M. Breedlove served as Commander of U.S. Air Forces in Europe, William L. Ball was the 67th Secretary of the Navy, John M. Brown III was Commander of the United States Army Pacific Command, and Leonard Wood was Chief of Staff of the Army and a Medal of Honor recipient for helping capture the Apache chief Geronimo. 
Wood was also Tech's first football coach and (simultaneously) the team captain, and was instrumental in Tech's first-ever football victory in a game against the University of Georgia. Thomas McGuire was the second-highest scoring American ace during World War II and a Medal of Honor recipient. Numerous astronauts and National Aeronautics and Space Administration (NASA) administrators spent time at Tech; most notably, retired Vice Admiral Richard H. Truly was the eighth administrator of NASA, and later served as the president of the Georgia Tech Research Institute. John Young walked on the Moon as commander of Apollo 16, commanded the first Space Shuttle mission, and is the only person to have piloted four different classes of spacecraft. Georgia Tech has its fair share of noteworthy engineers, scientists, and inventors. Herbert Saffir developed the Saffir–Simpson Hurricane Scale, and W. Jason Morgan made significant contributions to the theory of plate tectonics and geodynamics. In computer science, Andy Hunt co-wrote The Pragmatic Programmer and was an original signatory of The Agile Manifesto, Krishna Bharat developed Google News, and D. Richard Hipp developed SQLite. Architect Michael Arad designed the World Trade Center Memorial in New York City. Despite their highly technical backgrounds, Tech graduates are no strangers to the arts or athletic competition. Among them, comedian/actor Jeff Foxworthy of Blue Collar Comedy Tour fame and actor Randolph Scott both called Tech home. Several famous athletes have as well; about 150 Tech students have gone on to the National Football League (NFL), with many others going into the National Basketball Association (NBA) or Major League Baseball (MLB). Well-known American football athletes include all-time greats such as Joe Hamilton, Pat Swilling, Billy Shaw, and Joe Guyon, former Tech head football coaches Pepper Rodgers and Bill Fulcher, and recent students such as Calvin Johnson, Demaryius Thomas, and Tashard Choice. 
Some of Tech's recent entrants into the NBA include Josh Okogie, Chris Bosh, Derrick Favors, Thaddeus Young, Jarrett Jack, and Iman Shumpert. Award-winning baseball stars include Kevin Brown, Mark Teixeira, Nomar Garciaparra, and Jason Varitek. In golf, Tech alumni include the legendary Bobby Jones, who founded The Masters, and David Duval, who was ranked the No. 1 golfer in the world in 1999. == See also == List of colleges and universities in metropolitan Atlanta == Notes == == References == == Further reading == Brittain, Marion L. (1948). The Story of Georgia Tech. Chapel Hill, NC: University of North Carolina Press. Cromartie, Bill (2002) [1977]. Clean Old-fashioned Hate: Georgia Vs. Georgia Tech. Strode Publishers. ISBN 0-932520-64-2. Clough, Wayne G. (2021). The Technological University Reimagined: Georgia Institute of Technology, 1994–2008. Mercer University Press. ISBN 978-0881468120. McMath, Robert C.; Ronald H. Bayor; James E. Brittain; Lawrence Foster; August W. Giebelhaus; Germaine M. Reed (1985). Engineering the New South: Georgia Tech 1885–1985. Athens, GA: University of Georgia Press. ISBN 0-8203-0784-X. Wallace, Robert (1969). Dress Her in WHITE and GOLD: A biography of Georgia Tech. Georgia Tech Foundation. == External links == Official website Georgia Tech Athletics website Georgia Tech Forum
https://en.wikipedia.org/wiki/Georgia_Tech
Engineering & Technology (E+T) is a science, engineering and technology magazine published by Redactive on behalf of IET Services, a wholly owned subsidiary of the Institution of Engineering and Technology (IET), a registered charity in the United Kingdom. The magazine is issued six times per year in print and online, and the E+T website is also updated regularly with news stories. E+T is distributed to the 154,000-plus membership of the IET around the world. The magazine was launched in April 2008 as a result of the merger between the Institution of Electrical Engineers and the Institution of Incorporated Engineers on 31 March 2006. Prior to the merger, both organisations had their own membership magazine: the IEE's monthly IEE Review and the IIE's Engineering Technology. Engineering & Technology is an amalgamation of the two, and was initially published monthly. Alongside this, members also received one of seven other monthly magazines published by the IET relating to a field of the subject of their choice, with the option to purchase any of the other titles. In January 2008, the IET merged these seven titles into E+T to make a nearly fortnightly magazine with a larger pagination, providing all members with one magazine covering all topics. In January 2011, the frequency was reduced to 12 times per year, then to 11 times per year in 2015 and 10 times per year in 2017. E+T journalists have been shortlisted for, and have won, multiple magazine industry awards, including those presented by the British Society of Magazine Editors, Trade And Business Publications International and the Professional Publishers Association. == References == == External links == Official website
https://en.wikipedia.org/wiki/Engineering_&_Technology
GIGA-BYTE Technology Co., Ltd. (commonly referred to as Gigabyte Technology or simply Gigabyte) is a Taiwanese manufacturer and distributor of computer hardware. Gigabyte's principal business is motherboards; it shipped 4.8 million motherboards in the first quarter of 2015, making it the leading motherboard vendor. Gigabyte also manufactures custom graphics cards and laptop computers (including thin and light laptops under its Aero sub-brand). In 2010, Gigabyte was ranked 17th in "Taiwan's Top 20 Global Brands" by the Taiwan External Trade Development Council. The company is publicly held and traded on the Taiwan Stock Exchange under stock ID number TWSE: 2376. == History == Gigabyte Technology was established in 1986 by Pei-Cheng Yeh. One of Gigabyte's key advertised features on its motherboards is its "Ultra Durable" construction, advertised with "all solid capacitors". On 8 August 2006, Gigabyte announced a joint venture with Asus. Gigabyte developed the world's first software-controlled power supply in July 2007, and in April 2010 introduced a method to charge the iPad and iPhone from the computer. Gigabyte launched the world's first Z68 motherboard on 31 May 2011, with an on-board mSATA connection for Intel SSDs and Smart Response Technology. On 2 April 2012, Gigabyte released the world's first motherboard with 60A ICs from International Rectifier. In 2023, researchers at the firmware-focused cybersecurity company Eclypsium reported that 271 models of Gigabyte motherboards were affected by backdoor vulnerabilities: whenever a computer with an affected Gigabyte motherboard restarts, code within the motherboard's firmware initiates an updater program that downloads and executes another piece of software. Gigabyte has said it plans to fix the issues. 
== Products == Gigabyte designs and manufactures motherboards for both AMD and Intel platforms, and also produces graphics cards and notebooks in partnership with AMD and Nvidia, including Nvidia's Turing chipsets and AMD's Vega and Polaris chipsets. Gigabyte's components are used by Alienware, Falcon Northwest, CybertronPC, Origin PC, and exclusively in Technology Direct desktops. Other products of Gigabyte have included desktop computers, tablet computers, ultrabooks, mobile phones, personal digital assistants, server motherboards, server racks, networking equipment, optical drives, computer monitors, mice, keyboards, cooling components, power supplies, and cases. == Subsidiaries == Aorus is a registered sub-brand trademark of Gigabyte belonging to Aorus Pte. Ltd., a company registered in Singapore. Aorus specializes in gaming-related products such as motherboards, graphics cards, notebooks, mice, keyboards, SSDs, headsets, cases, power supplies, and CPU coolers. == See also == List of companies of Taiwan == References == == External links == Official website Official Gigabyte forum Gigabyte - Better Business Bureau page
https://en.wikipedia.org/wiki/Gigabyte_Technology
TCL Technology Group Corp. (originally an abbreviation for Telecom Corporation Limited) is a Chinese partially state-owned electronics company headquartered in Huizhou, Guangdong province. TCL develops, manufactures, and sells consumer electronics such as television sets, mobile phones, air conditioners, washing machines, refrigerators, and small electrical appliances. In 2010, it was the world's 25th-largest consumer electronics producer. On 7 February 2020, TCL Corporation changed its name to TCL Technology. It was the second-largest television manufacturer by market share in 2022 and 2023. TCL comprises five listed companies: TCL Technology, listed on the Shenzhen Stock Exchange (SZSE: 000100), and four listed on the Hong Kong Stock Exchange: TCL Electronics Holdings, Ltd. (SEHK: 1070), TCL Communication Technology Holdings, Ltd. (former code SEHK: 2618; delisted in 2016), China Display Optoelectronics Technology Holdings Ltd. (SEHK: 334), and Tonly Electronics Holdings Ltd. (SEHK: 1249). TCL Technology's business structure is focused on three major sectors: semiconductor display, semiconductor and semiconductor photovoltaic, and industrial finance and capital. == History == The company was founded in 1981 by two close friends, Tomson Li Dongsheng and Luca Situ, under the brand name TTK as an audio cassette manufacturer. It was founded as a state-owned enterprise. In 1985, after being sued by the Japanese cassette manufacturer TDK for intellectual property violation, the company changed its brand name to TCL, taking the initials from Telecom Corporation Limited. In 1999, TCL entered the Vietnamese market. On 19 September 2002, TCL announced the acquisition of all consumer electronics-related assets of the former German company Schneider Rundfunkwerke, including the right to use its trademarks such as Schneider, Dual, Albona, Joyce and Logix. 
In July 2003, TCL chairman Li Dongsheng formally announced a "Dragon and Tiger Plan" to establish two competitive TCL businesses in global markets ("Dragons") and three leading businesses inside China ("Tigers"). In November 2003, TCL and Vantiva (then-named Thomson SA) of France announced the creation of a joint venture to produce televisions and DVD players worldwide. TCL took a 67 percent stake in the joint venture, with Thomson SA holding the rest of the shares, and it was agreed that televisions made by TCL-Thomson would be marketed under the TCL brand in Asia, and the Thomson and RCA brands in Europe and North America. In April 2004, TCL and Alcatel announced the creation of a mobile phone manufacturing joint venture: Alcatel Mobile. TCL injected 55 million euros in the venture in return for a 55 per cent shareholding. In April 2005, TCL closed its manufacturing plant in Türkheim, Bavaria, laying off 120 former Schneider employees. In May 2005, TCL announced that its Hong Kong-listed unit would acquire Alcatel's 45 per cent stake in their mobile-phone joint venture for consideration of HK$63.34 million ($8.1 million) worth of TCL Communication shares. In June 2007, TCL announced that its mobile phone division planned to cease using the Alcatel brand and switch entirely to the TCL brand within five years. In April 2008, Samsung Electronics announced that it would be outsourcing the production of some LCD TV modules to TCL. In July 2008, TCL announced that it planned to raise 1.7 billion yuan ($249 million) via a share placement on the Shenzhen Stock Exchange to fund the construction of two production lines for LCD televisions; one for screens of up to 42 inches, and the other for screens of up to 56 inches. TCL sold a total of 4.18 million LCD TV sets in 2008, more than triple the number during 2007. In January 2009, TCL announced plans to double its LCD TV production capacity to 10 million units by the end of 2009. 
In November 2009, TCL announced that it had formed a joint venture with the Shenzhen government to construct an 8.5-generation thin film transistor-liquid crystal display production facility in the city at a cost of $3.9 billion. In March 2010, TCL Electronics raised HK$525 million through the sale of shares on the Hong Kong Stock Exchange, in order to fund the development of its LCD and LED businesses and to generate working capital. In May 2011, TCL launched the China Smart Multimedia Terminal Technology Association in partnership with Hisense Electric Co. and Sichuan Changhong Electric Co., with the aim of helping to establish industry standards for smart televisions. In January 2013, TCL bought the naming rights for Grauman's Chinese Theatre for $5 million. In 2014, TCL changed the meaning of its identifying initials from "Telephone Communication Limited" to a branding slogan, "The Creative Life", for commercial purposes. In February 2014, TCL spent 280 million RMB to purchase an 11 percent shareholding in Tianjin 712 Communication & Broadcasting Co., Ltd, a Chinese military-owned company which produces communication devices and navigation systems for the Chinese army. In August 2014, TCL partnered with Roku for use as TCL's primary smart TV platform. TCL Corporation and Tonly Electronics were implicated in bribing a government official in Guangdong province in exchange for government subsidies. In October 2014, TCL acquired the Palm brand from HP for use on smartphones. In 2016, TCL reached an agreement with BlackBerry Limited to produce smartphones under the BlackBerry brand, as BlackBerry Mobile. This deal ceased on 31 August 2020. In 2019, due to restructuring, TCL completed the handover of major assets and was split into TCL Technology Group Corporation (TCL Technology) and TCL Industrial Holdings (TCL Industrials). In 2020, TCL Technology acquired Samsung Display's assets in Suzhou, China, including a Gen 8.5 fab and a co-located LCD module plant. 
== Operations == TCL is organized into five business divisions: Multimedia: TV sets; Communications: cell phones and MiFi devices; Home Appliances: AC units and laundry machines; Home Electronics / Consumer Electronics: ODM products such as DVD players; Semiconductor Display and Materials: including China Star Optoelectronics Technology (CSOT), Guangdong Juhua Printing Display Technology Co., Ltd. and Guangzhou ChinaRay Optoelectronic Materials Co., Ltd. In addition, it has four affiliated business areas covering real estate and investment, logistics services, online education services, and finance. In 2021, TCL had 28 research and development (R&D) organizations, 10 joint laboratories, and 22 manufacturing bases. TCL Corporation also has its own research facility, TCL Corporate Research, located in Shenzhen, with the objective of researching cutting-edge technology innovations for other subsidiaries. == Technology == In 2020, TCL introduced a display technology known as TCL NXTPAPER, characterized by its reduction of blue light and anti-glare capabilities, aimed at enhancing visual comfort. == Products == TCL's primary products are TVs, DVD players, air conditioners, mobile phones, home appliances, electric lighting, and digital media. They also sell robot vacuum cleaners. It primarily sells its products under the following brand names: TCL for TVs and air conditioners in Africa, Asia, Australasia, Europe, North America, South America, and Russia; Alcatel Mobile and Thomson for mobile phones (global); RCA-branded electrical products in the United States; some Roku models in the United States. Beginning in 2019, JB Hi-Fi in Australia started selling a new line of budget smart TVs under the brand name FFalcon, which are manufactured by TCL and contain TCL firmware, software and components. 
The company, as of April 2012, is in a joint venture with Swedish furniture giant IKEA to provide the consumer electronics behind the Uppleva integrated HDTV and entertainment system product. === Smartphones === In 2016, it contract-manufactured the BlackBerry DTEK for BlackBerry Limited, under the flagship BlackBerry brand. In December 2016, it became a licensee of the BlackBerry brand, to manufacture, distribute, and design devices for the global market. Until August 2020, it distributed BlackBerry devices under the name BlackBerry Mobile. TCL is also the owner of the Palm brand. The company launched the Palm "ultra-mobile companion" smartphone in 2018. In late 2019, TCL released its first own-branded Android phone, the TCL Plex. TCL announced the 10 series for 2020, consisting of the TCL 10 SE, TCL 10L, TCL 10 Pro, TCL 10 Plus and TCL 10 5G. == TCL TV Plus == In 2015, TCL launched its own streaming television service: GoLive TV, or simply GoLive. It was renamed TCL Channel in 2021 and was relaunched as TCL TV Plus (stylized as TCLtv+) in 2023. == References == == External links == Official website
https://en.wikipedia.org/wiki/TCL_Technology
Cyan Engineering was an American computer engineering company located in Grass Valley, California. It was founded by Steve Mayer and Larry Emmons. The company was purchased in 1973 by Atari, Inc. and developed the Atari Video Computer System console, which was released in 1977 and renamed the Atari 2600 in November 1982. It also carried out some robotics research and development work on behalf of Atari, including the Kermit mobile robot, originally conceived as a stand-alone product intended to bring a beer. The company also programmed the original "portrait style" animatronics for the Chuck E. Cheese's Pizza Time Theatre pizza chain in 1977. == Further reading == Goldberg, Marty; Vendel, Curt (November 26, 2012). Atari Inc.: Business is Fun. Syzygy Press. ISBN 978-0985597405. == References ==
https://en.wikipedia.org/wiki/Cyan_Engineering
Engineering cybernetics, also known as technical cybernetics or cybernetic engineering, is the branch of cybernetics concerned with applications in engineering, in fields such as control engineering and robotics. == History == Qian Xuesen (Hsue-Shen Tsien) defined engineering cybernetics as a theoretical field of "engineering science", the purpose of which is to "study those parts of the broad science of cybernetics which have direct engineering applications in designing controlled or guided systems". Published in 1954, Qian's work "Engineering Cybernetics" describes the mathematical and engineering concepts of cybernetic ideas as understood at the time, breaking them down into granular scientific concepts for application. Qian's work is notable for going beyond model-based theories and arguing for the necessity of a new design principle for types of system whose properties and characteristics are largely unknown. In the 2020s, concerns with the social consequences of cyber-physical systems have led to calls to develop "a new branch of engineering", "drawing on the history of cybernetics and reimagining it for our 21st century challenges". == Popular usage == 1960s: An example of engineering cybernetics is a device designed in the mid-1960s by the General Electric Company. Referred to as a CAM (cybernetic anthropomorphous machine), this machine was designed for use by US Army ground troops. Operated by one man in a "cockpit" at the front end, the machine's "legs" duplicated the leg movements of the harnessed operator. A common use includes the treatment of neurological disorders with the purposeful application of neuromuscular electrical stimulation (NMES), or more precisely the use of functional electrical stimulation (FES). The most commonly used therapy is FES cycling, introduced in the 1980s. Additional research is attempting to apply methods from control systems to improve FES cycling. 
New research is being conducted using computer-controlled FES, where the musculoskeletal system is viewed as a cybernetic system. == In media == 1990s: The Japanese animated (anime) TV series Neon Genesis Evangelion featured giant robots piloted by humans who had a connection to the host machine via biological impulses. == See also == == References == == External links == Information on the program of study "Engineering Cybernetics" at the University of Stuttgart Information on the program of study "Technical Cybernetics" at the University of Magdeburg Department of Engineering Cybernetics at the Norwegian University of Science and Technology
https://en.wikipedia.org/wiki/Engineering_cybernetics
Pharmaceutical engineering is a branch of engineering focused on discovering, formulating, and manufacturing medication, on analytical and quality control processes, and on designing, building, and improving manufacturing sites that produce drugs. It draws on the fields of chemical engineering, biomedical engineering, pharmaceutical sciences, and industrial engineering. == History == Humans have a long history of using derivatives of natural resources, such as plants, as medication. However, it was not until the late 19th century, when the technological advancements of chemical companies were combined with medical research, that scientists began to manipulate and engineer new medications, drug delivery techniques, and methods of mass production. === Synthesizing new medications === One of the first prominent examples of an engineered, synthetic medication was made by Paul Ehrlich. Ehrlich had found that Atoxyl, an arsenic-containing compound which is harmful to humans, was very effective at killing Treponema pallidum, the bacterium which causes syphilis. He hypothesized that if the structure of Atoxyl was altered, a "magic bullet" could potentially be identified which would kill the parasitic bacterium without having any adverse effects on human health. He developed many compounds stemming from the chemical structure of Atoxyl and eventually identified one compound which was the most effective against syphilis while being the least harmful to humans, which became known as Salvarsan. Salvarsan was widely used to treat syphilis within years of its discovery. === Beginning of mass production === In 1928, Alexander Fleming discovered a mold named Penicillium chrysogenum which prevented many types of bacteria from growing. Scientists identified the potential of this mold to provide treatment in humans against bacteria which cause infections. 
During World War II, the United Kingdom and the United States worked together to find a method of mass-producing penicillin, a derivative of the Penicillium mold, which had the potential to save many lives during the war since it could treat infections common in injured soldiers. Although penicillin could be isolated from the mold in a laboratory setting, there was no known way to obtain the amount of medication needed to treat the quantity of people who needed it. Scientists with major chemical companies such as Pfizer were able to develop a deep-fermentation process which could produce a high yield of penicillin. In 1944, Pfizer opened the first penicillin factory, and its products were exported to aid the war efforts overseas. === Controlled drug release === Tablets for oral consumption of medication have been utilized since approximately 1500 B.C.; however, for a long time the only method of drug release was immediate release, meaning all of the medication is released in the body at once. In the 1950s, sustained release technology was developed. Through mechanisms such as osmosis and diffusion, pills were designed that could release the medication over a 12-hour to 24-hour period. Smith, Kline & French developed one of the first major successful sustained release technologies. Their formulation consisted of a collection of small tablets taken at the same time, with varying amounts of wax coating that allowed some tablets to dissolve in the body faster than others. The result was a continuous release of the drug as it travelled through the intestinal tract. Although modern day research focuses on extending the controlled release timescale to the order of months, once-a-day and twice-a-day pills are still the most widely utilized controlled drug release method. 
=== Formation of the ISPE === In 1980, the International Society for Pharmaceutical Engineering was formed to support and guide professionals in the pharmaceutical industry through all parts of the process of bringing new medications to the market. The ISPE writes standards and guidelines for individuals and companies to use and to model their practices after. The ISPE also hosts training sessions and conferences for professionals to attend, learn, and collaborate with others in the field. == See also == Drug discovery Drug development Modified-release dosage Pharmaceutical manufacturing Pharmaceutical industry == References ==
https://en.wikipedia.org/wiki/Pharmaceutical_engineering
Mental Engineering was a public television series in which show creator and host John Forde led a panel discussion featuring critical and humorous analysis of TV commercials. The show originated as a public-access television program on the Saint Paul Neighborhood Network (SPNN) in St. Paul, Minnesota in 1997. == Notable guests == Nationally known comedians and satirists frequently appeared as panelists. Past guests include Al Franken, Lizz Winstead, Sam Simon, Greg Proops, Louis C.K., Paula Poundstone, Merrill Markoe, Naomi Klein, and Jeff Cesario. == History == Forde started Mental Engineering in 1998 on cable access in St. Paul. Mental Engineering is considered by some sources to be the first public-access television show to air nationally. By September 2001, the program was airing on various public TV outlets including WGBH in Boston and WNET in New York City. In 2002, the episode Super Commercials: A Mental Engineering Special, which followed Super Bowl XXXVI, featured guest personalities Aisha Tyler and Lizz Winstead along with other guests from Minnesota. By the end of 2008, 140 episodes had been produced. == Reviews and recognition == The series received positive reviews from several news outlets, including the New York Times, which called it "brilliant." Bill Moyers called it "the most interesting weekly half hour of social commentary and criticism on television," and PBS host Charlie Rose interviewed Forde on the Charlie Rose show. == Funding history == As underwriters fund public broadcasting shows and are recognized in the show credits, ARNAN.com was the show's first carded underwriter when production moved to KTCA. Early funding assistance came from the Lutheran Brotherhood, a Fortune 500 non-profit life insurance company that is now part of Thrivent Financial, and from PBS. Seeking broader funding, the show suspended production for 2003–2004, and returned to public TV in 2005. 
== Similar concepts == Two somewhat similar television shows aired on public TV stations in the 1960s: Public Broadcast Laboratory and Your Dollar's Worth, both sponsored by the Ford Foundation. The Gruen Transfer, a similar program deconstructing advertisements, was launched by the Australian public television network in 2008. The show is currently being marketed by Fox Look under the name "The Big Sell". == See also == Super Bowl Advertising == References == == Sources == Lash, Stephanie (September 4, 2000), "Forde's ad literacy, humor fight against consumer lust.", Current, archived from the original on February 13, 2005 "PBS goes for Mental Engineering on Super Bowl Sunday.", Current, January 28, 2002, archived from the original on February 13, 2005 Reid Day, Catherine (September 2001), "One Cultural Creative's Journey through the Between.", EDGE News Lambert, Brian (November 12, 2000), "Ad Nauseam: With healthy skepticism, St. Paul's Mental Engineering bites the advertising hand that feeds most of TV programming.", Saint Paul Pioneer Press St. Anthony, Neal (October 3, 2005), "Neal St. Anthony: Deconstructing advertisements", Star Tribune == External links == Mental Engineering official website Mental Engineering at IMDb
https://en.wikipedia.org/wiki/Mental_Engineering
Mathematical engineering (or engineering mathematics) is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category of engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs for practical, theoretical, and other considerations outside their specialization, and by the need to deal with constraints to be effective in their work. == Description == Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, and numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century, subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments. The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics. Specialized branches include engineering optimization and engineering statistics. 
Engineering mathematics in tertiary education typically consists of mathematical methods and models courses. == See also == Industrial mathematics Control theory, a mathematical discipline concerned with engineering Further mathematics and additional mathematics, A-level mathematics courses with similar content Mathematical methods in electronics, signal processing and radio engineering == References ==
https://en.wikipedia.org/wiki/Engineering_mathematics
Engineering notation or engineering form (also technical notation) is a version of scientific notation in which the exponent of ten is always selected to be divisible by three to match the common metric prefixes, i.e. scientific notation that aligns with powers of a thousand, for example, 531×10³ instead of 5.31×10⁵ (but on calculator displays written without the ×10 to save space). As an alternative to writing powers of 10, SI prefixes can be used, which also usually provide steps of a factor of a thousand. On most calculators, engineering notation is called "ENG" mode, while scientific notation is denoted "SCI". == History == An early implementation of engineering notation in the form of range selection and number display with SI prefixes was introduced in the computerized HP 5360A frequency counter by Hewlett-Packard in 1969. Based on an idea by Peter D. Dickinson, the first calculator to support engineering notation displaying the power-of-ten exponent values was the HP-25 in 1975. It was implemented as a dedicated display mode in addition to scientific notation. In 1975, Commodore introduced a number of scientific calculators (like the SR4148/SR4148R and SR4190R) providing a variable scientific notation, where pressing the EE↓ and EE↑ keys shifted the exponent and decimal point by ±1 in scientific notation. Between 1976 and 1980 the same exponent-shift facility was also available on some Texas Instruments calculators of the pre-LCD era, such as early SR-40, TI-30 and TI-45 model variants, utilizing (INV)EE↓ instead. This can be seen as a precursor to a feature implemented on many Casio calculators since 1978/1979 (e.g. 
in the FX-501P/FX-502P), where number display in engineering notation is available on demand by a single press of a (INV)ENG button (instead of having to activate a dedicated display mode as on most other calculators), and subsequent button presses shift the exponent and decimal point of the displayed number by ±3 in order to easily let results match a desired prefix. Some graphical calculators (for example the fx-9860G) in the 2000s also support the display of some SI prefixes (f, p, n, μ, m, k, M, G, T, P, E) as suffixes in engineering mode. == Overview == Compared to normalized scientific notation, one disadvantage of using SI prefixes and engineering notation is that significant figures are not always readily apparent when the smallest significant digit or digits are 0. For example, 500 μm and 500×10⁻⁶ m cannot express the uncertainty distinctions between 5×10⁻⁴ m, 5.0×10⁻⁴ m, and 5.00×10⁻⁴ m. This can be solved by changing the range of the coefficient in front of the power from the common 1–1000 to 0.001–1.0. In some cases this may be suitable; in others it may be impractical. In the previous example, 0.5 mm, 0.50 mm, or 0.500 mm would have been used to show uncertainty and significant figures. It is also common to state the precision explicitly, such as "47 kΩ ± 5%". Another example: when the speed of light (exactly 299792458 m/s by the definition of the meter) is expressed as 3.00×10⁸ m/s or 3.00×10⁵ km/s then it is clear that it is between 299500 km/s and 300500 km/s, but when using 300×10⁶ m/s, or 300×10³ km/s, 300000 km/s, or the unusual but short 300 Mm/s, this is not clear. A possibility is using 0.300×10⁹ m/s or 0.300 Gm/s. On the other hand, engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. 
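The conversion into this prefix-aligned form can be sketched in Python (a minimal illustration; the function names and the ASCII "u" standing in for micro are this sketch's own choices, not part of any standard):

```python
import math

# SI prefixes for exponents -24..24 in steps of 3 (exponent -> symbol)
_PREFIXES = {
    -24: "y", -21: "z", -18: "a", -15: "f", -12: "p", -9: "n",
    -6: "u", -3: "m", 0: "", 3: "k", 6: "M", 9: "G",
    12: "T", 15: "P", 18: "E", 21: "Z", 24: "Y",
}

def to_engineering(x, digits=3):
    """Return (mantissa, exponent) with 1 <= |mantissa| < 1000
    and the exponent a multiple of 3."""
    if x == 0:
        return 0.0, 0
    exp = math.floor(math.log10(abs(x)))  # power of ten of x
    exp3 = 3 * math.floor(exp / 3)        # round down to a multiple of 3
    return round(x / 10 ** exp3, digits), exp3

def format_si(x, unit=""):
    """Format x with the matching SI prefix, e.g. 1.25e-8 m -> '12.5 nm'."""
    m, e = to_engineering(x)
    prefix = _PREFIXES.get(e)
    if prefix is None:                    # outside the prefix range
        return f"{m}e{e} {unit}".strip()
    return f"{m} {prefix}{unit}".strip()

print(to_engineering(531000))   # -> (531.0, 3), i.e. 531×10³
print(format_si(1.25e-8, "m"))  # -> 12.5 nm
```

Rounding the exponent down to a multiple of three is what keeps the mantissa in the 1–1000 range and makes it match a prefix directly.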
For example, 12.5×10⁻⁹ m can be read as "twelve-point-five nanometers" (10⁻⁹ being nano) and written as 12.5 nm, while its scientific notation equivalent 1.25×10⁻⁸ m would likely be read out as "one-point-two-five times ten-to-the-negative-eight meters". Engineering notation, like scientific notation generally, can use the E notation, such that 3.0×10⁻⁹ can be written as 3.0E−9 or 3.0e−9. The E (or e) should not be confused with Euler's number e or the symbol for the exa- prefix. == Binary engineering notation == Just as decimal engineering notation can be viewed as a base-1000 scientific notation (10³ = 1000), binary engineering notation relates to a base-1024 scientific notation (2¹⁰ = 1024), where the exponent of two must be divisible by ten. This is closely related to the base-2 floating-point representation (B notation) commonly used in computer arithmetic, and the usage of IEC binary prefixes, e.g. 1B10 for 1×2¹⁰, 1B20 for 1×2²⁰, 1B30 for 1×2³⁰, 1B40 for 1×2⁴⁰ etc. == See also == Significant figures Scientific notation Binary prefix International System of Units (SI) RKM code == Notes == == References == == External links == Engineering Prefix User Defined Function for Excel Perl CPAN module for converting number to engineering notation Java functions for converting between a string and a double type
https://en.wikipedia.org/wiki/Engineering_notation
Packaging engineering, also package engineering, packaging technology and packaging science, is a broad topic ranging from design conceptualization to product placement. All steps along the manufacturing process, and more, must be taken into account in the design of the package for any given product. Package engineering is an interdisciplinary field integrating science, engineering, technology and management to protect and identify products for distribution, storage, sale, and use. It encompasses the process of design, evaluation, and production of packages. It is a system integral to the value chain that impacts product quality, user satisfaction, distribution efficiencies, and safety. Package engineering includes industry-specific aspects of industrial engineering, marketing, materials science, industrial design and logistics. Packaging engineers must interact with research and development, manufacturing, marketing, graphic design, regulatory, purchasing, planning and so on. The package must sell and protect the product, while maintaining an efficient, cost-effective process cycle. Engineers develop packages from a wide variety of rigid and flexible materials. Some materials have scores or creases to allow controlled folding into package shapes (sometimes resembling origami). Packaging involves extrusion, thermoforming, molding and other processing technologies. Packages are often developed for high speed fabrication, filling, processing, and shipment. Packaging engineers use principles of structural analysis and thermal analysis in their evaluations. == Education == Some packaging engineers have backgrounds in other science, engineering, or design disciplines while some have college degrees specializing in this field. Formal packaging programs might be listed as package engineering, packaging science, packaging technology, etc. BE, BS, MS, M.Tech and PhD programs are available. 
Students in a packaging program typically begin with generalized science, business, and engineering classes before progressing into industry-specific topics such as shelf life stability, corrugated box design, cushioning, engineering design, labeling regulations, project management, food safety, robotics, RFID tags, quality management, package testing, packaging machinery, tamper-evident methods, recycling, computer-aided design, etc. == AI == Artificial intelligence is becoming useful in several aspects of packaging development. Packaging engineers are using AI systems in their operations; AI can also design novel packages. For example, the DABUS system designed containers for food and beverages with fractal patterns for gripping and for optical impact. Patent law is developing in this area. A World Patent has been issued with the inventor listed as DABUS but several jurisdictions indicate that a living person must be the inventor. == See also == Packaging Packing problems Queueing theory Engineering economics Manufacturing engineering Cutting stock problem Bin packing problem == Notes == == Bibliography == Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, ISBN 978-0-470-08704-6 Hanlon, Kelsey, and Forcinio, "Handbook of Package Engineering", CRC Press, 1998
https://en.wikipedia.org/wiki/Packaging_engineering
Requirements engineering (RE) is the process of defining, documenting, and maintaining requirements in the engineering design process. It is a common role in systems engineering and software engineering. The first use of the term requirements engineering was probably in 1964 in the conference paper "Maintenance, Maintainability, and System Requirements Engineering", but it did not come into general use until the late 1990s, with the publication of an IEEE Computer Society tutorial in March 1997 and the establishment of a conference series on requirements engineering that has evolved into the International Requirements Engineering Conference. In the waterfall model, requirements engineering is presented as the first phase of the development process. Later development methods, including the Rational Unified Process (RUP) for software, assume that requirements engineering continues through a system's lifetime. Requirements management, which is a sub-function of systems engineering practice, is also indexed in the International Council on Systems Engineering (INCOSE) manuals. == Activities == The activities involved in requirements engineering vary widely, depending on the type of system being developed and the organization's specific practices. These may include: Requirements inception or requirements elicitation – developers and stakeholders meet, and the latter are asked about their needs and wants regarding the software product. Requirements analysis and negotiation – requirements are identified (including new ones if the development is iterative), and conflicts with stakeholders are resolved. Both written and graphical tools (the latter commonly used in the design phase, but some find them helpful at this stage, too) are successfully used as aids. Examples of written analysis tools: use cases and user stories. Examples of graphical tools: Unified Modeling Language (UML) and Lifecycle Modeling Language (LML). 
System modeling – some engineering fields (or specific situations) require the product to be completely designed and modeled before its construction or fabrication starts. Therefore, the design phase must be performed in advance. For instance, blueprints for a building must be elaborated before any contract can be approved and signed. Many fields might derive models of the system with the LML, whereas others might use UML. Note: in many fields, such as software engineering, most modeling activities are classified as design activities and not as requirements engineering activities. Requirements specification – requirements are documented in a formal artifact called a Requirements Specification (RS), which will become official only after validation. An RS can contain both written and graphical (model) information if necessary. Example: software requirements specification (SRS). Requirements validation – checking that the documented requirements and models are consistent and meet the stakeholders' needs. Only if the final draft passes the validation process does the RS become official. Requirements management – managing all the activities related to the requirements since inception, supervising as the system is developed, and even after it is put into use (e.g., changes, extensions, etc.). These are sometimes presented as chronological stages although, in practice, there is considerable interleaving of these activities. Requirements engineering has been shown to clearly contribute to software project successes. == Problems == One limited study in Germany presented possible problems in implementing requirements engineering and asked respondents whether they agreed that they were actual problems. 
The results were not presented as being generalizable but suggested that the principal perceived problems were incomplete requirements, moving targets, and time boxing, with lesser problems being communication flaws, lack of traceability, terminological problems, and unclear responsibilities. == Criticism == Problem structuring, a key aspect of requirements engineering, has been speculated to reduce design performance. Some research suggests that if there are deficiencies in the requirements engineering process, such that requirements do not actually exist, software requirements may nonetheless be created as an illusion, misrepresenting design decisions as requirements. == See also == List of requirements engineering tools Requirements analysis, requirements engineering focused on software engineering Requirements Engineering Specialist Group (RESG) International Requirements Engineering Board (IREB) International Council on Systems Engineering (INCOSE) IEEE 12207 "Systems and software engineering – Software life cycle processes" TOGAF (Chapter 17) Concept of operations (ConOps) Operations management Software requirements Software requirements specification Software Engineering Body of Knowledge (SWEBOK) Design specification Specification (technical standard) Formal specification Software quality Quality management Scope management == References == == External links == Systems and software engineering – Life cycle processes – Requirements engineering. 2011. pp. 1–94. doi:10.1109/IEEESTD.2011.6146379. ISBN 978-0-7381-6591-2. ("This standard replaces IEEE 830–1998, IEEE 1233–1998, IEEE 1362-1998 - https://standards.ieee.org/ieee/29148/5289/") Systems Engineering Body of Knowledge Requirements Engineering Management Handbook by FAA International Requirements Engineering Board (IREB) IBM Rational Resource Library by IEEE Spectrum
https://en.wikipedia.org/wiki/Requirements_engineering
Protein engineering is the process of developing useful or valuable proteins through the design and production of unnatural polypeptides, often by altering amino acid sequences found in nature. It is a young discipline, with much research taking place into the understanding of protein folding and recognition for protein design principles. It has been used to improve the function of many enzymes for industrial catalysis. It is also a product and services market, with an estimated value of $168 billion by 2017. There are two general strategies for protein engineering: rational protein design and directed evolution. These methods are not mutually exclusive; researchers will often apply both. In the future, more detailed knowledge of protein structure and function, and advances in high-throughput screening, may greatly expand the abilities of protein engineering. Eventually, even unnatural amino acids may be included, via newer methods, such as expanded genetic code, that allow encoding novel amino acids in the genetic code. The applications in numerous fields, including medicine and industrial bioprocessing, are vast. == Approaches == === Rational design === In rational protein design, a scientist uses detailed knowledge of the structure and function of a protein to make desired changes. In general, this has the advantage of being inexpensive and technically easy, since site-directed mutagenesis methods are well developed. However, its major drawback is that detailed structural knowledge of a protein is often unavailable, and, even when available, it can be very difficult to predict the effects of various mutations, since structural information most often provides a static picture of a protein structure. However, programs such as Folding@home and Foldit have utilized crowdsourcing techniques in order to gain insight into the folding motifs of proteins. 
Computational protein design algorithms seek to identify novel amino acid sequences that are low in energy when folded to the pre-specified target structure. While the sequence-conformation space that needs to be searched is large, the most challenging requirement for computational protein design is a fast, yet accurate, energy function that can distinguish optimal sequences from similar suboptimal ones. === Multiple sequence alignment === Without structural information about a protein, sequence analysis is often useful in elucidating information about the protein. These techniques involve alignment of target protein sequences with other related protein sequences. This alignment can show which amino acids are conserved between species and are important for the function of the protein. These analyses can help to identify hot spot amino acids that can serve as target sites for mutations. Multiple sequence alignment utilizes databases such as PREFAB, SABMARK, OXBENCH, IRMBASE, and BALIBASE in order to cross-reference target protein sequences with known sequences. Multiple sequence alignment techniques are listed below. ==== Clustal W ==== This method begins by performing pairwise alignment of sequences using k-tuple or Needleman–Wunsch methods. These methods calculate a matrix that depicts the pairwise similarity among the sequence pairs. Similarity scores are then transformed into distance scores that are used to produce a guide tree using the neighbor joining method. This guide tree is then employed to yield a multiple sequence alignment. ==== Clustal Omega ==== This method is capable of aligning up to 190,000 sequences by utilizing the k-tuple method. Next, sequences are clustered using the mBed and k-means methods. A guide tree is then constructed using the UPGMA method, which is used by the HHalign package. This guide tree is used to generate multiple sequence alignments.
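The progressive-alignment pipeline described above (pairwise similarity, conversion to distances, guide tree construction) can be sketched in a few lines of Python. This is an illustrative toy, not any tool's actual algorithm: shared k-mer fraction stands in for a k-tuple score, and greedy average-linkage clustering stands in for neighbor joining; all function names are invented for this sketch.

```python
from itertools import combinations

def kmer_set(seq, k=3):
    # All overlapping k-tuples (k-mers) in the sequence.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def ktuple_similarity(a, b, k=3):
    # Fraction of shared k-mers: a crude stand-in for a k-tuple score.
    sa, sb = kmer_set(a, k), kmer_set(b, k)
    return len(sa & sb) / max(1, min(len(sa), len(sb)))

def guide_tree(seqs, k=3):
    # Similarity scores are transformed into distances (1 - similarity),
    # then clusters are greedily joined, closest pair first. The nested
    # tuples returned play the role of a guide tree.
    ids = range(len(seqs))
    dist = {frozenset((i, j)): 1.0 - ktuple_similarity(seqs[i], seqs[j], k)
            for i, j in combinations(ids, 2)}
    nodes = {i: i for i in ids}
    sizes = {i: 1 for i in ids}
    nxt = len(seqs)
    while len(nodes) > 1:
        pair = min(dist, key=dist.get)      # closest pair of clusters
        i, j = tuple(pair)
        nodes[nxt] = (nodes.pop(i), nodes.pop(j))
        for m in nodes:
            if m == nxt:
                continue
            # Average-linkage update of distances to the merged cluster.
            d = (dist.pop(frozenset((i, m))) * sizes[i] +
                 dist.pop(frozenset((j, m))) * sizes[j]) / (sizes[i] + sizes[j])
            dist[frozenset((nxt, m))] = d
        del dist[pair]
        sizes[nxt] = sizes[i] + sizes[j]
        nxt += 1
    return next(iter(nodes.values()))
```

On three toy sequences where the first two are near-identical, the tree groups those two before joining the outlier, which is exactly the ordering a progressive aligner would then follow.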
==== MAFFT ==== This method utilizes a fast Fourier transform (FFT) that converts amino acid sequences into sequences composed of volume and polarity values for each amino acid residue. These new sequences are used to find homologous regions. ==== K-Align ==== This method utilizes the Wu-Manber approximate string matching algorithm to generate multiple sequence alignments. ==== Multiple sequence comparison by log expectation (MUSCLE) ==== This method utilizes Kmer and Kimura distances to generate multiple sequence alignments. ==== T-Coffee ==== This method utilizes tree-based consistency objective functions for alignment evolution. This method has been shown to be 5–10% more accurate than Clustal W. === Coevolutionary analysis === Coevolutionary analysis is also known as correlated mutation, covariation, or co-substitution. This type of rational design involves reciprocal evolutionary changes at evolutionarily interacting loci. Generally, this method begins with the generation of a curated multiple sequence alignment for the target sequence. This alignment is then subjected to manual refinement, which involves removal of highly gapped sequences as well as sequences with low sequence identity. This step increases the quality of the alignment. Next, the manually processed alignment is utilized for further coevolutionary measurements using distinct correlated mutation algorithms. These algorithms result in a coevolution scoring matrix. This matrix is filtered by applying various significance tests to extract significant coevolution values and wipe out background noise. Coevolutionary measurements are further evaluated to assess their performance and stringency. Finally, the results from this coevolutionary analysis are validated experimentally. === Structural prediction === De novo generation of proteins benefits from knowledge of existing protein structures. This knowledge of existing protein structure assists with the prediction of new protein structures.
Methods for protein structure prediction fall under one of the four following classes: ab initio, fragment based methods, homology modeling, and protein threading. ==== Ab initio ==== These methods involve free modeling without using any structural information about the template. Ab initio methods are aimed at prediction of the native structures of proteins corresponding to the global minimum of their free energy. Some examples of ab initio methods are AMBER, GROMOS, GROMACS, CHARMM, OPLS, and ENCEPP12. General steps for ab initio methods begin with the geometric representation of the protein of interest. Next, a potential energy function model for the protein is developed. This model can be created using either molecular mechanics potentials or protein structure derived potential functions. Following the development of a potential model, energy search techniques including molecular dynamics simulations, Monte Carlo simulations, and genetic algorithms are applied to the protein. ==== Fragment based ==== These methods use database information regarding structures to match homologous structures to the created protein sequences. These homologous structures are assembled to give compact structures using scoring and optimization procedures, with the goal of achieving the lowest potential energy score. Webservers for fragment information are I-TASSER, ROSETTA, ROSETTA@home, FRAGFOLD, CABS fold, PROFESY, CREF, QUARK, UNDERTAKER, HMM, and ANGLOR. ==== Homology modeling ==== These methods are based upon the homology of proteins. These methods are also known as comparative modeling. The first step in homology modeling is generally the identification of template sequences of known structure which are homologous to the query sequence. Next, the query sequence is aligned to the template sequence. Following the alignment, the structurally conserved regions are modeled using the template structure.
This is followed by the modeling of side chains and loops that are distinct from the template. Finally, the modeled structure undergoes refinement and assessment of quality. Servers that are available for homology modeling data are listed here: SWISS MODEL, MODELLER, ReformAlign, PyMOD, TIP-STRUCTFAST, COMPASS, 3d-PSSM, SAMT02, SAMT99, HHPRED, FAGUE, 3D-JIGSAW, META-PP, ROSETTA, and I-TASSER. ==== Protein threading ==== Protein threading can be used when a reliable homologue for the query sequence cannot be found. This method begins by obtaining a query sequence and a library of template structures. Next, the query sequence is threaded over known template structures. These candidate models are scored using scoring functions based upon potential energy models of both query and template sequence. The match with the lowest potential energy model is then selected. Methods and servers for retrieving threading data and performing calculations are listed here: GenTHREADER, pGenTHREADER, pDomTHREADER, ORFEUS, PROSPECT, BioShell-Threading, FFASO3, RaptorX, HHPred, LOOPP server, Sparks-X, SEGMER, THREADER2, ESYPRED3D, LIBRA, TOPITS, RAPTOR, COTH, and MUSTER. For more information on rational design, see site-directed mutagenesis. === Multivalent binding === Multivalent binding can be used to increase binding specificity and affinity through avidity effects. Having multiple binding domains in a single biomolecule or complex increases the likelihood of additional interactions occurring via individual binding events. Avidity, or effective affinity, can be much higher than the sum of the individual affinities, providing a cost- and time-effective tool for targeted binding. ==== Multivalent proteins ==== Multivalent proteins are relatively easy to produce by post-translational modifications or by multiplying the protein-coding DNA sequence.
The main advantage of multivalent and multispecific proteins is that they can increase the effective affinity for a target of a known protein. In the case of an inhomogeneous target, using a combination of proteins that results in multispecific binding can increase specificity, which has high applicability in protein therapeutics. The most common example of multivalent binding is antibodies, and there is extensive research on bispecific antibodies. Applications of bispecific antibodies cover a broad spectrum that includes diagnosis, imaging, prophylaxis, and therapy. === Directed evolution === In directed evolution, random mutagenesis, e.g. by error-prone PCR or sequence saturation mutagenesis, is applied to a protein, and a selection regime is used to select variants having desired traits. Further rounds of mutation and selection are then applied. This method mimics natural evolution and, in general, produces superior results to rational design. An added process, termed DNA shuffling, mixes and matches pieces of successful variants to produce better results. Such processes mimic the recombination that occurs naturally during sexual reproduction. Advantages of directed evolution are that it requires no prior structural knowledge of a protein, nor is it necessary to be able to predict what effect a given mutation will have. Indeed, the results of directed evolution experiments are often surprising in that desired changes are often caused by mutations that were not expected to have any effect. The drawback is that this approach requires high-throughput screening, which is not feasible for all proteins. Large amounts of recombinant DNA must be mutated and the products screened for desired traits. The large number of variants often requires expensive robotic equipment to automate the process. Further, not all desired activities can be screened for easily.
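Random mutagenesis by error-prone PCR, mentioned above, is easy to model in silico. The sketch below is a toy simulation, not a laboratory protocol: it follows a single product lineage and mis-copies each base with a fixed probability per cycle; all names and default parameters are illustrative.

```python
import random

BASES = "ACGT"

def error_prone_pcr(template, cycles=30, error_rate=2e-5, seed=0):
    """Simulate mutation accumulation along one product lineage.

    Taq polymerase lacks 3'->5' proofreading, giving roughly a
    0.001-0.002% error rate per nucleotide per replication; the
    default error_rate reflects that order of magnitude.
    """
    rng = random.Random(seed)
    seq = list(template)
    for _ in range(cycles):
        for i, base in enumerate(seq):
            if rng.random() < error_rate:
                # Substitute a different base at this position.
                seq[i] = rng.choice([b for b in BASES if b != base])
    return "".join(seq)
```

Raising `error_rate` (the in-silico analogue of adding MnCl2 or unbalancing the dNTP pool) increases the expected number of accumulated mutations roughly linearly with rate, cycle count, and template length.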
Natural Darwinian evolution can be effectively imitated in the lab toward tailoring protein properties for diverse applications, including catalysis. Many experimental technologies exist to produce large and diverse protein libraries and to screen or select for folded, functional variants. Folded proteins arise surprisingly frequently in random sequence space, an occurrence exploitable in evolving selective binders and catalysts. While more conservative than direct selection from deep sequence space, redesign of existing proteins by random mutagenesis and selection/screening is a particularly robust method for optimizing or altering extant properties. It also represents an excellent starting point for achieving more ambitious engineering goals. Allying experimental evolution with modern computational methods is likely the broadest, most fruitful strategy for generating functional macromolecules unknown to nature. The main challenges of designing high quality mutant libraries have shown significant progress in the recent past. This progress has been in the form of better descriptions of the effects of mutational loads on protein traits. Computational approaches have also made large advances in reducing the innumerably large sequence space to more manageable, screenable sizes, thus creating smart libraries of mutants. Library size has also been reduced to more screenable sizes by the identification of key beneficial residues using algorithms for systematic recombination. Finally, a significant step forward toward efficient reengineering of enzymes has been made with the development of more accurate statistical models and algorithms quantifying and predicting coupled mutational effects on protein functions. Generally, directed evolution may be summarized as an iterative two-step process which involves generation of protein mutant libraries, and high-throughput screening processes to select for variants with improved traits.
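The iterative two-step loop just described can be sketched as a simple hill-climbing simulation. Everything here is hypothetical: the `fitness` function stands in for a laboratory screen, and the "ideal" target sequence is invented purely so the toy screen has something to reward.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKVLAAGIVQ"  # hypothetical optimum that the toy screen rewards

def fitness(seq):
    # Stand-in for a high-throughput screen: fraction of residues
    # matching the (hypothetical) ideal sequence.
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

def mutate(seq, rng, rate=0.1):
    # Step 1: random point mutagenesis at a fixed per-residue rate.
    return "".join(rng.choice(AMINO_ACIDS) if rng.random() < rate else a
                   for a in seq)

def directed_evolution(parent, rounds=20, library_size=50, seed=1):
    rng = random.Random(seed)
    best = parent
    for _ in range(rounds):
        # Generate a mutant library from the current best variant.
        library = [mutate(best, rng) for _ in range(library_size)]
        # Step 2: screen the library and keep the fittest variant,
        # which seeds the next round of mutagenesis.
        best = max(library + [best], key=fitness)
    return best
```

Real campaigns differ in every detail (library sizes, selection regimes, recombination steps), but this captures the essential generate-screen-iterate structure.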
This technique does not require prior knowledge of the protein structure-function relationship. Directed evolution utilizes random or focused mutagenesis to generate libraries of mutant proteins. Random mutations can be introduced using either error prone PCR or site saturation mutagenesis. Mutants may also be generated using recombination of multiple homologous genes. Nature has evolved a limited number of beneficial sequences. Directed evolution makes it possible to identify undiscovered protein sequences which have novel functions. This ability is contingent on the protein's ability to tolerate amino acid residue substitutions without compromising folding or stability. Directed evolution methods can be broadly categorized into two strategies, asexual and sexual methods. === Asexual methods === Asexual methods do not generate any cross links between parental genes. Single genes are used to create mutant libraries using various mutagenic techniques. These asexual methods can produce either random or focused mutagenesis. ==== Random mutagenesis ==== Random mutagenic methods produce mutations at random throughout the gene of interest. Random mutagenesis can introduce the following types of mutations: transitions, transversions, insertions, deletions, inversions, missense, and nonsense. Examples of methods for producing random mutagenesis are below. ==== Error prone PCR ==== Error prone PCR exploits the fact that Taq DNA polymerase lacks 3' to 5' exonuclease activity. This results in an error rate of 0.001–0.002% per nucleotide per replication. This method begins with choosing the gene, or the area within a gene, one wishes to mutate. Next, the extent of error required is calculated based upon the type and extent of activity one wishes to generate. This extent of error determines the error prone PCR strategy to be employed. Following PCR, the genes are cloned into a plasmid and introduced to competent cell systems. These cells are then screened for desired traits.
Plasmids are then isolated from colonies which show improved traits, and are then used as templates for the next round of mutagenesis. Error prone PCR shows biases for certain mutations relative to others, such as a bias for transitions over transversions. Rates of error in PCR can be increased in the following ways: increase the concentration of magnesium chloride, which stabilizes non-complementary base pairing; add manganese chloride to reduce base pair specificity; increase and unbalance the addition of dNTPs; add base analogs like dITP, 8-oxo-dGTP, and dPTP; increase the concentration of Taq polymerase; increase extension time; increase cycle time; or use a less accurate Taq polymerase. Also see polymerase chain reaction for more information. ==== Rolling circle error-prone PCR ==== This PCR method is based upon rolling circle amplification, which is modeled on the method that bacteria use to amplify circular DNA. This method results in linear DNA duplexes. These fragments contain tandem repeats of circular DNA called concatemers, which can be transformed into bacterial strains. Mutations are introduced by first cloning the target sequence into an appropriate plasmid. Next, the amplification process begins using random hexamer primers and Φ29 DNA polymerase under error prone rolling circle amplification conditions. Additional conditions to produce error prone rolling circle amplification are 1.5 pM of template DNA, 1.5 mM MnCl2, and a 24 hour reaction time. MnCl2 is added to the reaction mixture to promote random point mutations in the DNA strands. Mutation rates can be increased by increasing the concentration of MnCl2, or by decreasing the concentration of template DNA. Error prone rolling circle amplification is advantageous relative to error prone PCR because of its use of universal random hexamer primers, rather than specific primers. Also, the reaction products of this amplification do not need to be treated with ligases or endonucleases.
This reaction is isothermal. ==== Chemical mutagenesis ==== Chemical mutagenesis involves the use of chemical agents to introduce mutations into genetic sequences. Examples of chemical mutagens follow. Sodium bisulfite is effective at mutating G/C rich genomic sequences, because it catalyses deamination of unmethylated cytosine to uracil. Ethyl methane sulfonate alkylates guanine residues; this alteration causes errors during DNA replication. Nitrous acid causes transversions by deamination of adenine and cytosine. The dual approach to random chemical mutagenesis is an iterative two step process. First, it involves the in vivo chemical mutagenesis of the gene of interest via EMS. Next, the treated gene is isolated and cloned into an untreated expression vector in order to prevent mutations in the plasmid backbone. This technique preserves the plasmid's genetic properties. ==== Targeting glycosylases to embedded arrays for mutagenesis (TaGTEAM) ==== This method has been used to create targeted in vivo mutagenesis in yeast. This method involves the fusion of a 3-methyladenine DNA glycosylase to a tetR DNA-binding domain. This has been shown to increase mutation rates by over 800-fold in regions of the genome containing tetO sites. ==== Mutagenesis by random insertion and deletion ==== This method involves alteration in length of the sequence via simultaneous deletion and insertion of chunks of bases of arbitrary length. This method has been shown to produce proteins with new functionalities via introduction of new restriction sites, specific codons, and four-base codons for non-natural amino acids. ==== Transposon based random mutagenesis ==== Recently, many methods for transposon based random mutagenesis have been reported.
These methods include, but are not limited to, the following: PERMUTE-random circular permutation, random protein truncation, random nucleotide triplet substitution, random domain/tag/multiple amino acid insertion, codon scanning mutagenesis, and multicodon scanning mutagenesis. These aforementioned techniques all require the design of mini-Mu transposons. Thermo Scientific manufactures kits for the design of these transposons. ==== Random mutagenesis methods altering the target DNA length ==== These methods involve altering gene length via insertion and deletion mutations. An example is the tandem repeat insertion (TRINS) method. This technique results in the generation of tandem repeats of random fragments of the target gene via rolling circle amplification and concurrent incorporation of these repeats into the target gene. ==== Mutator strains ==== Mutator strains are bacterial cell lines which are deficient in one or more DNA repair mechanisms. An example of a mutator strain is E. coli XL1-RED. This strain of E. coli is deficient in the MutS, MutD, and MutT DNA repair pathways. Use of mutator strains is useful for introducing many types of mutation; however, these strains show progressive sickness of culture because of the accumulation of mutations in the strain's own genome. ==== Focused mutagenesis ==== Focused mutagenic methods produce mutations at predetermined amino acid residues. These techniques require an understanding of the sequence-function relationship for the protein of interest. Understanding of this relationship allows for the identification of residues which are important in stability, stereoselectivity, and catalytic efficiency. Examples of methods that produce focused mutagenesis are below. ==== Site saturation mutagenesis ==== Site saturation mutagenesis is a PCR based method used to target amino acids with significant roles in protein function.
The two most common techniques for performing this are whole plasmid single PCR and overlap extension PCR. Whole plasmid single PCR is also referred to as site directed mutagenesis (SDM). SDM products are subjected to Dpn endonuclease digestion. This digestion results in cleavage of only the parental strand, because the parental strand contains a GmATC which is methylated at N6 of adenine. SDM does not work well for large plasmids of over ten kilobases. Also, this method is only capable of replacing two nucleotides at a time. Overlap extension PCR requires the use of two pairs of primers. One primer in each set contains a mutation. A first round of PCR using these primer sets is performed, and two double stranded DNA duplexes are formed. A second round of PCR is then performed, in which these duplexes are denatured and annealed with the primer sets again to produce heteroduplexes in which each strand has a mutation. Any gaps in these newly formed heteroduplexes are filled with DNA polymerases and further amplified. ==== Sequence saturation mutagenesis (SeSaM) ==== Sequence saturation mutagenesis results in the randomization of the target sequence at every nucleotide position. This method begins with the generation of variable length DNA fragments tailed with universal bases via the use of terminal transferases at the 3' termini. Next, these fragments are extended to full length using a single stranded template. The universal bases are replaced with a random standard base, causing mutations. There are several modified versions of this method, such as SeSAM-Tv-II, SeSAM-Tv+, and SeSAM-III. ==== Single primer reactions in parallel (SPRINP) ==== This site saturation mutagenesis method involves two separate PCR reactions. The first uses only forward primers, while the second uses only reverse primers. This avoids primer dimer formation.
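At the codon level, site saturation is often implemented with degenerate NNK codons (N = A/C/G/T, K = G/T), which encode all 20 amino acids while excluding two of the three stop codons. A minimal sketch of enumerating such a library follows; the toy gene and target position are invented for illustration.

```python
from itertools import product

N, K = "ACGT", "GT"

def nnk_codons():
    # The 32 degenerate NNK codons used in site saturation mutagenesis.
    return ["".join(c) for c in product(N, N, K)]

def saturate_site(gene, codon_index):
    # Replace the codon at codon_index with every NNK codon, yielding
    # the variant library for that single site.
    start = 3 * codon_index
    head, tail = gene[:start], gene[start + 3:]
    return [head + codon + tail for codon in nnk_codons()]
```

For the toy gene ATG GCT AAA, saturating codon 1 yields 32 variants that all keep the flanking codons intact; only the TAG stop codon survives the NNK restriction.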
==== Mega primed and ligase free focused mutagenesis ==== This site saturation mutagenic technique begins with one mutagenic oligonucleotide and one universal flanking primer. These two reactants are used for an initial PCR cycle. Products from this first PCR cycle are used as mega primers for the next PCR. ==== Ω-PCR ==== This site saturation mutagenic method is based on overlap extension PCR. It is used to introduce mutations at any site in a circular plasmid. ==== PFunkel-OmniChange-OSCARR ==== This method utilizes user defined site directed mutagenesis at single or multiple sites simultaneously. OSCARR is an acronym for one pot simple methodology for cassette randomization and recombination. This randomization and recombination results in randomization of desired fragments of a protein. OmniChange is a sequence independent, multi-site saturation mutagenesis method which can saturate up to five independent codons on a gene. ==== Trimer-dimer mutagenesis ==== This method removes redundant codons and stop codons. ==== Cassette mutagenesis ==== This is a PCR based method. Cassette mutagenesis begins with the synthesis of a DNA cassette containing the gene of interest, which is flanked on either side by restriction sites. The endonuclease which cleaves these restriction sites also cleaves sites in the target plasmid. The DNA cassette and the target plasmid are both treated with endonucleases to cleave these restriction sites and create sticky ends. Next, the products from this cleavage are ligated together, resulting in the insertion of the gene into the target plasmid. An alternative form of cassette mutagenesis, called combinatorial cassette mutagenesis, is used to identify the functions of individual amino acid residues in the protein of interest. Recursive ensemble mutagenesis then utilizes information from previous combinatorial cassette mutagenesis. Codon cassette mutagenesis allows the insertion or replacement of a single codon at a particular site in double stranded DNA.
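As a toy model of the cassette workflow just described, the digestion and ligation steps can be reduced to string surgery: locate the flanking recognition sites and splice the cassette in between them. The plasmid sequence is invented and sticky-end chemistry is deliberately ignored; only the site sequences (BamHI's GGATCC, EcoRI's GAATTC) are real.

```python
def cassette_mutagenesis(plasmid, cassette, left_site, right_site):
    # Model endonuclease digestion as locating the two flanking
    # recognition sites, then model ligation as splicing the cassette
    # in between them (overhang chemistry is not simulated).
    i = plasmid.index(left_site) + len(left_site)
    j = plasmid.index(right_site)
    if j < i:
        raise ValueError("right site must lie downstream of left site")
    return plasmid[:i] + cassette + plasmid[j:]
```

A quick usage example with a made-up plasmid:

```python
plasmid = "AAAGGATCCTTTGAATTCAAA"   # BamHI site ... EcoRI site
out = cassette_mutagenesis(plasmid, "NNN", "GGATCC", "GAATTC")
# The TTT segment between the sites is replaced by the cassette.
```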
=== Sexual methods === Sexual methods of directed evolution involve in vitro recombination which mimics natural in vivo recombination. Generally, these techniques require high sequence homology between parental sequences. These techniques are often used to recombine two different parental genes, and these methods create crossovers between these genes. ==== In vitro homologous recombination ==== Homologous recombination can be categorized as either in vivo or in vitro. In vitro homologous recombination mimics natural in vivo recombination. These in vitro recombination methods require high sequence homology between parental sequences. These techniques exploit the natural diversity in parental genes by recombining them to yield chimeric genes. The resulting chimeras show a blend of parental characteristics. ==== DNA shuffling ==== This in vitro technique was one of the first techniques in the era of recombination. It begins with the digestion of homologous parental genes into small fragments by DNase1. These small fragments are then purified from undigested parental genes. Purified fragments are then reassembled using primer-less PCR. This PCR involves homologous fragments from different parental genes priming for each other, resulting in chimeric DNA. The chimeric DNA of parental size is then amplified using end terminal primers in regular PCR. ==== Random priming in vitro recombination (RPR) ==== This in vitro homologous recombination method begins with the synthesis of many short gene fragments exhibiting point mutations using random sequence primers. These fragments are reassembled to full length parental genes using primer-less PCR. These reassembled sequences are then amplified using PCR and subjected to further selection processes. This method is advantageous relative to DNA shuffling because there is no use of DNase1, and thus no bias for recombination next to a pyrimidine nucleotide.
This method is also advantageous due to its use of synthetic random primers which are uniform in length and lack biases. Finally, this method is independent of the length of the DNA template sequence, and requires a small amount of parental DNA. ==== Truncated metagenomic gene-specific PCR ==== This method generates chimeric genes directly from metagenomic samples. It begins with isolation of the desired gene by functional screening of a metagenomic DNA sample. Next, specific primers are designed and used to amplify the homologous genes from different environmental samples. Finally, chimeric libraries are generated to retrieve the desired functional clones by shuffling these amplified homologous genes. ==== Staggered extension process (StEP) ==== This in vitro method is based on template switching to generate chimeric genes. This PCR based method begins with an initial denaturation of the template, followed by annealing of primers and a short extension time. All subsequent cycles generate annealing between the short fragments generated in previous cycles and different parts of the template. These short fragments and the templates anneal together based on sequence complementarity. This process of fragments annealing to template DNA is known as template switching. These annealed fragments will then serve as primers for further extension. This method is carried out until the parental length chimeric gene sequence is obtained. Execution of this method only requires flanking primers to begin. There is also no need for the DNase1 enzyme. ==== Random chimeragenesis on transient templates (RACHITT) ==== This method has been shown to generate chimeric gene libraries with an average of 14 crossovers per chimeric gene. It begins by aligning fragments from a parental top strand onto the bottom strand of a uracil containing template from a homologous gene. 5' and 3' overhang flaps are cleaved and gaps are filled by the exonuclease and endonuclease activities of Pfu and Taq DNA polymerases.
The uracil containing template is then removed from the heteroduplex by treatment with a uracil DNA glycosylase, followed by further amplification using PCR. This method is advantageous because it generates chimeras with relatively high crossover frequency. However, it is somewhat limited due to its complexity and the need for generation of single stranded DNA and uracil containing single stranded template DNA. ==== Synthetic shuffling ==== Shuffling of synthetic degenerate oligonucleotides adds flexibility to shuffling methods, since oligonucleotides containing optimal codons and beneficial mutations can be included. ==== In vivo homologous recombination ==== Cloning performed in yeast involves PCR dependent reassembly of fragmented expression vectors. These reassembled vectors are then introduced into, and cloned in, yeast. Using yeast to clone the vector avoids the toxicity and counter-selection that would be introduced by ligation and propagation in E. coli. ==== Mutagenic organized recombination process by homologous in vivo grouping (MORPHING) ==== This method introduces mutations into specific regions of genes while leaving other parts intact, by utilizing the high frequency of homologous recombination in yeast. ==== Phage-assisted continuous evolution (PACE) ==== This method utilizes a bacteriophage with a modified life cycle to transfer evolving genes from host to host. The phage's life cycle is designed in such a way that the transfer is correlated with the activity of interest from the enzyme. This method is advantageous because it requires minimal human intervention for the continuous evolution of the gene. === In vitro non-homologous recombination methods === These methods are based upon the fact that proteins can exhibit similar structural identity while lacking sequence homology. ==== Exon shuffling ==== Exon shuffling is the combination of exons from different proteins by recombination events occurring at introns.
Orthologous exon shuffling involves combining exons from orthologous genes from different species. Orthologous domain shuffling involves shuffling of entire protein domains from orthologous genes from different species. Paralogous exon shuffling involves shuffling of exons from different genes from the same species. Paralogous domain shuffling involves shuffling of entire protein domains from paralogous proteins from the same species. Functional homolog shuffling involves shuffling of non-homologous domains which are functionally related. All of these processes begin with amplification of the desired exons from different genes using chimeric synthetic oligonucleotides. These amplification products are then reassembled into full length genes using primer-less PCR. During these PCR cycles the fragments act as templates and primers. This results in chimeric full length genes, which are then subjected to screening. ==== Incremental truncation for the creation of hybrid enzymes (ITCHY) ==== Fragments of parental genes are created using controlled digestion by exonuclease III. These fragments are blunted using endonuclease, and are ligated to produce hybrid genes. THIOITCHY is a modified ITCHY technique which utilizes nucleotide triphosphate analogs such as α-phosphothioate dNTPs. Incorporation of these nucleotides blocks digestion by exonuclease III. This inhibition of digestion by exonuclease III is called spiking. Spiking can be accomplished by first truncating genes with exonuclease to create fragments with short single stranded overhangs. These fragments then serve as templates for amplification by DNA polymerase in the presence of small amounts of phosphothioate dNTPs. These resulting fragments are then ligated together to form full length genes. Alternatively, the intact parental genes can be amplified by PCR in the presence of normal dNTPs and phosphothioate dNTPs. These full length amplification products are then subjected to digestion by an exonuclease.
Digestion will continue until the exonuclease encounters an α-pdNTP, resulting in fragments of different lengths. These fragments are then ligated together to generate chimeric genes. ==== SCRATCHY ==== This method generates libraries of hybrid genes containing multiple crossovers by combining DNA shuffling and ITCHY. This method begins with the construction of two independent ITCHY libraries, the first with gene A on the N-terminus, and the other with gene B on the N-terminus. These hybrid gene fragments are separated using either restriction enzyme digestion or PCR with terminus primers via agarose gel electrophoresis. These isolated fragments are then mixed together and further digested using DNase1. Digested fragments are then reassembled by primer-less PCR with template switching. ==== Recombined extension on truncated templates (RETT) ==== This method generates libraries of hybrid genes by template switching of uni-directionally growing polynucleotides in the presence of single stranded DNA fragments as templates for chimeras. This method begins with the preparation of single stranded DNA fragments by reverse transcription from target mRNA. Gene specific primers are then annealed to the single stranded DNA. These genes are then extended during a PCR cycle. This cycle is followed by template switching and annealing of the short fragments obtained from the earlier primer extension to other single stranded DNA fragments. This process is repeated until full length single stranded DNA is obtained. ==== Sequence homology-independent protein recombination (SHIPREC) ==== This method generates recombination between genes with little to no sequence homology. These chimeras are fused via a linker sequence containing several restriction sites. This construct is then digested using DNase1. Fragments are made blunt-ended using S1 nuclease. These blunt end fragments are put together into a circular sequence by ligation.
This circular construct is then linearized using restriction enzymes whose restriction sites are present in the linker region. This results in a library of chimeric genes in which the contribution of each gene to the 5' and 3' ends is reversed as compared to the starting construct. ==== Sequence independent site directed chimeragenesis (SISDC) ==== This method results in a library of genes with multiple crossovers from several parental genes. It does not require sequence identity among the parental genes, but does require one or two conserved amino acids at every crossover position. It begins with alignment of the parental sequences and identification of consensus regions which serve as crossover sites. This is followed by the incorporation of specific tags containing restriction sites, and then removal of the tags by digestion with Bac1, resulting in genes with cohesive ends. These gene fragments are mixed and ligated in an appropriate order to form chimeric libraries. ==== Degenerate homo-duplex recombination (DHR) ==== This method begins with alignment of homologous genes, followed by identification of regions of polymorphism. Next, the top strand of the gene is divided into small degenerate oligonucleotides, and the bottom strand is digested into oligonucleotides to serve as scaffolds. These fragments are combined in solution, and the top-strand oligonucleotides are assembled onto the bottom-strand oligonucleotides. Gaps between the fragments are filled with polymerase and ligated. ==== Random multi-recombinant PCR (RM-PCR) ==== This method involves the shuffling of multiple DNA fragments without homology in a single PCR. This results in the reconstruction of complete proteins by assembly of modules encoding different structural units. ==== User friendly DNA recombination (USERec) ==== This method begins with the amplification of the gene fragments to be recombined, using uracil dNTPs. 
The amplification solution also contains primers and PfuTurbo Cx Hotstart DNA polymerase. Amplified products are next incubated with the USER enzyme, which catalyzes the removal of uracil residues from DNA, creating single base pair gaps. The USER enzyme-treated fragments are mixed and ligated using T4 DNA ligase and subjected to DpnI digestion to remove the template DNA. The resulting single-stranded fragments are amplified by PCR and transformed into E. coli. ==== Golden Gate shuffling (GGS) recombination ==== This method allows the recombination of at least 9 different fragments in an acceptor vector by using a type IIS restriction enzyme, which cuts outside of its recognition site. It begins with subcloning of fragments in separate vectors to create BsaI flanking sequences on both sides. These vectors are then cleaved using the type IIS restriction enzyme BsaI, which generates four-nucleotide single-strand overhangs. Fragments with complementary overhangs are hybridized and ligated using T4 DNA ligase. Finally, these constructs are transformed into E. coli cells, which are screened for expression levels. ==== Phosphorothioate-based DNA recombination method (PRTec) ==== This method can be used to recombine structural elements or entire protein domains. It is based on phosphorothioate chemistry, which allows the specific cleavage of phosphorothiodiester bonds. The first step begins with amplification of the fragments that need to be recombined, along with the vector backbone. This amplification is accomplished using primers with phosphorothiolated nucleotides at their 5' ends. The amplified PCR products are cleaved in an ethanol-iodine solution at high temperatures. Next, these fragments are hybridized at room temperature and transformed into E. coli, which repair any nicks. ==== Integron ==== This system is based upon a natural site-specific recombination system in E. coli. 
This system, called the integron system, produces natural gene shuffling. The method was used to construct and optimize a functional tryptophan biosynthetic operon in trp-deficient E. coli by delivering individual recombination cassettes or trpA–E genes along with regulatory elements via the integron system. ==== Y-Ligation based shuffling (YLBS) ==== This method generates single-stranded DNA strands which encompass a single block sequence either at the 5' or 3' end, complementary sequences in a stem-loop region, and a D branch region serving as a primer binding site for PCR. Equivalent amounts of both 5' and 3' half strands are mixed and form a hybrid due to the complementarity in the stem region. Hybrids with a free phosphorylated 5' end in the 3' half strands are then ligated to free 3' ends in the 5' half strands using T4 DNA ligase in the presence of 0.1 mM ATP. Ligated products are then amplified by two types of PCR to generate pre-5' half and pre-3' half PCR products. These PCR products are converted to single strands via avidin–biotin binding to the 5' end of the primers containing stem sequences that were biotin-labeled. Next, biotinylated 5' half strands and non-biotinylated 3' half strands are used as the 5' and 3' half strands for the next Y-ligation cycle. == Semi-rational design == Semi-rational design uses information about a protein's sequence, structure and function, in tandem with predictive algorithms, to identify the target amino acid residues most likely to influence protein function. Mutating these key residues creates libraries of mutant proteins that are more likely to have enhanced properties. Advances in semi-rational enzyme engineering and de novo enzyme design provide researchers with powerful and effective new strategies to manipulate biocatalysts. Integration of sequence- and structure-based approaches in library design has proven to be a valuable guide for enzyme redesign. 
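The focused-library idea behind semi-rational design can be sketched as a minimal toy model. All values and position labels below are hypothetical; real workflows derive residue scores from multiple sequence alignments or structural models. The point is only that restricting mutagenesis to a few predicted hotspot positions yields a library small enough for low-throughput screening:

```python
from itertools import product

# Hypothetical per-residue "influence" scores (e.g. from conservation
# analysis or structure-based prediction); higher = more likely to
# affect function. Positions are 1-based for readability.
scores = {45: 0.91, 102: 0.87, 7: 0.80, 233: 0.42, 150: 0.31}

# Semi-rational design: mutate only the top-k predicted hotspots
# instead of randomizing the whole sequence.
k = 3
hotspots = sorted(scores, key=scores.get, reverse=True)[:k]

# Allowed substitutions per site (a reduced amino-acid alphabet
# keeps the library compact).
allowed = ["A", "L", "F", "S"]

library = [dict(zip(hotspots, combo))
           for combo in product(allowed, repeat=len(hotspots))]

print(sorted(hotspots))  # residues chosen for mutation: [7, 45, 102]
print(len(library))      # 4**3 = 64 variants, far fewer than full saturation
```

A full saturation library over even five positions with all 20 amino acids would contain 20^5 = 3.2 million variants; the predictive filter above reduces that to 64.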
Generally, current computational de novo and redesign methods do not compare to evolved variants in catalytic performance. Although experimental optimization may be achieved using directed evolution, further improvements in the accuracy of structure prediction and in catalytic ability will come with improvements in design algorithms. Further functional enhancements may be included in future simulations by integrating protein dynamics. Biochemical and biophysical studies, along with fine-tuning of predictive frameworks, will be useful for experimentally evaluating the functional significance of individual design features. Better understanding of these functional contributions will then give feedback for the improvement of future designs. Directed evolution will likely not be replaced as the method of choice for protein engineering, although computational protein design has fundamentally changed the way protein engineering can manipulate bio-macromolecules. Smaller, more focused and functionally rich libraries may be generated by using methods which incorporate predictive frameworks for hypothesis-driven protein engineering. New design strategies and technical advances have begun a departure from traditional protocols such as directed evolution, which remains the most effective strategy for identifying top-performing candidates in focused libraries. Whole-gene library synthesis is replacing shuffling and mutagenesis protocols for library preparation, and highly specific low-throughput screening assays are increasingly applied in place of monumental screening and selection efforts over millions of candidates. Together, these developments are poised to take protein engineering beyond directed evolution and towards practical, more efficient strategies for tailoring biocatalysts. 
== Screening and selection techniques == Once a protein has undergone directed evolution, rational design or semi-rational design, the libraries of mutant proteins must be screened to determine which mutants show enhanced properties. Phage display methods are one option for screening proteins. This method involves the fusion of genes encoding the variant polypeptides with phage coat protein genes. Protein variants expressed on phage surfaces are selected by binding with immobilized targets in vitro. Phages carrying selected protein variants are then amplified in bacteria, followed by identification of positive clones by enzyme-linked immunosorbent assay. These selected phages are then subjected to DNA sequencing. Cell surface display systems can also be utilized to screen mutant polypeptide libraries. The library's mutant genes are incorporated into expression vectors, which are then transformed into appropriate host cells. These host cells are subjected to further high-throughput screening methods to identify cells with desired phenotypes. Cell-free display systems have been developed to exploit in vitro protein translation; these methods include mRNA display, ribosome display, covalent and non-covalent DNA display, and in vitro compartmentalization. === Enzyme engineering === Enzyme engineering is the application of modifying an enzyme's structure (and, thus, its function) or modifying the catalytic activity of isolated enzymes to produce new metabolites, to allow new (catalyzed) pathways for reactions to occur, or to convert certain compounds into others (biotransformation). These products are useful as chemicals, pharmaceuticals, fuel, food, or agricultural additives. An enzyme reactor consists of a vessel containing a reaction medium that is used to perform a desired conversion by enzymatic means. Enzymes used in this process are free in the solution. Microorganisms are also an important source of native enzymes. 
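The selection logic applied after a screening assay can be illustrated with a minimal sketch. The variant names, activity values, and cutoff below are all hypothetical; real screens compare assay readouts (fluorescence, growth, product titer) against a wild-type benchmark and carry only the improved hits into the next round:

```python
# Toy screening step: each library variant is assayed and only those
# exceeding the wild-type benchmark by a set margin are kept.
wild_type_activity = 1.0

# Hypothetical variant labels and normalized activities.
assays = {
    "WT":       1.00,
    "V1/A45L":  1.35,
    "V2/F102S": 0.40,
    "V3/L7F":   1.80,
    "V4/A45F":  0.95,
}

improvement_cutoff = 1.2  # require at least a 20% gain over wild type

hits = {name: act for name, act in assays.items()
        if act >= improvement_cutoff * wild_type_activity}

# Rank hits so the best variant seeds the next round of evolution.
ranked = sorted(hits, key=hits.get, reverse=True)
print(ranked)  # ['V3/L7F', 'V1/A45L']
```

In practice this filter-and-rank loop is iterated: the top hit becomes the parent for the next library, mirroring a round of directed evolution.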
== Examples of engineered proteins == Computing methods have been used to design a protein with a novel fold, such as Top7, and sensors for unnatural molecules. The engineering of fusion proteins has yielded rilonacept, a pharmaceutical that has secured Food and Drug Administration (FDA) approval for treating cryopyrin-associated periodic syndrome. Another computing method, IPRO, successfully engineered the switching of cofactor specificity of Candida boidinii xylose reductase. Iterative Protein Redesign and Optimization (IPRO) redesigns proteins to increase or give specificity to native or novel substrates and cofactors. This is done by repeatedly randomly perturbing the structure of the proteins around specified design positions, identifying the lowest energy combination of rotamers, and determining whether the new design has a lower binding energy than prior ones. The iterative nature of this process allows IPRO to make additive mutations to a protein sequence that collectively improve the specificity toward desired substrates and/or cofactors. Computation-aided design has also been used to engineer complex properties of a highly ordered nano-protein assembly. A protein cage, E. coli bacterioferritin (EcBfr), which naturally shows structural instability and an incomplete self-assembly behavior by populating two oligomerization states, is the model protein in this study. Through computational analysis and comparison to its homologs, it has been found that this protein has a smaller-than-average dimeric interface on its two-fold symmetry axis due mainly to the existence of an interfacial water pocket centered on two water-bridged asparagine residues. To investigate the possibility of engineering EcBfr for modified structural stability, a semi-empirical computational method is used to virtually explore the energy differences of the 480 possible mutants at the dimeric interface relative to the wild type EcBfr. 
This computational study also converges on the water-bridged asparagines. Replacing these two asparagines with hydrophobic amino acids results in proteins that fold into alpha-helical monomers and assemble into cages, as evidenced by circular dichroism and transmission electron microscopy. Both thermal and chemical denaturation confirm that all redesigned proteins, in agreement with the calculations, possess increased stability. One of the three mutations shifts the population in favor of the higher-order oligomerization state in solution, as shown by both size exclusion chromatography and native gel electrophoresis. An in silico method, PoreDesigner, was developed to redesign the bacterial channel protein OmpF to reduce its 1 nm pore size to any desired sub-nm dimension. Transport experiments on the narrowest designed pores revealed complete salt rejection when assembled in biomimetic block-polymer matrices. == See also == == References == == External links == Servers for protein engineering and related topics based on the WHAT IF software Enzymes Built from Scratch – Researchers engineer never-before-seen catalysts using a new computational technique, Technology Review, March 10, 2008
https://en.wikipedia.org/wiki/Protein_engineering
Marine engineering is the engineering of boats, ships, submarines, and any other marine vessel. Here it is also taken to include the engineering of other ocean systems and structures – referred to in certain academic and professional circles as "ocean engineering". After completing this degree one can join a ship as an officer in the engine department and eventually rise to the rank of chief engineer. This rank is one of the top ranks onboard and is equal to that of a ship's captain. Marine engineering is a highly preferred course for joining the merchant navy as an officer, as it provides ample opportunities for both onboard and onshore jobs. Marine engineering applies a number of engineering sciences, including mechanical engineering, electrical engineering, electronic engineering, and computer engineering, to the development, design, operation and maintenance of watercraft propulsion and ocean systems. It includes but is not limited to power and propulsion plants, machinery, piping, automation and control systems for marine vehicles of any kind, as well as coastal and offshore structures. == History == Archimedes is traditionally regarded as the first marine engineer, having developed a number of marine engineering systems in antiquity. Modern marine engineering dates back to the beginning of the Industrial Revolution (early 1700s). In 1807, Robert Fulton successfully used a steam engine to propel a vessel through the water. Fulton's ship used the engine to power a small wooden paddle wheel as its marine propulsion system. The integration of a steam engine into a watercraft to create a marine steam engine was the start of the marine engineering profession. Only twelve years after Fulton's Clermont had her first voyage, the Savannah marked the first sea voyage from America to Europe. 
Around 50 years later, steam-powered paddle wheels reached their peak with the creation of the Great Eastern, which was as big as one of the cargo ships of today: 700 feet in length and weighing 22,000 tons. Paddle steamers remained the front-runners of the steamship industry for the next thirty years, until the next type of propulsion came along. == Training == There are several educational paths to becoming a marine engineer, all of which include earning a university or college degree, such as a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bachelor of Technology (B.Tech.), Bachelor of Technology Management and Marine Engineering (B.TecMan & MarEng), or a Bachelor of Applied Science (B.A.Sc.) in Marine Engineering. Depending on the country and jurisdiction, a master's degree, such as a Master of Engineering (M.Eng.), Master of Science (M.Sc. or M.S.), or Master of Applied Science (M.A.Sc.), may be required to be licensed as a marine engineer. Some marine engineers join the profession laterally, entering from other disciplines such as mechanical engineering, civil engineering, electrical engineering, geomatics engineering and environmental engineering, or from science-based fields such as geology, geophysics, physics, geomatics, Earth science, and mathematics. To qualify as a marine engineer, those changing professions are required to earn a graduate marine engineering degree, such as an M.Eng., M.S., M.Sc., or M.A.Sc., after graduating from a different quantitative undergraduate program. 
The fundamental subjects of marine engineering study usually include:
Mathematics: calculus, algebra, differential equations, numerical analysis
Geoscience: geochemistry, geophysics, mineralogy, geomatics
Mechanics: rock mechanics, soil mechanics, geomechanics
Thermodynamics: heat transfer, work (thermodynamics), mass transfer
Hydrogeology
Fluid mechanics: fluid statics, fluid dynamics
Geostatistics: spatial analysis, statistics
Control engineering: control theory, instrumentation
Surface mining: open-pit mining
== Related Fields == === Naval architecture === In the engineering of seagoing vessels, naval architecture is concerned with the overall design of the ship and its propulsion through the water, while marine engineering ensures that the ship systems function as per the design. Although they are distinct disciplines, naval architects and marine engineers often work side-by-side. === Ocean engineering (and combination with Marine engineering) === Ocean engineering is concerned with other structures and systems in or adjacent to the ocean, including offshore platforms, coastal structures such as piers and harbors, and other ocean systems such as ocean wave energy conversion and underwater life-support systems. This in fact makes ocean engineering a distinct field from marine engineering, which is concerned with the design and application of shipboard systems specifically. However, on account of its similar nomenclature and multiple overlapping core disciplines (e.g. hydrodynamics, hydromechanics, and materials science), "ocean engineering" sometimes operates under the umbrella term of "marine engineering", especially in industry and academia outside of the U.S. The same combination has been applied to the rest of this article. === Oceanography === Oceanography is a scientific field concerned with the acquisition and analysis of data to characterize the ocean. 
Although separate disciplines, marine engineering and oceanography are closely intertwined: marine engineers often use data gathered by oceanographers to inform their design and research, and oceanographers use tools designed by marine engineers (more specifically, oceanographic engineers) to advance their understanding and exploration of the ocean. === Mechanical engineering === Marine engineering incorporates many aspects of mechanical engineering. One manifestation of this relationship lies in the design of shipboard propulsion systems. Mechanical engineers design the main propulsion plant and the powering and mechanization aspects of ship functions such as steering, anchoring, cargo handling, heating, ventilation, air conditioning, interior and exterior communication, and other related requirements. Electrical power generation and electrical power distribution systems are typically designed by their suppliers; the only design responsibility of the marine engineer is installation. Furthermore, an understanding of mechanical engineering topics such as fluid dynamics, fluid mechanics, linear wave theory, strength of materials, structural mechanics, and structural dynamics is essential to a marine engineer's repertoire of skills. These and other mechanical engineering subjects serve as an integral component of the marine engineering curriculum. === Civil Engineering === Civil engineering concepts play an important role in many marine engineering projects, such as the design and construction of ocean structures, ocean bridges and tunnels, and port/harbor design. ==== Coastal engineering ==== === Electronics and Robotics === Marine engineering often draws on the fields of electrical engineering and robotics, especially in applications related to deploying deep-sea cables and UUVs. 
==== Deep-sea cables ==== A series of transoceanic fiber optic cables is responsible for connecting much of the world's communication via the internet, carrying as much as 99 percent of total global internet and signal traffic. These cables must be engineered to withstand deep-sea environments that are remote and often unforgiving, with extreme pressures and temperatures as well as potential interference from fishing, trawling, and sea life. ==== UUV autonomy and networks ==== The use of unmanned underwater vehicles (UUVs) stands to benefit from the use of autonomous algorithms and networking. Marine engineers aim to learn how advancements in autonomy and networking can be used to enhance existing UUV technologies and facilitate the development of more capable underwater vehicles. === Petroleum Engineering === A knowledge of marine engineering proves useful in the field of petroleum engineering, as hydrodynamics and seabed integration serve as key elements in the design and maintenance of offshore oil platforms. === Marine construction === Marine construction is the process of building structures in or adjacent to large bodies of water, usually the sea. These structures can be built for a variety of purposes, including transportation, energy production, and recreation. Marine construction can involve the use of a variety of building materials, predominantly steel and concrete. Some examples of marine structures include ships, offshore platforms, moorings, pipelines, cables, wharves, bridges, tunnels, breakwaters and docks. == Challenges specific to marine engineering == === Hydrodynamic loading === In the same way that civil engineers design to accommodate wind loads on buildings and bridges, marine engineers design to accommodate the waves that strike a ship or submarine millions of times over the course of the vessel's life. 
These load conditions are also found in marine construction and coastal engineering. === Stability === Any seagoing vessel has a constant need for hydrostatic stability. A naval architect, like an airplane designer, is concerned with stability. What makes the naval architect's job unique is that a ship operates in two fluids simultaneously: water and air. Even after a ship has been designed and put to sea, marine engineers face the challenge of balancing cargo, as stacking containers vertically increases the mass of the ship and shifts the center of gravity higher. The weight of fuel also presents a problem, as the pitch of the ship may cause the liquid to shift, resulting in an imbalance. In some vessels, this offset is counteracted by storing water inside larger ballast tanks. Marine engineers are responsible for balancing and tracking the fuel and ballast water of a ship. Floating offshore structures have similar constraints. === Corrosion === The saltwater environment faced by seagoing vessels makes them highly susceptible to corrosion. In every project, marine engineers are concerned with surface protection and preventing galvanic corrosion. Corrosion can be inhibited through cathodic protection by introducing pieces of metal (e.g. zinc) to serve as a "sacrificial anode" in the corrosion reaction; this causes the sacrificial metal to corrode instead of the ship's hull. Another way to prevent corrosion is by sending a controlled amount of low DC current through the ship's hull, thereby changing the hull's electrical charge and delaying the onset of electro-chemical corrosion. Similar problems are encountered in coastal and offshore structures. === Anti-fouling === Anti-fouling is the process of eliminating obstructive organisms from essential components of seawater systems. 
Depending on the nature and location of marine growth, this process is performed in a number of different ways: Marine organisms may grow and attach to the surfaces of the outboard suction inlets used to obtain water for cooling systems. Electro-chlorination involves running high electrical current through sea water, altering the water's chemical composition to create sodium hypochlorite, which purges any bio-matter. An electrolytic method of anti-fouling involves running electrical current through two anodes (Scardino, 2009). These anodes typically consist of copper and aluminum (or alternatively, iron). The first metal, the copper anode, releases its ions into the water, creating an environment that is too toxic for bio-matter. The second metal, aluminum, coats the inside of the pipes to prevent corrosion. Other forms of marine growth, such as mussels and algae, may attach themselves to the bottom of a ship's hull. This growth interferes with the smoothness and uniformity of the hull, giving the ship a less hydrodynamic shape that makes it slower and less fuel-efficient. Marine growth on the hull can be remedied by using special paint that prevents the growth of such organisms. === Pollution control === ==== Sulfur emission ==== The burning of marine fuels releases harmful pollutants into the atmosphere. Ships burn marine diesel in addition to heavy fuel oil. Heavy fuel oil, being the heaviest of refined oils, releases sulfur dioxide when burned. Sulfur dioxide emissions have the potential to raise atmospheric and ocean acidity, causing harm to marine life. Because of the pollution created, heavy fuel oil may only be burned in international waters; it remains commercially advantageous due to its cost-effectiveness compared to other marine fuels. It is projected that heavy fuel oil will be phased out of commercial use by the year 2020 (Smith, 2018). 
==== Oil and water discharge ==== Water, oil, and other substances collect at the bottom of the ship in what is known as the bilge. Bilge water is pumped overboard, but must pass a pollution threshold test of 15 ppm (parts per million) of oil to be discharged. Water is tested and either discharged if clean or recirculated to a holding tank to be separated before being tested again. The tank it is sent back to, the oily water separator, utilizes gravity to separate the fluids due to their differing densities. Ships over 400 gross tons are required to carry equipment to separate oil from bilge water. Further, as enforced by MARPOL, all ships over 400 gross tons and all oil tankers over 150 gross tons are required to log all oil transfers in an oil record book (EPA, 2011). === Cavitation === Cavitation is the formation of a vapor bubble in a liquid due to the vaporization of that liquid caused by an area of low pressure. This area of low pressure lowers the boiling point of the liquid, allowing it to vaporize into a gas. Cavitation can take place in pumps, where it can damage the impeller that moves fluid through the system. Cavitation is also seen in propulsion: low-pressure pockets form on the surface of the propeller blades as its revolutions per minute increase (IIMS, 2015). Cavitation on the propeller causes small but violent implosions which can warp the propeller blades. To remedy the issue, adding blades allows the same propulsion force to be produced at a lower rate of revolutions. This is crucial for submarines, as the propeller needs to keep the vessel relatively quiet to stay hidden; with more propeller blades, the vessel achieves the same propulsion force at lower shaft revolutions. == Applications == The following categories provide a number of focus areas in which marine engineers direct their efforts. 
=== Arctic Engineering === In designing systems that operate in the Arctic (especially scientific equipment such as meteorological instrumentation and oceanographic buoys), marine engineers must overcome an array of design challenges. Equipment must be able to operate at extreme temperatures for prolonged periods of time, often with little to no maintenance. This creates the need for exceptionally temperature-resistant materials and durable precision electronic components. === Coastal Design and Restoration === Coastal engineering applies a mixture of civil engineering and other disciplines to create coastal solutions for areas along or near the ocean. In protecting coastlines from wave forces, erosion, and sea level rise, marine engineers must consider whether they will use a "gray" infrastructure solution – such as a breakwater, culvert, or sea wall made from rocks and concrete – or a "green" infrastructure solution that incorporates aquatic plants, mangroves, and/or marsh ecosystems. It has been found that gray infrastructure costs more to build and maintain, but it may provide better protection against ocean forces in high-energy wave environments. A green solution is generally less expensive and better integrated with local vegetation, but may be susceptible to erosion or damage if executed improperly. In many cases, engineers will select a hybrid approach that combines elements of both gray and green solutions. === Deep Sea Systems === ==== Life Support ==== The design of underwater life-support systems such as underwater habitats presents a unique set of challenges requiring a detailed knowledge of pressure vessels, diving physiology, and thermodynamics. ==== Unmanned Underwater Vehicles ==== Marine engineers may design or make frequent use of unmanned underwater vehicles, which operate underwater without a human aboard. 
UUVs often perform work in locations which would otherwise be impossible or difficult for humans to access due to a number of environmental factors (e.g. depth, remoteness, and/or temperature). UUVs can be remotely operated by humans, as in the case of remotely operated vehicles, or can be semi-autonomous or fully autonomous. ==== Sensors and instrumentation ==== The development of the oceanographic sciences, subsea engineering and the ability to detect, track and destroy submarines (anti-submarine warfare) required the parallel development of a host of marine scientific instrumentation and sensors. Visible light does not travel far underwater, so the medium for transmission of data is primarily acoustic. High-frequency sound is used to measure the depth of the ocean, determine the nature of the seafloor, and detect submerged objects. The higher the frequency, the higher the definition of the data that is returned. Sound Navigation and Ranging, or SONAR, was developed during the First World War to detect submarines, and has been greatly refined through to the present day. Submarines similarly use sonar equipment to detect and target other submarines and surface ships, and to detect submerged obstacles such as seamounts that pose a navigational hazard. Simple echo-sounders point straight down and can give an accurate reading of ocean depth (or look up at the underside of sea ice). More advanced echo-sounders use a fan-shaped beam of sound, or multiple beams, to derive highly detailed images of the ocean floor. High-power systems can penetrate the soil and seabed rocks to give information about the geology of the seafloor, and are widely used in geophysics for the discovery of hydrocarbons, or for engineering surveys. For close-range underwater communications, optical transmission is possible, mainly using blue lasers. These have a high bandwidth compared with acoustic systems, but the range is usually only a few tens of metres, and works best at night. 
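The depth relation used by a simple downward-looking echo-sounder can be shown directly: the sound pulse travels to the seafloor and back, so depth is half the round-trip distance. A minimal sketch, assuming a nominal sound speed in seawater (real instruments correct for temperature, salinity and pressure):

```python
# Depth from a simple echo-sounder: depth = sound_speed * time / 2,
# since the measured time covers the trip down and back up.
SOUND_SPEED_SEAWATER = 1500.0  # m/s, nominal assumed value

def depth_from_echo(round_trip_s: float,
                    sound_speed: float = SOUND_SPEED_SEAWATER) -> float:
    """Return water depth in metres from the echo round-trip time."""
    return sound_speed * round_trip_s / 2.0

print(depth_from_echo(4.0))  # 3000.0 m under the transducer
print(depth_from_echo(0.1))  # 75.0 m
```

The same geometry underlies multibeam systems, which simply repeat this calculation across many angled beams to image a swath of seafloor.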
As well as acoustic communications and navigation, sensors have been developed to measure ocean parameters such as temperature, salinity and oxygen levels, as well as other properties including nitrate levels, levels of trace chemicals and environmental DNA. The industry trend has been towards smaller, more accurate and more affordable systems, so that they can be purchased and used by university departments and small companies as well as large corporations, research organisations and governments. The sensors and instruments are fitted to autonomous and remotely operated systems as well as ships, and are enabling these systems to take on tasks that hitherto required an expensive human-crewed platform. Manufacture of marine sensors and instruments mainly takes place in Asia, Europe and North America. Products are advertised in specialist journals and through trade shows such as Oceanology International and Ocean Business, which help raise awareness of the products. === Environmental Engineering === In every coastal and offshore project, environmental sustainability is an important consideration for the preservation of ocean ecosystems and natural resources. Instances in which marine engineers benefit from knowledge of environmental engineering include the creation of fisheries, the clean-up of oil spills, and the creation of coastal solutions. === Offshore Systems === A number of systems designed fully or in part by marine engineers are used offshore – far away from coastlines. ==== Offshore oil platforms ==== The design of offshore oil platforms involves a number of marine engineering challenges. Platforms must be able to withstand ocean currents, wave forces, and saltwater corrosion while remaining structurally sound and fully anchored into the seabed. Additionally, drilling components must be engineered to handle these same challenges with a high factor of safety to prevent oil leaks and spills from contaminating the ocean. 
==== Offshore wind farms ==== Offshore wind farms encounter many of the same marine engineering challenges as oil platforms. They provide a source of renewable energy with a higher yield than wind farms on land, while encountering less resistance from the general public (see NIMBY). ==== Ocean wave energy ==== Marine engineers continue to investigate the possibility of ocean wave energy as a viable source of power for distributed or grid applications. Many designs have been proposed and numerous prototypes have been built, but the problem of harnessing wave energy in a cost-effective manner remains largely unresolved. === Port and Harbor Design === A marine engineer may also deal with the planning, creation, expansion, and modification of port and harbor designs. Harbors can be natural or artificial and protect anchored ships from wind, waves, and currents. A port can be defined as a city, town, or place where ships are moored, loaded, or unloaded. Ports typically reside within a harbor and are made up of one or more individual terminals, each handling a particular type of cargo, such as passengers, bulk cargo, or containerized cargo. Marine engineers plan and design various types of marine terminals and structures found in ports, and they must understand the loads imposed on these structures over the course of their lifetime. === Salvage and Recovery === Marine salvage techniques are continuously modified and improved to recover shipwrecks. Marine engineers use their skills to assist at various stages of this process. == Career == === Industry === With a diverse engineering background, marine engineers work in a variety of industry jobs across many fields of math, science, technology, and engineering. A few companies, such as Oceaneering International and Van Oord, specialize in marine engineering, while other companies consult marine engineers for specific projects.
Such consulting commonly occurs in the oil industry, with companies such as ExxonMobil and BP hiring marine engineers to manage aspects of their offshore drilling projects. === Military === Marine engineering lends itself to a number of military applications, mostly related to the Navy. The United States Navy's Seabees, Civil Engineer Corps, and Engineering Duty Officers often perform work related to marine engineering. Military contractors (especially those in naval shipyards) and the Army Corps of Engineers play a role in certain marine engineering projects as well. === Expected Growth === In 2012, the average annual earnings for marine engineers in the U.S. were $96,140, with average hourly earnings of $46.22. As a field, marine engineering is predicted to grow approximately 12% from 2016 to 2026. Currently, about 8,200 naval architects and marine engineers are employed; this number is expected to increase to 9,200 by 2026 (BLS, 2017). This is due at least in part to the critical role of the shipping industry in the global supply chain; 80% of the world's trade by volume is carried by sea aboard close to 50,000 ships, all of which require marine engineers aboard and ashore (ICS, 2017). Additionally, offshore energy continues to grow, and a greater need exists for coastal solutions due to sea-level rise. == Education == Maritime universities are dedicated to teaching and training students in maritime professions. Marine engineers generally have a bachelor's degree in marine engineering, marine engineering technology, or marine systems engineering. Practical training is valued by employers alongside the bachelor's degree.
=== Professional institutions === IMarEST World Maritime University Society for Underwater Technology IEEE Oceanic Engineering Society Marine Engineering and Research Institute Indian Maritime University Royal Institution of Naval Architects (RINA) Pakistan Marine Academy Society of Naval Architects and Marine Engineers (SNAME) is a worldwide society that is focused on the advancement of the maritime industry. SNAME was founded in 1893. American Society of Naval Engineers (ASNE) SIMAC === Degrees in ocean engineering === A number of institutions - including MIT, UC Berkeley, the U.S. Naval Academy, and Texas A&M University - offer a four-year Bachelor of Science degree specifically in ocean engineering. Accredited programs consist of basic undergraduate math and science subjects such as calculus, statistics, chemistry, and physics; fundamental engineering subjects such as statics, dynamics, electrical engineering, and thermodynamics; and more specialized subjects such as ocean structural analysis, hydromechanics, and coastal management. Graduate students in ocean engineering take classes on more advanced, in-depth subjects while conducting research to complete a graduate-level thesis. The Massachusetts Institute of Technology offers master's and PhD degrees specifically in ocean engineering. Additionally, MIT co-hosts a joint program with the Woods Hole Oceanographic Institution for students studying ocean engineering and other ocean-related topics at the graduate level. === Journals and Conferences === Journals about ocean engineering include Ocean Engineering, the IEEE Journal of Oceanic Engineering and the Journal of Waterway, Port, Coastal, and Ocean Engineering. Conferences in the field of marine engineering include the IEEE Oceanic Engineering Society's OCEANS Conference and Exposition and the European Wave and Tidal Energy Conference (EWTEC). 
== Marine Engineering Achievements == The Delta Works is a series of 13 projects designed to protect the Netherlands against flooding from the North Sea. The American Society of Civil Engineers named it one of the "Seven Wonders of the Modern World". As of April 2021, twenty-two people had descended to Challenger Deep, the lowest point in the Earth's ocean, located in the Mariana Trench. Partial recovery of the Soviet submarine K-129 by a joint team of U.S. Navy and CIA engineers aboard the Glomar Explorer. == Notable Marine Engineers == === In Industry === Pieter van Oord, CEO of Royal van Oord === In Academia === Michael E. McCormick, Professor Emeritus of the Department of Naval Architecture and Ocean Engineering at the U.S. Naval Academy and pioneer of wave energy research == In Media and Popular Culture == Marine engineers played an important role in the clean-up of oil spills such as those from the Exxon Valdez and BP's Deepwater Horizon. James Cameron's documentary Deepsea Challenge follows the story of the team that built the submersible in which Cameron made the first solo descent to Challenger Deep. == See also == Engine room – Space where the propulsion machinery is installed aboard a ship Engineering officer (ship) – Licensed mariner responsible for propulsion plants and support systems Marine architecture – Branch of architecture focused on coastal, near-shore and off-shore construction Marine electronics – Electronic devices designed and classed for use in the marine environment aboard ships and yachts, where exposure to salt water may impair normal functioning Naval architecture – Engineering discipline of marine vessels Oceanography – Study of physical, chemical, and biological processes in the ocean == References ==
https://en.wikipedia.org/wiki/Marine_engineering
An interlock is a feature that makes the state of two mechanisms or functions mutually dependent. It may consist of electrical or mechanical devices or systems. In most applications, an interlock is used to help prevent damage to the machine or to the operator handling it. For example, elevators are equipped with an interlock that prevents a moving elevator from opening its doors, and prevents a stationary elevator (with open doors) from moving. Interlocks may include sophisticated elements such as curtains of infrared beams, photodetectors, simple switches, and locks. An interlock can also be implemented as a computer program working with digital or analogue electronics. == Trapped-key interlocking == Trapped-key interlocking is a method of ensuring safety in industrial environments by forcing the operator through a predetermined sequence using a defined selection of keys, locks and switches. It is called trapped-key interlocking because it works by releasing and trapping keys in a predetermined sequence. After the control or power has been isolated, a key is released that can be used to grant access to individual or multiple doors. A common component of such a system is the trapped-key transfer block. To obtain the keys held in a transfer block, a primary key must first be inserted and turned. Once that key is turned, the operator may retrieve the remaining keys, which are used to open other doors. The primary key will not turn, and so cannot be removed, until all the other keys have been returned to the block. Another example is an electric kiln. To prevent access to the inside of an electric kiln, a trapped-key system may be used to interlock a disconnecting switch and the kiln door.
While the switch is turned on, the key is held by the interlock attached to the disconnecting switch. To open the kiln door, the switch is first opened, which releases the key. The key can then be used to unlock the kiln door. While the key is removed from the switch interlock, a plunger from the interlock mechanically prevents the switch from closing. Power cannot be re-applied to the kiln until the kiln door is locked, releasing the key, and the key is then returned to the disconnecting switch interlock. A similar two-part interlock system can be used anywhere it is necessary to ensure the energy supply to a machine is interrupted before the machine is entered for adjustment or maintenance. == Mechanical == Interlocks may be strictly mechanical. An example of a mechanical interlock is the steering wheel of a car. Most modern cars have an anti-theft feature that prevents the steering wheel from turning unless the key is in the ignition. This also hinders an individual from pushing the car away, since the mechanical interlock restricts the directional motion of the front wheels. In the operation of a device such as a press or cutter that is hand-fed, or whose workpiece is removed by hand, the use of two buttons to actuate the device, one for each hand, greatly reduces the possibility of the operation endangering the operator. No such system is foolproof, and such systems are often augmented by the use of cable-pulled gloves worn by the operator; these are retracted away from the danger area by the stroke of the machine. A major problem in engineering operator safety is the tendency of operators to ignore safety precautions, or even to disable interlocks outright, due to work pressure and other factors. Such safety systems therefore require, and should be designed to encourage, operator cooperation. == Electrical == Many people use generators to supplement power to a home or business in the event that main (municipal) power has gone offline.
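The trapped-key kiln sequence described above can be modeled as a small state machine. The class below is a hypothetical sketch, not a real product API; it simply encodes the rule that the key is trapped while the switch is closed, and that power cannot be restored until the key (and therefore the locked door) is back in place:

```python
# Sketch of the trapped-key interlock between a disconnecting
# switch and a kiln door. The key can only ever be in one place.

class TrappedKeyKiln:
    def __init__(self):
        self.power_on = True
        self.key_in_switch = True   # key trapped while power is on
        self.door_locked = True

    def open_switch(self):
        """Isolate power; this releases the trapped key."""
        self.power_on = False
        self.key_in_switch = False  # key released to the operator

    def unlock_door(self):
        """The door lock only accepts the key once it has been released."""
        if self.key_in_switch:
            raise RuntimeError("key is still trapped in the energized switch")
        self.door_locked = False

    def lock_door(self):
        """Locking the door releases the key for return to the switch."""
        self.door_locked = True
        self.key_in_switch = True   # returning the key traps it again

    def close_switch(self):
        """Power cannot be restored until the key is back in the switch."""
        if not self.key_in_switch:
            raise RuntimeError("lock the door and return the key before re-energizing")
        self.power_on = True
```

Calling `unlock_door()` while the switch is still energized raises an error, mirroring the mechanical plunger that physically enforces the sequence.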
In order to safely transfer the power source from a generator (and back to the main), a safety interlock is often employed. The interlock consists of one or more switches that prevent main power and generator power from powering the dwelling simultaneously. Without this safeguard, both power sources running at once could cause an overload condition, or generator power back-fed onto the main could cause dangerous voltage to reach a lineman repairing the main feed far outside the building. An interlock device is designed to allow a generator to provide backup power in such a way that it (a) prevents main and generator power from being connected at the same time, and (b) allows circuit breakers to operate normally, without interference, in the event of an overload condition. Most interlock devices for electrical systems employ a mechanical device to manage the movement of circuit breakers. Some also allow for the use of padlocks to prevent someone from accidentally activating the main power system without authorization. == Defeatable == Interlocks prevent injuries by blocking direct contact with energized parts of electrical equipment. Only qualified personnel, who must use a tool (such as a screwdriver), are allowed to bypass the interlock. Such interlocks are called defeatable interlocks, and are specified by Underwriters Laboratories (UL) standard UL 508A and National Electrical Code (NEC) Article 409.2. Defeatable interlocks are allowed on electrical equipment up to 600 volts. == Security == In high-security buildings, access control systems are sometimes set up so that the ability to open one door requires another one to be closed first. Such a setup is called a mantrap. Interlocks can also be used for high-level entrance security. There are two kinds of interlocking systems for security. The first is mechanical. For example, when an individual enters a building, there may be two sets of doors to pass through.
As the individual enters the first door, that door must close before they can pass through the second door. This type of interlocking security can prevent piggybacking or tailgating. The second form of interlocking security is electronic, taking the form of detection and identification systems. Examples include PIN codes, face recognition, and fingerprint recognition. == Microprocessors == In microprocessor architecture, an interlock is digital electronic circuitry that stalls a pipeline (inserts bubbles) when a hazard is detected, until the hazard is cleared. One example of a hazard is a program that loads data from the system bus and calls for the use of that data in the following cycle, on a system in which loads take multiple cycles (a load-to-use hazard). An interlock may also be used to prevent undesired states in a finite-state machine. == See also == Fail-safe Railway interlocking Breath alcohol ignition interlock device Safety instrumented system Piggybacking Tailgating Lockout-tagout == References ==
https://en.wikipedia.org/wiki/Interlock_(engineering)